Artificial Intelligence: Underlying Assumptions and Basic Objectives.
ERIC Educational Resources Information Center
Cercone, Nick; McCalla, Gordon
1984-01-01
Presents perspectives on methodological assumptions underlying research efforts in artificial intelligence (AI) and charts activities, motivations, methods, and current status of research in each of the major AI subareas: natural language understanding; computer vision; expert systems; search, problem solving, planning; theorem proving and logic…
Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui
2017-12-01
Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of real cardiovascular flow. Because of the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on the CFD simulation results. Existing CFD validation approaches, however, do not quantify the error in simulation results that arises from the CFD solver's modeling assumptions; instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
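The error-decomposition idea in this abstract can be illustrated with a minimal numerical sketch. The code below is not the authors' implementation; it simply assumes that the total CFD-PIV discrepancy along the validation line combines independent contributions (numerical discretization error, PIV measurement error, and model error) in quadrature, and isolates the model-error term. All velocity values and uncertainty estimates are hypothetical.

```python
import numpy as np

# Hypothetical velocity magnitudes (m/s) sampled along the validation line.
# These numbers are illustrative only, not data from the paper.
u_cfd = np.array([0.42, 0.55, 0.61, 0.58, 0.47, 0.33])
u_piv = np.array([0.45, 0.52, 0.65, 0.55, 0.50, 0.31])

# Independently estimated uncertainty sources (same units), e.g. from a
# grid-convergence study and from PIV cross-correlation uncertainty.
u_num = np.full_like(u_cfd, 0.01)   # assumed numerical (discretization) error
u_exp = np.full_like(u_piv, 0.02)   # assumed experimental (PIV) error

# Total observed discrepancy at each point.
delta = np.abs(u_cfd - u_piv)

# Model error isolated by removing the other error sources in quadrature;
# clipped at zero where the discrepancy is smaller than the other errors.
model_err = np.sqrt(np.clip(delta**2 - u_num**2 - u_exp**2, 0.0, None))

# Express as a percentage of the local measured velocity.
model_err_pct = 100.0 * model_err / np.abs(u_piv)
print(f"model error: {model_err_pct.mean():.2f} +/- {model_err_pct.std():.2f} %")
```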
Industrial Demand Module - NEMS Documentation
2014-01-01
Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.
International Natural Gas Model 2011, Model Documentation Report
2013-01-01
This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
NASA Astrophysics Data System (ADS)
Liu, Qimao
2018-02-01
This paper proposes the assumption that the fibre is an elastic material and the polymer matrix is a viscoelastic material, so that energy dissipation during the dynamic response depends only on the polymer matrix. The damping force vectors of FRP (Fibre-Reinforced Polymer matrix) laminated composite plates are derived in the frequency and time domains based on this assumption. The governing equations of FRP laminated composite plates are formulated in both frequency and time domains. The direct inversion method and the direct time integration method for nonviscously damped systems are employed to solve the governing equations and obtain the dynamic responses in the frequency and time domains, respectively. The computational procedure is given in detail. Finally, dynamic responses (frequency responses with nonzero and zero initial conditions, free vibration, forced vibrations with nonzero and zero initial conditions) of an FRP laminated composite plate are computed using the proposed methodology. The proposed methodology can easily be incorporated into commercial finite element analysis software. The proposed assumption, based on the theory of material mechanics, needs to be further verified by experiments in the future.
Residential Demand Module - NEMS Documentation
2017-01-01
Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.
Electromagnetic Simulation of the Near-Field Distribution around a Wind Farm
Yang, Shang-Te; Ling, Hao
2013-01-01
An efficient approach to compute the near-field distribution around and within a wind farm under plane wave excitation is proposed. To make the problem computationally tractable, several simplifying assumptions are made based on the geometry of the problem. By comparing the approximations against full-wave simulations at 500 MHz, it is shown that the assumptions do not introduce significant errors into the resulting near-field distribution. The near fields around a 3 × 3 wind farm are computed using the developed methodology at 150 MHz, 500 MHz, and 3 GHz. Both the multipath interference patterns and the forward shadows are predicted by the proposed method.
World Energy Projection System Plus Model Documentation: Coal Module
2011-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Transportation Module
2017-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Residential Module
2016-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Refinery Module
2016-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Main Module
2016-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
Transportation Sector Module - NEMS Documentation
2017-01-01
Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.
World Energy Projection System Plus Model Documentation: Electricity Module
2017-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Greenhouse Gases Module
2011-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Natural Gas Module
2011-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: District Heat Module
2017-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
World Energy Projection System Plus Model Documentation: Industrial Module
2016-01-01
This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.
An immersed boundary method for modeling a dirty geometry data
NASA Astrophysics Data System (ADS)
Onishi, Keiji; Tsubokura, Makoto
2017-11-01
We present a robust, fast, and low-preparation-cost immersed boundary method (IBM) for simulating incompressible high-Reynolds-number flow around highly complex geometries. The method is achieved by the dispersion of momentum via an axial linear projection and an approximate domain assumption that satisfies mass conservation around cells that include the wall. The methodology has been verified against an analytical theory and wind tunnel experiment data. Next, we simulate the flow around a rotating object and demonstrate the applicability of the methodology to moving-geometry problems. This methodology offers a route to obtaining quick solutions on next-generation large-scale supercomputers. This research was supported by MEXT as ``Priority Issue on Post-K computer'' (Development of innovative design and production processes) and used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science.
Macroeconomic Activity Module - NEMS Documentation
2016-01-01
Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module assumptions, computational methodology, parameter estimation techniques, and mainframe source code.
Commercial Demand Module - NEMS Documentation
2017-01-01
Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
A normative price for a manufactured product: The SAMICS methodology. Volume 2: Analysis
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1979-01-01
The Solar Array Manufacturing Industry Costing Standards provide standard formats, data, assumptions, and procedures for determining the price a hypothetical solar array manufacturer would have to obtain in the market to realize a specified after-tax rate of return on equity for a specified level of production. The methodology and its theoretical background are presented. The model is sufficiently general to be used in any production-line manufacturing environment. Implementation of this methodology by the Solar Array Manufacturing Industry Simulation computer program is discussed.
NASA Technical Reports Server (NTRS)
Seidman, T. I.; Munteanu, M. J.
1979-01-01
The relationships among a variety of general computational methods (and variants) for treating ill-posed problems, such as geophysical inverse problems, are considered. Differences in approach and interpretation based on varying assumptions, e.g., about the nature of measurement uncertainties, are discussed along with the factors to be considered in selecting an approach. The reliability of the results of such computations is addressed.
Guidelines for Using the "Q" Test in Meta-Analysis
ERIC Educational Resources Information Center
Maeda, Yukiko; Harwell, Michael R.
2016-01-01
The "Q" test is regularly used in meta-analysis to examine variation in effect sizes. However, the assumptions of "Q" are unlikely to be satisfied in practice prompting methodological researchers to conduct computer simulation studies examining its statistical properties. Narrative summaries of this literature are available but…
Alternative Fuels Data Center: Vehicle Cost Calculator Assumptions and Methodology
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
Title 10 (Energy), Volume 3 — Department of Energy, Energy Conservation, Federal Energy Management and Planning Programs; Methodology and Procedures for Life Cycle Cost Analyses. § 436.14 Methodological assumptions. (a) Each Federal Agency shall...
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2013 CFR
2013-01-01
Title 10 (Energy), Volume 3 — Department of Energy, Energy Conservation, Federal Energy Management and Planning Programs; Methodology and Procedures for Life Cycle Cost Analyses. § 436.14 Methodological assumptions. (a) Each Federal Agency shall...
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
Title 10 (Energy), Volume 3 — Department of Energy, Energy Conservation, Federal Energy Management and Planning Programs; Methodology and Procedures for Life Cycle Cost Analyses. § 436.14 Methodological assumptions. (a) Each Federal Agency shall...
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 10 (Energy), Volume 3 — Department of Energy, Energy Conservation, Federal Energy Management and Planning Programs; Methodology and Procedures for Life Cycle Cost Analyses. § 436.14 Methodological assumptions. (a) Each Federal Agency shall...
10 CFR 436.14 - Methodological assumptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
Title 10 (Energy), Volume 3 — Department of Energy, Energy Conservation, Federal Energy Management and Planning Programs; Methodology and Procedures for Life Cycle Cost Analyses. § 436.14 Methodological assumptions. (a) Each Federal Agency shall...
Alternative Fuels Data Center: Vehicle Cost Calculator Widget Assumptions and Methodology
Probability calculations for three-part mineral resource assessments
Ellefsen, Karl J.
2017-06-27
Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.
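The three-part logic described above lends itself to a compact Monte Carlo illustration: draw the number of undiscovered deposits from an elicited discrete distribution, draw each deposit's tonnage from a size model, and sum. The sketch below is a generic stand-in, not the USGS implementation, and every probability and tonnage parameter in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000

# Elicited probabilities for the number of undiscovered deposits
# (hypothetical values, not from an actual assessment).
counts = np.array([0, 1, 2, 5, 10])
probs = np.array([0.10, 0.30, 0.30, 0.20, 0.10])

# Hypothetical lognormal tonnage model for individual deposits (metric tons).
mu, sigma = np.log(5e5), 1.0

totals = np.empty(n_sim)
for i in range(n_sim):
    n = rng.choice(counts, p=probs)                 # number of deposits
    totals[i] = rng.lognormal(mu, sigma, n).sum()   # total contained resource

# Report exceedance fractiles: P95 means a 95% chance of at least this amount.
for q in (0.95, 0.50, 0.05):
    print(f"P{int(q*100):02d} total resource: {np.quantile(totals, 1 - q):.3e} t")
```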
Shielding of medical imaging X-ray facilities: a simple and practical method.
Bibbo, Giovanni
2017-12-01
The most widely accepted method for shielding design of X-ray facilities is that contained in the National Council on Radiation Protection and Measurements Report 147 whereby the computation of the barrier thickness for primary, secondary and leakage radiations is based on the knowledge of the distances from the radiation sources, the assumptions of the clinical workload, and usage and occupancy of adjacent areas. The shielding methodology used in this report is complex. With this methodology, the shielding designers need to make assumptions regarding the use of the X-ray room and the adjoining areas. Different shielding designers may make different assumptions resulting in different shielding requirements for a particular X-ray room. A more simple and practical method is to base the shielding design on the shielding principle used to shield X-ray tube housing to limit the leakage radiation from the X-ray tube. In this case, the shielding requirements of the X-ray room would depend only on the maximum radiation output of the X-ray equipment regardless of workload, usage or occupancy of the adjacent areas of the room. This shielding methodology, which has been used in South Australia since 1985, has proven to be practical and, to my knowledge, has not led to excess shielding of X-ray installations.
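The "shield to the maximum output" idea can be made concrete with a short worked example: given an assumed maximum air-kerma output of the equipment and a design constraint behind the barrier, the required thickness follows from the inverse-square law and the number of half-value layers, with no workload, use, or occupancy factors. The numbers below are placeholders, not values from the article or from NCRP Report 147.

```python
import math

# Hypothetical inputs -- not values from the article.
max_output_mGy_per_wk = 50.0   # assumed maximum air kerma at 1 m from the tube (mGy/week)
distance_m = 2.5               # distance from tube to the point behind the barrier
dose_limit_mGy_per_wk = 0.02   # assumed design constraint behind the barrier
hvl_lead_mm = 0.25             # assumed half-value layer of lead at the operating kV

# Unshielded air kerma at the point of interest (inverse-square law).
k_unshielded = max_output_mGy_per_wk / distance_m**2

# Required attenuation factor and barrier thickness in half-value layers.
attenuation = k_unshielded / dose_limit_mGy_per_wk
n_hvl = math.log2(attenuation) if attenuation > 1 else 0.0
thickness_mm = n_hvl * hvl_lead_mm

print(f"required attenuation: {attenuation:.0f}x -> {n_hvl:.1f} HVLs "
      f"-> {thickness_mm:.2f} mm Pb")
```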
Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
Bathke, Arne C.; Friedrich, Sarah; Pauly, Markus; Konietschke, Frank; Staffen, Wolfgang; Strobl, Nicolas; Höller, Yvonne
2018-01-01
To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer’s disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regards to some of the factors involved. PMID:29565679
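A minimal sketch of the parametric-bootstrap idea is shown below: fit group-specific multivariate normal distributions, resample under the null hypothesis of equal mean vectors while keeping unequal covariances, and calibrate a Wald-type statistic against its bootstrap distribution. This illustrates the general approach only; it is not the authors' procedure, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

def wald_stat(groups):
    """Wald-type statistic for equality of multivariate means across two groups."""
    x, y = groups
    d = x.mean(axis=0) - y.mean(axis=0)
    v = np.cov(x, rowvar=False) / len(x) + np.cov(y, rowvar=False) / len(y)
    return float(d @ np.linalg.solve(v, d))

# Hypothetical data: two groups, 4 correlated outcomes each, unequal covariances.
g1 = rng.multivariate_normal(np.zeros(4), np.eye(4), size=25)
g2 = rng.multivariate_normal(np.zeros(4) + 0.4, 2 * np.eye(4), size=30)

t_obs = wald_stat((g1, g2))

# Parametric bootstrap: simulate from group-specific normal fits with a common
# (pooled) mean, so the null hypothesis of equal means holds in the resamples.
pooled_mean = np.vstack([g1, g2]).mean(axis=0)
B, exceed = 2000, 0
for _ in range(B):
    b1 = rng.multivariate_normal(pooled_mean, np.cov(g1, rowvar=False), size=len(g1))
    b2 = rng.multivariate_normal(pooled_mean, np.cov(g2, rowvar=False), size=len(g2))
    exceed += wald_stat((b1, b2)) >= t_obs

print(f"parametric bootstrap p-value: {(exceed + 1) / (B + 1):.3f}")
```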
Graphical tools for network meta-analysis in STATA.
Chaimani, Anna; Higgins, Julian P T; Mavridis, Dimitris; Spyridonos, Panagiota; Salanti, Georgia
2013-01-01
Network meta-analysis synthesizes direct and indirect evidence in a network of trials that compare multiple interventions and has the potential to rank the competing treatments according to the studied outcome. Despite its usefulness network meta-analysis is often criticized for its complexity and for being accessible only to researchers with strong statistical and computational skills. The evaluation of the underlying model assumptions, the statistical technicalities and presentation of the results in a concise and understandable way are all challenging aspects in the network meta-analysis methodology. In this paper we aim to make the methodology accessible to non-statisticians by presenting and explaining a series of graphical tools via worked examples. To this end, we provide a set of STATA routines that can be easily employed to present the evidence base, evaluate the assumptions, fit the network meta-analysis model and interpret its results.
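One of the graphics such toolkits typically provide is a plot of the evidence network itself, with node size reflecting the amount of evidence per treatment and edge width the number of trials per comparison. The sketch below reproduces that single graphic in Python (networkx/matplotlib) for a hypothetical evidence base; the actual routines described in the paper are STATA commands.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical evidence base: (treatment A, treatment B, number of trials).
direct_comparisons = [
    ("Placebo", "Drug A", 6),
    ("Placebo", "Drug B", 4),
    ("Drug A", "Drug B", 2),
    ("Placebo", "Drug C", 3),
]

G = nx.Graph()
for a, b, n_trials in direct_comparisons:
    G.add_edge(a, b, weight=n_trials)

pos = nx.circular_layout(G)
widths = [G[u][v]["weight"] for u, v in G.edges()]        # edge width ~ trials per comparison
sizes = [600 * G.degree(n, weight="weight") for n in G]   # node size ~ evidence per treatment

nx.draw_networkx(G, pos, node_size=sizes, width=widths, node_color="lightsteelblue")
plt.axis("off")
plt.title("Evidence network (node size ~ trials per treatment, edge width ~ trials per comparison)")
plt.savefig("network_plot.png", dpi=150)
```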
Motion and Stability of Saturated Soil Systems under Dynamic Loading.
1985-04-04
(Report contents fragments: 7.3 Experimental Verification of Theories; 8. Additional Comments and Other Work at The Ohio…) …theoretical/computational models. The continuing research effort will extend and refine the theoretical models and allow for compressibility of soil as… motion of soil and water and, therefore, a correct theory of liquefaction should not include this assumption. Finite element methodologies have been…
Adaptive System Modeling for Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Thomas, Justin
2011-01-01
This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: Creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior. Automatically updates/calibrates system models using the latest streaming sensor data. Creates device specific models that capture the exact behavior of devices of the same type. Adapts to evolving systems. Can reduce computational complexity (faster simulations).
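As a stand-in for the data-stream-mining models described above, the sketch below updates a simple linear device model with recursive least squares, one telemetry sample at a time, with a forgetting factor so the model adapts as the system evolves. This is purely illustrative and is not the ISS EPS implementation; the device, its regressors, and its coefficients are hypothetical.

```python
import numpy as np

class RecursiveLeastSquares:
    """Online linear model y ~ w.x, updated one telemetry sample at a time."""

    def __init__(self, n_features, forgetting=0.99):
        self.w = np.zeros(n_features)
        self.P = np.eye(n_features) * 1e3   # large initial covariance
        self.lam = forgetting               # <1 discounts old samples (adaptivity)

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.w += gain * (y - self.w @ x)
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.w

# Example: learn a hypothetical solar-array output model from (sun angle, temperature) telemetry.
rng = np.random.default_rng(2)
model = RecursiveLeastSquares(n_features=3)
for _ in range(500):
    sun_angle, temp = rng.uniform(0, 1), rng.uniform(-0.5, 0.5)
    power = 4.0 * sun_angle - 1.5 * temp + 0.3 + rng.normal(0, 0.05)  # "true" device behavior
    model.update([sun_angle, temp, 1.0], power)

print("learned coefficients:", np.round(model.w, 2))   # expected ~ [4.0, -1.5, 0.3]
```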
Model documentation Renewable Fuels Module of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-01-01
This report documents the objectives, analytical approach and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1996 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, P. T.; Dickson, T. L.; Yin, S.
The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.
Comparative study of solar optics for paraboloidal concentrators
NASA Technical Reports Server (NTRS)
Wen, L.; Poon, P.; Carley, W.; Huang, L.
1979-01-01
Different analytical methods for computing the flux distribution on the focal plane of a paraboloidal solar concentrator are reviewed. An analytical solution in algebraic form is also derived for an idealized model. The effects resulting from using different assumptions in the definition of optical parameters used in these methodologies are compared and discussed in detail. These parameters include solar irradiance distribution (limb darkening and circumsolar), reflector surface specular spreading, surface slope error, and concentrator pointing inaccuracy. The type of computational method selected for use depends on the maturity of the design and the data available at the time the analysis is made.
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2015-02-01
Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such methodology is computationally expensive when real life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent within the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.
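The transfer matrix ingredient of such a hybrid scheme can be illustrated in its simplest form: the 2x2 matrix of a homogeneous, laterally infinite fluid-like layer at normal incidence (a poroelastic layer requires a larger matrix, and oblique incidence adds a trace wavenumber). The layer properties below are assumed effective values, not data from the paper.

```python
import numpy as np

def fluid_layer_tm(freq_hz, thickness_m, density, sound_speed):
    """2x2 transfer matrix of a homogeneous fluid layer at normal incidence.

    Relates (pressure, normal velocity) on one face to the other face.
    """
    k = 2 * np.pi * freq_hz / sound_speed        # wavenumber in the layer
    z = density * sound_speed                    # characteristic impedance
    kd = k * thickness_m
    return np.array([[np.cos(kd), 1j * z * np.sin(kd)],
                     [1j * np.sin(kd) / z, np.cos(kd)]])

# Illustrative values: a 5 cm layer of an equivalent fluid standing in for a
# porous treatment (hypothetical effective properties, rigid backing).
f = 500.0
T = fluid_layer_tm(f, 0.05, density=1.3, sound_speed=250.0)

# Surface impedance with a rigid backing (normal velocity = 0 on the back face):
# Zs = T11 / T21, and the normal-incidence absorption coefficient follows.
z0 = 1.21 * 343.0                                 # characteristic impedance of air
zs = T[0, 0] / T[1, 0]
r = (zs - z0) / (zs + z0)
print(f"absorption coefficient at {f:.0f} Hz: {1 - abs(r)**2:.2f}")
```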
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high parameter dimension, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
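A minimal sketch of the greedy, d-optimality-driven selection is shown below: starting from an empty design, repeatedly add the candidate observation whose sensitivity row most increases log det(JᵀJ). This is only the skeleton of the idea; the Jacobian here is random placeholder data, and the paper's workflow additionally involves Null-Space Monte Carlo parameter samples and a minimax step that are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sensitivity (Jacobian) rows: one row per candidate observation,
# one column per model parameter. In practice these come from the groundwater model.
n_candidates, n_params = 40, 6
J = rng.normal(size=(n_candidates, n_params))

def log_det_fisher(rows):
    """log det of the Fisher information J_s^T J_s for the selected rows."""
    Js = J[sorted(rows)]
    # A small ridge keeps the determinant finite before n_params rows are chosen.
    return np.linalg.slogdet(Js.T @ Js + 1e-8 * np.eye(n_params))[1]

selected, budget = set(), 8
for _ in range(budget):
    best = max((i for i in range(n_candidates) if i not in selected),
               key=lambda i: log_det_fisher(selected | {i}))
    selected.add(best)

print("greedy d-optimal observation set:", sorted(selected))
```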
NASA Astrophysics Data System (ADS)
Elishakoff, I.; Sarlin, N.
2016-06-01
In this paper we provide a general methodology of analysis and design of systems involving uncertainties. Available experimental data is enclosed by some geometric figures (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. Then these areas are inflated, resorting to the Chebyshev inequality, in order to take the forecasted data into account. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of uncertain parameters in each hypothesized region. The results of triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with a view to identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on awarding the Honda Prize to J. Tinsley Oden, stating, among others, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
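A minimal sketch of one of the five calculi is given below for a 2-D rectangle (interval) region: enclose the data in an axis-aligned rectangle, inflate its half-widths by a Chebyshev-style factor k (so that a future sample exceeds the box in a given coordinate with probability at most 1/k²), and exploit the fact that a linear response attains its maximum over a box at a vertex. The data, the factor k, and the response coefficients are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Available 2-D experimental data for two uncertain parameters (illustrative).
data = rng.normal(loc=[1.0, 2.0], scale=[0.1, 0.3], size=(30, 2))

# Enclose the data in an axis-aligned rectangle, then inflate it about its
# center using a Chebyshev-style factor k (P(|X - mu| >= k*sigma) <= 1/k^2),
# so forecasted data are also likely to fall inside the inflated rectangle.
center = data.mean(axis=0)
k = 3.0                                   # at most ~1/9 exceedance per coordinate
half_width = np.maximum(np.abs(data - center).max(axis=0), k * data.std(axis=0))

# Linear(ized) response of the system to the uncertain parameters (assumed).
c = np.array([2.5, -1.2])

# A linear response over a rectangle is extremal at a vertex, so the maximum
# is obtained by choosing the sign of each half-width to match c.
max_response = c @ center + np.abs(c) @ half_width
print(f"maximum response over the inflated rectangle: {max_response:.3f}")
```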
Efficient Computation of Info-Gap Robustness for Finite Element Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.
2012-07-05
A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology, are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
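For the special case of interval uncertainty in the load vector of an Ax = b model, the info-gap robustness can be written down in closed form, which makes a compact illustration (the report's adjoint machinery targets problems where this is not possible). In the sketch below the matrix, nominal load, and performance requirement are hypothetical.

```python
import numpy as np

# Hypothetical stiffness matrix and nominal load (illustrative, not from the report).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 3.0]])
b0 = np.array([1.0, 0.5, 0.2])

A_inv = np.linalg.inv(A)
x_nom = A_inv @ b0                 # nominal response
i, x_crit = 1, 0.35                # performance requirement: x[1] <= x_crit

# Info-gap model: each load component may deviate from b0 by at most h.
# For this uncertainty set, the worst case of x[i] is linear in h:
#   max x[i] = x_nom[i] + h * sum_j |A_inv[i, j]|
# Robustness = largest h for which the worst case still satisfies the requirement.
row_l1 = np.abs(A_inv[i]).sum()
robustness = max(0.0, (x_crit - x_nom[i]) / row_l1)

print(f"nominal x[{i}] = {x_nom[i]:.3f}, robustness h_hat = {robustness:.3f}")
```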
A New Computational Methodology for Structural Dynamics Problems
2008-04-01
…by approximating the geometry of the midsurface of the shell (as in continuum-based finite element models), are prevented from the beginning… curvilinear coordinates θ^i, such that the surface θ^3 = 0 defines the midsurface M_R(t) of the region B_R(t). The coordinate θ^3 is the measure of the distance… assumption for the shell model: "the displacement field is considered as a linear expansion of the thickness coordinate around the midsurface." The…
Standardized input for Hanford environmental impact statements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napier, B.A.
1981-05-01
Models and computer programs for simulating the behavior of radionuclides in the environment and the resulting radiation dose to humans have been developed over the years by the Environmental Analysis Section staff, Ecological Sciences Department at the Pacific Northwest Laboratory (PNL). Methodologies have evolved for calculating radiation doses from many exposure pathways for any type of release mechanism. Depending on the situation or process being simulated, different sets of computer programs, assumptions, and modeling techniques must be used. This report is a compilation of recommended computer programs and necessary input information for use in calculating doses to members of the general public for environmental impact statements prepared for DOE activities to be conducted on or near the Hanford Reservation.
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
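The conditional, time-dependent probability at the heart of an elastic-rebound forecast can be illustrated with a small renewal-model calculation: given the time elapsed since the last rupture, compute the probability of rupture within the next ΔT years and compare it with the time-independent Poisson value. A lognormal renewal distribution is used below purely as a stand-in (the paper works with BPT-style models and magnitude-dependent aperiodicity), and all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical renewal-model parameters for a fault section.
mean_recurrence = 200.0     # years
aperiodicity = 0.5          # coefficient of variation of recurrence intervals

# Lognormal stand-in with the desired mean and coefficient of variation.
sigma = np.sqrt(np.log(1.0 + aperiodicity**2))
mu = np.log(mean_recurrence) - 0.5 * sigma**2
renewal = stats.lognorm(s=sigma, scale=np.exp(mu))

t_since = 150.0             # years since the last rupture
dt = 30.0                   # forecast window (years)

# Conditional probability of rupture in (t_since, t_since + dt] given survival to t_since.
surv_now = renewal.sf(t_since)
p_cond = (renewal.sf(t_since) - renewal.sf(t_since + dt)) / surv_now
print(f"P(rupture within {dt:.0f} yr | quiet for {t_since:.0f} yr) = {p_cond:.3f}")

# Compare with the time-independent (Poisson) probability for the same window.
p_pois = 1.0 - np.exp(-dt / mean_recurrence)
print(f"Poisson probability for the same window = {p_pois:.3f}")
```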
Landry, Nicholas W.; Knezevic, Marko
2015-01-01
Property closures are envelopes representing the complete set of theoretically feasible macroscopic property combinations for a given material system. In this paper, we present a computational procedure based on fast Fourier transforms (FFTs) for delineation of elastic property closures for hexagonal close packed (HCP) metals. The procedure consists of building a database of non-zero Fourier transforms for each component of the elastic stiffness tensor, calculating the Fourier transforms of orientation distribution functions (ODFs), and calculating the ODF-to-elastic property bounds in the Fourier space. In earlier studies, HCP closures were computed using the generalized spherical harmonics (GSH) representation and an assumption of orthotropic sample symmetry; here, the FFT approach allowed us to successfully calculate the closures for a range of HCP metals without invoking any sample symmetry assumption. The methodology presented here facilitates, for the first time, computation of property closures involving normal-shear coupling stiffness coefficients. We found that the representation of these property linkages using FFTs needs more terms than the GSH representation. However, the use of FFT representations reduces the computational time involved in producing the property closures due to the use of fast FFT algorithms. Moreover, FFT algorithms are readily available, as opposed to GSH codes. PMID:28793566
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Rigas, Georgios; Esclapez, Lucas; Magri, Luca; Blonigan, Patrick
2016-11-01
Bluff body flows are of fundamental importance to many engineering applications involving massive flow separation, and in particular to the transport industry. Coherent flow structures emanating in the wake of three-dimensional bluff bodies, such as cars, trucks and lorries, are directly linked to increased aerodynamic drag, noise and structural fatigue. For low-Reynolds-number laminar and transitional regimes, hydrodynamic stability theory has aided the understanding and prediction of the unstable dynamics. In the same framework, sensitivity analysis provides the means for efficient and optimal control, provided the unstable modes can be accurately predicted. However, these methodologies are limited to laminar regimes where only a few unstable modes manifest. Here we extend the stability analysis to low-dimensional chaotic regimes by computing the Lyapunov covariant vectors and their associated Lyapunov exponents. We compare them to eigenvectors and eigenvalues computed in traditional hydrodynamic stability analysis. Computing Lyapunov covariant vectors and Lyapunov exponents also enables the extension of sensitivity analysis to chaotic flows via the shadowing method. We compare the computed shadowing sensitivities to traditional sensitivity analysis. These Lyapunov-based methodologies do not rely on mean flow assumptions, and are mathematically rigorous for calculating sensitivities of fully unsteady flow simulations.
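The Lyapunov quantities mentioned above can be illustrated on a low-dimensional chaotic system. The sketch below estimates the Lyapunov spectrum of the Lorenz system with the standard tangent-space integration and periodic QR reorthonormalization (Benettin-style); it is a generic illustration, not the bluff-body computation, and covariant vectors and shadowing sensitivities require further steps beyond this sketch.

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def jacobian(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([[-sigma, sigma, 0.0],
                     [rho - x[2], -1.0, -x[0]],
                     [x[1], x[0], -beta]])

dt, n_steps = 1e-3, 200_000
x = np.array([1.0, 1.0, 1.0])
Q = np.eye(3)                       # orthonormal tangent vectors
lyap_sums = np.zeros(3)

for step in range(n_steps):
    # Forward-Euler step for the state and the tangent dynamics (illustrative;
    # a Runge-Kutta integrator would be preferred for production work).
    J = jacobian(x)
    x = x + dt * lorenz(x)
    Q = Q + dt * (J @ Q)
    # Periodic QR reorthonormalization accumulates the local growth rates.
    if (step + 1) % 10 == 0:
        Q, R = np.linalg.qr(Q)
        lyap_sums += np.log(np.abs(np.diag(R)))

exponents = lyap_sums / (n_steps * dt)
print("estimated Lyapunov exponents:", np.round(exponents, 2))  # roughly [0.9, 0.0, -14.6]
```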
1992-09-01
…ease with which a model is employed may depend on several factors, among them the users' past experience in modeling and preferences for menu-driven… partially on our knowledge of important logistics factors, partially on the past work of Diener (12), and partially on the assumption that comparison of… flexibility in output report selection. The minimum output was used in each instance to conserve computer storage and to minimize the consumption of paper.
2009-04-01
Uncertainties, Gaps, and Issues for the Use of GWP to Examine Emissions From Aviation That Impact Global Climate Change (Wuebbles, Yang and Herman 2008)… selecting time periods and spatial scales for data gathering, strategies for filling data gaps, and computational considerations for managing the… Fuels. Assumptions, methodological choices, strategies for filling data gaps, and other factors throughout the life cycle substantially influence the…
Crovelli, R.A.
1988-01-01
The geologic appraisal model that is selected for a petroleum resource assessment depends upon purpose of the assessment, basic geologic assumptions of the area, type of available data, time available before deadlines, available human and financial resources, available computer facilities, and, most importantly, the available quantitative methodology with corresponding computer software and any new quantitative methodology that would have to be developed. Therefore, different resource assessment projects usually require different geologic models. Also, more than one geologic model might be needed in a single project for assessing different regions of the study or for cross-checking resource estimates of the area. Some geologic analyses used in the past for petroleum resource appraisal involved play analysis. The corresponding quantitative methodologies of these analyses usually consisted of Monte Carlo simulation techniques. A probabilistic system of petroleum resource appraisal for play analysis has been designed to meet the following requirements: (1) includes a variety of geologic models, (2) uses an analytic methodology instead of Monte Carlo simulation, (3) possesses the capacity to aggregate estimates from many areas that have been assessed by different geologic models, and (4) runs quickly on a microcomputer. Geologic models consist of four basic types: reservoir engineering, volumetric yield, field size, and direct assessment. Several case histories and present studies by the U.S. Geological Survey are discussed. ?? 1988 International Association for Mathematical Geology.
Methodology for estimating helicopter performance and weights using limited data
NASA Technical Reports Server (NTRS)
Baserga, Claudio; Ingalls, Charles; Lee, Henry; Peyran, Richard
1990-01-01
Methodology is developed and described for estimating the flight performance and weights of a helicopter for which limited data are available. The methodology is based on assumptions which couple knowledge of the technology of the helicopter under study with detailed data from well documented helicopters thought to be of similar technology. The approach, analysis assumptions, technology modeling, and the use of reference helicopter data are discussed. Application of the methodology is illustrated with an investigation of the Agusta A129 Mangusta.
2016-03-24
1.1 General Issue. Violent conflict between competing groups has been a pervasive and driving force for all of human history… It has evolved from small skirmishes between unarmed groups wielding rudimentary weapons to industrialized global conflagrations. Global… The study methodology is presented in Figure 2 (Study Methodology). 1.6 Study Assumptions and Limitations: Four underlying assumptions were…
Experimental Methodology for Measuring Combustion and Injection-Coupled Responses
NASA Technical Reports Server (NTRS)
Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.
2006-01-01
A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.
Comparison of a 3-D CFD-DSMC Solution Methodology With a Wind Tunnel Experiment
NASA Technical Reports Server (NTRS)
Glass, Christopher E.; Horvath, Thomas J.
2002-01-01
A solution method for problems that contain both continuum and rarefied flow regions is presented. The methodology is applied to flow about the 3-D Mars Sample Return Orbiter (MSRO) that has a highly compressed forebody flow, a shear layer where the flow separates from a forebody lip, and a low density wake. Because blunt body flow fields contain such disparate regions, employing a single numerical technique to solve the entire 3-D flow field is often impractical, or the technique does not apply. Direct simulation Monte Carlo (DSMC) could be employed to solve the entire flow field; however, the technique requires inordinate computational resources for continuum and near-continuum regions, and is best suited for the wake region. Computational fluid dynamics (CFD) will solve the high-density forebody flow, but continuum assumptions do not apply in the rarefied wake region. The CFD-DSMC approach presented herein may be a suitable way to obtain a higher fidelity solution.
Ontological, Epistemological and Methodological Assumptions: Qualitative versus Quantitative
ERIC Educational Resources Information Center
Ahmed, Abdelhamid
2008-01-01
The review to follow is a comparative analysis of two studies conducted in the field of TESOL in Education published in "TESOL QUARTERLY." The aspects to be compared are as follows. First, a brief description of each study will be presented. Second, the ontological, epistemological and methodological assumptions underlying each study…
Anderson, G F; Han, K C; Miller, R H; Johns, M E
1997-01-01
OBJECTIVE: To compare three methods of computing the national requirements for otolaryngologists in 1994 and 2010. DATA SOURCES: Three large HMOs, a Delphi panel, the Bureau of Health Professions (BHPr), and published sources. STUDY DESIGN: Three established methods of computing requirements for otolaryngologists were compared: managed care, demand-utilization, and adjusted needs assessment. Under the managed care model, a published method based on reviewing staffing patterns in HMOs was modified to estimate the number of otolaryngologists. We obtained from BHPr estimates of work force projections from their demand model. To estimate the adjusted needs model, we convened a Delphi panel of otolaryngologists using the methodology developed by the Graduate Medical Education National Advisory Committee (GMENAC). DATA COLLECTION/EXTRACTION METHODS: Not applicable. PRINCIPAL FINDINGS: Wide variation in the estimated number of otolaryngologists required occurred across the three methods. Within each model it was possible to alter the requirements for otolaryngologists significantly by changing one or more of the key assumptions. The managed care model has a potential to obtain the most reliable estimates because it reflects actual staffing patterns in institutions that are attempting to use physicians efficiently. CONCLUSIONS: Estimates of work force requirements can vary considerably if one or more assumptions are changed. In order for the managed care approach to be useful for actual decision making concerning the appropriate number of otolaryngologists required, additional research on the methodology used to extrapolate the results to the general population is necessary. PMID:9180613
Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew D.; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.
2015-01-01
Components of the methodology are based on simplifying assumptions and require information that, for many species, may be sparse or unreliable. These assumptions are presented in the report and should be carefully considered when using output from the methodology. In addition, this methodology can be used to recommend species for more intensive demographic modeling or highlight those species that may not require any additional protection because effects of wind energy development on their populations are projected to be small.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
… burden of this collection of information is accurate and based on valid assumptions and methodology… assumptions and methodologies. The respondents requested that GSA and OMB publish additional information about… seen as intelligence gathering, they recommended that OMB exempt primary recipients from having to…
Development of a New Methodology for Computing Surface Sensible Heat Fluxes using Thermal Imagery
NASA Astrophysics Data System (ADS)
Morrison, T. J.; Calaf, M.; Fernando, H. J.; Price, T. A.; Pardyjak, E.
2017-12-01
Current numerical weather prediction models utilize similarity theory to characterize momentum, moisture, and heat fluxes. Such formulations are only valid under the ideal assumptions of spatial homogeneity, statistical stationarity, and zero subsidence. However, recent surface temperature measurements from the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program on the Salt Flats of Utah's West Desert show that, even under the most ideal conditions a priori, heterogeneity of the aforementioned variables exists. We present a new method to extract spatially distributed measurements of surface sensible heat flux from thermal imagery. The approach consists of using a surface energy budget, where the ground heat flux is easily computed from limited measurements using a force-restore-type methodology, the latent heat fluxes are neglected, and the energy storage is computed using a lumped capacitance model. Preliminary validation of the method is presented using experimental data acquired from a nearby sonic anemometer during the MATERHORN campaign. Additional evaluation is required to confirm the method's validity. Further decomposition analysis of on-site instrumentation (thermal camera, cold-hotwire probes, and sonic anemometers) using Proper Orthogonal Decomposition (POD) and wavelet analysis reveals time-scale similarity between the flow and surface fluctuations.
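A minimal sketch of the surface-energy-budget bookkeeping described above: with latent heat neglected, the sensible heat flux at each thermal-camera pixel is taken as the residual H = Rn - G - dS/dt, with the storage rate from a lumped-capacitance model and the ground heat flux from a force-restore-type expression. Every coefficient and data value below (gamma, c_ground, c_storage, the temperatures, the net radiation) is an assumed placeholder, not a MATERHORN value.

```python
import numpy as np

# Hypothetical thermal-camera surface temperatures for a small pixel patch (K),
# sampled one minute apart. Values are placeholders, not MATERHORN data.
t_s = np.array([[310.1, 310.4], [310.3, 310.6]])       # at time t
t_s_next = np.array([[310.6, 310.9], [310.8, 311.1]])  # at time t + dt
dt = 60.0                                               # s

# Assumed forcing and surface/soil properties (illustrative values).
r_net = 450.0          # net radiation, W m^-2
t_deep = 300.0         # deep-soil restoring temperature, K
gamma = 3.0            # force-restore restoring coefficient, W m^-2 K^-1 (assumed)
c_ground = 2.0e4       # ground thermal-inertia term, J m^-2 K^-1 (assumed)
c_storage = 1.0e4      # lumped surface-layer heat capacity, J m^-2 K^-1 (assumed)

dts_dt = (t_s_next - t_s) / dt

# Force-restore-type ground heat flux and lumped-capacitance storage rate.
g_flux = gamma * (t_s - t_deep) + c_ground * dts_dt
storage = c_storage * dts_dt

# Latent heat neglected over the dry playa, so the sensible heat flux is the residual.
h_flux = r_net - g_flux - storage
print("sensible heat flux per pixel (W m^-2):\n", np.round(h_flux, 1))
```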
Bickel, David R.; Montazeri, Zahra; Hsieh, Pei-Chun; Beatty, Mary; Lawit, Shai J.; Bate, Nicholas J.
2009-01-01
Motivation: Measurements of gene expression over time enable the reconstruction of transcriptional networks. However, Bayesian networks and many other current reconstruction methods rely on assumptions that conflict with the differential equations that describe transcriptional kinetics. Practical approximations of kinetic models would enable inferring causal relationships between genes from expression data of microarray, tag-based and conventional platforms, but conclusions are sensitive to the assumptions made. Results: The representation of a sufficiently large portion of genome enables computation of an upper bound on how much confidence one may place in influences between genes on the basis of expression data. Information about which genes encode transcription factors is not necessary but may be incorporated if available. The methodology is generalized to cover cases in which expression measurements are missing for many of the genes that might control the transcription of the genes of interest. The assumption that the gene expression level is roughly proportional to the rate of translation led to better empirical performance than did either the assumption that the gene expression level is roughly proportional to the protein level or the Bayesian model average of both assumptions. Availability: http://www.oisb.ca points to R code implementing the methods (R Development Core Team 2004). Contact: dbickel@uottawa.ca Supplementary information: http://www.davidbickel.com PMID:19218351
Analysis of satellite servicing cost benefits
NASA Technical Reports Server (NTRS)
Builteman, H. O.
1982-01-01
Under the auspices of NASA/JSC, a methodology was developed to estimate the value of satellite servicing to the user community. Time and funding precluded the development of an exhaustive computer model; instead, the concept of Design Reference Missions was employed. In this approach, three space programs were analyzed for various levels of servicing. The programs selected fall into broad categories which include 80 to 90% of the missions planned between now and the end of the century. Of necessity, the extrapolation of the three program analyses to the user community as a whole depends on an average mission model and equivalency projections. The value of the estimated cost benefits based on this approach depends largely on how well the equivalency assumptions and the mission model match the real world. A careful definition of all assumptions permits the analysis to be extended to conditions beyond the scope of this study.
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
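The general flavor of a Lagrange Multiplier test can be conveyed with the familiar n·R² auxiliary-regression form: fit the restricted (linear) model, regress its residuals on the original regressors plus terms encoding the suspected violation (here a squared regressor as a stand-in for nonlinearity), and refer n·R² to a chi-squared distribution. This is a generic illustration, not the specific voxel-wise statistics developed in the paper, and the time series is simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical single-voxel time series: a response with a mild nonlinearity.
n = 200
x = rng.normal(size=n)                       # modeled regressor (e.g., predicted response)
y = 1.0 + 0.8 * x + 0.3 * x**2 + rng.normal(scale=0.5, size=n)

def ols_residuals(design, response):
    beta, *_ = np.linalg.lstsq(design, response, rcond=None)
    return response - design @ beta

# Restricted (linear) model and its residuals.
X0 = np.column_stack([np.ones(n), x])
resid = ols_residuals(X0, y)

# Auxiliary regression of the residuals on the restricted regressors plus the
# extra term encoding the suspected violation (here: a squared regressor).
X1 = np.column_stack([X0, x**2])
aux_resid = ols_residuals(X1, resid)
r2_aux = 1.0 - aux_resid.var() / resid.var()

lm_stat = n * r2_aux                          # LM statistic, ~ chi^2 with 1 df under H0
p_value = stats.chi2.sf(lm_stat, df=1)
print(f"LM statistic = {lm_stat:.2f}, p = {p_value:.4f}")
```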
Model documentation report: Commercial Sector Demand Module of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-01-01
This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components. The NEMS Commercial Sector Demand Module is a simulation tool based upon economic and engineering relationships that models commercial sector energy demands at the nine Census Division level of detail for eleven distinct categories of commercial buildings. Commercial equipment selections are performed for the major fuels of electricity, natural gas, and distillate fuel, for the major services of space heating, space cooling, water heating, ventilation, cooking, refrigeration, and lighting. The algorithm also models demand for the minor fuels of residual oil, liquefied petroleum gas, steam coal, motor gasoline, and kerosene, the renewable fuel sources of wood and municipal solid waste, and the minor services of office equipment. Section 2 of this report discusses the purpose of the model, detailing its objectives, primary input and output quantities, and the relationship of the Commercial Module to the other modules of the NEMS system. Section 3 of the report describes the rationale behind the model design, providing insights into further assumptions utilized in the model development process to this point. Section 3 also reviews alternative commercial sector modeling methodologies drawn from existing literature, providing a comparison to the chosen approach. Section 4 details the model structure, using graphics and text to illustrate model flows and key computations.
Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G
2009-04-03
To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head to head comparison trials was not systematically searched for or not included in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
Fault-tolerant clock synchronization validation methodology. [in computer systems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.
1987-01-01
A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
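As an illustration of the stochastic step described above, a minimal sketch of how an exceedance probability for the clock read error might be estimated from measured samples is given below. The sample generator, the bound, and the rule-of-three fallback are illustrative assumptions, not details taken from the SIFT study.

```python
import numpy as np

def prob_read_error_exceeds(samples_us, bound_us):
    """Estimate P(clock read error > bound) from measured samples (microseconds).

    Minimal sketch: empirical exceedance fraction, with the 'rule of three'
    (3/n) used as a conservative 95% upper bound when no exceedances are
    observed. Parameter names and units are illustrative only.
    """
    samples = np.asarray(samples_us, dtype=float)
    n = samples.size
    k = int((samples > bound_us).sum())
    point = k / n
    upper = point if k > 0 else 3.0 / n
    return point, upper

rng = np.random.default_rng(1)
measured = rng.gamma(shape=2.0, scale=5.0, size=10_000)   # synthetic read-error data
print(prob_read_error_exceeds(measured, bound_us=60.0))
```

The resulting probability would then feed the kind of detailed reliability analysis the abstract mentions.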
NASA Technical Reports Server (NTRS)
1979-01-01
Graphite/polyimide (Gr/PI) bolted and bonded joints were investigated. Possible failure modes and the design loads for the four generic joint types are discussed. Preliminary sizing of a type 1 joint, bonded and bolted configuration is described, including assumptions regarding material properties and sizing methodology. A general purpose finite element computer code is described that was formulated to analyze single and double lap joints, with and without tapered adherends, and with user-controlled variable element size arrangements. An initial order of Celion 6000/PMR-15 prepreg was received and characterized.
Cogeneration Technology Alternatives Study (CTAS). Volume 2: Analytical approach
NASA Technical Reports Server (NTRS)
Gerlaugh, H. E.; Hall, E. W.; Brown, D. H.; Priestley, R. R.; Knightly, W. F.
1980-01-01
Various advanced energy conversion systems were compared with each other and with current technology systems in terms of their savings in fuel energy, costs, and emissions in individual plants and on a national level. The ground rules established by NASA and assumptions made by the General Electric Company in performing this cogeneration technology alternatives study are presented. The analytical methodology employed is described in detail and is illustrated with numerical examples together with a description of the computer program used in calculating over 7000 energy conversion system-industrial process applications. For Vol. 1, see 80N24797.
Haegele, Justin A; Hodge, Samuel Russell
2015-10-01
There are basic philosophical and paradigmatic assumptions that guide scholarly research endeavors, including the methods used and the types of questions asked. Through this article, kinesiology faculty and students with interests in adapted physical activity are encouraged to understand the basic assumptions of applied behavior analysis (ABA) methodology for conducting, analyzing, and presenting research of high quality in this paradigm. The purposes of this viewpoint paper are to present information fundamental to understanding the assumptions undergirding research methodology in ABA, describe key aspects of single-subject research designs, and discuss common research designs and data-analysis strategies used in single-subject studies.
Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency
NASA Astrophysics Data System (ADS)
Aikens, Kurt; Craft, Kyle; Redman, Andrew
2015-11-01
The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
Allen, Vivian; Paxton, Heather; Hutchinson, John R
2009-09-01
Inertial properties of animal bodies and segments are critical input parameters for biomechanical analysis of standing and moving, and thus are important for paleobiological inquiries into the broader behaviors, ecology and evolution of extinct taxa such as dinosaurs. But how accurately can these be estimated? Computational modeling was used to estimate the inertial properties including mass, density, and center of mass (COM) for extant crocodiles (adult and juvenile Crocodylus johnstoni) and birds (Gallus gallus; junglefowl and broiler chickens), to identify the chief sources of variation and methodological errors, and their significance. High-resolution computed tomography scans were segmented into 3D objects and imported into inertial property estimation software that allowed for the examination of variable body segment densities (e.g., air spaces such as lungs, and deformable body outlines). Considerable biological variation of inertial properties was found within groups due to ontogenetic changes as well as evolutionary changes between chicken groups. COM positions shift in variable directions during ontogeny in different groups. Our method was repeatable and the resolution was sufficient for accurate estimations of mass and density in particular. However, we also found considerable potential methodological errors for COM related to (1) assumed body segment orientation, (2) what frames of reference are used to normalize COM for size-independent comparisons among animals, and (3) assumptions about tail shape. Methods and assumptions are suggested to minimize these errors in the future and thereby improve estimation of inertial properties for extant and extinct animals. In the best cases, 10%-15% errors in these estimates are unavoidable, but particularly for extinct taxa errors closer to 50% should be expected, and therefore, cautiously investigated. Nonetheless in the best cases these methods allow rigorous estimation of inertial properties. (c) 2009 Wiley-Liss, Inc.
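A minimal sketch of the voxel bookkeeping behind such mass and center of mass (COM) estimates is given below, assuming a labelled CT volume and user-supplied tissue densities. The function name, label scheme, and density values are hypothetical, and the published workflow additionally handles deformable body outlines and segment-specific reference frames.

```python
import numpy as np

def mass_and_com(labels, voxel_mm, densities):
    """Mass (kg) and center of mass (m) of a segmented volume.

    labels    : 3-D integer array of tissue labels (0 = background).
    voxel_mm  : voxel edge lengths in millimetres, e.g. (0.5, 0.5, 0.5).
    densities : dict mapping label -> density in kg/m^3 (e.g. lungs ~ air).
    """
    vox_vol_m3 = np.prod(np.asarray(voxel_mm) / 1000.0)
    dens = np.zeros_like(labels, dtype=float)
    for lab, rho in densities.items():
        dens[labels == lab] = rho
    mass_per_voxel = dens * vox_vol_m3
    total_mass = mass_per_voxel.sum()
    idx = np.indices(labels.shape).reshape(3, -1)            # voxel indices per axis
    coords_m = idx.T * (np.asarray(voxel_mm) / 1000.0)       # physical coordinates
    com = (coords_m * mass_per_voxel.reshape(-1, 1)).sum(axis=0) / total_mass
    return total_mass, com

labels = np.zeros((40, 40, 40), dtype=int)
labels[10:30, 10:30, 10:30] = 1          # "body" block
labels[15:25, 15:25, 15:25] = 2          # internal air space (lung-like)
print(mass_and_com(labels, (1.0, 1.0, 1.0), {1: 1000.0, 2: 1.2}))
```

Errors of the kind discussed above (segment orientation, reference frames, tail shape) enter through how such volumes are posed and normalized, not through the voxel arithmetic itself.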
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk
2016-10-15
Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
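The core linear-algebra step, finding the null space of a small constrained generator and averaging the slow propensities over it, can be caricatured as follows. The two-state gene switch, rate constants, and variable names are invented for illustration, and the paper's constrained approach embeds this computation in an iterative scheme rather than invoking the QSSA directly.

```python
import numpy as np
from scipy.linalg import null_space

# Toy gene switch: fast variable g in {off, on}; the slow variable is a protein count.
# Fast generator Q in the master-equation convention dp/dt = Q p (columns sum to zero).
k_on, k_off = 50.0, 30.0                  # fast switching rates (illustrative values)
Q = np.array([[-k_on,  k_off],
              [ k_on, -k_off]])

# Stationary law of the constrained (fast) subsystem = normalized null space of Q.
pi = null_space(Q)[:, 0]
pi = pi / pi.sum()

# Effective slow propensity: protein production only while the gene is on,
# averaged over the fast stationary law.
k_prod, k_deg = 10.0, 0.1
eff_production = k_prod * pi[1]
print(f"P(on) = {pi[1]:.3f}, effective production rate = {eff_production:.3f}")
# A caricature only: the published method assembles such null-space computations
# into an iterative construction of the effective slow generator.
```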
Using Model Replication to Improve the Reliability of Agent-Based Models
NASA Astrophysics Data System (ADS)
Zhong, Wei; Kim, Yushim
The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. By illustrating the replication, in NetLogo and by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
James Francfort; Kevin Morrow; Dimitri Hochard
2007-02-01
This report documents efforts to develop a computer tool for modeling the economic payback for comparative airport ground support equipment (GSE) that are propelled by either electric motors or gasoline and diesel engines. The types of GSE modeled are pushback tractors, baggage tractors, and belt loaders. The GSE modeling tool includes an emissions module that estimates the amount of tailpipe emissions saved by replacing internal combustion engine GSE with electric GSE. This report contains modeling assumptions, methodology, a user’s manual, and modeling results. The model was developed based on the operations of two airlines at four United States airports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klasky, Marc Louis; Myers, Steven Charles; James, Michael R.
To facilitate the timely execution of System Threat Reviews (STRs) for DNDO, and also to develop a methodology for performing STRs, LANL performed comparisons of several radiation transport codes (MCNP, GADRAS, and Gamma-Designer) that have been previously utilized to compute radiation signatures. While each of these codes has strengths, it is of paramount interest to determine the limitations of each of the respective codes and also to identify the most time efficient means by which to produce computational results, given the large number of parametric cases that are anticipated in performing STR's. These comparisons serve to identify regions of applicability for each code and provide estimates of uncertainty that may be anticipated. Furthermore, while performing these comparisons, examination of the sensitivity of the results to modeling assumptions was also examined. These investigations serve to enable the creation of the LANL methodology for performing STRs. Given the wide variety of radiation test sources, scenarios, and detectors, LANL calculated comparisons of the following parameters: decay data, multiplicity, device (n,γ) leakages, and radiation transport through representative scenes and shielding. This investigation was performed to understand potential limitations utilizing specific codes for different aspects of the STR challenges.
Eigenspace perturbations for uncertainty estimation of single-point turbulence closures
NASA Astrophysics Data System (ADS)
Iaccarino, Gianluca; Mishra, Aashwin Ananda; Ghili, Saman
2017-02-01
Reynolds-averaged Navier-Stokes (RANS) models represent the workhorse for predicting turbulent flows in complex industrial applications. However, RANS closures introduce a significant degree of epistemic uncertainty in predictions due to the potential lack of validity of the assumptions utilized in model formulation. Estimating this uncertainty is a fundamental requirement for building confidence in such predictions. We outline a methodology to estimate this structural uncertainty, incorporating perturbations to the eigenvalues and the eigenvectors of the modeled Reynolds stress tensor. The mathematical foundations of this framework are derived and explicated. The framework is then applied to a set of separated turbulent flows, compared against numerical and experimental data, and contrasted with the predictions of the eigenvalue-only perturbation methodology. It is shown that for separated flows, this framework is able to yield significant enhancement over the established eigenvalue perturbation methodology in explaining the discrepancy against experimental observations and high-fidelity simulations. Furthermore, uncertainty bounds of potential engineering utility can be estimated by performing five specific RANS simulations, reducing the computational expenditure on such an exercise.
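A minimal sketch of the eigenspace-perturbation idea is shown below: the anisotropy tensor of a modeled Reynolds stress is eigendecomposed, its eigenvalues are nudged toward a limiting componentality state, and the stress is reassembled. The function name, the simple eigenvector permutation, and the sample tensor are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def perturb_reynolds_stress(R, delta=0.5, target="1C", flip_vectors=False):
    """Perturb a modeled Reynolds stress toward a limiting anisotropy state.

    R      : 3x3 modeled Reynolds stress tensor.
    delta  : perturbation magnitude in [0, 1] toward the target eigenvalues.
    target : '1C', '2C' or '3C' limiting state of the anisotropy eigenvalues.
    flip_vectors : crude stand-in for an eigenvector perturbation (permutes the
                   extreme eigenvectors); the published framework uses a more
                   principled rotation.
    """
    k = 0.5 * np.trace(R)
    a = R / (2.0 * k) - np.eye(3) / 3.0           # anisotropy tensor
    lam, V = np.linalg.eigh(a)                    # eigenvalues in ascending order
    limits = {"1C": np.array([-1/3, -1/3, 2/3]),
              "2C": np.array([-1/3, 1/6, 1/6]),
              "3C": np.zeros(3)}
    lam_p = (1.0 - delta) * lam + delta * limits[target]
    if flip_vectors:
        V = V[:, [2, 1, 0]]
    a_p = V @ np.diag(lam_p) @ V.T
    return 2.0 * k * (a_p + np.eye(3) / 3.0)

R = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.5, 0.0],
              [0.0, 0.0, 0.3]])
print(perturb_reynolds_stress(R, delta=0.3, target="1C"))
```

Running a baseline RANS case plus a handful of such perturbed closures is what yields the uncertainty bounds from "five specific RANS simulations" mentioned above.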
Effective Information Systems: What's the Secret?
ERIC Educational Resources Information Center
Kirkham, Sandi
1994-01-01
Argues that false assumptions about user needs implicit in methodologies for building information systems have resulted in inadequate and inflexible systems. Checkland's Soft Systems Methodology is examined as a useful alternative. Its fundamental features are described, and examples of models demonstrate how the methodology can facilitate…
NASA Astrophysics Data System (ADS)
Pascual-Gutiérrez, José A.; Murthy, Jayathi Y.; Viskanta, Raymond
2009-09-01
Silicon thermal conductivities are obtained from the solution of the linearized phonon Boltzmann transport equation without the use of any parameter-fitting. Perturbation theory is used to compute the strength of three-phonon and isotope scattering mechanisms. Matrix elements based on Fermi's golden rule are computed exactly without assuming either average or mode-dependent Grüneisen parameters, and with no underlying assumptions of crystal isotropy. The environment-dependent interatomic potential is employed to describe the interatomic force constants and the perturbing Hamiltonians. A detailed methodology to accurately find three-phonon processes satisfying energy- and momentum-conservation rules is also described. Bulk silicon thermal conductivity values are computed across a range of temperatures and shown to match experimental data very well. It is found that about two-thirds of the heat transport in bulk silicon may be attributed to transverse acoustic modes. Effective relaxation times and mean free paths are computed in order to provide a more complete picture of the detailed transport mechanisms and for use with carrier transport models based on the Boltzmann transport equation.
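Although the paper solves the linearized BTE without such simplifications, the effective relaxation times and mean free paths it reports are conventionally summarized through the textbook relaxation-time expression sketched below; the mode data and scattering law in the example are invented for illustration.

```python
import numpy as np

kB, hbar = 1.380649e-23, 1.054571817e-34     # J/K, J*s

def kappa_rta(omega, v_g, tau, T, volume):
    """Relaxation-time summary: kappa = (1/(3V)) * sum over modes of C_mode * v_g^2 * tau.

    omega, v_g, tau : per-mode angular frequency (rad/s), group speed (m/s) and
    relaxation time (s); T in kelvin; volume in m^3. A toy summary formula only,
    not the exact linearized-BTE solution of the paper.
    """
    x = hbar * omega / (kB * T)
    c_mode = kB * x**2 * np.exp(x) / np.expm1(x)**2   # Bose-Einstein mode heat capacity
    return np.sum(c_mode * v_g**2 * tau) / (3.0 * volume)

# Illustrative acoustic-like mode data (not taken from the paper).
omega = np.linspace(1e12, 6e13, 500)
v_g = np.full_like(omega, 6000.0)            # m/s, silicon-like acoustic speed
tau = 1e-11 * (1e13 / omega)**2              # crude omega^-2 relaxation-time law
mfp = v_g * tau                              # mean free paths follow directly
print(kappa_rta(omega, v_g, tau, T=300.0, volume=1.0e-26), mfp.max())
```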
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
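A toy sketch of the two ingredients, a KPCA-reduced feature space and Langevin-type sampling within it, is given below. It substitutes a finite-difference gradient and an unadjusted Langevin update for the paper's adjoint-based LMCMC, and the Gaussian random field, linear observable, and feature-space prior are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Toy high-dimensional parameter: a smooth 1-D Gaussian random field on 100 points.
d, n_prior = 100, 400
x = np.linspace(0, 1, d)
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2) + 1e-8 * np.eye(d)
L = np.linalg.cholesky(C)
prior_samples = (L @ rng.normal(size=(d, n_prior))).T

# KPCA identifies a low-dimensional feature space of the prior samples.
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-2, fit_inverse_transform=True)
Z_prior = kpca.fit_transform(prior_samples)

# Toy linear forward model and synthetic data.
A = rng.normal(size=(3, d)) / np.sqrt(d)
theta_true = prior_samples[0]
sigma = 0.1
y_obs = A @ theta_true + sigma * rng.normal(size=3)

def log_post(z):
    theta = kpca.inverse_transform(z[None, :])[0]
    misfit = y_obs - A @ theta
    # Crude Gaussian prior in feature space using the empirical score variances.
    return -0.5 * np.sum(misfit**2) / sigma**2 - 0.5 * np.sum(z**2 / Z_prior.var(axis=0))

def grad_fd(z, h=1e-4):
    g = np.zeros_like(z)
    for i in range(z.size):
        zp, zm = z.copy(), z.copy()
        zp[i] += h; zm[i] -= h
        g[i] = (log_post(zp) - log_post(zm)) / (2 * h)
    return g

# Unadjusted Langevin sampling in the 5-D feature space (step size is a tuning choice).
eps, z = 1e-3, Z_prior[1].copy()
chain = []
for _ in range(2000):
    z = z + 0.5 * eps * grad_fd(z) + np.sqrt(eps) * rng.normal(size=z.size)
    chain.append(z.copy())
post_mean_field = kpca.inverse_transform(np.mean(chain, axis=0)[None, :])[0]
print(post_mean_field[:5].round(3))
```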
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
43 CFR 11.83 - Damage determination phase-use value methodologies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...
ERIC Educational Resources Information Center
Roscigno, Vincent J.
2011-01-01
Power is a core theoretical construct in the field with amazing utility across substantive areas, levels of analysis and methodologies. Yet, its use along with associated assumptions--assumptions surrounding constraint vs. action and specifically organizational structure and rationality--remain problematic. In this article, and following an…
NASA Astrophysics Data System (ADS)
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that the EM methodology formulated in the study can derive snowfall estimates above the freezing level in ETCs that are consistent with the triple frequency radar observations as well as with independent rainfall estimates below the freezing level.
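A structural skeleton of such an E/M alternation is sketched below with a deliberately artificial forward model standing in for the Rayleigh-Gans scattering tables; the parameter names (snow water content, density-size exponent b) and all numerical values are illustrative assumptions, not the study's retrieval.

```python
import numpy as np
from scipy.optimize import least_squares, minimize_scalar

def forward_model(swc, b):
    """Toy stand-in mapping (snow water content, assumed density-size exponent b)
    to Ku/Ka/W reflectivities in dB. Purely illustrative, not a scattering model."""
    base = 10.0 * np.log10(swc + 1e-6)
    return base + np.array([20.0, 14.0, 8.0]) * (1.0 + 0.05 * b)

rng = np.random.default_rng(3)
b_true, swc_true = 2.1, np.array([0.2, 0.5, 0.8, 1.1])          # a short SWC profile
Z_obs = np.stack([forward_model(w, b_true) for w in swc_true])
Z_obs += rng.normal(scale=0.5, size=Z_obs.shape)                 # measurement noise

def e_step(b):
    """E step: least-squares SWC retrieval at each range gate, with b held fixed."""
    swc = []
    for z in Z_obs:
        fit = least_squares(lambda w: forward_model(w[0], b) - z, x0=[0.5],
                            bounds=([1e-4], [10.0]))
        swc.append(fit.x[0])
    return np.array(swc)

def m_step(swc):
    """M step: re-optimize the density-size assumption b given the current SWC."""
    cost = lambda b: np.sum((np.stack([forward_model(w, b) for w in swc]) - Z_obs)**2)
    return minimize_scalar(cost, bounds=(1.0, 3.0), method="bounded").x

b = 1.5
for _ in range(10):
    swc_est = e_step(b)
    b = m_step(swc_est)
print(round(b, 3), swc_est.round(3))
```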
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy
Schroll, Henning; Hamker, Fred H.
2013-01-01
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2008-03-01
Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenzie-Carter, M.A.; Lyon, R.E.; Rope, S.K.
This report contains information to support the Environmental Assessment for the Burning Plasma Experiment (BPX) Project proposed for the Princeton Plasma Physics Laboratory (PPPL). The assumptions and methodology used to assess the impact to members of the public from operational and accidental releases of radioactive material from the proposed BPX during the operational period of the project are described. A description of the tracer release tests conducted at PPPL by NOAA is included; dispersion values from these tests are used in the dose calculations. Radiological releases, doses, and resulting health risks are calculated and summarized. The computer code AIRDOS-EPA, which is part of the computer code system CAP-88, is used to calculate the individual and population doses for routine releases; FUSCRAC3 is used to calculate doses resulting from off-normal releases where direct application of the NOAA tracer test data is not practical. Where applicable, doses are compared to regulatory limits and guideline values. 48 refs., 16 tabs.
Qualitative Approaches to Mixed Methods Practice
ERIC Educational Resources Information Center
Hesse-Biber, Sharlene
2010-01-01
This article discusses how methodological practices can shape and limit how mixed methods is practiced and makes visible the current methodological assumptions embedded in mixed methods practice that can shut down a range of social inquiry. The article argues that there is a "methodological orthodoxy" in how mixed methods is practiced…
Mohiuddin, Syed; Busby, John; Savović, Jelena; Richards, Alison; Northstone, Kate; Hollingworth, William; Donovan, Jenny L; Vasilakis, Christos
2017-01-01
Objectives: Overcrowding in the emergency department (ED) is common in the UK as in other countries worldwide. Computer simulation is one approach used for understanding the causes of ED overcrowding and assessing the likely impact of changes to the delivery of emergency care. However, little is known about the usefulness of computer simulation for analysis of ED patient flow. We undertook a systematic review to investigate the different computer simulation methods and their contribution for analysis of patient flow within EDs in the UK. Methods: We searched eight bibliographic databases (MEDLINE, EMBASE, COCHRANE, WEB OF SCIENCE, CINAHL, INSPEC, MATHSCINET and ACM DIGITAL LIBRARY) from date of inception until 31 March 2016. Studies were included if they used a computer simulation method to capture patient progression within the ED of an established UK National Health Service hospital. Studies were summarised in terms of simulation method, key assumptions, input and output data, conclusions drawn and implementation of results. Results: Twenty-one studies met the inclusion criteria. Of these, 19 used discrete event simulation and 2 used system dynamics models. The purpose of many of these studies (n=16; 76%) centred on service redesign. Seven studies (33%) provided no details about the ED being investigated. Most studies (n=18; 86%) used specific hospital models of ED patient flow. Overall, the reporting of underlying modelling assumptions was poor. Nineteen studies (90%) considered patient waiting or throughput times as the key outcome measure. Twelve studies (57%) reported some involvement of stakeholders in the simulation study. However, only three studies (14%) reported on the implementation of changes supported by the simulation. Conclusions: We found that computer simulation can provide a means to pretest changes to ED care delivery before implementation in a safe and efficient manner. However, the evidence base is small and poorly developed. There are some methodological, data, stakeholder, implementation and reporting issues, which must be addressed by future studies. PMID:28487459
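For readers unfamiliar with the technique, a minimal discrete event simulation of ED patient flow (arrival, triage, treatment cubicle, departure) is sketched below using the simpy library. The resource counts and service-time distributions are invented, and the reviewed studies used far richer hospital-specific pathways.

```python
import random
import simpy

RANDOM_SEED, SIM_HOURS = 42, 24 * 7
ARRIVAL_MEAN_MIN, TRIAGE_MEAN_MIN, TREAT_MEAN_MIN = 9.0, 5.0, 45.0
N_TRIAGE_NURSES, N_CUBICLES = 2, 8
times_in_dept = []

def patient(env, triage, cubicles):
    arrived = env.now
    with triage.request() as req:              # queue for a triage nurse
        yield req
        yield env.timeout(random.expovariate(1.0 / TRIAGE_MEAN_MIN))
    with cubicles.request() as req:            # queue for a treatment cubicle
        yield req
        yield env.timeout(random.expovariate(1.0 / TREAT_MEAN_MIN))
    times_in_dept.append(env.now - arrived)    # total time in department (minutes)

def arrivals(env, triage, cubicles):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN_MIN))
        env.process(patient(env, triage, cubicles))

random.seed(RANDOM_SEED)
env = simpy.Environment()
triage = simpy.Resource(env, capacity=N_TRIAGE_NURSES)
cubicles = simpy.Resource(env, capacity=N_CUBICLES)
env.process(arrivals(env, triage, cubicles))
env.run(until=SIM_HOURS * 60)
print(f"mean time in ED: {sum(times_in_dept)/len(times_in_dept):.1f} min "
      f"over {len(times_in_dept)} patients")
```

Service redesign questions of the kind the review describes are explored by rerunning such a model with altered capacities or pathways and comparing waiting or throughput times.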
Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity
NASA Astrophysics Data System (ADS)
Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.
As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards proving the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.
2016-12-22
assumptions of behavior. This research proposes an information theoretic methodology to discover such complex network structures and dynamics while overcoming... the difficulties historically associated with their study. Indeed, this was the first application of an information theoretic methodology as a tool...
Radiation Transport Calculation of the UGXR Collimators for the Jules Horowitz Reactor (JHR)
NASA Astrophysics Data System (ADS)
Chento, Yelko; Hueso, César; Zamora, Imanol; Fabbri, Marco; Fuente, Cristina De La; Larringan, Asier
2017-09-01
Jules Horowitz Reactor (JHR), a major infrastructure of European interest in the fission domain, will be built and operated in the framework of an international cooperation, including the development and qualification of materials and nuclear fuel used in the nuclear industry. For this purpose the UGXR Collimators, two multi-slit gamma and X-ray collimation mechatronic systems, will be installed at the JHR pool and at the Irradiated Components Storage pool. The expected amounts of radiation produced by the spent fuel and the X-ray accelerator imply that diverse aspects need to be verified to ensure adequate radiological zoning and personnel radiation protection. A computational methodology was devised to validate the Collimators design by means of coupling different engineering codes. In summary, several assessments were performed by means of MCNP5v1.60 to fulfil all the radiological requirements in the Nominal scenario (TEDE < 25µSv/h) and in the Maintenance scenario (TEDE < 2mSv/h), among others, detailing the methodology, hypotheses and assumptions employed.
Calculating stage duration statistics in multistage diseases.
Komarova, Natalia L; Thalhauser, Craig J
2011-01-01
Many human diseases are characterized by multiple stages of progression. While the typical sequence of disease progression can be identified, there may be large individual variations among patients. Identifying mean stage durations and their variations is critical for statistical hypothesis testing needed to determine if treatment is having a significant effect on the progression, or if a new therapy is showing a delay of progression through a multistage disease. In this paper we focus on two methods for extracting stage duration statistics from longitudinal datasets: an extension of the linear regression technique, and a counting algorithm. Both are non-iterative, non-parametric and computationally cheap methods, which makes them invaluable tools for studying the epidemiology of diseases, with a goal of identifying different patterns of progression by using bioinformatics methodologies. Here we show that the regression method performs well for calculating the mean stage durations under a wide variety of assumptions; however, its generalization to variance calculations fails under realistic assumptions about the data collection procedure. On the other hand, the counting method yields reliable estimations for both means and variances of stage durations. Applications to Alzheimer disease progression are discussed.
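A minimal sketch of a counting-style estimator is given below: for each patient, the time between first entering a stage and first entering the next stage is counted as that stage's duration, and means and variances are pooled across patients. The handling of censoring and the toy data are simplifying assumptions, not the authors' algorithm.

```python
from collections import defaultdict
import statistics

def stage_durations(visits):
    """Counting-style stage durations for one patient.

    visits : list of (time, stage) pairs sorted by time, stages non-decreasing.
    A stage's duration is the time between the first visit at that stage and the
    first visit at the next stage; the last observed stage is censored and skipped.
    """
    first_seen = {}
    for t, s in visits:
        first_seen.setdefault(s, t)
    stages = sorted(first_seen)
    return {s: first_seen[nxt] - first_seen[s]
            for s, nxt in zip(stages, stages[1:])}

# Toy longitudinal data: three patients progressing through stages 1-3 (time in months).
patients = [
    [(0, 1), (6, 1), (12, 2), (20, 2), (30, 3)],
    [(0, 1), (8, 2), (18, 3)],
    [(0, 1), (5, 1), (11, 1), (15, 2), (27, 3)],
]
pooled = defaultdict(list)
for p in patients:
    for stage, dur in stage_durations(p).items():
        pooled[stage].append(dur)
for stage, durs in sorted(pooled.items()):
    print(stage, statistics.mean(durs), statistics.pvariance(durs))
```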
Walking through the statistical black boxes of plant breeding.
Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin
2016-10-01
The main statistical procedures in plant breeding are based on Gaussian process and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
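As an illustration of "how to solve mixed models", the sketch below sets up and solves Henderson's mixed model equations for a toy genotype trial, assuming a known variance ratio and an identity relationship matrix in place of a pedigree-based or genomic one.

```python
import numpy as np

def solve_mme(y, X, Z, lam):
    """Solve Henderson's mixed model equations for y = Xb + Zu + e.

    lam = sigma_e^2 / sigma_u^2 is the variance ratio (taken as known here);
    an identity relationship matrix stands in for a pedigree or genomic matrix.
    Returns (fixed-effect estimates, BLUPs of the random effects).
    """
    XtX, XtZ, ZtZ = X.T @ X, X.T @ Z, Z.T @ Z
    lhs = np.block([[XtX, XtZ],
                    [XtZ.T, ZtZ + lam * np.eye(Z.shape[1])]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:X.shape[1]], sol[X.shape[1]:]

rng = np.random.default_rng(7)
n, q = 60, 15
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # overall mean + one covariate
Z = np.zeros((n, q))
Z[np.arange(n), rng.integers(0, q, size=n)] = 1.0        # random genotype effects
u_true = rng.normal(scale=1.0, size=q)
y = X @ np.array([10.0, 2.0]) + Z @ u_true + rng.normal(scale=1.0, size=n)
b_hat, u_hat = solve_mme(y, X, Z, lam=1.0)
print(b_hat.round(2), round(float(np.corrcoef(u_true, u_hat)[0, 1]), 2))
```

Genomic prediction replaces the identity matrix with a marker-derived relationship matrix; the solving machinery is otherwise the same.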
NASA Technical Reports Server (NTRS)
Moser, Louise; Melliar-Smith, Michael; Schwartz, Richard
1987-01-01
A SIFT reliable aircraft control computer system, designed to meet the ultrahigh reliability required for safety critical flight control applications by use of processor replications and voting, was constructed for SRI, and delivered to NASA Langley for evaluation in the AIRLAB. To increase confidence in the reliability projections for SIFT, produced by a Markov reliability model, SRI constructed a formal specification, defining the meaning of reliability in the context of flight control. A further series of specifications defined, in increasing detail, the design of SIFT down to pre- and post-conditions on Pascal code procedures. Mechanically checked mathematical proofs were constructed to demonstrate that the more detailed design specifications for SIFT do indeed imply the formal reliability requirement. An additional specification defined some of the assumptions made about SIFT by the Markov model, and further proofs were constructed to show that these assumptions, as expressed by that specification, did indeed follow from the more detailed design specifications for SIFT. This report provides an outline of the methodology used for this hierarchical specification and proof, and describes the various specifications and proofs performed.
ERIC Educational Resources Information Center
Haegele, Justin A.; Hodge, Samuel R.
2015-01-01
Emerging professionals, particularly senior-level undergraduate and graduate students in kinesiology who have an interest in physical education for individuals with and without disabilities, should understand the basic assumptions of the quantitative research paradigm. Knowledge of basic assumptions is critical for conducting, analyzing, and…
Is herpes zoster vaccination likely to be cost-effective in Canada?
Peden, Alexander D; Strobel, Stephenson B; Forget, Evelyn L
2014-05-30
To synthesize the current literature detailing the cost-effectiveness of the herpes zoster (HZ) vaccine, and to provide Canadian policy-makers with cost-effectiveness measurements in a Canadian context. This article builds on an existing systematic review of the HZ vaccine that offers a quality assessment of 11 recent articles. We first replicated this study, and then two assessors reviewed the articles and extracted information on vaccine effectiveness, cost of HZ, other modelling assumptions and QALY estimates. Then we transformed the results into a format useful for Canadian policy decisions. Results expressed in different currencies from different years were converted into 2012 Canadian dollars using Bank of Canada exchange rates and a Consumer Price Index deflator. Modelling assumptions that varied between studies were synthesized. We tabled the results for comparability. The Szucs systematic review presented a thorough methodological assessment of the relevant literature. However, the various studies presented results in a variety of currencies, and based their analyses on disparate methodological assumptions. Most of the current literature uses Markov chain models to estimate HZ prevalence. Cost assumptions, discount rate assumptions, assumptions about vaccine efficacy and waning and epidemiological assumptions drove variation in the outcomes. This article transforms the results into a table easily understood by policy-makers. The majority of the current literature shows that HZ vaccination is cost-effective at a willingness-to-pay threshold of $100,000 per QALY. Few studies showed that vaccination cost-effectiveness was higher than this threshold, and only under conservative assumptions. Cost-effectiveness was sensitive to vaccine price and discount rate.
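The harmonisation step, converting each study's cost-per-QALY figure into 2012 Canadian dollars with an exchange rate and a CPI deflator, amounts to the small calculation sketched below; the CPI levels and exchange rate shown are illustrative placeholders, not the values used in the article.

```python
def to_2012_cad_per_qaly(cost_per_qaly, study_year, fx_to_cad, cpi_index):
    """Convert a cost-per-QALY figure to 2012 Canadian dollars.

    fx_to_cad : exchange rate (CAD per unit of the study currency); in the review
                this came from Bank of Canada rates.
    cpi_index : dict of Canadian CPI levels by year; the deflator scales
                study-year CAD to 2012 CAD.
    """
    cad_study_year = cost_per_qaly * fx_to_cad
    return cad_study_year * cpi_index[2012] / cpi_index[study_year]

cpi = {2008: 114.1, 2010: 116.5, 2012: 121.7}     # illustrative CPI levels only
# e.g. a study reporting USD 42,000/QALY in 2008, at an assumed 1.07 CAD per USD:
print(round(to_2012_cad_per_qaly(42_000, 2008, fx_to_cad=1.07, cpi_index=cpi)))
```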
The influence of computational assumptions on analysing abdominal aortic aneurysm haemodynamics.
Ene, Florentina; Delassus, Patrick; Morris, Liam
2014-08-01
The variation in computational assumptions for analysing abdominal aortic aneurysm haemodynamics can influence the desired output results and computational cost. Such assumptions for abdominal aortic aneurysm modelling include static/transient pressures, steady/transient flows and rigid/compliant walls. Six computational methods and these various assumptions were simulated and compared within a realistic abdominal aortic aneurysm model with and without intraluminal thrombus. A full transient fluid-structure interaction was required to analyse the flow patterns within the compliant abdominal aortic aneurysms models. Rigid wall computational fluid dynamics overestimates the velocity magnitude by as much as 40%-65% and the wall shear stress by 30%-50%. These differences were attributed to the deforming walls which reduced the outlet volumetric flow rate for the transient fluid-structure interaction during the majority of the systolic phase. Static finite element analysis accurately approximates the deformations and von Mises stresses when compared with transient fluid-structure interaction. Simplifying the modelling complexity reduces the computational cost significantly. In conclusion, the deformation and von Mises stress can be approximately found by static finite element analysis, while for compliant models a full transient fluid-structure interaction analysis is required for acquiring the fluid flow phenomenon. © IMechE 2014.
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.
2014-01-01
The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used for reduced computational cost of propagating mixed, inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground level propagation model. A methodology has also been introduced to quantify the plausibility of a design to pass a certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.
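A single-variable sketch of a nonintrusive polynomial chaos surrogate is given below: model samples are regressed onto a probabilists' Hermite basis and the output mean and standard deviation are read off the coefficients. The toy response function and the restriction to one standard-normal input are illustrative assumptions; the study used multivariate expansions over mixed aleatory and epistemic inputs.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def pce_surrogate(model, n_train=50, degree=4, rng=None):
    """Nonintrusive polynomial chaos surrogate for a scalar model of one
    standard-normal input, fitted by least-squares regression."""
    rng = rng or np.random.default_rng(0)
    xi = rng.normal(size=n_train)
    y = np.array([model(x) for x in xi])
    Psi = He.hermevander(xi, degree)                 # probabilists' Hermite basis
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    # Orthogonality of He_k under N(0,1): mean = c0, variance = sum_k k! * c_k^2.
    var = sum(factorial(k) * coef[k]**2 for k in range(1, degree + 1))
    surrogate = lambda x: He.hermeval(x, coef)
    return surrogate, coef[0], np.sqrt(var)

# Toy "expensive model": a loudness-like response to one uncertain input (invented).
model = lambda xi: 78.0 + 1.5 * xi + 0.4 * xi**2
surr, mean, std = pce_surrogate(model)
print(round(mean, 3), round(std, 3), round(float(surr(1.0)), 3))
# Analytic check for this toy: mean = 78.4, std = sqrt(1.5**2 + 2 * 0.4**2).
```

Once fitted, the surrogate can be sampled cheaply to propagate input uncertainty, which is the role it plays in the study's certification-prediction workflow.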
NASA Astrophysics Data System (ADS)
Redonnet, S.; Ben Khelil, S.; Bulté, J.; Cunha, G.
2017-09-01
With the objective of aircraft noise mitigation, we here address the numerical characterization of the aeroacoustic noise generated by a simplified nose landing gear (NLG), through the use of advanced simulation and signal processing techniques. To this end, the NLG noise physics is first simulated through an advanced hybrid approach, which relies on Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) calculations. Compared to more traditional hybrid methods (e.g. those relying on the use of an Acoustic Analogy), and although it is used here with some approximations (e.g. the design of the CFD-CAA interface), the present approach does not rely on restrictive assumptions (e.g. equivalent noise source, homogeneous propagation medium), which allows more realism to be incorporated into the prediction. In a second step, the outputs coming from such CFD-CAA hybrid calculations are processed through both traditional and advanced post-processing techniques, offering further insight into the NLG's noise source mechanisms.
Some observations on computer lip-reading: moving from the dream to the reality
NASA Astrophysics Data System (ADS)
Bear, Helen L.; Owen, Gari; Harvey, Richard; Theobald, Barry-John
2014-10-01
In the quest for greater computer lip-reading performance there are a number of tacit assumptions which are either present in the datasets (high resolution for example) or in the methods (recognition of spoken visual units called "visemes" for example). Here we review these and other assumptions and show the surprising result that computer lip-reading is not heavily constrained by video resolution, pose, lighting and other practical factors. However, the working assumption that visemes, which are the visual equivalent of phonemes, are the best unit for recognition does need further examination. We conclude that visemes, which were defined over a century ago, are unlikely to be optimal for a modern computer lip-reading system.
Root Source Analysis/ValuStream[Trade Mark] - A Methodology for Identifying and Managing Risks
NASA Technical Reports Server (NTRS)
Brown, Richard Lee
2008-01-01
Root Source Analysis (RoSA) is a systems engineering methodology that has been developed at NASA over the past five years. It is designed to reduce costs, schedule, and technical risks by systematically examining critical assumptions and the state of the knowledge needed to bring to fruition the products that satisfy mission-driven requirements, as defined for each element of the Work (or Product) Breakdown Structure (WBS or PBS). This methodology is sometimes referred to as the ValuStream method, as inherent in the process is the linking and prioritizing of uncertainties arising from knowledge shortfalls directly to the customer's mission driven requirements. RoSA and ValuStream are synonymous terms. RoSA is not simply an alternate or improved method for identifying risks. It represents a paradigm shift. The emphasis is placed on identifying very specific knowledge shortfalls and assumptions that are the root sources of the risk (the why), rather than on assessing the WBS product(s) themselves (the what). In so doing RoSA looks forward to anticipate, identify, and prioritize knowledge shortfalls and assumptions that are likely to create significant uncertainties/ risks (as compared to Root Cause Analysis, which is most often used to look back to discover what was not known, or was assumed, that caused the failure). Experience indicates that RoSA, with its primary focus on assumptions and the state of the underlying knowledge needed to define, design, build, verify, and operate the products, can identify critical risks that historically have been missed by the usual approaches (i.e., design review process and classical risk identification methods). Further, the methodology answers four critical questions for decision makers and risk managers: 1. What's been included? 2. What's been left out? 3. How has it been validated? 4. Has the real source of the uncertainty/ risk been identified, i.e., is the perceived problem the real problem? Users of the RoSA methodology have characterized it as a true bottom-up risk assessment.
Didactics and History of Mathematics: Knowledge and Self-Knowledge
ERIC Educational Resources Information Center
Fried, Michael N.
2007-01-01
The basic assumption of this paper is that mathematics and history of mathematics are both forms of knowledge and, therefore, represent different ways of knowing. This was also the basic assumption of Fried (2001) who maintained that these ways of knowing imply different conceptual and methodological commitments, which, in turn, lead to a conflict…
Estimation of the Prevalence of Autism Spectrum Disorder in South Korea, Revisited
ERIC Educational Resources Information Center
Pantelis, Peter C.; Kennedy, Daniel P.
2016-01-01
Two-phase designs in epidemiological studies of autism prevalence introduce methodological complications that can severely limit the precision of resulting estimates. If the assumptions used to derive the prevalence estimate are invalid or if the uncertainty surrounding these assumptions is not properly accounted for in the statistical inference…
Assumptions Underlying Curriculum Decisions in Australia: An American Perspective.
ERIC Educational Resources Information Center
Willis, George
An analysis of the cultural and historical context in which curriculum decisions are made in Australia and a comparison with educational assumptions in the United States is the purpose of this paper. Methodology is based on personal teaching experience and observation in Australia. Seven factors are identified upon which curricular decisions in…
Chatterji, Madhabi
2016-12-01
This paper explores avenues for navigating evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences on the intervention's effectiveness. A definition is provided of a CSP drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven (7) major sources of complexity that typify CSPs, threatening assumptions of textbook-recommended experimental designs for performing impact evaluations. Theoretically-supported, alternative methodological strategies are discussed to navigate assumptions and counter the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include: sequential refinement of the evaluation design through systems thinking, systems-informed logic modeling; and use of extended term, mixed methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question-framing, and selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.
Twin studies in psychiatry and psychology: science or pseudoscience?
Joseph, Jay
2002-01-01
Twin studies are frequently cited in support of the influence of genetic factors for a wide range of psychiatric conditions and psychological trait differences. The most common method, known as the classical twin method, compares the concordance rates or correlations of reared-together identical (MZ) vs. reared-together same-sex fraternal (DZ) twins. However, drawing genetic inferences from MZ-DZ comparisons is problematic due to methodological problems and questionable assumptions. It is argued that the main theoretical assumption of the twin method--known as the "equal environment assumption"--is not tenable. The twin method is therefore of doubtful value as an indicator of genetic influences. Studies of reared-apart twins are discussed, and it is noted that these studies are also vulnerable to methodological problems and environmental confounds. It is concluded that there is little reason to believe that twin studies provide evidence in favor of genetic influences on psychiatric disorders and human behavioral differences.
Studying the Education of Educators: Methodology.
ERIC Educational Resources Information Center
Sirotnik, Kenneth A.
1988-01-01
Describes the methodology and research design of SEE, the study of the Education of Educators. The approach is multimethodological, exploratory, descriptive, and evaluative. The research design permits examination of working assumptions and concentration on the individual site--the college, the education departments, and specific programs within…
Perceived Managerial and Leadership Effectiveness in Colombia
ERIC Educational Resources Information Center
Torres, Luis Eduardo; Ruiz, Carlos Enrique; Hamlin, Bob; Velez-Calle, Andres
2015-01-01
Purpose: The purpose of this study was to identify what Colombians perceive as effective and least effective/ineffective managerial behavior. Design/methodology/approach: This study was conducted following a qualitative methodology based on the philosophical assumptions of pragmatism and the "pragmatic approach" (Morgan, 2007). The…
Suzuoka, Daiki; Takahashi, Hideaki; Ishiyama, Tatsuya; Morita, Akihiro
2012-12-07
We have developed a method of molecular simulations utilizing a polarizable force field in combination with the theory of energy representation (ER) for the purpose of establishing an efficient and accurate methodology to compute solvation free energies. The standard version of the ER method is, however, constructed on the assumption that the solute-solvent interaction is pairwise additive. A crucial step in the present method is to introduce an intermediate state in the solvation process to treat separately the many-body interaction associated with the polarizable model. The intermediate state is chosen so that the solute-solvent interaction can be formally written in the pairwise form, though the solvent molecules are interacting with each other with polarizable charges dependent on the solvent configuration. It is, then, possible to extract the free energy contribution δμ due to the many-body interaction between solute and solvent from the total solvation free energy Δμ. It is shown that the free energy δμ can be computed by an extension of the recent development implemented in quantum mechanical/molecular mechanical simulations. To assess the numerical robustness of the approach, we computed the solvation free energies of a water and a methanol molecule in water solvent, where two paths for the solvation processes were examined by introducing different intermediate states. The solvation free energies of a water molecule associated with the two paths were obtained as -5.3 and -5.8 kcal/mol. Those of a methanol molecule were determined as -3.5 and -3.7 kcal/mol. These results of the ER simulations were also compared with those computed by a numerically exact approach. It was demonstrated that the present approach produces the solvation free energies in comparable accuracies to simulations of thermodynamic integration (TI) method within a tenth of computational time used for the TI simulations.
ERIC Educational Resources Information Center
Taylor, Bryan; Kroth, Michael
2009-01-01
This article creates the Teaching Methodology Instrument (TMI) to help determine the level of adult learning principles being used by a particular teaching methodology in a classroom. The instrument incorporates the principles and assumptions set forth by Malcolm Knowles of what makes a good adult learning environment. The Socratic method as used…
The Ideal Oriented Co-design Approach Revisited
NASA Astrophysics Data System (ADS)
Johnstone, Christina
A large number of different methodologies for developing information systems exist on the market, which implies that there are also a large number of "best" ways of developing those information systems. Avison and Fitzgerald (2003) state that every methodology is built on a philosophy, by which they mean the underlying attitudes and viewpoints, and the different assumptions and emphases, to be found within the specific methodology.
2017-02-06
The engineering rationale, assumptions, and methodology for transitioning craft acceleration data to laboratory shock test requirements are summarized, along with example requirements for... Methodologies for Small High-Speed Craft Structure, Equipment, Shock Isolation Seats, and Human Performance At-Sea, 10th Symposium on High...
ERIC Educational Resources Information Center
Johnson, Bonnie McD.; Leck, Glorianne M.
The philosophical proposition axiomatic in all gender difference research is examined in this paper. Research on gender differences is that which attempts to describe categorical differences between males and females, based on a designated potential for sexual reproduction. The methodological problems raised by this assumption include the…
10 CFR 436.17 - Establishing energy or water cost data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... escalation rate assumptions under § 436.14. When energy costs begin to accrue at a later time, subtract the... assumptions under § 436.14. When water costs begin to accrue at a later time, subtract the present value of... Methodology and Procedures for Life Cycle Cost Analyses § 436.17 Establishing energy or water cost data. (a...
ERIC Educational Resources Information Center
Toma, J. Douglas
This paper examines whether the social science-based typology of Yvonne Lincoln and Egon Guba (1994), in which social science scholars are divided into positivist, postpositivist, critical, and constructivist paradigms based on ontological, epistemological, and methodological assumptions in the discipline, can be adapted to the academic discipline…
Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm.
Arber, Madeleine M; Ireland, Michael J; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald
2017-01-01
Research into self-control based on the sequential-task methodology is currently at an impasse. The sequential-task methodology involves completing a task that is designed to tax self-control resources, which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high-profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task, with generally large effects; in the fifth study, the strength of negative transfer effects to a working memory task was related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks, though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.
NASA Technical Reports Server (NTRS)
Bast, Callie Corinne Scheidt
1994-01-01
This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
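The postulated randomized multifactor equation is described only qualitatively here; the sketch below assumes a generic multiplicative power-law form with normally distributed empirical exponents (an illustrative stand-in, not the actual PROMISS equation) to show how a cumulative distribution of lifetime strength could be produced by Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hedged sketch of a randomized multifactor strength-degradation model: each
# effect reduces lifetime strength through a power-law term with an uncertain
# (normally distributed) empirical exponent. All parameter values are hypothetical.
def strength_ratio(current, ultimate, reference, a_mean, a_sd):
    a = rng.normal(a_mean, a_sd, n)                       # uncertain material constant
    return ((ultimate - current) / (ultimate - reference)) ** a

ratio = (strength_ratio(1100.0, 1300.0, 20.0, 0.20, 0.05) *   # temperature effect (K)
         strength_ratio(1e5, 1e7, 1e3, 0.10, 0.03) *          # mechanical fatigue cycles
         strength_ratio(5e3, 1e5, 1e2, 0.15, 0.04))           # thermal fatigue cycles

# Empirical percentiles of the lifetime strength ratio (a CDF in tabular form).
for q in (0.05, 0.50, 0.95):
    print(f"P{int(q*100):02d} strength ratio: {np.quantile(ratio, q):.3f}")
```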
Network Analysis in Comparative Social Sciences
ERIC Educational Resources Information Center
Vera, Eugenia Roldan; Schupp, Thomas
2006-01-01
This essay describes the pertinence of Social Network Analysis (SNA) for the social sciences in general, and discusses its methodological and conceptual implications for comparative research in particular. The authors first present a basic summary of the theoretical and methodological assumptions of SNA, followed by a succinct overview of its…
Modern Psychometric Methodology: Applications of Item Response Theory
ERIC Educational Resources Information Center
Reid, Christine A.; Kolakowsky-Hayner, Stephanie A.; Lewis, Allen N.; Armstrong, Amy J.
2007-01-01
Item response theory (IRT) methodology is introduced as a tool for improving assessment instruments used with people who have disabilities. Need for this approach in rehabilitation is emphasized; differences between IRT and classical test theory are clarified. Concepts essential to understanding IRT are defined, necessary data assumptions are…
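As a hedged orientation to the kind of model IRT uses (the standard two-parameter logistic form, not material from the article), a minimal sketch:

```python
import numpy as np

# 2-parameter logistic (2PL) item response function: probability of a correct
# or endorsed response given latent trait theta, item discrimination a, and
# item difficulty b. Parameter values are hypothetical.
def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)                 # latent trait (e.g., functional ability)
print(np.round(p_correct(theta, a=1.5, b=0.0), 3))
```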
Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research
ERIC Educational Resources Information Center
Ramlo, Sue
2016-01-01
This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…
"Reading" Mixed Methods Research: Contexts for Criticism
ERIC Educational Resources Information Center
Freshwater, Dawn
2007-01-01
Health and social care researchers, in their haste to "belong" to academia, have adopted the system of mixed methodology research, overestimating its ability to reveal the truth and occasionally imprisoning their thought in one system. In this article, some of the assumptions underpinning mixed methodology research and its discourse are subjected…
Rebraiding Photovoice: Methodological Métissage at the Cultural Interface
ERIC Educational Resources Information Center
Higgins, Marc
2014-01-01
Photovoice, the most prevalent participatory visual research methodology utilised within social science research, has begun making its way into Indigenous contexts in light of its critical and pedagogical potential. However, this potential is not always actualised as the assumptions that undergird photovoice are often the same ones that…
The Nature of Educational Research
ERIC Educational Resources Information Center
Gillett, Simon G.
2011-01-01
The paper is in two parts. The first part of the paper is a critique of current methodology in educational research: scientific, critical and interpretive. The ontological and epistemological assumptions of those methodologies are described from the standpoint of John Searle's analytic philosophy. In the second part two research papers with…
Invited Commentary: The Need for Cognitive Science in Methodology.
Greenland, Sander
2017-09-15
There is no complete solution for the problem of abuse of statistics, but methodological training needs to cover cognitive biases and other psychosocial factors affecting inferences. The present paper discusses 3 common cognitive distortions: 1) dichotomania, the compulsion to perceive quantities as dichotomous even when dichotomization is unnecessary and misleading, as in inferences based on whether a P value is "statistically significant"; 2) nullism, the tendency to privilege the hypothesis of no difference or no effect when there is no scientific basis for doing so, as when testing only the null hypothesis; and 3) statistical reification, treating hypothetical data distributions and statistical models as if they reflect known physical laws rather than speculative assumptions for thought experiments. As commonly misused, null-hypothesis significance testing combines these cognitive problems to produce highly distorted interpretation and reporting of study results. Interval estimation has so far proven to be an inadequate solution because it involves dichotomization, an avenue for nullism. Sensitivity and bias analyses have been proposed to address reproducibility problems (Am J Epidemiol. 2017;186(6):646-647); these methods can indeed address reification, but they can also introduce new distortions via misleading specifications for bias parameters. P values can be reframed to lessen distortions by presenting them without reference to a cutoff, providing them for relevant alternatives to the null, and recognizing their dependence on all assumptions used in their computation; they nonetheless require rescaling for measuring evidence. I conclude that methodological development and training should go beyond coverage of mechanistic biases (e.g., confounding, selection bias, measurement error) to cover distortions of conclusions produced by statistical methods and psychosocial forces. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
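One concrete way to "provide P values for relevant alternatives to the null" is a P-value function computed over a grid of hypothesized effect sizes; a hedged normal-approximation sketch with hypothetical numbers (not an example from the commentary itself):

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch: a P-value function for a log risk-ratio estimate, evaluated
# over a grid of alternative hypotheses rather than only at the null.
est, se = np.log(1.4), 0.20                                   # hypothetical estimate and SE
candidate_rr = np.linspace(0.8, 2.5, 9)                       # alternative "true" risk ratios
p = 2 * norm.sf(np.abs(est - np.log(candidate_rr)) / se)      # two-sided P at each alternative

for rr, pv in zip(candidate_rr, p):
    print(f"RR = {rr:4.2f}: P = {pv:.3f}")
```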
NASA Astrophysics Data System (ADS)
Rousis, Damon A.
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Given the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts. This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the location in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts. A set of algebraic samples along with a truss design problem are presented as canonical examples for the proposed approach. The methodology is applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable geometry bypass nozzle concepts are explored as enablers for increased bypass ratio and potential alternatives over traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
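A minimal sketch of the Pareto-dominance bookkeeping at the core of the approach (hypothetical two-objective samples for two concepts; the Bayesian adaptive sampling and surrogate models are omitted):

```python
import numpy as np

def pareto_front(points):
    """Non-dominated subset when every objective is minimized."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        if not keep[i]:
            continue
        # points dominated by points[i]: no better in any objective, worse in at least one
        dominated = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        keep &= ~dominated
    return points[keep]

rng = np.random.default_rng(1)
# Hypothetical (fuel burn, noise) samples for two competing engine concepts.
geared = rng.uniform([0.8, 60.0], [1.2, 90.0], size=(200, 2))
conventional = rng.uniform([0.9, 55.0], [1.3, 85.0], size=(200, 2))

front_a, front_b = pareto_front(geared), pareto_front(conventional)
print(len(front_a), "and", len(front_b), "non-dominated designs per concept")
# Adaptive sampling would then concentrate new evaluations where the two fronts
# intersect, i.e., where concept preference switches.
```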
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A structural synthesis methodology for the minimum mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.
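A hedged toy comparison of why the reciprocal section property (RSP) space can be attractive: for a statically determinate bar, stress is exactly linear in 1/A, so a first-order approximation in reciprocal variables is exact while the direct linearization is not (an illustration only, not the COMPASS implementation):

```python
import numpy as np

# Toy illustration: axial stress sigma = P / A is exactly linear in the
# reciprocal variable y = 1/A, so a first-order approximation built in
# y-space is exact, while the same expansion in A-space degrades away from
# the expansion point A0. Load and areas are hypothetical.
P, A0 = 10_000.0, 2.0
A = np.linspace(1.0, 4.0, 7)

exact = P / A
approx_direct = P / A0 - (P / A0**2) * (A - A0)            # first-order Taylor in A
approx_reciprocal = P / A0 + P * (1.0 / A - 1.0 / A0)      # first-order Taylor in y = 1/A (exact)

for a, e, d, r in zip(A, exact, approx_direct, approx_reciprocal):
    print(f"A={a:.1f}  exact={e:7.1f}  linear-in-A={d:8.1f}  linear-in-1/A={r:7.1f}")
```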
Product pricing in the Solar Array Manufacturing Industry - An executive summary of SAMICS
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1978-01-01
Capabilities, methodology, and a description of input data to the Solar Array Manufacturing Industry Costing Standards (SAMICS) are presented. SAMICS was developed to provide a standardized procedure and data base for comparing manufacturing processes of Low-cost Solar Array (LSA) subcontractors, guide the setting of research priorities, and assess the progress of LSA toward its hundred-fold cost reduction goal. SAMICS can be used to estimate manufacturing costs and product prices and determine the impact of inflation, taxes, and interest rates, but it is limited by ignoring the effects of market supply and demand and by the assumption that all factories operate in a production-line mode. The SAMICS methodology defines the industry structure, hypothetical supplier companies, and manufacturing processes and maintains a body of standardized data which is used to compute the final product price. The input data includes the product description, the process characteristics, the equipment cost factors, and production data for the preparation of detailed cost estimates. Activities validating that SAMICS produced realistic price estimates and cost breakdowns are described.
Idiographic versus Nomothetic Approaches to Research in Organizations.
1981-07-01
An alternative methodologic assumption based on intensive examination of one or a few cases under the theoretic assumption of dynamic interactionism is, with... phenomenological studies the researcher may not enter the actual setting but instead examines symbolic meanings as they constitute themselves in...
ERIC Educational Resources Information Center
Hasrati, Mostafa
2013-01-01
This article reports the results of a mixed methodology analysis of the assumptions of academic staff and Masters students in an Iranian university regarding various aspects of the assessment of the Masters degree thesis, including the main objective for writing the thesis, the role of the students, supervisors and advisors in writing the…
A Metric to Evaluate Mobile Satellite Systems
NASA Technical Reports Server (NTRS)
Young, Elizabeth L.
1997-01-01
The concept of a "cost per billable minute" methodology to analyze mobile satellite systems is reviewed. Certain assumptions, notably those about the marketplace and regulatory policies, may need to be revisited. Fading and power control assumptions need to be tested. Overall, the metric would seem to have value in the design phase of a system and for comparisons between and among alternative systems.
Direct numerical simulation of leaky dielectrics with application to electrohydrodynamic atomization
NASA Astrophysics Data System (ADS)
Owkes, Mark; Desjardins, Olivier
2013-11-01
Electrohydrodynamics (EHD) have the potential to greatly enhance liquid break-up, as demonstrated in numerical simulations by Van Poppel et al. (JCP (229) 2010). In liquid-gas EHD flows, the ratio of charge mobility to charge convection timescales can be used to determine whether the charge can be assumed to exist in the bulk of the liquid or at the surface only. However, for EHD-aided fuel injection applications, these timescales are of similar magnitude and charge mobility within the fluid might need to be accounted for explicitly. In this work, a computational approach for simulating two-phase EHD flows including the charge transport equation is presented. Under certain assumptions compatible with a leaky dielectric model, charge transport simplifies to a scalar transport equation that is only defined in the liquid phase, where electric charges are present. To ensure consistency with interfacial transport, the charge equation is solved using a semi-Lagrangian geometric transport approach, similar to the method proposed by Le Chenadec and Pitsch (JCP (233) 2013). This methodology is then applied to EHD atomization of a liquid kerosene jet, and compared to results produced under the assumption of a bulk volumetric charge.
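Under the leaky-dielectric assumptions described above, charge transport reduces to a scalar conservation law defined only in the liquid phase; a hedged sketch of the general form (notation assumed, not taken from the paper):

```latex
\[
\frac{\partial q}{\partial t} + \nabla \cdot \left( q\,\mathbf{u} \right)
  + \nabla \cdot \left( \mu_q\, q\, \mathbf{E} \right) = 0
  \quad \text{in the liquid phase},
\qquad \mathbf{E} = -\nabla \phi, \qquad
\nabla \cdot \left( \varepsilon \nabla \phi \right) = -q,
\]
where $q$ is the volumetric charge density, $\mathbf{u}$ the fluid velocity,
$\mu_q$ the charge mobility, and $\varepsilon$ the permittivity.
```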
Reflecting on the challenges of choosing and using a grounded theory approach.
Markey, Kathleen; Tilki, Mary; Taylor, Georgina
2014-11-01
To explore three different approaches to grounded theory and consider some of the possible philosophical assumptions underpinning them. Grounded theory is a comprehensive yet complex methodology that offers a procedural structure that guides the researcher. However, divergent approaches to grounded theory present dilemmas for novice researchers seeking to choose a suitable research method. This is a methodology paper. This is a reflexive paper that explores some of the challenges experienced by a PhD student when choosing and operationalising a grounded theory approach. Before embarking on a study, novice grounded theory researchers should examine their research beliefs to assist them in selecting the most suitable approach. This requires an insight into the approaches' philosophical assumptions, such as those pertaining to ontology and epistemology. Researchers need to be clear about the philosophical assumptions underpinning their studies and the effects that different approaches will have on the research results. This paper presents a personal account of the journey of a novice grounded theory researcher who chose a grounded theory approach and worked within its theoretical parameters. Novice grounded theory researchers need to understand the different philosophical assumptions that influence the various grounded theory approaches, before choosing one particular approach.
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, return period of slope instability is assumed as a quantitative metric to map landslide triggering hazard on a catchment. The most commonly applied approach to estimate such return period consists in coupling a physically-based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant intensity-hyetographs. In our work we propose the use of a Monte Carlo simulation approach in order to investigate the effects of the two above mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. By this methodology a long series of synthetic rainfall data can be generated and given as input to a landslide triggering physically based model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation and the TRIGRS v.2 unsaturated model for the computation of transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links initial conditions at a given event to the final response at the preceding event, thus taking into account variable inter-arrival time between storms. One-thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations are performed in order to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both variability of initial conditions and of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may lead in practice to an overestimation of return period up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
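The return period is obtained as the mean inter-arrival time of factor-of-safety exceedances over the long synthetic series. A heavily simplified sketch of that bookkeeping (toy rainfall generator and toy slope response, not the Neyman-Scott/TRIGRS chain used in the study; all coefficients are hypothetical):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(42)
years = 1000
hours = years * 365 * 24

# Toy synthetic hourly rainfall (mm/h): most hours dry, wet hours exponentially distributed.
rain = np.where(rng.random(hours) < 0.05, rng.exponential(2.0, hours), 0.0)

# Toy hydrological state: exponentially weighted rainfall memory, w[t] = 0.999*w[t-1] + rain[t].
wetness = lfilter([1.0], [1.0, -0.999], rain)

# Toy factor of safety decreasing with wetness.
fs = 1.3 - 0.002 * wetness

# Distinct triggering episodes = downward crossings of FS = 1.
triggers = int(np.sum((fs[1:] < 1.0) & (fs[:-1] >= 1.0)))
print(f"{triggers} triggering episodes in {years} years "
      f"-> return period ~ {years / max(triggers, 1):.0f} years")
```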
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.
1989-11-01
The operation of a nuclear power plant must be regularly supported by various reactor dynamics and thermal-hydraulic analyses, which may include final safety analysis report (FSAR) design-basis calculations, and conservative and best-estimate analyses. The development and improvement of computer codes and analysis methodologies provide many advantages, including the ability to evaluate the effect of modeling simplifications and assumptions made in previous reactor kinetics and thermal-hydraulic calculations. This paper describes the results of using the RETRAN, MCPWR, and STAR codes in a tandem, predictive-corrective manner for three pressurized water reactor (PWR) transients: (a) loss of feedwater (LOF) anticipated transient without scram (ATWS), (b) station blackout ATWS, and (c) loss of total reactor coolant system (RCS) flow with a scram.
Model documentation report: Transportation sector model of the National Energy Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-03-01
This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
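A hedged sketch of the kind of adjustment involved (not necessarily the paper's exact estimator): under a compound Poisson model the variance of the total case count is driven by the squared cases per incident, so multiple-fatality incidents widen the interval relative to the simple Poisson interval:

```python
import numpy as np

# Hypothetical incident-level data: number of deaths in each incident.
cases_per_incident = np.array([1] * 180 + [2] * 12 + [3] * 4 + [5] * 1)
population = 2_500_000
z = 1.96

deaths = cases_per_incident.sum()
rate = deaths / population * 100_000                         # per 100,000

se_poisson = np.sqrt(deaths) / population * 100_000          # simple Poisson variance ~ total count
se_compound = np.sqrt((cases_per_incident**2).sum()) / population * 100_000  # compound Poisson

print(f"rate = {rate:.2f} per 100,000")
print(f"95% CI (simple Poisson):   {rate - z*se_poisson:.2f} to {rate + z*se_poisson:.2f}")
print(f"95% CI (compound Poisson): {rate - z*se_compound:.2f} to {rate + z*se_compound:.2f}")
```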
Model documentation report: Residential sector demand module of the national energy modeling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code. This reference document provides a detailed description for energy analysts, other users, and the public. The NEMS Residential Sector Demand Module is currently used for mid-term forecasting purposes and energy policy analysis over the forecast horizon of 1993 through 2020. The model generates forecasts of energy demand for the residential sector by service, fuel, and Census Division. Policy impacts resulting from new technologies, market incentives, and regulatory changes can be estimated using the module. 26 refs., 6 figs., 5 tabs.
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Designing today's modern aircraft in the transonic speed regime is a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error, and the time required for time-domain unsteady CFD computations considerably slows down the whole design process. These analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced-order modeling method and unsteady aerodynamic approximation. The method requires the unsteady transonic aerodynamics to be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem. Transonic flutter can then be found by classic methods, such as rational function approximation, p-k, p, and root-locus. Such a methodology could be incorporated into the MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2, which was actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full-order model and the approximate reduced-order model are analyzed and compared.
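Guyan reduction, mentioned as one of the order-reduction techniques, condenses the stiffness and mass matrices onto a set of retained master degrees of freedom; a minimal hedged sketch on a hypothetical 4-DOF spring-mass chain (not the paper's model):

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Static (Guyan) condensation of K and M onto the retained 'master' DOFs."""
    n = K.shape[0]
    slave = np.setdiff1d(np.arange(n), master)
    Kss_inv = np.linalg.inv(K[np.ix_(slave, slave)])
    # Transformation x = T x_m with the slave DOFs statically condensed out.
    T = np.zeros((n, len(master)))
    T[master, np.arange(len(master))] = 1.0
    T[np.ix_(slave, np.arange(len(master)))] = -Kss_inv @ K[np.ix_(slave, master)]
    return T.T @ K @ T, T.T @ M @ T

# Hypothetical 4-DOF spring-mass chain; retain DOFs 0 and 3 as masters.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
M = np.eye(4)
Kr, Mr = guyan_reduce(K, M, master=np.array([0, 3]))
print(Kr)
```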
Marzocchini, Manrico; Tatàno, Fabio; Moretti, Michela Simona; Antinori, Caterina; Orilisi, Stefano
2018-06-05
A possible approach for determining soil and groundwater quality criteria for contaminated sites is the comparative risk assessment. Originating from but not limited to Italian interest in a decentralised (regional) implementation of comparative risk assessment, this paper first addresses the proposal of an original methodology called CORIAN REG-M , which was created with initial attention to the context of potentially contaminated sites in the Marche Region (Central Italy). To deepen the technical-scientific knowledge and applicability of the comparative risk assessment, the following characteristics of the CORIAN REG-M methodology appear to be relevant: the simplified but logical assumption of three categories of factors (source and transfer/transport of potential contamination, and impacted receptors) within each exposure pathway; the adaptation to quality and quantity of data that are available or derivable at the given scale of concern; the attention to a reliable but unsophisticated modelling; the achievement of a conceptual linkage to the absolute risk assessment approach; and the potential for easy updating and/or refining of the methodology. Further, the application of the CORIAN REG-M methodology to some case-study sites located in the Marche Region indicated the following: a positive correlation can be expected between air and direct contact pathway scores, as well as between individual pathway scores and the overall site scores based on a root-mean-square algorithm; the exposure pathway, which presents the highest variability of scores, tends to be dominant at sites with the highest computed overall site scores; and the adoption of a root-mean-square algorithm can be expected to emphasise the overall site scoring.
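The root-mean-square aggregation of exposure-pathway scores mentioned above can be illustrated with a hedged sketch (pathway names and values are hypothetical, not results from the case-study sites):

```python
import numpy as np

# Root-mean-square aggregation of pathway scores into an overall site score.
pathway_scores = {"groundwater": 62.0, "air": 35.0, "direct_contact": 48.0}  # hypothetical
s = np.array(list(pathway_scores.values()))
overall = np.sqrt(np.mean(s**2))
print(f"overall site score: {overall:.1f}")
```

Because squaring weights large values more heavily, the RMS aggregation tends to let the highest-scoring pathway dominate the overall site score, consistent with the correlation behaviour reported in the abstract.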
Computation in generalised probabilistic theories
NASA Astrophysics Data System (ADS)
Lee, Ciarán M.; Barrett, Jonathan
2015-08-01
From the general difficulty of simulating quantum systems using classical systems, and in particular the existence of an efficient quantum algorithm for factoring, it is likely that quantum computation is intrinsically more powerful than classical computation. At present, the best upper bound known for the power of quantum computation is that BQP ⊆ AWPP, where AWPP is a classical complexity class (known to be included in PP, hence PSPACE). This work investigates limits on computational power that are imposed by simple physical, or information theoretic, principles. To this end, we define a circuit-based model of computation in a class of operationally-defined theories more general than quantum theory, and ask: what is the minimal set of physical assumptions under which the above inclusions still hold? We show that given only an assumption of tomographic locality (roughly, that multipartite states and transformations can be characterized by local measurements), efficient computations are contained in AWPP. This inclusion still holds even without assuming a basic notion of causality (where the notion is, roughly, that probabilities for outcomes cannot depend on future measurement choices). Following Aaronson, we extend the computational model by allowing post-selection on measurement outcomes. Aaronson showed that the corresponding quantum complexity class, PostBQP, is equal to PP. Given only the assumption of tomographic locality, the inclusion in PP still holds for post-selected computation in general theories. Hence in a world with post-selection, quantum theory is optimal for computation in the space of all operational theories. We then consider whether one can obtain relativized complexity results for general theories. It is not obvious how to define a sensible notion of a computational oracle in the general framework that reduces to the standard notion in the quantum case. Nevertheless, it is possible to define computation relative to a 'classical oracle'. Then, we show there exists a classical oracle relative to which efficient computation in any theory satisfying the causality assumption does not include NP.
Speelman, Craig P.; McGann, Marek
2013-01-01
In this paper we voice concerns about the uncritical manner in which the mean is often used as a summary statistic in psychological research. We identify a number of implicit assumptions underlying the use of the mean and argue that the fragility of these assumptions should be more carefully considered. We examine some of the ways in which the potential violation of these assumptions can lead us into significant theoretical and methodological error. Illustrations of alternative models of research already extant within Psychology are used to explore methods of research less mean-dependent and suggest that a critical assessment of the assumptions underlying its use in research play a more explicit role in the process of study design and review. PMID:23888147
NASA Astrophysics Data System (ADS)
Zolfaghari, Mohammad R.
2009-07-01
Recent achievements in computer and information technology have provided the necessary tools to extend the application of probabilistic seismic hazard mapping from its traditional engineering use to many other applications. Examples for such applications are risk mitigation, disaster management, post disaster recovery planning and catastrophe loss estimation and risk management. Due to the lack of proper knowledge with regard to factors controlling seismic hazards, there are always uncertainties associated with all steps involved in developing and using seismic hazard models. While some of these uncertainties can be controlled by more accurate and reliable input data, the majority of the data and assumptions used in seismic hazard studies remain with high uncertainties that contribute to the uncertainty of the final results. In this paper a new methodology for the assessment of seismic hazard is described. The proposed approach provides practical facility for better capture of spatial variations of seismological and tectonic characteristics, which allows better treatment of their uncertainties. In the proposed approach, GIS raster-based data models are used in order to model geographical features in a cell-based system. The cell-based source model proposed in this paper provides a framework for implementing many geographically referenced seismotectonic factors into seismic hazard modelling. Examples for such components are seismic source boundaries, rupture geometry, seismic activity rate, focal depth and the choice of attenuation functions. The proposed methodology provides improvements in several aspects of the standard analytical tools currently being used for assessment and mapping of regional seismic hazard. The proposed methodology makes the best use of the recent advancements in computer technology in both software and hardware. The proposed approach is well structured to be implemented using conventional GIS tools.
Dimensions of the Feminist Research Methodology Debate: Impetus, Definitions, Dilemmas & Stances.
ERIC Educational Resources Information Center
Reinharz, Shulamit
For various well-documented reasons, the feminist social movement has been critical of academia as a worksetting and of the social sciences as a set of disciplines. For these reasons, feminists claim that the assumptions underlying several research designs and procedures are sexist. They have developed a feminist methodology to examine these…
ERIC Educational Resources Information Center
Bulfin, Scott; Henderson, Michael; Johnson, Nicola F.; Selwyn, Neil
2014-01-01
The academic study of educational technology is often characterised by critics as methodologically limited. In order to test this assumption, the present paper reports on data collected from a survey of 462 "research active" academic researchers working in the broad areas of educational technology and educational media. The paper…
ERIC Educational Resources Information Center
Jakeman, Rick C.; Henderson, Markesha M.; Howard, Lionel C.
2017-01-01
This article presents a critical reflection on how we, instructors of a graduate-level course in higher education administration, sought to integrate theoretical and subject-matter content and research methodology. Our reflection, guided by autoethnography and teacher reflection, challenged both our assumptions about curriculum design and our…
78 FR 47677 - DOE Activities and Methodology for Assessing Compliance With Building Energy Codes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
... construction. Post-construction evaluations were implemented in one study in an effort to reduce these costs... these pilot studies have led to a number of recommendations and potential changes to the DOE methodology... fundamental assumptions and approaches to measuring compliance with building energy codes. This notice...
Wessells, K. Ryan; Singh, Gitanjali M.; Brown, Kenneth H.
2012-01-01
Background The prevalence of inadequate zinc intake in a population can be estimated by comparing the zinc content of the food supply with the population’s theoretical requirement for zinc. However, assumptions regarding the nutrient composition of foods, zinc requirements, and zinc absorption may affect prevalence estimates. These analyses were conducted to: (1) evaluate the effect of varying methodological assumptions on country-specific estimates of the prevalence of dietary zinc inadequacy and (2) generate a model considered to provide the best estimates. Methodology and Principal Findings National food balance data were obtained from the Food and Agriculture Organization of the United Nations. Zinc and phytate contents of these foods were estimated from three nutrient composition databases. Zinc absorption was predicted using a mathematical model (Miller equation). Theoretical mean daily per capita physiological and dietary requirements for zinc were calculated using recommendations from the Food and Nutrition Board of the Institute of Medicine and the International Zinc Nutrition Consultative Group. The estimated global prevalence of inadequate zinc intake varied between 12–66%, depending on which methodological assumptions were applied. However, country-specific rank order of the estimated prevalence of inadequate intake was conserved across all models (r = 0.57–0.99, P<0.01). A “best-estimate” model, comprised of zinc and phytate data from a composite nutrient database and IZiNCG physiological requirements for absorbed zinc, estimated the global prevalence of inadequate zinc intake to be 17.3%. Conclusions and Significance Given the multiple sources of uncertainty in this method, caution must be taken in the interpretation of the estimated prevalence figures. However, the results of all models indicate that inadequate zinc intake may be fairly common globally. Inferences regarding the relative likelihood of zinc deficiency as a public health problem in different countries can be drawn based on the country-specific rank order of estimated prevalence of inadequate zinc intake. PMID:23209781
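A generic cut-point sketch of the kind of prevalence calculation described (not the paper's model and not the Miller equation; all numbers are hypothetical):

```python
from statistics import NormalDist

# Generic cut-point sketch: the prevalence of inadequate intake is approximated
# as the share of an assumed normal distribution of absorbable zinc falling
# below the mean physiological requirement.
mean_absorbable_zn = 2.8   # mg/day per capita, hypothetical
cv = 0.25                  # assumed inter-individual coefficient of variation
mean_requirement = 2.3     # mg/day, hypothetical

prevalence = NormalDist(mean_absorbable_zn, cv * mean_absorbable_zn).cdf(mean_requirement)
print(f"estimated prevalence of inadequate zinc intake: {prevalence:.1%}")
```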
Indirect Comparisons: A Review of Reporting and Methodological Quality
Donegan, Sarah; Williamson, Paula; Gamble, Carrol; Tudur-Smith, Catrin
2010-01-01
Background The indirect comparison of two interventions can be valuable in many situations. However, the quality of an indirect comparison will depend on several factors including the chosen methodology and validity of underlying assumptions. Published indirect comparisons are increasingly more common in the medical literature, but as yet, there are no published recommendations of how they should be reported. Our aim is to systematically review the quality of published indirect comparisons to add to existing empirical data suggesting that improvements can be made when reporting and applying indirect comparisons. Methodology/Findings Reviews applying statistical methods to indirectly compare the clinical effectiveness of two interventions using randomised controlled trials were eligible. We searched (1966–2008) Database of Abstracts and Reviews of Effects, The Cochrane library, and Medline. Full review publications were assessed for eligibility. Specific criteria to assess quality were developed and applied. Forty-three reviews were included. Adequate methodology was used to calculate the indirect comparison in 41 reviews. Nineteen reviews assessed the similarity assumption using sensitivity analysis, subgroup analysis, or meta-regression. Eleven reviews compared trial-level characteristics. Twenty-four reviews assessed statistical homogeneity. Twelve reviews investigated causes of heterogeneity. Seventeen reviews included direct and indirect evidence for the same comparison; six reviews assessed consistency. One review combined both evidence types. Twenty-five reviews urged caution in interpretation of results, and 24 reviews indicated when results were from indirect evidence by stating this term with the result. Conclusions This review shows that the underlying assumptions are not routinely explored or reported when undertaking indirect comparisons. We recommend, therefore, that the quality of indirect comparisons should be improved, in particular, by assessing assumptions and reporting the assessment methods applied. We propose that the quality criteria applied in this article may provide a basis to help review authors carry out indirect comparisons and to aid appropriate interpretation. PMID:21085712
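A standard adjusted indirect comparison (Bucher method) through a common comparator B can be sketched as follows; all inputs below are hypothetical trial summaries, not data from the review:

```python
import numpy as np
from scipy.stats import norm

# Adjusted indirect comparison of A vs C through common comparator B,
# on the log odds-ratio scale.
log_or_ab, se_ab = np.log(0.75), 0.12   # A vs B from one set of trials (hypothetical)
log_or_cb, se_cb = np.log(0.90), 0.15   # C vs B from another set of trials (hypothetical)

log_or_ac = log_or_ab - log_or_cb
se_ac = np.sqrt(se_ab**2 + se_cb**2)
ci = np.exp(log_or_ac + np.array([-1.96, 1.96]) * se_ac)
p = 2 * norm.sf(abs(log_or_ac) / se_ac)

print(f"indirect OR, A vs C: {np.exp(log_or_ac):.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f}), P = {p:.2f}")
```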
NASA Astrophysics Data System (ADS)
Bièvre, Grégory; Oxarango, Laurent; Günther, Thomas; Goutaland, David; Massardi, Michael
2018-06-01
In the framework of earth-filled dyke characterization and monitoring, Electrical Resistivity Tomography (ERT) is a commonly used method. 2D sections are generally acquired along the dyke crest, thus raising the question of 3D artefacts in the inversion process. This paper proposes a methodology based on 3D direct numerical simulations of the ERT acquisition using a realistic topography of the study site. It allows computing ad hoc geometric factors which can be used for the inversion of experimental ERT data. The method is first evaluated on a set of synthetic dyke configurations. Then, it is applied to experimental static and time-lapse ERT data sets acquired before and after repair works carried out on a leaking zone of an earth-filled canal dyke in the centre of France. The computed geometric factors are lower than the analytic geometric factors in a range between -8% and -18% for measurements conducted on the crest of the dyke. They exhibit a maximum under-estimation for intermediate electrode spacings in the Wenner and Schlumberger configurations. In the same way, for measurements conducted on the mid-slope of the dyke, the computed geometric factors are higher for short electrode spacings (+18%) and lower for large electrode spacings (-8%). The 2D inversion of the synthetic data with these computed geometric factors provides a significant improvement in the agreement with the original resistivity. Two experimental profiles conducted on the same portion of the dyke but at different elevations also reveal a better agreement using this methodology. The comparison with apparent resistivity from EM31 profiling along the stretch of the dyke also supports this evidence. Likewise, some spurious effects that affected the time-lapse data were removed, improving the overall readability of the time-lapse resistivity sections. The benefit for the structural interpretation of ERT images remains moderate but allows a better delineation of the repair-work location. Therefore, and even if the 2D assumption cannot be considered valid in such a context, the proposed methodology could be applied easily to any dyke or strongly 3D-shaped structure using a realistic topographic model. It appears suitable for practical application.
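A hedged sketch of the two geometric factors involved: the analytic flat half-space factor and a numerically derived factor from a homogeneous-model forward run over the real topography (the simulated potential difference below is a placeholder, not output of the paper's model):

```python
import numpy as np

def analytic_geometric_factor(A, B, M, N):
    """Flat half-space geometric factor for surface electrodes at 3-D positions."""
    d = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return 2 * np.pi / (1/d(A, M) - 1/d(B, M) - 1/d(A, N) + 1/d(B, N))

def numerical_geometric_factor(rho_ref, current, dV_simulated):
    """Geometric factor from a homogeneous forward model over the real topography:
    k = rho_ref * I / dV, so that the inverted apparent resistivity is unbiased."""
    return rho_ref * current / dV_simulated

# Wenner array, 5 m spacing, colinear electrodes (hypothetical coordinates).
A, M, N, B = [0, 0, 0], [5, 0, 0], [10, 0, 0], [15, 0, 0]
k_flat = analytic_geometric_factor(A, B, M, N)
k_topo = numerical_geometric_factor(rho_ref=100.0, current=1.0, dV_simulated=3.6)  # placeholder dV
print(f"analytic k = {k_flat:.1f} m, topography-corrected k = {k_topo:.1f} m")
```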
The software analysis project for the Office of Human Resources
NASA Technical Reports Server (NTRS)
Tureman, Robert L., Jr.
1994-01-01
There were two major sections of the project for the Office of Human Resources (OHR). The first section was to conduct a planning study to analyze software use with the goal of recommending software purchases and determining whether the need exists for a file server. The second section was analysis and distribution planning for a retirement planning computer program entitled VISION, provided by NASA Headquarters. The software planning study was developed to help OHR analyze the current administrative desktop computing environment and make decisions regarding software acquisition and implementation. Three major areas were addressed by the study: the current environment, new software requirements, and strategies regarding the implementation of a server in the Office. To gather data on the current environment, employees were surveyed and an inventory of computers was produced. The surveys were compiled and analyzed by the ASEE fellow with interpretation help from OHR staff. New software requirements represented a compilation and analysis of the surveyed requests of OHR personnel. Finally, the information on the use of a server represents research done by the ASEE fellow and analysis of survey data to determine software requirements for a server. This included selection of a methodology to estimate the number of copies of each software program required given current use and estimated growth. The report presents the results of the computing survey, a description of the current computing environment, recommendations for changes in the computing environment, current software needs, management advantages of using a server, and management considerations in the implementation of a server. In addition, detailed specifications were presented for the hardware and software recommendations to offer a complete picture to OHR management. The retirement planning computer program available to NASA employees will aid in long-range retirement planning. The intended audience is the NASA civil service employee with several years until retirement. The employee enters current salary and savings information as well as goals concerning salary at retirement, assumptions on inflation, and the return on investments. The program produces a picture of the employee's retirement income from all sources based on the assumptions entered. A session showing features of the program was conducted for key personnel at the Center. After analysis, it was decided to offer the program through the Learning Center starting in August 1994.
Vicente J. Monleon
2009-01-01
Currently, Forest Inventory and Analysis estimation procedures use Smalian's formula to compute coarse woody debris (CWD) volume and assume that logs lie horizontally on the ground. In this paper, the impact of those assumptions on volume and biomass estimates is assessed using 7 years of Oregon's Phase 2 data. Estimates of log volume computed using Smalian...
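For reference, a hedged sketch of Smalian's formula as commonly stated (end cross-sectional areas averaged over the log length; the horizontal-log assumption discussed above would additionally treat the measured length as a horizontal distance):

```python
import math

def smalian_volume(d_large_cm, d_small_cm, length_m):
    """Smalian's formula: log volume as the average of the two end
    cross-sectional areas times length (diameters in cm, length in m)."""
    area = lambda d_cm: math.pi * (d_cm / 100.0) ** 2 / 4.0   # m^2
    return (area(d_large_cm) + area(d_small_cm)) / 2.0 * length_m  # m^3

print(f"{smalian_volume(40.0, 28.0, 6.0):.3f} m^3")   # hypothetical log
```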
Daniel R. Williams; Michael E. Patterson
2007-01-01
Place ideas in natural resource management have grown in recent years. But with that growth have come greater complexity and diversity in thinking and mounting confusion about the ontological and epistemological assumptions underlying any specific investigation. Beckley et al. (2007) contribute to place research by proposing a new methodological approach to analyzing...
ERIC Educational Resources Information Center
Kelly, Kathleen; Lee, Seung Hwan; Bowen Ray, Heather; Kandaurova, Maria
2018-01-01
Barriers to cross-cultural instruction challenge even experienced educators and their students. To increase cross-cultural competence and bridge learning gaps, professors in two countries adapted the Photovoice methodology to develop shared visual vocabularies with students and unearth hidden assumptions. Results from an anonymous evaluation…
(heating, cooling, water heating, major appliances, small appliances, and lighting) are included. HES is not a "black box"--we extensively document all methodologies and assumptions. Users begin their exploration
Management of health care expenditure by soft computing methodology
NASA Astrophysics Data System (ADS)
Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad
2017-01-01
In this study, health care expenditure was managed by soft computing methodology. The main goal was to predict gross domestic product (GDP) from several factors of health care expenditure. Soft computing methodologies were applied since GDP prediction is a very complex task. The performances of the proposed predictors were confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities that help avoid local minimum issues.
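A hedged sketch of the SVR setup described (synthetic features and targets stand in for the study's health-expenditure factors and GDP data; hyperparameters are illustrative):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in data: three health-expenditure factors and a GDP-like target.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 200)

# RBF-kernel support vector regression with feature standardization.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:150], y[:150])
print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
```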
Uher, Jana
2015-12-01
As science seeks to make generalisations, a science of individual peculiarities encounters intricate challenges. This article explores these challenges by applying the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) and by exploring taxonomic "personality" research as an example. Analyses of researchers' interpretations of the taxonomic "personality" models, constructs and data that have been generated in the field reveal widespread erroneous assumptions about the abilities of previous methodologies to appropriately represent individual-specificity in the targeted phenomena. These assumptions, rooted in everyday thinking, fail to consider that individual-specificity and others' minds cannot be directly perceived, that abstract descriptions cannot serve as causal explanations, that between-individual structures cannot be isomorphic to within-individual structures, and that knowledge of compositional structures cannot explain the process structures of their functioning and development. These erroneous assumptions and serious methodological deficiencies in widely used standardised questionnaires have effectively prevented psychologists from establishing taxonomies that can comprehensively model individual-specificity in most of the kinds of phenomena explored as "personality", especially in experiencing and behaviour and in individuals' functioning and development. Contrary to previous assumptions, it is not universal models but rather different kinds of taxonomic models that are required for each of the different kinds of phenomena, variations and structures that are commonly conceived of as "personality". Consequently, to comprehensively explore individual-specificity, researchers have to apply a portfolio of complementary methodologies and develop different kinds of taxonomies, most of which have yet to be developed. Closing, the article derives some meta-desiderata for future research on individuals' "personality".
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
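A hedged sketch of the basic Green-Kubo estimate only (the paper's on-the-fly stationarity checks, error bounds, and replica averaging are omitted; the flux series is a synthetic AR(1) stand-in for simulation output and the prefactor is a placeholder):

```python
import numpy as np
from scipy.signal import lfilter

# Green-Kubo: a transport coefficient is proportional to the time integral of
# an equilibrium flux autocorrelation function (ACF). For thermal conductivity
# the prefactor would be V / (kB * T**2).
rng = np.random.default_rng(3)
dt, n, max_lag = 1e-3, 50_000, 500                   # time step (ps), samples, ACF cutoff
flux = lfilter([1.0], [1.0, -0.95], rng.normal(0.0, 1.0, n))   # synthetic correlated flux

acf = np.array([np.mean(flux[:n - k] * flux[k:]) for k in range(max_lag)])
gk_integral = np.trapz(acf, dx=dt)                   # truncated time integral of the ACF
prefactor = 1.0                                      # placeholder physical prefactor
print("Green-Kubo estimate:", round(prefactor * gk_integral, 4))
```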
Axisymmetric computational fluid dynamics analysis of Saturn V/S1-C/F1 nozzle and plume
NASA Technical Reports Server (NTRS)
Ruf, Joseph H.
1993-01-01
An axisymmetric single-engine Computational Fluid Dynamics calculation of the Saturn V/S1-C vehicle base region and F1 engine plume is described. There were two objectives of this work. The first was to calculate an axisymmetric approximation of the nozzle, plume and base region flow fields of the S1-C/F1, relate/scale this to flight data, and apply this scaling factor to NLS/STME axisymmetric calculations from a parallel effort. The second was to assess the differences in F1 and STME plume shear layer development and concentration of combustible gases. This second piece of information was to serve as input/supporting data for assumptions made in the NLS2 base temperature scaling methodology from which the vehicle base thermal environments were being generated. The F1 calculations started at the main combustion chamber faceplate and incorporated the turbine exhaust dump/nozzle film coolant. The plume and base region calculations were made for 10,000-feet and 57,000-feet altitudes at vehicle flight velocity and in stagnant freestream. FDNS was implemented with a 14-species, 28-reaction finite-rate chemistry model plus a soot-burning model for the RP-1/LOX chemistry. Nozzle and plume flow fields are shown, and the plume shear layer constituents are compared to those of a STME plume. Conclusions are made about the validity and status of the analysis and of the NLS2 vehicle base thermal environment definition methodology.
Prediction of XV-15 tilt rotor discrete frequency aeroacoustic noise with WOPWOP
NASA Technical Reports Server (NTRS)
Coffen, Charles D.; George, Albert R.
1990-01-01
The results, methodology, and conclusions of noise prediction calculations carried out to study several possible discrete frequency harmonic noise mechanisms of the XV-15 Tilt Rotor Aircraft in hover and helicopter-mode forward flight are presented. The mechanisms studied were thickness and loading noise. In particular, the loading noise caused by flow separation and the fountain/ground plane effect was predicted with calculations made using WOPWOP, a noise prediction program developed by NASA Langley. The methodology was to model the geometry and aerodynamics of the XV-15 rotor blades in hover and steady level flight and then create corresponding FORTRAN subroutines which were used as input for WOPWOP. The models are described, the simplifying assumptions made in creating them are evaluated, and the results of the computations are presented. The computations lead to the following conclusions: The fountain/ground plane effect is an important source of aerodynamic noise for the XV-15 in hover. Unsteady flow separation from the airfoil passing through the fountain at high angles of attack significantly affects the predicted sound spectra and may be an important noise mechanism for the XV-15 in hover mode. The various models developed did not predict the sound spectra in helicopter forward flight; the experimental spectra indicate the presence of blade vortex interactions, which were not modeled in these calculations. Further study and development of more accurate aerodynamic models, including unsteady stall in hover and blade vortex interactions in forward flight, are needed.
Bayesian analysis of input uncertainty in hydrological modeling: 2. Application
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.
2006-03-01
The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) French Broad River and (2) Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The Alternating Expectation Conditional Maximization (AECM) algorithm, a faster variant of EM, can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to implement AECM efficiently for finite mixture models. Results of our simulation experiments show improved performance in both number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
NASA Astrophysics Data System (ADS)
Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza
2017-03-01
Linear-array-based photoacoustic computed tomography is a popular methodology for deep and high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration due to acoustic attenuation and the assumption of a constant speed of sound (SoS) can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by eliminating the side-lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm which uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over existing reconstruction methods.
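For context, the coherence factor mentioned above is conventionally computed, per image pixel, as the ratio of the coherent to the incoherent sum of the delayed channel signals. The sketch below (hypothetical variable names, and without the HOG-based array-level weighting that the abstract actually proposes) shows how such a factor down-weights a plain delay-and-sum output:

```python
import numpy as np

def cf_weighted_das(delayed_signals):
    """Coherence-factor-weighted delay-and-sum value for one pixel.

    delayed_signals: 1-D array of the N channel samples already delayed
    to the pixel of interest (a hypothetical preprocessing step).
    """
    n = len(delayed_signals)
    coherent = np.sum(delayed_signals)               # coherent sum
    incoherent = np.sum(np.abs(delayed_signals)**2)  # incoherent energy
    if incoherent == 0.0:
        return 0.0
    cf = np.abs(coherent)**2 / (n * incoherent)      # coherence factor in [0, 1]
    return cf * coherent / n                         # CF-weighted DAS value

# Example: well-aligned channel data keep their amplitude, incoherent data are suppressed.
aligned = np.ones(64)
noisy = np.random.default_rng(1).normal(size=64)
print(cf_weighted_das(aligned), cf_weighted_das(noisy))
```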
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
A novel methodology for interpreting air quality measurements from urban streets using CFD modelling
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming
2011-09-01
In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, by applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentrations for all wind scenarios were compared with hourly observed NOx data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary, the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and direction data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentrations at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements and help assess the representativeness of monitoring locations for population exposure studies.
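Step (iii) of the summarised methodology reduces to a frequency-weighted average of the pre-computed, normalised concentration fields. A minimal sketch of that combination step, with array shapes and weights chosen purely for illustration:

```python
import numpy as np

def long_term_average(fields, frequencies):
    """Combine CFD concentration fields from discrete wind scenarios.

    fields:      array of shape (n_scenarios, ny, nx) of normalised concentrations
    frequencies: occurrence counts (or fractions) of each scenario in the period
    """
    fields = np.asarray(fields, dtype=float)
    w = np.asarray(frequencies, dtype=float)
    w = w / w.sum()                         # weights proportional to frequency
    return np.tensordot(w, fields, axes=1)  # weighted-average field

# Example: 3 wind scenarios on a tiny 2 x 2 grid, with monthly frequencies 10/12/8.
fields = np.array([[[1.0, 2.0], [0.5, 1.5]],
                   [[2.0, 1.0], [1.0, 0.5]],
                   [[0.8, 0.8], [0.8, 0.8]]])
print(long_term_average(fields, [10, 12, 8]))
```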
Coal Market Module - NEMS Documentation
2014-01-01
Documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System's (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 2014 (AEO2014). This report catalogues and describes the assumptions, methodology, estimation techniques, and source code of CMM's two submodules. These are the Coal Production Submodule (CPS) and the Coal Distribution Submodule (CDS).
ERIC Educational Resources Information Center
Taylor, Tony; Collins, Sue
2012-01-01
This article critiques popular assumptions that underlie the ongoing politicisation of school history curriculum as an agent of social identity and behaviour. It raises some key research questions which need further investigation and suggests a potential methodology for establishing evidence-based understanding of the relationship between history…
ERIC Educational Resources Information Center
Scotland, James
2012-01-01
This paper explores the philosophical underpinnings of three major educational research paradigms: scientific, interpretive, and critical. The aim was to outline and explore the interrelationships between each paradigm's ontology, epistemology, methodology and methods. This paper reveals and then discusses some of the underlying assumptions of…
Errors in reporting on dissolution research: methodological and statistical implications.
Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria
2017-02-01
In vitro dissolution testing provides useful information at clinical and preclinical stages of the drug development process. The study includes pharmaceutical papers on dissolution research published in Polish journals between 2010 and 2015. They were analyzed with regard to information provided by authors about chosen methods, performed validation, statistical reporting or assumptions used to properly compare release profiles considering the present guideline documents addressed to dissolution methodology and its validation. Of all the papers included in the study, 23.86% presented at least one set of validation parameters, 63.64% gave the results of the weight uniformity test, 55.68% content determination, 97.73% dissolution testing conditions, and 50% discussed a comparison of release profiles. The assumptions for methods used to compare dissolution profiles were discussed in 6.82% of papers. By means of example analyses, we demonstrate that the outcome can be influenced by the violation of several assumptions or selection of an improper method to compare dissolution profiles. A clearer description of the procedures would undoubtedly increase the quality of papers in this area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strons, Philip; Bailey, James L.; Davis, John
2016-03-01
In this work, we apply CFD to model airflow and particulate transport. The modeling results are then compared to field validation studies to both inform and validate the modeling assumptions. Based on the results of the field tests, modeling assumptions and boundary conditions are refined, and the process is repeated until the results are found to be reliable with a high level of confidence.
Determination of mean pressure from PIV in compressible flows using the Reynolds-averaging approach
NASA Astrophysics Data System (ADS)
van Gent, Paul L.; van Oudheusden, Bas W.; Schrijer, Ferry F. J.
2018-03-01
The feasibility of computing the flow pressure on the basis of PIV velocity data has been demonstrated abundantly for low-speed conditions. The added complications occurring for high-speed compressible flows have, however, so far proved to be largely inhibitive for the accurate experimental determination of instantaneous pressure. Obtaining mean pressure may remain a worthwhile and realistic goal to pursue. In a previous study, a Reynolds-averaging procedure was developed for this, under the moderate-Mach-number assumption that density fluctuations can be neglected. The present communication addresses the accuracy of this assumption, and the consistency of its implementation, by evaluating the relevance of the different contributions resulting from the Reynolds-averaging. The methodology involves a theoretical order-of-magnitude analysis, complemented with a quantitative assessment based on a simulated and a real PIV experiment. The assessments show that it is sufficient to account for spatial variations in the mean velocity and the Reynolds stresses, and that temporal and spatial density variations (fluctuations and gradients) are of secondary importance and of comparable order of magnitude. This result makes it possible to simplify the calculation of mean pressure from PIV velocity data and to validate the approximation of neglecting temporal and spatial density variations without having access to reference pressure data.
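For reference, under the moderate-Mach-number assumption examined above (negligible density fluctuations), the mean pressure gradient follows from the Reynolds-averaged momentum equation in approximately the form

\[
\nabla \bar{p} \;\approx\; -\,\bar{\rho}\left[\left(\bar{\mathbf{u}}\cdot\nabla\right)\bar{\mathbf{u}} + \nabla\cdot\overline{\mathbf{u}'\mathbf{u}'}\,\right],
\]

where \(\bar{\rho}\) and \(\bar{\mathbf{u}}\) are the mean density and velocity and \(\overline{\mathbf{u}'\mathbf{u}'}\) is the Reynolds-stress tensor obtained from the PIV ensemble; the mean pressure field is then recovered by spatial integration (for instance via a pressure Poisson equation). This is an indicative expression consistent with the assumptions stated in the abstract, not necessarily the authors' final retained set of terms.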
Normalized inverse characterization of sound absorbing rigid porous media.
Zieliński, Tomasz G
2015-06-01
This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to obtain a robust identification procedure which fits the model-predicted impedance curves to the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced; however, they are not additional parameters, and for different yet reasonable assumptions of their values the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set against the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and of a sample of polyurethane foam.
Anderson, Eric C; Ng, Thomas C
2016-02-01
We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.
75 FR 29333 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-25
..., including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility..., 2010. Terri T. Lee, Chief Operating Officer, Electricity Delivery and Energy Reliability. [FR Doc. 2010...
Conjugate Analysis of Two-Dimensional Ablation and Pyrolysis in Rocket Nozzles
NASA Astrophysics Data System (ADS)
Cross, Peter G.
The development of a methodology and computational framework for performing conjugate analyses of transient, two-dimensional ablation of pyrolyzing materials in rocket nozzle applications is presented. This new engineering methodology comprehensively incorporates fluid-thermal-chemical processes relevant to nozzles and other high temperature components, making it possible, for the first time, to rigorously capture the strong interactions and interdependencies that exist between the reacting flowfield and the ablating material. By basing thermal protection system engineering more firmly on first principles, improved analysis accuracy can be achieved. The computational framework developed in this work couples a multi-species, reacting flow solver to a two-dimensional material response solver. New capabilities are added to the flow solver in order to be able to model unique aspects of the flow through solid rocket nozzles. The material response solver is also enhanced with new features that enable full modeling of pyrolyzing, anisotropic materials with a true two-dimensional treatment of the porous flow of the pyrolysis gases. Verification and validation studies demonstrating correct implementation of these new models in the flow and material response solvers are also presented. Five different treatments of the surface energy balance at the ablating wall, with increasing levels of fidelity, are investigated. The Integrated Equilibrium Surface Chemistry (IESC) treatment computes the surface energy balance and recession rate directly from the diffusive fluxes at the ablating wall, without making transport coefficient or unity Lewis number assumptions, or requiring pre-computed surface thermochemistry tables. This method provides the highest level of fidelity, and can inherently account for the effects that recession, wall temperature, blowing, and the presence of ablation product species in the boundary layer have on the flowfield and ablation response. Multiple decoupled and conjugate ablation analysis studies for the HIPPO nozzle test case are presented. Results from decoupled simulations show sensitivity to the wall temperature profile used within the flow solver, indicating the need for conjugate analyses. Conjugate simulations show that the thermal response of the nozzle is relatively insensitive to the choice of the surface energy balance treatment. However, the surface energy balance treatment is found to strongly affect the surface recession predictions. Out of all the methods considered, the IESC treatment produces surface recession predictions with the best agreement to experimental data. These results show that the increased fidelity provided by the proposed conjugate ablation modeling methodology produces improved analysis accuracy, as desired.
Birth-death prior on phylogeny and speed dating
2008-01-01
Background In recent years there has been a trend of leaving the strict molecular clock in order to infer dating of speciations and other evolutionary events. Explicit modeling of substitution rates and divergence times makes formulation of informative prior distributions for branch lengths possible. Models with birth-death priors on tree branching and auto-correlated or iid substitution rates among lineages have been proposed, enabling simultaneous inference of substitution rates and divergence times. This problem has, however, mainly been analysed in the Markov chain Monte Carlo (MCMC) framework, an approach requiring computation times of hours or days when applied to large phylogenies. Results We demonstrate that a hill-climbing maximum a posteriori (MAP) adaptation of the MCMC scheme results in considerable gain in computational efficiency. We demonstrate also that a novel dynamic programming (DP) algorithm for branch length factorization, useful both in the hill-climbing and in the MCMC setting, further reduces computation time. For the problem of inferring rates and times parameters on a fixed tree, we perform simulations, comparisons between hill-climbing and MCMC on a plant rbcL gene dataset, and dating analysis on an animal mtDNA dataset, showing that our methodology enables efficient, highly accurate analysis of very large trees. Datasets requiring a computation time of several days with MCMC can with our MAP algorithm be accurately analysed in less than a minute. From the results of our example analyses, we conclude that our methodology generally avoids getting trapped early in local optima. For the cases where this nevertheless can be a problem, for instance when we in addition to the parameters also infer the tree topology, we show that the problem can be evaded by using a simulated-annealing like (SAL) method in which we favour tree swaps early in the inference while biasing our focus towards rate and time parameter changes later on. Conclusion Our contribution leaves the field open for fast and accurate dating analysis of nucleotide sequence data. Modeling branch substitutions rates and divergence times separately allows us to include birth-death priors on the times without the assumption of a molecular clock. The methodology is easily adapted to take data from fossil records into account and it can be used together with a broad range of rate and substitution models. PMID:18318893
Birth-death prior on phylogeny and speed dating.
Akerborg, Orjan; Sennblad, Bengt; Lagergren, Jens
2008-03-04
In recent years there has been a trend of leaving the strict molecular clock in order to infer dating of speciations and other evolutionary events. Explicit modeling of substitution rates and divergence times makes formulation of informative prior distributions for branch lengths possible. Models with birth-death priors on tree branching and auto-correlated or iid substitution rates among lineages have been proposed, enabling simultaneous inference of substitution rates and divergence times. This problem has, however, mainly been analysed in the Markov chain Monte Carlo (MCMC) framework, an approach requiring computation times of hours or days when applied to large phylogenies. We demonstrate that a hill-climbing maximum a posteriori (MAP) adaptation of the MCMC scheme results in considerable gain in computational efficiency. We demonstrate also that a novel dynamic programming (DP) algorithm for branch length factorization, useful both in the hill-climbing and in the MCMC setting, further reduces computation time. For the problem of inferring rates and times parameters on a fixed tree, we perform simulations, comparisons between hill-climbing and MCMC on a plant rbcL gene dataset, and dating analysis on an animal mtDNA dataset, showing that our methodology enables efficient, highly accurate analysis of very large trees. Datasets requiring a computation time of several days with MCMC can with our MAP algorithm be accurately analysed in less than a minute. From the results of our example analyses, we conclude that our methodology generally avoids getting trapped early in local optima. For the cases where this nevertheless can be a problem, for instance when we in addition to the parameters also infer the tree topology, we show that the problem can be evaded by using a simulated-annealing like (SAL) method in which we favour tree swaps early in the inference while biasing our focus towards rate and time parameter changes later on. Our contribution leaves the field open for fast and accurate dating analysis of nucleotide sequence data. Modeling branch substitutions rates and divergence times separately allows us to include birth-death priors on the times without the assumption of a molecular clock. The methodology is easily adapted to take data from fossil records into account and it can be used together with a broad range of rate and substitution models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowdy, M.W.; Couch, M.D.
A vehicle comparison methodology based on the Otto-Engine Equivalent (OEE) vehicle concept is described. As an illustration of this methodology, the concept is used to make projections of the fuel economy potential of passenger cars using various alternative power systems. Sensitivities of OEE vehicle results to assumptions made in the calculational procedure are discussed. Factors considered include engine torque boundary, rear axle ratio, performance criteria, engine transient response, and transmission shift logic.
Particle Filtering Methods for Incorporating Intelligence Updates
2017-03-01
methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non
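The resampling step alluded to in the excerpt, drawing with replacement from the non-zero-weighted particles, is the standard multinomial resampling used in most particle filters. A minimal sketch under that reading (names are hypothetical; particles excluded by an intelligence update are assumed to carry zero weight):

```python
import numpy as np

def multinomial_resample(particles, weights, rng=None):
    """Resample particles with replacement, proportionally to their weights.

    Zero-weighted particles (e.g., ruled out by an information update)
    have zero probability of being selected and are thus discarded.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), replace=True, p=w)
    resampled = particles[idx]
    uniform_weights = np.full(len(particles), 1.0 / len(particles))
    return resampled, uniform_weights

# Example: 5 particle states, one already excluded by an update (weight 0).
states = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
weights = np.array([0.4, 0.3, 0.0, 0.2, 0.1])
new_states, new_weights = multinomial_resample(states, weights, np.random.default_rng(2))
```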
Parsing Social Network Survey Data from Hidden Populations Using Stochastic Context-Free Grammars
Poon, Art F. Y.; Brouwer, Kimberly C.; Strathdee, Steffanie A.; Firestone-Cruz, Michelle; Lozada, Remedios M.; Kosakovsky Pond, Sergei L.; Heckathorn, Douglas D.; Frost, Simon D. W.
2009-01-01
Background Human populations are structured by social networks, in which individuals tend to form relationships based on shared attributes. Certain attributes that are ambiguous, stigmatized or illegal can create a 'hidden' population, so-called because its members are difficult to identify. Many hidden populations are also at an elevated risk of exposure to infectious diseases. Consequently, public health agencies are presently adopting modern survey techniques that traverse social networks in hidden populations by soliciting individuals to recruit their peers, e.g., respondent-driven sampling (RDS). The concomitant accumulation of network-based epidemiological data, however, is rapidly outpacing the development of computational methods for analysis. Moreover, current analytical models rely on unrealistic assumptions, e.g., that the traversal of social networks can be modeled by a Markov chain rather than a branching process. Methodology/Principal Findings Here, we develop a new methodology based on stochastic context-free grammars (SCFGs), which are well-suited to modeling the tree-like structure of the RDS recruitment process. We apply this methodology to an RDS case study of injection drug users (IDUs) in Tijuana, México, a hidden population at high risk of blood-borne and sexually-transmitted infections (i.e., HIV, hepatitis C virus, syphilis). Survey data were encoded as text strings that were parsed using our custom implementation of the inside-outside algorithm in a publicly-available software package (HyPhy), which uses either expectation maximization or direct optimization methods and permits constraints on model parameters for hypothesis testing. We identified significant latent variability in the recruitment process that violates assumptions of Markov chain-based methods for RDS analysis: firstly, IDUs tended to emulate the recruitment behavior of their own recruiter; and secondly, the recruitment of like peers (homophily) was dependent on the number of recruits. Conclusions SCFGs provide a rich probabilistic language that can articulate complex latent structure in survey data derived from the traversal of social networks. Such structure that has no representation in Markov chain-based models can interfere with the estimation of the composition of hidden populations if left unaccounted for, raising critical implications for the prevention and control of infectious disease epidemics. PMID:19738904
Aging and social expenditures in Italy: some issues associated with population projections.
Terra Abrami, V
1990-01-01
"After describing the main results of the recent Italian population projections, and some possible consequences...aging may have on social expenditures, this paper focuses on attempts to improve the accuracy of development assumptions, with special regard to natural components. Emphasis is placed on the importance of applying specific methodological tools to define self-explanatory assumptions for fertility and mortality and to produce projections which could be considered, with reasonable limitations, as real forecasts." excerpt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shibayama, Y; Umezu, Y; Nakamura, Y
2016-06-15
Purpose: Our assumption was that interfractional shape variations of target volumes may not be negligible for the determination of clinical target volume (CTV)-to-planning target volume (PTV) margins. The aim of this study was to investigate this assumption in a simulation study by developing a computational framework for CTV-to-PTV margins that takes the interfractional shape variations into account based on a point distribution model (PDM). Methods: The systematic and random errors for interfractional shape variations and translations of target volumes were evaluated for four types of CTV regions (only a prostate, a prostate plus proximal 1-cm seminal vesicles, a prostate plus proximal 2-cm seminal vesicles, and a prostate plus whole seminal vesicles). The CTV regions were delineated depending on prostate cancer risk groups on planning computed tomography (CT) and cone beam CT (CBCT) images of 73 fractions of 10 patients. The random and systematic errors for shape variations of CTV regions were derived from PDMs of CTV surfaces for all fractions of each patient. Systematic errors of shape variations of CTV regions were derived by comparing PDMs between planning CTV surfaces and average CTV surfaces. Finally, anisotropic CTV-to-PTV margins with shape variations in 6 directions (anterior, posterior, superior, inferior, right, and left) were computed by using a van Herk margin formula. Results: Differences between CTV-to-PTV margins with and without shape variations ranged from 0.7 to 1.7 mm in the anterior direction, 1.0 to 2.8 mm in the posterior direction, 0.8 to 2.8 mm in the superior direction, 0.6 to 1.6 mm in the inferior direction, 1.4 to 4.4 mm in the right direction, and 1.3 to 5.2 mm in the left direction. Conclusion: Additional margins of more than 1.0 mm were needed in at least 3 directions to guarantee CTV coverage due to shape variations. Therefore, shape variations should be taken into account in the determination of CTV-to-PTV margins.
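For context, the van Herk margin recipe referred to above combines, per direction, the population systematic error \(\Sigma\) and random error \(\sigma\); in its widely quoted simplified form (targeting a minimum CTV dose of 95% of the prescription for 90% of patients) it reads

\[
M \;=\; 2.5\,\Sigma \;+\; 0.7\,\sigma .
\]

In such a framework, the shape-variation components estimated from the point distribution models would enter \(\Sigma\) and \(\sigma\) alongside the translational components before the margin is evaluated in each of the six directions; the formula above is quoted here only as the standard reference form, not as the exact composition used in the study.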
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2004-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
Critical Computer Literacy: Computers in First-Year Composition as Topic and Environment.
ERIC Educational Resources Information Center
Duffelmeyer, Barbara Blakely
2000-01-01
Addresses how first-year students understand the influence of computers by cultural assumptions about technology. Presents three meaning perspectives on technology that students expressed based on formative experiences they have had with it. Discusses implications for how computers and composition scholars incorporate computer technology into…
Critical reflections on methodological challenge in arts and dementia evaluation and research.
Gray, Karen; Evans, Simon Chester; Griffiths, Amanda; Schneider, Justine
2017-01-01
Methodological rigour, or its absence, is often a focus of concern for the emerging field of evaluation and research around arts and dementia. However, this paper suggests that critical attention should also be paid to the way in which individual perceptions, hidden assumptions and underlying social and political structures influence methodological work in the field. Such attention will be particularly important for addressing methodological challenges relating to contextual variability, ethics, value judgement and signification identified through a literature review on this topic. Understanding how, where and when evaluators and researchers experience such challenges may help to identify fruitful approaches for future evaluation.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
..., Domestic Strategic Intelligence Unit, Office of Intelligence, Warning, Plans and Programs, Drug Enforcement... proposed collection of information, including the validity of the methodology and assumptions used; Enhance...
Biomass for Electricity Generation
2002-01-01
This paper examines issues affecting the uses of biomass for electricity generation. The methodology used in the National Energy Modeling System to account for various types of biomass is discussed, and the underlying assumptions are explained.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-27
... of information, including the validity of the methodology and assumptions used; --Evaluate whether...- 52. In addition, 70 respondents of these respondents will be used for reliability testing averaging 1...
77 FR 69614 - Commission Information Collection Activities (FERC-714); Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-20
... to correlate rates and charges, assess reliability and other operating attributes in regulatory..., including the validity of the methodology and assumptions used; (3) ways to enhance the quality, utility and...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-26
... Program the ability to conduct pretests which evaluate the validity and reliability of information... the proposed collection of information, including the validity of the methodology and assumptions used...
78 FR 24716 - Information Collection: Disposal of Mineral Materials
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
... petrified wood and common varieties of sand, stone, gravel, pumice, pumicite, cinders, clay, and other... validity of the methodology and assumptions used; (3) ways to enhance the quality, utility, and clarity of...
A comparison of linear interpolation models for iterative CT reconstruction.
Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric
2016-12-01
Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between cost and computational effort. The distance-driven method offers the possibility to reduce bias but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. Also, the authors find that their evaluation methodology helps appreciating how model, statistical weights, and penalty term interplay together.
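As a point of reference for the three interpolation models compared above, Joseph's method evaluates each ray sum by stepping one pixel at a time along the dominant ray direction and linearly interpolating between the two nearest samples in the transverse direction. A minimal two-dimensional sketch, assuming a unit-spaced pixel grid and a ray parameterised by angle and offset (not the authors' implementation):

```python
import numpy as np

def joseph_ray_sum(image, theta, s):
    """Approximate the line integral of `image` along the ray
    x*cos(theta) + y*sin(theta) = s, using Joseph-style linear interpolation.

    image is indexed as image[iy, ix] with pixel centres at integer coordinates
    (unit spacing); this is a sketch, not a production projector.
    """
    ny, nx = image.shape
    c, si = np.cos(theta), np.sin(theta)
    total = 0.0
    if abs(si) >= abs(c):          # ray closer to horizontal: march over x columns
        for ix in range(nx):
            y = (s - ix * c) / si  # intersection of the ray with this column
            iy = int(np.floor(y))
            f = y - iy
            if 0 <= iy < ny - 1:
                total += (1 - f) * image[iy, ix] + f * image[iy + 1, ix]
        return total / abs(si)     # path length per unit step in x
    else:                          # ray closer to vertical: march over y rows
        for iy in range(ny):
            x = (s - iy * si) / c
            ix = int(np.floor(x))
            f = x - ix
            if 0 <= ix < nx - 1:
                total += (1 - f) * image[iy, ix] + f * image[iy, ix + 1]
        return total / abs(c)      # path length per unit step in y

# Example: projecting a small constant square at a 30-degree ray angle.
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
print(joseph_ray_sum(img, np.deg2rad(30.0), s=40.0))
```

The distance-driven and bilinear models differ mainly in how the per-step interpolation weights (footprints) are formed, which is one source of the bias and noise differences discussed in the abstract.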
Fudin, R
2000-06-01
Methodological changes in subliminal psychodynamic activation experiments based on the assumption that multiletter messages can be encoded automatically (Birgegard & Sohlberg, 1999) are questioned. Their contention that partial experimental messages and appropriate nonsense anagram controls (Fudin, 1986) need not be presented in every experiment is supported, with a reservation. If the difference between responses to the complete message and its control is significant in the predicted direction, then Fudin's procedure should be used. A nonsignificant difference between the response to each partial message and its control is needed to support the assumption of proponents of subliminal psychodynamic activation that successful outcomes are effected by the encoding of the meaning of a complete message. Experiments in subliminal psychodynamic activation can be improved if their methodologies take into account variables that may operate when subliminal stimuli are presented and encoded.
Connecticut Highlands Technical Report - Documentation of the Regional Rainfall-Runoff Model
Ahearn, Elizabeth A.; Bjerklie, David M.
2010-01-01
This report provides the supporting data and describes the data sources, methodologies, and assumptions used in the assessment of existing and potential water resources of the Highlands of Connecticut and Pennsylvania (referred to herein as the “Highlands”). Included in this report are Highlands groundwater and surface-water use data and the methods of data compilation. Annual mean streamflow and annual mean base-flow estimates from selected U.S. Geological Survey (USGS) gaging stations were computed using data for the period of record through water year 2005. The methods of watershed modeling are discussed and regional and sub-regional water budgets are provided. Information on Highlands surface-water-quality trends is presented. USGS web sites are provided as sources for additional information on groundwater levels, streamflow records, and ground- and surface-water-quality data. Interpretation of these data and the findings are summarized in the Highlands study report.
Using directed information for influence discovery in interconnected dynamical systems
NASA Astrophysics Data System (ADS)
Rao, Arvind; Hero, Alfred O.; States, David J.; Engel, James Douglas
2008-08-01
Structure discovery in non-linear dynamical systems is an important and challenging problem that arises in various applications such as computational neuroscience, econometrics, and biological network discovery. Each of these systems have multiple interacting variables and the key problem is the inference of the underlying structure of the systems (which variables are connected to which others) based on the output observations (such as multiple time trajectories of the variables). Since such applications demand the inference of directed relationships among variables in these non-linear systems, current methods that have a linear assumption on structure or yield undirected variable dependencies are insufficient. Hence, in this work, we present a methodology for structure discovery using an information-theoretic metric called directed time information (DTI). Using both synthetic dynamical systems as well as true biological datasets (kidney development and T-cell data), we demonstrate the utility of DTI in such problems.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
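The Bayesian FDR control mentioned above typically amounts to ranking hypotheses by their posterior probability of being null and rejecting the largest set whose average posterior null probability stays at or below the target level. A minimal sketch of that decision rule (the posterior probabilities themselves would come from a model of the test statistics such as the one proposed in the abstract):

```python
import numpy as np

def bayesian_fdr_rejections(posterior_null_prob, alpha=0.05):
    """Return indices of hypotheses rejected at Bayesian FDR level alpha.

    Rejects the largest set of hypotheses, ordered by posterior null
    probability, whose running mean posterior null probability <= alpha.
    """
    p = np.asarray(posterior_null_prob, dtype=float)
    order = np.argsort(p)                                  # most likely non-null first
    running_mean = np.cumsum(p[order]) / np.arange(1, len(p) + 1)
    k = int(np.sum(running_mean <= alpha))                 # largest admissible set
    return order[:k]

# Example: 6 genes with posterior probabilities of being null.
probs = [0.01, 0.02, 0.20, 0.60, 0.90, 0.03]
print(bayesian_fdr_rejections(probs, alpha=0.05))          # rejects the three smallest
```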
An Uncertainty Quantification Framework for Remote Sensing Retrievals
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Hobbs, J.
2017-12-01
Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
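A stripped-down version of such a Monte Carlo framework: perturb the retrieval inputs according to an assumed error distribution, re-run the retrieval, and summarise the spread of the retrieved estimates. Everything below, including the linear toy retrieval, is a placeholder rather than a mission-specific setup:

```python
import numpy as np

def monte_carlo_retrieval_uq(retrieve, true_state, forward_model,
                             noise_std, n_samples=1000, seed=0):
    """Approximate the sampling distribution of a retrieval estimator.

    retrieve:      function mapping noisy radiances -> retrieved state
    forward_model: function mapping true state -> noise-free radiances
    noise_std:     per-channel radiance noise standard deviation (assumed)
    """
    rng = np.random.default_rng(seed)
    clean = forward_model(true_state)
    estimates = []
    for _ in range(n_samples):
        noisy = clean + rng.normal(0.0, noise_std, size=clean.shape)
        estimates.append(retrieve(noisy))
    estimates = np.asarray(estimates)
    return estimates.mean(axis=0) - true_state, estimates.std(axis=0)  # bias, spread

# Toy example: linear forward model and least-squares "retrieval".
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])
fwd = lambda x: A @ x
ret = lambda y: np.linalg.lstsq(A, y, rcond=None)[0]
bias, spread = monte_carlo_retrieval_uq(ret, np.array([1.0, 2.0]), fwd, noise_std=0.1)
```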
A Computational Framework to Control Verification and Robustness Analysis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2010-01-01
This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.
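One of the robustness metrics listed above, the failure probability, can be approximated directly once a requirement-violation indicator is available over the uncertain parameter space. A minimal sketch under an assumed probabilistic uncertainty model (the paper's bounding-set and safety-margin machinery is more elaborate than this):

```python
import numpy as np

def failure_probability(requirement_violated, sample_parameters, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the probability that a closed-loop requirement fails.

    requirement_violated: function p -> bool, True if requirements are violated at p
    sample_parameters:    function rng -> one draw of the uncertain parameter vector
    """
    rng = np.random.default_rng(seed)
    failures = sum(requirement_violated(sample_parameters(rng)) for _ in range(n_samples))
    return failures / n_samples

# Toy example: a "requirement" violated when a 2-D uncertain parameter leaves a disc.
violated = lambda p: np.linalg.norm(p) > 1.5
draw = lambda rng: rng.normal(0.0, 1.0, size=2)
print(failure_probability(violated, draw))
```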
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
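For reference, the two forms singled out above are usually written as follows (parameter symbols vary across papers; these are common conventions rather than necessarily the authors' notation). The two-parameter Prelec function is

\[
w(p) \;=\; \exp\!\bigl(-\delta\,(-\ln p)^{\gamma}\bigr), \qquad \delta, \gamma > 0,
\]

and the Linear-in-Log-Odds function is

\[
w(p) \;=\; \frac{\delta\,p^{\gamma}}{\delta\,p^{\gamma} + (1-p)^{\gamma}}, \qquad \delta, \gamma > 0 .
\]

Their qualitative similarity over much of \(p \in (0,1)\) is precisely what makes adaptive design optimization useful for discriminating between them.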
Future WGCEP Models and the Need for Earthquake Simulators
NASA Astrophysics Data System (ADS)
Field, E. H.
2008-12-01
The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).
McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron
2011-03-01
Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals, and is a technique from computer science. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
Enhancing Least-Squares Finite Element Methods Through a Quantity-of-Interest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb Hameed; Cyr, Eric C.; Liu, Kuo
2014-12-18
Here, we introduce an approach that augments least-squares finite element formulations with user-specified quantities-of-interest. The method incorporates the quantity-of-interest into the least-squares functional and inherits the global approximation properties of the standard formulation as well as increased resolution of the quantity-of-interest. We establish theoretical properties such as optimality and enhanced convergence under a set of general assumptions. Central to the approach is that it offers an element-level estimate of the error in the quantity-of-interest. As a result, we introduce an adaptive approach that yields efficient, adaptively refined approximations. Several numerical experiments for a range of situations are presented to support the theory and highlight the effectiveness of our methodology. Notably, the results show that the new approach is effective at improving the accuracy per total computational cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenzie-Carter, M.A.; Lyon, R.E.
This report contains information to support the Environmental Assessment for the Compact Ignition Tokamak Project (CIT) proposed for Princeton Plasma Physics Laboratory (PPPL). The assumptions and methodology used to assess the impact to members of the public from operational and accidental releases of radioactive material from the proposed CIT during the operational period of the project are described. A description of the tracer release tests conducted at PPPL by NOAA is included; dispersion values from these tests are used in the dose calculation. Radiological releases, doses, and resulting health risks are calculated. The computer code AIRDOS-EPA is used to calculate the individual and population doses for routine releases; FUSCRAC3 is used to calculate doses resulting from off-normal releases where direct application of the NOAA tracer test data is not practical. Where applicable, doses are compared to regulatory limits and guideline values. 44 refs., 5 figs., 18 tabs.
Selecting between-sample RNA-Seq normalization methods from the perspective of their assumptions.
Evans, Ciaran; Hardin, Johanna; Stoebel, Daniel M
2017-02-27
RNA-Seq is a widely used method for studying the behavior of genes under different biological conditions. An essential step in an RNA-Seq study is normalization, in which raw data are adjusted to account for factors that prevent direct comparison of expression measures. Errors in normalization can have a significant impact on downstream analysis, such as inflated false positives in differential expression analysis. An underemphasized feature of normalization is the assumptions on which the methods rely and how the validity of these assumptions can have a substantial impact on the performance of the methods. In this article, we explain how assumptions provide the link between raw RNA-Seq read counts and meaningful measures of gene expression. We examine normalization methods from the perspective of their assumptions, as an understanding of methodological assumptions is necessary for choosing methods appropriate for the data at hand. Furthermore, we discuss why normalization methods perform poorly when their assumptions are violated and how this causes problems in subsequent analysis. To analyze a biological experiment, researchers must select a normalization method with assumptions that are met and that produces a meaningful measure of expression for the given experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
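As one concrete illustration of how a normalization method encodes an assumption, the widely used median-of-ratios approach (popularised by DESeq) assumes that most genes are not differentially expressed, so that the median ratio of a sample's counts to a per-gene reference captures the sample-specific scaling. A minimal sketch, not tied to any particular package's implementation:

```python
import numpy as np

def median_of_ratios_size_factors(counts):
    """Per-sample size factors under the 'most genes are not DE' assumption.

    counts: array of shape (n_genes, n_samples) of raw read counts.
    Genes with a zero count in any sample are excluded from the reference.
    """
    counts = np.asarray(counts, dtype=float)
    keep = np.all(counts > 0, axis=1)
    log_counts = np.log(counts[keep])
    log_reference = log_counts.mean(axis=1, keepdims=True)   # per-gene log geometric mean
    size_factors = np.exp(np.median(log_counts - log_reference, axis=0))
    return size_factors

# Example: sample 2 was sequenced roughly twice as deeply as sample 1.
raw = np.array([[10, 21], [100, 198], [50, 104], [7, 13]])
sf = median_of_ratios_size_factors(raw)
normalized = raw / sf          # counts adjusted for sequencing depth
```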
The Otto-engine-equivalent vehicle concept
NASA Technical Reports Server (NTRS)
Dowdy, M. W.; Couch, M. D.
1978-01-01
A vehicle comparison methodology based on the Otto-Engine Equivalent (OEE) vehicle concept is described. As an illustration of this methodology, the concept is used to make projections of the fuel economy potential of passenger cars using various alternative power systems. Sensitivities of OEE vehicle results to assumptions made in the calculational procedure are discussed. Factors considered include engine torque boundary, rear axle ratio, performance criteria, engine transient response, and transmission shift logic.
What lies behind crop decisions? Coming to terms with revealing farmers' preferences
NASA Astrophysics Data System (ADS)
Gomez, C.; Gutierrez, C.; Pulido-Velazquez, M.; López Nicolás, A.
2016-12-01
The paper offers a fully-fledged applied revealed-preference methodology to screen and represent farmers' choices as the solution of an optimal program involving trade-offs among the alternative welfare outcomes of crop decisions, such as profits, income security, and ease of management. The recursive two-stage method is proposed as an alternative that copes with the methodological problems inherent in common positive mathematical programming (PMP) methodologies. Unlike PMP, in the model proposed in this paper the non-linear costs required for both calibration and smooth adjustment are not at odds with the assumptions of linear Leontief technologies and fixed crop prices and input costs. The method frees the model from ad hoc assumptions about costs, thereby recovering the potential of economic analysis as a means to understand the rationale behind observed and forecasted farmers' decisions and enhancing the model's ability to support policy making in relevant domains such as agricultural policy, water management, risk management, and climate change adaptation. After the introduction, where the methodological drawbacks and challenges are set out, section two presents the theoretical model, section three develops its empirical application to a Spanish irrigation district, and section four concludes and makes suggestions for further research.
Estimation of kinematic parameters in CALIFA galaxies: no-assumption on internal dynamics
NASA Astrophysics Data System (ADS)
García-Lorenzo, B.; Barrera-Ballesteros, J.; CALIFA Team
2016-06-01
We propose a simple approach to homogeneously estimate kinematic parameters of a broad variety of galaxies (ellipticals, spirals, irregulars, or interacting systems). This methodology avoids the use of any kinematic model or any assumption on internal dynamics. This simple but novel approach allows us to determine the frequency of kinematic distortions, systemic velocity, kinematic center, and kinematic position angles, all of which are measured directly from the two-dimensional distributions of radial velocities. We test our analysis tools using the CALIFA Survey.
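As a loose illustration only (not the CALIFA team's algorithm), the sort of direct, model-free measurements described might be sketched as follows, using a fabricated velocity field and toy definitions of the systemic velocity, kinematic position angle, and kinematic centre.

    import numpy as np

    # Toy 2D radial-velocity field (km/s) on a regular grid; the grid and the
    # velocity pattern are hypothetical, purely to show "measure directly from
    # the map" estimates with no kinematic model.
    ny, nx = 51, 51
    y, x = np.mgrid[0:ny, 0:nx]
    vel = 180.0 * (x - nx // 2) / nx + 2100.0     # fake rotating-disk pattern

    # Systemic velocity: a model-free central statistic of the velocity map.
    v_sys = np.median(vel)

    # Kinematic position angle: direction joining the extreme-velocity regions.
    iy_max, ix_max = np.unravel_index(np.argmax(vel), vel.shape)
    iy_min, ix_min = np.unravel_index(np.argmin(vel), vel.shape)
    pa = np.degrees(np.arctan2(iy_max - iy_min, ix_max - ix_min)) % 180.0

    # Kinematic centre: midpoint between the two extremes (toy choice).
    centre = ((ix_max + ix_min) / 2.0, (iy_max + iy_min) / 2.0)
    print(v_sys, pa, centre)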
Do we need methodological theory to do qualitative research?
Avis, Mark
2003-09-01
Positivism is frequently used to stand for the epistemological assumption that empirical science based on principles of verificationism, objectivity, and reproducibility is the foundation of all genuine knowledge. Qualitative researchers sometimes feel obliged to provide methodological alternatives to positivism that recognize their different ethical, ontological, and epistemological commitments and have provided three theories: phenomenology, grounded theory, and ethnography. The author argues that positivism was a doomed attempt to define empirical foundations for knowledge through a rigorous separation of theory and evidence; offers a pragmatic, coherent view of knowledge; and suggests that rigorous, rational empirical investigation does not need methodological theory. Therefore, qualitative methodological theory is unnecessary and counterproductive because it hinders critical reflection on the relation between methodological theory and empirical evidence.
Kromeier, M; Kommerell, G
2006-01-01
The "Measuring and Correcting Methodology after H.-J. Haase" is based on the assumption that a minute deviation from the orthovergence position (fixation disparity) indicates a difficulty to overcome a larger "vergence angle of rest". Objective recordings have, however, revealed that the subjective tests applied in the "Measuring and Correcting Methodology after H.-J. Haase" can mislead to the assumption of a fixation disparity, although both eyes are aligned exactly to the fixation point. How do patients with an inconspicuously small, yet objectively verified strabismus react to the "Measuring and Correcting Methodology by H.-J. Haase"? Eight patients with a microesotropia between 0.5 and 3 degrees were subjected to the "Measuring and Correcting Methodology after H.-J. Haase. In all 8 patients, the prisms determined with the Cross-, Pointer- and Rectangle Tests increased the angle of squint, without reaching a full correction: the original angle prevailed. In the Stereobalance Test, prisms did not reduce the 100 % preponderance of the non-squinting eye. The stereoscopic threshold was between 36 and 1170 arcsec in 7 out of the 8 subjects, and above 4000 arcsec in 1 subject. (1) In all 8 patients, prisms determined with the "Measuring and Correcting Methodology by H.-J. Haase" increased the angle of strabismus, without reaching bifoveal vision. This uniform result suggests that primary microesotropia cannot be corrected with the "Measuring and Correcting Methodology after H.-J. Haase" (2) A lacking contribution of the strabismic eye to the recognition of a lateral offset between stereo objects, as determined with the Stereobalance Test, does not imply a lack of binocular stereopsis.
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Railkar, Sudhir B.
1988-01-01
This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.
77 FR 11481 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-27
... estimate of burden including the validity of the methodology and assumptions used; (c) ways to enhance the... form FNS 698, Profile of Integrity Practices and Procedures; FNS 699, the Integrity Profile Report Form...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dombroski, M; Melius, C; Edmunds, T
2008-09-24
This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models, which enable a broader array of stochastic analyses of model runs to be conducted. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, the elderly, and stay-at-home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis of the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally; (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people; or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. There are several limitations of the methodology that should be explored in future work, including validating the model against reliable historical disease data; improving contact rates, spread methods, and disease parameters through discussions with epidemiological experts; and incorporating realistic behavioral assumptions.
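For orientation, the kind of structure described (local mixing plus long-distance coupling between regions) can be caricatured in a few lines of Python; the four-region setup, rates, and chain-binomial updates below are illustrative assumptions and bear no relation to the actual MESA implementation.

    import numpy as np

    # Toy regional SIR with local mixing plus weak long-distance (air-travel)
    # coupling. All parameters are assumed for illustration only.
    rng = np.random.default_rng(0)
    n_regions, pop = 4, 1_000_000
    S = np.full(n_regions, pop, dtype=float)
    I = np.zeros(n_regions); I[0] = 10.0
    R = np.zeros(n_regions)

    beta_local, beta_travel, gamma = 0.35, 0.002, 0.20   # per-day rates (assumed)

    for day in range(200):
        # Force of infection: mostly local, plus a small coupling to other regions.
        coupling = beta_travel * (I.sum() - I) / pop
        lam = beta_local * I / pop + coupling
        new_inf = rng.binomial(S.astype(int), 1.0 - np.exp(-lam))
        new_rec = rng.binomial(I.astype(int), 1.0 - np.exp(-gamma))
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec

    print("final attack rates:", (R / pop).round(3))

Depending on the coupling strength and the random seed, runs either die out locally, stay regionally contained, or spread everywhere, which mirrors the three qualitative outcomes reported above.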
Documentation of Helicopter Aeroelastic Stability Analysis Computer Program (HASTA)
1977-12-01
of the blade phasing assumption for which all blades of the rotor are identical and equally spaced azimuthally allows the size of the T. matrices ... to be significantly reduced by the removal of the submatrices associated with blades other than the first blade. With the use of this assumption ... different program representational options such as the type of rotor system, the type of blades, and the use of the blade phasing assumption, the
Design Considerations for Large Computer Communication Networks,
1976-04-01
particular, we will discuss the last three assumptions in order to motivate some of the models to be considered in this chapter. Independence Assumption ... channels. Part (a), again motivated by an earlier remark on deterministic routing, will become more accurate when we include in the model, based on fixed ... hierarchical routing, then this assumption appears to be quite acceptable. Part (b) is motivated by the quite symmetrical structure of the networks considered
Validity of body composition methods across ethnic population groups.
Deurenberg, P; Deurenberg-Yap, M
2003-10-01
Most in vivo body composition methods rely on assumptions that may vary among different population groups as well as within the same population group. The assumptions are based on in vitro body composition (carcass) analyses. The majority of body composition studies were performed on Caucasians, and much of the information on the validity of methods and assumptions is available only for this ethnic group. It is assumed that these assumptions are also valid for other ethnic groups. However, if apparent differences across ethnic groups in body composition 'constants' and body composition 'rules' are not taken into account, biased information on body composition will be the result. This in turn may lead to misclassification of obesity or underweight at the individual as well as the population level. There is a need for more cross-ethnic population studies on body composition. Those studies should be carried out carefully, with adequate methodology and standardization, for the obtained information to be valuable.
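One widely cited example of such an assumption-laden 'constant' is the two-compartment densitometry (Siri-type) equation, whose numeric coefficients follow from assumed densities of fat and fat-free mass derived largely from carcass analyses of Caucasian subjects; a population with a different fat-free-mass density shifts the estimate. Written out (an editorial illustration, not part of the review):

    % Two-compartment densitometry: 1/D_b = f/d_f + (1 - f)/d_ffm, solved for the fat fraction f.
    f = \frac{1/D_b - 1/d_{\mathrm{ffm}}}{1/d_{\mathrm{f}} - 1/d_{\mathrm{ffm}}}
      \quad\Longrightarrow\quad
    \%\mathrm{BF} \approx \left(\frac{4.95}{D_b} - 4.50\right) \times 100
      \quad \text{for } d_{\mathrm{f}} \approx 0.90,\ d_{\mathrm{ffm}} \approx 1.10\ \mathrm{g\,cm^{-3}} .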
ERIC Educational Resources Information Center
Notes on Literacy, 1997
1997-01-01
The 1997 volume of "Notes on Literacy," numbers 1-4, includes the following articles: "Community Based Literacy, Burkina Faso"; "The Acquisition of a Second Writing System"; "Appropriate Methodology and Social Context"; "Literacy Megacourse Offered"; "Fitting in with Local Assumptions about…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-20
... validity of the methodology and assumptions used; (3) Enhance the quality, utility, and clarity of the... the inadmissibility grounds that were added by the Intelligence Reform and Terrorism Prevention Act of...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-21
... validity of the methodology and assumptions used. 3. Enhance the quality, utility, and clarity of the... of residue chemistry and toxicology data. In addition, EPA must ensure that adequate enforcement of...
77 FR 77069 - Commission Information Collection Activities (FERC-730); Comment Request; Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... reliability and to reduce the cost of delivered power by reducing transmission congestion. Order No. 679 also... information, including the validity of the methodology and assumptions used; (3) ways to enhance the quality...
The Use of Object-Oriented Analysis Methods in Surety Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, Richard L.; Funkhouser, Donald R.; Wyss, Gregory D.
1999-05-01
Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis methods from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.
Methodological approach to crime scene investigation: the dangers of technology
NASA Astrophysics Data System (ADS)
Barnett, Peter D.
1997-02-01
The visitor to any modern forensic science laboratory is confronted with equipment and processes that did not exist even 10 years ago: thermocyclers to allow genetic typing of nanogram amounts of DNA isolated from a few spermatozoa; scanning electron microscopes that can nearly automatically detect submicrometer-sized particles of molten lead, barium, and antimony produced by the discharge of a firearm and deposited on the hands of the shooter; and computers that can compare an image of a latent fingerprint with millions of fingerprints stored in computer memory. Analysis of populations of physical evidence has permitted statistically minded forensic scientists to use Bayesian inference to draw conclusions based on a priori assumptions which are often poorly understood, irrelevant, or misleading. National commissions that are studying quality control in DNA analysis propose that people with barely relevant graduate degrees and little forensic science experience be placed in charge of forensic DNA laboratories. It is undeniable that high-tech methods have reversed some miscarriages of justice by establishing the innocence of a number of people who were imprisoned for years for crimes that they did not commit. However, this paper deals with the dangers of technology in criminal investigations.
Kernel methods and flexible inference for complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
A Process-Based Transport-Distance Model of Aeolian Transport
NASA Astrophysics Data System (ADS)
Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.
2017-12-01
We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk and particle-size-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
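A minimal sketch of the transport-distance idea, with hypothetical size classes, detachment probabilities, and mean hop lengths (not the authors' parameterization), might look like this in Python:

    import numpy as np

    # Per size class, particles detach with some probability and travel an
    # exponentially distributed distance; the mean downwind displacement per
    # particle is a crude stand-in for the flux contribution of that class.
    rng = np.random.default_rng(1)

    size_classes_um = np.array([100.0, 250.0, 500.0])   # particle diameters (assumed)
    p_detach = np.array([0.30, 0.15, 0.05])             # detachment probability per event (assumed)
    mean_hop_m = np.array([0.60, 0.35, 0.15])           # mean transport distance, m (assumed)

    n_particles = 100_000
    flux_contrib = []
    for p, L in zip(p_detach, mean_hop_m):
        detached = rng.random(n_particles) < p
        hops = rng.exponential(L, size=n_particles) * detached
        flux_contrib.append(hops.mean())                # mean downwind displacement

    print("mean displacement per particle by size class (m):",
          np.round(flux_contrib, 4))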
Family learning research in museums: An emerging disciplinary matrix?
NASA Astrophysics Data System (ADS)
Ellenbogen, Kirsten M.; Luke, Jessica J.; Dierking, Lynn D.
2004-07-01
Thomas Kuhn's notion of a disciplinary matrix provides a useful framework for investigating the growth of research on family learning in and from museums over the last decade. To track the emergence of this disciplinary matrix we consider three issues. First are shifting theoretical perspectives that result in new shared language, beliefs, values, understandings, and assumptions about what counts as family learning. Second are realigning methodologies, driven by underlying disciplinary assumptions about how research in this arena is best conducted, what questions should be addressed, and criteria for valid and reliable evidence. Third is resituating the focus of our research to make the family central to what we study, reflecting a more holistic understanding of the family as an educational institution within a larger learning infrastructure. We discuss research that exemplifies these three issues and demonstrates the ways in which shifting theoretical perspectives, realigning methodologies, and resituating research foci signal the existence of a nascent disciplinary matrix.
Statistical Mechanical Derivation of Jarzynski's Identity for Thermostated Non-Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Cuendet, Michel A.
2006-03-01
The recent Jarzynski identity (JI) relates thermodynamic free energy differences to nonequilibrium work averages. Several proofs of the JI have been provided on the thermodynamic level. They rely on assumptions such as equivalence of ensembles in the thermodynamic limit or weakly coupled infinite heat baths. However, the JI is widely applied to NVT computer simulations involving finite numbers of particles, whose equations of motion are strongly coupled to a few extra degrees of freedom modeling a thermostat. In this case, the above assumptions are no longer valid. We propose a statistical mechanical approach to the JI solely based on the specific equations of motion, without any further assumption. We provide a detailed derivation for the non-Hamiltonian Nosé-Hoover dynamics, which is routinely used in computer simulations to produce canonical sampling.
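For reference, the identity under discussion can be stated compactly (a standard form, written here for orientation rather than quoted from the abstract): the exponentially weighted average of the nonequilibrium work W over repeated realizations of the same switching protocol, started from canonical initial conditions, equals the equilibrium free-energy difference.

    \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
    \qquad \beta = \frac{1}{k_{\mathrm{B}} T},
    \qquad \text{so that}\quad
    \Delta F = -k_{\mathrm{B}} T \,\ln \left\langle e^{-\beta W} \right\rangle .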
ERIC Educational Resources Information Center
Farlow, D'arcy, Ed.
This document reports on a health promotion divisional workshop on popular education (PE) that was conducted to teach health promoters/educators to use PE methodology to analyze their educational work and role as health promoters and to learn to apply PE methodology during the health promotion activities. Information on the history and…
Teaching and Learning Methodologies Supported by ICT Applied in Computer Science
ERIC Educational Resources Information Center
Capacho, Jose
2016-01-01
The main objective of this paper is to show a set of new methodologies applied in the teaching of Computer Science using ICT. The methodologies are framed in the conceptual basis of the following sciences: Psychology, Education and Computer Science. The theoretical framework of the research is supported by Behavioral Theory, Gestalt Theory.…
Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques
2013-03-01
Polytechnic Institute of New York University; technical report covering October 2010 – October 2012. ... schemes for a memristor-based reconfigurable architecture design have not been fully explored yet. Therefore, in this project, we investigated
Spacelab experiment computer study. Volume 1: Executive summary (presentation)
NASA Technical Reports Server (NTRS)
Lewis, J. L.; Hodges, B. C.; Christy, J. O.
1976-01-01
A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The groundrules and assumptions are analysed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.
Computer Applications in Teaching and Learning.
ERIC Educational Resources Information Center
Halley, Fred S.; And Others
Some examples of the usage of computers in teaching and learning are examination generation, automatic exam grading, student tracking, problem generation, computational examination generators, program packages, simulation, and programing skills for problem solving. These applications are non-trivial and do fulfill the basic assumptions necessary…
The Status of Ubiquitous Computing.
ERIC Educational Resources Information Center
Brown, David G.; Petitto, Karen R.
2003-01-01
Explains the prevalence and rationale of ubiquitous computing on college campuses--teaching with the assumption or expectation that all faculty and students have access to the Internet--and offers lessons learned by pioneering institutions. Lessons learned involve planning, technology, implementation and management, adoption of computer-enhanced…
Data on fossil fuel availability for Shared Socioeconomic Pathways.
Bauer, Nico; Hilaire, Jérôme; Brecha, Robert J; Edmonds, Jae; Jiang, Kejun; Kriegler, Elmar; Rogner, Hans-Holger; Sferra, Fabio
2017-02-01
The data files contain the assumptions and results for the construction of cumulative availability curves for coal, oil and gas for the five Shared Socioeconomic Pathways. The files include the maximum availability (also known as cumulative extraction cost curves) and the assumptions that are applied to construct the SSPs. The data is differentiated into twenty regions. The resulting cumulative availability curves are plotted and the aggregate data as well as cumulative availability curves are compared across SSPs. The methodology, the data sources and the assumptions are documented in a related article (N. Bauer, J. Hilaire, R.J. Brecha, J. Edmonds, K. Jiang, E. Kriegler, H.-H. Rogner, F. Sferra, 2016) [1] under DOI: http://dx.doi.org/10.1016/j.energy.2016.05.088.
Calculation of Disease Dynamics in a Population of Households
Ross, Joshua V.; House, Thomas; Keeling, Matt J.
2010-01-01
Early mathematical representations of infectious disease dynamics assumed a single, large, homogeneously mixing population. Over the past decade there has been growing interest in models consisting of multiple smaller subpopulations (households, workplaces, schools, communities), with the natural assumption of strong homogeneous mixing within each subpopulation, and weaker transmission between subpopulations. Here we consider a model of SIRS (susceptible-infectious-recovered-susceptible) infection dynamics in a very large (assumed infinite) population of households, with the simplifying assumption that each household is of the same size (although all methods may be extended to a population with a heterogeneous distribution of household sizes). For this households model we present efficient methods for studying several quantities of epidemiological interest: (i) the threshold for invasion; (ii) the early growth rate; (iii) the household offspring distribution; (iv) the endemic prevalence of infection; and (v) the transient dynamics of the process. We utilize these methods to explore a wide region of parameter space appropriate for human infectious diseases. We then extend these results to consider the effects of more realistic gamma-distributed infectious periods. We discuss how all these results differ from standard homogeneous-mixing models and assess the implications for the invasion, transmission and persistence of infection. The computational efficiency of the methodology presented here will hopefully aid in the parameterisation of structured models and in the evaluation of appropriate responses for future disease outbreaks. PMID:20305791
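To illustrate the household-level ingredient of such models, the sketch below runs a toy Gillespie (stochastic) simulation of SIRS dynamics in a single household exposed to a constant external force of infection; the household size and all rates are assumed for illustration, and the self-consistent between-household coupling of the full households model is omitted.

    import numpy as np

    # Toy continuous-time SIRS for one household of size n with a constant
    # external force of infection eps. Not the authors' computation scheme.
    rng = np.random.default_rng(2)
    n = 4          # household size (assumed)
    beta_h = 1.5   # within-household transmission rate, per pair (assumed)
    eps = 0.01     # external force of infection (assumed)
    gamma = 1.0    # recovery rate (assumed)
    xi = 0.05      # waning-immunity rate, R -> S (assumed)

    S, I, R, t = n, 0, 0, 0.0
    while t < 500.0:
        rates = np.array([
            S * (beta_h * I / max(n - 1, 1) + eps),  # infection event
            gamma * I,                               # recovery event
            xi * R,                                  # waning event
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            S, I = S - 1, I + 1
        elif event == 1:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1

    print("household state at end of run (S, I, R):", S, I, R)

Tracking many such households and replacing eps with a function of the population-level prevalence is, in spirit, how quantities like the endemic prevalence in item (iv) are obtained.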
Van der Elst, Wim; Molenberghs, Geert; Hilgers, Ralf-Dieter; Verbeke, Geert; Heussen, Nicole
2016-11-01
There are various settings in which researchers are interested in the assessment of the correlation between repeated measurements that are taken within the same subject (i.e., reliability). For example, the same rating scale may be used to assess the symptom severity of the same patients by multiple physicians, or the same outcome may be measured repeatedly over time in the same patients. Reliability can be estimated in various ways, for example, using the classical Pearson correlation or the intra-class correlation in clustered data. However, contemporary data often have a complex structure that goes well beyond the restrictive assumptions that are needed with the more conventional methods to estimate reliability. In the current paper, we propose a general and flexible modeling approach that allows for the derivation of reliability estimates, standard errors, and confidence intervals - appropriately taking hierarchies and covariates in the data into account. Our methodology is developed for continuous outcomes together with covariates of an arbitrary type. The methodology is illustrated in a case study, and a Web Appendix is provided which details the computations using the R package CorrMixed and the SAS software. Copyright © 2016 John Wiley & Sons, Ltd.
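As a hedged illustration of the variance-components idea behind such reliability estimates (not a reproduction of the CorrMixed or SAS code mentioned above), a minimal Python sketch using the statsmodels mixed-model API might look like the following; the simulated data, variance values, and the intercept-only random-effects model are all assumptions made for the example.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate repeated measurements nested within subjects.
    rng = np.random.default_rng(3)
    n_subjects, n_reps = 50, 4
    subject = np.repeat(np.arange(n_subjects), n_reps)
    subject_effect = rng.normal(0.0, 2.0, n_subjects)          # between-subject SD = 2 (assumed)
    y = 10.0 + subject_effect[subject] + rng.normal(0.0, 1.0, n_subjects * n_reps)
    df = pd.DataFrame({"y": y, "subject": subject})

    # Random-intercept model; reliability of a single measurement is the
    # intra-class correlation built from the two variance components.
    model = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit()
    var_between = float(model.cov_re.iloc[0, 0])   # subject-level variance
    var_within = float(model.scale)                # residual variance
    icc = var_between / (var_between + var_within)
    print(round(icc, 3))   # should land near 4 / (4 + 1) = 0.8 for these settings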
NASA Technical Reports Server (NTRS)
Funk, Christie J.
2013-01-01
A software program and associated methodology to study gust loading on aircraft exists for a classification of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified, then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and output data so as to provide a more useful and accurate tool for gust load analysis. Revisions are made in the categories of aircraft geometry, computation of aerodynamic forces and moments, and implementation of horizontal tail mode shapes. In order to improve the original software program to enhance usefulness, a wing control surface and horizontal tail control surface is added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs in included. These revisions and enhancements are implemented and an analysis of the results is used to validate the modifications.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
... validity of the methodology and assumptions used; (3) Enhance the quality, utility, and clarity of the... the inadmissibility grounds that were added by the Intelligence Reform and Terrorism Prevention Act of...
Action-Oriented Research: Models and Methods.
ERIC Educational Resources Information Center
Small, Stephen A.
1995-01-01
Four models of action-oriented research, a research approach that can inform policy and practice, are described: action, participatory, empowerment, and feminism research. Discusses historical roots, epistemological assumptions, agendas, and methodological strategies of each, and presents implications for family researchers. (JPS)
ERIC Educational Resources Information Center
Kast, David
1993-01-01
The crisis confronting calculus and mathematics education generally results from a number of failed assumptions implicit in the dominant lecture-homework-exam methodology used in teaching mathematics. Positive resolution of this crisis can be found in adopting a noncompetitive, collaborative approach to mathematics education. (Author)
ERIC Educational Resources Information Center
Crain, Robert L.; Hawley, Willis D.
1982-01-01
Criticizes James Coleman's study, "Public and Private Schools," and points out methodological weaknesses in sampling, testing, data reliability, and statistical methods. Questions assumptions which have led to conclusions justifying federal support, especially tuition tax credits, to private schools. Raises the issue of ethical standards…
The Cost of CAI: A Matter of Assumptions.
ERIC Educational Resources Information Center
Kearsley, Greg P.
Cost estimates for Computer Assisted Instruction (CAI) depend crucially upon the particular assumptions made about the components of the system to be included in the costs, the expected lifetime of the system and courseware, and the anticipated student utilization of the system/courseware. The cost estimates of three currently operational systems…
Structural Optimization Methodology for Rotating Disks of Aircraft Engines
NASA Technical Reports Server (NTRS)
Armand, Sasan C.
1995-01-01
In support of the preliminary evaluation of various engine technologies, a methodology has been developed for structurally designing the rotating disks of an aircraft engine. The structural design methodology, along with a previously derived methodology for predicting low-cycle fatigue life, was implemented in a computer program. An interface computer program was also developed that gathers the required data from a flowpath analysis program (WATE) being used at NASA Lewis. The computer program developed for this study requires minimum interaction with the user, thus allowing engineers with varying backgrounds in aeropropulsion to successfully execute it. The stress analysis portion of the methodology and the computer program were verified by employing the finite element analysis method. The 10th-stage, high-pressure-compressor disk of the Energy Efficient Engine Program (E3) engine was used to verify the stress analysis; the differences between the stresses and displacements obtained from the computer program developed for this study and from the finite element analysis were all below 3 percent for the problem solved. The computer program developed for this study was employed to structurally optimize the rotating disks of the E3 high-pressure compressor. The rotating disks designed by the computer program in this study were approximately 26 percent lighter than calculated from the E3 drawings. The methodology is presented herein.
Alive and Kicking: Making the Case for Mainframe Education
ERIC Educational Resources Information Center
Murphy, Marianne C.; Sharma, Aditya; Seay, Cameron; McClelland, Marilyn K.
2010-01-01
As universities continually update and assess their curriculums, mainframe computing is quite often overlooked as it is often thought of as "legacy computer." Mainframe computing appears to be either uninteresting or thought of as a computer past its prime. However, both assumptions are leading to a shortage of IS professionals in the…
Kiluk, Brian D.; Sugarman, Dawn E.; Nich, Charla; Gibbons, Carly J.; Martino, Steve; Rounsaville, Bruce J.; Carroll, Kathleen M.
2013-01-01
Objective: Computer-assisted therapies offer a novel, cost-effective strategy for providing evidence-based therapies to a broad range of individuals with psychiatric disorders. However, the extent to which the growing body of randomized trials evaluating computer-assisted therapies meets current standards of methodological rigor for evidence-based interventions is not clear. Method: A methodological analysis of randomized clinical trials of computer-assisted therapies for adult psychiatric disorders, published between January 1990 and January 2010, was conducted. Seventy-five studies that examined computer-assisted therapies for a range of axis I disorders were evaluated using a 14-item methodological quality index. Results: Results indicated marked heterogeneity in study quality. No study met all 14 basic quality standards, and three met 13 criteria. Consistent weaknesses were noted in evaluation of treatment exposure and adherence, rates of follow-up assessment, and conformity to intention-to-treat principles. Studies utilizing weaker comparison conditions (e.g., wait-list controls) had poorer methodological quality scores and were more likely to report effects favoring the computer-assisted condition. Conclusions: While several well-conducted studies have indicated promising results for computer-assisted therapies, this emerging field has not yet achieved a level of methodological quality equivalent to that required for other evidence-based behavioral therapies or pharmacotherapies. Adoption of more consistent standards for methodological quality in this field, with greater attention to potential adverse events, is needed before computer-assisted therapies are widely disseminated or marketed as evidence based. PMID:21536689
Economic and environmental optimization of waste treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Münster, M.; Ravn, H.; Hedegaard, K.
2015-04-15
Highlights: • Optimizing waste treatment by incorporating LCA methodology. • Applying different objectives (minimizing costs or GHG emissions). • Prioritizing multiple objectives given different weights. • Optimum depends on objective and assumed displaced electricity production. - Abstract: This article presents the new systems engineering optimization model, OptiWaste, which incorporates a life cycle assessment (LCA) methodology and captures important characteristics of waste management systems. As part of the optimization, the model identifies the most attractive waste management options. The model renders it possible to apply different optimization objectives, such as minimizing costs or greenhouse gas emissions, or to prioritize several objectives given different weights. A simple illustrative case is analysed, covering alternative treatments of one tonne of residual household waste: incineration of the full amount or sorting out organic waste for biogas production for either combined heat and power generation or as fuel in vehicles. The case study illustrates that the optimal solution depends on the objective and on assumptions regarding the background system, illustrated here with different assumptions regarding displaced electricity production. The article shows that it is feasible to combine LCA methodology with optimization. Furthermore, it highlights the need for including the integrated waste and energy system in the model.
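To make the weighted-objective idea concrete, the sketch below sets up a toy allocation of one tonne of waste across three treatment options as a linear program; the cost and emission numbers, the 0.5 weight, and the use of scipy's linprog are illustrative assumptions, not the OptiWaste model.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical per-tonne costs and net GHG emissions for three treatments:
    # incineration, biogas for CHP, biogas as vehicle fuel.
    cost = np.array([80.0, 110.0, 120.0])    # EUR per tonne (assumed)
    ghg = np.array([250.0, 120.0, 90.0])     # kg CO2-eq per tonne (assumed)
    w = 0.5                                  # weight on cost vs. emissions (assumed)

    # Normalise each objective so the weight is meaningful, then combine.
    c = w * cost / cost.max() + (1.0 - w) * ghg / ghg.max()

    # Constraint: the treated fractions must sum to one tonne.
    res = linprog(c, A_eq=[[1.0, 1.0, 1.0]], b_eq=[1.0], bounds=[(0.0, 1.0)] * 3)
    print(res.x)   # optimal split of the tonne across the three treatments

Sweeping w from 0 to 1 (or changing the assumed displaced-electricity credit inside the emission factors) moves the optimum between treatments, which is the sensitivity the case study highlights.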
NASA Astrophysics Data System (ADS)
Kang, Pilsang; Koo, Changhoi; Roh, Hokyu
2017-11-01
Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
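For context, the two existing approaches the abstract refers to can be written out in their standard textbook forms (stated here for orientation, not taken from the paper). Fitting the calibration data as y = beta_0 + beta_1 x and inverting gives the classical estimator, while regressing x directly on y gives the inverse estimator:

    \hat{x}_0^{\mathrm{C}} = \frac{y_0 - \hat{\beta}_0}{\hat{\beta}_1}
    \qquad\text{(classical)},
    \qquad
    \hat{x}_0^{\mathrm{I}} = \hat{\gamma}_0 + \hat{\gamma}_1 y_0
    \qquad\text{(inverse, from } x = \gamma_0 + \gamma_1 y\text{)} .

The paper's "reversed inverse regression" and its error-propagation treatment are then positioned as a third route whose statistical properties remain consistent when the observed inputs themselves carry error.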
Cognitive neuroenhancement: false assumptions in the ethical debate.
Heinz, Andreas; Kipke, Roland; Heimann, Hannah; Wiesing, Urban
2012-06-01
The present work critically examines two assumptions frequently stated by supporters of cognitive neuroenhancement. The first, explicitly methodological, assumption is the supposition of effective and side-effect-free neuroenhancers. However, there is an evidence-based concern that the most promising drugs currently used for cognitive enhancement can be addictive. Furthermore, this work describes why the neuronal correlates of key cognitive concepts, such as learning and memory, are so deeply connected with mechanisms implicated in the development and maintenance of addictive behaviour that modification of these systems may inevitably run the risk of addiction to the enhancing drugs. Such a potential risk of addiction could only be falsified by in-depth empirical research. The second, implicit, assumption is that research on neuroenhancement does not pose a serious moral problem. However, the potential for addiction, along with arguments related to research ethics and the potential social impact of neuroenhancement, could invalidate this assumption. It is suggested that ethical evaluation needs to consider the empirical data as well as the question of whether and how such empirical knowledge can be obtained.
Assumptions at the philosophical and programmatic levels in evaluation.
Mertens, Donna M
2016-12-01
Stakeholders and evaluators hold a variety of assumptions at the philosophical, methodological, and programmatic levels. The use of a transformative philosophical framework is presented as a way for evaluators to become more aware of the implications of various assumptions made by themselves and program stakeholders. It is argued and demonstrated that evaluators who are aware of the assumptions that underlie their evaluation choices are able to provide useful support for stakeholders in examining the assumptions they hold with regard to the nature of the problem being addressed, the program designed to solve the problem, and the approach to evaluation that is appropriate in that context. Such an informed approach has the potential to lead to more appropriate and culturally responsive programs being implemented in ways that achieve the desired impacts, as well as to evaluation approaches that support effective solutions to intransigent social problems. These arguments are illustrated through examples of evaluations from multiple sectors; additional challenges are also identified. Copyright © 2016 Elsevier Ltd. All rights reserved.
Cross-Cultural Group Performance
ERIC Educational Resources Information Center
Mitchell, Rebecca; Boyle, Brendan; Nicholas, Stephen
2011-01-01
Purpose: This paper aims to explore the assumption that the impact of cultural diversity on knowledge creating capability is consequent to associated differences in knowledge and perspectives, and suggests that these knowledge differences produce their effect by triggering deliberative, collaborative behaviours. Design/methodology/approach: To…
Wind Resource and Feasibility Assessment Report for the Lummi Reservation
DOE Office of Scientific and Technical Information (OSTI.GOV)
DNV Renewables; J.C. Brennan & Associates, Inc.; Hamer Environmental L.P.
2012-08-31
This report summarizes the wind resource on the Lummi Indian Reservation (Washington State) and presents the methodology, assumptions, and final results of the wind energy development feasibility assessment, which included an assessment of biological impacts and noise impacts.
76 FR 20043 - Agency Information Collection Activities: New Collection, Comments Requested
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-11
... DEPARTMENT OF JUSTICE Federal Bureau of Investigation [OMB Number 1110-NEW] Agency Information... renewal: Final Disposition Report (R-84). The Department of Justice (DOJ), Federal Bureau of Investigation... validity of the methodology and assumptions used;
Stable isotopes and elasmobranchs: tissue types, methods, applications and assumptions.
Hussey, N E; MacNeil, M A; Olin, J A; McMeans, B C; Kinney, M J; Chapman, D D; Fisk, A T
2012-04-01
Stable-isotope analysis (SIA) can act as a powerful ecological tracer with which to examine diet, trophic position and movement, as well as more complex questions pertaining to community dynamics and feeding strategies or behaviour among aquatic organisms. With major advances in the understanding of the methodological approaches and assumptions of SIA through dedicated experimental work in the broader literature coupled with the inherent difficulty of studying typically large, highly mobile marine predators, SIA is increasingly being used to investigate the ecology of elasmobranchs (sharks, skates and rays). Here, the current state of SIA in elasmobranchs is reviewed, focusing on available tissues for analysis, methodological issues relating to the effects of lipid extraction and urea, the experimental dynamics of isotopic incorporation, diet-tissue discrimination factors, estimating trophic position, diet and mixing models and individual specialization and niche-width analyses. These areas are discussed in terms of assumptions made when applying SIA to the study of elasmobranch ecology and the requirement that investigators standardize analytical approaches. Recommendations are made for future SIA experimental work that would improve understanding of stable-isotope dynamics and advance their application in the study of sharks, skates and rays. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.
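As one concrete illustration of how assumed constants enter such analyses, trophic position is commonly estimated from nitrogen isotope ratios with a baseline formula of the following form (a standard approach stated here for orientation, not quoted from this review); the diet-tissue discrimination factor is precisely the kind of assumption the authors emphasize, with values near 3.4 per mil often adopted by default:

    \mathrm{TP}_{\mathrm{consumer}}
      = \lambda
      + \frac{\delta^{15}\mathrm{N}_{\mathrm{consumer}} - \delta^{15}\mathrm{N}_{\mathrm{baseline}}}
             {\Delta^{15}\mathrm{N}}

where lambda is the assumed trophic position of the baseline organism and Delta-15N is the assumed per-trophic-step discrimination; a discrimination factor that is too large or too small for elasmobranch tissues biases the estimated trophic position accordingly.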
Corbett, Andrea M; Francis, Karen; Chapman, Ysanne
2007-04-01
Identifying a methodology to guide a study that aims to enhance service delivery can be challenging. Participatory action research offers a solution to this challenge as it both informs and is informed by critical social theory. In addition, using a feminist lens helps establish this approach as a suitable methodology for changing practice. This methodology embraces empowerment, self-determination, and the facilitation of agreed change as central tenets that guide the research process. Encouraged by the work of Foucault, Freire, Habermas, and Maguire, this paper explicates the philosophical assumptions underpinning critical social theory and outlines how feminist influences are complementary in exploring the processes and applications of nursing research that seeks to embrace change.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
Evaluation of 2D shallow-water model for spillway flow with a complex geometry
USDA-ARS?s Scientific Manuscript database
Although the two-dimensional (2D) shallow water model is formulated based on several assumptions, such as a hydrostatic pressure distribution and negligible vertical velocity, as a simple alternative to the complex 3D model it has been used to compute water flows in which these assumptions may be ...
A novel porous Ffowcs-Williams and Hawkings acoustic methodology for complex geometries
NASA Astrophysics Data System (ADS)
Nitzkorski, Zane Lloyd
Predictive noise calculations from high Reynolds number flows in complex engineering geometry are becoming a possibility with the high performance computing resources that have become available in recent years. Increasing the applicability and reliability of solution methodologies have been two key challenges toward this goal. This dissertation develops a porous Ffowcs-Williams and Hawkings (FW-H) methodology that uses a novel endcap scheme and can be applied to unstructured grids. The use of unstructured grids allows complex geometry to be represented, while the porous formulation eliminates difficulties with the choice of acoustic Green's function. Specifically, this dissertation (1) proposes and examines a novel endcap procedure to account for spurious noise, (2) uses the proposed methodology to investigate noise production from a range of subcritical Reynolds number circular cylinders, and (3) investigates a trailing edge geometry for noise production and to illustrate the generality of the Green's function. Porous acoustic analogies need an endcap scheme in order to prevent spurious noise due to truncation errors. A dynamic endcap methodology is proposed to account for spurious contributions to the far-field sound within the context of the FW-H acoustic analogy. The quadrupole source terms are correlated over multiple planes to obtain a convection velocity, which is then used to determine a corrective convective flux at the FW-H porous surface. The proposed approach is first demonstrated for a convecting potential vortex, with the correlation examined as the vortex passes through multiple exit planes. It is then evaluated by computing the sound emitted by flow over a circular cylinder at a Reynolds number of 150 and compared to other endcap methods, such as that of Shur et al. [1]. Insensitivity to end plane location and spacing and the effect of the dynamic convection velocity are assessed. Subcritical Reynolds number circular cylinder flows are investigated at Re = 3900, 10000, and 89000 in order to evaluate the method and investigate the physical sources of noise production. The Re = 3900 case was chosen due to its highly validated flow field and serves as a basis of comparison. The Re = 10000 cylinder is used to validate the noise production at turbulent Reynolds numbers against other simulations. Finally, the Re = 89000 simulations are compared to experiment, serving as a rigorous test of the method's predictive ability. The proposed approach demonstrates better performance than other commonly used approaches, with the added benefits of computational efficiency and the ability to query independent volumes. This makes it possible to determine how much noise production is directly associated with volumetric contributions. These capabilities allow for a thorough investigation of the sources of noise production and a means to evaluate proposed theories. A physical description of the source of sound for subcritical Reynolds number cylinders is established. A 45° beveled trailing edge configuration is investigated due to its relevance to hydrofoil and propeller noise. This configuration also allows for evaluation of the assumption associated with the free-space Green's function, since the half-plane Green's function can be used to represent the solution to the wave equation for this geometry. Similar results for directivity and amplitudes from the two formulations confirm the flexibility of the porous surface implementation.
Good agreement with experiment is obtained. The effect of boundary layer thickness is investigated. The noise produced in the upper half plane is significantly decreased for the thinner boundary layer while the noise production in the lower half plane is only slightly decreased.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... explored in this series is cloud computing. The workshop on this topic will be held in Gaithersburg, MD on October 21, 2011. Assertion: "Current implementations of cloud computing indicate a new approach to security." Implementations of cloud computing have provided new ways of thinking about how to secure data...
The incompressibility assumption in computational simulations of nasal airflow.
Cal, Ismael R; Cercos-Pita, Jose Luis; Duque, Daniel
2017-06-01
Most of the computational works on nasal airflow up to date have assumed incompressibility, given the low Mach number of these flows. However, for high temperature gradients, the incompressibility assumption could lead to a loss of accuracy, due to the temperature dependence of air density and viscosity. In this article we aim to shed some light on the influence of this assumption in a model of calm breathing in an Asian nasal cavity, by solving the fluid flow equations in compressible and incompressible formulation for different ambient air temperatures using the OpenFOAM package. At low flow rates and warm climatological conditions, similar results were obtained from both approaches, showing that density variations need not be taken into account to obtain a good prediction of all flow features, at least for usual breathing conditions. This agrees with most of the simulations previously reported, at least as far as the incompressibility assumption is concerned. However, parameters like nasal resistance and wall shear stress distribution differ for air temperatures below [Formula: see text]C approximately. Therefore, density variations should be considered for simulations at such low temperatures.
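A quick ideal-gas estimate shows why the assumption usually holds and when it starts to strain; the two temperatures below are assumed for illustration and are not taken from the study.

    # For an ideal gas at fixed pressure, density scales as 1/T, so the density
    # contrast between cool inhaled air and air near body temperature is only a
    # few percent, which is why incompressible solvers usually suffice.
    T_ambient = 293.15   # 20 degC inhaled air (assumed)
    T_warm = 307.15      # ~34 degC air near the nasal mucosa (assumed)
    density_ratio = T_warm / T_ambient   # rho_ambient / rho_warm at equal pressure
    print(f"density change: {100.0 * (density_ratio - 1.0):.1f} %")   # ~4.8 %

Colder inhaled air widens this temperature gap, which is consistent with the abstract's observation that compressible and incompressible predictions begin to diverge at low ambient temperatures.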
Racine, Eric; Martin Rubio, Tristana; Chandler, Jennifer; Forlini, Cynthia; Lucke, Jayne
2014-08-01
In the debate on the ethics of the non-medical use of pharmaceuticals for cognitive performance enhancement in healthy individuals there is a clear division between those who view "cognitive enhancement" as ethically unproblematic and those who see such practices as fraught with ethical problems. Yet another, more subtle issue, relates to the relevance and quality of the contribution of scholarly bioethics to this debate. More specifically, how have various forms of speculation, anticipatory ethics, and methods to predict scientific trends and societal responses augmented or diminished this contribution? In this paper, we use the discussion of the ethics of cognitive enhancement to explore the positive and negative contribution of speculation in bioethics scholarship. First, we review and discuss how speculation has relied on different sets of assumptions regarding the non-medical use of stimulants, namely: (1) terminology and framing; (2) scientific aspects such as efficacy and safety; (3) estimates of prevalence and consequent normalization; and (4) the need for normative reflection and regulatory guidelines. Second, three methodological guideposts are proposed to alleviate some of the pitfalls of speculation: (1) acknowledge assumptions more explicitly and identify the value attributed to assumptions; (2) validate assumptions with interdisciplinary literature; and (3) adopt a broad perspective to promote more comprehensive reflection. We conclude that, through the examination of the controversy about cognitive enhancement, we can employ these methodological guideposts to enhance the value of contributions from bioethics and minimize potential epistemic and practical pitfalls in this case and perhaps in other areas of bioethical debate.
Resampling: A Marriage of Computers and Statistics. ERIC/TM Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Shafer, Mary Morello
Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…
Computer Applications and Technology 105.
ERIC Educational Resources Information Center
Manitoba Dept. of Education and Training, Winnipeg.
Designed to promote Manitoba students' familiarity with computer technology and their ability to interact with that technology, the Computer Applications and Technology 105 course is a one-credit course presented in 15 topical, non-sequential units that require 110-120 hours of instruction time. It has been developed with the assumption that each…
Medication safety research by observational study design.
Lao, Kim S J; Chui, Celine S L; Man, Kenneth K C; Lau, Wallis C Y; Chan, Esther W; Wong, Ian C K
2016-06-01
Observational studies have been recognised to be essential for investigating the safety profile of medications. Numerous observational studies have been conducted on the platform of large population databases, which provide adequate sample size and follow-up length to detect infrequent and/or delayed clinical outcomes. Cohort and case-control are well-accepted traditional methodologies for hypothesis testing, while within-individual study designs are developing and evolving, addressing previous known methodological limitations to reduce confounding and bias. Respective examples of observational studies of different study designs using medical databases are shown. Methodology characteristics, study assumptions, strengths and weaknesses of each method are discussed in this review.
Schedule Risks Due to Delays in Advanced Technology Development
NASA Technical Reports Server (NTRS)
Reeves, John D. Jr.; Kayat, Kamal A.; Lim, Evan
2008-01-01
This paper discusses a methodology and modeling capability that probabilistically evaluates the likelihood and impacts of delays in advanced technology development prior to the start of design, development, test, and evaluation (DDT&E) of complex space systems. The challenges of understanding and modeling advanced technology development considerations are first outlined, followed by a discussion of the problem in the context of lunar surface architecture analysis. The current and planned methodologies to address the problem are then presented along with sample analyses and results. The methodology discussed herein provides decision-makers a thorough understanding of the schedule impacts resulting from the inclusion of various enabling advanced technology assumptions within system design.
ERIC Educational Resources Information Center
Romi, Shlomo; Zoabi, Houssien
2003-01-01
Describes a study that examined the attitudes of Arab dropout youth in Israel toward the use of computer technology and the influence of this use on their self-esteem. Results supported the assumption that exposure to computer technology would make dropout adolescents' attitudes toward computers more positive. (Contains 43 references.)…
Robust estimators for speech enhancement in real environments
NASA Astrophysics Data System (ADS)
Sandoval-Ibarra, Yuma; Diaz-Ramirez, Victor H.; Kober, Vitaly
2015-09-01
Common statistical estimators for speech enhancement rely on several assumptions about the stationarity of speech signals and noise. These assumptions may not always be valid in real life, owing to the nonstationary characteristics of speech and noise processes. We propose new estimators, based on existing ones, that incorporate rank-order statistics. The proposed estimators are better adapted to the nonstationary characteristics of speech signals and noise processes. Through computer simulations we show that the proposed estimators yield better performance in terms of objective metrics than known estimators when speech signals are contaminated with airport, babble, restaurant, and train-station noise.
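The abstract does not give the estimators themselves; as a hedged illustration of how a rank-order statistic can replace a stationarity-dependent average, the sketch below (hypothetical data, not the authors' method) uses a per-bin median over time as a robust noise estimate inside basic spectral subtraction.

# Illustrative sketch only: a rank-order (median-based) noise estimate
# plugged into elementary spectral subtraction.
import numpy as np

def spectral_subtraction_median(frames_mag):
    """frames_mag: 2-D array (n_frames, n_bins) of spectral magnitudes."""
    # The per-bin median over time is robust to nonstationary speech bursts,
    # unlike a plain mean.
    noise_est = np.median(frames_mag, axis=0)
    enhanced = np.maximum(frames_mag - noise_est, 0.0)   # half-wave rectification
    return enhanced

rng = np.random.default_rng(0)
noisy = np.abs(rng.normal(size=(100, 257)))              # hypothetical magnitudes
print(spectral_subtraction_median(noisy).shape)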
Algorithms for the Computation of Debris Risk
NASA Technical Reports Server (NTRS)
Matney, Mark J.
2017-01-01
Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.
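The specific NASA algorithms are not reproduced in the abstract; the sketch below shows only the elementary Poisson-type impact-risk calculation that such tools generally build on, with hypothetical flux, area and duration values rather than ORDEM output.

# Expected number of debris impacts and probability of at least one impact,
# given a debris flux, a cross-sectional area, and a mission duration.
import math

def impact_risk(flux_per_m2_yr, area_m2, years):
    expected_hits = flux_per_m2_yr * area_m2 * years      # N = F * A * t
    p_at_least_one = 1.0 - math.exp(-expected_hits)       # Poisson assumption
    return expected_hits, p_at_least_one

print(impact_risk(flux_per_m2_yr=1e-5, area_m2=20.0, years=5.0))  # placeholder values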
SURVIAC Bulletin: RPG Encounter Modeling, Vol 27, Issue 1, 2012
2012-01-01
return a probability of hit (PHIT) for the scenario. In the model, PHIT depends on the presented area of the targeted system and a set of errors infl...simplifying assumptions, is data-driven, and uses simple yet proven methodologies to determine PHIT. The inputs to THREAT describe the target, the RPG, and... [Figure: Point on 2-D Representation of a CH-47] The determination of PHIT by THREAT is performed using one of two possible methodologies. The first is a
Short-term energy outlook. Volume 2. Methodology
NASA Astrophysics Data System (ADS)
1983-05-01
Recent changes in forecasting methodology for nonutility distillate fuel oil demand and for the near-term petroleum forecasts are discussed. The accuracy of previous short-term forecasts of most of the major energy sources published in the last 13 issues of the Outlook is evaluated. Macroeconomic and weather assumptions are included in this evaluation. Energy forecasts for 1983 are compared. Structural change in US petroleum consumption, the use of appropriate weather data in energy demand modeling, and petroleum inventories, imports, and refinery runs are discussed.
Top Level Space Cost Methodology (TLSCM)
1997-12-02
Software; 6. ACEIT; C. Ground Rules and Assumptions; D. Typical Life Cycle Cost Distribution; E. Methodologies; 1. Cost/Budget Threshold; 2. Analogy... which is based on real-time Air Force and space programs. Ref. (25:2-8, 2-9) 6. ACEIT: Automated Cost Estimating Integrated Tools (ACEIT), Tecolote Research, Inc. There is a way to use the ACEIT cost program to get a print-out of an expanded WBS. Therefore, find someone that has ACEIT experience and
Are There Faster Than Light Particles?
ERIC Educational Resources Information Center
Kreisler, Michael N.
1969-01-01
Based upon recent relativistic theory, the researcher describes the search for tachyons, particles having velocities greater than that of light. The properties of these particles are speculated upon. The author delineates the difficulties anticipated in their detection and the assumptions underlying their methodology. (RR)
Ethnographic/Qualitative Research: Theoretical Perspectives and Methodological Strategies.
ERIC Educational Resources Information Center
Butler, E. Dean
This paper examines the metatheoretical concepts associated with ethnographic/qualitative educational inquiry and overviews the more commonly utilized research designs, data collection methods, and analytical approaches. The epistemological and ontological assumptions of this newer approach differ greatly from those of the traditional educational…
Doing Animist Research in Academia: A Methodological Framework
ERIC Educational Resources Information Center
Barrett, M. J.
2011-01-01
Epistemologies, ontologies, and education based on colonial Eurocentric assumptions have made animism difficult to explicitly explore, acknowledge, and embody in environmental research. Boundaries between humans and the "natural world," including other animals, are continually reproduced through a culture that privileges rationality and the…
A Methodology for Developing Army Acquisition Strategies for an Uncertain Future
2007-01-01
manuscript for publication. Acronyms: ABP, Assumption-Based Planning; ACEIT, Automated Cost Estimating Integrated Tool; ACR, Armored Cavalry Regiment; ACTD... decisions. For example, they employ the Automated Cost Estimating Integrated Tools (ACEIT) to simplify life cycle cost estimates; other tools are
A Practical Examination of Two Diverse Research Paradigms
ERIC Educational Resources Information Center
Brewer, Robert A.
2007-01-01
This manuscript examines the practical differences between quantitative and qualitative inquiry by comparing the differences between one article from each paradigm. Quantitative research differs greatly from qualitative inquiry in purpose, assumptions, methodology, and representation. While quantitative research has been the dominant paradigm for…
76 FR 51065 - Information Collection Request Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
... (NAC) Questionnaire for Peace Corps Volunteer Background Investigation (OMB Control Number 0420-0001... Questionnaire for Peace Corps Volunteer Background Investigation (OMB Control Number 0420-0001) requests only... the proposed collection of information, including the validity of the methodology and assumptions used...
Fast and optimized methodology to generate road traffic emission inventories and their uncertainties
NASA Astrophysics Data System (ADS)
Blond, N.; Ho, B. Q.; Clappier, A.
2012-04-01
Road traffic emissions are one of the main sources of air pollution in cities. They are also the main source of uncertainty in the air quality numerical models used to forecast and define abatement strategies. Until now, the available models for generating road traffic emissions have required considerable effort, money and time. This inhibits decisions to preserve air quality, especially in developing countries where road traffic emissions are changing very fast. In this research, we developed a new model designed to produce road traffic emission inventories quickly. This model, called EMISENS, combines the well-known top-down and bottom-up approaches and forces them to be coherent. A Monte Carlo methodology is included for computing emission uncertainties and the uncertainty contribution of each input parameter. This paper presents the EMISENS model and a demonstration of its capabilities through an application over the Strasbourg region (Alsace), France. The same input data collected for the Circul'air model (a bottom-up approach), which has been applied for many years by the Alsatian air quality agency ASPA to forecast and study air pollution, are used to evaluate the impact of several simplifications that a user could make. These experiments make it possible to review older methodologies and to evaluate EMISENS results when few input data are available to produce emission inventories, as in developing countries, and assumptions need to be made. We show that the same average fraction of mileage driven with a cold engine can be used for all cells of the study domain and that a single emission factor can replace separate cold and hot emission factors.
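As a hedged sketch of the Monte Carlo idea described above (not the EMISENS implementation), the following fragment propagates illustrative uncertainties in traffic volume, trip length, emission factors and cold-start share to a total emission estimate and its spread; all distributions and values are hypothetical.

# Monte Carlo propagation of input uncertainties to a total emission estimate.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

vehicles_per_day = rng.normal(50_000, 5_000, n_draws)      # traffic volume
km_per_vehicle   = rng.normal(8.0, 1.0, n_draws)           # mean trip length (km)
hot_ef_g_per_km  = rng.lognormal(mean=np.log(0.15), sigma=0.3, size=n_draws)
cold_share       = rng.uniform(0.1, 0.3, n_draws)          # fraction driven cold
cold_extra_ratio = 1.5                                      # assumed cold/hot ratio

total_g_per_day = (vehicles_per_day * km_per_vehicle * hot_ef_g_per_km
                   * (1 + cold_share * (cold_extra_ratio - 1)))

print(f"mean = {total_g_per_day.mean():.3e} g/day, "
      f"95% interval = {np.percentile(total_g_per_day, [2.5, 97.5])}")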
Realistic micromechanical modeling and simulation of two-phase heterogeneous materials
NASA Astrophysics Data System (ADS)
Sreeranganathan, Arun
This dissertation research focuses on micromechanical modeling and simulations of two-phase heterogeneous materials exhibiting anisotropic and non-uniform microstructures with long-range spatial correlations. Completed work involves the development of methodologies for realistic micromechanical analyses of materials using a combination of stereological techniques, two- and three-dimensional digital image processing, and finite element based modeling tools. The methodologies are developed via their application to two technologically important material systems, namely, discontinuously reinforced aluminum composites containing silicon carbide particles as reinforcement, and boron-modified titanium alloys containing in situ formed titanium boride whiskers. Microstructural attributes such as the shape, size, volume fraction, and spatial distribution of the reinforcement phase in these materials were incorporated in the models without any simplifying assumptions. Instrumented indentation was used to determine the constitutive properties of individual microstructural phases. Micromechanical analyses were performed using realistic 2D and 3D models and the results were compared with experimental data. Results indicated that 2D models fail to capture the deformation behavior of these materials and that 3D analyses are required for realistic simulations. The effect of clustering of silicon carbide particles and associated porosity on the mechanical response of discontinuously reinforced aluminum composites was investigated using 3D models. Parametric studies were carried out using computer-simulated microstructures incorporating realistic microstructural attributes. The intrinsic merit of this research is the development and integration of the required enabling techniques and methodologies for the representation, modeling, and simulation of complex microstructural geometry in two- and three-dimensional space, facilitating better understanding of the effects of microstructural geometry on the mechanical behavior of materials.
Groundwater vulnerability to climate change: A review of the assessment methodology.
Aslam, Rana Ammar; Shrestha, Sangam; Pandey, Vishnu Prasad
2018-01-15
Impacts of climate change on water resources, especially groundwater, can no longer be hidden. These impacts are further exacerbated under the integrated influence of climate variability, climate change and anthropogenic activities. The degree of impact varies according to geographical location and other factors leading systems and regions towards different levels of vulnerability. In the recent past, several attempts have been made in various regions across the globe to quantify the impacts and consequences of climate and non-climate factors in terms of vulnerability to groundwater resources. Firstly, this paper provides a structured review of the available literature, aiming to critically analyse and highlight the limitations and knowledge gaps involved in vulnerability (of groundwater to climate change) assessment methodologies. The effects of indicator choice and the importance of including composite indicators are then emphasised. A new integrated approach for the assessment of groundwater vulnerability to climate change is proposed to successfully address those limitations. This review concludes that the choice of indicator has a significant role in defining the reliability of computed results. The effect of an individual indicator is also apparent but the consideration of a combination (variety) of indicators may give more realistic results. Therefore, in future, depending upon the local conditions and scale of the study, indicators from various groups should be chosen. Furthermore, there are various assumptions involved in previous methodologies, which limit their scope by introducing uncertainty in the calculated results. These limitations can be overcome by implementing the proposed approach. Copyright © 2017 Elsevier B.V. All rights reserved.
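To make the role of indicator choice and weighting concrete, the toy sketch below (hypothetical indicators and weights, not the framework proposed in the review) computes a composite groundwater-vulnerability index from normalized indicators; changing the set or the weights changes the score, which is the sensitivity the review highlights.

# Composite vulnerability index from weighted, normalized indicators.
indicators = {          # each value assumed already normalized to 0-1
    "recharge_decline": 0.7,
    "abstraction_rate": 0.5,
    "aquifer_sensitivity": 0.8,
    "adaptive_capacity": 0.4,   # higher capacity lowers vulnerability
}
weights = {"recharge_decline": 0.3, "abstraction_rate": 0.25,
           "aquifer_sensitivity": 0.25, "adaptive_capacity": 0.2}

vulnerability = sum(weights[k] * (1 - v if k == "adaptive_capacity" else v)
                    for k, v in indicators.items())
print(f"composite vulnerability index: {vulnerability:.2f}")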
Design and analysis of sustainable computer mouse using design for disassembly methodology
NASA Astrophysics Data System (ADS)
Roni Sahroni, Taufik; Fitri Sukarman, Ahmad; Agung Mahardini, Karunia
2017-12-01
This paper presents the design and analysis of a computer mouse using the Design for Disassembly methodology. The existing computer mouse model consists of a number of unnecessary parts that increase assembly and disassembly time in production. The objective of this project is to design a new computer mouse based on the Design for Disassembly (DFD) methodology. The methodology proceeds from sketch generation through concept selection and concept scoring. Based on the design screening, design concept B was selected for further analysis. A new design of computer mouse using a fastening system is proposed. Furthermore, three materials (ABS, polycarbonate, and high-density PE) were considered to determine the environmental impact category. Sustainability analysis was conducted using SolidWorks software. As a result, high-density PE gives the lowest values in the environmental impact categories while offering a high maximum stress value.
Excessive computer game playing: evidence for addiction and aggression?
Grüsser, S M; Thalemann, R; Griffiths, M D
2007-04-01
Computer games have become an ever-increasing part of many adolescents' day-to-day lives. Coupled with this phenomenon, reports of excessive gaming (computer game playing), denominated "computer/video game addiction", have been discussed in the popular press as well as in recent scientific research. The aim of the present study was to investigate the addictive potential of gaming as well as the relationship between excessive gaming and aggressive attitudes and behavior. A sample comprising 7069 gamers answered two questionnaires online. Data revealed that 11.9% of participants (840 gamers) fulfilled diagnostic criteria of addiction concerning their gaming behavior, while there is only weak evidence for the assumption that aggressive behavior is interrelated with excessive gaming in general. The results of this study support the assumption that playing games even without monetary reward can meet criteria of addiction. Hence, the addictive potential of gaming should be taken into consideration with regard to prevention and intervention.
Rocca, Elena; Andersen, Fredrik
2017-08-14
Scientific risk evaluations are constructed from specific evidence, value judgements and biological background assumptions. The latter are the framework-setting suppositions we apply in order to understand some new phenomenon. That background assumptions co-determine choice of methodology, data interpretation, and choice of relevant evidence is an uncontroversial claim in modern basic science. Furthermore, it is commonly accepted that, unless explicated, disagreements in background assumptions can lead to misunderstanding as well as miscommunication. Here, we extend the discussion of background assumptions from basic science to the debate over risk assessment of genetically modified (GM) plants. In this realm, while the different political, social and economic values are often mentioned, the identity and role of the background assumptions at play are rarely examined. We use an example from the debate over risk assessment of stacked genetically modified plants (GM stacks), obtained by applying conventional breeding techniques to GM plants. There are two main regulatory practices for GM stacks: (i) regulate as conventional hybrids and (ii) regulate as new GM plants. We analyzed eight papers representative of these positions and found that, in all cases, additional premises are needed to reach the stated conclusions. We suggest that these premises play the role of biological background assumptions and argue that the most effective way toward a unified framework for risk analysis and regulation of GM stacks is to explicate and examine the biological background assumptions of each position. Once explicated, it is possible either to evaluate which background assumptions best reflect contemporary biological knowledge, or to apply Douglas' 'inductive risk' argument.
Teman, Elly
2008-10-01
This article presents a critical appraisal of the psychosocial empirical research on surrogate mothers, their motivations for entering into surrogacy agreements and the outcome of their participation. I apply a social constructionist approach toward analyzing the scholarship, arguing that the cultural assumption that "normal" women do not voluntarily become pregnant with the premeditated intention of relinquishing the child for money, together with the assumption that "normal" women "naturally" bond with the children they bear, frames much of this research. I argue that this scholarship reveals how Western assumptions about motherhood and family impact upon scientific research. In their attempt to research the anomalous phenomenon of surrogacy, these researchers respond to the cultural anxieties that the practice provokes by framing their research methodologies and questions in a manner that upholds essentialist gendered assumptions about the naturalness and normalness of motherhood and childbearing. This leads the researchers to overlook the intrinsic value of the women's personal experiences and has implications for social policy.
On computational methods for crashworthiness
NASA Technical Reports Server (NTRS)
Belytschko, T.
1992-01-01
The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computational methodologies. The latter include more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time-step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.
Deep Learning in Gastrointestinal Endoscopy.
Patel, Vivek; Armstrong, David; Ganguli, Malika; Roopra, Sandeep; Kantipudi, Neha; Albashir, Siwar; Kamath, Markad V
2016-01-01
Gastrointestinal (GI) endoscopy is used to inspect the lumen or interior of the GI tract for several purposes, including, (1) making a clinical diagnosis, in real time, based on the visual appearances; (2) taking targeted tissue samples for subsequent histopathological examination; and (3) in some cases, performing therapeutic interventions targeted at specific lesions. GI endoscopy is therefore predicated on the assumption that the operator-the endoscopist-is able to identify and characterize abnormalities or lesions accurately and reproducibly. However, as in other areas of clinical medicine, such as histopathology and radiology, many studies have documented marked interobserver and intraobserver variability in lesion recognition. Thus, there is a clear need and opportunity for techniques or methodologies that will enhance the quality of lesion recognition and diagnosis and improve the outcomes of GI endoscopy. Deep learning models provide a basis to make better clinical decisions in medical image analysis. Biomedical image segmentation, classification, and registration can be improved with deep learning. Recent evidence suggests that the application of deep learning methods to medical image analysis can contribute significantly to computer-aided diagnosis. Deep learning models are usually considered to be more flexible and provide reliable solutions for image analysis problems compared to conventional computer vision models. The use of fast computers offers the possibility of real-time support that is important for endoscopic diagnosis, which has to be made in real time. Advanced graphics processing units and cloud computing have also favored the use of machine learning, and more particularly, deep learning for patient care. This paper reviews the rapidly evolving literature on the feasibility of applying deep learning algorithms to endoscopic imaging.
NASA Astrophysics Data System (ADS)
Figiel, Łukasz; Dunne, Fionn P. E.; Buckley, C. Paul
2010-01-01
Layered-silicate nanoparticles offer a cost-effective reinforcement for thermoplastics. Computational modelling has been employed to study large deformations in layered-silicate/poly(ethylene terephthalate) (PET) nanocomposites near the glass transition, as would be experienced during industrial forming processes such as thermoforming or injection stretch blow moulding. Non-linear numerical modelling was applied, to predict the macroscopic large deformation behaviour, with morphology evolution and deformation occurring at the microscopic level, using the representative volume element (RVE) approach. A physically based elasto-viscoplastic constitutive model, describing the behaviour of the PET matrix within the RVE, was numerically implemented into a finite element solver (ABAQUS) using an UMAT subroutine. The implementation was designed to be robust, for accommodating large rotations and stretches of the matrix local to, and between, the nanoparticles. The nanocomposite morphology was reconstructed at the RVE level using a Monte-Carlo-based algorithm that placed straight, high-aspect ratio particles according to the specified orientation and volume fraction, with the assumption of periodicity. Computational experiments using this methodology enabled prediction of the strain-stiffening behaviour of the nanocomposite, observed experimentally, as functions of strain, strain rate, temperature and particle volume fraction. These results revealed the probable origins of the enhanced strain stiffening observed: (a) evolution of the morphology (through particle re-orientation) and (b) early onset of stress-induced pre-crystallization (and hence lock-up of viscous flow), triggered by the presence of particles. The computational model enabled prediction of the effects of process parameters (strain rate, temperature) on evolution of the morphology, and hence on the end-use properties.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-15
... information, please contact Clark R. Fleming, Field Division Counsel, El Paso Intelligence Center, 11339 SSG... of information, including the validity of the methodology and assumptions used; [[Page 42109... 143. Component: El Paso Intelligence Center, Drug Enforcement Administration, U.S. Department of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-15
... collection of information is accurate and based on valid assumptions and methodology; and ways to enhance the...: February 13, 2012. FOR FURTHER INFORMATION CONTACT: Ms. Dana Munson, Procurement Analyst, General Services.../or business confidential information provided. SUPPLEMENTARY INFORMATION: A. Purpose Under certain...
76 FR 4096 - Notice of Submission for OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-24
..., including the validity of the methodology and assumptions used; (3) Enhance the quality, utility, and...: Revision. Title of Collection: 2011-12 National Postsecondary Student Aid Study (NPSAS:12) Field Test...: Annually. Affected Public: Individuals or households; Businesses or other for-profit; Not-for-profit...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-04
... for the Clinical Laboratory Improvement Amendments of 1988 Categorization AGENCY: Food and Drug... notice solicits comments on administrative procedures for the Clinical Laboratory Improvement Amendments..., including the validity of the methodology and assumptions used; (3) ways to enhance the quality, utility...
An overview of the model integration process: From pre-integration assessment to testing
Integration of models requires linking models that can be developed using different tools, methodologies, and assumptions. We performed a literature review with the aim of improving our understanding of the model integration process, and also presenting better strategies for buildin...
Some Remarks on the Theory of Political Education. German Studies Notes.
ERIC Educational Resources Information Center
Holtmann, Antonius
This theoretical discussion explores pedagogical assumptions of political education in West Germany. Three major methodological orientations are discussed: the normative-ontological, empirical-analytical, and dialectical-historical. The author recounts the aims, methods, and basic presuppositions of each of these approaches. Topics discussed…
State Politics and Education: An Examination of Selected Multiple-State Case Studies.
ERIC Educational Resources Information Center
Burlingame, Martin; Geske, Terry G.
1979-01-01
Reviews the multiple-state case study literature, highlights some findings, discusses several methodological issues, and concludes with suggestions for possible research agendas. Urges students and researchers to be more actively critical of the assumptions and findings of these studies. (Author/IRT)
Cognitive-Developmental and Behavior-Analytic Theories: Evolving into Complementarity
ERIC Educational Resources Information Center
Overton, Willis F.; Ennis, Michelle D.
2006-01-01
Historically, cognitive-developmental and behavior-analytic approaches to the study of human behavior change and development have been presented as incompatible alternative theoretical and methodological perspectives. This presumed incompatibility has been understood as arising from divergent sets of metatheoretical assumptions that take the form…
Quantitative risk assessment is fraught with many uncertainties. The validity of the assumptions underlying the methods employed is often difficult to test or validate. Cancer risk assessment has generally employed either human epidemiological data from relatively high occupatio...
Thresholds for Chemically Induced Toxicity: Theories and Evidence
Regulatory agencies define “science policies” as a means of proceeding with risk assessments and management decisions in the absence of all the data these bodies would like. Policies may include the use of default assumptions, values and methodologies. The U.S. EPA 20...
Education's Love-Hate Relationship.
ERIC Educational Resources Information Center
Hay, Tina M.
1992-01-01
Although higher education institutions dislike rankings published in the mass media, they like the attention the rankings create and prefer to be included rather than excluded. Common criticisms of the methodology include emphasis on inappropriate criteria, unfair comparison of private and public institutions, faulty assumptions, inaccurate data,…
75 FR 5779 - Proposed Emergency Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-04
... proposed collection of information, including the validity of the methodology and assumptions used; (c... Collection Request Title: Electricity Delivery and Energy Reliability Recovery Act Smart Grid Grant Program..., Chief Operating Officer, Electricity Delivery and Energy Reliability. [FR Doc. 2010-2422 Filed 2-3-10; 8...
76 FR 65504 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-21
..., including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility... Reliability Standard, FAC- 008-3--Facility Ratings, developed by the North American Electric Reliability... Reliability Standard FAC- 008-3 is pending before the Commission. The proposed Reliability Standard modifies...
Farmer Experience of Pluralistic Agricultural Extension, Malawi
ERIC Educational Resources Information Center
Chowa, Clodina; Garforth, Chris; Cardey, Sarah
2013-01-01
Purpose: Malawi's current extension policy supports pluralism and advocates responsiveness to farmer demand. We investigate whether smallholder farmers' experience supports the assumption that access to multiple service providers leads to extension and advisory services that respond to the needs of farmers. Design/methodology/approach: Within a…
ICCE/ICCAI 2000 Full & Short Papers (Methodologies).
ERIC Educational Resources Information Center
2000
This document contains the full text of the following full and short papers on methodologies from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A Methodology for Learning Pattern Analysis from Web Logs by Interpreting Web Page Contents" (Chih-Kai Chang and…
Basic principles of respiratory function monitoring in ventilated newborns: A review.
Schmalisch, Gerd
2016-09-01
Respiratory monitoring during mechanical ventilation provides a real-time picture of patient-ventilator interaction and is a prerequisite for lung-protective ventilation. Nowadays, measurements of airflow, tidal volume and applied pressures are standard in neonatal ventilators. The measurement of lung volume during mechanical ventilation by tracer gas washout techniques is still under development. The clinical use of capnography, although well established in adults, has not been embraced by neonatologists because of technical and methodological problems in very small infants. While the ventilatory parameters are well defined, the calculation of other physiological parameters is based upon specific assumptions which are difficult to verify. Incomplete knowledge of the theoretical background of these calculations and their limitations can lead to incorrect interpretations with clinical consequences. Therefore, the aim of this review was to describe the basic principles and the underlying assumptions of currently used methods for respiratory function monitoring in ventilated newborns and to highlight methodological limitations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Documentation of volume 3 of the 1978 Energy Information Administration annual report to congress
NASA Astrophysics Data System (ADS)
1980-02-01
In a preliminary overview of the projection process, the relationship between energy prices, supply, and demand is addressed. Topics treated in detail include a description of energy economic interactions, assumptions regarding world oil prices, and energy modeling in the long term beyond 1995. Subsequent sections present the general approach and methodology underlying the forecasts, and define and describe the alternative projection series and their associated assumptions. Short-term, midterm, and long-term forecasting of petroleum, coal, and gas supplies are included. The role of nuclear power as an energy source is also discussed.
Wrinkle-free design of thin membrane structures using stress-based topology optimization
NASA Astrophysics Data System (ADS)
Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan
2017-05-01
Thin membrane structures experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and the linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.
High-Fidelity Coupled Monte-Carlo/Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Ivanov, Aleksandar; Sanchez, Victor; Ivanov, Kostadin
2014-06-01
Monte Carlo methods have been used as reference reactor physics calculation tools worldwide. Advances in computer technology allow the calculation of detailed flux distributions in both space and energy. In most cases, however, those calculations are done under the assumption of homogeneous material density and temperature distributions. The aim of this work is to develop a consistent methodology for providing realistic three-dimensional thermal-hydraulic distributions by coupling the in-house developed sub-channel code SUBCHANFLOW with the standard Monte Carlo transport code MCNP. In addition to the innovative technique of on-the-fly material definition, a flux-based weight-window technique has been introduced to improve both the magnitude and the distribution of the relative errors. Finally, a coupled code system for the simulation of steady-state reactor physics problems has been developed. Besides the problem of effective feedback data interchange between the codes, the treatment of the temperature dependence of the continuous-energy nuclear data has been investigated.
Input-output model for MACCS nuclear accident impacts estimation¹
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
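The REAcct tool itself is not described in detail here; the toy Leontief input-output calculation below (hypothetical three-sector coefficients and demand shock) only illustrates how a sector-level disruption propagates to an economy-wide output or GDP-type loss.

# Toy Leontief input-output loss calculation for a hypothetical 3-sector economy.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],       # technical coefficients a_ij (assumed)
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
leontief_inverse = np.linalg.inv(np.eye(3) - A)   # (I - A)^-1

baseline_demand = np.array([100.0, 80.0, 60.0])   # final demand ($M, hypothetical)
shocked_demand  = np.array([100.0, 40.0, 60.0])   # sector 2 halved by the accident

output_loss = leontief_inverse @ (baseline_demand - shocked_demand)
print("total output loss by sector ($M):", output_loss.round(2))
print("economy-wide loss ($M):", output_loss.sum().round(2))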
Component-based integration of chemistry and optimization software.
Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L
2004-11-15
Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
Diffuse-Interface Modelling of Flow in Porous Media
NASA Astrophysics Data System (ADS)
Addy, Doug; Pradas, Marc; Schmuck, Marcus; Kalliadasis, Serafim
2016-11-01
Multiphase flows are ubiquitous in a wide spectrum of scientific and engineering applications, and their computational modelling often poses many challenges associated with the presence of free boundaries and interfaces. Interfacial flows in porous media encounter additional challenges and complexities due to their inherently multiscale behaviour. Here we investigate the dynamics of interfaces in porous media using an effective convective Cahn-Hilliard (CH) equation recently derived from a Stokes-CH equation for microscopic heterogeneous domains by means of a homogenization methodology, in which the microscopic details are taken into account through effective tensor coefficients given by a Poisson equation. The equations are decoupled under appropriate assumptions and solved in series using a classic finite-element formulation with the open-source software FEniCS. We investigate the effects of different microscopic geometries, including periodic and non-periodic, on the bulk fluid flow, and find that our model is able to describe the effective macroscopic behaviour without the need to resolve the microscopic details.
The tangential velocity of M31: CLUES from constrained simulations
NASA Astrophysics Data System (ADS)
Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Courtois, Hélène; Tully, R. Brent
2016-07-01
Determining the precise value of the tangential component of the velocity of M31 is a non-trivial astrophysical issue that relies on complicated modelling. This has recently led to conflicting estimates, obtained by several groups that used different methodologies and assumptions. This Letter addresses the issue by computing a Bayesian posterior distribution function of this quantity, in order to measure the compatibility of those estimates with Λ cold dark matter (ΛCDM). This is achieved using an ensemble of Local Group (LG) look-alikes collected from a set of constrained simulations (CSs) of the local Universe, and a standard unconstrained ΛCDM simulation. The latter allows us to build a control sample of LG-like pairs and to single out the influence of the environment on our results. We find that neither estimate is at odds with ΛCDM; however, whereas CSs favour higher values of vtan, the reverse is true for estimates based on LG samples gathered from unconstrained simulations, overlooking the environmental element.
Partitioning uncertainty in streamflow projections under nonstationary model conditions
NASA Astrophysics Data System (ADS)
Chawla, Ila; Mujumdar, P. P.
2018-02-01
Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contribution from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to overall uncertainty in streamflow projections using analysis of variance (ANOVA) approach. Generally, most of the impact assessment studies are carried out with unchanging hydrologic model parameters in future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression based methodology is presented to obtain the hydrologic model parameters with changing land use and climate scenarios in future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set-up over the basin, under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in UGB under the nonstationary model condition is found to reduce in future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that model stationarity assumption and GCMs along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine stationarity assumption of models before considering them for future streamflow projections and segregate the contribution of various sources to the uncertainty.
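As an illustration of the ANOVA-based segregation described above (hypothetical numbers, not the Upper Ganga Basin results), the sketch below decomposes the spread of projected mean flows across GCMs and emission scenarios into main effects and an interaction term.

# Two-way ANOVA-style variance decomposition of projected flows.
import numpy as np

# rows = 3 GCMs, columns = 2 emission scenarios; entries = projected mean flow
flow = np.array([[120.0, 110.0],
                 [135.0, 118.0],
                 [128.0, 105.0]])

grand = flow.mean()
gcm_effect      = flow.mean(axis=1) - grand            # row (GCM) main effects
scenario_effect = flow.mean(axis=0) - grand            # column (scenario) main effects
interaction     = flow - grand - gcm_effect[:, None] - scenario_effect[None, :]

ss_gcm  = flow.shape[1] * np.sum(gcm_effect ** 2)
ss_scen = flow.shape[0] * np.sum(scenario_effect ** 2)
ss_int  = np.sum(interaction ** 2)
ss_tot  = np.sum((flow - grand) ** 2)

for name, ss in [("GCM", ss_gcm), ("scenario", ss_scen), ("interaction", ss_int)]:
    print(f"{name}: {100 * ss / ss_tot:.1f}% of total variance")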
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
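As a toy illustration of how error latency can enter such an analysis (not the model proposed in the paper), the sketch below simulates the chance that an error occurring at a uniformly random time during a task is still undetected when the task completes, and checks it against the closed-form value for an exponentially distributed latency; T and mu are hypothetical.

# Probability that a task of length T finishes before an error is detected,
# assuming one error at a uniform time in [0, T] and exponential latency.
import math
import random

def p_undetected_sim(T, mu, n=200_000, seed=7):
    rng = random.Random(seed)
    undetected = 0
    for _ in range(n):
        t_err = rng.uniform(0.0, T)                 # when the error occurs
        latency = rng.expovariate(mu)               # time until it is detected
        if t_err + latency > T:                     # still latent at task end
            undetected += 1
    return undetected / n

T, mu = 10.0, 0.5
closed_form = (1.0 - math.exp(-mu * T)) / (mu * T)
print(p_undetected_sim(T, mu), closed_form)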
VASSAR: Value assessment of system architectures using rules
NASA Astrophysics Data System (ADS)
Selva, D.; Crawley, E. F.
A key step of the mission development process is the selection of a system architecture, i.e., the layout of the major high-level system design decisions. This step typically involves the identification of a set of candidate architectures and a cost-benefit analysis to compare them. Computational tools have been used in the past to bring rigor and consistency into this process. These tools can automatically generate architectures by enumerating different combinations of decisions and options. They can also evaluate these architectures by applying cost models and simplified performance models. Current performance models are purely quantitative tools that are best suited to evaluating the technical performance of a mission design. However, assessing the relative merit of a system architecture is a much more holistic task than evaluating the performance of a mission design. Indeed, the merit of a system architecture comes from satisfying a variety of stakeholder needs, some of which are easy to quantify, and some of which are harder to quantify (e.g., elegance, scientific value, political robustness, flexibility). Moreover, assessing the merit of a system architecture at these very early stages of design often requires dealing with a mix of (a) quantitative and semi-qualitative data and (b) objective and subjective information. Current computational tools are poorly suited for these purposes. In this paper, we propose a general methodology that can be used to assess the relative merit of several candidate system architectures in the presence of objective, subjective, quantitative, and qualitative stakeholder needs. The methodology is called VASSAR (Value ASsessment for System Architectures using Rules). The major underlying assumption of the VASSAR methodology is that the merit of a system architecture can be assessed by comparing the capabilities of the architecture with the stakeholder requirements. Hence, for example, a candidate architecture that fully satisfies all critical stakeholder requirements is a good architecture. The assessment process is thus fundamentally seen as a pattern-matching process in which capabilities match requirements, which motivates the use of rule-based expert systems (RBES). This paper describes the VASSAR methodology and shows how it can be applied to a large complex space system, namely an Earth observation satellite system. Companion papers show its applicability to the NASA space communications and navigation program and the joint NOAA-DoD NPOESS program.
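The VASSAR rule base is not given in the abstract; the following highly simplified sketch (hypothetical requirements, weights and architectures) only illustrates the pattern-matching idea of scoring an architecture by the stakeholder requirements its capabilities satisfy.

# Score architectures by matching capabilities against weighted requirements.
requirements = [
    # (name, weight, rule over the capability dictionary) - all hypothetical
    ("global_coverage",  0.4, lambda c: c.get("coverage", 0) >= 0.9),
    ("daily_revisit",    0.3, lambda c: c.get("revisit_hours", 999) <= 24),
    ("soil_moisture",    0.3, lambda c: "L-band_radiometer" in c.get("instruments", [])),
]

def score(architecture):
    satisfied = [(name, w) for name, w, rule in requirements if rule(architecture)]
    return sum(w for _, w in satisfied), [name for name, _ in satisfied]

arch_a = {"coverage": 0.95, "revisit_hours": 12, "instruments": ["L-band_radiometer"]}
arch_b = {"coverage": 0.80, "revisit_hours": 48, "instruments": ["optical_imager"]}
print(score(arch_a), score(arch_b))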
Computer Majors' Education as Moral Enterprise: A Durkheimian Analysis.
ERIC Educational Resources Information Center
Rigoni, David P.; Lamagdeleine, Donald R.
1998-01-01
Building on Durkheim's (Emile) emphasis on the moral dimensions of social reality and using it to explore contemporary computer education, contends that many of his claims are justified. Argues that the college computer department has created a set of images, maxims, and operating assumptions that frames its curriculum, courses, and student…
ERIC Educational Resources Information Center
Mills, Steven C.; Ragan, Tillman J.
This paper examines a research paradigm that is particularly suited to experimentation-related computer-based instruction and integrated learning systems. The main assumption of the model is that one of the most powerful capabilities of computer-based instruction, and specifically of integrated learning systems, is the capacity to adapt…
The Relationship between Computational Fluency and Student Success in General Studies Mathematics
ERIC Educational Resources Information Center
Hegeman, Jennifer; Waters, Gavin
2012-01-01
Many developmental mathematics programs emphasize computational fluency with the assumption that this is a necessary contributor to student success in general studies mathematics. In an effort to determine which skills are most essential, scores on a computational fluency test were correlated with student success in general studies mathematics at…
Learning Styles and Computers.
ERIC Educational Resources Information Center
Geisert, Gene; Dunn, Rita
Although the use of computers in the classroom has been heralded as a major breakthrough in education, many educators have yet to use computers to their fullest advantage. This is perhaps due to the traditional assumption that students differed only in their speed of learning. However, new research indicates that students differ in their style of…
Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops
NASA Astrophysics Data System (ADS)
Sharma, Vikrant
2017-01-01
The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper a hypothetical method to verify the simulation hypothesis is discussed, using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where the dimension signifies the hierarchical level, within nested simulations, at which our reality exists. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely creates convergence, implying that verifying whether local reality is a grand simulation is feasible with adequate compute capability. The action of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and their implications are discussed.
ERIC Educational Resources Information Center
Selig, Judith A.; And Others
This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programmed text on basic ophthalmology--onto a computer for subsequent information retrieval and computer-assisted…
Cyber-Informed Engineering: The Need for a New Risk Informed and Design Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Joseph Daniel; Anderson, Robert Stephen
Current engineering and risk management methodologies do not contain the foundational assumptions required to address the intelligent adversary's capabilities in malevolent cyber attacks. Current methodologies focus on equipment failures or human error as initiating events for a hazard, while cyber attacks use the functionality of a trusted system to perform operations outside of the intended design and without the operator's knowledge. These threats can by-pass or manipulate traditionally engineered safety barriers and present false information, invalidating the fundamental basis of a safety analysis. Cyber threats must be fundamentally analyzed from a completely new perspective where neither equipment nor human operation can be fully trusted. A new risk analysis and design methodology needs to be developed to address this rapidly evolving threatscape.
Shedrawy, J; Siroka, A; Oxlade, O; Matteelli, A; Lönnroth, K
2017-09-01
Tuberculosis (TB) in migrants from endemic to low-incidence countries results mainly from the reactivation of latent tuberculous infection (LTBI). LTBI screening policies for migrants vary greatly between countries, and the evidence on the cost-effectiveness of the different approaches is weak and heterogeneous. The aim of this review was to assess the methodology used in published economic evaluations of LTBI screening among migrants to identify critical methodological options that must be considered when using modelling to determine value for money from different economic perspectives. Three electronic databases were searched and 10 articles were included. There was considerable variation across this small number of studies with regard to economic perspective, main outcomes, modelling technique, screening options and target populations considered, as well as in parameterisation of the epidemiological situation, test accuracy, efficacy, safety and programme performance. Only one study adopted a societal perspective; others adopted a health care or wider government perspective. Parameters representing the cascade of screening and treating LTBI varied widely, with some studies using highly aspirational scenarios. This review emphasises the need for a more harmonised approach for economic analysis, and better transparency in how policy options and economic perspectives influence methodological choices. Variability is justifiable for some parameters. However, sufficient data are available to standardise others. A societal perspective is ideal, but can be challenging due to limited data. Assumptions about programme performance should be based on empirical data or at least realistic assumptions. Results should be interpreted within specific contexts and policy options, with cautious generalisations.
NASA Astrophysics Data System (ADS)
Traeger, Brad; Srivatsa, Sanjay S.; Beussman, Kevin M.; Wang, Yechun; Suzen, Yildirim B.; Rybicki, Frank J.; Mazur, Wojciech; Miszalski-Jamka, Tomasz
2016-04-01
Aortic stenosis is the most common valvular heart disease. Assessing the valve's contribution to total ventricular load is essential for the aging population. A CT scan from one patient was used to create an in vivo tricuspid aortic valve geometry, which was assessed with computational fluid dynamics (CFD). CFD simulated the pressure, velocity, and flow rate, which were used to assess the Gorlin formula and the continuity equation, the current clinical diagnostic standards. The results demonstrate an underestimation of the anatomic orifice area (AOA) by the Gorlin formula and an overestimation of AOA by the continuity equation using peak velocities, as would be measured clinically by Doppler echocardiography. As a result, we suggest that the Gorlin formula is unable to achieve the intended estimation of AOA and largely underestimates AOA at the critical low-flow states present in heart failure. The disparity in the use of echocardiography with the continuity equation is due to the variation in velocity profile between the outflow tract and the valve orifice. Comparison of time-averaged orifice areas by Gorlin and continuity methods with instantaneous orifice areas by planimetry can mask the errors of these methods, which is a result of the assumption that the blood flow is inviscid.
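For reference, the two clinical estimates being assessed have standard textbook forms; the sketch below evaluates them with hypothetical patient values (cardiac output, heart rate, systolic ejection period, mean gradient, LVOT diameter and velocities) and is not derived from the study's CFD data.

# Standard forms of the Gorlin formula and the continuity equation for
# estimating aortic valve area (AVA, cm^2); input values are hypothetical.
import math

def gorlin_ava(cardiac_output_ml_min, heart_rate_bpm, sep_s, mean_gradient_mmhg):
    """Gorlin formula: AVA = Q / (44.3 * sqrt(mean gradient)),
    with Q the mean systolic flow rate in mL/s."""
    q_ml_s = cardiac_output_ml_min / (heart_rate_bpm * sep_s)
    return q_ml_s / (44.3 * math.sqrt(mean_gradient_mmhg))

def continuity_ava(lvot_diameter_cm, v_lvot_m_s, v_aortic_m_s):
    """Continuity equation: AVA = A_LVOT * V_LVOT / V_AV (peak velocities)."""
    a_lvot = math.pi * (lvot_diameter_cm / 2.0) ** 2
    return a_lvot * v_lvot_m_s / v_aortic_m_s

print(gorlin_ava(5000.0, 70.0, 0.33, 40.0))   # hypothetical patient values
print(continuity_ava(2.1, 1.0, 4.0))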
Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis
NASA Technical Reports Server (NTRS)
Cox, C. F.; Cinnella, P.; Westmoreland, S.
1996-01-01
The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'Black Box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generalization of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures and plasmas. As demonstration of the potential of the methodologies, several solutions, involving reacting and perfect gas flows, will be presented. Included is a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques will be discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide for an efficient and general tool for the design and analysis of propulsion systems.
Model documentation renewable fuels module of the National Energy Modeling System
NASA Astrophysics Data System (ADS)
1995-06-01
This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1995 Annual Energy Outlook (AEO95) forecasts. The report catalogs and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. The RFM consists of six analytical submodules that represent each of the major renewable energy resources -- wood, municipal solid waste (MSW), solar energy, wind energy, geothermal energy, and alcohol fuels. The RFM also reads in hydroelectric facility capacities and capacity factors from a data file for use by the NEMS Electricity Market Module (EMM). The purpose of the RFM is to define the technological, cost, and resource size characteristics of renewable energy technologies. These characteristics are used to compute a levelized cost to be competed against other similarly derived costs from other energy sources and technologies. The competition of these energy sources over the NEMS time horizon determines the market penetration of these renewable energy technologies. The characteristics include available energy capacity, capital costs, fixed operating costs, variable operating costs, capacity factor, heat rate, construction lead time, and fuel product price.
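As a hedged sketch of the levelized-cost calculation mentioned above (illustrative technology parameters, not actual RFM inputs), the following function spreads annualized capital and O&M costs over the energy generated per kW of capacity.

# Minimal levelized-cost-of-energy calculation with hypothetical parameters.
def levelized_cost(capital_per_kw, fixed_om_per_kw_yr, variable_om_per_mwh,
                   capacity_factor, lifetime_yr, discount_rate):
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))        # capital recovery factor
    mwh_per_kw_yr = 8760 * capacity_factor / 1000.0          # annual energy per kW
    annualized = capital_per_kw * crf + fixed_om_per_kw_yr   # $/kW-yr
    return annualized / mwh_per_kw_yr + variable_om_per_mwh  # $/MWh

print(levelized_cost(capital_per_kw=1500.0, fixed_om_per_kw_yr=40.0,
                     variable_om_per_mwh=0.0, capacity_factor=0.35,
                     lifetime_yr=25, discount_rate=0.07))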
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2014-05-01
Modeling complex vibroacoustic systems including poroelastic materials using finite element based methods can be unfeasible for practical applications. For this reason, analytical approaches such as the transfer matrix method are often preferred to obtain a quick estimation of the vibroacoustic parameters. However, the strong assumptions inherent within the transfer matrix method lead to a lack of accuracy in the description of the geometry of the system. As a result, the transfer matrix method is inherently limited to the high frequency range. Nowadays, hybrid substructuring procedures have become quite popular. Indeed, different modeling techniques are typically sought to describe complex vibroacoustic systems over the widest possible frequency range. As a result, the flexibility and accuracy of the finite element method and the efficiency of the transfer matrix method could be coupled in a hybrid technique to obtain a reduction of the computational burden. In this work, a hybrid methodology is proposed. The performances of the method in predicting the vibroacoustic indicators of flat structures with attached homogeneous acoustic treatments are assessed. The results prove that, under certain conditions, the hybrid model allows for a reduction of the computational effort while preserving enough accuracy with respect to the full finite element solution.
Shieh, G
2013-12-01
The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
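A minimal sketch of the conventional noncentral-t interval for the standardized mean difference under the homogeneous-variance assumption (the classical Cohen's d procedure the article aims to improve on; the improved unequal-variance formula itself is not reproduced here):

```python
import numpy as np
from scipy.stats import nct
from scipy.optimize import brentq

def cohens_d_ci(x1, x2, conf=0.95):
    """Noncentral-t confidence interval for Cohen's d assuming equal variances."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    df = n1 + n2 - 2
    sp = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / df)
    d = (x1.mean() - x2.mean()) / sp
    scale = np.sqrt(n1 * n2 / (n1 + n2))   # converts d into a t statistic
    t_obs = d * scale
    alpha = 1.0 - conf
    # The noncentral-t cdf is decreasing in the noncentrality parameter, so each
    # bound is found by root-finding over a generous bracket around t_obs.
    lo = brentq(lambda nc: nct.cdf(t_obs, df, nc) - (1 - alpha / 2), t_obs - 20, t_obs + 20)
    hi = brentq(lambda nc: nct.cdf(t_obs, df, nc) - alpha / 2, t_obs - 20, t_obs + 20)
    return d, lo / scale, hi / scale

# Example with simulated data
rng = np.random.default_rng(0)
print(cohens_d_ci(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 35)))
```

Inverting the noncentral-t distribution in this way is computationally awkward, which is one reason a simpler yet accurate interval, such as the one the article proposes, is attractive.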
Remotely Telling Humans and Computers Apart: An Unsolved Problem
NASA Astrophysics Data System (ADS)
Hernandez-Castro, Carlos Javier; Ribagorda, Arturo
The ability to tell humans and computers apart is imperative to protect many services from misuse and abuse. For this purpose, tests called CAPTCHAs or HIPs have been designed and put into production. Recent history shows that most (if not all) can be broken given enough time and commercial interest: CAPTCHA design seems to be a much more difficult problem than previously thought. The assumption that difficult-AI problems can be easily converted into valid CAPTCHAs is misleading. There are also some extrinsic problems that do not help, especially the large number of in-house designs that are put into production without any prior public critique. In this paper we present a state-of-the-art survey of current HIPs, including proposals that are now in production. We classify them according to their basic design ideas. We discuss current attacks as well as future attack paths, and we also present common errors in design, and how many implementation flaws can transform a not necessarily bad idea into a weak CAPTCHA. We present examples of these flaws, using specific well-known CAPTCHAs. In a more theoretical way, we discuss the threat model: confronted risks and countermeasures. Finally, we introduce and discuss some desirable properties that new HIPs should have, concluding with some proposals for future work, including methodologies for design, implementation and security assessment.
Respondent-Driven Sampling: An Assessment of Current Methodology.
Gile, Krista J; Handcock, Mark S
2010-08-01
Respondent-Driven Sampling (RDS) employs a variant of a link-tracing network sampling strategy to collect data from hard-to-reach populations. By tracing the links in the underlying social network, the process exploits the social structure to expand the sample and reduce its dependence on the initial (convenience) sample. The current estimators of population averages make strong assumptions in order to treat the data as a probability sample. We evaluate three critical sensitivities of the estimators: to bias induced by the initial sample, to uncontrollable features of respondent behavior, and to the without-replacement structure of sampling. Our analysis indicates: (1) that the convenience sample of seeds can induce bias, and the number of sample waves typically used in RDS is likely insufficient for the type of nodal mixing required to obtain the reputed asymptotic unbiasedness; (2) that preferential referral behavior by respondents leads to bias; (3) that when a substantial fraction of the target population is sampled the current estimators can have substantial bias. This paper sounds a cautionary note for the users of RDS. While current RDS methodology is powerful and clever, the favorable statistical properties claimed for the current estimates are shown to be heavily dependent on often unrealistic assumptions. We recommend ways to improve the methodology.
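For context, one of the estimators in current use, the inverse-degree weighted (Volz-Heckathorn, RDS-II) estimator of a population mean or proportion, can be sketched as follows (a generic formulation, not code from the paper; its validity rests on exactly the kinds of assumptions the authors probe):

```python
import numpy as np

def rds_ii_estimate(y, degree):
    """RDS-II (Volz-Heckathorn) estimator: inverse-degree weighted mean of trait y."""
    y = np.asarray(y, float)                  # e.g., 0/1 indicator of the trait of interest
    w = 1.0 / np.asarray(degree, float)       # weights from self-reported network degrees
    return np.sum(w * y) / np.sum(w)

# Example: six respondents with trait indicators and reported degrees
print(rds_ii_estimate([1, 0, 1, 1, 0, 0], [10, 3, 25, 8, 4, 6]))
```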
A cost-construction model to assess the total cost of an anesthesiology residency program.
Franzini, L; Berry, J M
1999-01-01
Although the total costs of graduate medical education are difficult to quantify, this information may be of great importance for health policy and planning over the next decade. This study describes the total costs associated with the residency program at the University of Texas--Houston Department of Anesthesiology during the 1996-1997 academic year. The authors used cost-construction methodology, which computes the cost of teaching from information on program description, resident enrollment, faculty and resident salaries and benefits, and overhead. Surveys of faculty and residents were conducted to determine the time spent in teaching activities; access to institutional and departmental financial records was obtained to quantify associated costs. The model was then developed and examined for a range of assumptions concerning resident productivity, replacement costs, and the cost allocation of activities jointly producing clinical care and education. The cost of resident training (cost of didactic teaching, direct clinical supervision, teaching-related preparation and administration, plus the support of the teaching program) was estimated at $75,070 per resident per year. This cost was less than the estimated replacement value of the teaching and clinical services provided by residents, $103,436 per resident per year. Sensitivity analysis, with different assumptions regarding resident replacement cost and reimbursement rates, varied the cost estimates but generally identified the anesthesiology residency program as a financial asset. In most scenarios, the value of the teaching and clinical services provided by residents exceeded the cost of the resources used in the educational program.
Hanin, Leonid; Rose, Jason
2018-03-01
We study metastatic cancer progression through an extremely general individual-patient mathematical model that is rooted in the contemporary understanding of the underlying biomedical processes yet is essentially free of specific biological assumptions of mechanistic nature. The model accounts for primary tumor growth and resection, shedding of metastases off the primary tumor and their selection, dormancy and growth in a given secondary site. However, functional parameters descriptive of these processes are assumed to be essentially arbitrary. In spite of such generality, the model allows for computing the distribution of site-specific sizes of detectable metastases in closed form. Under the assumption of exponential growth of metastases before and after primary tumor resection, we showed that, regardless of other model parameters and for every set of site-specific volumes of detected metastases, the model-based likelihood-maximizing scenario is always the same: complete suppression of metastatic growth before primary tumor resection followed by an abrupt growth acceleration after surgery. This scenario is commonly observed in clinical practice and is supported by a wealth of experimental and clinical studies conducted over the last 110 years. Furthermore, several biological mechanisms have been identified that could bring about suppression of metastasis by the primary tumor and accelerated vascularization and growth of metastases after primary tumor resection. To the best of our knowledge, the methodology for uncovering general biomedical principles developed in this work is new.
An integrated biomechanical modeling approach to the ergonomic evaluation of drywall installation.
Yuan, Lu; Buchholz, Bryan; Punnett, Laura; Kriebel, David
2016-03-01
Three different methodologies (work sampling, computer simulation, and biomechanical modeling) were integrated to study the physical demands of drywall installation. PATH (Posture, Activity, Tools, and Handling), a work-sampling based method, was used to quantify the percent of time that the drywall installers were conducting different activities with different body segment (trunk, arm, and leg) postures. Utilizing Monte-Carlo simulation to convert the categorical PATH data into continuous variables as inputs for the biomechanical models, the required muscle contraction forces and joint reaction forces at the low back (L4/L5) and shoulder (glenohumeral and sternoclavicular joints) were estimated for a typical eight-hour workday. To demonstrate the robustness of this modeling approach, a sensitivity analysis was conducted to examine the impact of some of the quantitative assumptions made to facilitate the modeling. The results indicated that the modeling approach seemed to be most sensitive to both the distribution of work cycles over a typical eight-hour workday and the distribution and values of the Euler angles used to determine the "shoulder rhythm." Other assumptions, including the distribution of trunk postures, did not appear to have a significant impact on the model outputs. It was concluded that the integrated approach might provide an applicable examination of physical loads during non-routine construction work, especially for those operations/tasks that have certain patterns/sequences for the workers to follow. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
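A minimal sketch of the Monte-Carlo step described above, converting categorical work-sampling frequencies into continuous model inputs; the posture categories, angle ranges, and frequencies below are hypothetical placeholders rather than PATH data from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical PATH-style trunk posture categories: observed frequency and an
# assumed flexion-angle range (degrees) for each category.
categories = {
    "neutral":        (0.55, (0, 20)),
    "mild_flexion":   (0.30, (20, 45)),
    "severe_flexion": (0.15, (45, 90)),
}
labels = list(categories)
probs = [categories[c][0] for c in labels]

def sample_trunk_angles(n):
    """Draw n continuous trunk flexion angles consistent with the categorical frequencies."""
    picks = rng.choice(labels, size=n, p=probs)
    lo = np.array([categories[c][1][0] for c in picks], float)
    hi = np.array([categories[c][1][1] for c in picks], float)
    return rng.uniform(lo, hi)   # uniform within each category's assumed angle range

angles = sample_trunk_angles(10_000)   # continuous inputs for a downstream biomechanical model
print(angles.mean(), angles.max())
```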
The comparison of various approach to evaluation erosion risks and design control erosion measures
NASA Astrophysics Data System (ADS)
Kapicka, Jiri
2015-04-01
At present, a single methodology is used in the Czech Republic to compute and compare erosion risks; it also includes a method for designing erosion control measures. The methodology is based on the Universal Soil Loss Equation (USLE) and its result, the long-term average annual soil loss (G), and it is used by landscape planners. However, data and statistics from the database of erosion events in the Czech Republic show that much of the damage originates in local erosion episodes. The extent and impact of these events depend on the local precipitation event, the current crop growth stage, and the soil conditions. Such events can damage agricultural land, municipal property, and hydraulic structures even at locations that appear to be in good condition in terms of long-term average annual soil loss. An alternative way to compute and compare erosion risks is an event-based (episode) approach. This paper presents a comparison of different approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events have been recorded. The study area is simple agricultural land without barriers that could strongly influence water flow and sediment transport. For all methodologies, the erosion risk computations were based on laboratory analyses of soil samples collected in the study area. Results from the USLE and MUSLE methodologies and from the mathematical model Erosion 3D were compared, and the differences in the spatial distribution of the locations with the highest soil erosion were examined and discussed. A further part of the paper presents the differences among erosion control measures designed on the basis of the different methodologies. The results show that the computed erosion risks vary with the methodology used. These differences can open a discussion about how erosion risks should be computed and evaluated in areas of differing importance.
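For orientation, the empirical models named above are commonly written as follows (textbook forms, with MUSLE in the Williams runoff-based formulation; these are not equations quoted from the paper):

A_{USLE} = R \cdot K \cdot LS \cdot C \cdot P, \qquad Y_{MUSLE} = 11.8\,(Q \cdot q_p)^{0.56} \cdot K \cdot LS \cdot C \cdot P

Here R is rainfall erosivity, K soil erodibility, LS the slope length-steepness factor, C the cover-management factor, P the support-practice factor, Q the event runoff volume, and q_p the peak runoff rate; the event-based dependence on Q and q_p is what lets MUSLE and models such as Erosion 3D respond to individual erosion episodes.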
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
...] Agency Information Collection Activities; Proposed Collection; Comment Request; Prescription Drug Product... the distribution of patient labeling, called Medications Guides, for certain products that pose a... validity of the methodology and assumption used; (3) ways to enhance the quality, utility, and clarity of...
77 FR 47675 - Agency Information Collection Activities: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
..., minorities, persons with disabilities), strategic foci (if any) of the project (e.g., research on teaching and learning, international activities, integration of research and education), and the number of... including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-14
... collection of information, including the validity of the methodology and assumptions used; Enhance the... Intelligence Center, Drug Enforcement Administration, U.S. Department of Justice. (4) Affected public who will... information to the El Paso Intelligence Center, Drug Enforcement Administration, and other Law enforcement...
Survey of Research on Job Satisfaction.
ERIC Educational Resources Information Center
Marconi, Katherine
Sociological research on job satisfaction among civilian workers is reviewed for possible implications for Naval manpower policy. The methodological and conceptual assumptions of civilian studies are examined. The variety of measurements used and some ambiguity of terminology are discussed in relation to the usefulness of these studies for Naval…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... formula regulations, including infant formula labeling, quality control procedures, notification....S.C. 350a) requires manufacturers of infant formula to establish and adhere to quality control..., including the validity of the methodology and assumptions used; (3) ways to enhance the quality, utility...
77 FR 54555 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
... study is conducted under the grant. Description of Respondents: Business or other for-profit; farms... estimate of burden including the validity of the methodology and assumptions used; (c) ways to enhance the... it displays a currently valid OMB control number. Rural Business-Cooperative Service Title: Renewable...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-30
... Administration (ETA) sponsored information collection request (ICR) titled, ``Benefits Timeliness and Quality... information and analyzes data. BTQ data measure the timeliness and quality of states' administrative actions... of information, including the validity of the methodology and assumptions used; Enhance the quality...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
... Officer, (202) 452-3829, Division of Research and Statistics, Board of Governors of the Federal Reserve... the information collection, including the validity of the methodology and assumptions used; (c) Ways... measures (such as regulatory or accounting). The agencies' burden estimates for these information...
An Introduction to Modern Missing Data Analyses
ERIC Educational Resources Information Center
Baraldi, Amanda N.; Enders, Craig K.
2010-01-01
A great deal of recent methodological research has focused on two modern missing data analysis methods: maximum likelihood and multiple imputation. These approaches are advantageous to traditional techniques (e.g. deletion and mean imputation techniques) because they require less stringent assumptions and mitigate the pitfalls of traditional…
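A minimal sketch of one modern approach named above, multiple imputation via chained equations as implemented in scikit-learn (the dataset here is simulated and the pooled quantity is only a stand-in for a real analysis):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates the estimator)
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[:, 2] += 0.8 * X[:, 0]                       # correlated columns make imputation informative
X[rng.random(X.shape) < 0.15] = np.nan         # ~15% of values missing at random

# Draw several imputed datasets (multiple imputation), analyze each, then pool.
estimates = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    X_imp = imputer.fit_transform(X)
    estimates.append(X_imp[:, 2].mean())       # stand-in for any downstream analysis

print(np.mean(estimates), np.std(estimates))   # pooled estimate and between-imputation spread
```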
Contemporary Inventional Theory: An Aristotelian Model.
ERIC Educational Resources Information Center
Skopec, Eric W.
Contemporary rhetoricians are concerned with the re-examination of classical doctrines in the hope of finding solutions to current problems. In this study, the author presents a methodological perspective consistent with current interests, by re-examining the assumptions that underlie each classical precept. He outlines an inventional system based…
Towards a Lakatosian Analysis of the Piagetian and Alternative Conceptions Research Programs.
ERIC Educational Resources Information Center
Gilbert, John K.; Swift, David J.
1985-01-01
Lakatos's methodology of scientific research programs is summarized and discussed for Piagetian schools and alternative conceptions movement. Commonalities/differences between these two rival programs are presented along with fundamental assumptions, auxiliary hypotheses, and research policy. Suggests that research findings should not be merely…
Implicit Assumptions in High Potentials Recruitment
ERIC Educational Resources Information Center
Posthumus, Jan; Bozer, Gil; Santora, Joseph C.
2016-01-01
Purpose: Professionals of human resources (HR) use different criteria in practice than they verbalize. Thus, the aim of this research was to identify the implicit criteria used for the selection of high-potential employees in recruitment and development settings in the pharmaceutical industry. Design/methodology/approach: A semi-structured…
In Our Learners' Shoes. A Language Teacher's Reflections on Language Learning.
ERIC Educational Resources Information Center
Cervi, David A.
1989-01-01
A language teacher's Japanese-language learning experiences in educational and immersion environments lead to assertions about the ineffectiveness of exclusive use of traditional instructional methodologies; exclusive dependence on osmosis for developing fluency; instructional materials that students perceive as beneath them; and assumptions that…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-22
... Discharge Elimination System (NPDES) permits under section 402 of the Clean Water Act, certain research and...: (202) 564-0072; email address: [email protected] . SUPPLEMENTARY INFORMATION: Supporting documents... information, including the validity of the methodology and assumptions used; (iii) enhance the quality...
77 FR 67344 - Proposed Information Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-09
... Criminal History Checks. DATES: Written comments must be submitted to the individual and office listed in... methodology and assumptions used; Enhance the quality, utility, and clarity of the information to be collected... Criminal History Check. CNCS and its grantees must ensure that national service beneficiaries are protected...
76 FR 3604 - Information Collection; Qualified Products List for Engine Driven Pumps
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... levels. 2. Reliability and endurance requirements. These requirements include a 100-hour endurance test... evaluated to meet specific requirements related to safety, effectiveness, efficiency, and reliability of the... of the collection of information, including the validity of the methodology and assumptions used; (3...
75 FR 7438 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-19
...;and investigations, committee meetings, agency decisions and rulings, #0;delegations of authority... practical utility; (b) the accuracy of the agency's estimate of burden including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to...
Frameworks of Managerial Competence: Limits, Problems and Suggestions
ERIC Educational Resources Information Center
Ruth, Damian
2006-01-01
Purpose: To offer a coherent critique of the concept of managerial frameworks of competence through the exploration of the problems of generalizability and abstraction and the "scientific" assumptions of management. Design/methodology/approach: Employs the ecological metaphor of intellectual landscape and extends it to examining the…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-07
... and Assaulted ACTION: 60-day notice. The Department of Justice, Federal Bureau of Investigation.... Blasher, Unit Chief, Federal Bureau of Investigation, Criminal Justice Information Services (CJIS..., including the validity of the methodology and assumptions used; (3) Enhance the quality, utility, and...
Qualitative Research in Counseling Psychology: Conceptual Foundations
ERIC Educational Resources Information Center
Morrow, Susan L.
2007-01-01
Beginning with calls for methodological diversity in counseling psychology, this article addresses the history and current state of qualitative research in counseling psychology. It identifies the historical and disciplinary origins as well as basic assumptions and underpinnings of qualitative research in general, as well as within counseling…
DOT National Transportation Integrated Search
1995-01-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
A Test of the Validity of Inviscid Wall-Modeled LES
NASA Astrophysics Data System (ADS)
Redman, Andrew; Craft, Kyle; Aikens, Kurt
2015-11-01
Computational expense is one of the main deterrents to more widespread use of large eddy simulations (LES). As such, it is important to reduce computational costs whenever possible. In this vein, it may be reasonable to assume that high Reynolds number flows with turbulent boundary layers are inviscid when using a wall model. This assumption relies on the grid being too coarse to resolve either the viscous length scales in the outer flow or those near walls. We are not aware of other studies that have suggested or examined the validity of this approach. The inviscid wall-modeled LES assumption is tested here for supersonic flow over a flat plate on three different grids. Inviscid and viscous results are compared to those of another wall-modeled LES as well as experimental data - the results appear promising. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively, with the current LES application. Recommendations are presented as are future areas of research. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
NASA Technical Reports Server (NTRS)
Wolfe, M. G.
1978-01-01
Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.
Finite Correlation Length Implies Efficient Preparation of Quantum Thermal States
NASA Astrophysics Data System (ADS)
Brandão, Fernando G. S. L.; Kastoryano, Michael J.
2018-05-01
Preparing quantum thermal states on a quantum computer is in general a difficult task. We provide a procedure to prepare a thermal state on a quantum computer with a logarithmic depth circuit of local quantum channels assuming that the thermal state correlations satisfy the following two properties: (i) the correlations between two regions are exponentially decaying in the distance between the regions, and (ii) the thermal state is an approximate Markov state for shielded regions. We require both properties to hold for the thermal state of the Hamiltonian on any induced subgraph of the original lattice. Assumption (ii) is satisfied for all commuting Gibbs states, while assumption (i) is satisfied for every model above a critical temperature. Both assumptions are satisfied in one spatial dimension. Moreover, both assumptions are expected to hold above the thermal phase transition for models without any topological order at finite temperature. As a building block, we show that exponential decay of correlation (for thermal states of Hamiltonians on all induced subgraphs) is sufficient to efficiently estimate the expectation value of a local observable. Our proof uses quantum belief propagation, a recent strengthening of strong sub-additivity, and naturally breaks down for states with topological order.
Suzuki, Takashi; Takao, Hiroyuki; Suzuki, Takamasa; Suzuki, Tomoaki; Masuda, Shunsuke; Dahmani, Chihebeddine; Watanabe, Mitsuyoshi; Mamori, Hiroya; Ishibashi, Toshihiro; Yamamoto, Hideki; Yamamoto, Makoto; Murayama, Yuichi
2017-01-01
In most simulations of intracranial aneurysm hemodynamics, blood is assumed to be a Newtonian fluid. However, it is a non-Newtonian fluid, and its viscosity profile differs among individuals. Therefore, the common viscosity assumption may not be valid for all patients. This study aims to test the suitability of the common viscosity assumption. Blood viscosity datasets were obtained from two healthy volunteers. For each of three different-sized aneurysms, three simulations were performed: two using non-Newtonian models based on the measured values and one using a Newtonian model. The rupture-prediction parameters obtained using the non-Newtonian models were compared with those obtained using the Newtonian model. The largest difference (25%) in the normalized wall shear stress (NWSS) was observed in the smallest aneurysm. When the NWSS ratios relative to the Newtonian model were compared between the two non-Newtonian models, they differed by 17.3%. Irrespective of the aneurysmal size, computational fluid dynamics simulations with either the common Newtonian or a common non-Newtonian viscosity assumption could lead to values of hemodynamic parameters such as NWSS that differ from those of a patient-specific viscosity model.
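One widely used non-Newtonian description of blood, shown here purely for illustration (it is not necessarily the model fitted to the volunteers' measurements), is the Carreau model; the parameter values below are commonly quoted literature values for blood and should be treated as assumptions:

```python
import numpy as np

def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    """Carreau model: effective viscosity (Pa*s) as a function of shear rate (1/s)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

shear = np.logspace(-1, 3, 5)   # shear rates from 0.1 to 1000 1/s
print(dict(zip(shear, carreau_viscosity(shear))))
```

At high shear rates the effective viscosity approaches the Newtonian plateau mu_inf, which is why the Newtonian assumption is least defensible in the low-shear recirculation regions of an aneurysm.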
Halloran, Jason P; Ackermann, Marko; Erdemir, Ahmet; van den Bogert, Antonie J
2010-10-19
Current computational methods for simulating locomotion have primarily used muscle-driven multibody dynamics, in which neuromuscular control is optimized. Such simulations generally represent joints and soft tissue as simple kinematic or elastic elements for computational efficiency. These assumptions limit application in studies such as ligament injury or osteoarthritis, where local tissue loading must be predicted. Conversely, tissue can be simulated using the finite element method with assumed or measured boundary conditions, but this does not represent the effects of whole body dynamics and neuromuscular control. Coupling the two domains would overcome these limitations and allow prediction of movement strategies guided by tissue stresses. Here we demonstrate this concept in a gait simulation where a musculoskeletal model is coupled to a finite element representation of the foot. Predictive simulations incorporated peak plantar tissue deformation into the objective of the movement optimization, as well as terms to track normative gait data and minimize fatigue. Two optimizations were performed, first without the strain minimization term and second with the term. Convergence to realistic gait patterns was achieved with the second optimization realizing a 44% reduction in peak tissue strain energy density. The study demonstrated that it is possible to alter computationally predicted neuromuscular control to minimize tissue strain while including desired kinematic and muscular behavior. Future work should include experimental validation before application of the methodology to patient care. Copyright © 2010 Elsevier Ltd. All rights reserved.
Cloud computing: a new business paradigm for biomedical information sharing.
Rosenthal, Arnon; Mork, Peter; Li, Maya Hao; Stanford, Jean; Koester, David; Reynolds, Patti
2010-04-01
We examine how the biomedical informatics (BMI) community, especially consortia that share data and applications, can take advantage of a new resource called "cloud computing". Clouds generally offer resources on demand. In most clouds, charges are pay per use, based on large farms of inexpensive, dedicated servers, sometimes supporting parallel computing. Substantial economies of scale potentially yield costs much lower than dedicated laboratory systems or even institutional data centers. Overall, even with conservative assumptions, for applications that are not I/O intensive and do not demand a fully mature environment, the numbers suggested that clouds can sometimes provide major improvements, and should be seriously considered for BMI. Methodologically, it was very advantageous to formulate analyses in terms of component technologies; focusing on these specifics enabled us to bypass the cacophony of alternative definitions (e.g., exactly what does a cloud include) and to analyze alternatives that employ some of the component technologies (e.g., an institution's data center). Relative analyses were another great simplifier. Rather than listing the absolute strengths and weaknesses of cloud-based systems (e.g., for security or data preservation), we focus on the changes from a particular starting point, e.g., individual lab systems. We often find a rough parity (in principle), but one needs to examine individual acquisitions--is a loosely managed lab moving to a well managed cloud, or a tightly managed hospital data center moving to a poorly safeguarded cloud? 2009 Elsevier Inc. All rights reserved.
Beresniak, Ariel; Medina-Lara, Antonieta; Auray, Jean Paul; De Wever, Alain; Praet, Jean-Claude; Tarricone, Rosanna; Torbica, Aleksandra; Dupont, Danielle; Lamure, Michel; Duru, Gerard
2015-01-01
Quality-adjusted life-years (QALYs) have been used since the 1980s as a standard health outcome measure for conducting cost-utility analyses, which are often inadequately labeled as 'cost-effectiveness analyses'. This synthetic outcome, which combines the quantity of life lived with its quality expressed as a preference score, is currently recommended as reference case by some health technology assessment (HTA) agencies. While critics of the QALY approach have expressed concerns about equity and ethical issues, surprisingly, very few have tested the basic methodological assumptions supporting the QALY equation so as to establish its scientific validity. The main objective of the ECHOUTCOME European project was to test the validity of the underlying assumptions of the QALY outcome and its relevance in health decision making. An experiment has been conducted with 1,361 subjects from Belgium, France, Italy, and the UK. The subjects were asked to express their preferences regarding various hypothetical health states derived from combining different health states with time durations in order to compare observed utility values of the couples (health state, time) and calculated utility values using the QALY formula. Observed and calculated utility values of the couples (health state, time) were significantly different, confirming that preferences expressed by the respondents were not consistent with the QALY theoretical assumptions. This European study contributes to establishing that the QALY multiplicative model is an invalid measure. This explains why costs/QALY estimates may vary greatly, leading to inconsistent recommendations relevant to providing access to innovative medicines and health technologies. HTA agencies should consider other more robust methodological approaches to guide reimbursement decisions.
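The multiplicative assumption under test is the core of the QALY construct: the utility of spending time t in health state h is taken to be the preference score for that state multiplied by the duration (standard formulation, with generic symbols):

U(h, t) = u(h)\cdot t, \qquad \text{QALYs} = \sum_i u(h_i)\, t_i

The experiment compared preferences elicited directly for (health state, time) couples with values calculated from this formula.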
Kelly, Christopher; Pashayan, Nora; Munisamy, Sreetharan; Powles, John W
2009-06-30
Our aim was to estimate the burden of fatal disease attributable to excess adiposity in England and Wales in 2003 and 2015 and to explore the sensitivity of the estimates to the assumptions and methods used. A spreadsheet implementation of the World Health Organization's (WHO) Comparative Risk Assessment (CRA) methodology for continuously distributed exposures was used. For our base case, adiposity-related risks were assumed to be minimal with a mean (SD) BMI of 21 (1) kg m-2. All-cause mortality risks for 2015 were taken from the Government Actuary and alternative compositions by cause derived. Disease-specific relative risks by BMI were taken from the CRA project and varied in sensitivity analyses. Under base case methods and assumptions for 2003, approximately 41,000 deaths and a loss of 1.05 years of life expectancy were attributed to excess adiposity. Seventy-seven percent of all diabetic deaths, 23% of all ischaemic heart disease deaths and 14% of all cerebrovascular disease deaths were attributed to excess adiposity. Predictions for 2015 were found to be more sensitive to assumptions about the future course of mortality risks for diabetes than to variation in the assumed trend in BMI. On less favourable assumptions the attributable loss of life expectancy in 2015 would rise modestly to 1.28 years. Excess adiposity appears to contribute materially but modestly to mortality risks in England and Wales and this contribution is likely to increase in the future. Uncertainty centres on future trends of associated diseases, especially diabetes. The robustness of these estimates is limited by the lack of control for correlated risks by stratification and by the empirical uncertainty surrounding the effects of prolonged excess adiposity beginning in adolescence.
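The CRA calculation for a continuously distributed exposure rests on the potential impact fraction; in the standard WHO form (symbols generic: P(x) the observed BMI distribution, P'(x) the counterfactual distribution centred on 21 kg m-2, and RR(x) the relative risk at BMI x), the fraction of cause-specific deaths attributable to excess adiposity is

PIF = \frac{\int RR(x)\,P(x)\,dx \;-\; \int RR(x)\,P'(x)\,dx}{\int RR(x)\,P(x)\,dx}

and attributable deaths are then obtained by multiplying this fraction by the cause-specific death counts.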
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Krosel, S. M.; Bruton, W. M.
1982-01-01
A systematic, computer-aided, self-documenting methodology for developing hybrid computer simulations of turbofan engines is presented. The methodology makes use of a host program that can run on a large digital computer and a machine-dependent target (hybrid) program. The host program performs all the calculations and data manipulations that are needed to transform user-supplied engine design information to a form suitable for the hybrid computer. The host program also trims the self-contained engine model to match specified design-point information. Part I contains a general discussion of the methodology, describes a test case, and presents comparisons between hybrid simulation and specified engine performance data. Part II, a companion document, contains documentation, in the form of computer printouts, for the test case.
Shuttle program: Computing atmospheric scale height for refraction corrections
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Methods for computing the atmospheric scale height to determine radio wave refraction were investigated for different atmospheres, and different angles of elevation. Tables of refractivity versus altitude are included. The equations used to compute the refraction corrections are given. It is concluded that very accurate corrections are determined with the assumption of an exponential atmosphere.
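The exponential-atmosphere assumption referred to above is usually expressed through the refractivity profile (a standard form with generic symbols, not the memorandum's notation): with N_s the surface refractivity, h_s the surface altitude, and H the scale height,

N(h) = N_s\, e^{-(h - h_s)/H}

so that the elevation-dependent refraction correction can be computed from just N_s and H.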
Using Random Forest Models to Predict Organizational Violence
NASA Technical Reports Server (NTRS)
Levine, Burton; Bobashev, Georgly
2012-01-01
We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data are longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally a summary of the forest in a "meta-tree."
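One simple way to respect the non-independence of repeated observations on the same organization is to bag decision trees over organization-level (cluster) bootstrap samples. The sketch below is an illustrative stand-in with simulated data, not the specific modification proposed in the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cluster_bagged_forest(X, y, groups, n_trees=200, seed=0):
    """Bag decision trees over organization-level bootstrap samples."""
    rng = np.random.default_rng(seed)
    orgs = np.unique(groups)
    trees = []
    for _ in range(n_trees):
        boot_orgs = rng.choice(orgs, size=len(orgs), replace=True)       # resample whole clusters
        idx = np.concatenate([np.flatnonzero(groups == g) for g in boot_orgs])
        tree = DecisionTreeClassifier(max_features="sqrt",
                                      random_state=int(rng.integers(1 << 31)))
        trees.append(tree.fit(X[idx], y[idx]))
    return trees

def forest_predict_proba(trees, X):
    """Average the per-tree probabilities of the 'violence' class."""
    return np.mean([t.predict_proba(X)[:, 1] for t in trees], axis=0)

# Toy example: 30 organizations observed for 10 years each, five hypothetical features.
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(30), 10)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=300) > 0.8).astype(int)
forest = cluster_bagged_forest(X, y, groups)
print(forest_predict_proba(forest, X[:5]))
```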
2005-08-12
productivity of the islands in producing copra or fish, was not considered. The assumption is also inconsistent with the capitalization model that the value of...David Barker and Jay Wa-Aadu, “Is Real Estate Becoming Important Again? A Neo Ricardian Model of Land Rent.” Real Estate Economics, Spring, 2004, pp...the model explicit, it avoids shortcomings of the NCT methodology, by using available data from RMI’s national income and product accounts that is
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childers, L.; Liming, L.; Foster, I.
2008-10-15
This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2) A method for characterizing users according to their technology interactions, and identification of four user types among the interviewees using the method; (3) Four profiles that highlight points of commonality and diversity in each user type; (4) Recommendations for technology developers and future studies; (5) A description of the interview protocol and overall study methodology; (6) An anonymized list of the interviewees; and (7) Interview writeups and summary data. The interview summaries in Section 3 and transcripts in Appendix D illustrate the value of distributed computing software--and Globus in particular--to scientific enterprises. They also document opportunities to make these tools still more useful both to current users and to new communities. We aim our recommendations at developers who intend their software to be used and reused in many applications. (This kind of software is often referred to as 'middleware.') Our two core recommendations are as follows. First, it is essential for middleware developers to understand and explicitly manage the multiple user products in which their software components are used. We must avoid making assumptions about the commonality of these products and, instead, study and account for their diversity. Second, middleware developers should engage in different ways with different kinds of users. Having identified four general user types in Section 4, we provide specific ideas for how to engage them in Section 5.
PDF modeling of turbulent flows on unstructured grids
NASA Astrophysics Data System (ADS)
Bakosi, Jozsef
In probability density function (PDF) methods of turbulent flows, the joint PDF of several flow variables is computed by numerically integrating a system of stochastic differential equations for Lagrangian particles. Because the technique solves a transport equation for the PDF of the velocity and scalars, a mathematically exact treatment of advection, viscous effects and arbitrarily complex chemical reactions is possible; these processes are treated without closure assumptions. A set of algorithms is proposed to provide an efficient solution of the PDF transport equation modeling the joint PDF of turbulent velocity, frequency and concentration of a passive scalar in geometrically complex configurations. An unstructured Eulerian grid is employed to extract Eulerian statistics, to solve for quantities represented at fixed locations of the domain and to track particles. All three aspects regarding the grid make use of the finite element method. Compared to hybrid methods, the current methodology is stand-alone, therefore it is consistent both numerically and at the level of turbulence closure without the use of consistency conditions. Since both the turbulent velocity and scalar concentration fields are represented in a stochastic way, the method allows for a direct and close interaction between these fields, which is beneficial in computing accurate scalar statistics. Boundary conditions implemented along solid bodies are of the free-slip and no-slip type without the need for ghost elements. Boundary layers at no-slip boundaries are either fully resolved down to the viscous sublayer, explicitly modeling the high anisotropy and inhomogeneity of the low-Reynolds-number wall region without damping or wall-functions or specified via logarithmic wall-functions. As in moment closures and large eddy simulation, these wall-treatments provide the usual trade-off between resolution and computational cost as required by the given application. Particular attention is focused on modeling the dispersion of passive scalars in inhomogeneous turbulent flows. Two different micromixing models are investigated that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. An adaptive algorithm to compute the velocity-conditioned scalar mean is proposed that homogenizes the statistical error over the sample space with no assumption on the shape of the underlying velocity PDF. The development also concentrates on a generally applicable micromixing timescale for complex flow domains. Several newly developed algorithms are described in detail that facilitate a stable numerical solution in arbitrarily complex flow geometries, including a stabilized mean-pressure projection scheme, the estimation of conditional and unconditional Eulerian statistics and their derivatives from stochastic particle fields employing finite element shapefunctions, particle tracking through unstructured grids, an efficient particle redistribution procedure and techniques related to efficient random number generation. The algorithm is validated and tested by computing three different turbulent flows: the fully developed turbulent channel flow, a street canyon (or cavity) flow and the turbulent wake behind a circular cylinder at a sub-critical Reynolds number. The solver has been parallelized and optimized for shared memory and multi-core architectures using the OpenMP standard. 
Relevant aspects of performance and parallelism on cache-based shared memory machines are discussed and presented in detail. The methodology shows great promise in the simulation of high-Reynolds-number incompressible inert or reactive turbulent flows in realistic configurations.
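To make the particle side of such a PDF method concrete, the sketch below advances Lagrangian particle velocities with the simplified Langevin model in one dimension (a standard model form; the mean velocity, turbulent kinetic energy, turbulence frequency, and constant C0 are frozen placeholder values, and the mean-pressure-gradient and micromixing terms are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
C0 = 2.1            # Kolmogorov constant of the Langevin model (typical value)
dt = 1e-3
n_particles = 10_000

u = rng.normal(0.0, 1.0, n_particles)    # particle velocity samples (1D sketch)
U_mean, k, omega = 0.0, 1.5, 2.0         # placeholder mean velocity, TKE, turbulence frequency

for _ in range(1000):
    drift = -(0.5 + 0.75 * C0) * omega * (u - U_mean) * dt
    diffusion = np.sqrt(C0 * k * omega * dt) * rng.normal(size=n_particles)
    u += drift + diffusion               # Euler-Maruyama step of the stochastic model

# With k and omega held fixed, the particle velocity variance relaxes toward
# C0*k / (1 + 1.5*C0); in a full simulation k and omega evolve with the flow.
print(u.var(), C0 * k / (1.0 + 1.5 * C0))
```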
78 FR 38976 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-28
... Reserve's functions; including whether the information has practical utility; b. The accuracy of the... the methodology and assumptions used; c. Ways to enhance the quality, utility, and clarity of the... Report Report title: Report of Selected Money Market Rates. Agency form number: FR 2420. OMB control...
An Economic Impact Study: How and Why To Do One.
ERIC Educational Resources Information Center
Graefe, Martin; Wells, Matt
1996-01-01
An economic impact study tells the community about a camp's contribution, and is good advertising. Describes an economic impact study and its benefits. Uses Concordia Language Villages' study to illustrate features of an impact study, including goals and scope, parameters and assumptions, statistical information, research methodology, review…
75 FR 59679 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-28
... agency's estimate of burden including the validity of the methodology and assumptions used; (c) ways to...) was passed into law adding section 1415A to the National Agricultural Research, Extension, and Teaching Policy Act of 1997. This law established a new Veterinary Medicine Loan Repayment Program (VMLRP...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-23
.... Fleming, Field Division Counsel, El Paso Intelligence Center, 11339 SSG Sims Blvd., El Paso, TX 79908... validity of the methodology and assumptions used; Enhance the quality, utility, and clarity of the... sponsoring the collection: Form number: EPIC Form 143. Component: El Paso Intelligence Center, Drug...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-16
...: 30-Day notice. The United States Department of Justice (DOJ), National Drug Intelligence Center (NDIC..., including the validity of the methodology and assumptions used; --Enhance the quality, utility, and clarity... Assessment and other reports and assessments produced by the National Drug Intelligence Center. It provides...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-29
... under review The United States Department of Justice (DOJ), National Drug Intelligence Center (NDIC... Intelligence Center, Fifth Floor, 319 Washington Street, Johnstown, PA 15901. Written comments and suggestions... of the methodology and assumptions used; --Enhance the quality, utility, and clarity of the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-15
... Division Counsel, El Paso Intelligence Center, 11339 SSG Sims Blvd., El Paso, TX 79908. Written comments... of the methodology and assumptions used; Enhance the quality, utility, and clarity of the information... collection: Form number: EPIC Form 143. Component: El Paso Intelligence Center, Drug Enforcement...
Warship Combat System Selection Methodology Based on Discrete Event Simulation
2010-09-01
Abbreviation-list residue from the thesis front matter: PD = Damage Probability; PHit = Hit Probability; PKill = Kill Probability; RSM = Response Surface Model; SAM = Surface-Air Missile. ...such a large target allows an assumption that the probability of a hit (PHit) is one. This structure can be considered as a bridge; therefore, the
75 FR 50005 - Notice of Public Information Collection Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... submission of the following public information collection request (ICR) to the Office of Management and... estimated total burden may be obtained from the RegInfo.gov Web site at http://www.reginfo.gov/public/do..., including the validity of the methodology and assumptions used; (3) Enhance the quality, utility, and...
75 FR 50006 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... submission of the following public information collection request (ICR) to the Office of Management and... estimated total burden may be obtained from the RegInfo.gov Web site at http://www.reginfo.gov/public/do..., including the validity of the methodology and assumptions used; (3) Enhance the quality, utility, and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... rates. This study will also examine the circumstances and influences that shape a mother's feeding... of the methodology and assumptions that were used; (c) ways to enhance the quality, utility, and...) AGENCY: Food and Nutrition Service, USDA. ACTION: Notice. SUMMARY: In accordance with the Paperwork...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-18
... Information on Water Quality Consideration ACTION: 30-Day Notice of Information Collection. The Department of... currently approved collection. (2) Title of the Form/Collection: Supplemental Information on Water Quality..., including the validity of the methodology and assumptions used; --Enhance the quality, utility, and clarity...
75 FR 9572 - Submission for OMB Review; Comment Request: Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-03
... Quality Control Reviews'' OMB control number 0584-0074. The document contained incorrect burden hours. The... methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to... information unless the collection of information displays a currently valid OMB control number and the agency...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-04
... collection regarding the manufacture of infant formula, including infant formula labeling, quality control.... 350a) requires manufacturers of infant formula to establish and adhere to quality control procedures..., including the validity of the methodology and assumptions used; (3) ways to enhance the quality, utility...
The Great Neurolinguistics Methodology Debate.
ERIC Educational Resources Information Center
Obler, L. K.
A major debate exists in the neuropsychology community concerning whether case study is preferable to group study of brain-damaged patients. So far, the discussion has been limited to the advantages and disadvantages of both methods, with the assumption that neurolinguists pursue a single goal attainable by one or the other method. Practical…
"That's Not Quite the Way We See It": The Epistemological Challenge of Visual Data
ERIC Educational Resources Information Center
Wall, Kate; Higgins, Steve; Hall, Elaine; Woolner, Pam
2013-01-01
In research textbooks, and in much of the research practice they describe, qualitative processes and interpretivist epistemologies tend to dominate visual methodology. This article challenges the assumptions behind this dominance. Using exemplification from three existing visual data sets produced through one large education research project, this…
76 FR 2645 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-14
.... These grants will be used for two purposes: (1) To fund feasibility studies, marketing and business... including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and... a currently valid OMB control number. Rural Business-Cooperative Service Title: Value-Added Producer...
Estimation of dose-response models for discrete and continuous data in weed science
USDA-ARS?s Scientific Manuscript database
Dose-response analysis is widely used in biological sciences and has application to a variety of risk assessment, bioassay, and calibration problems. In weed science, dose-response methodologies have typically relied on least squares estimation under an assumption of normality. Advances in computati...
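A brief sketch of the conventional approach the passage refers to, a four-parameter log-logistic dose-response curve fitted by nonlinear least squares under an implicit normality assumption (the doses and responses are made-up illustrative numbers):

```python
import numpy as np
from scipy.optimize import curve_fit

def ll4(dose, b, c, d, e):
    """Four-parameter log-logistic: lower limit c, upper limit d, slope b, ED50 e."""
    return c + (d - c) / (1.0 + np.exp(b * (np.log(dose) - np.log(e))))

# Hypothetical herbicide dose (g ai/ha) versus relative biomass data
dose = np.array([1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
resp = np.array([98, 95, 90, 75, 50, 25, 12, 6], dtype=float)

params, _ = curve_fit(ll4, dose, resp, p0=[1.0, 0.0, 100.0, 20.0])
b, c, d, e = params
print(f"ED50 ~ {e:.1f} g ai/ha, slope {b:.2f}, limits ({c:.1f}, {d:.1f})")
```

For discrete responses such as counts or mortality proportions, the same curve is more appropriately fitted within a generalized nonlinear model rather than by least squares, which is the kind of advance the passage points toward.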
Organisational Memories in Project-Based Companies: An Autopoietic View
ERIC Educational Resources Information Center
Koskinen, Kaj U.
2010-01-01
Purpose: The purpose of this paper is to describe project-based companies' knowledge production and memory development with the help of autopoietic epistemology. Design/methodology/approach: The discussion first defines the concept of a project-based company. Then the discussion deals with the two epistemological assumptions, namely cognitivist…
75 FR 50743 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-17
... the proper performance of the functions of the agency, including whether the information will have... methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the collection of information on those who are to...
DOT National Transportation Integrated Search
1994-10-31
The Volpe Center first estimated an inter-regional auto trip model as part of its effort to assess the market feasibility of maglev for the National Maglev Initiative (NMI). The original intent was to develop a direct demand model for estimating inte...
A "View from Nowhen" on Time Perception Experiments
ERIC Educational Resources Information Center
Riemer, Martin; Trojan, Jorg; Kleinbohl, Dieter; Holzl, Rupert
2012-01-01
Systematic errors in time reproduction tasks have been interpreted as a misperception of time and therefore seem to contradict basic assumptions of pacemaker-accumulator models. Here we propose an alternative explanation of this phenomenon based on methodological constraints regarding the direction of time, which cannot be manipulated in…
75 FR 69913 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
... methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the collection of information on those who are to... number. Agricultural Marketing Service Title: Generic Fruit Crops, Marketing Order Administration Branch...
77 FR 61569 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-10
... estimate of burden including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the.... Estimates of stocks provide essential statistics on supplies and contribute to orderly marketing. Farmers...
77 FR 61569 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-10
... estimate of burden including the validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the... Marketing Service Title: Application for Plant Variety Protection Certificate and Objective Description of...
Post-Secondary Enrolment Forecasting with Traditional and Cross Pressure-Impact Methodologies.
ERIC Educational Resources Information Center
Hoffman, Bernard B.
A model for forecasting postsecondary enrollment, the PDEM-1, is considered, which combines the traditional with a cross-pressure impact decision-making model. The model is considered in relation to its background, assumptions, survey instrument, model conception, applicability to educational environments, and implementation difficulties. The…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-12
... DEPARTMENT OF STATE [Public Notice 7294] 60-Day Notice of Proposed Information Collection... INFORMATION: We are soliciting public comments to permit the Department to: Evaluate whether the proposed... the methodology and assumptions used. Enhance the quality, utility, and clarity of the information to...
ERIC Educational Resources Information Center
Roessger, Kevin M.
2012-01-01
The philosophy of radical behaviourism remains misunderstood within the field of adult education. Contributing to this trend is the field's homogeneous behaviourist interpretation, which attributes methodological behaviourism's principles to radical behaviourism. The guiding principles and assumptions of radical behaviourism are examined to…
Children's Understanding of Television: Research on Attention and Comprehension.
ERIC Educational Resources Information Center
Bryant, Jennings, Ed.; Anderson, Daniel R., Ed.
Major collections of research contributions on the fundamental nature of children's television viewing have been compiled in this book. Each chapter presents the assumptions, methodologies, theories, and major research findings of a particular research program or tradition. Chapters 1 through 4 are directed toward the examination of children's…
Philosophical Roots of Classical Grounded Theory: Its Foundations in Symbolic Interactionism
ERIC Educational Resources Information Center
Aldiabat, Khaldoun M.; Le Navenec, Carole-Lynne
2011-01-01
Although many researchers have discussed the historical relationship between the Grounded Theory methodology and Symbolic Interactionism, they have not clearly articulated the congruency of their salient concepts and assumptions. The purpose of this paper is to provide a thorough discussion of this congruency. A hypothetical example about smoking…
Analysis of Senate Amendment 2028, the Climate Stewardship Act of 2003
2004-01-01
On May 11, 2004, Senator Landrieu asked the Energy Information Administration (EIA) to evaluate SA 2028. This paper responds to that request, relying on the modeling methodology, data sources, and assumptions used to analyze the original bill, as extensively documented in EIA's June 2003 report.
STRUCTURE PLUS MEANING EQUALS LANGUAGE PROFICIENCY.
ERIC Educational Resources Information Center
BELASCO, SIMON
TRUE FOREIGN LANGUAGE PROFICIENCY CAN BE ACHIEVED ONLY BY THE INTERNALIZATION OF THE ENTIRE GRAMMAR OF THE TARGET LANGUAGE PLUS THE DEVELOPMENT OF SKILL IN SEMANTIC INTERPRETATION. ADHERENCE TO EITHER OF THE METHODOLOGICAL ASSUMPTIONS THAT UNDERLIE TODAY'S AUDIOLINGUALLY-ORIENTED PROGRAMS WILL LEAD STUDENTS TO NOTHING MORE THAN A LEARNING PLATEAU.…
Two (Very) Different Worlds: The Cultures of Policymaking and Qualitative Research
ERIC Educational Resources Information Center
Donmoyer, Robert
2012-01-01
This article brackets assumptions embedded in the framing of this special issue on "problematizing methodological simplicity in qualitative research" in an effort to understand why policymakers put pressure on all types of researchers, including those who use qualitative methods, to provide relatively simple, even somewhat mechanistic portrayals of…
78 FR 22261 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-15
... information collection, including the validity of the methodology and assumptions used; c. Ways to enhance the quality, utility, and clarity of the information to be collected; d. Ways to minimize the burden of... number: FR 2060. OMB control number: 7100-0232. Frequency: On occasion. Reporters: Small businesses and...
Reclaiming "Sense" from "Cents" in Accounting Education
ERIC Educational Resources Information Center
Dellaportas, Steven
2015-01-01
This essay adopts an interpretive methodology of relevant literature to explore the limitations of accounting education when it is taught purely as a technical practice. The essay proceeds from the assumption that conventional accounting education is captured by a positivistic neo-classical model of decision-making that draws on economic rationale…
A Critical Realist Orientation to Learner Needs
ERIC Educational Resources Information Center
Ayers, David F.
2011-01-01
The objective of this essay is to propose critical realism as a philosophical middle way between two sets of ontological, epistemological, and methodological assumptions regarding learner needs. Key concepts of critical realism, a tradition in the philosophy of science, are introduced and applied toward an analysis of learner needs, resulting in…
Educational Research in Palestine: Epistemological and Cultural Challenges--A Case Study
ERIC Educational Resources Information Center
Khalifah, Ayman A.
2010-01-01
This study investigates the prevailing epistemological and cultural conditions that underlie educational research in Palestine. Using a case study of a major Palestinian University that awards Masters Degrees in Education, the study analyzes the assumptions and the methodology that characterizes current educational research. Using an analysis of…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-11
... of Facilities Management and Program Services; Submission for OMB Review; Background Investigations... collection of personal data for background investigations for child care workers accessing GSA owned and... assumptions and methodology; ways to enhance the quality, utility, and clarity of the information to be...
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions, so careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
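As a concrete illustration of the post-processing options mentioned above, the sketch below accumulates scalar stress linearly along one pathline and contrasts a single passage with a naive repeated-passage extrapolation. The linear accumulation form, variable names, and numbers are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def stress_accumulation(tau, dt):
        """Linear stress accumulation sum(tau_i * dt_i) along one pathline.

        tau : array of scalar shear stress samples [Pa] along the trajectory
        dt  : array of time steps [s] between consecutive samples
        """
        return float(np.sum(tau * dt))

    def repeated_passage(tau, dt, n_passages):
        """Naive repeated-passage model: the same single-passage history is
        assumed to repeat n_passages times (a simplifying assumption)."""
        return n_passages * stress_accumulation(tau, dt)

    # Example: a synthetic pathline through a device
    rng = np.random.default_rng(0)
    tau = rng.uniform(1.0, 15.0, size=200)   # Pa, hypothetical stress samples
    dt = np.full(200, 1e-3)                  # s, uniform sampling interval

    sa_single = stress_accumulation(tau, dt)            # one passage
    sa_repeat = repeated_passage(tau, dt, n_passages=5)
    print(f"single passage SA = {sa_single:.3f} Pa*s, 5 passages = {sa_repeat:.3f} Pa*s")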
NASA Astrophysics Data System (ADS)
Nebot, Àngela; Mugica, Francisco
2012-10-01
Fuzzy inductive reasoning (FIR) is a modelling and simulation methodology derived from the General Systems Problem Solver. It compares favourably with other soft computing methodologies, such as neural networks, genetic or neuro-fuzzy systems, and with hard computing methodologies, such as AR, ARIMA, or NARMAX, when it is used to predict future behaviour of different kinds of systems. This paper contains an overview of the FIR methodology, its historical background, and its evolution.
NASA Astrophysics Data System (ADS)
Madani, Kaveh
2016-04-01
Water management benefits from a suite of modelling tools and techniques that help simplify and understand the complexities involved in managing water resource systems. Early water management models were mainly concerned with optimizing a single objective related to the design, operations or management of water resource systems (e.g. economic cost, hydroelectricity production, reliability of water deliveries). Significant improvements in methodologies, computational capacity, and data availability over the last decades have resulted in more complex water management models that can now incorporate multiple objectives, various uncertainties, and big data. These models provide an improved understanding of complex water resource systems and create opportunities for making positive impacts. Nevertheless, there remains an alarming mismatch between the optimal solutions developed by these models and the decisions made by managers and stakeholders of water resource systems. Modelers continue to regard decision makers as irrational agents who fail to implement the optimal solutions developed by sophisticated and mathematically rigorous water management models. On the other hand, decision makers and stakeholders accuse modelers of being idealistic, lacking a perfect understanding of reality, and developing 'smart' solutions that are not practical (stable). In this talk I will take a closer look at the mismatch between the optimality and stability of solutions and argue that conventional water resources management models suffer inherently from a full-cooperation assumption. Under this assumption, water resources management decisions are based on group rationality, whereas in practice decisions are often based on individual rationality, making the group's optimal solution unstable for individually rational decision makers. I discuss how game theory can be used as an appropriate framework for addressing the irrational "rationality assumption" of water resources management models and for better capturing the social aspects of decision making in water management systems with multiple stakeholders.
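A toy two-user pumping game, not from the talk, makes the optimality-versus-stability point concrete: the full-cooperation outcome maximizes the joint payoff but is not a Nash equilibrium, so individually rational users drift to a stable but inferior outcome. The payoff numbers are hypothetical.

    import itertools

    # Payoffs (user A, user B) for a hypothetical two-user pumping game.
    # "low" = restrained pumping (cooperate), "high" = aggressive pumping (defect).
    payoffs = {
        ("low", "low"):   (3, 3),   # group optimum: total payoff 6
        ("low", "high"):  (1, 4),
        ("high", "low"):  (4, 1),
        ("high", "high"): (2, 2),   # Nash equilibrium: total payoff 4
    }
    actions = ("low", "high")

    def best_response(player, other_action):
        idx = 0 if player == "A" else 1
        key = (lambda a: (a, other_action)) if player == "A" else (lambda a: (other_action, a))
        return max(actions, key=lambda a: payoffs[key(a)][idx])

    nash = [(a, b) for a, b in itertools.product(actions, actions)
            if a == best_response("A", b) and b == best_response("B", a)]
    group_opt = max(payoffs, key=lambda k: sum(payoffs[k]))

    print("Nash (stable) outcome(s):", nash)        # [('high', 'high')]
    print("Group-optimal outcome:   ", group_opt)   # ('low', 'low') -- unstable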
Learning to Predict Combinatorial Structures
NASA Astrophysics Data System (ADS)
Vembu, Shankar
2009-12-01
The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
Computer Assistance for Writing Interactive Programs: TICS.
ERIC Educational Resources Information Center
Kaplow, Roy; And Others
1973-01-01
Investigators developed an on-line, interactive programing system--the Teacher-Interactive Computer System (TICS)--to provide assistance to those who were not programers, but nevertheless wished to write interactive instructional programs. TICS had two components: an author system and a delivery system. Underlying assumptions were that…
Automated analysis in generic groups
NASA Astrophysics Data System (ADS)
Fagerholm, Edvard
This thesis studies automated methods for analyzing hardness assumptions in generic group models, following ideas of symbolic cryptography. We define a broad class of generic and symbolic group models for different settings---symmetric or asymmetric (leveled) k-linear groups --- and prove ''computational soundness'' theorems for the symbolic models. Based on this result, we formulate a master theorem that relates the hardness of an assumption to solving problems in polynomial algebra. We systematically analyze these problems identifying different classes of assumptions and obtain decidability and undecidability results. Then, we develop automated procedures for verifying the conditions of our master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome is an automated tool, the Generic Group Analyzer, which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or shows an algebraic attack against the assumption. Structure-preserving signatures are signature schemes defined over bilinear groups in which messages, public keys and signatures are group elements, and the verification algorithm consists of evaluating ''pairing-product equations''. Recent work on structure-preserving signatures studies optimality of these schemes in terms of the number of group elements needed in the verification key and the signature, and the number of pairing-product equations in the verification algorithm. While the size of keys and signatures is crucial for many applications, another aspect of performance is the time it takes to verify a signature. The most expensive operation during verification is the computation of pairings. However, the concrete number of pairings is not captured by the number of pairing-product equations considered in earlier work. We consider the question of what is the minimal number of pairing computations needed to verify structure-preserving signatures. We build an automated tool to search for structure-preserving signatures matching a template. Through exhaustive search we conjecture lower bounds for the number of pairings required in the Type~II setting and prove our conjecture to be true. Finally, our tool exhibits examples of structure-preserving signatures matching the lower bounds, which proves tightness of our bounds, as well as improves on previously known structure-preserving signature schemes.
The Wally plot approach to assess the calibration of clinical prediction models.
Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T
2017-12-06
A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption or just "bad luck" due to sampling variability. To address this issue, we propose a graphical approach that visualizes how much a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots that mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
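A simplified sketch of the central idea, ignoring the censoring and competing risks that the actual method (and the 'wally' package) handle: simulate new outcome vectors as if the predicted risks were the true event probabilities, and compare the observed calibration plot against the simulated reference plots. All data below are synthetic.

    import numpy as np

    rng = np.random.default_rng(1)

    def binned_calibration(pred, outcome, bins=10):
        """Mean predicted risk vs observed event frequency per risk decile."""
        edges = np.quantile(pred, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.digitize(pred, edges[1:-1]), 0, bins - 1)
        return np.array([(pred[idx == b].mean(), outcome[idx == b].mean())
                         for b in range(bins)])

    # Observed data: predicted risks and (uncensored) binary outcomes.
    pred = rng.uniform(0.05, 0.6, size=2000)
    outcome = rng.binomial(1, pred * 1.2)          # deliberately miscalibrated

    observed_plot = binned_calibration(pred, outcome)

    # "Wally"-style reference plots: simulate outcomes *as if* the model were
    # perfectly calibrated (event probability equals the predicted risk).
    reference_plots = [binned_calibration(pred, rng.binomial(1, pred))
                       for _ in range(8)]
    # If observed_plot looks like an outlier among reference_plots, the
    # calibration assumption is in doubt rather than "bad luck".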
A New Kind of Single-Well Tracer Test for Assessing Subsurface Heterogeneity
NASA Astrophysics Data System (ADS)
Hansen, S. K.; Vesselinov, V. V.; Lu, Z.; Reimus, P. W.; Katzman, D.
2017-12-01
Single-well injection-withdrawal (SWIW) tracer tests have historically been interpreted using the idealized assumption of tracer path reversibility (i.e., negligible background flow), with background flow due to natural hydraulic gradient being an un-modeled confounding factor. However, we have recently discovered that it is possible to use background flow to our advantage to extract additional information about the subsurface. To wit: we have developed a new kind of single-well tracer test that exploits flow due to natural gradient to estimate the variance of the log hydraulic conductivity field of a heterogeneous aquifer. The test methodology involves injection under forced gradient and withdrawal under natural gradient, and makes use of a relationship, discovered using a large-scale Monte Carlo study and machine learning techniques, between power law breakthrough curve tail exponent and log-hydraulic conductivity variance. We will discuss how we performed the computational study and derived this relationship and then show an application example in which our new single-well tracer test interpretation scheme was applied to estimation of heterogeneity of a formation at the chromium contamination site at Los Alamos National Laboratory. Detailed core hole records exist at the same site, from which it was possible to estimate the log hydraulic conductivity variance using a Kozeny-Carman relation. The variances estimated using our new tracer test methodology and estimated by direct inspection of core were nearly identical, corroborating the new methodology. Assessment of aquifer heterogeneity is of critical importance to deployment of amendments associated with in-situ remediation strategies, since permeability contrasts potentially reduce the interaction between amendment and contaminant. Our new tracer test provides an easy way to obtain this information.
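The study-specific mapping from breakthrough-curve tail exponent to log-conductivity variance came from the authors' Monte Carlo and machine-learning analysis and is not reproduced here; the sketch below only illustrates the first step one would take with field data, estimating a power-law tail exponent by log-log regression over the late-time tail. The function names and synthetic curve are hypothetical.

    import numpy as np

    def tail_exponent(t, c, tail_fraction=0.3):
        """Estimate k in C(t) ~ t**(-k) from the late-time portion of a
        breakthrough curve by linear regression in log-log space."""
        n_tail = max(3, int(len(t) * tail_fraction))
        tt, cc = t[-n_tail:], c[-n_tail:]
        slope, _ = np.polyfit(np.log(tt), np.log(cc), 1)
        return -slope

    # Synthetic breakthrough curve with an approximate power-law tail, k = 1.8
    t = np.linspace(1.0, 500.0, 400)
    c = (1.0 + t / 10.0) ** -1.8
    print(f"estimated tail exponent k = {tail_exponent(t, c):.2f}")
    # The empirical, study-specific mapping from k to ln-K variance would
    # then be applied; that mapping is not reproduced here.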
Accurate Treatment of Collisions and Water-Delivery in Models of Terrestrial Planet Formation
NASA Astrophysics Data System (ADS)
Haghighipour, Nader; Maindl, Thomas; Schaefer, Christoph
2017-10-01
It is widely accepted that collisions among solid bodies, driven by their interactions with planetary embryos, are the key process in the formation of terrestrial planets and the transport of volatiles and chemical compounds to their accretion zones. Unfortunately, due to computational complexities, these collisions are often treated in a rudimentary way. Impacts are considered to be perfectly inelastic and volatiles are considered to be fully transferred from one object to the other. This perfect-merging assumption has profound effects on the mass and composition of the final planetary bodies, as it grossly overestimates the masses of these objects and the amounts of volatiles and chemical elements transferred to them. It also entirely neglects collisional loss of volatiles (e.g., water) and draws an unrealistic connection between these properties and the chemical structure of the protoplanetary disk (i.e., the location of their original carriers). We have developed a new and comprehensive methodology to simulate the growth of embryos to planetary bodies in which we use a combination of SPH and N-body codes to accurately model collisions as well as the transport/transfer of chemical compounds. Our methodology accounts for the loss of volatiles (e.g., ice sublimation) during the orbital evolution of their carriers and accurately tracks their transfer from one body to another. Results of our simulations show that traditional N-body modeling of terrestrial planet formation overestimates the mass and water contents of the final planets by over 60%, implying not only that the amounts of water suggested by such models are far from realistic, but also that small planets such as Mars can form in these simulations when collisions are treated properly. We will present details of our methodology and discuss its implications for terrestrial planet formation and water delivery to Earth.
A computational algorithm addressing how vessel length might depend on vessel diameter
Jing Cai; Shuoxin Zhang; Melvin T. Tyree
2010-01-01
The objective of this method paper was to examine a computational algorithm that may reveal how vessel length might depend on vessel diameter within any given stem or species. The computational method requires the assumption that vessels remain approximately constant in diameter over their entire length. When this method is applied to three species or hybrids in the...
2007-05-01
Actinide product radionuclides... actinides, and fission products in fallout. Doses from low-linear energy transfer (LET) radiation (beta particles and gamma rays) are reported separately...assumptions about the critical parameters used in calculating internal doses – resuspension factor, breathing rate, fractionation, and scenario elements – to
1981-01-01
comparison of formal and informal design methodologies will show how we think they are converging. Lastly, I will describe our involvement with the DoD...computer security must begin with the design methodology, with the objective being provability. The idea of a formal evaluation and on-the-shelf... Methodologies] Here we can compare the formal design methodologies with those used by informal practitioners like Control Data. Obviously, both processes
Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds
NASA Astrophysics Data System (ADS)
Cheng, Tian
Venetian blinds are widely used in buildings to control the amount of incoming daylight, improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that the proper design and operation of window systems can result in significant energy savings in both lighting and cooling. However, no convenient computer tool currently allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2 and EnergyPlus, have been experimentally examined in this study. Results indicate that the first two give unacceptable accuracy due to the unrealistic assumptions they adopt, while the last may generate large errors in certain conditions. Moreover, current computer tools have to conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is not computationally efficient, and it is particularly unsuitable for optimal design at the initial stage, when the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulations and the optimal design of buildings with blinds.

Based on geometric optics and the radiosity method, a mathematical model is developed to simulate the daylighting behaviour of venetian blinds, so that indoor illuminance at any reference point can be computed directly and efficiently. The models have been validated with both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new models agree well with the measured data, their accuracy is equivalent to that of Radiance, and their computational efficiency is much higher than that of Radiance as well as EnergyPlus.

Two new methods are developed for the thermal simulation of buildings. First, a fast Fourier transform (FFT) method is presented to avoid the root-searching process in the inverse Laplace transform of multilayered walls. Generalized explicit FFT formulae for calculating the discrete Fourier transform (DFT) are developed for the first time; they greatly facilitate the implementation of the FFT and provide a basis for generating symbolic response factors. Validation simulations show that the method generates response factors as accurate as the analytical solutions. The second method allows direct estimation of annual or seasonal cooling loads without the need for tedious hourly energy simulations; it is validated against hourly simulation results from DOE2. A symbolic long-term cooling load can then be created by combining the two methods with thermal network analysis. The symbolic long-term cooling load keeps the design parameters of interest as symbols, which is particularly useful for optimal design and sensitivity analysis.

The methodology is applied to an office building in Hong Kong for the optimal design of the building envelope. Design variables such as window-to-wall ratio, building orientation, and glazing optical and thermal properties are included in the study. Results show that the selected design values can significantly affect the energy performance of windows, and that the optimal design of side-lit buildings can greatly enhance energy savings. The application example also demonstrates that the developed methodology significantly facilitates optimal building design and sensitivity analysis, and leads to high computational efficiency.
Peirlinck, Mathias; De Beule, Matthieu; Segers, Patrick; Rebelo, Nuno
2018-05-28
Patient-specific biomechanical modeling of the cardiovascular system is complicated by the presence of a physiological pressure load, given that the imaged tissue is in a pre-stressed and pre-strained state. Neglecting this prestressed state in solid tissue mechanics models leads to erroneous metrics (e.g. wall deformation, peak stress, wall shear stress), which in turn are used for device design choices, risk assessment (e.g. procedure, rupture) and surgery planning. It is thus of utmost importance to incorporate this deformed and loaded tissue state into the computational models, which implies solving an inverse problem (calculating an undeformed geometry given the load and the deformed geometry). Methodologies to solve this inverse problem can be categorized into iterative and direct methodologies, each having inherent advantages and disadvantages. Direct methodologies are typically based on the inverse elastostatics (IE) approach and offer a computationally efficient single-shot way to compute the in vivo stress state. However, cumbersome and problem-specific derivations of the formulations, and non-trivial access to the finite element analysis (FEA) code, especially for commercial products, have hindered broad implementation of these methodologies. For that reason, we developed a novel, modular IE approach and implemented this methodology in a commercial FEA solver with minor user subroutine interventions. The accuracy of this methodology was demonstrated in an arterial tube and a porcine biventricular myocardium model. The computational power and efficiency of the methodology were shown by computing the in vivo stress and strain state, and the corresponding unloaded geometry, for two models containing multiple interacting incompressible, anisotropic (fiber-embedded) and hyperelastic material behaviors: a patient-specific abdominal aortic aneurysm and a full 4-chamber heart model. Copyright © 2018 Elsevier Ltd. All rights reserved.
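For orientation, the iterative family of methods that the abstract contrasts with the direct IE approach can be illustrated with a one-dimensional fixed-point (backward-displacement) sketch: repeatedly pull the unloaded-geometry guess back by the mismatch between the forward-deformed configuration and the imaged one. The toy forward model below is an assumption chosen only for illustration, not the authors' formulation.

    def forward(X, P, k=2.0):
        """Toy nonlinear forward model: deformed coordinate of a stiffening
        spring of unloaded length X under load P (purely illustrative)."""
        return X + P / (k * (1.0 + X))

    def find_unloaded(x_target, P, tol=1e-10, max_iter=100):
        """Fixed-point (backward-displacement) iteration for the unloaded
        geometry X such that forward(X, P) matches the imaged x_target."""
        X = x_target                      # initial guess: the imaged geometry
        for _ in range(max_iter):
            residual = forward(X, P) - x_target
            if abs(residual) < tol:
                break
            X -= residual                 # pull the guess back by the mismatch
        return X

    X0 = find_unloaded(x_target=1.30, P=1.0)
    print(f"unloaded length = {X0:.6f}, check: forward -> {forward(X0, 1.0):.6f}")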
MIX: a computer program to evaluate interaction between chemicals
Jacqueline L. Robertson; Kimberly C. Smith
1989-01-01
A computer program, MIX, was designed to identify pairs of chemicals whose interaction results in a response that departs significantly from the model predicated on the assumption of independent, uncorrelated joint action. This report describes the MIX program, its statistical basis, and instructions for its use.
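MIX's full statistical machinery is not described in the abstract; the sketch below only shows the baseline such a test works against, i.e. the expected mixture response under independent, uncorrelated joint action, and compares it with a hypothetical observed response.

    def expected_independent(p_a, p_b):
        """Expected proportion responding to the mixture if chemicals A and B
        act independently (uncorrelated joint action): 1 - (1-pA)(1-pB)."""
        return 1.0 - (1.0 - p_a) * (1.0 - p_b)

    # Hypothetical single-chemical responses and an observed mixture response
    p_a, p_b = 0.30, 0.45
    p_expected = expected_independent(p_a, p_b)      # 0.615
    p_observed = 0.80                                # hypothetical assay result
    print(f"expected under independence: {p_expected:.3f}, observed: {p_observed:.3f}")
    # A formal test (as in MIX) would ask whether the departure of the observed
    # response from the independence prediction is statistically significant.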
Naïve and Robust: Class-Conditional Independence in Human Classification Learning
ERIC Educational Resources Information Center
Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.
2018-01-01
Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
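The class-conditional (naive) independence assumption is easy to state in code: the joint likelihood of the features factorizes into per-feature terms. A minimal sketch with hypothetical probabilities follows; it illustrates the assumption itself, not the experimental task used in the paper.

    import numpy as np

    # Class-conditional independence: P(f1, f2 | c) = P(f1 | c) * P(f2 | c).
    # Tiny binary-feature classifier showing how the assumption collapses a
    # joint table into per-feature terms (all values are hypothetical).
    prior = {"A": 0.5, "B": 0.5}
    p_feature_given_class = {            # P(feature_i = 1 | class)
        "A": np.array([0.8, 0.2]),
        "B": np.array([0.3, 0.7]),
    }

    def posterior(x):
        """P(class | features x) under class-conditional independence."""
        scores = {}
        for c, p in p_feature_given_class.items():
            likelihood = np.prod(np.where(np.asarray(x) == 1, p, 1.0 - p))
            scores[c] = prior[c] * likelihood
        z = sum(scores.values())
        return {c: s / z for c, s in scores.items()}

    print(posterior([1, 0]))   # strongly favours class A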
Analysis of Introducing Active Learning Methodologies in a Basic Computer Architecture Course
ERIC Educational Resources Information Center
Arbelaitz, Olatz; Martín, José I.; Muguerza, Javier
2015-01-01
This paper presents an analysis of introducing active methodologies in the Computer Architecture course taught in the second year of the Computer Engineering Bachelor's degree program at the University of the Basque Country (UPV/EHU), Spain. The paper reports the experience from three academic years, 2011-2012, 2012-2013, and 2013-2014, in which…
Computer Network Operations Methodology
2004-03-01
means of their computer information systems. Disrupt - This type of attack focuses on disrupting as “attackers might surreptitiously reprogram enemy...by reprogramming the computers that control distribution within the power grid. A disruption attack introduces disorder and inhibits the effective...between commanders. The use of methodologies is widespread and done subconsciously to assist individuals in decision making. The processes that
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction is given to plasma simulation using computers and the difficulties encountered on currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable for simulating GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of these assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors from previously published studies. We also provide an example of how simulating network models under different settings helps determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than more dedicated systems, but it still achieves good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
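To make the notions of update scheme and attractor concrete, here is a minimal synchronous Boolean network simulator that enumerates all states and extracts the attractors by following trajectories until they revisit a state. The three-gene network and its rules are hypothetical, and the code is unrelated to the ASP-G implementation.

    from itertools import product

    # Minimal synchronous Boolean network: the state of every gene is updated
    # simultaneously at each step. The rules below are hypothetical.
    rules = {
        "g1": lambda s: s["g2"] and not s["g3"],
        "g2": lambda s: s["g1"],
        "g3": lambda s: not s["g1"],
    }
    genes = sorted(rules)

    def step(state):
        named = {name: value for name, value in zip(genes, state)}
        return tuple(rules[g](named) for g in genes)

    def attractors():
        found = set()
        for start in product([False, True], repeat=len(genes)):
            seen, state = [], start
            while state not in seen:          # follow the trajectory
                seen.append(state)
                state = step(state)
            cycle = tuple(seen[seen.index(state):])   # the recurring part
            k = cycle.index(min(cycle))               # canonicalize rotations
            found.add(cycle[k:] + cycle[:k])
        return found

    for a in attractors():
        print("attractor:", a)   # two fixed points and one 2-state cycle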
An Econometric Model for Forecasting Income and Employment in Hawaii.
ERIC Educational Resources Information Center
Chau, Laurence C.
This report presents the methodology for short-run forecasting of personal income and employment in Hawaii. The econometric model developed in the study is used to make actual forecasts through 1973 of income and employment, with major components forecasted separately. Several sets of forecasts are made, under different assumptions on external…
ERIC Educational Resources Information Center
Liu, Shiang-Yao; Lin, Chuan-Shun; Tsai, Chin-Chung
2011-01-01
This study aims to test the nature of the assumption that there are relationships between scientific epistemological views (SEVs) and reasoning processes in socioscientific decision making. A mixed methodology that combines both qualitative and quantitative approaches of data collection and analysis was adopted not only to verify the assumption…
Achieving Methodological Alignment When Combining QCA and Process Tracing in Practice
ERIC Educational Resources Information Center
Beach, Derek
2018-01-01
This article explores the practical challenges one faces when combining qualitative comparative analysis (QCA) and process tracing (PT) in a manner that is consistent with their underlying assumptions about the nature of causal relationships. While PT builds on a mechanism-based understanding of causation, QCA as a comparative method makes claims…
Complexity, Methodology and Method: Crafting a Critical Process of Research
ERIC Educational Resources Information Center
Alhadeff-Jones, Michel
2013-01-01
This paper defines a theoretical framework aiming to support the actions and reflections of researchers looking for a "method" in order to critically conceive the complexity of a scientific process of research. First, it starts with a brief overview of the core assumptions framing Morin's "paradigm of complexity" and Le…
75 FR 44811 - Office of the Secretary; Agency Information Collection Activities
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-29
... the Office of Management and Budget (OMB) for review and approval in accordance with the Paperwork... response, and estimated total burden may be obtained from the RegInfo.gov Web site at http://www.reginfo... information, including the validity of the methodology and assumptions used; (3) Enhance the quality, utility...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-23
... be submitted to the Office of Management and Budget (OMB) for review and approval. Written comments..., including the validity of the methodology and assumptions used; (3) Enhance the quality, utility, and... to respondents other than their time. The total estimated annualized burden hours are 331. Estimated...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... the Office of Management and Budget (OMB) for review and approval. Written comments and/or suggestions... methodology and assumptions used; (3) The quality, utility, and clarity of the information to be collected.... There are capital, operating, and/or maintenance costs of $98,022. The total estimated annualized burden...
75 FR 44017 - Office of the Secretary; Submission for OMB review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
... submission of the following public information collection request (ICR) to the Office of Management and... estimated total burden may be obtained from the RegInfo.gov Web site at http://www.reginfo.gov/public/do... information, including the validity of the methodology and assumptions used; (3) Enhance the quality, utility...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... improve quality of life; (3) decrease the number of Americans with undiagnosed diabetes; (4) Among people... and resources that support behavior change, improved quality of life, and better diabetes outcomes; (3..., including the validity of the methodology and assumptions used; (3) Ways to enhance the quality, utility...
Finding the Right Fit: A Comparison of Process Assumptions Underlying Popular Drift-Diffusion Models
ERIC Educational Resources Information Center
Ashby, Nathaniel J. S.; Jekel, Marc; Dickert, Stephan; Glöckner, Andreas
2016-01-01
Recent research makes increasing use of eye-tracking methodologies to generate and test process models. Overall, such research suggests that attention, generally indexed by fixations (gaze duration), plays a critical role in the construction of preference, although the methods used to support this supposition differ substantially. In two studies…
Get Real in Individual Participant Data (IPD) Meta-Analysis: A Review of the Methodology
ERIC Educational Resources Information Center
Debray, Thomas P. A.; Moons, Karel G. M.; van Valkenhoef, Gert; Efthimiou, Orestis; Hummel, Noemi; Groenwold, Rolf H. H.; Reitsma, Johannes B.
2015-01-01
Individual participant data (IPD) meta-analysis is an increasingly used approach for synthesizing and investigating treatment effect estimates. Over the past few years, numerous methods for conducting an IPD meta-analysis (IPD-MA) have been proposed, often making different assumptions and modeling choices while addressing a similar research…
Enterprise Education Needs Enterprising Educators: A Case Study on Teacher Training Provision
ERIC Educational Resources Information Center
Penaluna, Kathryn; Penaluna, Andy; Usei, Caroline; Griffiths, Dinah
2015-01-01
Purpose: The purpose of this paper is to reflect upon the process that underpinned and informed the development and delivery of a "creativity-led" credit-bearing teacher training provision and to illuminate key factors of influence for the approaches to teaching and learning. Design/methodology/approach: Based on the assumption that…
Assessing Measurement Equivalence in Ordered-Categorical Data
ERIC Educational Resources Information Center
Elosua, Paula
2011-01-01
Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…
77 FR 50078 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the collection of information on those who are to... Program, the School Breakfast Program, and the Special Milk Program as mandated by the National School...
Problem Solving Frameworks for Mathematics and Software Development
ERIC Educational Resources Information Center
McMaster, Kirby; Sambasivam, Samuel; Blake, Ashley
2012-01-01
In this research, we examine how problem solving frameworks differ between Mathematics and Software Development. Our methodology is based on the assumption that the words used frequently in a book indicate the mental framework of the author. We compared word frequencies in a sample of 139 books that discuss problem solving. The books were grouped…
Analysis of the Impacts of an Early Start for Compliance with the Kyoto Protocol
1999-01-01
This report describes the Energy Information Administration's analysis of the impacts of an early start, using the same methodology as in Impacts of the Kyoto Protocol on U.S. Energy Markets and Economic Activity, with only those changes in assumptions caused by the early start date.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-15
....regulations.gov . You can search for the document by selecting ``Notice'' under Document Type, entering the... ``Search.'' If necessary, use the ``Narrow by Agency'' option on the Results page. Email: [email protected] the burden of the proposed collection, including the validity of the methodology and assumptions used...
78 FR 52996 - 60-Day Notice of Proposed Information Collection: Voluntary Disclosures.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-27
... System (FDMS) to comment on this notice by going to www.regulations.gov . You may search for the document by entering ``Public Notice '' in the search bar. If necessary, use the ``narrow by agency'' filter... collection, including the validity of the methodology and assumptions used. Enhance the quality, utility, and...
Aggregation Bias and the Analysis of Necessary and Sufficient Conditions in fsQCA
ERIC Educational Resources Information Center
Braumoeller, Bear F.
2017-01-01
Fuzzy-set qualitative comparative analysis (fsQCA) has become one of the most prominent methods in the social sciences for capturing causal complexity, especially for scholars with small- and medium-"N" data sets. This research note explores two key assumptions in fsQCA's methodology for testing for necessary and sufficient…
ERIC Educational Resources Information Center
Jung, Steven M.; And Others
Survey activities are reported which were designed to provide the foundation for a national evaluation of the effectiveness of programs assisted under the Career Education Incentive Act of 1977 (PL 95-207). The methodology described, called "program evaluability assessment," focuses on detailed analysis of program assumptions in order to…
ERIC Educational Resources Information Center
Stirling, Keith
2000-01-01
Describes a session on information retrieval systems that planned to discuss relevance measures with Web-based information retrieval; retrieval system performance and evaluation; probabilistic independence of index terms; vector-based models; metalanguages and digital objects; how users assess the reliability, timeliness and bias of information;…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-12
... provide a safe environment for miners. Methane is liberated from the strata, and noxious gases and dusts from blasting and other mining activities may be present. The explosive and noxious gases and dusts... collection of information, including the validity of the methodology and assumptions used; Enhance the...
Qualitative Teacher Research and the Complexity of Classroom Contexts
ERIC Educational Resources Information Center
Klehr, Mary
2012-01-01
This article discusses how the underlying assumptions and practices of teacher research position it as a distinct form of educational inquiry, and identifies qualitative methodology as a central influence on the work. A discussion of some of the common conceptualizations and processes of PK-12 teacher research, the complex yet continually changing…
ERIC Educational Resources Information Center
Giani, Matt S.
2015-01-01
The purpose of this study is to revisit the widely held assumption that the impact of socioeconomic background declines steadily across educational transitions, particularly at the postsecondary level. Sequential logit modeling, a staple methodological approach for estimating the relative impact of SES across educational stages, is applied to a…
75 FR 68315 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... methodology and assumptions used; (c) ways to enhance the quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the collection of information on those who are to... for the development and marketing of the invention and a description of the applicant's ability to...
10 CFR 436.16 - Establishing non-fuel and non-water cost categories.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PLANNING PROGRAMS Methodology and Procedures for Life Cycle Cost Analyses § 436.16 Establishing non-fuel... discount rate and escalation rate assumptions under § 436.14. When recurring costs begin to accrue at a later time, subtract the present value of recurring costs over the delay, calculated using the...
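The present-value arithmetic the rule describes (discount each year's recurring cost and subtract the present value accrued during the delay) can be sketched as below; the dollar amounts, discount rate, and study period are hypothetical, and actual 10 CFR 436 analyses use the prescribed DOE discount factors.

    def pv_annuity(amount, rate, years):
        """Present value of `amount` recurring at the end of each of `years` years."""
        return sum(amount / (1.0 + rate) ** t for t in range(1, years + 1))

    def pv_delayed_recurring(amount, rate, study_years, delay_years):
        """PV of recurring costs that only begin to accrue after `delay_years`:
        PV over the full study period minus PV over the delay."""
        return pv_annuity(amount, rate, study_years) - pv_annuity(amount, rate, delay_years)

    # Hypothetical numbers: $1,000/yr maintenance over a 25-year study period,
    # starting in year 6, at a 3% real discount rate.
    print(f"PV = ${pv_delayed_recurring(1000.0, 0.03, 25, 5):,.2f}")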
ERIC Educational Resources Information Center
Penalva, José
2014-01-01
This article examines the underlying problems of one particular perspective in educational theory that has recently gained momentum: the Wilfred Carr approach, which puts forward the premise that there is no theory in educational research and, consequently, it is a form of practice. The article highlights the scientific, epistemological and…
Accounting for Success and Failure: A Discursive Psychological Approach to Sport Talk
ERIC Educational Resources Information Center
Locke, Abigail
2004-01-01
In recent years, constructionist methodologies such as discursive psychology (Edwards & Potter, 1992) have begun to be used in sport research. This paper provides a practical guide to applying a discursive psychological approach to sport data. It discusses the assumptions and principles of discursive psychology and outlines the stages of a…
Thinking Resources for Educational Research Methods and Methodology
ERIC Educational Resources Information Center
Peim, Nick
2009-01-01
This paper considers the idea of a crisis in educational research. Some conventional expressions of that "crisis" are examined in terms of their assumptions about what is "proper" to educational research. The paper then affirms the role of "metaphysics" in educational research as a necessary dimension of method, as opposed to the naive assertion…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-05
... projections models, as well as changes to future vehicle mix assumptions, that influence the emission... methodology that may occur in the future such as updated socioeconomic data, new models, and other factors... updated mobile emissions model, the Motor Vehicle Emissions Simulator (also known as MOVES2010a), and to...
76 FR 36582 - Submission for Review: Standard Form 2809, Health Benefits Election Form
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-22
..., 2010 at Volume 75 FR 39587 allowing for a 60-day public comment period. We received comments from one... comments that: 1. Evaluate whether the proposed collection of information is necessary for the proper..., including the validity of the methodology and assumptions used; 3. Enhance the quality, utility, and clarity...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
..., including validity of the methodology and assumptions used; (c) ways to enhance the quality, utility and... collection. Abstract: 7 CFR 273.7(c)(9) requires State agencies to submit quarterly E&T Program Activity...
ERIC Educational Resources Information Center
Yoshizawa, Go; Iwase, Mineyo; Okumoto, Motoko; Tahara, Keiichiro; Takahashi, Shingo
2016-01-01
A value-centered approach to science, technology and society (STS) education illuminates the need of reflexive and relational learning through communication and public engagement. Visualization is a key to represent and compare mental models such as assumptions, background theories and value systems that tacitly shape our own understanding,…
Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty
NASA Astrophysics Data System (ADS)
Kuczera, George
1983-10-01
A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
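A stripped-down sketch of the linearization step described above: approximate the posterior covariance at the best-fit parameters by sigma^2 (J^T J)^(-1), where J is the Jacobian of the residuals. This ignores the Box-Cox transformation, autocorrelated and heteroscedastic error models, and priors that the full methodology handles; the toy catchment model and data are hypothetical.

    import numpy as np

    def linearized_posterior(residual_fn, theta_hat, eps=1e-6):
        """Approximate posterior covariance of parameters at the optimum:
        sigma^2 * (J^T J)^{-1}, with J the finite-difference Jacobian of residuals."""
        r0 = residual_fn(theta_hat)
        J = np.empty((r0.size, theta_hat.size))
        for j in range(theta_hat.size):
            dtheta = np.zeros_like(theta_hat)
            dtheta[j] = eps
            J[:, j] = (residual_fn(theta_hat + dtheta) - r0) / eps
        dof = r0.size - theta_hat.size
        sigma2 = float(r0 @ r0) / dof
        cov = sigma2 * np.linalg.inv(J.T @ J)
        return cov, np.sqrt(np.diag(cov))   # covariance and posterior std devs

    # Toy "catchment model": runoff = a * rain**b, evaluated at assumed best-fit (a, b)
    rng = np.random.default_rng(2)
    rain = rng.uniform(5, 100, size=60)
    runoff = 0.4 * rain**0.9 + rng.normal(0, 1.0, size=60)
    resid = lambda th: runoff - th[0] * rain ** th[1]

    cov, sd = linearized_posterior(resid, theta_hat=np.array([0.4, 0.9]))
    corr = cov[0, 1] / (sd[0] * sd[1])
    print("posterior std devs:", sd, " parameter correlation:", round(corr, 3))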
Authors' response: the primacy of conscious decision making.
Shanks, David R; Newell, Ben R
2014-02-01
The target article sought to question the common belief that our decisions are often biased by unconscious influences. While many commentators offer additional support for this perspective, others question our theoretical assumptions, empirical evaluations, and methodological criteria. We rebut in particular the starting assumption that all decision making is unconscious, and that the onus should be on researchers to prove conscious influences. Further evidence is evaluated in relation to the core topics we reviewed (multiple-cue judgment, deliberation without attention, and decisions under uncertainty), as well as priming effects. We reiterate a key conclusion from the target article, namely, that it now seems to be generally accepted that awareness should be operationally defined as reportable knowledge, and that such knowledge can only be evaluated by careful and thorough probing. We call for future research to pay heed to the different ways in which awareness can intervene in decision making (as identified in our lens model analysis) and to employ suitable methodology in the assessment of awareness, including the requirements that awareness assessment must be reliable, relevant, immediate, and sensitive.
A comprehensive review on the quasi-induced exposure technique.
Jiang, Xinguo; Lyles, Richard W; Guo, Runhua
2014-04-01
The goal is to comprehensively examine the state-of-the-art applications and methodological development of quasi-induced exposure and consequently pinpoint the future research directions in terms of implementation guidelines, limitations, and validity tests. The paper conducts a comprehensive review on approximately 45 published papers relevant to quasi-induced exposure regarding four key topics of interest: applications, responsibility assignment, validation of assumptions, and methodological development. Specific findings include that: (1) there is no systematic data screening procedure in place and how the eliminated crash data will impact the responsibility assignment is generally unknown; (2) there is a lack of necessary efforts to assess the validity of assumptions prior to its application and the validation efforts are mostly restricted to the aggregated levels due to the limited availability of exposure truth; and (3) there is a deficiency of quantitative analyses to evaluate the magnitude and directions of bias as a result of injury risks and crash avoidance ability. The paper points out the future research directions and insights in terms of the validity tests and implementation guidelines. Copyright © 2013 Elsevier Ltd. All rights reserved.
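For readers unfamiliar with the technique, quasi-induced exposure is usually operationalized by treating not-at-fault drivers in two-vehicle crashes as a proxy for exposure and computing a relative accident involvement ratio per driver group. The sketch below shows that core computation on hypothetical counts; it is not code from any of the reviewed papers.

    from collections import Counter

    # Hypothetical two-vehicle crash records: (driver_group, at_fault)
    records = [
        ("young", True), ("young", True), ("young", False),
        ("middle", True), ("middle", False), ("middle", False),
        ("older", True), ("older", False), ("older", False), ("older", False),
    ]

    at_fault = Counter(g for g, fault in records if fault)
    not_at_fault = Counter(g for g, fault in records if not fault)  # exposure proxy
    n_af, n_naf = sum(at_fault.values()), sum(not_at_fault.values())

    for group in sorted(set(g for g, _ in records)):
        rair = (at_fault[group] / n_af) / (not_at_fault[group] / n_naf)
        print(f"{group:7s} relative accident involvement ratio = {rair:.2f}")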
Railroad classification yard technology : computer system methodology : case study : Potomac Yard
DOT National Transportation Integrated Search
1981-08-01
This report documents the application of the railroad classification yard computer system methodology to Potomac Yard of the Richmond, Fredericksburg, and Potomac Railroad Company (RF&P). This case study entailed evaluation of the yard traffic capaci...
Comments on Musha's theorem that an evanescent photon in the microtubule is a superluminal particle.
Hari, Syamala D
2014-07-01
Takaaki Musha's research of high performance quantum computation in living systems is motivated by the theories of Penrose and Hameroff that microtubules in the brain function as quantum computers, and by those of Jibu and Yasue that the quantum states of microtubules depend upon boson condensates of evanescent photons. His work is based on the assumption that the evanescent photons described by Jibu et al. are superluminal and that they are tachyons defined and discussed by well-known physicists such as Sudarshan, Feinberg and Recami. Musha gives a brief justification for the assumption and sometimes calls it a theorem. However, the assumption is not valid because Jibu et al. stated that the evanescent photons have transmission speed smaller than that of light and that their mass is real and momentum is imaginary whereas a tachyon's mass is imaginary and momentum is real. We show here that Musha's proof of the "theorem" has errors and hence his theorem/assumption is not valid. This article is not meant to further discuss any biological aspects of the brain but only to comment on the consistency of the quantum-physical aspects of earlier work by Musha et al. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Boyce, Lola
1995-01-01
This report presents the results of both the fifth and sixth year effort of a research program conducted for NASA-LeRC by The University of Texas at San Antonio (UTSA). The research included on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for five variables, namely high temperature, high-cycle and low-cycle mechanical fatigue, creep and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using an updated version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of high-cycle mechanical fatigue, creep and thermal fatigue was performed. Then, using the current version of PROMISS, entitled PROMISS94, a second sensitivity study including the effect of low-cycle mechanical fatigue as well as the three previous effects was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing model predictions for a combination of high-cycle mechanical fatigue and high temperature effects with the corresponding experimental results were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
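The exact randomized multifactor equation used in PROMISS is not given in the abstract; as a loose illustration of the general idea, the sketch below evaluates a multiplicative multifactor strength-degradation ratio with randomly sampled effect values and empirical constants, and builds a cumulative distribution of lifetime strength by Monte Carlo. All functional forms and numbers are hypothetical, not the PROMISS formulation.

    import numpy as np

    rng = np.random.default_rng(3)

    def strength_ratio(effects):
        """Multiplicative multifactor degradation of the general form
        S/S0 = prod_i ((A_u - A) / (A_u - A_0))**a_i (illustrative only)."""
        ratio = 1.0
        for current, ultimate, reference, exponent in effects:
            ratio *= ((ultimate - current) / (ultimate - reference)) ** exponent
        return ratio

    def monte_carlo_cdf(n=20_000):
        """Sample random current effect values and empirical constants to build
        an empirical CDF of lifetime strength (all numbers are hypothetical)."""
        samples = []
        for _ in range(n):
            effects = [
                (rng.normal(650, 20), 1300.0, 23.0, rng.normal(0.5, 0.05)),  # temperature, C
                (rng.normal(1e4, 2e3), 1e7, 1.0, rng.normal(0.1, 0.02)),     # fatigue cycles
            ]
            samples.append(strength_ratio(effects))
        return np.sort(samples)

    s = monte_carlo_cdf()
    print("median lifetime strength ratio:", round(float(np.median(s)), 3))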
NASA Astrophysics Data System (ADS)
Azimi, Maryam
Radiation therapy has been used in the treatment of cancer tumors for several years, and many cancer patients receive radiotherapy. It may be used as primary therapy or in combination with surgery or other kinds of therapy, such as chemotherapy, hormone therapy, or some mixture of the three. The treatment objective is to destroy cancer cells or shrink the tumor by planning an adequate radiation dose to the desired target without damaging the normal tissues. Using the pre-treatment Computed Tomography (CT) images, most radiotherapy planning systems design the target and assume that the size of the tumor will not change throughout the treatment course, which takes 5 to 7 weeks. Based on this assumption, the total amount of radiation is planned and fractionated into the daily dose required to be delivered to the patient's body. However, this assumption is flawed because patients receiving radiotherapy show marked changes in tumor geometry during the treatment period. Therefore, there is a critical need to understand the changes in tumor shape and size over time during the course of radiotherapy in order to prevent significant effects of inaccuracy in the planning. In this research, a methodology is proposed to monitor and predict daily (fraction-day) tumor volume and surface changes of head and neck cancer tumors during the entire treatment period. In the proposed method, geometrical modeling and data mining techniques are used, rather than repetitive CT scans, to predict the tumor deformation for radiation planning. Clinical patient data were obtained from the University of Texas MD Anderson Cancer Center (MDACC). In the first step, using CT scan data, the tumor's progressive geometric changes during the treatment period are quantified. The next step uses regression analysis to develop predictive models for tumor geometry based on the geometric analysis results and selected patient attributes (age, weight, stage, etc.). Moreover, repeated-measures analyses have been applied to identify the effects of the selected patient attributes on tumor deformation. The main goal of the proposed methodology is to increase the accuracy of each treatment and the quality of life for patients.
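As an illustration of the kind of predictive model described, the sketch below fits a simple linear regression of log tumor volume on fraction day and selected patient attributes; the synthetic data, the chosen covariates, and the log-linear form are assumptions for illustration only, not the study's actual model.

    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical training data: one row per (patient, fraction day)
    n = 300
    day = rng.integers(0, 35, size=n)             # fraction day
    age = rng.normal(58, 9, size=n)               # patient attribute
    v0 = rng.uniform(10, 60, size=n)              # baseline tumor volume, cm^3
    log_v = np.log(v0) - 0.02 * day + 0.001 * (age - 58) + rng.normal(0, 0.05, n)

    # Linear model for log-volume: log V = b0 + b1*day + b2*age + b3*log V0
    X = np.column_stack([np.ones(n), day, age, np.log(v0)])
    beta, *_ = np.linalg.lstsq(X, log_v, rcond=None)

    def predict_volume(day, age, baseline_volume):
        x = np.array([1.0, day, age, np.log(baseline_volume)])
        return float(np.exp(x @ beta))

    print("predicted volume on day 20:", round(predict_volume(20, 60, 40.0), 1), "cm^3")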
NASA Astrophysics Data System (ADS)
Buckley, J.; Wilkinson, D.; Malaroda, A.; Metcalfe, P.
2017-01-01
Three alternative methodologies to the Computed Tomography Dose Index for the evaluation of Cone-Beam Computed Tomography dose are compared: the Cone-Beam Dose Index (CBDI), the methodology recommended in IAEA Human Health Report No. 5, and the methodology recommended by AAPM Task Group 111 (TG-111). The protocols were evaluated for Pelvis and Thorax scan modes on Varian® On-Board Imager (OBI) and Truebeam kV XI imaging systems. The weighted planar average dose was highest for the AAPM methodology across all scans, with the CBDI being the second highest overall. For the XI system, decreases of 17.96% and 1.14% from the TG-111 protocol to the IAEA and CBDI protocols, respectively, were observed for the Pelvis mode, and decreases of 18.15% and 13.10% for the Thorax mode. For the OBI system, the corresponding variation was 16.46% and 7.14% for the Pelvis mode, and 15.93% relative to the CBDI protocol for the Thorax mode.
Lay Theories Regarding Computer-Mediated Communication in Remote Collaboration
ERIC Educational Resources Information Center
Parke, Karl; Marsden, Nicola; Connolly, Cornelia
2017-01-01
Computer-mediated communication and remote collaboration have become an unexceptional norm as an educational modality for distance and open education; it is therefore necessary to research and analyze students' online learning experiences. This paper seeks to examine the assumptions and expectations held by students in regard to…
Dynamic mass transfer methods have been developed to better describe the interaction of the aerosol population with semi-volatile species such as nitrate, ammonia, and chloride. Unfortunately, these dynamic methods are computationally expensive. Assumptions are often made to r...
Two Studies Examining Argumentation in Asynchronous Computer Mediated Communication
ERIC Educational Resources Information Center
Joiner, Richard; Jones, Sarah; Doherty, John
2008-01-01
Asynchronous computer mediated communication (CMC) would seem to be an ideal medium for supporting development in student argumentation. This paper investigates this assumption through two studies. The first study compared asynchronous CMC with face-to-face discussions. The transactional and strategic level of the argumentation (i.e. measures of…
Integrating Computer Concepts into Principles of Accounting.
ERIC Educational Resources Information Center
Beck, Henry J.; Parrish, Roy James, Jr.
A package of instructional materials for an undergraduate principles of accounting course at Danville Community College was developed based upon the following assumptions: (1) the principles of accounting student does not need to be able to write computer programs; (2) computerized accounting concepts should be presented in this course; (3)…
26 CFR 1.752-2 - Partner's share of recourse liabilities.
Code of Federal Regulations, 2012 CFR
2012-04-01
... of present value. The present value of the guaranteed future interest payments is computed using a... interest index, the present value is computed on the assumption that the interest determined under the... first foreclosing on the property. When the partnership obtains the loan, the present value (discounted...
NASA Technical Reports Server (NTRS)
Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Unlike existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives for development and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) on nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.
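For readers unfamiliar with the baseline used in the comparison, the following sketch shows the implicit Newmark (average-acceleration, i.e. trapezoidal) scheme applied to a single-degree-of-freedom hardening spring, the kind of model the virtual-pulse methodology is benchmarked against. The spring law, parameters, and loading are assumed for illustration; this is not the virtual-pulse method itself.

```python
import numpy as np

# Implicit Newmark (average acceleration) for m*a + f_int(u) = p(t) with a
# cubic hardening spring; each step solves for u_{n+1} with Newton iteration.
m, k, alpha = 1.0, 100.0, 50.0          # mass, linear stiffness, cubic hardening
f_int = lambda u: k * u + alpha * u**3  # hardening spring (use -alpha for softening)
df_int = lambda u: k + 3 * alpha * u**2

dt, n_steps = 1e-3, 5000
u, v = 0.1, 0.0                         # initial displacement, velocity
a = (0.0 - f_int(u)) / m                # initial acceleration from equilibrium

for _ in range(n_steps):
    p = 0.0                             # free vibration (no external load)
    u_new = u                           # Newton iteration for u_{n+1}
    for _ in range(20):
        a_new = 4.0 / dt**2 * (u_new - u - dt * v) - a
        r = m * a_new + f_int(u_new) - p
        if abs(r) < 1e-8:
            break
        u_new -= r / (4.0 * m / dt**2 + df_int(u_new))
    a_new = 4.0 / dt**2 * (u_new - u - dt * v) - a
    v = v + 0.5 * dt * (a + a_new)
    u, a = u_new, a_new

print("displacement after %.1f s: %.6f" % (n_steps * dt, u))
```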
ERIC Educational Resources Information Center
Young, Keith
2016-01-01
This study examined the attitudes of teachers towards using tablet computers, predominantly Apple's iPad, across 22 post-primary schools in Ireland. The study also questions some previous research and assumptions on the educational use of tablet computers. The majority of schools were using devices with students and teachers; the combined size of…
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small-disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through larger angle-of-attack changes.
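The Eigensystem Realization Algorithm step can be summarized by the short sketch below, which builds a state-space realization (A, B, C) from a sequence of Markov parameters. The Markov parameters here come from a made-up discrete-time oscillator so the example runs end to end; in the paper they are identified from the aeroelastic code's responses.

```python
import numpy as np

# Minimal ERA sketch: Hankel matrices from Markov parameters, SVD, and a
# reduced-order realization (A, B, C) for a SISO system.
def era(Y, order, rows=20, cols=20):
    # Y[k] is the k-th Markov parameter (scalars for a SISO system).
    H0 = np.array([[Y[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[Y[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    S_half = np.diag(np.sqrt(s))
    S_half_inv = np.diag(1.0 / np.sqrt(s))
    A = S_half_inv @ U.T @ H1 @ Vt.T @ S_half_inv
    B = (S_half @ Vt)[:, :1]          # first input column
    C = (U @ S_half)[:1, :]           # first output row
    return A, B, C

# Markov parameters of a toy damped oscillator in discrete time (illustrative only).
A_true = np.array([[0.95, 0.10], [-0.10, 0.95]])
B_true = np.array([[0.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
Y = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item() for k in range(60)]

A, B, C = era(Y, order=2)
print("identified eigenvalues:", np.linalg.eigvals(A))
print("true eigenvalues:      ", np.linalg.eigvals(A_true))
```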
Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.
Pink, Joshua; Parker, Ben; Petrou, Stavros
2016-09-01
A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters. Studies should consistently report not only all simplifying assumptions made but also the estimated impact of these assumptions on the cost-effectiveness results.
NASA Astrophysics Data System (ADS)
Solazzi, Santiago G.; Rubino, J. Germán; Müller, Tobias M.; Milani, Marco; Guarracino, Luis; Holliger, Klaus
2016-11-01
Wave-induced fluid flow (WIFF) due to the presence of mesoscopic heterogeneities is considered as one of the main seismic attenuation mechanisms in the shallower parts of the Earth's crust. For this reason, several models have been developed to quantify seismic attenuation in the presence of heterogeneities of varying complexity, ranging from periodically layered media to rocks containing fractures and highly irregular distributions of fluid patches. Most of these models are based on Biot's theory of poroelasticity and make use of the assumption that the upscaled counterpart of a heterogeneous poroelastic medium can be represented by a homogeneous viscoelastic solid. Under this dynamic-equivalent viscoelastic medium (DEVM) assumption, attenuation is quantified in terms of the ratio of the imaginary and real parts of a frequency-dependent, complex-valued viscoelastic modulus. Laboratory measurements on fluid-saturated rock samples also rely on this DEVM assumption when inferring attenuation from the phase shift between the applied stress and the resulting strain. However, whether it is correct to use an effective viscoelastic medium to represent the attenuation arising from WIFF at mesoscopic scales in heterogeneous poroelastic media remains largely unexplored. In this work, we present an alternative approach to estimate seismic attenuation due to WIFF. It is fully rooted in the framework of poroelasticity and is based on the quantification of the dissipated power and stored strain energy resulting from numerical oscillatory relaxation tests. We employ this methodology to compare different definitions of the inverse quality factor for a set of pertinent scenarios, including patchy saturation and fractured rocks. This numerical analysis allows us to verify the correctness of the DEVM assumption in the presence of different kinds of heterogeneities. The proposed methodology has the key advantage of providing the local contributions of energy dissipation to the overall seismic attenuation, information that is not available when attenuation is retrieved from methods based on the DEVM assumption. Using the local attenuation contributions we provide further insights into the WIFF mechanism for randomly distributed fluid patches and explore the accumulation of energy dissipation in the vicinity of fractures.
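For reference, two commonly used definitions underlying the comparison are written below; these are standard textbook conventions and should be read as hedged stand-ins rather than the authors' exact expressions.

```latex
% DEVM-based estimate from a complex-valued viscoelastic modulus M(omega):
\[
  Q^{-1}_{\mathrm{DEVM}}(\omega) \;=\; \frac{\operatorname{Im} M(\omega)}{\operatorname{Re} M(\omega)}
\]
% Energy-based estimate from an oscillatory relaxation test, with \Delta W the
% energy dissipated per cycle and W_{\mathrm{peak}} the peak stored strain energy:
\[
  Q^{-1}_{\mathrm{energy}}(\omega) \;=\; \frac{\Delta W(\omega)}{2\pi\, W_{\mathrm{peak}}(\omega)}
\]
```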
Plagianakos, V P; Magoulas, G D; Vrahatis, M N
2006-03-01
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
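The data-parallel idea can be sketched as follows: the training set is split across workers, each worker evaluates the error and gradient on its partition, and the partial results are summed before the weight update. In this sketch a single logistic unit and Python's multiprocessing module stand in for the neural network and the parallel virtual machine used in the paper, so treat it as an analogy rather than the reported implementation.

```python
import numpy as np
from multiprocessing import Pool

# Each worker returns the error and gradient contribution of its data partition.
def partial_error_and_grad(args):
    w, X, y = args
    p = 1.0 / (1.0 + np.exp(-X @ w))           # forward pass on this partition
    err = np.sum((p - y) ** 2)                  # partial sum-of-squares error
    grad = X.T @ (2.0 * (p - y) * p * (1 - p))  # partial gradient
    return err, grad

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(4000, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    w = np.zeros(10)

    n_workers = 4
    chunks = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
    with Pool(n_workers) as pool:
        for _ in range(50):
            results = pool.map(partial_error_and_grad, [(w, Xc, yc) for Xc, yc in chunks])
            total_err = sum(e for e, _ in results)
            total_grad = sum(g for _, g in results)
            w -= 0.5 * total_grad / X.shape[0]   # simple batch gradient step
    print("final training error:", total_err)
```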
Gordon, H R; Wang, M
1992-07-20
The first step in the coastal zone color scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering contribution, Lr(r), to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, Lr(r) is computed by assuming that the ocean surface is flat. Computations of the radiance leaving a Rayleigh-scattering atmosphere overlying a rough Fresnel-reflecting ocean are presented to assess the radiance error caused by the flat-ocean assumption. The surface-roughness model is described in detail for both scalar and vector (including polarization) radiative transfer theory. The computations utilizing the vector theory show that the magnitude of the error depends significantly on the assumptions made regarding the shadowing of one wave by another. In the case of the coastal zone color scanner bands, we show that for moderate solar zenith angles the error is generally below the 1-digital-count level, except near the edge of the scan for high wind speeds. For larger solar zenith angles, the error is generally larger and can exceed 1 digital count at some wavelengths over the entire scan, even for light winds. The error in Lr(r) caused by ignoring surface roughness is shown to be of the same order of magnitude as that caused by uncertainties of +/- 15 mb in the surface atmospheric pressure or of +/- 50 Dobson units in the ozone concentration. For future sensors, which will have greater radiometric sensitivity, the error caused by the flat-ocean assumption in the computation of Lr(r) could be as much as an order of magnitude larger than the noise-equivalent spectral radiance in certain situations.
Modeling Endovascular Coils as Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Yadollahi Farsani, H.; Herrmann, M.; Chong, B.; Frakes, D.
2016-12-01
Minimally invasive surgeries are the state-of-the-art treatments for many pathologies. Treating brain aneurysms is no exception; invasive neurovascular clipping is no longer the only option, and endovascular coiling has become the most common treatment. Coiling isolates the aneurysm from blood circulation by promoting thrombosis within the aneurysm. One approach to studying intra-aneurysmal hemodynamics consists of virtually deploying finite element coil models and then performing computational fluid dynamics. However, this approach is often computationally expensive and requires extensive resources. The porous medium approach has been considered as an alternative to the conventional coil modeling approach because it lessens the complexity of computational fluid dynamics simulations by reducing the number of mesh elements needed to discretize the domain. There have been a limited number of attempts to treat endovascular coils as homogeneous porous media. However, the heterogeneity associated with coil configurations requires a more accurately defined porous medium in which the porosity and permeability change throughout the domain. We implemented this approach by introducing a lattice of sample volumes and utilizing techniques available in the field of interactive computer graphics. We observed that the introduction of the heterogeneity assumption was associated with significant changes in simulated aneurysmal flow velocities as compared to the homogeneous case. Moreover, as the sample volume size was decreased, the flow velocities approached an asymptotic value, showing the importance of the sample volume size selection. These results demonstrate that the homogeneous assumption for porous media that are inherently heterogeneous can lead to considerable errors. Additionally, this modeling approach allowed us to simulate post-treatment flows without considering the explicit geometry of a deployed endovascular coil mass, greatly simplifying computation.
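A minimal sketch of the heterogeneous porous-medium construction is given below: a lattice of cubic sample volumes is laid over a voxelized coil mass, a local porosity is computed per cell, and a local permeability is assigned from it. The voxel data are synthetic, and the Kozeny-Carman closure with an assumed wire diameter is our own stand-in; the study does not necessarily use this porosity-permeability relation.

```python
import numpy as np

# Synthetic voxelized coil mass: True where a coil wire occupies the voxel.
rng = np.random.default_rng(3)
coil_voxels = rng.random((120, 120, 120)) < 0.25
cell = 20                                          # sample-volume edge length in voxels
d_wire = 0.3e-3                                    # assumed coil wire diameter [m]

nx = coil_voxels.shape[0] // cell
porosity = np.empty((nx, nx, nx))
for i in range(nx):
    for j in range(nx):
        for k in range(nx):
            block = coil_voxels[i*cell:(i+1)*cell, j*cell:(j+1)*cell, k*cell:(k+1)*cell]
            porosity[i, j, k] = 1.0 - block.mean()   # local porosity of this sample volume

# Kozeny-Carman-type closure (assumption): k = d^2 * phi^3 / (180 * (1 - phi)^2)
permeability = d_wire**2 * porosity**3 / (180.0 * (1.0 - porosity) ** 2)
print("porosity range:  %.3f - %.3f" % (porosity.min(), porosity.max()))
print("permeability range (m^2): %.2e - %.2e" % (permeability.min(), permeability.max()))
```

The resulting cell-wise porosity and permeability fields would then feed a porous-medium momentum sink in the flow solver, in place of a single homogeneous value.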
Wauters, Lauri D J; Miguel-Moragas, Joan San; Mommaerts, Maurice Y
2015-11-01
To gain insight into the methodology of different computer-aided design/computer-aided manufacturing (CAD-CAM) applications for the reconstruction of cranio-maxillo-facial (CMF) defects, we reviewed and analyzed the available literature pertaining to CAD-CAM for use in CMF reconstruction. We proposed a classification system for the techniques used to design and manufacture implants and cutting, drilling, and/or guiding templates. The system consists of 4 classes (I-IV) that combine the techniques used for both the implant and the template to most accurately describe the methodology employed. Our classification system can be widely applied and should facilitate communication and immediate understanding of the methodology of CAD-CAM applications for the reconstruction of CMF defects.
Clarity of objectives and working principles enhances the success of biomimetic programs.
Wolff, Jonas O; Wells, David; Reid, Chris R; Blamires, Sean J
2017-09-26
Biomimetics, the transfer of functional principles from living systems into product designs, is increasingly being utilized by engineers. Nevertheless, recurring problems must be overcome if it is to avoid becoming a short-lived fad. Here we assess the efficiency and suitability of methods typically employed by examining three flagship examples of biomimetic design approaches from different disciplines: (1) the creation of gecko-inspired adhesives; (2) the synthesis of spider silk; and (3) the derivation of computer algorithms from natural self-organizing systems. We find that identification of the elemental working principles is the most crucial step in the biomimetic design process. It bears the highest risk of failure (e.g. losing the target function) due to false assumptions about the working principle. Common problems that hamper successful implementation are: (i) a discrepancy between biological functions and the desired properties of the product, (ii) uncertainty about objectives and applications, (iii) inherent limits in methodologies, and (iv) false assumptions about the biology of the models. Projects that aim for multi-functional products are particularly challenging to accomplish. We suggest a simplification, modularisation and specification of objectives, and a critical assessment of the suitability of the model. Comparative analyses, experimental manipulation, and numerical simulations followed by tests of artificial models have led to the successful extraction of working principles. A searchable database of biological systems would optimize the choice of a model system in top-down approaches that start at an engineering problem. Only when biomimetic projects become more predictable will there be wider acceptance of biomimetics as an innovative problem-solving tool among engineers and industry.
Eiser, C; Vance, Y H; Seamark, D
2000-11-01
To report the development and psychometric properties of a generic computer-delivered measure of quality of life (QoL) suitable for children aged 6-12 years: the Exqol. The theoretical model adopted is based on the assumption that poorer QoL is the result of discrepancies between an individual's actual self ('like me') and ideal self ('how I would like to be'). The Exqol consists of 12 pictures, each of which is rated twice: first in terms of 'like me' and second as 'I would like to be'. The Exqol is delivered on a Macintosh PowerBook and takes approximately 20 min to complete. Data are reported for 58 children with asthma (mean age = 8.95 years) and 69 healthy children (mean age = 7.49 years). To determine the validity of the Exqol, children with asthma also completed the Childhood Asthma Questionnaire (CAQ), and their mothers completed a measure of child vulnerability and caregiver QoL. Higher discrepancies were found for children with asthma compared with healthy children (P < 0.05). For children with asthma, significant correlations were found between discrepancy scores and two of the four subscales of the CAQ. Children who rated their asthma as more severe also had higher discrepancy scores (P < 0.05). The Exqol has acceptable internal reliability and validity and distinguishes between children with asthma and healthy children. These data provide preliminary support for the theoretical assumption that QoL reflects perceived discrepancies between an individual's actual and ideal self. Methodological refinements to the Exqol are suggested.
NASA Astrophysics Data System (ADS)
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology for solving large-scale two-phase linear programs, applied to a multiple-time-period animal diet problem under uncertainty in both the nutrient content of raw materials and the demand for finished products. The assumptions that multiple product formulas may be manufactured in the same time period and that raw-material and finished-product inventory may be held between periods have been added. Dantzig-Wolfe decomposition, Benders decomposition, and column generation techniques are combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used to test the approach in terms of efficiency and effectiveness trade-offs.
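To fix ideas, the sketch below solves a deliberately small, single-period diet blending LP with SciPy; the ingredient names, costs, and nutrient data are invented. The article's actual model is a much larger multi-period, two-phase program with inventory and uncertainty, solved via the decomposition and column generation techniques mentioned above.

```python
import numpy as np
from scipy.optimize import linprog

# Cost per kg of each ingredient: corn, soymeal, premix (all values invented).
cost = np.array([0.30, 0.45, 0.80])

# Nutrient content per kg of ingredient (rows: protein fraction, energy index).
nutrient = np.array([[0.08, 0.44, 0.10],
                     [3.4,  2.8,  1.5]])
nutrient_min = np.array([0.18, 3.0])         # minimum per kg of finished feed

# Minimize cost subject to nutrient >= minimum and ingredient fractions summing to 1.
res = linprog(
    c=cost,
    A_ub=-nutrient, b_ub=-nutrient_min,      # flip signs to express ">=" as "<="
    A_eq=np.ones((1, 3)), b_eq=[1.0],
    bounds=[(0, 1)] * 3,
    method="highs",
)
print("optimal blend (kg per kg of feed):", res.x)
print("cost per kg:", res.fun)
```

Under these invented numbers the optimum is roughly 72% corn and 28% soymeal with no premix; the decomposition machinery becomes necessary only once many periods, formulas, and uncertainty scenarios are stacked into one program.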
Conjugate gradient based projection - A new explicit methodology for frictional contact
NASA Technical Reports Server (NTRS)
Tamma, Kumar K.; Li, Maocheng; Sha, Desong
1993-01-01
With special attention to applicability to parallel computation and vectorization, a new and effective explicit approach for linear complementarity formulations involving a conjugate gradient based projection methodology is proposed in this study for contact problems with Coulomb friction. The overall objective is to provide an explicit methodology of computation for the complete contact problem with friction. In this regard, the primary idea for solving the linear complementarity formulations stems from an established search direction which is projected onto a feasible region determined by the non-negativity constraint; this direction is then applied to the Fletcher-Reeves conjugate gradient method, resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics, and fast computational speed, and is relatively simple to implement for contact problems involving Coulomb friction.
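The flavor of a conjugate gradient based projection scheme can be conveyed with the generic sketch below, which applies a projected Fletcher-Reeves iteration to a small linear complementarity problem w = Mz + q, z >= 0, z^T w = 0 with a symmetric positive-definite M. Step control, restarts, and the friction coupling are simplified, so this should be read as an illustration of the idea rather than the authors' exact algorithm.

```python
import numpy as np

# With M symmetric positive definite, the LCP is equivalent to minimizing
# 0.5 z^T M z + q^T z over z >= 0; the iterate is projected onto z >= 0.
def projected_cg(M, q, n_iter=200):
    z = np.zeros_like(q)
    g = M @ z + q                      # gradient of the quadratic objective
    d = -g
    for _ in range(n_iter):
        denom = d @ M @ d
        if denom <= 1e-14:
            break
        alpha = -(g @ d) / denom       # exact line search for the quadratic
        z = np.maximum(z + alpha * d, 0.0)   # projection onto the feasible set z >= 0
        g_new = M @ z + q
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves update
        d = -g_new + beta * d
        if d @ g_new >= 0:             # restart if the direction is no longer a descent direction
            d = -g_new
        g = g_new
    return z

# Small symmetric positive-definite test problem.
M = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
q = np.array([-1.0, 2.0, -3.0])
z = projected_cg(M, q)
w = M @ z + q
print("z:", z, "w:", w, "complementarity z.w:", z @ w)
```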
Research for the Fluid Field of the Centrifugal Compressor Impeller in Accelerating Startup
NASA Astrophysics Data System (ADS)
Li, Xiaozhu; Chen, Gang; Zhu, Changyun; Qin, Guoliang
2013-03-01
In order to study the flow field in the impeller during the accelerating start-up process of a centrifugal compressor, the 3-D and 1-D transient accelerating-flow governing equations along the streamline in the impeller are derived in detail, an assumption for the pressure gradient distribution is presented, and a solution method for the 1-D transient accelerating flow field is given based on this assumption. The solution method was implemented in a computer program and computational results were obtained. Comparison shows that the computed results agree with the test data, demonstrating the feasibility and effectiveness of the proposed method for solving the accelerating start-up problem of a centrifugal compressor.
Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments
NASA Astrophysics Data System (ADS)
Berk, Mario; Špačková, Olga; Straub, Daniel
2017-12-01
The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
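To illustrate how FORM enters such a framework, the sketch below runs the standard HLRF iteration on a made-up rational-method limit state (channel capacity minus peak runoff) with two normally distributed inputs. The limit state, distributions, and parameter values are assumptions chosen for brevity; the paper's catchment model and design charts are considerably richer.

```python
import numpy as np
from math import erfc, sqrt

# Toy limit state: g = q_cap - C * i * A (capacity minus rational-method peak flow).
A_catch = 2.0e6          # catchment area [m^2]
q_cap = 18.0             # assumed channel capacity [m^3/s]
muC, sdC = 0.6, 0.10     # runoff coefficient ~ Normal(0.6, 0.1)
mui, sdi = 30.0, 8.0     # rainfall intensity [mm/h] ~ Normal(30, 8)
to_m3s = 1.0 / (1000.0 * 3600.0)   # mm/h over an area in m^2 -> m^3/s

def g_and_grad(u):
    # Limit-state value and gradient in standard-normal (u) space.
    C = muC + sdC * u[0]
    i = mui + sdi * u[1]
    g = q_cap - C * i * A_catch * to_m3s
    grad = np.array([-sdC * i, -sdi * C]) * A_catch * to_m3s
    return g, grad

u = np.zeros(2)
for _ in range(50):                      # HLRF iteration
    g, grad = g_and_grad(u)
    u_new = (grad @ u - g) / (grad @ grad) * grad
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)                 # reliability index
pf = 0.5 * erfc(beta / sqrt(2.0))        # first-order failure probability per event
print("design point (u-space):", u, "beta: %.3f" % beta, "P_f: %.2e" % pf)
```

With these invented numbers the iteration converges in a few steps to a reliability index on the order of 2.5; the design point itself is the kind of representative rainfall/runoff combination summarized in the proposed design charts.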
NASA Astrophysics Data System (ADS)
Sethian, J.; Suckale, J.; Yu, J.; Elkins-Tanton, L. T.
2011-12-01
Numerous problems in the Earth sciences involve the dynamic interaction between solid bodies and viscous flow. The goal of this contribution is to develop and validate a computational methodology for modeling complex solid-fluid interactions with minimal simplifying assumptions. The approach we develop is general enough to be applicable in a wide range of geophysical systems ranging from crystal-bearing lava flows to sediment-rich rivers and aerosol transport. Our algorithm relies on a two-step projection scheme: In the first step, we solve the multiple-phase Navier-Stokes or Stokes equation, respectively, in both domains. In the second step, we project the velocity field in the solid domain onto a rigid-body motion by enforcing that the deformation tensor in the respective domain is zero. An important component of the numerical scheme is the accurate treatment of collisions between an arbitrary number of suspended solid bodies based on the impact Stokes number and the elasticity parameters of the solid phase. We perform several benchmark computations to validate our computations including wake formation behind fixed and mobile cylinders and cuboids, the settling speed of particles, and laboratory experiments of collision modes. Finally, we apply our method to investigate the competing effect of entrainment and fractionation in crystalline suspensions - an important question in the context of magma differentiation processes in magma chambers and magma oceans. We find that the properties and volume fraction of the crystalline phase play an important role for evaluating differentiation efficiency.
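The rigid-body projection of the second step can be illustrated in isolation: the velocity field inside a solid body is replaced by the translation and rotation that preserve its linear and angular momentum. The 2-D sketch below uses a synthetic velocity field; the coupled Navier-Stokes/Stokes solve of the first step and the collision treatment are omitted.

```python
import numpy as np

# Synthetic cell centers, masses, and pre-projection velocities inside a solid body.
rng = np.random.default_rng(4)
n = 500
pos = rng.random((n, 2))
mass = np.full(n, 1.0 / n)
vel = rng.normal(0.0, 1.0, (n, 2)) + np.array([0.5, 0.0])

com = (mass[:, None] * pos).sum(axis=0) / mass.sum()        # center of mass
r = pos - com

U = (mass[:, None] * vel).sum(axis=0) / mass.sum()          # translation: momentum / mass
L = np.sum(mass * (r[:, 0] * vel[:, 1] - r[:, 1] * vel[:, 0]))   # angular momentum (z)
I = np.sum(mass * (r ** 2).sum(axis=1))                     # moment of inertia about com
omega = L / I                                               # rotation rate

# Rigid-body velocity U + omega x r (omega is a scalar about z in 2-D); this is
# the projection that zeroes the deformation while preserving both momenta.
vel_rigid = U + omega * np.column_stack([-r[:, 1], r[:, 0]])

print("translation U:", U, "rotation omega:", omega)
print("removed deformation (L2 norm):", np.linalg.norm(vel - vel_rigid))
```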