Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F
2012-01-01
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as those usually employed for modeling biological tissues. In this work we present a technique, based on model reduction together with an efficient non-linear solver, that is able to deal with geometrically non-linear models. The performance of the technique is illustrated through several examples. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
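As a rough illustration of the projection idea behind such model reduction, here is a minimal POD sketch; the snapshots and the linear system are synthetic stand-ins, not the authors' solver:

```python
# Minimal sketch of projection-based model reduction for a generic system;
# all names and dimensions are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is a precomputed full-order solution (hypothetical data).
n_dof, n_snap = 1000, 40
S = rng.standard_normal((n_dof, n_snap))

# POD basis from the leading left singular vectors of the snapshots.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
k = 10                       # reduced dimension
Phi = U[:, :k]               # n_dof x k reduced basis

# Online stage: project a (here, linearized) system onto the basis and solve
# a k x k system instead of an n_dof x n_dof one; this size reduction is the
# source of the speedup that makes 500 Hz update rates plausible.
K = np.diag(rng.uniform(1.0, 2.0, n_dof))   # stand-in stiffness matrix
f = rng.standard_normal(n_dof)
K_r = Phi.T @ K @ Phi
f_r = Phi.T @ f
u = Phi @ np.linalg.solve(K_r, f_r)         # approximate full-order solution
```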
Musculoskeletal modelling in dogs: challenges and future perspectives.
Dries, Billy; Jonkers, Ilse; Dingemanse, Walter; Vanwanseele, Benedicte; Vander Sloten, Jos; van Bree, Henri; Gielen, Ingrid
2016-05-18
Musculoskeletal models have proven to be a valuable tool in human orthopaedics research. Recently, veterinary research has taken an interest in the computer modelling approach to understand the forces acting upon the canine musculoskeletal system. While many of the methods employed in human musculoskeletal models can be applied to canine musculoskeletal models, not all techniques are applicable. This review summarizes the important parameters necessary for modelling, the techniques employed in human musculoskeletal models, and the limitations in transferring those techniques to canine modelling research. The major challenges in future canine modelling research are likely to centre around devising alternative techniques for obtaining maximal voluntary contractions, as well as finding scaling factors to adapt a generalized canine musculoskeletal model to represent specific breeds and subjects.
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1993-01-01
Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation provides an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered, governed, respectively, by vibrational relaxation, weak dissociation, strong dissociation, and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. Here it was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean-square-error sense.
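A minimal sketch of the model-averaging step, assuming BIC-based posterior weights and placeholder forecasts in place of the actual ERNN and QR outputs:

```python
# Illustrative Bayesian model averaging over two forecasters; the ERNN and
# QR models are stood in for by hypothetical predictions.
import numpy as np

y = np.array([10.0, 12.0, 11.5, 13.0, 12.5])        # held-out observations
pred_a = np.array([9.8, 12.3, 11.0, 13.2, 12.4])    # "ERNN" forecasts (hypothetical)
pred_b = np.array([10.5, 11.6, 11.9, 12.6, 12.8])   # "QR" forecasts (hypothetical)

def bic(y, pred, n_params):
    # Gaussian-likelihood BIC approximation from residual sum of squares.
    n = len(y)
    rss = np.sum((y - pred) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Approximate posterior model probabilities via BIC weights.
bics = np.array([bic(y, pred_a, 4), bic(y, pred_b, 3)])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

hybrid = w[0] * pred_a + w[1] * pred_b              # BMA combined forecast
print("weights:", w, "hybrid MSE:", np.mean((y - hybrid) ** 2))
```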
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clegg, Samuel M; Barefield, James E; Wiens, Roger C
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
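A hedged sketch of the PLS calibration step using scikit-learn; the spectra and concentrations below are synthetic placeholders, not the 18 rock samples:

```python
# PLS calibration model for LIBS-like spectra (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_channels = 18, 2048
X = rng.standard_normal((n_samples, n_channels))     # emission spectra
y = rng.uniform(0, 60, n_samples)                    # e.g., an oxide wt.% (hypothetical)

pls = PLSRegression(n_components=5)
pls.fit(X, y)
y_hat = pls.predict(X).ravel()                       # calibration predictions
print(np.corrcoef(y, y_hat)[0, 1])
```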
Inverse Problems in Geodynamics Using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.
2018-01-01
During the past few decades, numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, these numerical models depend on many uncertain properties from mineral physics, geochemistry, and petrology. Machine learning, a computational statistics-related technique and a subfield of artificial intelligence, has recently emerged rapidly in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms to predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques to solve an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used to characterize mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms to put constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
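A minimal sketch of the regression step, assuming flattened snapshot fields and random labels in place of the actual convection data:

```python
# Support vector regression of a mantle property (density-anomaly magnitude)
# from flattened convection snapshots; all data are random placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
snapshots = rng.standard_normal((200, 64 * 64))   # flattened temperature fields
density_anomaly = rng.uniform(0.0, 2.0, 200)      # % anomaly at mid-mantle (hypothetical)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(snapshots, density_anomaly)
print(model.predict(snapshots[:3]))               # predicted anomaly magnitudes
```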
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analyses for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
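A minimal active-subspace sketch (eigendecomposition of the averaged outer product of gradients), with a toy response function standing in for the HIV model:

```python
# Basic active-subspace construction; the response f is illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def grad_f(x):
    # Gradient of a toy response f(x) = sin(a.x); a sets each input's influence.
    a = np.array([1.0, 0.5, 0.01, 0.001])
    return a * np.cos(a @ x)

X = rng.uniform(-1, 1, (500, 4))     # samples of normalized inputs
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)                 # C = E[grad f grad f^T]

eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # sort descending
W1 = eigvec[:, :1]                   # active direction (large eigenvalue gap)
print("eigenvalues:", eigval)        # rapid decay indicates reducible input dimension
```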
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function in the form of a Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be determined analytically through optimization techniques. This technique is implemented in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and data sets with uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
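A sketch of the same idea, fitting all Prony-series constants (including the exponential time constants) by nonlinear least squares with SciPy rather than the VMA tool; the relaxation data are synthetic:

```python
# Fit G(t) = G_inf + G1*exp(-t/tau1) + G2*exp(-t/tau2) with all five
# constants free, instead of pre-assuming the time constants tau_i.
import numpy as np
from scipy.optimize import least_squares

t = np.logspace(-2, 3, 60)
G_data = 2.0 + 3.0 * np.exp(-t / 0.5) + 1.5 * np.exp(-t / 50.0)  # toy modulus data

def prony(p, t):
    G_inf, G1, tau1, G2, tau2 = p
    return G_inf + G1 * np.exp(-t / tau1) + G2 * np.exp(-t / tau2)

res = least_squares(lambda p: prony(p, t) - G_data,
                    x0=[1.0, 1.0, 0.1, 1.0, 10.0],
                    bounds=([0] * 5, [np.inf] * 5))
print(res.x)   # recovered G_inf, G_i, tau_i
```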
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
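A minimal Eigensystem Realization Algorithm sketch; here the Markov parameters are generated from a known toy system rather than identified from aeroelastic responses:

```python
# ERA: realize a discrete state-space model (A, B, C) from Markov parameters
# via an SVD of the Hankel matrix.
import numpy as np

# Toy SISO system used only to generate Markov parameters Y_k = C A^(k-1) B.
A_true = np.array([[0.9, 0.1], [-0.1, 0.8]])
B_true = np.array([[1.0], [0.0]])
C_true = np.array([[1.0, 0.5]])
Y = [C_true @ np.linalg.matrix_power(A_true, k) @ B_true for k in range(1, 21)]

r = 8
H0 = np.block([[Y[i + j] for j in range(r)] for i in range(r)])      # Hankel matrix
H1 = np.block([[Y[i + j + 1] for j in range(r)] for i in range(r)])  # shifted Hankel

U, s, Vt = np.linalg.svd(H0)
n = 2                                        # model order from singular-value drop
Un, Sn, Vn = U[:, :n], np.diag(np.sqrt(s[:n])), Vt[:n, :].T
A = np.linalg.inv(Sn) @ Un.T @ H1 @ Vn @ np.linalg.inv(Sn)
B = (Sn @ Vn.T)[:, :1]
C = (Un @ Sn)[:1, :]
print(np.linalg.eigvals(A))                  # should match eig(A_true)
```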
2014-05-01
solver to treat the spray process. An Adaptive Mesh Refinement (AMR) and fixed embedding technique is employed to capture the gas-liquid interface with high fidelity while keeping the cell ... in single and multi-hole nozzle configurations. The models were added to the present CONVERGE liquid fuel database and validated extensively
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, a maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
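A generic maximum-likelihood sketch, assuming a toy damped-oscillation model and Gaussian measurement noise in place of the jet-transport aeroelastic model:

```python
# Maximum likelihood estimation by minimizing the Gaussian negative
# log-likelihood; model and data are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 200)
zeta_true, wn_true = 0.05, 2.0
y_meas = np.exp(-zeta_true * wn_true * t) * np.cos(wn_true * t)
y_meas += 0.02 * np.random.default_rng(4).standard_normal(t.size)

def neg_log_like(p):
    zeta, wn, sigma = p
    y_model = np.exp(-zeta * wn * t) * np.cos(wn * t)
    r = y_meas - y_model
    # Gaussian NLL up to an additive constant.
    return 0.5 * np.sum(r**2) / sigma**2 + t.size * np.log(sigma)

fit = minimize(neg_log_like, x0=[0.1, 1.5, 0.1],
               bounds=[(1e-4, 1), (0.1, 10), (1e-4, 1)])
print(fit.x)   # estimated damping ratio, natural frequency, noise level
```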
User-Centered Innovation: A Model for "Early Usability Testing."
ERIC Educational Resources Information Center
Sugar, William A.; Boling, Elizabeth
The goal of this study is to show how some concepts and techniques from disciplines outside Instructional Systems Development (ISD) have the potential to extend and enhance the traditional view of ISD practice when they are employed very early in the ISD process. The concepts and techniques employed were user-centered design and usability, and…
Protein-membrane electrostatic interactions: Application of the Lekner summation technique
NASA Astrophysics Data System (ADS)
Juffer, André H.; Shepherd, Craig M.; Vogel, Hans J.
2001-01-01
A model has been developed to calculate the electrostatic interaction between biomolecules and lipid bilayers. The effect of ionic strength is included by means of explicit ions, while water is described as a background continuum. The bilayer is considered at the atomic level. The Lekner summation technique is employed to calculate the long-range electrostatic interactions. The new method is employed to estimate, with thermodynamic integration, the electrostatic contribution to the free energy of binding of sandostatin, a cyclic eight-residue analogue of the peptide hormone somatostatin, to lipid bilayers. Monte Carlo simulation techniques were employed to determine ion distributions and peptide orientations. Both neutral and negatively charged lipid bilayers were used. An error analysis to judge the quality of the computation is also presented. The applicability of combining the Lekner summation technique with computer simulation models of the adsorption of peptides (and proteins) into the interfacial region of lipid bilayers is discussed.
USDA-ARS?s Scientific Manuscript database
Cover: The electrospinning technique was employed to obtain conducting nanofibers based on polyaniline and poly(lactic acid). A statistical model was employed to describe how the process factors (solution concentration, applied voltage, and flow rate) govern the fiber dimensions. Nanofibers down to ...
Information management: considering adolescents' regulation of parental knowledge.
Marshall, Sheila K; Tilton-Weaver, Lauree C; Bosdet, Lara
2005-10-01
Employing Goffman's [(1959). The presentation of self in everyday life. New York: Doubleday and Company] notion of impression management, adolescents' conveyance of information about their whereabouts and activities to parents was assessed employing two methodologies. First, a two-wave panel design with a sample of 121 adolescents was used to test a model of information management incorporating two forms of information regulation (lying and willingness to disclose), adolescents' perception of their parents' knowledge about their activities, and adolescent misconduct. Path analysis was used to examine the model for two forms of misconduct as outcomes: substance use and antisocial behaviours. Fit indices indicate the path models were all good fits to the data. Second, 96 participants' responses to semi-structured questions were analyzed using a qualitative analytic technique. Findings reveal adolescents withhold or divulge information in coordination with their parents, employ impression management techniques, and try to balance safety issues with preservation of the parent-adolescent relationship.
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Edward; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
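A minimal tornado-diagram sketch; the parameters, ranges, and loss function are hypothetical stand-ins for the UCERF3 logic-tree branches and a portfolio loss model:

```python
# Tornado-diagram step of model-order reduction: vary one parameter at a
# time between its extremes, record the output swing, and rank.
import numpy as np

baseline = {"b_value": 1.0, "slip_model": 0.5, "mag_area": 0.3, "gmpe": 0.7}
ranges = {k: (v * 0.5, v * 1.5) for k, v in baseline.items()}  # hypothetical extremes

def loss(p):
    # Placeholder portfolio loss model.
    return 100 * p["b_value"] + 40 * p["gmpe"] + 5 * p["slip_model"] + 1 * p["mag_area"]

swings = {}
for k, (lo_v, hi_v) in ranges.items():
    out = []
    for v in (lo_v, hi_v):
        p = dict(baseline)
        p[k] = v
        out.append(loss(p))
    swings[k] = abs(out[1] - out[0])

# Parameters with small swings are candidates to fix at their baseline values.
for k, s in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{k:>10s}: swing = {s:.1f}")
```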
Shrestha, Badri Man; Haylor, John
2017-11-15
Rat models of renal transplant are used to investigate immunologic processes and responses to therapeutic agents before their translation into routine clinical practice. In this study, we have described details of rat surgical anatomy and our experiences with the microvascular surgical technique relevant to renal transplant by employing donor inferior vena cava and aortic conduits. For this study, 175 rats (151 Lewis and 24 Fisher) were used to establish the Fisher-Lewis rat model of chronic allograft injury at our institution. Anatomic and technical details were recorded during the period of training and establishment of the model. A final group of 12 transplanted rats were studied for an average duration of 51 weeks for the Lewis-to-Lewis isografts (5 rats) and 42 weeks for the Fisher-to-Lewis allografts (7 rats). Functional measurements and histology confirmed the diagnosis of chronic allograft injury. Mastering the anatomic details and microvascular surgical techniques can lead to the successful establishment of an experimental renal transplant model.
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
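A minimal scalar EKF sketch, with toy dynamics standing in for the reduced-order vocal fold model:

```python
# Extended Kalman filter for a scalar nonlinear state; all models are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: x + 0.1 * np.sin(x)       # nonlinear state transition
h = lambda x: x**2                      # nonlinear measurement model
Q, R = 1e-4, 1e-2                       # process / measurement noise variances

x_true, x_est, P = 1.0, 0.8, 1.0
for _ in range(50):
    x_true = f(x_true) + rng.normal(0, np.sqrt(Q))
    z = h(x_true) + rng.normal(0, np.sqrt(R))

    # Predict (F and H below are local Jacobians of f and h).
    x_pred = f(x_est)
    F = 1 + 0.1 * np.cos(x_est)
    P = F * P * F + Q

    # Update
    H = 2 * x_pred
    K = P * H / (H * P * H + R)
    x_est = x_pred + K * (z - h(x_pred))
    P = (1 - K * H) * P

print(x_true, x_est)                    # estimate tracks the true state
```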
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
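A sketch of component-wise stage-wise boosting under simplifying assumptions (squared-error loss in place of the likelihood-based loss of the cited approach, and synthetic data):

```python
# Component-wise least-squares boosting for a high-dimensional linear predictor.
import numpy as np

rng = np.random.default_rng(6)
n, p = 100, 1000
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta = np.zeros(p)
nu = 0.1                                    # shrinkage (learning rate)
norms2 = (X**2).sum(axis=0)
for _ in range(200):                        # boosting steps
    r = y - X @ beta                        # current residuals
    num = X.T @ r
    j = np.argmax(num**2 / norms2)          # component giving largest SS reduction
    beta[j] += nu * num[j] / norms2[j]      # update only that component

print(np.nonzero(beta)[0][:10])             # selected covariates
```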
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
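A minimal sketch of the sketching idea on a plain linear least-squares problem (NumPy only; the forward operator is a placeholder, not the PCGA machinery):

```python
# A random "sketching" matrix S compresses a tall least-squares problem
# min ||J m - d|| down to k rows before inversion.
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_par, k = 20_000, 50, 200
J = rng.standard_normal((n_obs, n_par))            # stand-in forward operator
m_true = rng.standard_normal(n_par)
d = J @ m_true + 0.01 * rng.standard_normal(n_obs)

S = rng.standard_normal((k, n_obs)) / np.sqrt(k)   # Gaussian sketch matrix
m_hat, *_ = np.linalg.lstsq(S @ J, S @ d, rcond=None)
print(np.linalg.norm(m_hat - m_true))              # small despite 100x data reduction
```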
An Examination of Sampling Characteristics of Some Analytic Factor Transformation Techniques.
ERIC Educational Resources Information Center
Skakun, Ernest N.; Hakstian, A. Ralph
Two population raw data matrices were constructed by computer simulation techniques. Each consisted of 10,000 subjects and 12 variables, and each was constructed according to an underlying factorial model consisting of four major common factors, eight minor common factors, and 12 unique factors. The computer simulation techniques were employed to…
A Simulation of AI Programming Techniques in BASIC.
ERIC Educational Resources Information Center
Mandell, Alan
1986-01-01
Explains the functions of and the techniques employed in expert systems. Offers the program "The Periodic Table Expert," as a model for using artificial intelligence techniques in BASIC. Includes the program listing and directions for its use on: Tandy 1000, 1200, and 2000; IBM PC; PC Jr; TRS-80; and Apple computers. (ML)
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through application of computer simulation packages employing 'component mode synthesis' techniques. The Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data were to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
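One Monte Carlo replicate of the comparison, sketched with a nearly collinear two-variable design and a one-component PCR; all numbers are illustrative:

```python
# OLS vs. principal component regression under multicollinearity.
import numpy as np

rng = np.random.default_rng(8)
n = 20
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)     # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 0.3 * rng.standard_normal(n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)   # unstable slope estimates

# PCR: regress on the leading principal component of the centered predictors.
Z = np.column_stack([x1, x2]) - [x1.mean(), x2.mean()]
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
T = Z @ Vt[:1].T                                   # first PC scores
g, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), T]), y, rcond=None)
beta_pcr = Vt[:1].T.ravel() * g[1]                 # back-transformed slopes
print("OLS slopes:", beta_ols[1:], "PCR slopes:", beta_pcr)
```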
Low temperature ablation models made by pressure/vacuum application
NASA Technical Reports Server (NTRS)
Fischer, M. C.; Heier, W. C.
1970-01-01
Method developed employs high pressure combined with strong vacuum force to compact ablation models into desired conical shape. Technique eliminates vapor hazard and results in high material density providing excellent structural integrity.
Knowledge discovery in cardiology: A systematic literature review.
Kadi, I; Idri, A; Fernandez-Aleman, J L
2017-01-01
Data mining (DM) provides the methodology and technology needed to transform huge amounts of data into useful information for decision making. It is a powerful process employed to extract knowledge and discover new patterns embedded in large data sets. Data mining has been increasingly used in medicine, particularly in cardiology. In fact, DM applications can greatly benefit all those involved in cardiology, such as patients, cardiologists and nurses. The purpose of this paper is to review papers concerning the application of DM techniques in cardiology so as to summarize and analyze evidence regarding: (1) the DM techniques most frequently used in cardiology; (2) the performance of DM models in cardiology; (3) comparisons of the performance of different DM models in cardiology. We performed a systematic literature review of empirical studies on the application of DM techniques in cardiology published in the period between 1 January 2000 and 31 December 2015. A total of 149 articles published between 2000 and 2015 were selected, studied and analyzed according to the following criteria: DM techniques and performance of the approaches developed. The results obtained showed that a significant number of the studies selected used classification and prediction techniques when developing DM models. Neural networks, decision trees and support vector machines were identified as being the techniques most frequently employed when developing DM models in cardiology. Moreover, neural networks and support vector machines achieved the highest accuracy rates and were proved to be more efficient than other techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Racial Variation in Vocational Rehabilitation Outcomes: A Structural Equation Modeling Approach
ERIC Educational Resources Information Center
Martin, Frank H.
2010-01-01
Numerous studies have indicated racial and ethnic disparities in the vocational rehabilitation (VR) system, including differences in acceptance, services provided, closure types, and employment outcomes. Few of these studies, however, have used advanced multivariate techniques or latent constructs to measure quality of employment outcomes (QEO) or…
Working-Class Jobs and New Parents' Mental Health
ERIC Educational Resources Information Center
Perry-Jenkins, Maureen; Smith, JuliAnna Z.; Goldberg, Abbie E.; Logan, Jade
2011-01-01
Little research has explored linkages between work conditions and mental health in working-class employed parents. The current study aims to address this gap, employing hierarchical linear modeling techniques to examine how levels of and changes in job autonomy, job urgency, supervisor support, and coworker support predicted parents' depressive…
Displacement Models for THUNDER Actuators having General Loads and Boundary Conditions
NASA Technical Reports Server (NTRS)
Wieman, Robert; Smith, Ralph C.; Kackley, Tyson; Ounaies, Zoubeida; Bernd, Jeff; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
This paper summarizes techniques for quantifying the displacements generated in THUNDER actuators in response to applied voltages for a variety of boundary conditions and exogenous loads. The partial differential equation (PDE) models for the actuators are constructed in two steps. In the first, previously developed theory quantifying thermal and electrostatic strains is employed to model the actuator shapes which result from the manufacturing process and subsequent repoling. Newtonian principles are then employed to develop PDE models which quantify displacements in the actuator due to voltage inputs to the piezoceramic patch. For this analysis, drive levels are assumed to be moderate so that linear piezoelectric relations can be employed. Finite element methods for discretizing the models are developed, and the performance of the discretized models is illustrated through comparison with experimental data.
3D-Printing: an emerging and a revolutionary technology in pharmaceuticals.
Singhvi, Gautam; Patil, Shalini; Girdhar, Vishal; Chellappan, Dinesh K; Gupta, Gaurav; Dua, Kamal
2018-06-01
One of the novel and progressive technologies employed in pharmaceutical manufacturing, medical device design, and tissue engineering is three-dimensional (3D) printing. 3D printing technologies provide great advantages over traditional methods in the fabrication of 3D scaffolds, in the control of pore size, porosity, and interconnectivity. Various 3D printing techniques include powder bed fusion, fused deposition modeling, binder deposition, inkjet printing, photopolymerization, and many others that are still evolving. 3D printing techniques have been employed in developing immediate-release products, systems that deliver multiple release modalities, and more. 3D printing has opened the door for a new generation of customized drug delivery with built-in flexibility for safer and more effective therapy. Our mini-review provides a quick snapshot of 3D printing, the various techniques employed, its applications, and its advancements in pharmaceutical sciences.
Frontal view reconstruction for iris recognition
Santos-Villalobos, Hector J; Bolme, David S; Boehnen, Chris Bensing
2015-02-17
Iris recognition can be accomplished for a wide variety of eye images by correcting input images with an off-angle gaze. A variety of techniques can be employed, including limbus modeling, corneal refraction modeling, aspherical eye modeling, optical flows, ray tracing, genetic algorithms, and the like. Precomputed transforms can enhance performance for use in commercial applications. With application of the technologies, images with significantly unfavorable gaze angles can be successfully recognized.
NASA Technical Reports Server (NTRS)
Hewes, D. E.
1978-01-01
A mathematical modeling technique was developed for the lift characteristics of straight wings throughout a very wide angle-of-attack range. The technique employs a mathematical switching function that facilitates the representation of the nonlinear aerodynamic characteristics in the partially and fully stalled regions and permits matching empirical data within ±4 percent of maximum values. Although specifically developed for use in modeling the lift characteristics, the technique appears to have other applications in both aerodynamic and nonaerodynamic fields.
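A sketch of a switching-function lift model in this spirit; the constants and the logistic switch are illustrative, not the report's empirical fit:

```python
# Blend the linear (attached-flow) lift slope with a flat-plate (fully
# stalled) model via a smooth switching function sigma(alpha).
import numpy as np

a0 = 5.7                          # lift-curve slope, per radian (illustrative)
alpha_star = np.deg2rad(15.0)     # blend center near stall
M = 30.0                          # switch sharpness

def lift_coefficient(alpha):
    sigma = 1.0 / (1.0 + np.exp(-M * (np.abs(alpha) - alpha_star)))   # 0 -> 1 across stall
    cl_linear = a0 * alpha                                            # attached flow
    cl_stalled = 2.0 * np.sin(alpha) * np.cos(alpha)                  # flat-plate model
    return (1 - sigma) * cl_linear + sigma * cl_stalled

alphas = np.deg2rad(np.arange(0, 91, 5))
print(np.round(lift_coefficient(alphas), 3))
```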
Surface and Flow Field Measurements on the FAITH Hill Model
NASA Technical Reports Server (NTRS)
Bell, James H.; Heineck, James T.; Zilliac, Gregory; Mehta, Rabindra D.; Long, Kurtis R.
2012-01-01
A series of experimental tests, using both qualitative and quantitative techniques, were conducted to characterize both surface and off-surface flow characteristics of an axisymmetric, modified-cosine-shaped, wall-mounted hill named "FAITH" (Fundamental Aero Investigates The Hill). Two separate models were employed: a 6" high, 18" base diameter machined aluminum model that was used for wind tunnel tests, and a smaller scale (2" high, 6" base diameter) sintered nylon version that was used in the water channel facility. Wind tunnel and water channel tests were conducted at mean test section speeds of 165 fps (Reynolds number based on height = 500,000) and 0.1 fps (Reynolds number of 1000), respectively. The ratio of model height to boundary layer height was approximately 3 for both tests. Qualitative techniques employed to characterize the complex flow included surface oil flow visualization for the wind tunnel tests and dye injection for the water channel tests. Quantitative techniques included a Cobra probe to determine point-wise steady and unsteady 3D velocities, Particle Image Velocimetry (PIV) to determine 3D velocities and turbulence statistics along specified planes, Pressure Sensitive Paint (PSP) to determine mean surface pressures, and Fringe Imaging Skin Friction (FISF) to determine surface skin friction (magnitude and direction). This initial report summarizes the experimental set-up, the techniques used, and the data acquired, and describes some details of the dataset that is being constructed for use by other researchers, especially the CFD community. Subsequent reports will discuss the data and their interpretation in more detail.
Terrain modeling for microwave landing system
NASA Technical Reports Server (NTRS)
Poulose, M. M.
1991-01-01
A powerful analytical approach for evaluating the terrain effects on a microwave landing system (MLS) is presented. The approach combines a multiplate model with a powerful and exhaustive ray tracing technique and an accurate formulation for estimating the electromagnetic fields due to the antenna array in the presence of terrain. Both uniform theory of diffraction (UTD) and impedance UTD techniques have been employed to evaluate these fields. Innovative techniques are introduced at each stage to make the model versatile enough to handle general terrain contours and to reduce the computational requirement to a minimum. The model is applied to several terrain geometries, and the results are discussed.
Modeling of switching regulator power stages with and without zero-inductor-current dwell time
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Yu, Y.
1979-01-01
State-space techniques are employed to derive accurate models for the three basic switching converter power stages: buck, boost, and buck/boost operating with and without zero-inductor-current dwell time. A generalized procedure is developed which treats the continuous-inductor-current mode without dwell time as a special case of the discontinuous-current mode when the dwell time vanishes. Abrupt changes of system behavior, including a reduction of the system order when the dwell time appears, are shown both analytically and experimentally. Merits resulting from the present modeling technique in comparison with existing modeling techniques are illustrated.
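A minimal state-space-averaging sketch for the buck stage in continuous conduction (no dwell time); component values are illustrative:

```python
# State-space averaging: weight the on- and off-state matrices by the duty
# ratio d. States are x = [inductor current, capacitor voltage].
import numpy as np

L_ind, C_cap, R_load, Vin, d = 100e-6, 470e-6, 5.0, 12.0, 0.5

A_on = np.array([[0.0, -1.0 / L_ind],
                 [1.0 / C_cap, -1.0 / (R_load * C_cap)]])
B_on = np.array([1.0 / L_ind, 0.0])
A_off, B_off = A_on, np.zeros(2)      # for the buck stage, only B changes

A_avg = d * A_on + (1 - d) * A_off    # averaged model: dx/dt = A x + B Vin
B_avg = d * B_on + (1 - d) * B_off

# DC operating point: x = -A^(-1) B Vin, giving V_out close to d * Vin.
x_dc = -np.linalg.solve(A_avg, B_avg * Vin)
print("I_L = %.2f A, V_out = %.2f V" % (x_dc[0], x_dc[1]))
```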
Soft computing techniques toward modeling the water supplies of Cyprus.
Iliadis, L; Maris, F; Tachos, S
2011-10-01
This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machine (ε-RSVM) and fuzzy weighted ε-RSVM models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross-validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
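A hedged sketch of ε-SVR with 5-fold cross validation in scikit-learn; the five inputs and the target are synthetic, and the fuzzy weighting is omitted:

```python
# epsilon-SVR with 5-fold cross validation on synthetic stand-in data.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.standard_normal((120, 5))              # five hypothetical input parameters
y = X @ np.array([1.0, 0.5, -0.3, 0.2, 0.1]) + 0.1 * rng.standard_normal(120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.1, C=10.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())                           # average R^2 across the 5 folds
```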
Reducing software mass through behavior control. [of planetary roving robots
NASA Technical Reports Server (NTRS)
Miller, David P.
1992-01-01
Attention is given to the tradeoff between communication and computation as regards a planetary rover (both these subsystems are very power-intensive, and both can be the major driver of the rover's power subsystem, and therefore the minimum mass and size of the rover). Software techniques that can be used to reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced, are discussed. Novel approaches to autonomous control, called behavior control, employ an entirely different approach, and for many tasks will yield a similar or superior level of autonomy to traditional control techniques, while greatly reducing the computational demand. Traditional systems have several expensive processes that operate serially, while behavior techniques employ robot capabilities that run in parallel. Traditional systems make extensive world models, while behavior control systems use minimal world models or none at all.
Structural equation modeling in pediatric psychology: overview and review of applications.
Nelson, Timothy D; Aylward, Brandon S; Steele, Ric G
2008-08-01
To describe the use of structural equation modeling (SEM) in the Journal of Pediatric Psychology (JPP) and to discuss the usefulness of SEM applications in pediatric psychology research. The use of SEM in JPP between 1997 and 2006 was examined and compared to leading journals in clinical psychology, clinical child psychology, and child development. SEM techniques were used in <4% of the empirical articles appearing in JPP between 1997 and 2006. SEM was used less frequently in JPP than in other clinically relevant journals over the past 10 years. However, results indicated a recent increase in JPP studies employing SEM techniques. SEM is an under-utilized class of techniques within pediatric psychology research, although investigations employing these methods are becoming more prevalent. Despite its infrequent use to date, SEM is a potentially useful tool for advancing pediatric psychology research with a number of advantages over traditional statistical methods.
Application of an enriched FEM technique in thermo-mechanical contact problems
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Bahmani, B.
2018-02-01
In this paper, an enriched FEM technique is employed for the thermo-mechanical contact problem, based on the extended finite element method. A fully coupled thermo-mechanical contact formulation is presented in the framework of the X-FEM technique that takes into account deformable continuum mechanics and transient heat transfer analysis. The Coulomb frictional law is applied for the mechanical contact problem, and a pressure-dependent thermal contact model is employed through an explicit formulation in the weak form of the X-FEM method. The equilibrium equations are discretized by the Newmark time splitting method, and the final set of non-linear equations is solved by the Newton-Raphson method using a staggered algorithm. Finally, in order to illustrate the capability of the proposed computational model, several numerical examples are solved and the results are compared with those reported in the literature.
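A minimal Newton-Raphson sketch for one implicit (Newmark-like) time step of a scalar nonlinear equation; the residual is a toy stand-in for the coupled thermo-mechanical system:

```python
# Newton-Raphson iteration for an implicit update r(u) = 0.
def residual(u, u_prev, dt):
    return (u - u_prev) / dt + u**3 - 1.0      # toy implicit time step

def jacobian(u, u_prev, dt):
    return 1.0 / dt + 3 * u**2                 # derivative of the residual

u_prev, dt, u = 1.0, 0.1, 1.0
for it in range(20):
    r = residual(u, u_prev, dt)
    if abs(r) < 1e-12:                         # converged
        break
    u -= r / jacobian(u, u_prev, dt)           # Newton update
print(u, it)
```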
NASA Astrophysics Data System (ADS)
Giannaros, Theodore; Kotroni, Vassiliki; Lagouvardos, Kostas
2015-04-01
Lightning data assimilation has recently been attracting increasing attention as a technique implemented in numerical weather prediction (NWP) models for improving precipitation forecasts. In the frame of the TALOS project, we implemented a robust lightning data assimilation technique in the Weather Research and Forecasting (WRF) model with the aim of improving precipitation prediction in Greece. The assimilation scheme employs lightning as a proxy for the presence or absence of deep convection. In essence, flash data are ingested into WRF to control the Kain-Fritsch (KF) convective parameterization scheme (CPS). When lightning is observed, indicating the occurrence of convective activity, the CPS is forced to attempt to produce convection, whereas the CPS may optionally be prevented from producing convection when no lightning is observed. Eight two-day precipitation events were selected for assessing the performance of the lightning data assimilation technique. The ingestion of lightning in WRF was carried out during the first 6 h of each event and the evaluation focused on the subsequent 24 h, constituting a realistic setup that could be used in operational weather forecasting applications. Results show that the implemented assimilation scheme can improve model performance in terms of precipitation prediction. Forecasts employing the assimilation of flash data were found to exhibit more skill than control simulations, particularly for the intense (>20 mm) 24 h rain accumulations. Analysis of the results also revealed that the option not to suppress the KF scheme in the absence of observed lightning leads to generally better performance compared to the experiments employing full control of the CPS triggering. Overall, the implementation of the lightning data assimilation technique is found to improve the model's ability to represent convection, especially in situations when past convection has modified the mesoscale environment in ways that affect the occurrence and evolution of subsequent convection.
Location Decisions of Charter Schools: An Examination of Michigan
ERIC Educational Resources Information Center
Koller, Kyle; Welsch, David M.
2017-01-01
Using school level data we examine which factors influence charter school location decisions. We augment previous research by employing a panel dataset, recently developed geographic techniques to measure distances and define areas, and employing a hurdle model to deal with the excess zero problem. The main results of our research indicate that,…
DOT National Transportation Integrated Search
1981-01-01
This document specifies the functional requirements for the AGT-SOS Feeder Systems Model (FSM), the type of hardware required, and the modeling techniques employed by the FSM. The objective of the FSM is to map the zone-to-zone transit patronage dema...
Using Drawings in Play Therapy: A Jungian Approach
ERIC Educational Resources Information Center
Birch, Jennifer; Carmichael, Karla D.
2009-01-01
Counselors working with children employ a variety of therapeutic techniques and tools from various theoretical models. One of these tools, drawing, is increasingly being implemented into play therapy. The purpose of this paper is to briefly review Jungian theoretical approaches as they pertain to drawing techniques within the counseling session.
Virtual microphone sensing through vibro-acoustic modelling and Kalman filtering
NASA Astrophysics Data System (ADS)
van de Walle, A.; Naets, F.; Desmet, W.
2018-05-01
This work proposes a virtual microphone methodology which enables full-field acoustic measurements for vibro-acoustic systems. The methodology employs a Kalman filtering framework in order to combine a reduced high-fidelity vibro-acoustic model with a structural excitation measurement and a small set of real microphone measurements on the system under investigation. By employing model order reduction techniques, a high-order finite element model can be converted into a much smaller model which preserves the desired accuracy and maintains the main physical properties of the original model. Due to the low order of the reduced-order model, it can be effectively employed in a Kalman filter. The proposed methodology is validated experimentally on a strongly coupled vibro-acoustic system. The virtual sensor vastly improves the accuracy with respect to regular forward simulation. The virtual sensor also allows the full sound field of the system to be recreated, which is very difficult or impossible to do through classical measurements.
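A minimal sketch of the model-order-reduction step via modal truncation, using a mass-spring chain as a stand-in for the vibro-acoustic finite element model:

```python
# Modal truncation: keep the lowest-frequency modes of a large system and
# project the matrices onto them, yielding a small plant model suitable
# for a Kalman filter.
import numpy as np

n = 200
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # chain stiffness matrix
M = np.eye(n)                                          # unit mass matrix

# Generalized eigenproblem K phi = w^2 M phi (M = I here, so eigh suffices).
w2, Phi = np.linalg.eigh(K)
k_modes = 10
V = Phi[:, :k_modes]                # lowest-frequency mode shapes

K_r = V.T @ K @ V                   # 10x10 reduced matrices replacing the
M_r = V.T @ M @ V                   # 200x200 originals
print(K_r.shape, np.allclose(M_r, np.eye(k_modes)))
```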
An advanced approach for computer modeling and prototyping of the human tooth.
Chang, Kuang-Hua; Magdum, Sheetalkumar; Khera, Satish C; Goel, Vijay K
2003-05-01
This paper presents a systematic and practical method for constructing accurate computer and physical models that can be employed for the study of human tooth mechanics. The proposed method starts with a histological section preparation of a human tooth. Through tracing outlines of the tooth on the sections, discrete points are obtained and are employed to construct B-spline curves that represent the exterior contours and dentino-enamel junction (DEJ) of the tooth, using a least squares curve fitting technique. The surface skinning technique is then employed to quilt the B-spline curves to create a smooth boundary and DEJ of the tooth using B-spline surfaces. These surfaces are respectively imported into SolidWorks via its application programming interface to create solid models. The solid models are then imported into Pro/MECHANICA Structure for finite element analysis (FEA). The major advantage of the proposed method is that it first generates smooth solid models, instead of finite element models in discretized form. As a result, a more advanced p-FEA can be employed for structural analysis, which usually provides superior results to traditional h-FEA. In addition, the solid model constructed is smooth and can be fabricated at various scales using solid freeform fabrication technology. This method is especially useful in supporting bioengineering applications, where the shape of the object is usually complicated. A human maxillary second molar is presented to illustrate and demonstrate the proposed method. Note that both the solid and p-FEA models of the molar are presented; however, comparison between p- and h-FEA models is outside the scope of the paper.
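A sketch of the least-squares B-spline curve fit using SciPy's parametric spline routines; the "traced" outline points below are synthetic:

```python
# Smoothing least-squares fit of a closed cubic B-spline to noisy contour points.
import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0, 2 * np.pi, 80, endpoint=False)
r = 1.0 + 0.1 * np.cos(3 * theta)                  # lobed, tooth-like outline
x, y = r * np.cos(theta), r * np.sin(theta)
x += 0.01 * np.random.default_rng(10).standard_normal(80)  # tracing noise

# Close the contour by repeating the first point, then fit a periodic spline.
x, y = np.append(x, x[0]), np.append(y, y[0])
tck, u = splprep([x, y], s=0.01, per=1, k=3)
xs, ys = splev(np.linspace(0, 1, 400), tck)        # smooth boundary samples
```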
ERIC Educational Resources Information Center
Norman, John T.
1992-01-01
Reports effectiveness of modeling as teaching strategy on learning science process skills. Teachers of urban sixth through ninth grade students were taught modeling techniques; two sets of teachers served as controls. Results indicate students taught by teachers employing modeling instruction exhibited significantly higher competence in process…
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by reduction errors that are upper-bounded over the envisaged range of ROM application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper-bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends the theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high fidelity model or with other models of lower fidelity, can be assessed. This provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus is on reducing the effective dimensionality of the various data streams, such as the cross-section data and the neutron flux.
The developed methods will be applied to representative assembly level calculations, where the size of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.).
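A minimal randomized range-finder sketch with an a posteriori probabilistic error estimate, in the spirit of Halko, Martinsson & Tropp (2011); the snapshot matrix is a random low-rank stand-in, not reactor data:

```python
# Randomized range finder: build an active subspace Q from random snapshot
# combinations, then bound the residual with random probes.
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 500))  # rank ~30

k, q = 40, 10                                  # subspace size, number of probes
Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k)))

# Error estimate: scaled maximum residual norm over q random probes; the
# bound holds with probability at least 1 - 10^(-q).
probes = rng.standard_normal((A.shape[1], q))
resid = A @ probes - Q @ (Q.T @ (A @ probes))
est = 10 * np.sqrt(2 / np.pi) * np.linalg.norm(resid, axis=0).max()
print("reduction error estimate:", est)
```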
Student Modeling and Ab Initio Language Learning.
ERIC Educational Resources Information Center
Heift, Trude; Schulze, Mathias
2003-01-01
Provides examples of student modeling techniques that have been employed in computer-assisted language learning over the past decade. Describes two systems for learning German: "German Tutor" and "Geroline." Shows how a student model can support computerized adaptive language testing for diagnostic purposes in a Web-based language learning…
Daradkeh, T K; Karim, L
1994-01-01
To investigate the predictors of employment status of patients with a DSM-III-R diagnosis, 55 patients were selected by a simple random technique from the main psychiatric clinic in Al Ain, United Arab Emirates. Structured and formal assessments were carried out to extract the potential predictors of the outcome of schizophrenia. A logistic regression model revealed that being married, absence of schizoid personality, being free of or having minimal symptoms of the illness, later age of onset, and higher educational attainment were the most significant predictors of employment outcome. The implications of the results of this study are discussed in the text.
2014-01-01
computational and empirical dosimetric tools [31]. For the computational dosimetry, we employed finite-difference time-domain (FDTD) modeling techniques to... temperature-time data collected for a well exposed to THz radiation using finite-difference time-domain (FDTD) modeling techniques and thermocouples... Alteration in the expression of such genes underscores the signif... (IEEE Transactions on Terahertz Science and Technology, Vol. 6, No. 1)
NASA Astrophysics Data System (ADS)
Fast, Jerome D.; Osteen, B. Lance
In this study, a four-dimensional data assimilation technique based on Newtonian relaxation is incorporated into the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) and evaluated using data taken from one experiment of the US Department of Energy's (DOE) 1991 Atmospheric Studies in COmplex Terrain (ASCOT) field study along the front range of the Rockies in Colorado. The main objective of this study is to determine the ability of the model to predict small-scale circulations influenced by terrain, such as drainage flows, and to assess the impact of data assimilation on the numerical results. In contrast to previous studies, in which the smallest horizontal grid spacing was 10 km or 8 km, data assimilation is applied in this study to domains with a horizontal grid spacing as small as 1 km. The prognostic forecasts made by RAMS are evaluated by comparing simulations that employ static initial conditions with simulations that incorporate either continuous data assimilation or data assimilation for a fixed period of time (dynamic initialization). This paper also elaborates on the application and limitations of the Newtonian relaxation technique in limited-area mesoscale models with a relatively small grid spacing.
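Newtonian relaxation (nudging) amounts to adding a forcing term proportional to the model-observation mismatch to each prognostic equation. A minimal sketch follows, with a scalar stand-in for the model dynamics; the tendency function, relaxation strength, and time step are illustrative assumptions, not RAMS settings.

```python
import numpy as np

def nudged_step(u, f_model, u_obs, g, dt):
    """One forward-Euler step with a Newtonian-relaxation (nudging)
    term g * (u_obs - u) added to the model tendency f_model(u)."""
    return u + dt * (f_model(u) + g * (u_obs - u))

# Toy scalar example: the state is pulled toward the observed value;
# larger g pulls harder relative to the model's own dynamics.
f = lambda u: -0.1 * u            # stand-in model tendency (assumption)
u, u_obs = 5.0, 2.0
for _ in range(500):
    u = nudged_step(u, f, u_obs, g=0.5, dt=0.1)
print(u)   # settles near the observation, blended with the model dynamics
```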
NASA Astrophysics Data System (ADS)
Alagha, Jawad S.; Seyam, Mohammed; Md Said, Md Azlin; Mogheir, Yunes
2017-12-01
Artificial intelligence (AI) techniques have increasingly become efficient alternative modeling tools in the water resources field, particularly when the modeled process is influenced by complex and interrelated variables. In this study, two AI techniques—artificial neural networks (ANNs) and support vector machine (SVM)—were employed to achieve deeper understanding of the salinization process (represented by chloride concentration) in complex coastal aquifers influenced by various salinity sources. Both models were trained using 11 years of groundwater quality data from 22 municipal wells in Khan Younis Governorate, Gaza, Palestine. Both techniques showed satisfactory prediction performance, where the mean absolute percentage error (MAPE) and correlation coefficient (R) for the test data set were, respectively, about 4.5% and 99.8% for the ANNs model, and 4.6% and 99.7% for the SVM model. The performances of the developed models were further noticeably improved by preprocessing the well data set using a k-means clustering method and then applying the AI techniques separately to each cluster. The developed models with clustered data were associated with higher performance, ease of use, and simplicity. They can be employed as an analytical tool to investigate the influence of input variables on coastal aquifer salinity, which is of great importance for understanding salinization processes, leading to more effective water-resources-related planning and decision making.
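The cluster-then-model idea in this abstract can be sketched compactly: partition the samples with k-means, then train one regressor per cluster and route new samples to the model of their assigned cluster. The sketch below uses scikit-learn with synthetic stand-in data; the feature and target definitions are assumptions, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 4))             # stand-in well features (assumption)
y = X @ np.array([3.0, -1.0, 0.5, 2.0]) + rng.normal(0, 0.1, 300)  # chloride proxy

# Preprocess with k-means, then fit one regressor per cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: SVR().fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(3)}

# Route new samples to the model of their assigned cluster.
X_new = X[:5]
preds = [models[c].predict(x[None, :])[0] for c, x in zip(km.predict(X_new), X_new)]
print(np.round(preds, 2))
```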
Propulsion simulation for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Beerman, Henry P.; Chen, James; Krech, Robert H.; Lintz, Andrew L.; Rosen, David I.
1990-01-01
The feasibility of simulating propulsion-induced aerodynamic effects on scaled aircraft models in wind tunnels employing Magnetic Suspension and Balance Systems (MSBS) was investigated. The investigation concerned itself with techniques of generating exhaust jets of appropriate characteristics. The objectives were to: (1) define thrust and mass flow requirements of jets; (2) evaluate techniques for generating propulsive gas within volume limitations imposed by magnetically-suspended models; (3) conduct simple diagnostic experiments for techniques involving new concepts; and (4) recommend experiments for demonstration of propulsion simulation techniques. Four concepts of remotely-operated propulsion simulators were examined. Three conceptual designs involving innovative adaptation of convenient technologies (compressed gas cylinders, liquid, and solid propellants) were developed. The fourth innovative concept, namely, the laser-assisted thruster, which can potentially simulate both inlet and exhaust flows, was found to require very high power levels for small thrust levels.
Ethnic variations in immigrant poverty exit and female employment: the missing link.
Kaida, Lisa
2015-04-01
Despite widespread interest in poverty among recent immigrants and female immigrant employment, research on the link between the two is limited. This study evaluates the effect of recently arrived immigrant women's employment on the exit from family poverty and considers the implications for ethnic differences in poverty exit. It uses the bivariate probit model and the Fairlie decomposition technique to analyze data from the Longitudinal Survey of Immigrants to Canada (LSIC), a nationally representative survey of immigrants arriving in Canada, 2000-2001. Results show that the employment of recently arrived immigrant women makes a notable contribution to lifting families out of poverty. Moreover, the wide ethnic variations in the probability of exit from poverty between European and non-European groups are partially explained by the lower employment rates among non-European women. The results suggest that the equal earner/female breadwinner model applies to low-income recent immigrant families in general, but the male breadwinner model explains the low probability of poverty exit among select non-European groups whose female employment rates are notably low.
The community development workshop, appendix B.
NASA Technical Reports Server (NTRS)
Brill, R.; Gastro, E.; Pennington, A. J.
1973-01-01
The Community Development Workshop is the name given to a collection of techniques designed to implement participation in the planning process. It is an eclectic approach, making use of current work in the psychology of groups, mathematical modeling and systems analysis, simulation gaming, and other techniques. An outline is presented for a session of the workshop which indicates some of the psychological techniques employed, i.e., confrontation, synectics, and encounter micro-labs.
Hierarchy of simulation models for a turbofan gas engine
NASA Technical Reports Server (NTRS)
Longenbaker, W. E.; Leake, R. J.
1977-01-01
Steady-state and transient performance of an F-100-like turbofan gas engine are modeled by a computer program, DYNGEN, developed by NASA. The model employs block data maps and includes about 25 states. Low-order nonlinear analytical and linear techniques are described in terms of their application to the model. Experimental comparisons illustrating the accuracy of each model are presented.
Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Lee, Sangyup
2015-06-01
The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum, and a nonholonomic modeling approach to systematically couple the models developed at the different scales. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
FIELD VALIDATION OF EXPOSURE ASSESSMENT MODELS. VOLUME 1. DATA
This is the first of two volumes describing work done to evaluate the PAL-DS model, a Gaussian diffusion code modified to account for dry deposition and settling. This first volume describes the experimental techniques employed to dispense, collect, and measure depositing (zinc s...
Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2010-01-01
A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…
Modeling Success: Using Preenrollment Data to Identify Academically At-Risk Students
ERIC Educational Resources Information Center
Gansemer-Topf, Ann M.; Compton, Jonathan; Wohlgemuth, Darin; Forbes, Greg; Ralston, Ekaterina
2015-01-01
Improving student success and degree completion is one of the core principles of strategic enrollment management. To address this principle, institutional data were used to develop a statistical model to identify academically at-risk students. The model employs multiple linear regression techniques to predict students at risk of earning below a…
Functional specifications of the annular suspension pointing system, appendix A
NASA Technical Reports Server (NTRS)
Edwards, B.
1980-01-01
The Annular Suspension Pointing System is described. The Design Realization, Evaluation and Modelling (DREAM) system and its design description technique, the DREAM Design Notation (DDN), are employed.
Schwarz, L.K.; Runge, M.C.
2009-01-01
Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.
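The age-from-length step can be illustrated with a simple grid-based Bayes computation: a prior age distribution is multiplied by a growth-curve likelihood and normalized. In the sketch below a von Bertalanffy curve and all parameter values are illustrative stand-ins for the fitted Schnute model and the manatee priors.

```python
import numpy as np

ages = np.arange(0, 60)
prior = np.exp(-0.08 * ages)
prior /= prior.sum()                      # assumed population age structure

L_inf, k, sigma = 320.0, 0.25, 12.0       # illustrative growth parameters (cm)
mean_len = L_inf * (1 - np.exp(-k * (ages + 1.0)))

def age_posterior(length):
    """Posterior over age given a carcass length, on a discrete age grid."""
    like = np.exp(-0.5 * ((length - mean_len) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

post = age_posterior(280.0)               # hypothetical carcass length
print(ages[np.argmax(post)], round((ages * post).sum(), 1))  # MAP and mean age
```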
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed toward application of this technique for gravity field parameters. GEM-T2 (31 satellites), recently computed as a direct application of the method, is also summarized. The method employs subset solutions of the data, adjusting the weights so that the differences of the subset solutions from the complete solution agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation induced plasmas with nonlinear properties due to recombination was, employing a piecewise linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multidimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation induced plasma involved are given.
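Antithetic variates, mentioned above, reduce Monte Carlo variance by pairing each random draw with its mirrored counterpart so that their errors partially cancel. A minimal sketch on a toy integrand (an assumption, not the plasma code's estimator):

```python
import numpy as np

def antithetic_mc(f, n, rng):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1), pairing each
    sample u with its antithetic counterpart 1 - u to reduce variance."""
    u = rng.uniform(size=n // 2)
    return 0.5 * np.mean(f(u) + f(1.0 - u))

rng = np.random.default_rng(2)
f = np.exp                                   # toy integrand; exact mean is e - 1
plain = np.mean(f(rng.uniform(size=10_000))) # plain Monte Carlo estimate
anti = antithetic_mc(f, 10_000, rng)         # antithetic estimate is tighter
print(plain, anti, np.e - 1)
```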
ERIC Educational Resources Information Center
Kalahar, Kory G.
2011-01-01
Student failure is a prominent issue in many comprehensive secondary schools nationwide. Researchers studying error, reliability, and performance in organizations have developed and employed a method known as critical incident technique (CIT) for investigating failure. Adopting an action research model, this study involved gathering and analyzing…
Empirical Guidelines for Use of Irregular Wave Model to Estimate Nearshore Wave Height.
1982-07-01
height, the easier-to-use technique presented by McClenan (1975) was employed. The McClenan technique utilizes a nomogram which was constructed from... the SPM equations and gives the same results. The inputs to the nomogram technique are the period, the deepwater wave height, the deepwater wave
Challenging Aerospace Problems for Intelligent Systems
2003-06-01
importance of each rule. Techniques such as logarithmic regression or Saaty's AHP may be employed to apply the weights onto the fuzzy rules... Given u... at which designs could be evaluated. This implies that modeling techniques such as neural networks, fuzzy systems and so on can play an important role... failure conditions [4-6]. These approaches apply techniques, such as neural networks, fuzzy logic, and parameter identification, to improve aircraft
Employment of adaptive learning techniques for the discrimination of acoustic emissions
NASA Astrophysics Data System (ADS)
Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.
1983-11-01
The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.
Simulation of wind turbine wakes using the actuator line technique
Sørensen, Jens N.; Mikkelsen, Robert F.; Henningson, Dan S.; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J.
2015-01-01
The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison of experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. PMID:25583862
ERIC Educational Resources Information Center
Blanco, Francesco; La Rocca, Paola; Petta, Catia; Riggi, Francesco
2009-01-01
An educational model simulation of the sound produced by lightning in the sky has been employed to demonstrate realistic signatures of thunder and its connection to the particular structure of the lightning channel. Algorithms used in the past have been revisited and implemented, making use of current computer techniques. The basic properties of…
A Statistical Decision Model for Periodical Selection for a Specialized Information Center
ERIC Educational Resources Information Center
Dym, Eleanor D.; Shirey, Donald L.
1973-01-01
An experiment is described which attempts to define a quantitative methodology for the identification and evaluation of all possibly relevant periodical titles containing toxicological-biological information. A statistical decision model was designed and employed, along with yes/no criteria questions, a training technique and a quality control…
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
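The input-estimation idea can be sketched by augmenting a Kalman filter's state with the unknown force, modeled as a random walk. The toy system below is a single degree-of-freedom stand-in for the reduced rotor model; all parameters, noise levels, and the forcing are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# State [displacement, velocity, force], with the unknown force a random walk.
dt, wn, zeta = 1e-3, 50.0, 0.05                  # illustrative rotor surrogate
A = np.array([[1.0, dt, 0.0],
              [-wn**2 * dt, 1.0 - 2 * zeta * wn * dt, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])                  # only displacement is measured
Q = np.diag([1e-12, 1e-10, 1e-2])                # let the force state wander
R = np.array([[1e-8]])

true_force = lambda t: 2.0 * np.sin(30.0 * t)    # 'unbalance' force to recover
rng = np.random.default_rng(3)
x, P = np.zeros(3), np.eye(3)
d = v = 0.0
for kstep in range(5000):
    t = kstep * dt                               # simulate the truth (Euler)
    acc = -wn**2 * d - 2 * zeta * wn * v + true_force(t)
    d, v = d + dt * v, v + dt * acc
    z = d + rng.normal(0.0, 1e-4)                # noisy displacement sample
    x, P = A @ x, A @ P @ A.T + Q                # KF predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + (K @ (z - H @ x)).ravel()            # KF update
    P = (np.eye(3) - K @ H) @ P
print(round(x[2], 2), round(true_force(5000 * dt), 2))  # estimate vs truth (some lag)
```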
Investigation of Antiangiogenic Mechanisms Using Novel Imaging Techniques
2010-02-01
of the tumor environment can sensitize the tumor to conventional cytotoxic therapies. To this end, we employ the window chamber model to optically... facilitate longitudinal, in vivo investigation into the parameters of interest. These include Doppler Optical Coherence Tomography for the measurement of... Keywords: Optical Techniques, Tumor Pathophysiology, Treatment Response, Vascular Normalization
Photoacoustic imaging of angiogenesis in a subcutaneous islet transplant site in a murine model
NASA Astrophysics Data System (ADS)
Shi, Wei; Pawlick, Rena; Bruni, Antonio; Rafiei, Yasmin; Pepper, Andrew R.; Gala-Lopez, Boris; Choi, Min; Malcolm, Andrew; Zemp, Roger J.; Shapiro, A. M. James
2016-06-01
Islet transplantation (IT) is an established clinical therapy for select patients with type-1 diabetes. Clinically, the hepatic portal vein serves as the site for IT. Despite numerous advances in clinical IT, limitations remain, including early islet cell loss posttransplant, procedural complications, and the inability to effectively monitor islet grafts. Hence, alternative sites for IT are currently being explored, with the subcutaneous space as one potential option. When left unmodified, the subcutaneous space routinely fails to promote successful islet engraftment. However, when employing the previously developed subcutaneous "deviceless" technique, a favorable microenvironment for islet survival and function is established. In this technique, an angiocatheter was temporarily implanted subcutaneously, which facilitated angiogenesis to promote subsequent islet engraftment. This technique has been employed in preclinical animal models, providing a sufficient means to develop techniques to monitor functional aspects of the graft such as angiogenesis. Here, we utilize photoacoustic imaging to track angiogenesis during the priming of the subcutaneous site by the implanted catheter at 1 to 4 weeks postcatheter. Quantitative analysis on vessel densities shows gradual growth of vasculature in the implant position. These results demonstrate the ability to track angiogenesis, thus facilitating a means to optimize and assess the pretransplant microenvironment.
Execution models for mapping programs onto distributed memory parallel computers
NASA Technical Reports Server (NTRS)
Sussman, Alan
1992-01-01
The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
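The Monte Carlo weight-sensitivity step lends itself to a compact sketch: perturb the criteria weights around their AHP values and measure how much each cell's susceptibility score moves. The weights, factor maps, and Dirichlet concentration below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
w = np.array([0.4, 0.3, 0.2, 0.1])        # illustrative AHP criteria weights
cells = rng.uniform(size=(1000, 4))       # standardized factor maps per cell

# Sample perturbed weight vectors (summing to 1) and score every cell.
scores = np.array([cells @ rng.dirichlet(50 * w) for _ in range(500)])
uncertainty = scores.std(axis=0)          # per-cell sensitivity to the weights
print(round(uncertainty.mean(), 4), round(uncertainty.max(), 4))
```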
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
On the Solution of the Three-Dimensional Flowfield About a Flow-Through Nacelle. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Compton, William Bernard
1985-01-01
The solution of the three dimensional flow field about a flow-through nacelle was studied. Both inviscid and viscous-inviscid interacting solutions were examined. Inviscid solutions were obtained with two different computational procedures for solving the three dimensional Euler equations. The first procedure employs an alternating direction implicit numerical algorithm, and required the development of a complete computational model for the nacelle problem. The second computational technique employs a fourth order Runge-Kutta numerical algorithm which was modified to fit the nacelle problem. Viscous effects on the flow field were evaluated with a viscous-inviscid interacting computational model. This model was constructed by coupling the explicit Euler solution procedure with a lag-entrainment boundary layer solution procedure in a global iteration scheme. The computational techniques were used to compute the flow field for a long duct turbofan engine nacelle at free stream Mach numbers of 0.80 and 0.94 and angles of attack of 0 and 4 deg.
An Investigation of Large Aircraft Handling Qualities
NASA Astrophysics Data System (ADS)
Joyce, Richard D.
An analytical technique for investigating transport aircraft handling qualities is exercised in a study using models of two such vehicles, a Boeing 747 and Lockheed C-5A. Two flight conditions are employed for climb and directional tasks, and a third is included for a flare task. The analysis technique is based upon a "structural model" of the human pilot developed by Hess. The associated analysis procedure has been discussed previously in the literature, but centered almost exclusively on the characteristics of high-performance fighter aircraft. The handling qualities rating level (HQRL) and pilot induced oscillation tendencies rating level (PIORL) are predicted for nominal configurations of the aircraft and for "damaged" configurations where actuator rate limits are introduced as nonlinearities. It is demonstrated that the analysis can accommodate nonlinear pilot/vehicle behavior and do so in the context of specific flight tasks, yielding estimates of handling qualities, pilot-induced oscillation tendencies and upper limits of task performance. A brief human-in-the-loop tracking study was performed to provide a limited validation of the pilot model employed.
NASA Technical Reports Server (NTRS)
El-Kaddah, N.; Szekely, J.
1982-01-01
A mathematical representation for the electromagnetic force field and the fluid flow field in a coreless induction furnace is presented. The fluid flow field was represented by writing the axisymmetric turbulent Navier-Stokes equation, containing the electromagnetic body force term. The electromagnetic body force field was calculated by using a technique of mutual inductances. The kappa-epsilon model was employed for evaluating the turbulent viscosity and the resultant differential equations were solved numerically. Theoretically predicted velocity fields are in reasonably good agreement with the experimental measurements reported by Hunt and Moore; furthermore, the agreement regarding the turbulent intensities is essentially quantitative. These results indicate that the kappa-epsilon model provides a good engineering representation of the turbulent recirculating flows occurring in induction furnaces. At this stage it is not clear whether the discrepancies between measurements and the predictions, which were not very great in any case, are attributable either to the model or to the measurement techniques employed.
Cheema, Jitender Jit Singh; Sankpal, Narendra V; Tambe, Sanjeev S; Kulkarni, Bhaskar D
2002-01-01
This article presents two hybrid strategies for the modeling and optimization of the glucose to gluconic acid batch bioprocess. In the hybrid approaches, first a novel artificial intelligence formalism, namely, genetic programming (GP), is used to develop a process model solely from the historic process input-output data. In the next step, the input space of the GP-based model, representing process operating conditions, is optimized using two stochastic optimization (SO) formalisms, viz., genetic algorithms (GAs) and simultaneous perturbation stochastic approximation (SPSA). These SO formalisms possess certain unique advantages over the commonly used gradient-based optimization techniques. The principal advantage of the GP-GA and GP-SPSA hybrid techniques is that process modeling and optimization can be performed exclusively from the process input-output data without invoking the detailed knowledge of the process phenomenology. The GP-GA and GP-SPSA techniques have been employed for modeling and optimization of the glucose to gluconic acid bioprocess, and the optimized process operating conditions obtained thereby have been compared with those obtained using two other hybrid modeling-optimization paradigms integrating artificial neural networks (ANNs) and GA/SPSA formalisms. Finally, the overall optimized operating conditions given by the GP-GA method, when verified experimentally resulted in a significant improvement in the gluconic acid yield. The hybrid strategies presented here are generic in nature and can be employed for modeling and optimization of a wide variety of batch and continuous bioprocesses.
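The optimization half of such a hybrid can be sketched with a tiny genetic algorithm searching the input space of a fitted surrogate. The quadratic "model" below merely stands in for a GP-trained process model; the population size, mutation scale, and bounds are illustrative choices.

```python
import numpy as np

def ga_maximize(model, bounds, pop=40, gens=60, rng=None):
    """Tiny genetic algorithm maximizing a data-driven process model over
    its operating conditions (tournament selection + Gaussian mutation)."""
    rng = rng or np.random.default_rng(4)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = model(X)
        pairs = rng.integers(0, pop, size=(pop, 2))      # tournament pairs
        winners = np.where(fit[pairs[:, 0]] > fit[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        X = np.clip(X[winners] + rng.normal(0, 0.05 * (hi - lo), (pop, len(lo))),
                    lo, hi)                               # mutate offspring
    return X[np.argmax(model(X))]

# Stand-in 'surrogate' of yield vs. two operating conditions (assumption).
surrogate = lambda X: -((X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2)
best = ga_maximize(surrogate, np.array([[0.0, 1.0], [0.0, 1.0]]))
print(best)   # approaches the optimum near (0.3, 0.7)
```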
ERIC Educational Resources Information Center
Karakaya-Ozyer, Kubra; Aksu-Dunya, Beyza
2018-01-01
Structural equation modeling (SEM) is one of the most popular multivariate statistical techniques in Turkish educational research. This study elaborates the SEM procedures employed by 75 educational research articles which were published from 2010 to 2015 in Turkey. After documenting and coding 75 academic papers, categorical frequencies and…
DOT National Transportation Integrated Search
1978-02-01
Ride-quality models for city buses and intercity trains are presented and discussed in terms of their ability to predict passenger comfort and ride acceptability. The report, the last of three volumes, contains procedural guidelines to be employed by...
ERIC Educational Resources Information Center
Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib
2016-01-01
This study analyzed the artwork of 30 first-year graphic design students, with critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean scores and frequencies to determine students' performances in their critical ability.…
Cultural Models of Domestic Violence: Perspectives of Social Work and Anthropology Students
ERIC Educational Resources Information Center
Collins, Cyleste C.; Dressler, William W.
2008-01-01
This study employed a unique theoretical approach and a series of participant-based ethnographic interviewing techniques that are traditionally used in cognitive anthropology to examine and compare social work and anthropology students' cultural models of the causes of domestic violence. The study findings indicate that although social work…
NASA Technical Reports Server (NTRS)
Kana, D. D.; Vargas, L. M.
1977-01-01
Transient excitation forces were applied separately to simple beam-and-mass launch vehicle and payload models to develop complex admittance functions for the interface and other appropriate points on the structures. These measured admittances were then analytically combined by a matrix representation to obtain a description of the coupled system dynamic characteristics. Response of the payload model to excitation of the launch vehicle model was predicted and compared with results measured on the combined models. These results are also compared with results of earlier work in which a similar procedure was employed except that steady-state sinusoidal excitation techniques were included. It is found that the method employing transient tests produces results that are better overall than the steady state methods. Furthermore, the transient method requires far less time to implement, and provides far better resolution in the data. However, the data acquisition and handling problem is more complex for this method. It is concluded that the transient test and admittance matrix prediction method can be a valuable tool for development of payload vibration tests.
Model-based phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high precision and quantitative surface tests. In the MPI, the partial null lens (PNL) is employed to implement the non-null test. With some alternative PNLs, similar as the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling technique, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for the accurate system modeling and reverse ray tracing. The surface figure error then can be easily extracted from the wavefront data in forms of Zernike polynomials by the ROR method. Experiments of the spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI would possess large potential in modern optical shop testing.
Uncovered secret of a Vasseur-Tramond wax model.
Pastor, J F; Gutiérrez, B; Montes, J M; Ballestriero, R
2016-01-01
The technique of anatomical wax modelling reached its heyday in Italy during the 18th century, through a fruitful collaboration between sculptors and anatomists. It soon spread to other countries, and prestigious schools were created in England, France, Spain and Austria. Paris subsequently replaced Italy as the major centre of manufacture, and anatomical waxes were created there from the mid-19th century in workshops such as that of Vasseur-Tramond. This workshop began to sell waxes to European Faculties of Medicine and Schools of Surgery around 1880. Little is known of the technique employed in the creation of such artefacts as this was deemed a professional secret. To gain some insight into the methods of construction, we have studied a Vasseur-Tramond wax model in the Valladolid University Anatomy Museum, Spain, by means of multi-slice computerised tomography and X-ray analysis by means of environmental scanning electron microscopy. Scanning electron microscopy was used to examine the hair. These results have revealed some of the methods used to make these anatomical models and the materials employed. © 2015 Anatomical Society.
Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas
2013-01-01
The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from the lack of enough data to train and test the constructed models. We argue that the process of sample generation could be successfully automated by employing some sophisticated machine learning techniques. An automated sample generation framework could successfully complement the actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related. The independence eliminates the bias of having the produced approach covering only certain characteristics of the domain and leading to samples skewed towards one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates Hierarchical Markov Model (HIMM) and the third model employs a genetic algorithm in the process. Each model learns as much as possible characteristics of the domain being analysed and tries to incorporate the learned characteristics in generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness and applicability of the proposed multi-model framework.
Wind tunnel investigation of simulated helicopter engine exhaust interacting with windstream
NASA Technical Reports Server (NTRS)
Shaw, C. S.; Wilson, J. C.
1974-01-01
A wind tunnel investigation of the windstream-engine exhaust flow interaction on a light observation helicopter model has been conducted in the Langley V/STOL tunnel. The investigation utilized flow visualization techniques to determine the cause of exhaust shield overheating during cruise and to find a means of eliminating the problem. Exhaust flow attachment to the exhaust shield during cruise was found to cause the overheating. Several flow-altering devices were evaluated to find a suitable way to correct the problem. A flow deflector located on the model cowling upstream of the exhaust, in addition to aerodynamic shield fairings, provided the best solution. Also evaluated was a heat transfer concept employing pin fins to cool future exhaust hardware. The primary flow visualization technique used in the investigation was a newly developed system employing neutrally buoyant helium-filled bubbles. The resultant flow patterns were recorded on motion picture film and on television magnetic tape.
Atomic force microscopy of model lipid membranes.
Morandat, Sandrine; Azouzi, Slim; Beauvais, Estelle; Mastouri, Amira; El Kirat, Karim
2013-02-01
Supported lipid bilayers (SLBs) are biomimetic model systems that are now widely used to address the biophysical and biochemical properties of biological membranes. Two main methods are usually employed to form SLBs: the transfer of two successive monolayers by Langmuir-Blodgett or Langmuir-Schaefer techniques, and the fusion of preformed lipid vesicles. The transfer of lipid films on flat solid substrates offers the possibility to apply a wide range of surface analytical techniques that are very sensitive. Among them, atomic force microscopy (AFM) has opened new opportunities for determining the nanoscale organization of SLBs under physiological conditions. In this review, we first focus on the different protocols generally employed to prepare SLBs. Then, we describe AFM studies on the nanoscale lateral organization and mechanical properties of SLBs. Lastly, we survey recent developments in the AFM monitoring of bilayer alteration, remodeling, or digestion, by incubation with exogenous agents such as drugs, proteins, peptides, and nanoparticles.
NASA Astrophysics Data System (ADS)
Vo, Kiet T.; Sowmya, Arcot
A directional multi-scale modeling scheme based on wavelet and contourlet transforms is employed to describe HRCT lung image textures for classifying four diffuse lung disease patterns: normal, emphysema, ground glass opacity (GGO) and honey-combing. Generalized Gaussian density parameters are used to represent the detail sub-band features obtained by wavelet and contourlet transforms. In addition, support vector machines (SVMs), with excellent performance in a variety of pattern classification problems, are used as the classifier. The method is tested on a collection of 89 slices from 38 patients, each slice of size 512x512, 16 bits/pixel in DICOM format. The dataset contains 70,000 ROIs of those slices marked by experienced radiologists. We employ this technique at different wavelet and contourlet transform scales for diffuse lung disease classification. The technique presented here achieves a best overall sensitivity of 93.40% and specificity of 98.40%.
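The feature-extraction step can be sketched end to end: decompose a patch into detail subbands, then fit a generalized Gaussian density to each by moment matching. A one-level Haar transform stands in here for the wavelet/contourlet filter banks used in the paper, and the test patch is synthetic.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def haar_subbands(img):
    """One-level 2-D Haar transform (up to scaling): H, V, D detail bands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a - b + c - d) / 4, (a + b - c - d) / 4, (a - b - c + d) / 4

def ggd_params(x):
    """Moment-matching estimate of generalized Gaussian (alpha, beta)."""
    m1, m2 = np.mean(np.abs(x)), np.mean(x ** 2)
    ratio = m1 ** 2 / m2     # equals Gamma(2/b)^2 / (Gamma(1/b) Gamma(3/b))
    f = lambda b: gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b)) - ratio
    beta = brentq(f, 0.05, 10.0)          # bracket suits typical textures
    alpha = m1 * gamma(1 / beta) / gamma(2 / beta)
    return np.array([alpha, beta])

rng = np.random.default_rng(5)
patch = rng.standard_normal((64, 64))     # synthetic stand-in for an ROI
feats = np.concatenate([ggd_params(s.ravel()) for s in haar_subbands(patch)])
print(np.round(feats, 3))                 # 6 features per scale, fed to an SVM
```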
Magnetic resonance in studies of glaucoma
Fiedorowicz, Michał; Dyda, Wojciech; Rejdak, Robert; Grieb, Paweł
2011-01-01
Glaucoma is the second leading cause of blindness. It affects retinal ganglion cells and the optic nerve. However, there is emerging evidence that glaucoma also affects other components of the visual pathway and visual cortex. There is a need to employ new methods of in vivo brain evaluation to characterize these changes. Magnetic resonance (MR) techniques are well suited for this purpose. We review data on the MR evaluation of the visual pathway and the use of MR techniques in the study of glaucoma, both in humans and in animal models. These studies demonstrated decreases in optic nerve diameter, localized white matter loss and decrease in visual cortex density. Studies on rats employing manganese-enhanced MRI showed that axonal transport in the optic nerve is affected. Diffusion tensor MRI revealed signs of degeneration of the optic pathway. Functional MRI showed decreased response of the visual cortex after stimulation of the glaucomatous eye. Magnetic resonance spectroscopy demonstrated changes in metabolite levels in the visual cortex in a rat model of glaucoma, although not in glaucoma patients. Further applications of MR techniques in studies of glaucomatous brains are indicated. PMID:21959626
Febo, Marcelo; Foster, Thomas C.
2016-01-01
Neuroimaging provides for non-invasive evaluation of brain structure and activity and has been employed to suggest possible mechanisms for cognitive aging in humans. However, these imaging procedures have limits in terms of defining cellular and molecular mechanisms. In contrast, investigations of cognitive aging in animal models have mostly utilized techniques that have offered insight on synaptic, cellular, genetic, and epigenetic mechanisms affecting memory. Studies employing magnetic resonance imaging and spectroscopy (MRI and MRS, respectively) in animal models have emerged as an integrative set of techniques bridging localized cellular/molecular phenomenon and broader in vivo neural network alterations. MRI methods are remarkably suited to longitudinal tracking of cognitive function over extended periods permitting examination of the trajectory of structural or activity related changes. Combined with molecular and electrophysiological tools to selectively drive activity within specific brain regions, recent studies have begun to unlock the meaning of fMRI signals in terms of the role of neural plasticity and types of neural activity that generate the signals. The techniques provide a unique opportunity to causally determine how memory-relevant synaptic activity is processed and how memories may be distributed or reconsolidated over time. The present review summarizes research employing animal MRI and MRS in the study of brain function, structure, and biochemistry, with a particular focus on age-related cognitive decline. PMID:27468264
NASA Technical Reports Server (NTRS)
Smith, R. L.; Lyubomirsky, A. S.
1981-01-01
Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
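The trade described in this abstract (precompute polynomial coefficients per cell, then evaluate cheaply during propagation) is easy to sketch in one dimension with NumPy's Chebyshev utilities; the "gravity component" function is a smooth stand-in, and the 3-D cellwise bookkeeping is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Precompute Chebyshev coefficients for a force component over one cell
# (mapped to [-1, 1]); evaluation during orbit propagation is then cheap.
g = lambda x: np.exp(-x) * np.sin(3 * x)            # smooth stand-in component
nodes = np.cos(np.pi * (np.arange(16) + 0.5) / 16)  # Chebyshev nodes
coeffs = C.chebfit(nodes, g(nodes), deg=15)

x = np.linspace(-1.0, 1.0, 7)
print(np.max(np.abs(C.chebval(x, coeffs) - g(x))))  # tiny for smooth fields
```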
Paluch, Piotr; Pawlak, Tomasz; Oszajca, Marcin; Lasocha, Wieslaw; Potrzebowski, Marek J
2015-02-01
We present, step by step, the facets important in an NMR Crystallography strategy, employing O-phospho-dl-tyrosine as a model sample. The significance of the three major techniques that form this approach is discussed: solid state NMR (SS NMR), X-ray diffraction of powdered samples (PXRD), and theoretical calculations (Gauge Invariant Projector Augmented Wave; GIPAW). Each experimental technique provides a different set of structural constraints. From the PXRD measurement, the size of the unit cell, the space group, and a roughly refined molecular structure are established. SS NMR provides information about the content of the crystallographic asymmetric unit, local geometry, molecular motion in the crystal lattice, and the hydrogen bonding pattern. GIPAW calculations are employed to validate the quality of the elucidation and for fine refinement of the structure. The crystal and molecular structure of O-phospho-dl-tyrosine solved by NMR Crystallography is deposited at the Cambridge Crystallographic Data Center under number CCDC 1005924. Copyright © 2014 Elsevier Inc. All rights reserved.
A randomised approach for NARX model identification based on a multivariate Bernoulli distribution
NASA Astrophysics Data System (ADS)
Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.
2017-04-01
The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated with the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned on the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated by the achievable improvements in terms of accuracy of the model selection process.
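The ensemble-sampling idea is easiest to see in the univariate (independent-Bernoulli) baseline that the paper generalizes: sample candidate model structures from inclusion probabilities, score each by its least-squares fit, and raise the probability of terms that appear in better-fitting models. Everything below (data, update rule, learning rate) is an illustrative simplification, not the RaMSS update itself.

```python
import numpy as np

rng = np.random.default_rng(6)
N, terms = 400, 6
X = rng.standard_normal((N, terms))            # candidate regressors
y = 2 * X[:, 0] - 3 * X[:, 3] + 0.05 * rng.standard_normal(N)  # true structure

p = np.full(terms, 0.5)                        # inclusion probabilities
for _ in range(100):
    sels = rng.uniform(size=(30, terms)) < p   # sample an ensemble of models
    fits = np.zeros(30)
    for i, sel in enumerate(sels):
        if sel.any():
            beta, *_ = np.linalg.lstsq(X[:, sel], y, rcond=None)
            fits[i] = 1 - np.sum((y - X[:, sel] @ beta) ** 2) / np.sum(y ** 2)
    for j in range(terms):
        w, wo = fits[sels[:, j]], fits[~sels[:, j]]
        if w.size and wo.size:                 # reward terms that improve the fit
            p[j] = np.clip(p[j] + 0.1 * (w.mean() - wo.mean()), 0.01, 0.99)
print(np.round(p, 2))                          # mass concentrates on terms 0 and 3
```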
Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir
2014-01-01
Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all prediction models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE = 0.53 dB and R2 = 0.88) could slightly improve the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did the regression technique. The developed models based on fuzzy approaches, as useful prediction tools, give professionals the opportunity to make an optimum decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
Validation of Aircraft Noise Prediction Models at Low Levels of Exposure
NASA Technical Reports Server (NTRS)
Page, Juliet A.; Hobbs, Christopher M.; Plotkin, Kenneth J.; Stusnick, Eric; Shepherd, Kevin P. (Technical Monitor)
2000-01-01
Aircraft noise measurements were made at Denver International Airport for a period of four weeks. Detailed operational information was provided by airline operators which enabled noise levels to be predicted using the FAA's Integrated Noise Model. Several thrust prediction techniques were evaluated. Measured sound exposure levels for departure operations were found to be 4 to 10 dB higher than predicted, depending on the thrust prediction technique employed. Differences between measured and predicted levels are shown to be related to atmospheric conditions present at the aircraft altitude.
NASA Technical Reports Server (NTRS)
Loane, J. T.; Bowhill, S. A.; Mayes, P. E.
1982-01-01
The effects of atmospheric turbulence and the basis for the coherent scatter radar techniques are discussed. The reasons are given for upgrading the radar system to a larger steerable array. Phased-array theory pertinent to the system design is reviewed, along with approximations for maximum directive gain and blind angles due to mutual coupling. The methods and construction techniques employed in the UHF model study are explained. The antenna range is described, with a block diagram for the mode of operation used.
Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B
2018-01-01
Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also achieved sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be clinically beneficial for inhaler technique training and the remote monitoring of patient adherence.
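The calibration step reduces to fitting flow = a·envelope^b on a single recording, which is linear in log-log space. A sketch follows, with made-up envelope/flow data standing in for the spirometer and audio recordings:

```python
import numpy as np

rng = np.random.default_rng(7)
env = rng.uniform(0.05, 1.0, 500)                       # acoustic envelope (a.u.)
flow = 120 * env ** 0.55 * np.exp(rng.normal(0, 0.02, 500))  # synthetic flow, L/min

# Calibrate flow = a * env^b on one recording via log-log least squares.
b, log_a = np.polyfit(np.log(env), np.log(flow), 1)
a = np.exp(log_a)

env_new = np.array([0.1, 0.4, 0.9])                     # envelope of a new inhalation
flow_est = a * env_new ** b
print(np.round(flow_est, 1), round(flow_est.max(), 1))  # profile and peak flow
```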
Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst
2016-05-01
Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation with enhanced Kalman Filter techniques: The performance and robustness towards model-structure uncertainties of the Takagi-Sugeno observer, a Linear, Extended and Unscented Kalman Filter are assessed. Hence the Takagi-Sugeno observer and enhanced Kalman Filter techniques are compared based on reduced-order models of a reference wind turbine with different modelling details. The objective is the systematic comparison with different design assumptions and requirements and the numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Advanced optical position sensors for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Lafleur, S.
1985-01-01
A major concern to aerodynamicists has been the corruption of wind tunnel test data by model support structures, such as stings or struts. A technique for magnetically suspending wind tunnel models was considered by Tournier and Laurenceau (1957) in order to overcome this problem. This technique is now implemented with the aid of a Large Magnetic Suspension and Balance System (LMSBS) and advanced position sensors for measuring model attitude and position within the test section. Two different optical position sensors are discussed, taking into account a device based on the use of linear CCD arrays, and a device utilizing area CID cameras. Current techniques in image processing have been employed to develop target tracking algorithms capable of subpixel resolution for the sensors. The algorithms are discussed in detail, and some preliminary test results are reported.
Daily pan evaporation modelling using a neuro-fuzzy computing technique
NASA Astrophysics Data System (ADS)
Kişi, Özgür
2006-10-01
Evaporation, as a major component of the hydrologic cycle, is important in water resources development and management. This paper investigates the ability of the neuro-fuzzy (NF) technique to improve the accuracy of daily evaporation estimation. Five different NF models comprising various combinations of daily climatic variables, that is, air temperature, solar radiation, wind speed, pressure and humidity, are developed to evaluate the degree of effect of each of these variables on evaporation. A comparison is made between the estimates provided by the NF model and the artificial neural networks (ANNs). The Stephens-Stewart (SS) method is also considered for the comparison. Various statistical measures are used to evaluate the performance of the models. Based on the comparisons, it was found that the NF computing technique could be employed successfully in modelling the evaporation process from the available climatic data. The ANN was also found to perform better than the SS method.
Effect of microstructure on the static and dynamic behavior of recycled asphalt material
DOT National Transportation Integrated Search
2002-07-01
This report describes the research activities of a project dealing with theoretical/numerical modeling and experimental studies of the micromechanical behavior of recycled asphalt material. The theoretical work employed finite element techniques to d...
Iteration and Prototyping in Creating Technical Specifications.
ERIC Educational Resources Information Center
Flynt, John P.
1994-01-01
Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)
Neural assembly models derived through nano-scale measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Hongyou; Branda, Catherine; Schiek, Richard Louis
2009-09-01
This report summarizes accomplishments of a three-year project focused on developing technical capabilities for measuring and modeling neuronal processes at the nanoscale. It was successfully demonstrated that nanoprobes could be engineered that were biocompatible, could be biofunctionalized, and responded within the range of voltages typically associated with a neuronal action potential. Furthermore, the Xyce parallel circuit simulator was employed, and models were incorporated for simulating the ion channel and cable properties of neuronal membranes. The ultimate objective of the project had been to employ nanoprobes in vivo, with the nematode C. elegans, and derive a simulation based on the resulting data. Techniques were developed allowing the nanoprobes to be injected into the nematode and the neuronal response recorded. To the authors' knowledge, this is the first occasion on which nanoparticles have been successfully employed as probes for recording neuronal response in an in vivo animal experimental protocol.
Bindu, G; Semenov, S
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system having the transceivers modelled using thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion, and successful image reconstruction has been shown, implying its robustness.
Simulation of wind turbine wakes using the actuator line technique.
Sørensen, Jens N; Mikkelsen, Robert F; Henningson, Dan S; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J
2015-02-28
The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison with experimental results on the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
R symmetries and a heterotic MSSM
NASA Astrophysics Data System (ADS)
Kappl, Rolf; Nilles, Hans Peter; Schmitz, Matthias
2015-02-01
We employ powerful techniques based on Hilbert and Gröbner bases to analyze particle physics models derived from string theory. Individual models are shown to have a huge landscape of vacua that differ in their phenomenological properties. We explore the (discrete) symmetries of these vacua, the new R symmetry selection rules and their consequences for moduli stabilization.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
Electronically nonadiabatic wave packet propagation using frozen Gaussian scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondorskiy, Alexey D., E-mail: kondor@sci.lebedev.ru; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp
2015-09-21
We present an approach that allows one to employ the adiabatic wave packet propagation technique and semiclassical theory to treat nonadiabatic processes by using trajectory hopping. The approach developed generates a bunch of hopping trajectories and gives all additional information needed to incorporate the effect of nonadiabatic coupling into the wave packet dynamics. This provides an interface between a general adiabatic frozen Gaussian wave packet propagation method and the trajectory surface hopping technique. The basic idea suggested in [A. D. Kondorskiy and H. Nakamura, J. Chem. Phys. 120, 8937 (2004)] is revisited and complemented in the present work by the elaboration of efficient numerical algorithms. We combine our approach with the adiabatic Herman-Kluk frozen Gaussian approximation. The efficiency and accuracy of the resulting method is demonstrated by applying it to popular benchmark model systems including three Tully's models and a 24D model of pyrazine. It is shown that the photoabsorption spectrum is successfully reproduced by using a few hundred trajectories. We employ the compact finite difference Hessian update scheme to assess the feasibility of ab initio "on-the-fly" simulations. It is found that this technique allows us to obtain reliable final results using several Hessian matrix calculations per trajectory.
Dynamic NMR Study of Model CMP Slurry Containing Silica Particles as Abrasives
NASA Astrophysics Data System (ADS)
Odeh, F.; Al-Bawab, A.; Li, Y.
2018-02-01
Chemical mechanical planarization (CMP) should provide a good surface planarity with minimal surface defectivity. Since CMP slurries are multi-component systems, it is very important to understand the various processes and interactions taking place in such slurries. Several techniques have been employed for this task; however, most of them lack the molecular recognition needed to investigate molecular interactions without adding probes, which in turn increase complexity and might alter the microenvironment of the slurry. Nuclear magnetic resonance (NMR) is a powerful technique that can be employed in such a study. The longitudinal relaxation times (T1) of the different components of CMP slurries were measured using Spin Echo-NMR (SE-NMR) at a constant temperature. The fact that NMR is non-invasive and gives information on the molecular level gives further advantage to the technique. The model CMP slurry was prepared in D2O to enable monitoring of T1 for the various components' protons. SE-NMR provides a very powerful tool to study the various interactions and adsorption processes that take place in a model CMP silica-based slurry which contains BTA and/or glycine and/or Cu2+ ions. It was found that BTA is very competitive towards complexation with Cu2+ ions, and the BTA-Cu complex adsorbs on the silica surface.
Modelling the effect of structural QSAR parameters on skin penetration using genetic programming
NASA Astrophysics Data System (ADS)
Chung, K. K.; Do, D. Q.
2010-09-01
In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial intelligence technique, genetic programming (GP), was investigated and compared with the traditional statistical approach. GP, whose primary advantage is the generation of explicit mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in QSAR data. The models produced by GP agreed with the statistical results, and the most predictive GP models were significantly improved over the statistical models when compared using ANOVA. Artificial intelligence techniques have recently been applied widely to analyse QSAR data; with its capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling QSAR data.
Homer, Michael D.; Peterson, James T.; Jennings, Cecil A.
2015-01-01
Back-calculation of length-at-age from otoliths and spines is a common technique employed in fisheries biology, but few studies have compared the precision of data collected with this method for catfish populations. We compared precision of back-calculated lengths-at-age for an introduced Ictalurus furcatus (Blue Catfish) population among 3 commonly used cross-sectioning techniques. We used gillnets to collect Blue Catfish (n = 153) from Lake Oconee, GA. We estimated ages from a basal recess, articulating process, and otolith cross-section from each fish. We employed the Fraser-Lee method to back-calculate length-at-age for each fish, and compared the precision of back-calculated lengths among techniques using hierarchical linear models. Precision in age assignments was highest for otoliths (83.5%) and lowest for basal recesses (71.4%). Back-calculated lengths were variable among fish ages 1–3 for the techniques compared; otoliths and basal recesses yielded variable lengths at age 8. We concluded that otoliths and articulating processes are adequate for age estimation of Blue Catfish.
Characterizing sources of uncertainty from global climate models and downscaling techniques
Wootten, Adrienne; Terando, Adam; Reich, Brian J.; Boyles, Ryan; Semazzi, Fred
2017-01-01
In recent years climate model experiments have been increasingly oriented towards providing information that can support local and regional adaptation to the expected impacts of anthropogenic climate change. This shift has magnified the importance of downscaling as a means to translate coarse-scale global climate model (GCM) output to a finer scale that more closely matches the scale of interest. Applying this technique, however, introduces a new source of uncertainty into any resulting climate model ensemble. Here we present a method, based on a previously established variance decomposition method, to partition and quantify the uncertainty in climate model ensembles that is attributable to downscaling. We apply the method to the Southeast U.S. using five downscaled datasets that represent both statistical and dynamical downscaling techniques. The combined ensemble is highly fragmented, in that only a small portion of the complete set of downscaled GCMs and emission scenarios are typically available. The results indicate that the uncertainty attributable to downscaling approaches ~20% for large areas of the Southeast U.S. for precipitation and ~30% for extreme heat days (> 35°C) in the Appalachian Mountains. However, attributable quantities are significantly lower for time periods when the full ensemble is considered but only a sub-sample of all models are available, suggesting that overconfidence could be a serious problem in studies that employ a single set of downscaled GCMs. We conclude with recommendations to advance the design of climate model experiments so that the uncertainty that accrues when downscaling is employed is more fully and systematically considered.
Link-prediction to tackle the boundary specification problem in social network surveys
De Wilde, Philippe; Buarque de Lima-Neto, Fernando
2017-01-01
Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be better suited to solving this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
Adaptive neuro fuzzy inference system-based power estimation method for CMOS VLSI circuits
NASA Astrophysics Data System (ADS)
Vellingiri, Govindaraj; Jayabalan, Ramesh
2018-03-01
Recent advancements in very large scale integration (VLSI) technologies have made it feasible to integrate millions of transistors on a single chip. This greatly increases circuit complexity, and hence there is a growing need for less-tedious, low-cost power estimation techniques. The proposed work employs a Back-Propagation Neural Network (BPNN) and an Adaptive Neuro Fuzzy Inference System (ANFIS), which are capable of estimating power precisely for complementary metal oxide semiconductor (CMOS) VLSI circuits, without requiring any knowledge of circuit structure and interconnections. The application of ANFIS to power estimation is relatively new. Power estimation using ANFIS is carried out by creating initial FIS models using hybrid optimisation and back-propagation (BP) techniques employing constant and linear methods. It is inferred that ANFIS with the hybrid optimisation technique employing the linear method produces better results, with testing error ranging from 0% to 0.86% when compared to BPNN, as it takes the initial fuzzy model and tunes it by means of a hybrid technique combining gradient-descent BP and least-squares optimisation algorithms. ANFIS is best suited for the power estimation application, with a low RMSE of 0.0002075 and a high coefficient of determination (R) of 0.99961.
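For readers wanting a concrete starting point, the following Python sketch reproduces only the BPNN baseline using scikit-learn's MLPRegressor; ANFIS requires a dedicated package and is not shown. The three features and the activity-capacitance-voltage power law are invented placeholders, not circuit data from the paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Invented training data: normalised switching activity, load capacitance
# and supply voltage, with dynamic power roughly proportional to a*C*V^2.
rng = np.random.default_rng(10)
X = rng.uniform(0.0, 1.0, (500, 3))
power = 2.0 * X[:, 0] * X[:, 1] * X[:, 2] ** 2 + 0.01 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", net.score(X_te, y_te))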
Logistic regression for risk factor modelling in stuttering research.
Reed, Phil; Wu, Yaqionq
2013-06-01
To outline the uses of logistic regression and other statistical methods for risk factor analysis in the context of research on stuttering. The principles underlying the application of a logistic regression are illustrated, and the types of questions to which such a technique has been applied in the stuttering field are outlined. The assumptions and limitations of the technique are discussed with respect to existing stuttering research, and with respect to formulating appropriate research strategies to accommodate these considerations. Finally, some alternatives to the approach are briefly discussed. The way the statistical procedures are employed is demonstrated with some hypothetical data. Research into several practical issues concerning stuttering could benefit if risk factor modelling were used. Important examples are early diagnosis, prognosis (whether a child will recover or persist) and assessment of treatment outcome. After reading this article you will: (a) Summarize the situations in which logistic regression can be applied to a range of issues about stuttering; (b) Follow the steps in performing a logistic regression analysis; (c) Describe the assumptions of the logistic regression technique and the precautions that need to be checked when it is employed; (d) Be able to summarize its advantages over other techniques like estimation of group differences and simple regression. Copyright © 2012 Elsevier Inc. All rights reserved.
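As a concrete illustration of the procedure outlined above, the following Python sketch fits a logistic regression with statsmodels and reports odds ratios; the predictors (age at onset, family history) and the data are hypothetical, chosen only to mirror the persistence-versus-recovery question.

import numpy as np
import statsmodels.api as sm

# Hypothetical cohort: predict persistence (1) vs recovery (0) of stuttering
# from age at onset and a family-history indicator.
rng = np.random.default_rng(1)
n = 200
age_onset = rng.uniform(2.0, 6.0, n)
family_hx = rng.integers(0, 2, n).astype(float)
true_logit = -3.0 + 0.5 * age_onset + 1.2 * family_hx
persist = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([age_onset, family_hx]))
fit = sm.Logit(persist, X).fit(disp=0)
print(fit.params)                              # intercept and log-odds coefficients
print("odds ratios:", np.exp(fit.params[1:]))  # risk per unit of each factor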
NASA Astrophysics Data System (ADS)
Magee, Derek; Tanner, Steven F.; Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis; Jeavons, Alan P.
2010-08-01
Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. Evaluation of the accuracy of the technique is tested using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.
Parameter sensitivity analysis for pesticide impacts on honeybee colonies
We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...
Sobol’ sensitivity analysis for stressor impacts on honeybee colonies
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
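A minimal sketch of this kind of analysis is shown below, using the open-source SALib package; the three inputs, their bounds and the toy colony-size function are placeholders, since VarroaPop itself is not reproduced here.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Three illustrative stressor inputs with placeholder bounds.
problem = {
    "num_vars": 3,
    "names": ["queen_strength", "foraging_success", "mite_load"],
    "bounds": [[0.1, 1.0], [0.1, 1.0], [0.0, 0.5]],
}

X = saltelli.sample(problem, 1024)   # Saltelli's extended Sobol' design

def colony_size(x):
    # Toy stand-in for the hive simulation's end-of-season population.
    q, f, m = x
    return 10000.0 * q * f * np.exp(-4.0 * m)

Y = np.apply_along_axis(colony_size, 1, X)
Si = sobol.analyze(problem, Y)
print("first-order indices:", Si["S1"])
print("total-order indices:", Si["ST"])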
Behavioral Modeling and Characterization of Nonlinear Operation in RF and Microwave Systems
2005-01-01
Fragments recovered from this report indicate that a power-series behavioral modeling technique was employed to characterize nonlinear operation in RF and microwave systems; a measurement at 21 dBm was used to extract the power series coefficients, and the fit of the odd-ordered model reinforces the conclusion that the phase component of the nonlinear coefficients is becoming important, the model further reinforcing the intuition gained by employing this modeling technique. A chapter on remote characterization of RF devices is also included.
Supercritical nonlinear parametric dynamics of Timoshenko microbeams
NASA Astrophysics Data System (ADS)
Farokhi, Hamed; Ghayesh, Mergen H.
2018-06-01
The nonlinear supercritical parametric dynamics of a Timoshenko microbeam subject to an axial harmonic excitation force is examined theoretically, by means of different numerical techniques and employing a high-dimensional analysis. The time-variant axial load is assumed to consist of a mean value along with harmonic fluctuations. In terms of modelling, a continuous expression for the elastic potential energy of the system is developed based on the modified couple stress theory, taking into account small-size effects; the kinetic energy of the system is also modelled as a continuous function of the displacement field. Hamilton's principle is employed to balance the energies and to obtain the continuous model of the system. Employing the Galerkin scheme along with an assumed-mode technique, the energy terms are reduced, yielding a second-order reduced-order model with a finite number of degrees of freedom. A transformation is carried out to convert the second-order reduced-order model into a double-dimensional first-order one. A bifurcation analysis is performed for the system in the absence of axial load fluctuations. Moreover, a mean value for the axial load is selected in the supercritical range, and the principal parametric resonant response, due to the time-variant component of the axial load, is obtained; unlike transversely excited systems, in parametrically excited systems (such as the one studied here) the nonlinear resonance occurs in the vicinity of twice any natural frequency of the linear system. This is accomplished via the pseudo-arclength continuation technique, direct time integration, eigenvalue analysis, and the Floquet theory for stability. The natural frequencies of the system prior to and beyond buckling are also determined. Moreover, the effect of different system parameters on the nonlinear supercritical parametric dynamics of the system is analysed, with special consideration given to the effect of the length-scale parameter.
ERIC Educational Resources Information Center
Harrison, David J.; Saito, Laurel; Markee, Nancy; Herzog, Serge
2017-01-01
To examine the impact of a hybrid-flipped model utilising active learning techniques, the researchers inverted one section of an undergraduate fluid mechanics course, reduced seat time, and engaged in active learning sessions in the classroom. We compared this model to the traditional section on four performance measures. We employed a propensity…
Reinersman, Phillip N; Carder, Kendall L
2004-05-01
A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.
NASA Technical Reports Server (NTRS)
Middleton, Troy F.; Balla, Robert J.; Baurle, Robert A.; Wilson, Lloyd G.
2008-01-01
Under the Propulsion Discipline of NASA's Fundamental Aeronautics Program's Hypersonics Project, a test apparatus for testing a scramjet isolator model is being constructed at NASA's Langley Research Center. The test apparatus will incorporate a 1-inch by 2-inch by 15-inch-long scramjet isolator model supplied with 2.1 lbm/sec of unheated dry air through a Mach 2.5 converging-diverging nozzle. The planned research will incorporate progressively more challenging measurement techniques to characterize the flow field within the isolator, concluding with the application of the Laser-Induced Thermal Acoustic (LITA) measurement technique. The primary goal of this research is to use the data acquired to validate Computational Fluid Dynamics (CFD) models employed to characterize the complex flow field of a scramjet isolator. This paper describes the test apparatus being constructed, pre-test CFD simulations, and the LITA measurement technique.
A new data assimilation engine for physics-based thermospheric density models
NASA Astrophysics Data System (ADS)
Sutton, E. K.; Henney, C. J.; Hock-Mysliwiec, R.
2017-12-01
The successful assimilation of data into physics-based coupled Ionosphere-Thermosphere models requires rethinking the filtering techniques currently employed in fields such as tropospheric weather modeling. In the realm of Ionospheric-Thermospheric modeling, the estimation of system drivers is a critical component of any reliable data assimilation technique. How to best estimate and apply these drivers, however, remains an open question and active area of research. The recently developed method of Iterative Re-Initialization, Driver Estimation and Assimilation (IRIDEA) accounts for the driver/response time-delay characteristics of the Ionosphere-Thermosphere system relative to satellite accelerometer observations. Results from two near year-long simulations are shown: (1) from a period of elevated solar and geomagnetic activity during 2003, and (2) from a solar minimum period during 2007. This talk will highlight the challenges and successes of implementing a technique suited for both solar min and max, as well as expectations for improving neutral density forecasts.
NASA Astrophysics Data System (ADS)
Poblet, Josep; Bulnes, Mayte
2007-12-01
A strategy to predict strain across geological structures, based on previous techniques, is modified and evaluated, and a practical application is shown. The technique, which employs cross-section restoration combined with kinematic forward modelling, consists of restoring a section, placing circular strain markers on different domains of the restoration, and forward modelling the restored section with strain markers until the present-day stage is reached. The restoration algorithm employed must also be used to forward model the structure. The ellipses in the forward modelled section allow determining the strain state of the structure and may indirectly predict the orientation and distribution of minor structures such as small-scale fractures. The forward model may be frozen at different time steps (different growth stages), allowing prediction of both spatial and temporal variation of strain. The method is evaluated through its application to two stages of a clay experiment that includes strain markers and whose geometry and deformation history are well documented, providing a strong control on the results. To demonstrate the method's potential, it is successfully applied to a depth-converted seismic profile in the Central Sumatra Basin, Indonesia. This allowed us to gain insight into the deformation undergone by rollover anticlines over listric normal faults.
Low level waste management: a compilation of models and monitoring techniques. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosier, J.E.; Fowler, J.R.; Barton, C.J.
1980-04-01
In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.
Slicing AADL Specifications for Model Checking
NASA Technical Reports Server (NTRS)
Odenbrett, Maximilian; Nguyen, Viet Yen; Noll, Thomas
2010-01-01
To combat the state-space explosion problem in model checking larger systems, abstraction techniques can be employed. Here, methods that operate on the system specification before constructing its state space are preferable to those that try to minimize the resulting transition system as they generally reduce peak memory requirements. We sketch a slicing algorithm for system specifications written in (a variant of) the Architecture Analysis and Design Language (AADL). Given a specification and a property to be verified, it automatically removes those parts of the specification that are irrelevant for model checking the property, thus reducing the size of the corresponding transition system. The applicability and effectiveness of our approach is demonstrated by analyzing the state-space reduction for an example, employing a translator from AADL to Promela, the input language of the SPIN model checker.
Zebrafish models of cardiovascular diseases and their applications in herbal medicine research.
Seto, Sai-Wang; Kiat, Hosen; Lee, Simon M Y; Bensoussan, Alan; Sun, Yu-Ting; Hoi, Maggie P M; Chang, Dennis
2015-12-05
The zebrafish (Danio rerio) has recently become a powerful animal model for cardiovascular research and drug discovery due to its ease of maintenance, genetic manipulability and ability for high-throughput screening. Recent advances in imaging techniques and generation of transgenic zebrafish have greatly facilitated in vivo analysis of cellular events of cardiovascular development and pathogenesis. More importantly, recent studies have demonstrated the functional similarity of drug metabolism systems between zebrafish and humans, highlighting the clinical relevance of employing zebrafish in identifying lead compounds in Chinese herbal medicine with potential beneficial cardiovascular effects. This paper seeks to summarise the scope of zebrafish models employed in cardiovascular studies and the application of these research models in Chinese herbal medicine to date. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.
2004-07-01
We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.
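The days-to-microseconds character of the problem can be reproduced with a zero-dimensional toy model: slow convective heating plus Arrhenius self-heating until thermal runaway. The sketch below uses invented material constants, not ALE3D inputs or the actual explosive's kinetics.

import numpy as np
from scipy.integrate import solve_ivp

# Lumped energy balance: rho*c*dT/dt = hA/V*(T_env - T) + Q*rho*Z*exp(-Ea/RT).
R, Ea, Z = 8.314, 1.8e5, 1.0e15      # gas constant, activation energy, prefactor
Q, rho, c = 2.0e6, 1800.0, 1100.0    # heat of reaction (J/kg), density, heat capacity
hA_V = 10.0                          # convective coupling per unit volume, W/m^3/K
T_env = 500.0                        # surrounding "fire" temperature, K

def dTdt(t, T):
    conv = hA_V * (T_env - T[0]) / (rho * c)
    arrh = (Q / c) * Z * np.exp(-Ea / (R * T[0]))
    return [conv + arrh]

runaway = lambda t, T: T[0] - 1000.0  # stop once runaway is underway
runaway.terminal = True

sol = solve_ivp(dTdt, [0.0, 10 * 86400.0], [300.0], method="LSODA",
                events=runaway, max_step=600.0)
print(f"runaway after {sol.t[-1] / 86400.0:.2f} days")
# The induction phase spans days, while the final temperature excursion
# collapses to sub-second scales -- the stiffness such simulations must handle.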
Sensitivity analyses for simulating pesticide impacts on honey bee colonies
We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop + Pesticide model. Simulations are performed of hive population trajectories with and without pesti...
Non-dynamic decimeter tracking of earth satellites using the Global Positioning System
NASA Technical Reports Server (NTRS)
Yunck, T. P.; Wu, S. C.
1986-01-01
A technique is described for employing the Global Positioning System (GPS) to determine the position of a low earth orbiter with decimeter accuracy without the need for user dynamic models. A differential observing strategy is used requiring a GPS receiver on the user vehicle and a network of six ground receivers. The technique uses the continuous record of position change obtained from GPS carrier phase to smooth position measurements made with pseudo-range. The result is a computationally efficient technique that can deliver decimeter accuracy down to the lowest altitude orbits.
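The smoothing idea, precise carrier-phase deltas correcting noisy pseudo-range, is essentially what the later GPS literature calls a Hatch filter. A minimal sketch follows; the noise levels and window length are assumptions for illustration, not the JPL processing chain.

import numpy as np

def hatch_filter(pseudorange, carrier_range, window=100):
    # Blend the noisy absolute pseudo-range with the precise *relative*
    # range change measured by carrier phase.
    smoothed = np.empty_like(pseudorange)
    smoothed[0] = pseudorange[0]
    for k in range(1, len(pseudorange)):
        n = min(k + 1, window)
        predicted = smoothed[k - 1] + (carrier_range[k] - carrier_range[k - 1])
        smoothed[k] = pseudorange[k] / n + predicted * (n - 1) / n
    return smoothed

# Demo: truth plus ~3 m code noise and ~1 cm carrier noise.
rng = np.random.default_rng(2)
truth = 2.0e7 + 50.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 500))
pr = truth + rng.normal(0.0, 3.0, truth.size)
cr = truth + rng.normal(0.0, 0.01, truth.size)
sm = hatch_filter(pr, cr)
print("raw RMS error (m):     ", np.sqrt(np.mean((pr - truth) ** 2)))
print("smoothed RMS error (m):", np.sqrt(np.mean((sm - truth) ** 2)))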
Finite Element Modeling, Simulation, Tools, and Capabilities at Superform
NASA Astrophysics Data System (ADS)
Raman, Hari; Barnes, A. J.
2010-06-01
Over the past thirty years Superform has been a pioneer in the SPF arena, having developed a keen understanding of the process and a range of unique forming techniques to meet varying market needs. Superform’s high-profile list of customers includes Boeing, Airbus, Aston Martin, Ford, and Rolls Royce. One of the more recent additions to Superform’s technical know-how is finite element modeling and simulation. Finite element modeling is a powerful numerical technique which, when applied to SPF, provides a host of benefits including accurate prediction of strain levels in a part, presence of wrinkles, and pressure cycles optimized for time and part thickness. This paper outlines a brief history of finite element modeling applied to SPF and then reviews some of the modeling tools and techniques that Superform have applied and continue to apply to successfully superplastically form complex-shaped parts. The advantages of employing modeling at the design stage are discussed and illustrated with real-world examples.
ERIC Educational Resources Information Center
Hastie, Peter A.; Curtner-Smith, Matthew D.
2006-01-01
Background: Sport Education (SE) and Teaching Games for Understanding (TGfU) are two curriculum models that were developed to help students participate in fair and equitable ways and challenge their thinking beyond the replication of techniques and skills. Given that the general aim of both models is to employ more democratic pedagogies and…
Modeling Temporal Crowd Work Quality with Limited Supervision
2015-11-11
Keywords: crowdsourcing, human computation, prediction, uncertainty-aware learning, time-series modeling. Fragments recovered from this report discuss modeling the temporal quality of crowd work with limited supervision: a strategy based on assessing individual correctness is difficult to employ in a live setting, because it is unrealistic to assume that all work can be verified; the report also notes interesting opportunities to investigate at the intersection of live task-routing and active-learning techniques.
An improved error assessment for the GEM-T1 gravitational model
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.
1988-01-01
Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.
Modeling And Detecting Anomalies In Scada Systems
NASA Astrophysics Data System (ADS)
Svendsen, Nils; Wolthusen, Stephen
The detection of attacks and intrusions based on anomalies is hampered by the limits of specificity underlying the detection techniques. However, in the case of many critical infrastructure systems, domain-specific knowledge and models can impose constraints that potentially reduce error rates. At the same time, attackers can use their knowledge of system behavior to mask their manipulations, causing adverse effects to be observed only after a significant period of time. This paper describes elementary statistical techniques that can be applied to detect anomalies in critical infrastructure networks. A SCADA system employed in liquefied natural gas (LNG) production is used as a case study.
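In the spirit of the elementary techniques the paper describes, the Python sketch below flags deviations from a baseline learned on an attack-free window; the process variable, thresholds and attack shape are invented for illustration.

import numpy as np

def zscore_anomalies(x, train=1000, threshold=4.0):
    # Baseline mean/std from a presumed attack-free training window.
    mu, sigma = x[:train].mean(), x[:train].std()
    return np.abs(x - mu) > threshold * sigma

# Demo: an LNG process variable with a slow attacker-induced drift,
# the kind of manipulation designed to stay under per-sample limits.
rng = np.random.default_rng(3)
x = 100.0 + rng.normal(0.0, 0.5, 2000)
x[1200:] += np.linspace(0.0, 6.0, 800)
flags = zscore_anomalies(x)
print("first flagged sample:", int(np.argmax(flags)))

Such a slow drift defeats purely local checks; it is caught here only because the baseline is anchored to historical data, illustrating the masking concern raised in the abstract.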
Using ridge regression in systematic pointing error corrections
NASA Technical Reports Server (NTRS)
Guiar, C. N.
1988-01-01
A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
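A compact numerical illustration of why ridge regression helps under multicollinearity is given below; the two nearly collinear regressors are synthetic stand-ins for correlated pointing-model terms, not the Voyager data.

import numpy as np

rng = np.random.default_rng(4)
n = 100
x1 = rng.normal(0.0, 1.0, n)
x2 = x1 + rng.normal(0.0, 0.01, n)        # nearly a copy of x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(0.0, 0.1, n)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

lam = 1.0                                 # ridge parameter: trade bias for variance
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print("OLS coefficients:  ", beta_ols)    # individually unstable
print("ridge coefficients:", beta_ridge)  # shrunk toward stable values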
NASA Astrophysics Data System (ADS)
Li, Ning; Wang, Yan; Xu, Kexin
2006-08-01
Combined with Fourier transform infrared (FTIR) spectroscopy and three kinds of pattern recognition techniques, 53 traditional Chinese medicine danshen samples were rapidly discriminated according to geographical origin. The results showed that discrimination using FTIR spectroscopy was feasible, as ascertained by principal component analysis (PCA). An effective model was built by employing Soft Independent Modeling of Class Analogy (SIMCA) together with PCA, and 82% of the samples were discriminated correctly. Using a back-propagation (BP) artificial neural network (ANN), the origins of the danshen samples were completely classified.
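The SIMCA idea, one PCA model per class with assignment by reconstruction residual, can be sketched as follows; the synthetic "spectra" and class shapes are invented, since the FTIR data are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_wavenumbers = 400

def make_class(shape_param, n=20):
    # Invented class-specific spectral shape plus noise.
    base = np.sin(np.linspace(0.0, shape_param, n_wavenumbers))
    return base + 0.05 * rng.standard_normal((n, n_wavenumbers))

train = {"origin_A": make_class(3.0), "origin_B": make_class(5.0),
         "origin_C": make_class(8.0)}
models = {k: PCA(n_components=3).fit(v) for k, v in train.items()}

def classify(spectrum):
    # Assign to the class whose PCA model reconstructs the spectrum best.
    def residual(pca):
        recon = pca.inverse_transform(pca.transform(spectrum[None, :]))[0]
        return np.linalg.norm(spectrum - recon)
    return min(models, key=lambda k: residual(models[k]))

print("predicted:", classify(make_class(5.0, n=1)[0]))   # expect origin_B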
E-Area LLWF Vadose Zone Model: Probabilistic Model for Estimating Subsided-Area Infiltration Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyer, J.; Flach, G.
A probabilistic model employing a Monte Carlo sampling technique was developed in Python to generate statistical distributions of the upslope-intact-area to subsided-area ratio (Area_UA,i/Area_SA,i) for closure cap subsidence scenarios that differ in assumed percent subsidence and the total number of intact plus subsided compartments. The plan is to use this model as a component in the probabilistic system model for the E-Area Performance Assessment (PA), contributing uncertainty in infiltration estimates.
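A toy reading of such a sampler is given below; the compartment count, subsidence probability and independence assumption are illustrative guesses, not the report's actual scenario definitions.

import numpy as np

def area_ratio_samples(n_compartments=100, pct_subsided=10.0,
                       draws=100_000, seed=6):
    # Each equal-area compartment subsides independently with probability
    # pct_subsided/100; return Monte Carlo samples of the intact/subsided ratio.
    rng = np.random.default_rng(seed)
    p = pct_subsided / 100.0
    subsided = rng.binomial(n_compartments, p, size=draws)
    subsided = np.clip(subsided, 1, n_compartments - 1)  # avoid divide-by-zero
    return (n_compartments - subsided) / subsided

samples = area_ratio_samples()
print("median intact/subsided ratio:", np.median(samples))
print("5th-95th percentiles:", np.percentile(samples, [5, 95]))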
A Study on Predictive Analytics Application to Ship Machinery Maintenance
2013-09-01
Fragments recovered from this report indicate that two statistical analysis techniques, time series models and cumulative sum (CUSUM) control charts, are discussed for ship machinery maintenance. Given the nature of the time series forecasting method, it is judged better suited to offline analysis, with real-time online application to other system attributes deferred to future work. Both time series forecasting and CUSUM control charts are shown to be workable in the statistical tool employed.
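Of the two techniques, the CUSUM chart is simple enough to sketch directly; the reference parameters k and h below are textbook defaults, and the drifting "machinery reading" is synthetic.

import numpy as np

def tabular_cusum(x, target, sigma, k=0.5, h=5.0):
    # Two-sided tabular CUSUM in units of sigma; return the index of the
    # first out-of-control signal, or None.
    c_hi = c_lo = 0.0
    for i, xi in enumerate(x):
        z = (xi - target) / sigma
        c_hi = max(0.0, c_hi + z - k)
        c_lo = max(0.0, c_lo - z - k)
        if c_hi > h or c_lo > h:
            return i
    return None

# Demo: a vibration-like reading that shifts upward mid-series.
rng = np.random.default_rng(7)
x = rng.normal(10.0, 1.0, 500)
x[300:] += 1.0                      # sustained one-sigma shift
print("CUSUM signals at sample:", tabular_cusum(x, target=10.0, sigma=1.0))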
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed in applying this technique to gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
Adaptive space warping to enhance passive haptics in an arthroscopy surgical simulator.
Spillmann, Jonas; Tuchschmid, Stefan; Harders, Matthias
2013-04-01
Passive haptics, also known as tactile augmentation, denotes the use of a physical counterpart to a virtual environment to provide tactile feedback. Employing passive haptics can result in more realistic touch sensations than those from active force feedback, especially for rigid contacts. However, changes in the virtual environment would necessitate modifications of the physical counterparts. In recent work space warping has been proposed as one solution to overcome this limitation. In this technique virtual space is distorted such that a variety of virtual models can be mapped onto one single physical object. In this paper, we propose as an extension adaptive space warping; we show how this technique can be employed in a mixed-reality surgical training simulator in order to map different virtual patients onto one physical anatomical model. We developed methods to warp different organ geometries onto one physical mock-up, to handle different mechanical behaviors of the virtual patients, and to allow interactive modifications of the virtual structures, while the physical counterparts remain unchanged. Various practical examples underline the wide applicability of our approach. To the best of our knowledge this is the first practical usage of such a technique in the specific context of interactive medical training.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1976-01-01
An iterative method for numerically solving the time-independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in the construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C0 cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant economy in iterative convergence rate over finite element and finite difference models which employ the customary time-dependent equations and asymptotic time marching to a steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time-marching finite element and finite difference solution techniques.
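The block Gauss-Seidel idea can be illustrated on a much smaller nonlinear system than the Navier-Stokes equations. The sketch below sweeps a discretised 1-D problem, -u'' + u^3 = f, updating one unknown at a time with a local Newton step; it is an analogy to the paper's scheme, not a reconstruction of it.

import numpy as np

n = 49
h = 1.0 / (n + 1)
f = np.ones(n)
u = np.zeros(n)

for sweep in range(200):
    u_old = u.copy()
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        # Residual and derivative of the i-th nonlinear algebraic equation.
        g = (2.0 * u[i] - left - right) / h**2 + u[i] ** 3 - f[i]
        dg = 2.0 / h**2 + 3.0 * u[i] ** 2
        u[i] -= g / dg               # one local Newton step per unknown
    if np.max(np.abs(u - u_old)) < 1e-10:
        break

print(f"converged in {sweep + 1} sweeps; midpoint u = {u[n // 2]:.6f}")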
A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework
NASA Astrophysics Data System (ADS)
Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco
2017-11-01
We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique especially featured to be effective for LES applications is proposed and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach allows to achieve significant reductions in computational cost of representative LES computations.
Investigation of the Stability of POD-Galerkin Techniques for Reduced Order Model Development
2016-01-09
Fragments recovered from this report concern the stability of POD-Galerkin techniques for reduced-order model (ROM) development. The ROM is obtained by employing Galerkin's method to reduce the high-order PDEs to a lower-order ODE system by means of POD eigen-bases. Remedies discussed for ROM stability include symmetrizing the higher-order PDE with a preconditioning matrix; Rowley et al. also pointed out that defining a proper inner product can be important.
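The snapshot-SVD-projection pipeline at the heart of POD-Galerkin reduction is sketched below on a stable linear ODE system; the report's PDE test cases and stabilisation devices are not reproduced, and the operator here is invented.

import numpy as np

rng = np.random.default_rng(8)
n, r = 200, 5
A = -np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))

# Full-order snapshots via explicit Euler.
dt, steps = 1e-3, 400
u = rng.standard_normal(n)
snaps = [u.copy()]
for _ in range(steps):
    u = u + dt * (A @ u)
    snaps.append(u.copy())
S = np.column_stack(snaps)

# POD basis: leading left singular vectors of the snapshot matrix.
Phi = np.linalg.svd(S, full_matrices=False)[0][:, :r]

# Galerkin projection onto the POD subspace gives an r x r operator.
Ar = Phi.T @ A @ Phi
a = Phi.T @ S[:, 0]
for _ in range(steps):
    a = a + dt * (Ar @ a)

print("ROM error at final time:", np.linalg.norm(Phi @ a - S[:, -1]))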
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient, and multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
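The chain of steps, discretise the degradation index into states, estimate a transition matrix, then read off expected time to the failure state, can be sketched as follows. Quantile binning stands in for the paper's fuzzy C-means division, and the degradation index is synthetic.

import numpy as np

rng = np.random.default_rng(9)
index = np.cumsum(np.abs(rng.normal(0.01, 0.02, 3000)))  # toy monotone index
edges = np.quantile(index, [0.25, 0.5, 0.75])
states = np.digitize(index, edges)        # states 0..3; treat 3 as failure

k = 4
P = np.zeros((k, k))
for s0, s1 in zip(states[:-1], states[1:]):
    P[s0, s1] += 1.0
P /= P.sum(axis=1, keepdims=True)

# Expected steps to absorption in the failure state: t = (I - Q)^-1 * 1,
# where Q is the transition matrix restricted to the transient states.
Q = P[:-1, :-1]
t = np.linalg.solve(np.eye(k - 1) - Q, np.ones(k - 1))
print("expected remaining life (steps) from states 0-2:", t)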
Multidirectional mobilities: Advanced measurement techniques and applications
NASA Astrophysics Data System (ADS)
Ivarsson, Lars Holger
Today high noise-and-vibration comfort has become a quality sign of products in sectors such as the automotive industry, aircraft, components, households and manufacturing. Consequently, already in the design phase of products, tools are required to predict the final vibration and noise levels. These tools have to be applicable over a wide frequency range with sufficient accuracy. During recent decades a variety of tools have been developed such as transfer path analysis (TPA), input force estimation, substructuring, coupling by frequency response functions (FRF) and hybrid modelling. While these methods have a well-developed theoretical basis, their application combined with experimental data often suffers from a lack of information concerning rotational DOFs. In order to measure response in all 6 DOFs (including rotation), a sensor has been developed, whose special features are discussed in the thesis. This transducer simplifies the response measurements, although in practice the excitation of moments appears to be more difficult. Several excitation techniques have been developed to enable measurement of multidirectional mobilities. For rapid and simple measurement of the loaded mobility matrix, a MIMO (Multiple Input Multiple Output) technique is used. The technique has been tested and validated on several structures of different complexity. A second technique for measuring the loaded 6-by-6 mobility matrix has been developed. This technique employs a model of the excitation set-up, and with this model the mobility matrix is determined from sequential measurements. Measurements on ``real'' structures show that both techniques give results of similar quality, and both are recommended for practical use. As a further step, a technique for measuring the unloaded mobilities is presented. It employs the measured loaded mobility matrix in order to calculate compensation forces and moments, which are later applied in order to compensate for the loading of the measurement equipment. The developed measurement techniques have been used in a hybrid coupling of a plate-and-beam structure to study different aspects of the coupling technique. Results show that RDOFs are crucial and have to be included in this case. The importance of stiffness residuals when mobilities are estimated from modal superposition is demonstrated. Finally it is shown that proper curve fitting can correct errors from inconsistently measured data.
Modelling the social and structural determinants of tuberculosis: opportunities and challenges
Boccia, D.; Dodd, P. J.; Lönnroth, K.; Dowdy, D. W.; Siroka, A.; Kimerling, M. E.; White, R. G.; Houben, R. M. G. J.
2017-01-01
INTRODUCTION: Despite the close link between tuberculosis (TB) and poverty, most mathematical models of TB have not addressed underlying social and structural determinants. OBJECTIVE: To review studies employing mathematical modelling to evaluate the epidemiological impact of the structural determinants of TB. METHODS: We systematically searched PubMed and personal libraries to identify eligible articles. We extracted data on the modelling techniques employed, research question, types of structural determinants modelled and setting. RESULTS: From 232 records identified, we included eight articles published between 2008 and 2015; six employed population-based dynamic TB transmission models and two non-dynamic analytic models. Seven studies focused on proximal TB determinants (four on nutritional status, one on wealth, one on indoor air pollution, and one examined overcrowding, socioeconomic and nutritional status), and one focused on macro-economic influences. CONCLUSIONS: Few modelling studies have attempted to evaluate structural determinants of TB, resulting in key knowledge gaps. Despite the challenges of modelling such a complex system, models must broaden their scope to remain useful for policy making. Given the intersectoral nature of the interrelations between structural determinants and TB outcomes, this work will require multidisciplinary collaborations. A useful starting point would be to focus on developing relatively simple models that can strengthen our knowledge regarding the potential effect of the structural determinants on TB outcomes. PMID:28826444
Investigation of Particle Deposition in Internal Cooling Cavities of a Nozzle Guide Vane
NASA Astrophysics Data System (ADS)
Casaday, Brian Patrick
Experimental and computational studies were conducted regarding particle deposition in the internal film cooling cavities of nozzle guide vanes. An experimental facility was fabricated to simulate particle deposition on an impingement liner and the upstream surface of a nozzle guide vane wall. The facility supplied particle-laden flow at temperatures up to 1000°F (540°C) to a simplified impingement cooling test section. The heated flow passed through a perforated impingement plate and impacted on a heated flat wall. The particle-laden impingement jets resulted in the buildup of deposit cones associated with individual impingement jets. The deposit growth rate increased with increasing temperature and decreasing impingement velocities. For some low flow rates or high flow temperatures, the deposit cone heights spanned the entire gap between the impingement plate and the wall, and grew through the impingement holes. For high flow rates, deposit structures were removed by shear forces from the flow. At low temperatures, deposit formed not only as individual cones, but also as ridges located at the mid-planes between impinging jets. A computational model was developed to predict the deposit buildup seen in the experiments. The test section geometry and fluid flow from the experiment were replicated computationally and an Eulerian-Lagrangian particle tracking technique was employed. Several particle sticking models were employed and tested for adequacy. Sticking models that accurately predicted locations and rates in external deposition experiments failed to predict certain structures or rates seen in internal applications. A geometry adaptation technique was employed and its effect on deposition prediction was discussed. A new computational sticking model was developed that predicts deposition rates based on the local wall shear. The growth patterns were compared to experiments under different operating conditions. Of all the sticking models employed, the model based on wall shear, in conjunction with geometry adaptation, proved the most accurate in predicting the forms of deposit growth. It was the only model that predicted the changing deposition trends with flow temperature or Reynolds number, and is recommended for further investigation and application in the modeling of deposition in internal cooling cavities.
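As an illustration of how a shear-dependent sticking criterion can enter a Lagrangian tracking loop, here is a minimal Python sketch; the exponential form and the critical shear parameter tau_crit are invented placeholders, not the dissertation's actual correlation.

```python
import numpy as np

# Illustrative sketch (not the dissertation's actual model): a sticking
# probability that decays with local wall shear stress, applied to each
# Lagrangian particle-wall impact. tau_crit is a hypothetical tuning value.
def sticking_probability(tau_wall, tau_crit=5.0):
    """Probability that an impacting particle adheres, given wall shear [Pa]."""
    return np.exp(-tau_wall / tau_crit)

rng = np.random.default_rng(0)
impacts = rng.uniform(0.0, 20.0, size=10_000)   # sampled wall shear at impact sites [Pa]
stuck = rng.random(impacts.size) < sticking_probability(impacts)
print(f"fraction deposited: {stuck.mean():.3f}")
```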
Rhea Was a Broad: Pre-Hellenic Greek Myths for Post-Hellenic Children.
ERIC Educational Resources Information Center
Sidwell, R. T.
1981-01-01
Describes a number of techniques employed by the Greek mythographers in the reconstruction of ancient myths to appear as if the male Olympian deities and the patriarchal social order that they modeled were both of vast antiquity, even autochthonic. (HOD)
Principles of operation and data reduction techniques for the LOFT drag disc turbine transducer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverman, S.
An analysis of the single- and two-phase flow data applicable to the loss-of-fluid test (LOFT) is presented for the LOFT drag turbine transducer. Analytical models which were employed to correlate the experimental data are presented.
Fixed gain and adaptive techniques for rotorcraft vibration control
NASA Technical Reports Server (NTRS)
Roy, R. H.; Saberi, H. A.; Walker, R. A.
1985-01-01
The results of an analysis effort performed to demonstrate the feasibility of employing approximate dynamical models and frequency-shaped cost functional control law design techniques for helicopter vibration suppression are presented. Both fixed gain and adaptive control designs based on linear second-order dynamical models were implemented in a detailed Rotor Systems Research Aircraft (RSRA) simulation to validate these active vibration suppression control laws. Approximate models of fuselage flexibility were included in the RSRA simulation in order to more accurately characterize the structural dynamics. The results for both the fixed gain and adaptive approaches are promising and provide a foundation for further validation in more extensive simulation studies and in wind tunnel and/or flight tests.
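To make the fixed-gain idea concrete, the sketch below closes the loop on a linear second-order vibration model with ordinary LQR state feedback; this is a plain stand-in for the frequency-shaped cost functional design named above, and the mode frequency, damping, and weighting matrices are assumed values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hedged sketch: fixed-gain state feedback for a second-order vibration model
# x'' + 2*zeta*wn*x' + wn^2*x = u. Plain LQR, not the paper's frequency-shaped
# cost functional; wn, zeta, Q, R are assumed example values.
wn, zeta = 2*np.pi*4.0, 0.02                      # 4 Hz, lightly damped mode
A = np.array([[0.0, 1.0], [-wn**2, -2*zeta*wn]])
B = np.array([[0.0], [1.0]])
Q = np.diag([wn**2, 1.0])                         # penalize displacement, velocity
R = np.array([[1e-3]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                   # fixed feedback gain, u = -K x
print("gain K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```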
The interaction of unidirectional winds with an isolated barchan sand dune
NASA Technical Reports Server (NTRS)
Gad-El-hak, M.; Pierce, D.; Howard, A.; Morton, J. B.
1976-01-01
Velocity profile measurements are made on and around a barchan dune model inserted in the roughness layer on the tunnel floor. A theoretical investigation is made into the factors influencing the rate of sand flow around the dune. Flow visualization techniques are employed to map the streamlines of flow on the dune's surface. Maps of erosion and deposition of sand are constructed for the barchan model, utilizing both flow visualization techniques and friction velocities calculated from the measured velocity profiles. The sediment budget found experimentally for the model is compared with previously reported predictions and observations, showing fairly good agreement between the experimentally determined and predicted sediment budgets.
Hidden explosives detector employing pulsed neutron and x-ray interrogation
Schultz, F.J.; Caldwell, J.T.
1993-04-06
Methods and systems for the detection of small amounts of modern, highly-explosive nitrogen-based explosives, such as plastic explosives, hidden in airline baggage. Several techniques are employed either individually or combined in a hybrid system. One technique employed in combination is X-ray imaging. Another technique is interrogation with a pulsed neutron source in a two-phase mode of operation to image both nitrogen and oxygen densities. Another technique employed in combination is neutron interrogation to form a hydrogen density image or three-dimensional map. In addition, deliberately-placed neutron-absorbing materials can be detected.
NASA Astrophysics Data System (ADS)
Warren, Z.; Shahriar, M. S.; Tripathi, R.; Pati, G. S.
2018-02-01
A repeated query technique has been demonstrated as a new interrogation method in pulsed coherent population trapping for producing single-peaked Ramsey interference with high contrast. This technique enhances the contrast of the central Ramsey fringe by nearly 1.5 times and significantly suppresses the side fringes by using more query pulses (>10) in the pulse cycle. Theoretical models have been developed to simulate Ramsey interference and analyze the characteristics of the Ramsey spectrum produced by the repeated query technique. Experiments have also been carried out employing the repeated query technique in a prototype rubidium clock to study its frequency stability performance.
A three-dimensional muscle activity imaging technique for assessing pelvic muscle function
NASA Astrophysics Data System (ADS)
Zhang, Yingchun; Wang, Dan; Timm, Gerald W.
2010-11-01
A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
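A minimal numpy sketch of the weighted minimum norm estimation step is given below; the lead-field matrix, weights, and regularization parameter are synthetic stand-ins for the paper's subject-specific finite element forward model.

```python
import numpy as np

# Minimal sketch of a weighted minimum norm estimate (WMNE): given a forward
# (lead-field) matrix A mapping internal sources to surface channels, recover
# the current density j from surface EMG b. A, W, and lam are synthetic.
rng = np.random.default_rng(1)
n_chan, n_src = 16, 200
A = rng.standard_normal((n_chan, n_src))
j_true = np.zeros(n_src); j_true[50:60] = 1.0     # a small active muscle region
b = A @ j_true + 0.01 * rng.standard_normal(n_chan)

W = np.diag(np.linalg.norm(A, axis=0))            # depth-compensation weights
lam = 1e-2                                        # regularization parameter
Wi = np.linalg.inv(W @ W)
j_hat = Wi @ A.T @ np.linalg.solve(A @ Wi @ A.T + lam * np.eye(n_chan), b)
print("strongest recovered sources:", np.argsort(np.abs(j_hat))[-5:])
```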
Munro, Peter R.T.; Ignatyev, Konstantin; Speller, Robert D.; Olivo, Alessandro
2013-01-01
X-ray phase contrast imaging is a very promising technique which may lead to significant advancements in medical imaging. One of the impediments to the clinical implementation of the technique is the general requirement to have an x-ray source of high coherence. The radiation physics group at UCL is currently developing an x-ray phase contrast imaging technique which works with laboratory x-ray sources. Validation of the system requires extensive modelling of relatively large samples of tissue. To aid this, we have undertaken a study of when geometrical optics may be employed to model the system in order to avoid the need to perform a computationally expensive wave optics calculation. In this paper, we derive the relationship between the geometrical and wave optics model for our system imaging an infinite cylinder. From this model we are able to draw conclusions regarding the general applicability of the geometrical optics approximation. PMID:20389424
Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction
NASA Astrophysics Data System (ADS)
Aarts, Fides; Jonsson, Bengt; Uijen, Johan
In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.
NASA Astrophysics Data System (ADS)
Walker, Ernest L.
1994-05-01
This paper presents results of a theoretical investigation to evaluate the performance of code division multiple access communications over multimode optical fiber channels in an asynchronous, multiuser communication network environment. The system is evaluated using Gold sequences for spectral spreading of the baseband signal from each user employing direct-sequence biphase shift keying and intensity modulation techniques. The transmission channel model employed is a lossless linear system approximation of the field transfer function for the alpha-profile multimode optical fiber. Due to channel model complexity, a correlation receiver model employing a suboptimal receive filter was used in calculating the peak output signal at the ith receiver. In Part 1, the performance measures for the system, i.e., signal-to-noise ratio and bit error probability for the ith receiver, are derived as functions of channel characteristics, spectral spreading, number of active users, and the bit energy to noise (white) spectral density ratio. In Part 2, the overall system performance is evaluated.
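For readers unfamiliar with the spreading codes involved, the hedged sketch below generates a length-31 Gold family from a preferred pair of degree-5 LFSRs; the tap sets are a common textbook choice, and the paper's actual code parameters may differ.

```python
import numpy as np

# Sketch of Gold-sequence generation for direct-sequence spreading.
# The tap sets below form a standard preferred pair of degree-5 primitive
# polynomials (octal 45/75 family), giving length-31 Gold codes.
def m_sequence(taps, length=31, state=None):
    """Maximal-length sequence from a Fibonacci LFSR with the given taps."""
    state = state or [1, 0, 0, 0, 0]
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

u = m_sequence([5, 2])
v = m_sequence([5, 4, 3, 2])
gold_codes = [u ^ np.roll(v, k) for k in range(31)]    # one code per offset

c0, c1 = 1 - 2*gold_codes[0], 1 - 2*gold_codes[7]      # map {0,1} -> {+1,-1}
print("cross-correlation (zero lag):", int(c0 @ c1))   # bounded for Gold pairs
```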
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
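A bare-bones version of image-based two-material decomposition is sketched below: each pixel's low/high-kV attenuation pair is inverted against a 2x2 basis-material matrix. The basis values are illustrative placeholders, and the unweighted solve stands in for the authors' statistically optimal estimator.

```python
import numpy as np

# Minimal sketch of image-based two-material decomposition. Each pixel's pair
# of reconstructed attenuation values (low/high kV) is expressed as a linear
# combination of two basis materials (water, iodine). Basis values are
# placeholders, not calibrated spectra.
mu_basis = np.array([[0.20, 0.45],     # mu at low kV:  [water, iodine]
                     [0.18, 0.25]])    # mu at high kV: [water, iodine]

rng = np.random.default_rng(0)
water_true, iodine_true = 1.0, 0.3
low  = 0.20*water_true + 0.45*iodine_true + 0.005*rng.standard_normal((64, 64))
high = 0.18*water_true + 0.25*iodine_true + 0.005*rng.standard_normal((64, 64))

pix = np.stack([low.ravel(), high.ravel()])   # 2 x Npix measurement matrix
coeffs = np.linalg.solve(mu_basis, pix)       # per-pixel basis coefficients
water_img, iodine_img = (c.reshape(64, 64) for c in coeffs)
print("mean iodine estimate:", iodine_img.mean().round(3))
```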
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Priniski, T. Dodson, M. Duco, S. Raftopoulos, R. Ellis, and A. Brooks
In support of the National Compact Stellarator Experiment (NCSX), stellarator assembly activities continued this past year at the Princeton Plasma Physics Laboratory (PPPL) in partnership with the Oak Ridge National Laboratory (ORNL). The construction program saw the completion of the first two Half Field-Period Assemblies (HPA), each consisting of three modular coils. The full machine includes six such sub-assemblies. A single HPA consists of three of the NCSX modular coils wound and assembled at PPPL. These geometrically complex three-dimensional coils were wound using computer-aided metrology and CAD models to tolerances within +/- 0.5 mm. The assembly of these coils required similar accuracy on a larger scale, with the added complexity of more individual parts and fewer degrees of freedom for correction. Several new potential positioning issues developed, for which measurement and control techniques were devised. To accomplish this, CAD coordinate-based computer metrology equipment and software similar to the solutions employed for winding the modular coils were used. Given the size of the assemblies, the primary tools were both interferometer-aided and Absolute Distance Measurement (ADM)-only based laser trackers. In addition, portable Coordinate Measurement Machine (CMM) arms and some novel indirect measurement techniques were employed. This paper details both the use of CAD coordinate-based metrology technology and the techniques developed and employed for dimensional control of NCSX subassemblies. The results achieved and possible improvements to the techniques are discussed.
NASA Astrophysics Data System (ADS)
Zink, Frank Edward
The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone-selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple-observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
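The material-selective image formation at the heart of such dual-energy techniques can be illustrated with a weighted log subtraction; in the sketch below the attenuation coefficients and thickness maps are invented, and no scatter or beam-hardening correction is modeled.

```python
import numpy as np

# Hedged sketch of dual-energy material cancellation: with log-intensity
# images at two beam energies, a tissue-selective image follows by choosing
# the weight w that cancels bone. mu values are illustrative only.
mu_bone   = np.array([0.60, 0.30])     # [low-kVp, high-kVp] attenuation
mu_tissue = np.array([0.25, 0.20])

rng = np.random.default_rng(0)
t_bone   = rng.uniform(0.0, 1.0, (32, 32))      # bone thickness map
t_tissue = rng.uniform(5.0, 6.0, (32, 32))
log_low  = mu_bone[0]*t_bone + mu_tissue[0]*t_tissue
log_high = mu_bone[1]*t_bone + mu_tissue[1]*t_tissue

w = mu_bone[0] / mu_bone[1]                     # weight that removes bone
tissue_img = log_low - w * log_high             # tissue-selective image
print("residual bone correlation:",
      np.corrcoef(tissue_img.ravel(), t_bone.ravel())[0, 1].round(4))
```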
De-embedding technique for accurate modeling of compact 3D MMIC CPW transmission lines
NASA Astrophysics Data System (ADS)
Pohan, U. H.; Kyabaggu, P. B.; Sinulingga, E. P.
2018-02-01
Requirements for high-density, high-functionality microwave and millimeter-wave circuits have led to innovative circuit architectures such as three-dimensional multilayer MMICs. The major advantage of the multilayer technique is that one can employ passive and active components based on CPW technology. In this work, MMIC coplanar waveguide (CPW) components such as transmission lines (TL) are modeled in their 3D layouts. The main characteristics of the CPW TL, which suffer from probe-pad parasitics and resonant frequency effects, have been studied. With the parasitic effects understood, a novel de-embedding technique is developed to accurately predict the high-frequency characteristics of the designed MMICs. The de-embedding technique proves critical in significantly reducing the probe-pad parasitics in the model. As a result, the high-frequency characteristics of the designed MMICs are presented with minimal parasitic effects from the probe pads. The de-embedding process optimises the determination of the main characteristics of compact 3D MMIC CPW transmission lines.
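The chain (ABCD) matrix formulation gives a compact way to express pad de-embedding: if the measured two-port is the cascade pad * DUT * pad, pre- and post-multiplying by inverse pad matrices recovers the DUT. The sketch below uses a hypothetical shunt-capacitance pad model, not the paper's 3D MMIC parasitic model.

```python
import numpy as np

# Minimal sketch of pad de-embedding with ABCD (chain) matrices. The pad is a
# simple assumed shunt-capacitance approximation; the DUT is a toy series RL.
f = 30e9                                     # analysis frequency [Hz]
w = 2*np.pi*f

def shunt_c(c):                              # ABCD of a shunt capacitor
    return np.array([[1, 0], [1j*w*c, 1]])

def series_z(z):                             # ABCD of a series impedance
    return np.array([[1, z], [0, 1]])

pad = shunt_c(25e-15)                        # 25 fF probe-pad parasitic
dut = series_z(10 + 1j*w*50e-12)             # line modeled as series R + L

measured = pad @ dut @ pad                   # what the probes see
deembedded = np.linalg.inv(pad) @ measured @ np.linalg.inv(pad)
print("max |error| after de-embedding:", np.abs(deembedded - dut).max())
```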
NASA Technical Reports Server (NTRS)
Jennings, W. P.; Olsen, N. L.; Walter, M. J.
1976-01-01
The development of testing techniques useful in airplane ground resonance testing, wind tunnel aeroelastic model testing, and airplane flight flutter testing is presented. Included is the consideration of impulsive excitation, steady-state sinusoidal excitation, and random and pseudorandom excitation. Reasons for the selection of fast sine sweeps for transient excitation are given. The use of the fast Fourier transform dynamic analyzer (HP-5451B) is presented, together with a curve-fitting data process in the Laplace domain to experimentally evaluate values of generalized mass, modal frequencies, dampings, and mode shapes. The effects of poor signal-to-noise ratios due to turbulence creating data variance are discussed. Data manipulation techniques used to overcome variance problems are also included. Experience gained from using these techniques since the early stages of the SST program is described. Data measured during 747 flight flutter tests, and SST, YC-14, and 727 empennage flutter model tests are included.
Surface temperatures and glassy state investigations in tribology
NASA Technical Reports Server (NTRS)
Bair, S.; Winer, W. O.
1979-01-01
The limiting-shear-stress rheological model was applied to property measurements supporting the use of the constitutive equation and its application to elastohydrodynamic (EHD) traction. Experimental techniques were developed to subject materials to isothermal compression similar to the history the materials experience in EHD contacts. In addition, an apparatus was developed for measuring the shear stress-strain behavior of solid lubricating materials. Four commercially available materials were examined under pressure. They exhibit elastic and limiting shear stress behavior similar to that of liquid lubricants. The application of the limiting shear stress model to traction predictions was extended employing the primary material properties measured in the laboratory. The shear rheological model was also applied to a Grubin-like EHD inlet analysis for predicting film thicknesses when the limiting-shear-stress material behavior is employed.
Song, Xiao-Dong; Zhang, Gan-Lin; Liu, Feng; Li, De-Cheng; Zhao, Yu-Guo
2016-11-01
The influence of anthropogenic activities and natural processes introduces high uncertainty into the spatial variation modeling of soil available zinc (AZn) in plain river network regions. Four datasets with different sampling densities were drawn from the Qiaocheng district of Bozhou City, China. The difference in AZn concentrations among soil types was analyzed by principal component analysis (PCA). Since stationarity was not indicated and the effective ranges of the four datasets were larger than the sampling extent (about 400 m), two investigation tools, namely the F3 test and the stationarity index (SI), were employed to test for local non-stationarity. The geographically weighted regression (GWR) technique was performed to describe the spatial heterogeneity of AZn concentrations under the non-stationarity assumption. GWR based on grouped soil type information (GWRG for short) was proposed to benefit the local modeling of soil AZn within each soil-landscape unit. For reference, the multiple linear regression (MLR) model, a global regression technique, was also employed, incorporating the same predictors as the GWR models. Validation results based on 100 realizations demonstrated that GWRG outperformed MLR and can produce accuracy similar to or better than the GWR approach. Moreover, GWRG can generate better soil maps than GWR for limited soil data. A two-sample t test of the produced soil maps also confirmed significantly different means. Variogram analysis of the model residuals exhibited weak spatial correlation, arguing against the use of hybrid kriging techniques. As a heuristic statistical method, GWRG was beneficial in this study and potentially for other soil properties.
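The core of GWR is a kernel-weighted least-squares fit at each prediction location; a bare-bones sketch follows, with synthetic coordinates and covariates standing in for the soil AZn samples and an assumed (uncalibrated) Gaussian bandwidth.

```python
import numpy as np

# Bare-bones geographically weighted regression (GWR): at each prediction
# location, a weighted least-squares fit with a Gaussian distance kernel,
# so coefficients vary in space. Data and bandwidth h are synthetic/assumed.
rng = np.random.default_rng(0)
n = 300
xy = rng.uniform(0, 10, (n, 2))                             # coordinates [km]
X = np.column_stack([np.ones(n), rng.standard_normal(n)])   # intercept + covariate
beta1 = 1.0 + 0.3*xy[:, 0]                                  # spatially varying slope
y = 2.0*X[:, 0] + X[:, 1]*beta1 + 0.1*rng.standard_normal(n)

def gwr_at(pt, h=1.5):
    d2 = ((xy - pt)**2).sum(axis=1)
    w = np.exp(-d2 / (2*h**2))                # Gaussian kernel weights
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None]*X, sw*y, rcond=None)
    return beta

print("local slope, west edge:", gwr_at(np.array([1.0, 5.0]))[1].round(2))
print("local slope, east edge:", gwr_at(np.array([9.0, 5.0]))[1].round(2))
```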
ASSESSMENT OF GENETIC DAMAGE INDICATORS IN FISH IN LABORATORY, MESOCOSM AND WATERSHED STUDIES
The micronucleus (MN) and single cell gel electrophoresis (SCG) ("Comet") techniques for measuring DNA damage are being evaluated for their potential use as indicators of exposure of fish populations. Laboratory studies employed acute exposures of bluegill sunfish to five model g...
Mastery Learning in Physical Education.
ERIC Educational Resources Information Center
Annarino, Anthony
This paper discusses the design of a physical education curriculum to be used in advanced secondary physical education programs and in university basic instructional programs; the design is based on the premise of mastery learning and employs programed instructional techniques. The effective implementation of a mastery learning model necessitates…
Teaching Action Research: The Role of Demographics
ERIC Educational Resources Information Center
Mcmurray, Adela J.
2006-01-01
This article summarizes a longitudinal study of employed MBA students with particular emphasis on findings involving their choice of action research model to implement personal and organizational change in their environment. A multi-method approach merging both quantitative and qualitative techniques was utilized. A questionnaire consisting of…
Sensitivity analyses for simulating pesticide impacts on honey bee colonies
USDA-ARS?s Scientific Manuscript database
We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop+Pesticide model. Simulations are performed of hive population trajectories with and without pesticide exposure to determine the eff...
The joint US/UK 1990 epoch world magnetic model
NASA Technical Reports Server (NTRS)
Quinn, John M.; Coleman, Rachel J.; Peck, Michael R.; Lauber, Stephen E.
1991-01-01
A detailed summary of the data used, analyses performed, modeling techniques employed, and results obtained in the course of the 1990 Epoch World Magnetic Modeling effort are given. Also, use and limitations of the GEOMAG algorithm are presented. Charts and tables related to the 1990 World Magnetic Model (WMM-90) for the Earth's main field and secular variation in Mercator and polar stereographic projections are presented along with useful tables of several magnetic field components and their secular variation on a 5-degree worldwide grid.
Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters
NASA Technical Reports Server (NTRS)
Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.
1989-01-01
The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.
Microcomputer Applications with PC LAN (Local Area Network) in Battleships.
1988-12-01
Contents include network types, transmission techniques, and medium access control methods (CSMA/CD, control token, slotted ring). The network model in the Turkish battleships will employ the broadband technique. The access method is one of the most important design considerations, with some methods offering better performance at heavier loads. The slotted ring method is used with a ring network; the ring is initialized to contain a fixed number of slots.
Supercritical tests of a self-optimizing, variable-Camber wind tunnel model
NASA Technical Reports Server (NTRS)
Levinsky, E. S.; Palko, R. L.
1979-01-01
A testing procedure was used in a 16-foot Transonic Propulsion Wind Tunnel which leads to optimum wing airfoil sections without stopping the tunnel for model changes. Because the optimization is experimental, the optimum shapes obtained incorporate various three-dimensional and nonlinear viscous and transonic effects not included in analytical optimization methods. The method is a closed-loop, computer-controlled, interactive procedure and employs a Self-Optimizing Flexible Technology wing semispan model that conformally adapts the airfoil section at two spanwise control stations to maximize or minimize various prescribed merit functions subject to both equality and inequality constraints. The model, which employed twelve independent hydraulic actuator systems and flexible skins, was also used for conventional testing. Although six of the seven optimizations attempted were at least partially convergent, further improvements in model skin smoothness and hydraulic reliability are required to make the technique fully operational.
Liu, Langechuan; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao
2014-01-01
Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016 μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied—with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an optimization map indicating the regions of design that provide a balance between these metrics was obtained. The map shows that, for a given set of optical parameters, scintillator thickness and pixel pitch can be judiciously chosen to maximize performance without resorting to thicker, more costly scintillators. Conclusions: Modeling radiation and optical effects in thick, segmented scintillators through use of a hybrid technique can provide a practical way to gain insight as to how to optimize the performance of such devices in radiotherapy imaging. Assisted by such modeling, the development of practical designs should greatly facilitate low-dose, soft tissue visualization employing MV CBCT imaging in external beam radiotherapy. PMID:24877827
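The CNR figure of merit used to compare the designs reduces to a simple ROI statistic; a toy sketch follows, with synthetic image patches in place of reconstructed MV CBCT data.

```python
import numpy as np

# Simple sketch of the contrast-to-noise ratio (CNR) figure of merit: mean
# signal difference between an insert ROI and a background ROI, divided by
# the background noise. Arrays here are synthetic placeholders.
rng = np.random.default_rng(0)
background = 100 + 5*rng.standard_normal((50, 50))   # background ROI values
insert     = 108 + 5*rng.standard_normal((20, 20))   # soft-tissue insert ROI

cnr = (insert.mean() - background.mean()) / background.std()
print(f"CNR = {cnr:.2f}")
```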
Videometric Applications in Wind Tunnels
NASA Technical Reports Server (NTRS)
Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu
1997-01-01
Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.
Parallel 3D-TLM algorithm for simulation of the Earth-ionosphere cavity
NASA Astrophysics Data System (ADS)
Toledo-Redondo, Sergio; Salinas, Alfonso; Morente-Molinera, Juan Antonio; Méndez, Antonio; Fornieles, Jesús; Portí, Jorge; Morente, Juan Antonio
2013-03-01
A parallel 3D algorithm for solving time-domain electromagnetic problems with arbitrary geometries is presented. The technique employed is the Transmission Line Modeling (TLM) method implemented in Shared Memory (SM) environments. The benchmarking performed reveals that the maximum speedup depends on the memory size of the problem as well as multiple hardware factors, such as the arrangement of CPUs, cache, and memory. A maximum speedup of 15 has been measured for the largest problem. In certain circumstances of low memory requirements, superlinear speedup is achieved with our algorithm. The algorithm is employed to model the Earth-ionosphere cavity, thus enabling a study of the natural electromagnetic phenomena that occur in it. It allows complete 3D simulations of the cavity with a resolution of 10 km within a reasonable timescale.
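Speedup curves of this kind are often summarized by fitting Amdahl's law, S(n) = 1/((1-p) + p/n); the sketch below fits the parallel fraction p to invented timings, not the paper's SM-TLM benchmark numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: estimate the parallel fraction p from measured speedups via
# Amdahl's law. The speedup data below are illustrative, not the paper's.
def amdahl(n, p):
    return 1.0 / ((1.0 - p) + p / n)

cores   = np.array([1, 2, 4, 8, 16, 32])
speedup = np.array([1.0, 1.9, 3.6, 6.8, 11.5, 15.2])   # invented data
(p_fit,), _ = curve_fit(amdahl, cores, speedup, p0=[0.9])
print(f"estimated parallel fraction p = {p_fit:.3f}")
```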
Modeling Laser Effects on Imaging Spacecraft Using the SSM
NASA Astrophysics Data System (ADS)
Buehler, P.; Smith, J.; Farmer, J.; Bonn, D.
The Satellite Survivability Module (SSM) is an end-to-end, physics-based, performance prediction model for directed energy engagement of orbiting spacecraft. Two engagement types are currently supported: laser engagement of the focal plane array of an imaging spacecraft; and Radio Frequency (RF) engagement of spacecraft components. For laser engagements, the user creates a spacecraft, its optical system, any protection techniques used by the optical system, a laser threat, and an atmosphere through which the laser will pass. For RF engagements, the user creates a spacecraft (as a set of subsystem components), any protection techniques, and an RF source. SSM then models the engagement and its impact on the spacecraft using four impact levels: degradation, saturation, damage, and destruction. Protection techniques, if employed, will mitigate engagement effects. SSM currently supports two laser and three RF protection techniques. SSM allows the user to create and implement a variety of "what if" scenarios. Satellites can be placed in a variety of orbits. Threats can be placed anywhere on the Earth. Satellites and threats can be mixed and matched to examine possibilities. Protection techniques for a particular spacecraft can be turned on or off individually, and can be arranged in any order to simulate more complicated protection schemes. Results can be displayed as 2-D or 3-D visualizations, or as textual reports. In order to test SSM capabilities, the Ball team used it to model engagement scenarios for a space experiment scheduled for the 2011 time frame. SSM was created by Ball Aerospace & Technologies Corp. Systems Engineering Solutions in Albuquerque, New Mexico as an add-on module for the Satellite Tool Kit (STK). The current version of SSM (1.0) interfaces with STK through the Programmer's Library (STK/PL). Future versions of SSM will employ STK/Connect to provide the user access to STK functionality. The work is currently funded by the Air Force Research Laboratory, Space Vehicles Directorate at Kirtland AFB, New Mexico, under contract number FA9453-06-C-0096.
2016-06-01
Characteristics, experimental design techniques, and analysis methodologies distinguish each phase of the MBSE MEASA. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Rounding has the potential to increase the correlation between columns of the experimental design matrix.
Modelling the Energy Efficient Sensor Nodes for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Dahiya, R.; Arora, A. K.; Singh, V. R.
2015-09-01
Energy efficiency is an important requirement for the performance of wireless sensor networks. A widely employed energy-saving technique is to place nodes in sleep mode, which lowers power consumption at the cost of reduced operational capability. In this paper, a Markov model of a sensor network is developed in which a node may enter sleep mode. This model is used to investigate system performance in terms of energy consumption, network capacity and data delivery delay.
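A toy version of such a model is a two-state (active/sleep) Markov chain whose stationary distribution, combined with per-state power draw, yields the mean consumption; the transition probabilities and powers below are assumed example values.

```python
import numpy as np

# Toy two-state Markov model of a sensor node (active <-> sleep): solve the
# stationary distribution and combine with per-state power draw. Transition
# probabilities and power levels are assumed example values.
P = np.array([[0.90, 0.10],      # active -> {active, sleep}
              [0.30, 0.70]])     # sleep  -> {active, sleep}
power = np.array([60.0, 0.5])    # mW in active and sleep states

# stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()
print(f"duty cycle (active) = {pi[0]:.2f}, mean power = {pi @ power:.1f} mW")
```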
Deeb, Omar; Shaik, Basheerulla; Agrawal, Vijay K
2014-10-01
Quantitative structure-activity relationship (QSAR) models for the binding affinity constants (log Ki) of 78 flavonoid ligands towards the benzodiazepine site of the GABA(A) receptor complex were calculated using machine learning methods: artificial neural network (ANN) and support vector machine (SVM) techniques. The models obtained were compared with those obtained using multiple linear regression (MLR) analysis. Descriptor selection and model building were performed with 10-fold cross-validation using the training data set. The SVM and MLR coefficient of determination values are 0.944 and 0.879, respectively, for the training set, higher than those of the ANN models. Though the SVM model shows better fitting of the training set, the ANN model was superior to both SVM and MLR in predicting the test set. A randomization test was employed to check the suitability of the models.
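The comparison workflow can be sketched with scikit-learn: 10-fold cross-validated R^2 for an SVM regressor against multiple linear regression. The descriptor matrix and target values below are random placeholders, not the flavonoid data set.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hedged sketch of the model-comparison workflow. X and y are random
# placeholders for the flavonoid descriptors and log Ki values.
rng = np.random.default_rng(0)
X = rng.standard_normal((78, 6))             # 78 ligands, 6 descriptors
y = X @ rng.standard_normal(6) + 0.3*rng.standard_normal(78)

for name, model in [("SVM", SVR(kernel="rbf", C=10.0)),
                    ("MLR", LinearRegression())]:
    r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
    print(f"{name}: mean 10-fold R^2 = {r2.mean():.3f}")
```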
Shima, Fumiaki; Narita, Hirokazu; Hiura, Ayami; Shimoda, Hiroshi; Akashi, Mitsuru
2017-03-01
There is considerable global demand for three-dimensional (3D) functional tissues which mimic our native organs and tissues for use as in vitro drug screening systems and in regenerative medicine. In particular, there has been an increasing number of patients who suffer from arterial diseases such as arteriosclerosis. As such, in vitro 3D arterial wall models that can evaluate the effects of novel medicines and a novel artificial graft for the treatment are required. In our previous study, we reported the rapid construction of 3D tissues by employing a layer-by-layer (LbL) technique and revealed their potential applications in the pharmaceutical fields and tissue engineering. In this study, we successfully constructed a 3D arterial wall model containing vasa vasorum by employing a LbL technique for the first time. The cells were coated with extracellular matrix nanofilms and seeded into a culture insert using a cell accumulation method. This model had a three-layered hierarchical structure: a fibroblast layer, a smooth muscle layer, and an endothelial layer, which resembled the native arterial wall. Our method could introduce vasa vasorum into a fibroblast layer in vitro and the 3D arterial wall model showed barrier function which was evaluated by immunostaining and transendothelial electrical resistance measurement. Furthermore, electron microscopy observations revealed that the vasa vasorum was composed of single-layered endothelial cells, and the endothelial tubes were surrounded by the basal lamina, which are known to promote maturation and stabilization in native blood capillaries. These models should be useful for tissue engineering, regenerative medicine, and pharmaceutical applications. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 105A: 814-823, 2017. © 2016 Wiley Periodicals, Inc.
Teixeira, Kelly Sivocy Sampaio; da Cruz Fonseca, Said Gonçalves; de Moura, Luís Carlos Brigido; de Moura, Mario Luís Ribeiro; Borges, Márcia Herminia Pinheiro; Barbosa, Euzébio Guimaraes; De Lima E Moura, Túlio Flávio Accioly
2018-02-05
The World Health Organization recommends that TB treatment be administered using combination therapy. The methodologies for simultaneously quantifying associated drugs are highly complex: costly, extremely time consuming, and productive of chemical residues harmful to the environment. The need for alternative techniques that minimize these drawbacks is widely discussed in the pharmaceutical industry. Therefore, the objective of this study was to develop and validate a multivariate calibration model in association with the near infrared spectroscopy (NIR) technique for the simultaneous determination of rifampicin, isoniazid, pyrazinamide and ethambutol. These models allow the quality control of these medicines to be optimized using simple, fast, low-cost techniques that produce no chemical waste. In the NIR-PLS method, spectra were acquired in the 10,000-4000 cm^-1 range using an infrared spectrophotometer (IRPrestige-21, Shimadzu) with a resolution of 4 cm^-1, 20 sweeps, under controlled temperature and humidity. For construction of the model, a central composite experimental design was employed in the program Statistica 13 (StatSoft Inc.). All spectra were treated by computational tools for multivariate analysis using partial least squares (PLS) regression in the software program Pirouette 3.11 (Infometrix, Inc.). Variable selection was performed with the QSAR modeling program. The models developed by NIR in association with multivariate analysis provided good prediction of the APIs for the external samples and were therefore validated. For the tablets, however, the slightly different quantitative compositions of excipients compared to the mixtures prepared for building the models led to results that were not statistically similar, despite prediction errors considered acceptable in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
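A minimal sketch of the NIR-PLS calibration idea follows, using scikit-learn's PLSRegression on simulated spectra rather than measurements from the instrument and design named above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Minimal sketch of PLS calibration on NIR-like spectra: latent-variable
# regression from spectral channels to analyte concentration. Spectra are
# simulated, not measurements from the IRPrestige-21 instrument.
rng = np.random.default_rng(0)
n_samples, n_channels = 60, 400
conc = rng.uniform(0.5, 2.0, n_samples)                  # analyte concentration
peak = np.exp(-0.5*((np.arange(n_channels) - 180)/15.0)**2)
spectra = np.outer(conc, peak) + 0.01*rng.standard_normal((n_samples, n_channels))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, spectra, conc, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - conc)**2))
print(f"RMSECV = {rmsecv:.4f}")
```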
Method and program product for determining a radiance field in an optical environment
NASA Technical Reports Server (NTRS)
Reinersman, Phillip N. (Inventor); Carder, Kendall L. (Inventor)
2007-01-01
A hybrid method is presented by which Monte Carlo techniques are combined with iterative relaxation techniques to solve the Radiative Transfer Equation in arbitrary one-, two- or three-dimensional optical environments. The optical environments are first divided into contiguous regions, or elements, with Monte Carlo techniques then being employed to determine the optical response function of each type of element. The elements are combined, and the iterative relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. This hybrid model is capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities. It is also capable of providing estimates of the subaerial light field for structured, absorbing or non-absorbing environments, such as shadows of mountain ranges within and outside absorption spectral bands such as the water vapor or CO2 bands.
Weather or Not To Teach Junior High Meteorology.
ERIC Educational Resources Information Center
Knorr, Thomas P.
1984-01-01
Presents a technique for teaching meteorology allowing students to observe and analyze consecutive weather maps and relate local conditions; a model illustrating the three-dimensional nature of the atmosphere is employed. Instructional methods based on studies of daily weather maps to trace systems sweeping across the United States are discussed.…
Research on golden-winged warblers: recent progress and current needs
Henry M. Streby; Ronald W. Rohrbaugh; David A. Buehler; David E. Andersen; Rachel Vallender; David I. King; Tom Will
2016-01-01
Considerable advances have been made in knowledge about Golden-winged Warblers (Vermivora chrysoptera) in the past decade. Recent employment of molecular analysis, stable-isotope analysis, telemetry-based monitoring of survival and behavior, and spatially explicit modeling techniques have added to, and revised, an already broad base of published...
Predicting the Emplacement of Improvised Explosive Devices: An Innovative Solution
ERIC Educational Resources Information Center
Lerner, Warren D.
2013-01-01
In this quantitative correlational study, simulated data were employed to examine artificial-intelligence techniques or, more specifically, artificial neural networks, as they relate to the location prediction of improvised explosive devices (IEDs). An ANN model was developed to predict IED placement, based upon terrain features and objects…
Testing a Conceptual Change Model Framework for Visual Data
ERIC Educational Resources Information Center
Finson, Kevin D.; Pedersen, Jon E.
2015-01-01
An emergent data analysis technique was employed to test the veracity of a conceptual framework constructed around visual data use and instruction in science classrooms. The framework incorporated all five key components Vosniadou (2007a, 2007b) described as existing in a learner's schema: framework theory, presuppositions, conceptual domains,…
ERIC Educational Resources Information Center
Rodriguez, Lulu; Dimitrova, Daniela V.
2011-01-01
While framing research has centered mostly on the evaluations of media texts, visual news discourse has remained relatively unexamined. This study surveys the visual framing techniques and methods employed in previous studies and proposes a four-tiered model of identifying and analyzing visual frames: (1) visuals as denotative systems, (2) visuals…
A proposed technique for vehicle tracking, direction, and speed determination
NASA Astrophysics Data System (ADS)
Fisher, Paul S.; Angaye, Cleopas O.; Fisher, Howard P.
2004-12-01
A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem, with significant hurdles still to be addressed; these are discussed in the paper. However, preliminary results also show promise for this technique for use in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets and derived by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as the one described in this paper.
Sarpietro, Maria Grazia; Giuffrida, Maria Chiara; Ottimo, Sara; Micieli, Dorotea; Castelli, Francesco
2011-04-25
Three coumarins, scopoletin (1), esculetin (2), and esculin (3), were investigated by differential scanning calorimetry and Langmuir-Blodgett techniques to gain information about the interaction of these compounds with cellular membranes. Phospholipids assembled as multilamellar vesicles or monolayers (at the air-water interface) were used as biomembrane models. Differential scanning calorimetry was employed to study the interaction of these coumarins with multilamellar vesicles and to evaluate their absorption by multilamellar vesicles. These experiments indicated that 1-3 interact in this manner to different extents. The Langmuir-Blodgett technique was used to study the effect of these coumarins on the organization of phospholipids assembled as a monolayer. The data obtained were in agreement with those obtained in the calorimetric experiments.
Systems Biology in Immunology – A Computational Modeling Perspective
Germain, Ronald N.; Meier-Schellersheim, Martin; Nita-Lazar, Aleksandra; Fraser, Iain D. C.
2011-01-01
Systems biology is an emerging discipline that combines high-content, multiplexed measurements with informatic and computational modeling methods to better understand biological function at various scales. Here we present a detailed review of the methods used to create computational models and conduct simulations of immune function. We provide descriptions of the key data-gathering techniques employed to generate the quantitative and qualitative data required for such modeling and simulation and summarize the progress to date in applying these tools and techniques to questions of immunological interest, including infectious disease. We include comments on what insights modeling can provide that complement information obtained from the more familiar experimental discovery methods used by most investigators, and why quantitative methods are needed to eventually produce a better understanding of immune system operation in health and disease. PMID:21219182
Artificial intelligence techniques for modeling database user behavior
NASA Technical Reports Server (NTRS)
Tanner, Steve; Graves, Sara J.
1990-01-01
The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.
Image mosaicing for automated pipe scanning
NASA Astrophysics Data System (ADS)
Summan, Rahul; Dobie, Gordon; Guarato, Francesco; MacLeod, Charles; Marshall, Stephen; Forrester, Cailean; Pierce, Gareth; Bolton, Gary
2015-03-01
Remote visual inspection (RVI) is critical for inspecting the interior condition of pipelines, particularly in the nuclear and oil and gas industries. Conventional RVI equipment produces a video which is analysed online by a trained inspector employing expert knowledge. Due to the potentially disorientating nature of the footage, this is a time-intensive and difficult activity. In this paper a new probe for such visual inspections is presented. The device employs a catadioptric lens coupled with feature-based structure from motion to create a 3D model of the interior surface of a pipeline. Reliance upon the availability of image features is mitigated through orientation and distance estimates from an inertial measurement unit and encoder, respectively. Such a model affords a global view of the data, permitting a greater appreciation of the nature and extent of defects. Furthermore, the technique estimates the 3D position and orientation of the probe, providing information to direct remedial action. Results are presented for both synthetic and real pipe sections. The former enables the accuracy of the generated model to be assessed, while the latter demonstrates the efficacy of the technique in practice.
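The geometric core of the feature-based structure-from-motion pipeline is two-view relative pose recovery from the essential matrix; the OpenCV sketch below (requires opencv-python) synthesizes correspondences from a known motion, whereas the real probe would obtain them by feature matching on the catadioptric imagery.

```python
import numpy as np
import cv2

# Hedged sketch of two-view pose recovery via the essential matrix.
# Correspondences are synthesized from an assumed camera and motion.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (100, 3))   # points down the pipe

R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.05], [0.0]]))  # small yaw
t_true = np.array([[0.1], [0.0], [0.2]])                     # probe motion

def project(P, R, t):
    cam = (R @ P.T + t).T
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:]

pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))
pts2 = project(pts3d, R_true, t_true)

E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("recovered translation direction:", t.ravel().round(3))  # up to scale
```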
NASA Astrophysics Data System (ADS)
Uysal, Selcuk Can
In this research, MATLAB Simulink™ was used to develop a cooled engine model for industrial gas turbines and aero-engines. The model consists of an uncooled on-design, mean-line turbomachinery design and a cooled off-design analysis in order to evaluate the engine performance parameters from operating conditions, polytropic efficiencies, material information and cooling system details. The cooling analysis algorithm involves a 2nd law analysis to calculate losses from the cooling technique applied. The model is used in a sensitivity analysis that evaluates the impacts of variations in metal Biot number, thermal barrier coating Biot number, film cooling effectiveness, internal cooling effectiveness and maximum allowable blade temperature on the main engine performance parameters of aero and industrial gas turbine engines. The model is subsequently used to analyze the relative performance impact of employing Anti-Vortex Film Cooling holes (AVH) by means of data obtained for these holes by Detached Eddy Simulation CFD techniques that are valid for engine-like turbulence intensity conditions. Cooled blade configurations with AVH and other external cooling techniques were used in a performance comparison study. (Abstract shortened by ProQuest.)
From Data to Images: A Shape-Based Approach for Fluorescence Tomography
NASA Astrophysics Data System (ADS)
Dorn, O.; Prieto, K. E.
2012-12-01
Fluorescence tomography is treated as a shape reconstruction problem for a coupled system of two linear transport equations in 2D. The shape evolution is designed in order to minimize the least squares data misfit cost functional either in the excitation frequency or in the emission frequency. Furthermore, a level set technique is employed for numerically modelling the evolving shapes. Numerical results are presented which demonstrate the performance of this novel technique in the situation of noisy simulated data in 2D.
Evaluation of vibrated fluidized bed techniques in coating hemosorbents.
Morley, D B
1991-06-01
A coating technique employing a vibrated fluidized bed (VFB) was used to apply an ultrathin (2 microns) cellulose nitrate (CN) coating to synthetic bead activated charcoal. In vitro characteristics of the resulting coated sorbent, including permeability to model small and middle molecules, and mechanical integrity, were evaluated to determine the suitability of the process in coating granular sorbents used in hemoperfusion. Initial tests suggest the VFB-applied CN coating is both highly uniform and tightly adherent and warrants further investigation as a hemosorbent coating.
An advanced technique for the prediction of decelerator system dynamics.
NASA Technical Reports Server (NTRS)
Talay, T. A.; Morris, W. D.; Whitlock, C. H.
1973-01-01
An advanced two-body six-degree-of-freedom computer model employing an indeterminate structures approach has been developed for the parachute deployment process. The program determines both vehicular and decelerator responses to aerodynamic and physical property inputs. A better insight into the dynamic processes that occur during parachute deployment has been developed. The model is of value in sensitivity studies to isolate important parameters that affect the vehicular response.
A Hybrid Method for Opinion Finding Task (KUNLP at TREC 2008 Blog Track)
2008-11-01
…retrieve relevant documents. For the Opinion Retrieval subtask, we propose a hybrid model of a lexicon-based approach and a machine learning approach for estimating and ranking the opinionated documents. For the Polarized Opinion Retrieval subtask, we employ machine learning for predicting the polarity and a linear combination technique for ranking polar documents. The hybrid model utilizes both the lexicon-based approach and the machine learning approach.
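The linear combination at the heart of such a hybrid can be illustrated with a minimal sketch; the lexicon, weights, and mixing parameter below are illustrative assumptions, not the KUNLP system's actual resources.

```python
# Minimal sketch of a lexicon/machine-learning hybrid opinion ranker.
# The lexicon entries, weights, and alpha are illustrative assumptions.

OPINION_LEXICON = {"great": 1.0, "terrible": 1.0, "love": 0.8, "awful": 0.9}

def lexicon_score(tokens):
    """Length-normalized weight of opinion-bearing words in a document."""
    if not tokens:
        return 0.0
    return sum(OPINION_LEXICON.get(t, 0.0) for t in tokens) / len(tokens)

def hybrid_score(tokens, ml_probability, alpha=0.6):
    """Linear combination of lexicon evidence and a classifier's
    opinionatedness probability (alpha would be chosen on held-out data)."""
    return alpha * ml_probability + (1 - alpha) * lexicon_score(tokens)

docs = [
    (["i", "love", "this", "great", "phone"], 0.91),
    (["the", "manual", "lists", "specifications"], 0.12),
]
for tokens, p in sorted(docs, key=lambda d: hybrid_score(*d), reverse=True):
    print(f"{hybrid_score(tokens, p):.3f}", " ".join(tokens))
```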
Management Sciences Division Annual Report (10th)
1993-01-01
…of the Weapon System Management Information System (WSMIS). The Aircraft Sustainability Model (ASM) is the computational technique employed by… provisioning. We enhanced the capabilities of RBIRD by using the Aircraft Sustainability Model (ASM) for the spares calculation. ASM offers many… ASM for several years to compute spares for war. It is also fully compatible with the Air Force's peacetime spares computation system (D041). This…
ERIC Educational Resources Information Center
Davis, Heather A.; Chang, Mei-Lin; Andrzejewski, Carey E.; Poirier, Ryan R.
2014-01-01
The purpose of this study was to examine changes in students' relational engagement across the transition to high school in three schools reformed to improve the quality of student-teacher relationships. In order to analyze this data we employed latent growth curve (LGC) modeling techniques (n = 637). We ran three LGC models on three…
Note: Design of FPGA based system identification module with application to atomic force microscopy
NASA Astrophysics Data System (ADS)
Ghosal, Sayan; Pradhan, Sourav; Salapaka, Murti
2018-05-01
The science of system identification is widely utilized in modeling input-output relationships of diverse systems. In this article, we report a field programmable gate array (FPGA) based implementation of a real-time system identification algorithm which employs forgetting factors and bias compensation techniques. The FPGA module is employed to estimate the mechanical properties of surfaces of materials at the nano-scale with an atomic force microscope (AFM). The FPGA module is user friendly and can be interfaced with commercially available AFMs. Extensive simulation and experimental results validate the design.
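The forgetting-factor estimator at the core of such a module can be sketched as a standard recursive least squares update; the bias compensation stage is omitted here, and the first-order ARX test system is an illustrative assumption rather than the AFM model from the article.

```python
import numpy as np

# Recursive least squares with a forgetting factor lam < 1, which
# discounts old data so the estimate can track slow parameter changes.

def rls_forgetting(phi, y, lam=0.98, delta=1e3):
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)          # large initial covariance
    for k in range(len(y)):
        x = phi[k]
        Px = P @ x
        gain = Px / (lam + x @ Px)
        theta += gain * (y[k] - x @ theta)
        P = (P - np.outer(gain, Px)) / lam
    return theta

# Identify a first-order ARX model y[k] = a*y[k-1] + b*u[k-1] + noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()
phi = np.column_stack([y[:-1], u[:-1]])
print(rls_forgetting(phi, y[1:]))  # approximately [0.9, 0.5]
```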
Spatial analysis on future housing markets: economic development and housing implications.
Liu, Xin; Wang, Lizhe
2014-01-01
A coupled projection method combining formal modelling and other statistical techniques was developed to delineate the relationship between economic and social drivers for net new housing allocations. Using the example of employment growth in Tyne and Wear, UK, until 2016, the empirical analysis yields housing projections at the macro- and microspatial levels (e.g., region to subregion to elected ward levels). The results have important implications for the strategic planning of locations for housing and employment, demonstrating both intuitively and quantitatively how local economic developments affect housing demand.
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method as well as the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
Comparison of Conceptual and Neural Network Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Vidyarthi, V. K.; Jain, A.
2014-12-01
Rainfall-runoff (RR) models are a key component of any water resource application. There are two types of techniques usually employed for RR modeling: physics-based and data-driven techniques. Although the physics-based models have been used for operational purposes for a very long time, they provide only reasonable accuracy in modeling and forecasting. On the other hand, Artificial Neural Networks (ANNs) have been reported to provide superior modeling performance; however, they have not been accepted by practitioners, decision makers and water resources engineers as operational tools. ANNs, one of the data-driven techniques, became popular for efficient modeling of complex natural systems in the last couple of decades. In this paper, comparative results for conceptual and ANN models in RR modeling are presented. The conceptual models were developed using the rainfall-runoff library (RRL), and a genetic algorithm (GA) was used for their calibration. A feed-forward neural network structure trained by the Levenberg-Marquardt (LM) algorithm was adopted to develop all the ANN models. The daily rainfall, runoff and various climatic data derived from the Bird Creek basin, Oklahoma, USA were employed to develop all the models included here. Daily potential evapotranspiration (PET), which was used in conceptual model development, was calculated by the Penman equation. The input variables were selected on the basis of correlation analysis. Performance evaluation statistics such as the average absolute relative error (AARE), Pearson's correlation coefficient (R) and threshold statistics (TS) were used for assessing the performance of all the models developed here. The results obtained in this study show that the ANN models outperform the conventional conceptual models due to their ability to learn the non-linearity and complexity inherent in the data of the rainfall-runoff process in a more efficient manner. There is a strong need to carry out such studies to prove the superiority of ANN models over conventional methods in an attempt to make them acceptable to the water resources community responsible for the operation of water resources systems.
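For reference, the three performance statistics named above can be computed in a few lines; the observed/predicted values and threshold levels here are illustrative, not data from the Bird Creek study.

```python
import numpy as np

# AARE, Pearson's R, and threshold statistics for model evaluation.

def aare(obs, pred):
    """Average absolute relative error, in percent."""
    return 100.0 * np.mean(np.abs((obs - pred) / obs))

def pearson_r(obs, pred):
    return np.corrcoef(obs, pred)[0, 1]

def threshold_stats(obs, pred, level):
    """Percentage of forecasts whose absolute relative error is below `level` (%)."""
    are = 100.0 * np.abs((obs - pred) / obs)
    return 100.0 * np.mean(are < level)

obs = np.array([120.0, 80.0, 45.0, 200.0, 95.0])
pred = np.array([110.0, 85.0, 50.0, 210.0, 90.0])
print(f"AARE = {aare(obs, pred):.2f}%")
print(f"R    = {pearson_r(obs, pred):.3f}")
for lvl in (5, 10, 25):
    print(f"TS{lvl} = {threshold_stats(obs, pred, lvl):.1f}%")
```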
NASA Technical Reports Server (NTRS)
Sherif, S.A.; Hunt, P. L.; Holladay, J. B.; Lear, W. E.; Steadham, J. M.
1998-01-01
Jet pumps are devices capable of pumping fluids to a higher pressure by inducing the motion of a secondary fluid employing a high speed primary fluid. The main components of a jet pump are a primary nozzle, secondary fluid injectors, a mixing chamber, a throat, and a diffuser. The work described in this paper models the flow of a two-phase primary fluid inducing a secondary liquid (saturated or subcooled) injected into the jet pump mixing chamber. The model is capable of accounting for phase transformations due to compression, expansion, and mixing. The model is also capable of incorporating the effects of the temperature and pressure dependency in the analysis. The approach adopted utilizes an isentropic constant pressure mixing in the mixing chamber and at times employs iterative techniques to determine the flow conditions in the different parts of the jet pump.
Analysis of spreadable cheese by Raman spectroscopy and chemometric tools.
Oliveira, Kamila de Sá; Callegaro, Layce de Souza; Stephani, Rodrigo; Almeida, Mariana Ramos; de Oliveira, Luiz Fernando Cappa
2016-03-01
In this work, FT-Raman spectroscopy was explored to evaluate spreadable cheese samples. A partial least squares discriminant analysis was employed to identify the spreadable cheese samples containing starch. To build the models, two types of samples were used: commercial samples and samples manufactured in local industries. The method of supervised classification PLS-DA was employed to classify the samples as adulterated or without starch. Multivariate regression was performed using the partial least squares method to quantify the starch in the spreadable cheese. The limit of detection obtained for the model was 0.34% (w/w) and the limit of quantification was 1.14% (w/w). The reliability of the models was evaluated by determining the confidence interval, which was calculated using the bootstrap re-sampling technique. The results show that the classification models can be used to complement classical analysis and as screening methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
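A minimal PLS-DA sketch in the spirit of the screening model, assuming scikit-learn is available: binary class labels are regressed on spectra and the continuous prediction is thresholded. The synthetic spectra and the 0.5 cutoff are illustrative stand-ins for the FT-Raman data and the paper's model.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLS-DA: fit PLS regression on a 0/1 class label, then threshold.
rng = np.random.default_rng(1)
n, p = 40, 200
X = rng.standard_normal((n, p))          # synthetic "spectra"
y = np.repeat([0, 1], n // 2)            # 0 = without starch, 1 = adulterated
X[y == 1, 50:60] += 1.5                  # synthetic "starch band"

pls = PLSRegression(n_components=3)
pls.fit(X, y.astype(float))
y_hat = (pls.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", (y_hat == y).mean())
```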
Rapid analysis of pharmaceutical drugs using LIBS coupled with multivariate analysis.
Tiwari, P K; Awasthi, S; Kumar, R; Anand, R K; Rai, P K; Rai, A K
2018-02-01
Type 2 diabetes drug tablets of various brands containing voglibose at dose strengths of 0.2 and 0.3 mg have been examined using the laser-induced breakdown spectroscopy (LIBS) technique. Statistical methods such as principal component analysis (PCA) and partial least squares regression (PLSR) have been employed on the LIBS spectral data for classifying the drug samples and developing calibration models. We have developed a ratio-based calibration model applying PLSR, in which the relative spectral intensity ratios H/C, H/N and O/N are used. Further, the developed model has been employed to predict the relative concentration of elements in unknown drug samples. The experiment was performed in both air and argon atmospheres, and the results obtained have been compared. The present model provides a rapid spectroscopic method for drug analysis with high statistical significance for online control and measurement processes in a wide variety of pharmaceutical industrial applications.
Efficiency measurement of health care organizations: What models are used?
Jaafaripooyan, Ebrahim; Emamgholipour, Sara; Raei, Behzad
2017-01-01
Background: Literature abounds with various techniques for efficiency measurement of health care organizations (HCOs), which should be used cautiously and appropriately. The present study aimed at discovering the rules regulating the interplay among the number of inputs, outputs, and decision-making units (DMUs), identifying all methods used for the measurement of Iranian HCOs, and critically appraising all DEA studies on Iranian HCOs in their application of such rules. Methods: The present study employed a systematic search of all studies related to efficiency measurement of Iranian HCOs. A search was conducted in databases such as PubMed and Scopus between 2001 and 2015 to identify studies related to efficiency measurement in health care. The retrieved studies passed through a multi-stage (title, abstract, body) filtering process. A data extraction table for each study was completed, including the method, number of inputs and outputs, DMUs, and their efficiency scores. Results: Various methods were found for efficiency measurement. Overall, 122 studies were retrieved, of which 73 had exclusively employed the DEA technique for measuring the efficiency of HCOs in Iran, and 23 had used hybrid models (including DEA). Only 6 studies had explicitly used the rules of thumb. Conclusion: The number of inputs, outputs, and DMUs should be cautiously selected in DEA-like techniques, as their proportionality can directly affect the discriminatory power of the technique. The given literature seemed to be, to a large extent, unsuccessful in attending to such proportionality. This study collected a list of key rules (of thumb) on the interplay of inputs, outputs, and DMUs, which could be considered by researchers keen to apply the DEA technique.
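One widely cited rule of thumb of the kind collected here (attributed to Cooper and colleagues) requires the number of DMUs to be at least max(m*s, 3*(m+s)) for m inputs and s outputs; a quick check is sketched below, with the caveat that this is one convention among several.

```python
# Check a common DEA sample-size rule of thumb: n >= max(m*s, 3*(m+s)).
# The rule is one convention among several, not a universal requirement.

def dea_sample_size_ok(n_dmus, n_inputs, n_outputs):
    required = max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs))
    return n_dmus >= required, required

for n, m, s in [(20, 3, 2), (12, 4, 4)]:
    ok, need = dea_sample_size_ok(n, m, s)
    print(f"{n} DMUs, {m} inputs, {s} outputs -> need >= {need}: "
          f"{'OK' if ok else 'too few DMUs'}")
```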
Smith, Caitlin; Huey, Stanley J; McDaniel, Dawn D
2015-05-01
Research with substance-abusing samples suggests that eliciting commitment language during treatment may improve motivation to change, increase treatment engagement, and promote positive treatment outcomes. However, the relationship between in-session client language and treatment success is not well-understood for youth offender populations. This study evaluated the relationship between commitment language, treatment engagement (i.e., homework completion), and weekly employment outcomes for six gang-affiliated juvenile offenders participating in an employment counseling intervention. Weekly counseling sessions were audio-recorded, transcribed, and coded for commitment language strength. Multilevel models were fit to the data to examine the relationship between commitment language and counseling homework or employment outcomes within participants over time. Commitment language strength predicted subsequent homework completion but not weekly employment. These findings imply that gang-affiliated delinquent youth who express motivation to change during employment counseling will be more likely to comply with counselor-initiated homework. Further research on counselor techniques for promoting commitment language among juvenile gang offenders is needed. © The Author(s) 2013.
Modeling Self-Heating Effects in Nanoscale Devices
NASA Astrophysics Data System (ADS)
Raleva, K.; Shaik, A. R.; Vasileska, D.; Goodnick, S. M.
2017-08-01
Accurate thermal modeling and design of microelectronic devices and thin film structures at the micro- and nanoscales pose a challenge to electrical engineers who are less familiar with the basic concepts and ideas in sub-continuum heat transport. This book aims to bridge that gap. Efficient heat removal methods are necessary to increase device performance and device reliability. The authors provide readers with a combination of nanoscale experimental techniques and accurate modeling methods that must be employed in order to determine a device's temperature profile.
Identification of quasi-steady compressor characteristics from transient data
NASA Technical Reports Server (NTRS)
Nunes, K. B.; Rock, S. M.
1984-01-01
The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed. First was a lumped compressor rig model; second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also a specialty.
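A plain (no variance reduction) Monte Carlo sketch of the Weibull, non-constant-failure-rate idea: estimate the unreliability of a 2-out-of-3 system at a mission time. The system structure, Weibull parameters, and mission time are illustrative assumptions, not MC-HARP's models.

```python
import numpy as np

# Monte Carlo estimate of system unreliability with Weibull lifetimes.
rng = np.random.default_rng(42)
shape, scale = 1.5, 1000.0      # Weibull shape and scale (hours)
mission = 500.0                 # mission time (hours)
trials = 100_000

# Sample three component lifetimes per trial; rng.weibull gives scale-1
# samples, so multiply by the scale parameter.
lifetimes = scale * rng.weibull(shape, size=(trials, 3))
working = (lifetimes > mission).sum(axis=1)
unreliability = (working < 2).mean()   # system fails if fewer than 2 of 3 survive
print(f"estimated system unreliability at t={mission}: {unreliability:.4f}")
```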
NASA Astrophysics Data System (ADS)
Blanco, Francesco; La Rocca, Paola; Petta, Catia; Riggi, Francesco
2009-01-01
An educational model simulation of the sound produced by lightning in the sky has been employed to demonstrate realistic signatures of thunder and its connection to the particular structure of the lightning channel. Algorithms used in the past have been revisited and implemented, making use of current computer techniques. The basic properties of the mathematical model, together with typical results and suggestions for additional developments are discussed. The paper is intended as a teaching aid for students and teachers in the context of introductory physics courses at university level.
NASA Technical Reports Server (NTRS)
Munday, J. C., Jr.; Gordon, H. H.; Welch, C. S.; Williams, G.
1976-01-01
Projects for sewage outfall siting for pollution control in the lower Chesapeake Bay wetlands are reported. A dye-buoy/photogrammetry and remote sensing technique was employed to gather circulation data used in outfall siting. This technique is greatly favored over alternate methods because it is inexpensive, produces results quickly, and reveals Lagrangian current paths which are preferred in making siting decisions. Wetlands data were obtained by interpretation of color and color infrared photographic imagery from several altitudes. Historical sequences of photographs are shown that were used to document wetlands changes. Sequential infrared photography of inlet basins was employed to determine tidal prisms, which were input to mathematical models to be used by state agencies in pollution control. A direct and crucial link between remote sensing and management decisions was demonstrated in the various projects.
Real time wave forecasting using wind time history and numerical model
NASA Astrophysics Data System (ADS)
Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.
Operational activities in the ocean, like planning for structural repairs or fishing expeditions, require real time prediction of waves over typical time durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option to do so, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for stations where costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost effective and convenient option when site-specific information is desired.
NASA Astrophysics Data System (ADS)
Kuo, Chih-Hao
Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from detection of landmines to remote sensing of subsurface soil moisture. The formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first presented in this dissertation. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade reflection and transmission matrices of each rough interface and obtain the composite reflection matrix from the overall scattering medium. The validation of this method against the Method of Moments (MoM) and Small Perturbation Method (SPM) is addressed, and numerical results which investigate the potential of low frequency radar systems in estimating deep soil moisture are presented. Computational efficiency of the proposed method is also discussed. In order to demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results which show the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique which incorporates a buried object in layered rough surfaces by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation is proposed. Validation and numerical results are provided. Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF band radar measurements is devised. The top soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time delay echoes from VHF/UHF band radar observations. Numerical studies to investigate the accuracy of the proposed inversion technique in the presence of errors are addressed.
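The recursive cascading step of the SMM can be illustrated with a Redheffer-style star product that combines two interface operators into a composite reflection matrix; the small random matrices below stand in for the EBCM-derived reflection/transmission operators, and the notation (d for incidence from above, u for incidence from below) is an assumption of this sketch.

```python
import numpy as np

# Cascade two interfaces: composite reflection seen from above, including
# all multiple bounces between the lower side of interface 1 (R1u) and
# interface 2 (R2), summed via the matrix geometric series (I - R1u R2)^-1.

def cascade_reflection(R1d, T1d, R1u, T1u, R2):
    n = R1d.shape[0]
    bounce = np.linalg.inv(np.eye(n) - R1u @ R2)
    return R1d + T1u @ R2 @ bounce @ T1d

n = 4
rng = np.random.default_rng(3)
# Stand-ins for the EBCM reflection/transmission matrices (kept small in
# norm so the bounce series converges).
R1d, T1d, R1u, T1u, R2 = (0.2 * rng.standard_normal((n, n)) for _ in range(5))
R_composite = cascade_reflection(R1d, T1d, R1u, T1u, R2)
print(R_composite.shape)
```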
Linking the Pilot Structural Model and Pilot Workload
NASA Technical Reports Server (NTRS)
Bachelder, Edward; Hess, Ronald; Aponso, Bimal; Godfroy-Cooper, Martine
2018-01-01
Behavioral models are developed that closely reproduce the pulsive control response of two pilots using markedly different control techniques while conducting a tracking task. An intriguing finding was that the pilots appeared to: 1) produce a continuous, internally-generated stick signal that they integrated in time; 2) integrate the actual stick position; and 3) compare the two integrations to either issue or cease a pulse command. This suggests that the pilots utilized kinesthetic feedback in order to sense and integrate stick position, supporting the hypothesis that pilots can access and employ the proprioceptive inner feedback loop proposed by Hess's pilot Structural Model. A Pilot Cost Index was developed, whose elements include estimated workload, performance, and the degree to which the pilot employs kinesthetic feedback. Preliminary results suggest that a pilot's operating point (parameter values) may be based on control style and index minimization.
Sophocleous, M.A.; Koelliker, J.K.; Govindaraju, R.S.; Birdie, T.; Ramireddygari, S.R.; Perkins, S.P.
1999-01-01
The objective of this article is to develop and implement a comprehensive computer model that is capable of simulating the surface-water, ground-water, and stream-aquifer interactions on a continuous basis for the Rattlesnake Creek basin in south-central Kansas. The model is to be used as a tool for evaluating long-term water-management strategies. The agriculturally-based watershed model SWAT and the ground-water model MODFLOW with stream-aquifer interaction routines, suitably modified, were linked into a comprehensive basin model known as SWATMOD. The hydrologic response unit concept was implemented to overcome the quasi-lumped nature of SWAT and represent the heterogeneity within each subbasin of the basin model. A graphical user-interface and a decision support system were also developed to evaluate scenarios involving manipulation of water rights and agricultural land uses on stream-aquifer system response. An extensive sensitivity analysis on model parameters was conducted, and model limitations and parameter uncertainties were emphasized. A combination of trial-and-error and inverse modeling techniques was employed to calibrate the model against multiple calibration targets of measured ground-water levels, streamflows, and reported irrigation amounts. The split-sample technique was employed for corroborating the calibrated model. The model was run for a 40 y historical simulation period, and a 40 y prediction period. A number of hypothetical management scenarios involving reductions and variations in withdrawal rates and patterns were simulated. The SWATMOD model was developed as a hydrologically rational low-flow model for analyzing, in a user-friendly manner, the conditions in the basin when there is a shortage of water.
ERIC Educational Resources Information Center
Smith, Lindsey J. Wolff; Beretvas, S. Natasha
2017-01-01
Conventional multilevel modeling works well with purely hierarchical data; however, pure hierarchies rarely exist in real datasets. Applied researchers employ ad hoc procedures to create purely hierarchical data. For example, applied educational researchers either delete mobile participants' data from the analysis or identify the student only with…
ERIC Educational Resources Information Center
Mattern, Krista D.; Marini, Jessica P.; Shaw, Emily J.
2015-01-01
Throughout the college retention literature, there is a recurring theme that students leave college for a variety of reasons making retention a difficult phenomenon to model. In the current study, cluster analysis techniques were employed to investigate whether multiple empirically based profiles of nonreturning students existed to more fully…
ERIC Educational Resources Information Center
Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L.
2016-01-01
The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. Classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of constellation model of reading…
Chaos control applied to cardiac rhythms represented by ECG signals
NASA Astrophysics Data System (ADS)
Borem Ferreira, Bianca; Amorim Savi, Marcelo; Souza de Paula, Aline
2014-10-01
The control of irregular or chaotic heartbeats is a key issue in cardiology. In this regard, chaos control techniques represent a good alternative since they suggest treatments different from those traditionally used. This paper deals with the application of the extended time-delayed feedback control method to stabilize pathological chaotic heart rhythms. Electrocardiogram (ECG) signals are employed to represent the cardiovascular behavior. A mathematical model is employed to generate ECG signals using three modified Van der Pol oscillators connected with time delay couplings. This model provides results that qualitatively capture the general behavior of the heart. Controlled ECG signals show the ability of the strategy either to control or to suppress the chaotic heart dynamics generating less-critical behaviors.
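The extended time-delayed feedback law can be written recursively as u(t) = K[y(t - tau) - y(t)] + R u(t - tau). The sketch below applies it to a Rossler oscillator as a generic chaotic stand-in; the paper's coupled Van der Pol ECG model is not reproduced, and the gain, memory parameter, and delay are illustrative values that would need tuning to a targeted orbit.

```python
import numpy as np

# Structural sketch of extended time-delayed feedback (ETDF) control:
# u(t) = K*[y(t - tau) - y(t)] + R*u(t - tau), implemented with history
# buffers and explicit Euler integration of a Rossler system.

a, b, c = 0.2, 0.2, 5.7
K, R, tau = 0.4, 0.3, 5.9          # gain, memory weight, delay (illustrative)
dt, steps = 0.01, 60000
d = int(tau / dt)                  # delay expressed in samples

x = np.array([1.0, 1.0, 0.0])
y_hist = np.zeros(steps)           # past values of the controlled variable
u_hist = np.zeros(steps)           # past values of the control signal

for k in range(steps):
    u = 0.0
    if k >= d:                     # control starts once the buffer is full
        u = K * (y_hist[k - d] - x[1]) + R * u_hist[k - d]
    u_hist[k] = u
    y_hist[k] = x[1]
    dx = np.array([-x[1] - x[2],
                   x[0] + a * x[1] + u,   # control enters the y-equation
                   b + x[2] * (x[0] - c)])
    x = x + dt * dx                # explicit Euler; adequate for a sketch

print("mean |u| early:", np.abs(u_hist[d:2 * d]).mean())
print("mean |u| late :", np.abs(u_hist[-d:]).mean())
```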
Mass and momentum turbulent transport experiments with confined swirling coaxial jets
NASA Technical Reports Server (NTRS)
Roback, R.; Johnson, B. V.
1983-01-01
An experiment on the mixing of swirling coaxial jets discharging into an expanded duct was conducted to obtain data for the evaluation and improvement of turbulent transport models currently used in a variety of computational procedures throughout the combustion community. A combination of laser velocimeter (LV) and laser induced fluorescence (LIF) techniques was employed to obtain mean and fluctuating velocity and concentration distributions, which were used to derive the mass and momentum turbulent transport parameters currently incorporated into various combustor flow models. Flow visualization techniques were also employed to determine qualitatively the time dependent characteristics of the flow and the scale of turbulence. The results of these measurements indicated that the largest momentum turbulent transport was in the r-z plane. Peak momentum turbulent transport rates were approximately the same as those for the nonswirling flow condition. The mass turbulent transport process for swirling flow was complicated. Mixing occurred in several steps of axial and radial mass transport and was coupled with a large radial mean convective flux. Mixing for swirling flow was completed in one-third the length required for nonswirling flow.
NASA Astrophysics Data System (ADS)
Eckert, R.; Neyhart, J. T.; Burd, L.; Polikar, R.; Mandayam, S. A.; Tseng, M.
2003-03-01
Mammography is the best method available as a non-invasive technique for the early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions due to fat and radiodense (light) regions due to connective and epithelial tissue. The amount of radiodense tissue can be used as a marker for predicting breast cancer risk. Previously, we have shown that the use of statistical models is a reliable technique for segmenting radiodense tissue. This paper presents improvements in the model that allow for further development of an automated system for segmentation of radiodense tissue. The segmentation algorithm employs a two-step process. In the first step, segmentation of tissue and non-tissue regions of a digitized X-ray mammogram image are identified using a radial basis function neural network. The second step uses a constrained Neyman-Pearson algorithm, developed especially for this research work, to determine the amount of radiodense tissue. Results obtained using the algorithm have been validated by comparing with estimates provided by a radiologist employing previously established methods.
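The Neyman-Pearson element of such a scheme amounts to fixing the decision threshold from the score distribution of non-dense pixels so that the false-positive rate is capped at a chosen level; a sketch with synthetic score distributions (not mammogram data) follows.

```python
import numpy as np

# Neyman-Pearson style thresholding: cap the false-positive rate on the
# "null" class, then measure detection on the target class. Scores are
# synthetic stand-ins for per-pixel classifier outputs.

rng = np.random.default_rng(2)
scores_nondense = rng.normal(0.3, 0.1, 5000)   # radiolucent-pixel scores
scores_dense = rng.normal(0.7, 0.1, 2000)      # radiodense-pixel scores

alpha = 0.01                                   # allowed false-positive rate
threshold = np.quantile(scores_nondense, 1 - alpha)
detection = (scores_dense > threshold).mean()
print(f"threshold = {threshold:.3f}, detection rate = {detection:.2%}")
```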
Seal Analysis for the Ares-I Upper Stage Fuel Tank Manhole Cover
NASA Technical Reports Server (NTRS)
Phillips, Dawn R.; Wingate, Robert J.
2010-01-01
Techniques for studying the performance of Naflex pressure-assisted seals in the Ares-I Upper Stage liquid hydrogen tank manhole cover seal joint are explored. To assess the feasibility of using the identical seal design for the Upper Stage as was used for the Space Shuttle External Tank manhole covers, a preliminary seal deflection analysis using the ABAQUS commercial finite element software is employed. The ABAQUS analyses are performed using three-dimensional symmetric wedge finite element models. This analysis technique is validated by first modeling a heritage External Tank liquid hydrogen tank manhole cover joint and correlating the results to heritage test data. Once the technique is validated, the Upper Stage configuration is modeled. The Upper Stage analyses are performed at 1.4 times the expected pressure to comply with the Constellation Program factor of safety requirement on joint separation. Results from the analyses performed with the External Tank and Upper Stage models demonstrate the effects of several modeling assumptions on the seal deflection. The analyses for Upper Stage show that the integrity of the seal is successfully maintained.
2004-01-01
…login identity to the one under which the system call is executed, the parameters of the system call execution (file names including full path)… [fragment of a comparison table: COAST-EIMDT, distributed on target hosts, anomaly detection; EMERALD, distributed on target hosts and security servers, signature recognition and anomaly detection] …uses a centralized architecture, and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a…
Study of High Temperature Failure Mechanisms in Ceramics
1988-06-01
The major experimental techniques employed in the program are the use of small-angle neutron scattering to characterize cavity nucleation and growth… creep crack growth. Of particular interest are the development of a stochastic model of grain-boundary sliding and a micromechanical model that relates… RESEARCH OBJECTIVES: 1. Utilize small-angle neutron scattering to…
Scattering of Acoustic Energy from Rough Deep Ocean Seafloor: a Numerical Modeling Approach.
NASA Astrophysics Data System (ADS)
Robertsson, Johan Olof Anders
1995-01-01
The highly heterogeneous and anelastic nature of deep ocean seafloor results in complex reverberation as acoustic energy incident from the overlaying water column interacts and scatters from it. To gain a deeper understanding of the mechanisms causing the reverberation in sonar and seafloor scattering experiments, we have developed numerical simulation techniques that are capable of modeling the principal physical properties of complex seafloor structures. A new viscoelastic finite-difference technique for modeling anelastic wave propagation in 2-D and 3-D heterogeneous media, as well as a computationally optimally efficient method for quantifying the anelastic properties in terms of viscoelastic mechanics are presented. A method for reducing numerical dispersion using a Galerkin-wavelet formulation that enables large computational savings is also presented. The widely different regimes of wave propagation occurring in ocean acoustic problems motivate the use of hybrid simulation techniques. HARVEST (Hybrid Adaptive Regime Visco-Elastic Simulation Technique) combines solutions from Gaussian beams, viscoelastic finite-differences, and Kirchhoff extrapolation, to simulate large offset scattering problems. Several scattering hypotheses based on finite-difference simulations of short-range acoustic scattering from realistic seafloor models are presented. Anelastic sediments on the seafloor are found to have a significant impact on the backscattered field from low grazing angle scattering experiments. In addition, small perturbations in the sediment compressional velocity can also dramatically alter the backscattered field due to transitions between pre- and post-critical reflection regimes. The hybrid techniques are employed to simulate deep ocean acoustic reverberation data collected in the vicinity of the northern mid-Atlantic ridge. In general, the simulated data compare well to the real data. Noise partly due to side-lobes in the beam-pattern of the receiver array is the principal source of reverberation at lower levels. Overall, the employed seafloor models were found to model the real seafloor well. Inaccurately predicted events may partly be attributed to the intrinsic uncertainty in the stochastic seafloor models. For optimal comparison between real and HARVEST simulated data the experimental geometry should be chosen so that 3-D effects may be ignored, and to yield a cross-range resolution in the beam-formed acoustic data that is small relative to the lineation of the seafloor.
Jacobian projection reduced-order models for dynamic systems with contact nonlinearities
NASA Astrophysics Data System (ADS)
Gastaldi, Chiara; Zucca, Stefano; Epureanu, Bogdan I.
2018-02-01
In structural dynamics, the prediction of the response of systems with localized nonlinearities, such as friction dampers, is of particular interest. This task becomes especially cumbersome when high-resolution finite element models are used. While state-of-the-art techniques such as Craig-Bampton component mode synthesis are employed to generate reduced order models, the interface (nonlinear) degrees of freedom must still be solved in-full. For this reason, a new generation of specialized techniques capable of reducing linear and nonlinear degrees of freedom alike is emerging. This paper proposes a new technique that exploits spatial correlations in the dynamics to compute a reduction basis. The basis is composed of a set of vectors obtained using the Jacobian of partial derivatives of the contact forces with respect to nodal displacements. These basis vectors correspond to specifically chosen boundary conditions at the contacts over one cycle of vibration. The technique is shown to be effective in the reduction of several models studied using multiple harmonics with a coupled static solution. In addition, this paper addresses another challenge common to all reduction techniques: it presents and validates a novel a posteriori error estimate capable of evaluating the quality of the reduced-order solution without involving a comparison with the full-order solution.
Recent progress in the NDE of cast ship propulsion components
NASA Astrophysics Data System (ADS)
Spies, Martin; Rieder, Hans; Dillhöfer, Alexander; Rauhut, Markus; Taeubner, Kai; Kreier, Peter
2014-02-01
The failure of propulsion components of ships and ferries can lead to serious environmental and economic damage or even the loss of lives. For ultrasonic inspection of such large components we employ mechanized scanning and defect reconstruction using the Synthetic Aperture Focusing Technique (SAFT). We report on results obtained in view of the detection of defects with different inspection techniques. Also, we address the issue of Probability of Detection by reporting results obtained in POD and MAPOD-studies (Model-Assisted POD) using experimental and simulated data. Finally, we show recent results of surface and sub-surface inspection using optical and eddy current techniques.
Masonry structures built with fictile tubules: Experimental and numerical analyses
NASA Astrophysics Data System (ADS)
Tiberti, Simone; Scuro, Carmelo; Codispoti, Rosamaria; Olivito, Renato S.; Milani, Gabriele
2017-11-01
Masonry construction with fictile tubules was a distinctive building technique of the Mediterranean area. The technique dates back to Roman and early Christian times, when it was used to build vaulted constructions and domes with various geometrical forms by virtue of their modular structure. In the present work, experimental tests were carried out to identify the mechanical properties of hollow clay fictile tubules and a possible reinforcing technique for existing buildings employing such elements. The experimental results were then validated by devising and analyzing numerical models with the FE software Abaqus, also aimed at investigating the structural behavior of an arch via linear and nonlinear static analyses.
A Comparison of Techniques for Determining Mass Outflow Rates in the Type 2 Quasar Markarian 34
NASA Astrophysics Data System (ADS)
Revalski, Mitchell; Crenshaw, D. Michael; Fischer, Travis C.; Kraemer, Steven B.; Schmitt, Henrique R.; Dashtamirova, Dzhuliya; Pope, Crystal L.
2018-06-01
We present spatially resolved measurements of the mass outflow rates and energetics for the Narrow Line Region (NLR) outflows in the type 2 quasar Markarian 34. Using data from the Hubble Space Telescope and Apache Point Observatory, together with Cloudy photoionization models, we calculate the radial mass distribution of ionized gas and map its kinematics. We compare the results of this technique to global outflow rates that characterize NLR outflows with a single outflow rate and energetic measurement. We find that NLR mass estimates based on emission line luminosities produce more consistent results than techniques employing filling factors.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier Transform (FT), the methods select the significant decomposed signals that will be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension (SEMD), which extends the scope of empirical mode decomposition by smoothing. To show the performance of the proposed techniques, we analyze the daily closing prices of the Kuala Lumpur stock market index.
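Conceptually, the hybrid forecasts each decomposed component with exponential smoothing and sums the results. In the sketch below, a hand-made two-component "decomposition" stands in for real EMD/SEMD output (which would come from a package such as PyEMD), and the smoothing constants are illustrative.

```python
import numpy as np

# Forecast each component with Holt's double exponential smoothing,
# then recombine. Real IMFs from EMD/SEMD are replaced by synthetic
# slow and fast components so the sketch stays self-contained.

def holt_forecast(y, alpha=0.5, beta=0.3, horizon=5):
    """Holt's linear method: track level and trend, extrapolate ahead."""
    level, trend = y[0], y[1] - y[0]
    for v in y[1:]:
        prev = level
        level = alpha * v + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend * np.arange(1, horizon + 1)

t = np.arange(200)
slow = 0.05 * t                     # stand-in for a low-frequency IMF/residue
fast = np.sin(2 * np.pi * t / 20)   # stand-in for a high-frequency IMF

forecast = holt_forecast(slow) + holt_forecast(fast)
print(np.round(forecast, 3))
```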
Analysis of explicit model predictive control for path-following control.
Lee, Junho; Chang, Hyuk-Jun
2018-01-01
In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in the optimization problem and the range of horizons for path-following control are described through simulations. For verification of the proposed controller, simulation results obtained using other control methods such as MPC, a Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
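The offline flavor of explicit MPC is easiest to see in the unconstrained case, where the optimal horizon-N input sequence is a fixed linear map of the current state and the gain can be precomputed; with constraints, mp-QP instead yields one affine law per polyhedral region of the state space. The double-integrator model and weights below are illustrative, not the paper's vehicle model.

```python
import numpy as np

# Unconstrained finite-horizon MPC via the batch formulation:
# X = T x0 + S U, J = X'Qbar X + U'Rbar U, so the minimizer is
# U* = -(S'Qbar S + Rbar)^-1 S'Qbar T x0, a gain computable offline.

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.005], [0.1]])
N, nx, nu = 10, 2, 1
Q = np.diag([10.0, 1.0])                 # state weights
Rw = 0.1 * np.eye(nu)                    # input weight

T = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
S = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        S[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), Rw)
K = -np.linalg.solve(S.T @ Qbar @ S + Rbar, S.T @ Qbar @ T)  # offline gain

x = np.array([1.0, 0.0])      # e.g., 1 m lateral offset, zero rate
u0 = (K @ x)[:nu]             # first move of the optimal input sequence
print("first control move:", u0)
```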
Off-the-job training for VATS employing anatomically correct lung models.
Obuchi, Toshiro; Imakiire, Takayuki; Miyahara, Sou; Nakashima, Hiroyasu; Hamanaka, Wakako; Yanagisawa, Jun; Hamatake, Daisuke; Shiraishi, Takeshi; Moriyama, Shigeharu; Iwasaki, Akinori
2012-02-01
We evaluated our simulated major lung resection employing anatomically correct lung models as "off-the-job training" for video-assisted thoracic surgery trainees. A total of 76 surgeons voluntarily participated in our study. They performed video-assisted thoracic surgical lobectomy employing anatomically correct lung models, which are made of sponges so that vessels and bronchi can be cut using usual surgical techniques with typical forceps. After the simulation surgery, participants answered questionnaires on a visual analogue scale, in terms of their level of interest and the reality of our training method as off-the-job training for trainees. We considered that the closer a score was to 10, the more useful our method would be for training new surgeons. Regarding the appeal or level of interest in this simulation surgery, the mean score was 8.3 of 10, and regarding reality, it was 7.0. The participants could feel some of the real sensations of the surgery and seemed to be satisfied to perform the simulation lobectomy. Our training method is considered to be suitable as an appropriate type of surgical off-the-job training.
NASA Astrophysics Data System (ADS)
Ito, Reika; Yoshidome, Takashi
2018-01-01
Markov state models (MSMs) are a powerful approach for analyzing the long-time behaviors of protein motion using molecular dynamics simulation data. However, their quantitative performance with respect to the physical quantities is poor. We believe that this poor performance is caused by the failure to appropriately classify protein conformations into states when constructing MSMs. Herein, we show that the quantitative performance of an order parameter is improved when a manifold-learning technique is employed for the classification in the MSM. The MSM construction using the K-center method, which has been previously used for classification, has a poor quantitative performance.
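The MSM construction step itself is compact once conformations have been assigned to states: count transitions at a lag time, row-normalize, and read slow relaxation timescales off the eigenvalues. The toy 1-D binning below stands in for the K-center or manifold-learning classification discussed above.

```python
import numpy as np

# Build an MSM transition matrix from a discretized trajectory and
# compute implied timescales t_i = -lag / ln|lambda_i|.

def msm_transition_matrix(labels, n_states, lag):
    C = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        C[a, b] += 1.0
    C += 1e-10                                   # guard against empty rows
    return C / C.sum(axis=1, keepdims=True)

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(20000)) % 10   # toy trajectory on [0, 10)
labels = np.digitize(x, bins=np.linspace(0, 10, 5)[1:-1])  # 4 states
lag = 50

P = msm_transition_matrix(labels, n_states=4, lag=lag)
eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
timescales = -lag / np.log(eig[1:])              # implied timescales (steps)
print(np.round(timescales, 1))
```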
NASA Technical Reports Server (NTRS)
Venable, D. D.
1983-01-01
A semi-analytic Monte Carlo simulation methodology (SALMON) was discussed. This simulation technique is particularly well suited for addressing fundamental radiative transfer problems in oceanographic LIDAR (optical radar), and also provides a framework for investigating the effects of environmental factors on LIDAR system performance. The simulation model was extended for airborne laser fluorosensors to allow for inhomogeneities in the vertical distribution of constituents in clear sea water. Results of the simulations for linearly varying step concentrations of chlorophyll are presented. The SALMON technique was also employed to determine how the LIDAR signals from an inhomogeneous media differ from those from homogeneous media.
Skin Friction and Transition Location Measurement on Supersonic Transport Models
NASA Technical Reports Server (NTRS)
Kennelly, Robert A., Jr.; Goodsell, Aga M.; Olsen, Lawrence E. (Technical Monitor)
2000-01-01
Flow visualization techniques were used to obtain both qualitative and quantitative skin friction and transition location data in wind tunnel tests performed on two supersonic transport models at Mach 2.40. Oil-film interferometry was useful for verifying boundary layer transition, but careful monitoring of model surface temperatures and systematic examination of the effects of tunnel start-up and shutdown transients will be required to achieve high levels of accuracy for skin friction measurements. A more common technique, use of a subliming solid to reveal transition location, was employed to correct drag measurements to a standard condition of all-turbulent flow on the wing. These corrected data were then analyzed to determine the additional correction required to account for the effect of the boundary layer trip devices.
Soft computing methods for geoidal height transformation
NASA Astrophysics Data System (ADS)
Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.
2009-07-01
Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
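The conventional polynomial baseline mentioned above can be sketched as an ordinary least squares fit of a second-order bivariate surface to control points; the coordinates and geoid heights below are synthetic stand-ins for GPS/levelling benchmarks.

```python
import numpy as np

# Fit N(phi, lam) with a second-order bivariate polynomial by least squares.
rng = np.random.default_rng(5)
phi = rng.uniform(40.8, 41.2, 30)      # latitude (deg), synthetic benchmarks
lam = rng.uniform(28.8, 29.3, 30)      # longitude (deg)
N_true = 36.0 + 0.8*(phi - 41) - 1.2*(lam - 29) + 0.3*(phi - 41)*(lam - 29)
N_obs = N_true + 0.01 * rng.standard_normal(30)  # add 1 cm noise

def design(phi, lam):
    # Center coordinates to keep the normal equations well conditioned.
    p, l = phi - phi.mean(), lam - lam.mean()
    return np.column_stack([np.ones_like(p), p, l, p*l, p**2, l**2])

coef, *_ = np.linalg.lstsq(design(phi, lam), N_obs, rcond=None)
resid = N_obs - design(phi, lam) @ coef
print("RMS residual (m):", np.sqrt(np.mean(resid**2)))
```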
Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Mubin, Marizan; Saad, Ismail
2016-01-01
In existing research on electroencephalogram (EEG) signal peak classification, the existing models, such as the Dumpala, Acir, Liu, and Dingle peak models, employ different sets of features. However, all these models may not be able to offer good performance for various applications, as performance is found to be problem dependent. Therefore, the objective of this study is to combine all the associated features from the existing models before selecting the best combination of features. A new optimization algorithm, namely the angle-modulated simulated Kalman filter (AMSKF), is employed as the feature selector. Also, the neural network random weight method is utilized in the proposed AMSKF technique as the classifier. In the conducted experiment, 11,781 peak candidate samples are employed for validation purposes. The samples were collected from three different peak event-related EEG signals of 30 healthy subjects: (1) single eye blink, (2) double eye blink, and (3) eye movement signals. The experimental results have shown that the proposed AMSKF feature selector is able to find the best combination of features and performs on par with the existing related studies of epileptic EEG event classification.
NASA Astrophysics Data System (ADS)
Geszke-Moritz, Małgorzata; Moritz, Michał
2016-04-01
Four mesoporous siliceous materials, SBA-16, SBA-15, PHTS and MCF, functionalized with (3-aminopropyl)triethoxysilane were successfully prepared and applied as carriers for the poorly water-soluble drug diflunisal. Several techniques including nitrogen sorption analysis, XRD, TEM, FTIR and thermogravimetric analysis were employed to characterize the mesoporous matrices. Adsorption isotherms were analyzed using the Langmuir, Freundlich, Temkin and Dubinin-Radushkevich models. In order to find the best-fit isotherm for each model, both linear and nonlinear regressions were carried out. The equilibrium data were best fitted by the Langmuir isotherm model, revealing a maximum adsorption capacity of 217.4 mg/g for aminopropyl group-modified SBA-15. The negative values of the Gibbs free energy change indicated that the adsorption of diflunisal is a spontaneous process. The Weibull release model was employed to describe the dissolution profile of diflunisal. At pH 4.5 all prepared mesoporous matrices exhibited improved drug dissolution kinetics as compared to the dissolution rate of pure diflunisal.
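The nonlinear Langmuir regression can be reproduced with a standard curve fit of q = qm*K*C/(1 + K*C); the concentration/loading pairs below are synthetic, not the diflunisal measurements, and the separation-factor check at the end is a common companion diagnostic rather than something reported in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Nonlinear fit of the Langmuir isotherm to synthetic adsorption data.
def langmuir(C, qm, K):
    return qm * K * C / (1.0 + K * C)

C = np.array([5.0, 10, 20, 40, 80, 160])      # equilibrium conc. (mg/L)
q = np.array([35.0, 62, 98, 140, 172, 195])   # loading (mg/g), synthetic

(qm, K), _ = curve_fit(langmuir, C, q, p0=(200.0, 0.01))
print(f"qm = {qm:.1f} mg/g, K = {K:.4f} L/mg")

# Separation factor RL = 1/(1 + K*C0); 0 < RL < 1 indicates favorable adsorption.
print("RL at C0 = 160 mg/L:", 1.0 / (1.0 + K * 160))
```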
A novel application of artificial neural network for wind speed estimation
NASA Astrophysics Data System (ADS)
Fang, Da; Wang, Jianzhou
2017-05-01
Providing accurate multi-step wind speed estimation models has increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANNs). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach employs one of the ANNs to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is demonstrated in terms of the prediction improvements of the hybrid models compared with a single ANN and a typical forecasting method. To give sufficient cases for the study, four observation sites with monthly average wind speeds over four given years in Western China were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.
Monthly monsoon rainfall forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Ganti, Ravikumar
2014-10-01
The Indian agriculture sector heavily depends on monsoon rainfall for successful harvesting. In the past, prediction of rainfall was mainly performed using regression models, which provide reasonable accuracy in the modelling and forecasting of complex physical systems. Recently, Artificial Neural Networks (ANNs) have been proposed as efficient tools for modelling and forecasting. A feed-forward multi-layer perceptron type of ANN architecture trained using the popular back-propagation algorithm was employed in this study. Other techniques investigated for modelling monthly monsoon rainfall include linear and non-linear regression models, for comparison purposes. The data employed in this study include monthly rainfall and the monthly average of the daily maximum temperature in the North Central region of India. Specifically, four regression models and two ANN models were developed. The performance of the various models was evaluated using a wide variety of standard statistical parameters and scatter plots. The results obtained in this study for forecasting monsoon rainfall using ANNs have been encouraging. India's economy and agricultural activities can be effectively managed with the help of accurate monsoon rainfall forecasts.
Biglino, Giovanni; Giardini, Alessandro; Hsia, Tain-Yen; Figliola, Richard; Taylor, Andrew M.; Schievano, Silvia
2013-01-01
First stage palliation of hypoplastic left heart syndrome, i.e., the Norwood operation, results in a complex physiological arrangement, involving different shunting options (modified Blalock-Taussig, RV-PA conduit, central shunt from the ascending aorta) and enlargement of the hypoplastic ascending aorta. Engineering techniques, both computational and experimental, can aid in the understanding of the Norwood physiology, and their correct implementation can potentially refine the decision-making process by means of patient-specific simulations. This paper presents some of the available tools that can corroborate clinical evidence by providing detailed insight into the fluid dynamics of the Norwood circulation as well as alternative surgical scenarios (i.e., virtual surgery). Patient-specific anatomies can be manufactured by means of rapid prototyping, and such models can be inserted in experimental set-ups (mock circulatory loops) that provide a valuable source of validation data as well as hydrodynamic information. Such models can be tuned to reproduce differing patient physiologies. Experimental set-ups can also be made compatible with visualization techniques, like particle image velocimetry and cardiovascular magnetic resonance, further adding to the knowledge of the local fluid dynamics. Multi-scale computational models include detailed three-dimensional (3D) anatomical information coupled to a lumped parameter network representing the remainder of the circulation. These models output overall hemodynamic parameters while also enabling investigation of the local fluid dynamics of the aortic arch or the shunt. As an alternative, pure lumped parameter models can also be employed to model Stage 1 palliation, taking advantage of a much lower computational cost, albeit missing the 3D anatomical component. Finally, analytical techniques, such as wave intensity analysis, can be employed to study the Norwood physiology, providing a mechanistic perspective on the ventriculo-arterial coupling for this specific surgical scenario. PMID:24400277
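To give a flavor of the lumped-parameter approach mentioned above, the sketch below integrates a three-element Windkessel (the simplest circuit analogue of an arterial circulation). All parameter values are illustrative round numbers, not patient-derived or taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-element Windkessel (illustrative values, not patient-derived):
# proximal resistance Rp, arterial compliance C, distal resistance Rd.
Rp, Rd, C = 0.05, 4.0, 0.2    # mmHg·s/mL, mmHg·s/mL, mL/mmHg
HR = 2.0                      # heart rate, beats per second (infant)

def q_in(t):
    """Half-sine ejection over the first 40% of each beat (mL/s)."""
    phase = (t * HR) % 1.0
    return 60.0 * np.sin(np.pi * phase / 0.4) if phase < 0.4 else 0.0

def dp_dt(t, p):
    # Pressure p across the compliance; venous pressure taken as zero.
    return [(q_in(t) - p[0] / Rd) / C]

sol = solve_ivp(dp_dt, [0.0, 10.0], [50.0], max_step=1e-3)
p_ao = sol.y[0] + Rp * np.array([q_in(t) for t in sol.t])  # aortic pressure
late = sol.t > 5.0                                         # skip the transient
print(f"systolic/diastolic ≈ {p_ao[late].max():.0f}/{p_ao[late].min():.0f} mmHg")
```

In multi-scale models of the Norwood circulation, a network of such compartments replaces the single RC block, and the 3D domain is coupled at its inlets and outlets.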
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of what is soluble by traditional ALE methods, by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
Problem based learning with scaffolding technique on geometry
NASA Astrophysics Data System (ADS)
Bayuningsih, A. S.; Usodo, B.; Subanti, S.
2018-05-01
Geometry, as one of the branches of mathematics, has an important role in the study of mathematics. This research aims to explore the effectiveness of Problem Based Learning (PBL) with the scaffolding technique, viewed from self-regulated learning, on students' mathematics learning achievement. The research data were obtained through a mathematics learning achievement test and a self-regulated learning (SRL) questionnaire. This research employed a quasi-experimental design. The subjects were junior high school students in Banyumas, Central Java. The results showed that the problem-based learning model with the scaffolding technique is more effective than direct learning (DL) in fostering students' mathematics learning achievement, because in the PBL model students are more able to think actively and creatively. Students in the high SRL category had better mathematics learning achievement than those in the middle and low SRL categories, and the middle SRL category performed better than the low category. Thus, there is an interaction between the learning model and self-regulated learning in increasing mathematics learning achievement.
Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang
2017-12-12
Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment, compared to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
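A toy version of such a virtual sensor can be written in a few lines of PyTorch. The architecture loosely follows the four-layer description (two convolutional layers, one fully connected layer, an output layer), but the data are synthetic and the training target is a placeholder, so this is a structural sketch rather than the authors' network.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical data: 8 measured channels -> 1 unmeasured channel,
# windows of 64 time steps (synthetic stand-in for structural responses)
X = torch.randn(256, 8, 64)            # (samples, channels, time)
y = X.mean(dim=1, keepdim=True)        # toy target correlated with inputs

# Two conv layers, one fully connected layer, and an output mapping
model = nn.Sequential(
    nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 64, 64),             # predicted response time history
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X).unsqueeze(1), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```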
Disease modeling in genetic kidney diseases: zebrafish.
Schenk, Heiko; Müller-Deile, Janina; Kinast, Mark; Schiffer, Mario
2017-07-01
Growing numbers of translational genomics studies are based on the highly efficient and versatile zebrafish (Danio rerio) vertebrate model. The increasing variety of zebrafish models has improved our understanding of inherited kidney diseases, since these models not only display pathophysiological changes but also give us the opportunity to develop and test novel treatment options in a high-throughput manner. New paradigms in inherited kidney diseases have been developed on the basis of the distinct genome conservation of approximately 70% between zebrafish and humans in terms of existing gene orthologs. Several options are available to determine the functional role of a specific gene or gene sets. Permanent genome editing can be induced via complete gene knockout using the CRISPR/Cas system, among others, or via transient modification using various morpholino techniques. Cross-species rescue experiments following knockdown are employed to determine the functional significance of a target gene or a specific mutation. This article summarizes the current techniques and discusses their perspectives.
Goya Jorge, Elizabeth; Rayar, Anita Maria; Barigye, Stephen J; Jorge Rodríguez, María Elisa; Sylla-Iyarreta Veitía, Maité
2016-06-07
A quantitative structure-activity relationship (QSAR) study of the 2,2-diphenyl-1-picrylhydrazyl (DPPH•) radical scavenging ability of 1373 chemical compounds, using DRAGON molecular descriptors (MD) and a neural network technique based on the multilayer perceptron (MLP), was developed. The built model demonstrated a satisfactory performance for the training set (R² = 0.713) and the test set (Q²ext = 0.654), respectively. To gain greater insight into the relevance of the MD contained in the MLP model, sensitivity and principal component analyses were performed. Moreover, structural and mechanistic interpretation was carried out to comprehend the relationship of the variables in the model with the modeled property. The constructed MLP model was employed to predict the radical scavenging ability of a group of coumarin-type compounds. Finally, in order to validate the model's predictions, an in vitro assay for one of the compounds (4-hydroxycoumarin) was performed, showing a satisfactory proximity between the experimental and predicted pIC50 values.
New Kinematical Constraints on Cosmic Acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapetti, David; Allen, Steve W.; Amin, Mustafa A.
2007-05-25
We present and employ a new kinematical approach to "dark energy" studies. We construct models in terms of the dimensionless second and third derivatives of the scale factor a(t) with respect to cosmic time t, namely the present-day value of the deceleration parameter q₀ and the cosmic jerk parameter, j(t). An elegant feature of this parameterization is that all ΛCDM models have j(t) = 1 (constant), which facilitates simple tests for departures from the ΛCDM paradigm. Applying our model to redshift-independent distance measurements, from type Ia supernovae and X-ray cluster gas mass fraction measurements, we obtain clear statistical evidence for a late-time transition from a decelerating to an accelerating phase. For a flat model with constant jerk, j(t) = j, we measure q₀ = -0.81 ± 0.14 and j = 2.16 (+0.81, -0.75), results that are consistent with ΛCDM at about the 1σ confidence level. In comparison to dynamical analyses, the kinematical approach uses a different model set and employs a minimum of prior information, being independent of any particular gravity theory. The results obtained with this new approach therefore provide important additional information, and we argue that both kinematical and dynamical techniques should be employed in future dark energy studies, where possible.
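For reference, the kinematical quantities involved are conventionally defined as follows (standard definitions, stated here for completeness rather than quoted from the paper):

```latex
H(t) \equiv \frac{\dot{a}}{a}, \qquad
q(t) \equiv -\frac{1}{H^{2}}\,\frac{\ddot{a}}{a}, \qquad
j(t) \equiv \frac{1}{H^{3}}\,\frac{\dddot{a}}{a}
```

With these conventions, any ΛCDM expansion history has j(t) = 1 identically, so a measured departure of j from unity signals a departure from ΛCDM.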
Evaluation of flash-flood discharge forecasts in complex terrain using precipitation
Yates, D.; Warner, T.T.; Brandes, E.A.; Leavesley, G.H.; Sun, Jielun; Mueller, C.K.
2001-01-01
Operational prediction of flash floods produced by thunderstorm (convective) precipitation in mountainous areas requires accurate estimates or predictions of the precipitation distribution in space and time. The details of the spatial distribution are especially critical in complex terrain because the watersheds are generally small in size, and small position errors in the forecast or observed placement of the precipitation can distribute the rain over the wrong watershed. In addition to the need for good precipitation estimates and predictions, accurate flood prediction requires a surface-hydrologic model that is capable of predicting stream or river discharge based on the precipitation-rate input data. Different techniques for the estimation and prediction of convective precipitation are applied to the Buffalo Creek, Colorado flash flood of July 1996, in which over 75 mm of rain from a thunderstorm fell on the watershed in less than 1 h. The hydrologic impact of the precipitation was exacerbated by the fact that a significant fraction of the watershed had experienced a wildfire approximately two months prior to the rain event. Precipitation estimates from the National Weather Service's operational Weather Surveillance Radar-1988 Doppler (WSR-88D) and the National Center for Atmospheric Research S-band, research, dual-polarization radar, colocated to the east of Denver, are compared. In addition, very short range forecasts from a convection-resolving dynamic model, which is initialized variationally using the radar reflectivity and Doppler winds, are compared with forecasts from an automated-algorithmic forecast system that also employs the radar data. The radar estimates of rain rate, and the two forecasting systems that employ the radar data, have degraded accuracy by virtue of the fact that they are applied in complex terrain. Nevertheless, the radar data and forecasts from the dynamic model and the automated algorithm could be operationally useful as input to surface-hydrologic models employed for flood warning. Precipitation data provided by these various techniques at short time scales and fine spatial resolutions are employed as detailed input to a distributed-parameter hydrologic model for flash-flood prediction and analysis. With the radar-based precipitation estimates employed as input, the simulated flood discharge was similar to that observed. The dynamic-model precipitation forecast showed the most promise in providing a significant discharge-forecast lead time. The algorithmic system's precipitation forecast did not demonstrate as much skill, but the associated discharge forecast would still have been sufficient to provide an alert of impending flood danger.
Annual Tree Growth Predictions From Periodic Measurements
Quang V. Cao
2004-01-01
Data from annual measurements of a loblolly pine (Pinus taeda L.) plantation were available for this study. Regression techniques were employed to model annual changes of individual trees in terms of diameters, heights, and survival probabilities. Subsets of the data that include measurements every 2, 3, 4, 5, and 6 years were used to fit the same...
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
Computer program for calculating power-density spectrum (PDS) from data base generated by Advanced Continuous Simulation Language (ACSL) uses algorithm that employs fast Fourier transform (FFT) to calculate PDS of variable. Accomplished by first estimating autocovariance function of variable and then taking FFT of smoothed autocovariance function to obtain PDS. Fast-Fourier-transform technique conserves computer resources.
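The autocovariance-then-FFT route described here is the classical Blackman-Tukey estimator. A minimal numpy sketch follows, using a synthetic test signal rather than ACSL output; the scaling is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                                   # sample rate (Hz)
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)  # 5 Hz tone + noise

# Biased autocovariance estimate up to max_lag
x = x - x.mean()
max_lag = 256
acov = np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(max_lag)])

# Smooth (window) the autocovariance, then FFT the symmetric extension
window = np.hanning(2 * max_lag)[max_lag:]   # half-window for lags >= 0
acov_s = acov * window
sym = np.concatenate([acov_s, acov_s[-2:0:-1]])
psd = 2 * np.real(np.fft.rfft(sym)) / fs     # one-sided, scale illustrative
freqs = np.fft.rfftfreq(sym.size, d=1 / fs)
print(f"peak at {freqs[np.argmax(psd)]:.1f} Hz")   # expected near 5 Hz
```

Windowing the autocovariance before transforming is what keeps the spectral estimate smooth, at the cost of frequency resolution.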
An Evaluation of the Synergistic Simulation of the Federal Open Market Committee.
ERIC Educational Resources Information Center
Bartlett, Robin Lynn; Amsler, Christine E.
The Federal Open Market Committee (FOMC) simulation employed three techniques: case study, role playing, and model building, in order to acquaint college students studying money and banking with the creation of monetary policy. The specific goals of the FOMC simulation were: (1) to familiarize students with the data used in monetary policy…
Is Pre-K Classroom Quality Associated with Kindergarten and Middle-School Academic Skills?
ERIC Educational Resources Information Center
Anderson, Sara; Phillips, Deborah
2017-01-01
We employed data from a longitudinal investigation of over 1,000 children who participated in Tulsa's universal school-based pre-K program in 2005, and path modeling techniques, to examine the contribution of pre-K classroom quality to both kindergarten- and middle-school academic skills. We also examined gender and income-related differences in…
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
Numeric function generators (NFGs) allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise approximations of functions and can be implemented on Field-Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods.
Photoacoustic and luminescence spectroscopy of benzil crystals
NASA Astrophysics Data System (ADS)
Bonno, B.; Laporte, J. L.; Rousset, Y.
1991-06-01
In the present work, both photoacoustic and luminescence techniques were employed to study molecular crystals. This paper presents an extension of the standard Rosencwaig-Gersho photoacoustic model to molecular crystals, which includes finite-deexcitation-time effects and excited-state populations. In the temperature range 100-300 K, the phosphorescence quantum yield and thermal diffusivity of benzil crystals were determined.
Second- and Higher-Order Virial Coefficients Derived from Equations of State for Real Gases
ERIC Educational Resources Information Center
Parkinson, William A.
2009-01-01
Derivation of the second- and higher-order virial coefficients for models of the gaseous state is demonstrated by employing a direct differential method and subsequent term-by-term comparison to power series expansions. This communication demonstrates the application of this technique to van der Waals representations of virial coefficients.…
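As a concrete instance of the technique (a standard textbook result, not quoted from the article), expanding the van der Waals equation in powers of 1/V_m and comparing term by term with the virial series gives the coefficients directly:

```latex
P = \frac{RT}{V_m - b} - \frac{a}{V_m^{2}}
\quad\Longrightarrow\quad
Z \equiv \frac{P V_m}{RT}
  = \frac{1}{1 - b/V_m} - \frac{a}{RT\,V_m}
  = 1 + \Bigl(b - \frac{a}{RT}\Bigr)\frac{1}{V_m}
      + \frac{b^{2}}{V_m^{2}} + \frac{b^{3}}{V_m^{3}} + \cdots
```

Term-by-term comparison with Z = 1 + B(T)/V_m + C(T)/V_m² + ... yields B(T) = b − a/(RT) and C(T) = b².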
Machine Learning and Inverse Problem in Geodynamics
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.
2017-12-01
During the past few decades, numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solutions. However, in recent years the rapid emergence of machine learning (ML), a subfield of artificial intelligence (AI), in many fields of science, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of circulation in the interior of Earth relies on the study of high-pressure mineral physics, geochemistry, and petrology, where the number of mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of the parameters incorporated in the numerical models as inputs are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties, with emphasis on SML techniques for solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite, which may cause slab and plume stagnation at mid-mantle depths. The degree of stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes, which can be considered measures of the degree of stagnation, are assigned as sample features. The machine learning models can determine the magnitude of the spin-transition-induced density anomalies that cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms, we show that SML techniques can successfully predict the magnitude of the mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex problems in mantle dynamics by employing deep learning algorithms for the estimation of mantle properties such as viscosity, elastic parameters, and thermal and chemical anomalies.
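A minimal sketch of the supervised step: stagnation volume fractions serve as features and a discretized anomaly magnitude as the class label. The feature/label construction below is synthetic and hypothetical; scikit-learn's SVC stands in for the paper's SVM machinery.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for convection-model output: each sample holds the
# stagnated-slab and stagnated-plume volume fractions; the label is the
# (discretized) spin-transition density anomaly that produced them.
n = 300
labels = rng.integers(0, 3, n)                     # 0: weak, 1: moderate, 2: strong
features = np.column_stack([
    0.10 + 0.10 * labels + rng.normal(0, 0.03, n),  # slab volume fraction
    0.05 + 0.08 * labels + rng.normal(0, 0.03, n),  # plume volume fraction
])

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```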
Monitoring D-Region Variability from Lightning Measurements
NASA Technical Reports Server (NTRS)
Simoes, Fernando; Berthelier, Jean-Jacques; Pfaff, Robert; Bilitza, Dieter; Klenzing, Jeffery
2011-01-01
In situ measurements of ionospheric D-region characteristics are somewhat scarce and rely mostly on sounding rockets. Remote sensing techniques employing Very Low Frequency (VLF) transmitters can provide electron density estimates from subionospheric wave propagation modeling. Here we discuss how lightning waveform measurements, namely sferics and tweeks, can be used for monitoring the D-region variability and day-night transition, and for local electron density estimates. A brief comparison among D-region aeronomy models is also presented.
Slice sampling technique in Bayesian extreme of gold price modelling
NASA Astrophysics Data System (ADS)
Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham
2013-09-01
In this paper, a simulation study of Bayesian extreme value modelling using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011.
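For concreteness, a univariate slice sampler with stepping-out (following Neal, 2003) applied to the Gumbel location parameter looks as follows; the data are synthetic and the scale is assumed known, which is a simplification of the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic block maxima from a Gumbel(mu=100, beta=15) distribution
mu_true, beta = 100.0, 15.0
data = rng.gumbel(mu_true, beta, size=50)

def log_post(mu):
    """Log-posterior for mu with a flat prior and known scale beta."""
    z = (data - mu) / beta
    return np.sum(-z - np.exp(-z))

def slice_sample(x0, logf, n_iter=2000, w=5.0):
    """Univariate slice sampler with stepping-out (Neal, 2003)."""
    samples, x = [], x0
    for _ in range(n_iter):
        log_y = logf(x) + np.log(rng.uniform())   # slice level under f(x)
        lo = x - w * rng.uniform()                # random initial bracket
        hi = lo + w
        while logf(lo) > log_y: lo -= w           # step out to the left
        while logf(hi) > log_y: hi += w           # step out to the right
        while True:                               # sample with shrinkage
            x_new = rng.uniform(lo, hi)
            if logf(x_new) > log_y:
                x = x_new
                break
            lo, hi = (x_new, hi) if x_new < x else (lo, x_new)
        samples.append(x)
    return np.array(samples)

draws = slice_sample(data.mean(), log_post)
print(f"posterior mean of mu: {draws[500:].mean():.1f}")  # near 100
```

Unlike Metropolis-Hastings, the slice sampler needs no proposal-scale tuning, which is one reason it tends to give lower RMSE out of the box.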
Estimation of urban runoff and water quality using remote sensing and artificial intelligence.
Ha, S R; Park, S Y; Park, D H
2003-01-01
Water quality and the quantity of runoff are strongly dependent on landuse and landcover (LULC) criteria. In this study, we developed an improved parameter estimation procedure for an environmental model using remote sensing (RS) and artificial intelligence (AI) techniques. Landsat TM multi-band (7 bands) and Korea Multi-Purpose Satellite (KOMPSAT) panchromatic data were selected for input data processing. We employed two kinds of artificial intelligence techniques, RBF-NN (radial-basis-function neural network) and ANN (artificial neural network), to classify the LULC of the study area. A bootstrap resampling method, a statistical technique, was employed to generate the confidence intervals and distribution of the unit load. SWMM was used to simulate the urban runoff and water quality and was applied to the study watershed. The conditions of urban flow and non-point contamination were simulated with rainfall-runoff and measured water quality data. The estimated total runoff, peak time, and pollutant generation varied considerably according to the classification accuracy and the percentile unit load applied. The proposed procedure could be applied efficiently to water quality and runoff simulation in a rapidly changing urban area.
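The bootstrap step is simple enough to sketch in full. The unit-load values below are hypothetical; the procedure (percentile bootstrap of the mean) is the generic form of the technique named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical unit-load observations (kg/ha/yr) for one LULC class
unit_load = np.array([3.1, 4.7, 2.9, 5.3, 4.1, 3.8, 6.0, 4.4, 3.5, 5.1])

# Percentile bootstrap: resample with replacement, recompute the mean
boot_means = np.array([
    rng.choice(unit_load, size=unit_load.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {unit_load.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```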
NASA Astrophysics Data System (ADS)
Okuyama, Tadahiro
The Kuhn-Tucker model, which has been studied in recent years, is a benefit valuation technique using revealed-preference data; its distinctive feature is that it treats various patterns of corner solutions flexibly. It is widely known in benefit calculation with revealed-preference data that the value of a benefit changes depending on the functional form. However, there are few studies that examine the relationship between utility functions and benefit values in the Kuhn-Tucker model. The purpose of this study is to analyze the influence of the functional form on the value of a benefit. Six types of utility functions were employed for the benefit calculations. Data on recreational activity at 26 beaches in Miyagi Prefecture were employed. Calculation results indicated that the functional forms of Phaneuf and Siderelis (2003) and Whitehead et al. (2010) are useful for benefit calculations.
Thermoelectric technique to precisely control hyperthermic exposures of human whole blood.
DuBose, D A; Langevin, R C; Morehouse, D H
1996-12-01
The need in military research to avoid exposing humans to harsh environments and reduce animal use requires the development of in vitro models for the study of hyperthermic injury. A thermoelectric module (TEM) system was employed to heat human whole blood (HWB) in a manner similar to that experienced by heat-stroked rats. This system precisely and accurately replicated mild, moderate, and extreme heat-stress exposures. Temperature changes could be monitored without the introduction of a test sample thermistor, which reduced contamination problems. HWB with hematocrits of 45 or 50% had similar heating curves, indicating that the system compensated for differences in sample character. The unit's size permitted its containment within a standard carbon dioxide incubator to further control sample environment. These results indicate that the TEM system can precisely control temperature change in this heat stress in vitro model employing HWB. Information obtained from such a model could contribute to military preparedness.
NASA Astrophysics Data System (ADS)
Antoniadis, Konstantinos D.; Tertsinidou, Georgia J.; Assael, Marc J.; Wakeham, William A.
2016-08-01
The paper considers the conditions that are necessary to secure accurate measurement of the apparent thermal conductivity of two-phase systems comprising nanoscale particles of one material suspended in a fluid phase of a different material. It is shown that instruments operating according to the transient hot-wire technique can, indeed, produce excellent measurements when a finite element method (FEM) is employed to describe the instrument for the exact geometry of the hot wire. Furthermore, it is shown that an approximate analytic solution can be employed with equal success, over the time range of 0.1 s to 1 s, provided that (a) two wires are employed, so that end effects are canceled, (b) each wire is very thin, less than 30 μm in diameter, so that the line source model and the corresponding corrections are valid, (c) low values of the temperature rise, less than 4 K, are employed in order to minimize the effect of convection on the heat transfer in the time of measurement of 1 s, and (d) insulated wires are employed for measurements in electrically conducting or polar liquids to avoid current leakage or other electrical distortions. According to these criteria, a transient hot-wire instrument has been designed, constructed, and employed for the measurement of the enhancement of the thermal conductivity of water when TiO2 or multi-wall carbon nanotubes (MWCNT) are added. These new results, together with a critical evaluation of other measurements, demonstrate the importance of proper implementation of the technique.
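For context, the approximate analytic solution referred to is the ideal line-source working equation, standard in the transient hot-wire literature (notation mine, not quoted from the paper): for a wire of radius a dissipating q per unit length in a fluid of thermal conductivity λ and thermal diffusivity κ,

```latex
\Delta T_{\mathrm{id}}(t) \;=\; \frac{q}{4\pi\lambda}\,
\ln\!\left(\frac{4\kappa t}{a^{2} C}\right),
\qquad C = e^{\gamma} \approx 1.781,
```

where γ is Euler's constant; λ then follows from the slope of the measured ΔT versus ln t. Criteria (a) through (d) in the abstract are precisely the conditions under which this idealization, plus small corrections, remains valid.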
NASA Astrophysics Data System (ADS)
Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir
2013-07-01
The knowledge of groundwater table fluctuations is important in agricultural lands as well as in studies related to groundwater utilization and management. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Network (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting at horizons from the following day up to 7 days. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the data from the last two years were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records.
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
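The estimation step can be illustrated with a one-dimensional toy: fitting a low-order rational function by the Levenberg-Marquardt algorithm via scipy. The profile and coefficients below are hypothetical stand-ins for the Zernike-based rational surfaces in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Synthetic "surface" profile: a rational function plus measurement noise
r = np.linspace(-1, 1, 200)
z_true = (1.0 + 0.5 * r - 0.8 * r**2) / (1.0 + 0.3 * r**2)
z_meas = z_true + rng.normal(0, 0.01, r.size)

def residuals(p):
    a0, a1, a2, b1 = p
    return (a0 + a1 * r + a2 * r**2) / (1.0 + b1 * r**2) - z_meas

# method="lm" selects the Levenberg-Marquardt algorithm
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0, 0.0], method="lm")
rms = np.sqrt(np.mean(fit.fun**2))
print(f"coefficients: {np.round(fit.x, 3)}, rms error: {rms:.4f}")
```

The appeal of the rational form is visible even here: the denominator coefficients buy extra shape flexibility without raising the polynomial order, which is how the paper beats Zernike fits at a fixed coefficient count.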
Light water reactor lower head failure analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rempe, J.L.; Chavez, S.A.; Thinnes, G.L.
1993-10-01
This document presents the results from a US Nuclear Regulatory Commission-sponsored research program to investigate the mode and timing of vessel lower head failure. Major objectives of the analysis were to identify plausible failure mechanisms and to develop a method for determining which failure mode would occur first in different light water reactor designs and accident conditions. Failure mechanisms, such as tube ejection, tube rupture, global vessel failure, and localized vessel creep rupture, were studied. Newly developed models and existing models were applied to predict which failure mechanism would occur first in various severe accident scenarios. So that a broader range of conditions could be considered simultaneously, calculations relied heavily on models with closed-form or simplified numerical solution techniques. Finite element techniques were employed for analytical model verification and for examining more detailed phenomena. High-temperature creep and tensile data were obtained for predicting vessel and penetration structural response.
NASA Astrophysics Data System (ADS)
Xu, Kunshan; Qiu, Xingqi; Tian, Xiaoshuai
2018-01-01
The metal magnetic memory testing (MMMT) technique has been extensively applied in various fields because of its unique advantages of easy operation, low cost and high efficiency. However, very limited theoretical research has been conducted on the application of MMMT to buried defects. To promote study in this area, the equivalent magnetic charge method is employed to establish a self-magnetic flux leakage (SMFL) model of a buried defect. Theoretical results based on the established model successfully capture the basic characteristics of the SMFL signals of buried defects, as confirmed via experiment. In particular, the newly developed model can calculate the buried depth of a defect from the SMFL signals obtained via testing. The results show that the new model can successfully assess the characteristics of buried defects, which is valuable for the application of MMMT in non-destructive testing.
Adhesion of perfume-filled microcapsules to model fabric surfaces.
He, Yanping; Bowen, James; Andrews, James W; Liu, Min; Smets, Johan; Zhang, Zhibing
2014-01-01
The retention and adhesion of melamine formaldehyde (MF) microcapsules on a model fabric surface in aqueous solution were investigated using a customised flow chamber technique and atomic force microscopy (AFM). A cellulose film was employed as a model fabric surface. Modification of the cellulose with chitosan was found to increase the retention and adhesion of microcapsules on the model fabric surface. The AFM force-displacement data reveal that bridging forces resulting from the extension of cellulose chains dominate the adhesion between the microcapsule and the unmodified cellulose film, whereas electrostatic attraction helps the microcapsules adhere to the chitosan-modified cellulose film. The correlation between results obtained using these two complementary techniques suggests that the flow chamber device can be potentially used for rapid screening of the effect of chemical modification on the adhesion of microparticles to surfaces, reducing the time required to achieve an optimal formulation.
An electrical circuit model for simulation of indoor radon concentration.
Musavi Nasab, S M; Negarestani, A
2013-01-01
In this study, a new model based on electric circuit theory was introduced to simulate the behaviour of indoor radon concentration. In this model, a voltage source simulates radon generation in walls, a conductivity simulates migration through walls, and the voltage across a capacitor simulates the radon concentration in a room. The simulation considers migration of radon through walls by a diffusion mechanism in one-dimensional geometry. Data reported for a typical Greek house were employed to examine the application of this simulation technique to the behaviour of indoor radon.
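In circuit terms the room behaves like an RC network: generation (the source) charges the room "capacitor" through the wall "resistance", while ventilation and radioactive decay discharge it. A minimal sketch of the governing balance follows, with illustrative parameter values rather than the Greek-house data from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the values from the study)
S = 10.0          # radon entry rate into the room, Bq m^-3 h^-1
lam_rn = 0.00755  # Rn-222 decay constant, h^-1 (half-life 3.82 d)
lam_v = 0.5       # air-exchange (ventilation) rate, h^-1

# dC/dt = S - (lambda_rn + lambda_v) * C  -- the RC charging equation
def dC_dt(t, C):
    return [S - (lam_rn + lam_v) * C[0]]

sol = solve_ivp(dC_dt, [0.0, 48.0], [0.0])
C_ss = S / (lam_rn + lam_v)   # steady state, analogous to the source voltage
print(f"steady-state concentration ≈ {C_ss:.1f} Bq/m^3")
print(f"concentration after 48 h   ≈ {sol.y[0, -1]:.1f} Bq/m^3")
```

The "RC time constant" 1/(λ_rn + λ_v) sets how quickly the indoor concentration tracks changes in ventilation, which is the behaviour the circuit analogy is designed to capture.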
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1984-01-01
Single and joint terminal slant path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and the dependence of the autocorrelation function on site spacing and elevation angle were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path elevation angle prediction technique was developed and demonstrated to fit well with the radar-derived single- and joint-terminal cumulative fade distributions at various elevation angles.
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
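Genetic-algorithm parameter extraction can be sketched generically in numpy. The objective below, an I-V fit error for a hypothetical two-parameter drain-current law, is purely illustrative and much simpler than the paper's analytical model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical drain-current law Id = k * Vg^m; fit k and m to "measurements"
vg = np.linspace(0.5, 2.0, 20)
id_meas = 0.12 * vg**1.8 + rng.normal(0, 0.002, vg.size)

def fitness(pop):
    k, m = pop[:, 0:1], pop[:, 1:2]
    return ((k * vg**m - id_meas) ** 2).mean(axis=1)  # MSE per individual

# Initialize the population within parameter bounds [k, m]
pop = rng.uniform([0.01, 1.0], [1.0, 3.0], size=(60, 2))
for gen in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:30]]                  # truncation selection
    idx_a, idx_b = rng.integers(0, 30, (2, 30))
    alpha = rng.uniform(size=(30, 1))
    children = alpha * parents[idx_a] + (1 - alpha) * parents[idx_b]  # blend crossover
    children += rng.normal(0, 0.02, children.shape)    # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin(fitness(pop))]
print(f"k ≈ {best[0]:.3f}, m ≈ {best[1]:.2f}")         # near 0.12 and 1.8
```

The same selection/crossover/mutation loop scales to the many-parameter case, where its insensitivity to local minima is the main attraction over gradient-based extraction.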
NASA Astrophysics Data System (ADS)
Zheng, Q.; Dickson, S.; Guo, Y.
2007-12-01
A good understanding of the physico-chemical processes (i.e., advection, dispersion, attachment/detachment, straining, sedimentation etc.) governing colloid transport in fractured media is imperative in order to develop appropriate bioremediation and/or bioaugmentation strategies for contaminated fractured aquifers, form management plans for groundwater resources to prevent pathogen contamination, and identify suitable radioactive waste disposal sites. However, research in this field is still in its infancy due to the complex heterogeneous nature of fractured media and the resulting difficulty in characterizing this media. The goal of this research is to investigate the effects of aperture field variability, flow rate and ionic strength on colloid transport processes in well-characterized single fractures. A combination of laboratory-scale experiments, numerical simulations, and imaging techniques was employed to achieve this goal. Transparent replicas were cast from natural rock fractures, and a light transmission technique was employed to measure their aperture fields directly. The surface properties of the synthetic fractures were characterized by measuring the zeta potential under different ionic strengths. A 3³ factorial experiment was implemented to investigate the influence of aperture field variability, flow rate, and ionic strength on different colloid transport processes in the laboratory-scale fractures, specifically dispersion and attachment/detachment. A fluorescent stain technique was employed to photograph the colloid transport processes, and an analytical solution to the one-dimensional transport equation was fit to the colloid breakthrough curves to calculate the average transport velocity, dispersion coefficient, and attachment/detachment coefficient. The Reynolds equation was solved to obtain the flow field in the measured aperture fields, and the random walk particle tracking technique was employed to model the colloid transport experiments. The images clearly show the development of preferential pathways for colloid transport in the different aperture fields and under different flow conditions. Additionally, a correlation between colloid deposition and fracture wall topography was identified. This presentation will demonstrate (1) differential transport between colloid and solute in single fractures, and the relationship between differential transport and aperture field statistics; (2) the relationship between the colloid dispersion coefficient and aperture field statistics; and (3) the relationship between attachment/detachment, aperture field statistics, fracture wall topography, flow rate, and ionic strength. In addition, this presentation will provide insight into the application of the random walk particle tracking technique for modeling colloid transport in variable-aperture fractures.
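One widely used analytical solution for such breakthrough fitting is the Ogata-Banks solution to the 1-D advection-dispersion equation with a constant-concentration inlet; whether this exact form was used in the study is an assumption, and the sketch below also omits the attachment/detachment term, fitting only velocity and dispersion to hypothetical breakthrough data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc, erfcx

L = 0.3  # distance from inlet to observation point along the fracture (m)

def ogata_banks(t, v, D):
    """C/C0 at x = L for a continuous source (Ogata & Banks, 1961).

    The second term uses erfcx for numerical stability:
    exp(v*L/D) * erfc(b) == erfcx(b) * exp(-a**2).
    """
    a = (L - v * t) / (2.0 * np.sqrt(D * t))
    b = (L + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * (erfc(a) + erfcx(b) * np.exp(-a * a))

# Hypothetical breakthrough data: relative concentration vs. time (s)
t = np.array([100.0, 200, 300, 400, 600, 800, 1200, 1600])
c = np.array([0.01, 0.08, 0.25, 0.45, 0.72, 0.86, 0.96, 0.99])

(v, D), _ = curve_fit(ogata_banks, t, c, p0=[1e-3, 1e-5], bounds=(0, np.inf))
print(f"velocity = {v:.2e} m/s, dispersion = {D:.2e} m^2/s")
```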
Adapting GOMS to Model Human-Robot Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, Jill; Scholtz, Jean; Kieras, David
2007-03-09
Human-robot interaction (HRI) has been maturing in tandem with robots’ commercial success. In the last few years HRI researchers have been adopting—and sometimes adapting—human-computer interaction (HCI) evaluation techniques to assess the efficiency and intuitiveness of HRI designs. For example, Adams (2005) used Goal Directed Task Analysis to determine the interaction needs of officers from the Nashville Metro Police Bomb Squad. Scholtz et al. (2004) used Endsley’s (1988) Situation Awareness Global Assessment Technique to determine robotic vehicle supervisors’ awareness of when vehicles were in trouble and thus required closer monitoring or intervention. Yanco and Drury (2004) employed usability testing to determine (among other things) how well a search-and-rescue interface supported use by first responders. One set of HCI tools that has so far seen little exploration in the HRI domain, however, is the class of modeling and evaluation techniques known as formal methods.
The dynamics and control of large flexible space structures, 6
NASA Technical Reports Server (NTRS)
Bainum, P. M.
1983-01-01
The controls analysis, based on a truncated finite element model of the 122-m Hoop/Column Antenna System, focuses on an analysis of controllability as well as the synthesis of control laws. Graph theoretic techniques are employed to consider controllability for different combinations of numbers and locations of actuators. Control law synthesis is based on an application of linear regulator theory as well as pole placement techniques. Placement of an actuator on the hoop can result in a noticeable improvement in the transient characteristics. The problem of orientation and shape control of an orbiting flexible beam, previously examined, is now extended to include the influence of solar radiation environmental forces. For extremely flexible thin structures, modification of the control laws may be required, and techniques for accomplishing this are explained. Effects of environmental torques are also included in previously developed models of orbiting flexible thin platforms.
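As a minimal illustration of linear-regulator synthesis of the kind described (a generic double-integrator mode, not the antenna model itself), the steady-state LQR gain follows from the continuous algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Single flexible mode approximated as a double integrator: x = [pos, vel]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[0.1]])      # control-effort weighting

# LQR: minimize the integral of x'Qx + u'Ru; optimal feedback u = -Kx
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
poles = np.linalg.eigvals(A - B @ K)
print(f"gain K = {K.ravel()}, closed-loop poles = {np.round(poles, 3)}")
```

Pole placement, the alternative named in the abstract, instead chooses the closed-loop eigenvalues directly and solves for K; LQR trades that directness for an explicit cost balance between transient response and actuator effort.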
ERIC Educational Resources Information Center
Kidney, John
This self-instructional module, the eleventh in a series of 16 on techniques for coordinating work experience programs, deals with federal and state employment laws. Addressed in the module are federal and state employment laws pertaining to minimum wage for student learners, minimum wage for full-time students, unemployment insurance, child labor…
Current progress in patient-specific modeling
2010-01-01
We present a survey of recent advancements in the emerging field of patient-specific modeling (PSM). Researchers in this field are currently simulating a wide variety of tissue and organ dynamics to address challenges in various clinical domains. The majority of this research employs three-dimensional, image-based modeling techniques. Recent PSM publications mostly represent feasibility or preliminary validation studies on modeling technologies, and these systems will require further clinical validation and usability testing before they can become a standard of care. We anticipate that with further testing and research, PSM-derived technologies will eventually become valuable, versatile clinical tools. PMID:19955236
Resonance and streaming of armored microbubbles
NASA Astrophysics Data System (ADS)
Spelman, Tamsin; Bertin, Nicolas; Stephen, Olivier; Marmottant, Philippe; Lauga, Eric
2015-11-01
A new experimental technique involves building a hollow capsule which partially encompasses a microbubble, creating an "armored microbubble" with a long lifespan. Under acoustic actuation, such a bubble produces net streaming flows. In order to model the induced flow theoretically, we first extend classical models of free bubbles to describe the streaming flow around a spherical body for any known axisymmetric shape oscillation. A potential flow model is then employed to determine the resonance modes of the armored microbubble. We finally use a more detailed viscous model to calculate the surface shape oscillations at the experimental driving frequency, and from this we predict the generated streaming flows.
ERIC Educational Resources Information Center
Montoya, Isaac D.
2008-01-01
Three classification techniques (Chi-square Automatic Interaction Detection [CHAID], Classification and Regression Tree [CART], and discriminant analysis) were tested to determine their accuracy in predicting Temporary Assistance for Needy Families program recipients' future employment. Technique evaluation was based on proportion of correctly…
Imaging System Model Crammed Into A 32K Microcomputer
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1986-12-01
An imaging system model, based upon linear systems theory, has been developed for a microcomputer with less than 32K of free random access memory (RAM). The model includes diffraction effects of the optics, aberrations in the optics, and atmospheric propagation transfer functions. Variables include pupil geometry, the magnitude and character of the aberrations, and the strength of atmospheric turbulence ("seeing"). Both coherent and incoherent image formation can be evaluated. The techniques employed for crowding the model into a very small computer will be discussed in detail. Simplifying assumptions for the diffraction and aberration phenomena will be shown, along with practical considerations in modeling the optical system. Particular emphasis is placed on avoiding inaccuracies in modeling the pupil and the associated optical transfer function, given limits on spatial frequency content and resolution. Memory and runtime constraints are analyzed, stressing the efficient use of assembly language Fourier transform routines, disk input/output, and graphic displays. The compromises between computer time, limited RAM, and scientific accuracy will be given, with techniques for balancing these parameters for individual needs.
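The linear-systems core of such a model is compact enough to show in modern numpy (rather than 32K assembly): form a pupil with an aberration phase, take the incoherent PSF as |FFT(pupil)|², and obtain the OTF as its normalized Fourier transform. Grid size and defocus strength below are arbitrary choices.

```python
import numpy as np

N, r = 256, 0.25                 # grid size; pupil radius as fraction of grid
y, x = np.indices((N, N)) / N - 0.5
rho2 = x**2 + y**2
pupil = (rho2 <= r**2).astype(float)

# Aberration as a pupil-phase term: one wave of defocus at the pupil edge
field = pupil * np.exp(1j * 2.0 * np.pi * rho2 / r**2 * pupil)

# Incoherent imaging: PSF = |FT(pupil function)|^2; OTF = FT(PSF), normalized
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
otf /= otf[N // 2, N // 2]       # unity at zero spatial frequency
mtf = np.abs(otf)
print(f"MTF at a mid-band frequency: {mtf[N // 2, N // 2 + N // 8]:.3f}")
```

Coherent imaging uses the pupil function itself as the transfer function, which is why both cases fall out of the same pupil-plane model.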
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and increase the network size while keeping the network execution speed close to real time at high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on the neural control of cognitive robots and systems.
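CORDIC suits FPGA arithmetic because it evaluates trigonometric and exponential terms with only shifts and adds. A floating-point emulation of the rotation mode is sketched below for flavor; the fixed-point circuit in the paper would replace the float arithmetic with bit shifts.

```python
import math

def cordic_sin_cos(theta, n_iter=32):
    """Rotation-mode CORDIC: returns (sin, cos) for |theta| < ~1.74 rad."""
    # Precomputed rotation angles atan(2^-i) and the inverse aggregate gain
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0.0 else -1.0
        # In hardware, multiplication by 2^-i is a plain right shift
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K

s, c = cordic_sin_cos(0.7)
print(f"sin: {s:.6f} (math.sin: {math.sin(0.7):.6f})")
```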
Feed-Forward Neural Network Prediction of the Mechanical Properties of Sandcrete Materials
Asteris, Panagiotis G.; Roussis, Panayiotis C.; Douvika, Maria G.
2017-01-01
This work presents a soft-sensor approach for estimating critical mechanical properties of sandcrete materials. Feed-forward (FF) artificial neural network (ANN) models are employed for building soft-sensors able to predict the 28-day compressive strength and the modulus of elasticity of sandcrete materials. To this end, a new normalization technique for the pre-processing of data is proposed. The comparison of the derived results with the available experimental data demonstrates the capability of FF ANNs to predict with pinpoint accuracy the mechanical properties of sandcrete materials. Furthermore, the proposed normalization technique has been proven effective and robust compared to other normalization techniques available in the literature. PMID:28598400
Techniques in teaching statistics : linking research production and research use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Moyano, I .; Smith, A.; Univ. of Massachusetts at Boston)
In the spirit of closing the 'research-practice gap,' the authors extend evidence-based principles to statistics instruction in social science graduate education. The authors employ a Delphi method to survey experienced statistics instructors to identify teaching techniques that overcome the challenges inherent in teaching statistics to students enrolled in practitioner-oriented master's degree programs. Among the teaching techniques identified as essential are using real-life examples, requiring data collection exercises, and emphasizing interpretation rather than results. Building on existing research, preliminary interviews, and the findings from the study, the authors develop a model describing antecedents to the strength of the link between research and practice.
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL for short) and the reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder from the non-null interferometer, separated by the approach of error storage subtraction. Experimental results show that, after the systematic error is removed from the testing result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and consideration of the systematic error greatly increases the test accuracy of the non-null aspheric testing system.
NASA Technical Reports Server (NTRS)
Woodbury, G. E.; Wallace, J. W.
1974-01-01
An investigation was conducted of new techniques used to determine the complete transonic drag characteristics of a series of free-flight drop-test models using principally radar tracking data. The full capabilities of the radar tracking and meteorological measurement systems were utilized. In addition, preflight trajectory design, exact kinematic equations, and visual-analytical filtering procedures were employed. The results of this study were compared with the results obtained from analysis of the onboard accelerometer and pressure-sensor data of the only drop-test model that was instrumented. The accelerometer-pressure drag curve was approximated by the radar-data drag curve. However, a small-amplitude oscillation on the latter curve precluded a precise definition of its drag rise.
Optimisation of Critical Infrastructure Protection: The SiVe Project on Airport Security
NASA Astrophysics Data System (ADS)
Breiing, Marcus; Cole, Mara; D'Avanzo, John; Geiger, Gebhard; Goldner, Sascha; Kuhlmann, Andreas; Lorenz, Claudia; Papproth, Alf; Petzel, Erhard; Schwetje, Oliver
This paper outlines the scientific goals, ongoing work and first results of the SiVe research project on critical infrastructure security. The methodology is generic, while pilot studies are chosen from airport security. The outline proceeds in three major steps: (1) building a threat scenario, (2) development of simulation models as scenario refinements, and (3) assessment of alternatives. Advanced techniques of systems analysis and simulation are employed to model relevant airport structures and processes as well as offences. Computer experiments are carried out to compare and optimise alternative solutions. The optimality analyses draw on approaches to quantitative risk assessment recently developed in the operational sciences. To exploit the advantages of the various techniques, an integrated simulation workbench is built up in the project.
NASA Astrophysics Data System (ADS)
Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.
2017-08-01
It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error; the employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled at their respective hierarchical scales by distinct simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting promise for use in the computer-aided design of high-performance dielectric materials.
Jorgensen, Bradley S; Martin, John F; Pearce, Meryl; Willis, Eileen
2013-01-30
Research employing household water consumption data has sought to test models of water demand and conservation using variables from attitude theory. A significant, albeit unrecognised, challenge has been that attitude models describe individual-level motivations while consumption data are recorded at the household level, thereby creating inconsistency between the units of theory and measurement. This study employs structural equation modelling and moderated regression techniques to address the level-of-analysis problem, and tests hypotheses by isolating effects on water conservation in single-person households. Furthermore, the results question the explanatory utility of habit strength, perceived behavioural control, and intentions for understanding metered water conservation in single-person households. For example, evidence that intentions predict water conservation or that they interact with habit strength in single-person households was contrary to theoretical expectations. On the other hand, habit strength, self-reports of past water conservation, and perceived behavioural control were good predictors of intentions to conserve water. Copyright © 2012 Elsevier Ltd. All rights reserved.
Improvements to Wire Bundle Thermal Modeling for Ampacity Determination
NASA Technical Reports Server (NTRS)
Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah
2017-01-01
Determining the current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed to develop techniques to assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprising fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented that allows larger configurations and is not constrained for low bundle infrared emissivity calculations. The formulation of key internal radiation and interface conductance parameters is discussed, including the effects of temperature and air pressure on wire-to-wire thermal conductance. Test cases comparing model-predicted ampacity with that calculated from standards documents are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartas, Raul; Mimendia, Aitor; Valle, Manel del
2009-05-23
Calibration models for multi-analyte electronic tongues have commonly been built using a set of sensors, at least one per analyte under study. Complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach used here takes advantage of the complex information recorded in one electrode's transient after insertion of the sample for building the calibration models for both analytes. The information from the electrode was first processed by the discrete wavelet transform to extract useful information and reduce the signal length, and then by artificial neural networks to fit a model. Two different potentiometric sensors were used as study cases to corroborate the effectiveness of the approach.
Pricing foreign equity option under stochastic volatility tempered stable Lévy processes
NASA Astrophysics Data System (ADS)
Gong, Xiaoli; Zhuang, Xintian
2017-10-01
Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper replaces the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct stochastic volatility tempered stable Lévy process (TSSV) models. The TSSV model framework permits the infinite-activity jump behavior of return dynamics and the time-varying volatility consistently observed in financial markets by subordinating the tempered stable process to the stochastic volatility process, capturing the leptokurtosis, fat-tailedness, and asymmetry of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, a formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of foreign equity options.
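The density-recovery step can be sketched compactly. Since the TSSV characteristic function is not reproduced here, the snippet below assumes a Gaussian characteristic function as a stand-in so the Fourier inversion can be checked against a known density; the paper evaluates the same integral with an FFT for speed.

```python
# Recover a probability density from its characteristic function by numerical
# Fourier inversion: pdf(x) = (1/2*pi) * integral of exp(-i*u*x) * phi(u) du.
import numpy as np

def cf_gaussian(u, mu=0.0, sigma=0.2):
    # Stand-in CF; the TSSV CF would be substituted here.
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2)

def pdf_from_cf(x, cf, u_max=100.0, n=2048):
    u = np.linspace(-u_max, u_max, n)
    du = u[1] - u[0]
    integrand = np.exp(-1j * np.outer(x, u)) * cf(u)     # shape (len(x), n)
    return np.real(integrand.sum(axis=1)) * du / (2 * np.pi)

x = np.linspace(-1, 1, 201)
p = pdf_from_cf(x, cf_gaussian)
exact = np.exp(-x**2 / (2 * 0.2**2)) / (0.2 * np.sqrt(2 * np.pi))
print("max abs error:", np.abs(p - exact).max())         # should be tiny
```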
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD, together with information on site signal detection thresholds, the type of solution algorithm used, and range attenuation, to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, the modeling technique gives an estimate of the number, strength, and distribution of events going undetected, leading to a variety of event density contour maps; this application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented, and a new method for producing an analytical representation of the empirical PCD is introduced.
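A minimal Monte Carlo rendering of the mapping idea is sketched below: sample peak currents from an assumed PCD, attenuate with range, and count how often enough sites exceed their thresholds to produce a solution. The site layout, thresholds, lognormal PCD, and 1/r attenuation law are all illustrative assumptions, not the paper's network.

```python
# Monte Carlo detection efficiency at one grid point: sample peak currents
# from an assumed lognormal PCD, attenuate with range, and require enough
# sites above threshold for a location solution.
import numpy as np

rng = np.random.default_rng(1)
sites = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 300.0], [300.0, 300.0]])  # km
threshold, min_sites = 5.0, 4        # site trigger level, sites needed to solve

def detection_efficiency(x, y, n_flash=5000):
    peak = rng.lognormal(np.log(15.0), 0.7, n_flash)          # PCD sample, kA
    r = np.hypot(sites[:, 0] - x, sites[:, 1] - y) + 1.0      # km, avoid r=0
    signal = peak[:, None] * (100.0 / r)                      # 1/r attenuation
    solved = (signal >= threshold).sum(axis=1) >= min_sites
    return solved.mean()

# Evaluating over a lat/lon grid yields the detection-efficiency contours.
print(detection_efficiency(150.0, 150.0))
```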
Simulation analysis of the transparency of cornea and sclera
NASA Astrophysics Data System (ADS)
Yang, Chih-Yao; Tseng, Snow H.
2017-02-01
Although both consist of collagen fibrils, the sclera is opaque whereas the cornea is transparent at optical wavelengths. By employing the pseudospectral time-domain (PSTD) simulation technique, we model light impinging upon the cornea and sclera, respectively. To analyze the light-scattering characteristics, the cornea and sclera are modeled with different sizes and arrangements of non-absorbing collagen fibrils. Various factors are analyzed, including the wavelength of the incident light, the thickness of the scattering medium, the positions of the collagen fibrils, and the size distribution of the fibrils.
Nuclear shell model code CRUNCHER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resler, D.A.; Grimes, S.M.
1988-05-01
A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
NASA Technical Reports Server (NTRS)
1973-01-01
A programmer's manual is presented for a digital computer program that permits rapid and accurate parametric analysis of current and advanced attitude control propulsion systems. The concept is a cold-helium-pressurized, subcritical-cryogen-supplied, bipropellant gas-fed attitude control propulsion system. The cryogen fluids are stored as liquids under low pressure and temperature conditions. The mathematical model provides a generalized form for the procedural technique employed in setting up the analysis program.
Neutral gas sympathetic cooling of an ion in a Paul trap.
Chen, Kuang; Sullivan, Scott T; Hudson, Eric R
2014-04-11
A single ion immersed in a neutral buffer gas is studied. An analytical model is developed that gives a complete description of the dynamics and steady-state properties of the ions. An extension of this model, using techniques employed in the mathematics of economics and finance, is used to explain the recent observation of non-Maxwellian statistics for these systems. Taken together, these results offer an explanation of the long-standing issues associated with sympathetic cooling of an ion by a neutral buffer gas.
Modeling of switching regulator power stages with and without zero-inductor-current dwell time
NASA Technical Reports Server (NTRS)
Lee, F. C.; Yu, Y.; Triner, J. E.
1976-01-01
State space techniques are employed to derive accurate models for buck, boost, and buck/boost converter power stages operating with and without zero-inductor-current dwell time. A generalized procedure is developed which treats the continuous-inductor-current mode without the dwell time as a special case of the discontinuous-current mode, when the dwell time vanishes. An abrupt change of system behavior including a reduction of the system order when the dwell time appears is shown both analytically and experimentally.
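For reference, a textbook state-space-averaged model of the buck power stage in the continuous-inductor-current mode (the no-dwell-time special case discussed above) is sketched below; component values and duty ratio are arbitrary, and the paper's discontinuous-mode generalization is not reproduced.

```python
# State-space averaging: the switched dynamics are averaged over the duty
# ratio d, here for a buck converter where only the input term switches.
import numpy as np
from scipy.integrate import solve_ivp

L, C, R = 100e-6, 470e-6, 10.0        # inductance, capacitance, load
Vin, d = 24.0, 0.5                    # input voltage, duty ratio

A = np.array([[0.0, -1.0 / L],        # state x = [inductor current, cap voltage]
              [1.0 / C, -1.0 / (R * C)]])
B_on = np.array([1.0 / L, 0.0])       # input term present only when switch is on

def averaged(t, x):
    return A @ x + d * B_on * Vin     # continuous-current-mode average

sol = solve_ivp(averaged, [0.0, 0.05], [0.0, 0.0], max_step=1e-4)
print("steady-state output voltage ~ d*Vin:", sol.y[1, -1])   # ~12 V
```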
Selective field evaporation in field-ion microscopy for ordered alloys
NASA Astrophysics Data System (ADS)
Ge, Xi-jin; Chen, Nan-xian; Zhang, Wen-qing; Zhu, Feng-wu
1999-04-01
Semiempirical pair potentials, obtained by applying the Chen-inversion technique to a cohesion equation of Rose et al. [Phys. Rev. B 29, 2963 (1984)], are employed to assess the bonding energies of surface atoms of intermetallic compounds. This provides a new calculational model of selective field evaporation in field-ion microscopy (FIM). Based on this model, a successful interpretation of FIM image contrasts for Fe3Al, PtCo, Pt3Co, Ni4Mo, Ni3Al, and Ni3Fe is given.
Application of differential transformation method for solving dengue transmission mathematical model
NASA Astrophysics Data System (ADS)
Ndii, Meksianis Z.; Anggriani, Nursanti; Supriatna, Asep K.
2018-03-01
The differential transformation method (DTM) is a semi-analytical numerical technique based on Taylor series that has applications in many areas, including biomathematics. The aim of this paper is to employ the DTM to solve a system of non-linear differential equations for a dengue transmission mathematical model. Analytical and numerical solutions are determined, and the results are compared with those of the Runge-Kutta method; we found good agreement between the two.
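For a single equation the DTM recurrence is compact enough to sketch. Below, a logistic equation y' = y(1 - y) stands in for the dengue system (the method applies the same transform rules equation by equation), and the truncated series is checked against a Runge-Kutta integrator, mirroring the paper's comparison.

```python
# DTM for y' = y(1 - y): (k+1) Y(k+1) = Y(k) - (Y*Y)(k), where (Y*Y) is the
# Cauchy product transforming y^2; the series is checked against Runge-Kutta.
import numpy as np
from scipy.integrate import solve_ivp

def dtm_logistic(y0, order=20):
    Y = np.zeros(order + 1)
    Y[0] = y0                                    # Y(k) = y^(k)(0) / k!
    for k in range(order):
        conv = sum(Y[m] * Y[k - m] for m in range(k + 1))
        Y[k + 1] = (Y[k] - conv) / (k + 1)
    return Y                                     # Taylor coefficients about t = 0

Y = dtm_logistic(0.1)
t = np.linspace(0.0, 1.0, 11)
y_dtm = np.polyval(Y[::-1], t)                   # evaluate the truncated series
y_rk = solve_ivp(lambda t, y: y * (1 - y), [0.0, 1.0], [0.1],
                 t_eval=t, rtol=1e-10).y[0]
print("max |DTM - RK|:", np.abs(y_dtm - y_rk).max())
```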
Satellite-enhanced dynamical downscaling for the analysis of extreme events
NASA Astrophysics Data System (ADS)
Nunes, Ana M. B.
2016-09-01
The use of regional models in the downscaling of general circulation models provides a strategy to generate more detailed climate information. In that case, boundary-forcing techniques can be useful to keep the large-scale features of the coarse-resolution global models in agreement with the inner modes of the higher-resolution regional models. Although those procedures might improve dynamics, downscaling via regional modeling still aims for better representation of physical processes. With the purpose of improving both dynamics and physical processes in regional downscaling of global reanalysis, the Regional Spectral Model—originally developed at the National Centers for Environmental Prediction—employs a newly reformulated scale-selective bias correction, together with 3-hourly assimilation of satellite-based precipitation estimates constructed from the Climate Prediction Center morphing technique. This two-scheme technique for the dynamical downscaling of global reanalysis can be applied in analyses of environmental disasters and risk assessment, with hourly outputs and a resolution of about 25 km. Here the added value of satellite-enhanced dynamical downscaling is demonstrated in simulations of the first reported hurricane in the western South Atlantic Ocean basin, through comparisons with global reanalyses and satellite products available in ocean areas.
Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I
2009-06-01
A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer function approach the model is applicable to both isotropic and anisotropic materials. The model requires several inputs including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available, thus efficient measurement techniques are presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.
ERIC Educational Resources Information Center
Pridemore, William Alex; Trahan, Adam; Chamlin, Mitchell B.
2009-01-01
There is substantial evidence of detrimental psychological sequelae following disasters, including terrorist attacks. The effect of these events on extreme responses such as suicide, however, is unclear. We tested competing hypotheses about such effects by employing autoregressive integrated moving average techniques to model the impact of…
A Model Independent S/W Framework for Search-Based Software Testing
Baik, Jongmoon
2014-01-01
In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the model type changes, all functions of a search technique must be reimplemented, even when the same search technique is applied, because the model types differ. Implementing the same algorithm over and over again requires too much time and effort. We propose a model-independent software framework for SBST, which can reduce this redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when changing the type of a model. PMID:25302314
Designs for surge immunity in critical electronic facilities
NASA Technical Reports Server (NTRS)
Roberts, Edward F., Jr.
1991-01-01
In recent years, the Federal Aviation Administration (FAA) embarked on a program replacing older tube type electronic equipment with newer solid state equipment. This replacement program dramatically increased the susceptibility of the FAA's facilities to lightning related damage. Techniques are proposed that may be employed to lessen the susceptibility of new FAA electronic facility designs to failures resulting from lightning related surges and transients as well as direct strikes. The general concept espoused is a consistent system approach employing both perimeter and internal protection. The technique presently employed to reduce electronic noise is compared with other techniques which reduce noise while lowering susceptibility to lightning related damage. It is anticipated that these techniques will be employed in the design of an Air Traffic Control Tower in a high isokeraunic area. This facility would be subjected to rigorous monitoring over a multi-year period to provide quantitative data that would, it is hoped, support the advantages of this design.
An overview of the model integration process: From pre ...
Integration of models requires linking models that may have been developed using different tools, methodologies, and assumptions. We performed a literature review with the aim of improving our understanding of the model integration process and of presenting better strategies for building integrated modeling systems. We identified five phases that characterize the integration process: pre-integration assessment, preparation of models for integration, orchestration of models during simulation, data interoperability, and testing. Commonly, there is little reuse of existing frameworks beyond the development teams and not much sharing of science components across frameworks. We believe this must change to enable researchers and assessors to form complex workflows that leverage the current environmental science available. In this paper, we characterize the model integration process and compare the integration practices of different groups. We highlight key strategies, features, standards, and practices that can be employed by developers to increase the reuse and interoperability of science software components and systems. The paper reviews the literature regarding techniques and methods employed by various modeling system developers to facilitate science software interoperability, and illustrates both the wide variation in methods and the limiting effect this variation has on inter-framework reuse and interoperability. A series of recommendations ...
NASA Technical Reports Server (NTRS)
Bleck, Rainer; Bao, Jian-Wen; Benjamin, Stanley G.; Brown, John M.; Fiorino, Michael; Henderson, Thomas B.; Lee, Jin-Luen; MacDonald, Alexander E.; Madden, Paul; Middlecoff, Jacques;
2015-01-01
A hydrostatic global weather prediction model based on an icosahedral horizontal grid and a hybrid terrain-following/isentropic vertical coordinate is described. The model is an extension to three spatial dimensions of a previously developed icosahedral shallow-water model featuring user-selectable horizontal resolution and employing indirect addressing techniques. The vertical grid is adaptive to maximize the portion of the atmosphere mapped into the isentropic coordinate subdomain. The model, best described as a stacked shallow-water model, is being tested extensively on real-time medium-range forecasts to ready it for possible inclusion in operational multimodel ensembles for medium-range to seasonal prediction.
NASA Technical Reports Server (NTRS)
Jones, Kenneth M.; Biedron, Robert T.; Whitlock, Mark
1995-01-01
A computational study was performed to determine the predictive capability of a Reynolds-averaged Navier-Stokes code (CFL3D) for two-dimensional and three-dimensional multielement high-lift systems. Three configurations were analyzed: a three-element airfoil, a wing with a full-span flap, and a wing with a partial-span flap. In order to accurately model these complex geometries, two different multizonal structured grid techniques were employed. For the airfoil and full-span wing configurations, a chimera or overset grid technique was used. The results of the airfoil analysis illustrated that although the absolute values of lift were somewhat in error, the code was able to predict reasonably well the variation with Reynolds number and flap position. The full-span flap analysis demonstrated good agreement with experimental surface pressure data over the wing and flap. Multiblock patched grids were used to model the partial-span flap wing. A modification to an existing patched-grid algorithm was required to analyze the configuration as modeled. Comparisons with experimental data were very good, indicating the applicability of the patched-grid technique to analyses of these complex geometries.
Study of modeling aspects of long period fiber grating using three-layer fiber geometry
NASA Astrophysics Data System (ADS)
Singh, Amit
2015-03-01
The author studied and demonstrated various modeling aspects of the long period fiber grating (LPFG), such as the core effective index, cladding effective index, coupling coefficient, coupled mode theory, and transmission spectrum of the LPFG, using three-layer fiber geometry. Two different techniques have been used for theoretical modeling of the long period fiber grating. The first was used by Vengsarkar et al, who described the phenomenon of long-period fiber gratings; the second was reported by Erdogan, who revealed the inaccuracies and shortcomings of the original method, thereby providing an accurate and updated alternative. The main difference between these two approaches lies in their fiber geometry: Vengsarkar et al used a two-layer fiber geometry, which is simple but employs the weakly guided approximation, whereas Erdogan used a three-layer fiber geometry, which is more complex but also the most accurate technique for theoretical study of the LPFG. The author further discussed the behavior of the transmission spectrum when altering different grating parameters, such as the grating length, ultraviolet (UV) induced index change, and grating period, to achieve the desired flexibility. The author simulated the various results in MATLAB.
Characterization of microwave discharge plasmas for surface processing
NASA Astrophysics Data System (ADS)
Nikolic, Milka
We have developed several diagnostic techniques to characterize two types of microwave (MW) discharge plasmas: a supersonic flowing argon MW discharge maintained in a cylindrical quartz cavity at frequency ƒ = 2.45 GHz and a pulse repetitive MW discharge in air at ƒ = 9.5 GHz. Low temperature MW discharges have been proven to possess attractive properties for plasma cleaning and etching of niobium surfaces of superconducting radio frequency (SRF) cavities. Plasma based surface modification technologies offer a promising alternative for etching and cleaning of SRF cavities: they are low cost, environmentally friendly, and easily controllable, and present a possible alternative to currently used acid based wet technologies such as buffered chemical polishing (BCP) or electrochemical polishing (EP). In fact, weakly ionized, non-equilibrium, low temperature gas discharges represent a powerful tool for surface processing due to the strong chemical reactivity of plasma radicals. Therefore, characterizing these discharges by applying non-perturbing, in situ measurement techniques is of vital importance. Optical emission spectroscopy has been employed to analyze the molecular structure and evaluate rotational and vibrational temperatures in these discharges. The internal plasma structure was studied by applying a tomographic numerical method based on the two-dimensional Radon formula. An automated optical measurement system has been developed for reconstruction of local plasma parameters. It was found that excited argon states are concentrated near the tube walls, confirming the assumption that the post discharge plasma is dominantly sustained by a travelling surface wave. Employing a laser induced fluorescence technique in combination with a time synchronization device allowed us to obtain time-resolved population densities of some excited atomic levels in argon. We have developed a technique for absolute measurements of electron density based on the time-resolved absolute intensity of a nitrogen spectral band belonging to the Second Positive System, a kinetic model, and the detailed particle balance of the N2 (C 3Πu) state. Measured electron density waveforms are in fair agreement with electron densities obtained using the Stark broadening technique. In addition, time dependent population densities of Ar I metastable and resonant levels were obtained by employing a kinetic model developed from analysis of the population density rates of excited Ar I p levels. Both the experimental results and the numerical models for the two types of gas discharges indicate that multispecies gas chemistry plays an important role in understanding the dynamics and characterizing the properties of these discharges.
Pedagogical Techniques Employed by the Television Show "MythBusters"
NASA Astrophysics Data System (ADS)
Zavrel, Erik
2016-11-01
"MythBusters," the long-running though recently discontinued Discovery Channel science entertainment television program, has proven itself to be far more than just a highly rated show. While its focus is on entertainment, the show employs an array of pedagogical techniques to communicate scientific concepts to its audience. These techniques include: achieving active learning, avoiding jargon, employing repetition to ensure comprehension, using captivating demonstrations, cultivating an enthusiastic disposition, and increasing intrinsic motivation to learn. In this content analysis, episodes from the show's 10-year history were examined for these techniques. "MythBusters" represents an untapped source of pedagogical techniques, which science educators may consider availing themselves of in their tireless effort to better reach their students. Physics educators in particular may look to "MythBusters" for inspiration and guidance in how to incorporate these techniques into their own teaching and help their students in the learning process.
NASA Technical Reports Server (NTRS)
Sances, Dillon J.; Gangadharan, Sathya N.; Sudermann, James E.; Marsell, Brandon
2010-01-01
Liquid sloshing within spacecraft propellant tanks causes rapid energy dissipation at resonant modes, which can result in attitude destabilization of the vehicle. Identifying resonant slosh modes currently requires experimental testing and mechanical pendulum analogs to characterize the slosh dynamics. Computational Fluid Dynamics (CFD) techniques have recently been validated as an effective tool for simulating fuel slosh within free-surface propellant tanks. Propellant tanks often incorporate an internal flexible diaphragm to separate ullage and propellant which increases modeling complexity. A coupled fluid-structure CFD model is required to capture the damping effects of a flexible diaphragm on the propellant. ANSYS multidisciplinary engineering software employs a coupled solver for analyzing two-way Fluid Structure Interaction (FSI) cases such as the diaphragm propellant tank system. Slosh models generated by ANSYS software are validated by experimental lateral slosh test results. Accurate data correlation would produce an innovative technique for modeling fuel slosh within diaphragm tanks and provide an accurate and efficient tool for identifying resonant modes and the slosh dynamic response.
NASA Astrophysics Data System (ADS)
Chen, Quansheng; Qi, Shuai; Li, Huanhuan; Han, Xiaoyan; Ouyang, Qin; Zhao, Jiewen
2014-10-01
To rapidly and efficiently detect the presence of adulterants in honey, a three-dimensional fluorescence spectroscopy (3DFS) technique was employed with the help of multivariate calibration. The 3D fluorescence spectra were compressed using characteristic extraction and principal component analysis (PCA). Then, partial least squares (PLS) and back propagation neural network (BP-ANN) algorithms were used for modeling. The model was optimized by cross validation, and its performance was evaluated according to the root mean square error of prediction (RMSEP) and the correlation coefficient (R) in the prediction set. The results showed that the BP-ANN model was superior to the PLS models, and the optimum prediction results of the mixed group (sunflower + longan + buckwheat + rape) model were as follows: RMSEP = 0.0235 and R = 0.9787 in the prediction set. The study demonstrated that the 3D fluorescence spectroscopy technique combined with multivariate calibration has high potential for rapid, nondestructive, and accurate quantitative analysis of honey adulteration.
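A hedged sketch of this chemometric pipeline (PCA compression, then PLS versus BP-ANN, scored by RMSEP and R on a prediction set) is given below using scikit-learn; the spectra and adulterant levels are synthetic stand-ins, not the honey data.

```python
# PCA compression, then PLS vs. BP-ANN scored by RMSEP and R on a prediction
# set; unfolded 3D fluorescence spectra are replaced by synthetic data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 300))                        # unfolded spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(120)  # adulterant level

Xp = PCA(n_components=10).fit_transform(X)
train, test = slice(0, 90), slice(90, None)

for name, model in [("PLS", PLSRegression(n_components=5)),
                    ("BP-ANN", MLPRegressor(hidden_layer_sizes=(8,),
                                            max_iter=5000, random_state=0))]:
    model.fit(Xp[train], y[train])
    pred = np.ravel(model.predict(Xp[test]))
    rmsep = np.sqrt(np.mean((pred - y[test])**2))
    print(f"{name}: RMSEP = {rmsep:.4f}, R = {pearsonr(pred, y[test])[0]:.4f}")
```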
Visibility Equalizer Cutaway Visualization of Mesoscopic Biological Models.
Le Muzic, M; Mindek, P; Sorger, J; Autin, L; Goodsell, D; Viola, I
2016-06-01
In scientific illustration and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to existing cutaway algorithms, we take advantage of the specific nature of biological models, which consist of thousands of instances of a comparably small number of distinct types. Our method is a two-stage process. In the first stage, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second stage, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach to visibility specification was valuable and effective for both scientific and educational purposes.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
Confusion-limited galaxy fields. I - Simulated optical and near-infrared images
NASA Technical Reports Server (NTRS)
Chokshi, Arati; Wright, Edward L.
1988-01-01
Techniques for simulating images of galaxy fields are presented that extend to high redshifts and a surface density of galaxies high enough to produce overlapping images. The observed properties of galaxies and galaxy-ensembles in the 'local' universe are extrapolated to high redshifts using reasonable scenarios for the evolution of galaxies and their spatial distribution. This theoretical framework is then employed with Monte Carlo techniques to create fairly realistic two-dimensional distributions of galaxies plus optical and near-infrared sky images in a variety of model universes, using the appropriate density, luminosity, and angular size versus redshift relations.
An improved water-filled impedance tube.
Wilson, Preston S; Roy, Ronald A; Carey, William M
2003-06-01
A water-filled impedance tube capable of improved measurement accuracy and precision is reported. The measurement instrument employs a variation of the standardized two-sensor transfer function technique. Performance improvements were achieved through minimization of elastic waveguide effects and through the use of sound-hard wall-mounted acoustic pressure sensors. Acoustic propagation inside the water-filled impedance tube was found to be well described by a plane wave model, which is a necessary condition for the technique. Measurements of the impedance of a pressure-release terminated transmission line, and the reflection coefficient from a water/air interface, were used to verify the system.
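The standardized two-sensor transfer-function method that this instrument refines can be sketched numerically: the complex reflection coefficient of the termination follows in closed form from the transfer function H12 between the two wall-mounted sensors. The geometry and sound speed below are illustrative, and the pressure-release termination (R = -1) serves as the self-check, mirroring the verification above.

```python
# Two-sensor transfer-function method: recover the reflection coefficient of
# the termination from H12 = p(x2)/p(x1), sensors at x1 > x2 from the sample.
import numpy as np

c = 1480.0                 # nominal sound speed in water, m/s
x1, x2 = 0.30, 0.20        # sensor distances from the sample face, m
s = x1 - x2                # sensor spacing

def reflection_from_H12(f, H12):
    k = 2 * np.pi * f / c
    return ((H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12)
            * np.exp(2j * k * x1))

# Forward-model a pressure-release termination (R = -1) and invert:
f = 2000.0
k = 2 * np.pi * f / c
R_true = -1.0
p = lambda x: np.exp(1j * k * x) + R_true * np.exp(-1j * k * x)  # standing wave
H12 = p(x2) / p(x1)
print(reflection_from_H12(f, H12))   # ~ (-1 + 0j)
```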
Zhao, B.; Wang, S. X.; Xing, J.; ...
2015-01-30
An innovative extended response surface modeling technique (ERSM v1.0) is developed to characterize the nonlinear response of fine particles (PM2.5) to large and simultaneous changes of multiple precursor emissions from multiple regions and sectors. The ERSM technique is developed based on the conventional response surface modeling (RSM) technique; it first quantifies the relationship between PM2.5 concentrations and the emissions of gaseous precursors from each single region using the conventional RSM technique, and then assesses the effects of inter-regional transport of PM2.5 and its gaseous precursors on PM2.5 concentrations in the target region. We apply this novel technique with a widely used regional chemical transport model (CTM) over the Yangtze River delta (YRD) region of China, and evaluate the response of PM2.5 and its inorganic components to the emissions of 36 pollutant–region–sector combinations. The predicted PM2.5 concentrations agree well with independent CTM simulations; the correlation coefficients are larger than 0.98 and 0.99, and the mean normalized errors (MNEs) are less than 1 and 2% for January and August, respectively. It is also demonstrated that the ERSM technique could reproduce fairly well the response of PM2.5 to continuous changes of precursor emission levels between zero and 150%. Employing this new technique, we identify the major sources contributing to PM2.5 and its inorganic components in the YRD region. The nonlinearity in the response of PM2.5 to emission changes is characterized and the underlying chemical processes are illustrated.
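The conventional RSM step that ERSM builds on amounts to fitting a smooth surrogate of the CTM response to emission scaling factors. The sketch below substitutes a synthetic nonlinear function for the CTM (in practice each training point is a full CTM run) and scores the surrogate with a correlation coefficient and mean normalized error, the metrics quoted above.

```python
# Conventional RSM step: fit a polynomial response surface mapping emission
# scaling factors to PM2.5. A synthetic nonlinear function stands in for the
# CTM; in practice each training point is a full CTM run.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def ctm_stand_in(E):
    nox, so2, nh3 = E.T                          # scaling factors per precursor
    return 20 + 8 * so2 * nh3 + 5 * np.minimum(nox, nh3) - 2 * nox**2

E_train = rng.uniform(0.0, 1.5, size=(60, 3))
rsm = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
rsm.fit(E_train, ctm_stand_in(E_train))

E_test = rng.uniform(0.0, 1.5, size=(500, 3))
pred, truth = rsm.predict(E_test), ctm_stand_in(E_test)
print("correlation:", np.corrcoef(pred, truth)[0, 1])
print("mean normalized error:", np.mean(np.abs(pred - truth) / truth))
```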
Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models
NASA Astrophysics Data System (ADS)
Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.
2012-04-01
The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise it separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. Moreover, the discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to generate the model automatically using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
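A minimal sketch, assuming the legacy FEniCS/dolfin-adjoint toolchain that grew out of this line of work: the forward model is written once in UFL, and the gradient of a functional arrives via an automatically derived adjoint, with no hand-written adjoint code. The Poisson-type problem and source control are stand-ins for the shallow-water test cases mentioned above.

```python
# Forward model declared in UFL; the adjoint solve behind compute_gradient is
# derived and scheduled automatically (this is what libadjoint automated).
from fenics import *
from fenics_adjoint import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)

m = interpolate(Constant(1.0), V)     # control: a distributed source field
u = Function(V)
v = TestFunction(V)

F = inner(grad(u), grad(v)) * dx - m * v * dx   # Poisson stand-in "flow model"
solve(F == 0, u, DirichletBC(V, 0.0, "on_boundary"))

J = assemble(0.5 * u * u * dx)        # scalar functional of the solution
dJdm = compute_gradient(J, Control(m))   # adjoint model runs under the hood
```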
Michell, Karen Elizabeth; Rispel, Laetitia C
2017-03-01
This article explores stakeholders' perceptions of the quality of occupational health service (OHS) delivery in South Africa. Using a purposive sampling technique, 11 focus group discussions (FGDs) were conducted in three provinces. Focus group participants (n = 69) were recruited through professional organizations of occupational physicians and occupational health nurses as well as employer representatives of major industries in South Africa. Transcriptions of FGDs were analyzed using thematic content analysis. South Africa has diverse models of OHS delivery with varying quality. Focus group participants criticized the outsourced model of service delivery and the excessive focus on physical examinations to achieve legal compliance. These problems are exacerbated by a perceived lack of employer emphasis on occupational health, insufficient human and financial resources, and a lack of specific quality of care standards for occupational health. Improvement in the quality of OHS delivery is essential to realize South Africa's quest for universal health coverage.
Buffet test in the National Transonic Facility
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Hergert, Dennis W.; Butler, Thomas W.; Herring, Fred M.
1992-01-01
A buffet test of a commercial transport model was accomplished in the National Transonic Facility at the NASA Langley Research Center. This aeroelastic test was unprecedented for this wind tunnel and posed a high risk to the facility. This paper presents the test results from a structural dynamics and aeroelastic response point of view and describes the activities required for the safety analysis and risk assessment. The test was conducted in the same manner as a flutter test and employed onboard dynamic instrumentation, real-time dynamic data monitoring, and automatic and manual tunnel interlock systems for protecting the model. The procedures and test techniques employed for this test are expected to serve as the basis for future aeroelastic testing in the National Transonic Facility. This test program was a cooperative effort between the Boeing Commercial Airplane Company and the NASA Langley Research Center.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sehgal, Ray M.; Maroudas, Dimitrios, E-mail: maroudas@ecs.umass.edu, E-mail: ford@ecs.umass.edu; Ford, David M., E-mail: maroudas@ecs.umass.edu, E-mail: ford@ecs.umass.edu
We have developed a coarse-grained description of the phase behavior of the isolated 38-atom Lennard-Jones cluster (LJ38). The model captures both the solid-solid polymorphic transitions at low temperatures and the complex cluster breakup and melting transitions at higher temperatures. For this coarse model development, we employ the manifold learning technique of diffusion mapping. The outcome of the diffusion mapping analysis over a broad temperature range indicates that two order parameters are sufficient to describe the cluster's phase behavior; we have chosen two such appropriate order parameters that are metrics of condensation and overall crystallinity. In this well-justified coarse-variable space, we calculate the cluster's free energy landscape (FEL) as a function of temperature, employing Monte Carlo umbrella sampling. These FELs are used to quantify the phase behavior and onsets of phase transitions of the LJ38 cluster.
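The diffusion-mapping step can be sketched compactly, assuming the trajectory snapshots have already been reduced to a pairwise-distance matrix; the leading nontrivial eigenvectors then supply candidate coarse variables to compare against the condensation and crystallinity metrics chosen in the study. The kernel scale and toy data below are illustrative.

```python
# Diffusion map from a pairwise-distance matrix: Gaussian kernel, density
# normalization (alpha = 1), then the spectrum of the Markov operator.
import numpy as np

def diffusion_map(D, eps, n_coords=2):
    K = np.exp(-D**2 / eps)                      # Gaussian kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                       # remove sampling-density bias
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))              # symmetric conjugate of Markov matrix
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(vals)[::-1]
    vals, vecs = vals[idx], vecs[:, idx]
    psi = vecs / np.sqrt(d)[:, None]             # Markov right eigenvectors
    return psi[:, 1:n_coords + 1] * vals[1:n_coords + 1]   # skip trivial psi_0

# Toy usage: 200 random "configurations"; real input would be structural
# distances between snapshots of the LJ38 trajectory.
rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 3))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(diffusion_map(D, eps=2.0).shape)           # (200, 2) coarse coordinates
```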
Computer aided radiation analysis for manned spacecraft
NASA Technical Reports Server (NTRS)
Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.
1991-01-01
In order to assist in the design of radiation shielding an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design which saves time and ultimately spacecraft weight.
A distributed computing model for telemetry data processing
NASA Astrophysics Data System (ADS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
Carney, Timothy Jay; Morgan, Geoffrey P.; Jones, Josette; McDaniel, Anna M.; Weaver, Michael; Weiner, Bryan; Haggstrom, David A.
2014-01-01
Our conceptual model frames our goal of investigating the impact of clinical decision support (CDS) utilization on cancer screening improvement strategies in the community health care (CHC) setting. We employed a dual modeling technique, using both statistical and computational modeling to evaluate impact. Our statistical model used Spearman's rho to evaluate the strength of the relationship between our proximal outcome measures (CDS utilization) and our distal outcome measure (provider self-reported cancer screening improvement). Our computational model relied on network evolution theory and used a tool called Construct-TM to model the use of CDS, measured by the rate of organizational learning. We employed previously collected survey data from community health centers participating in the Cancer Health Disparities Collaborative (HDCC). Our intent is to demonstrate the added value gained by using a computational modeling tool in conjunction with a statistical analysis when evaluating the impact of a health information technology, in the form of CDS, on health care quality process outcomes such as facility-level screening improvement. Significant simulated disparities in organizational learning over time were observed between community health centers beginning the simulation with high and low clinical decision support capability. PMID:24953241
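The statistical half of the dual-model design reduces to a rank correlation; a minimal sketch with synthetic stand-in data (invented center counts and effect sizes) follows.

```python
# Rank correlation between a proximal CDS-utilization score and a distal
# screening-improvement score, on synthetic stand-in data for ~40 centers.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
cds_utilization = rng.uniform(0, 1, 40)
screening_gain = 0.5 * cds_utilization + 0.2 * rng.uniform(0, 1, 40)
rho, p = spearmanr(cds_utilization, screening_gain)
print(f"Spearman rho = {rho:.3f}, p = {p:.2g}")
```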
Addressing Obesity in the Workplace: The Role of Employers
Heinen, Luann; Darling, Helen
2009-01-01
Context: Employers have pursued many strategies over the years to control health care costs and improve care. Disappointed by efforts to manage costs through the use of insurance-related techniques (e.g., prior authorization, restricted provider networks), employers have also begun to try to manage health by addressing their employees' key lifestyle risks. Reducing obesity (along with tobacco use and inactivity) is a priority for employers seeking to lower the incidence and severity of chronic illness and the associated demand for health services. Methods: This article describes the employer's perspective on the cost impact of obesity, discusses current practices in employer-sponsored wellness and weight management programs, provides examples from U.S. companies illustrating key points of employers' leverage and opportunities, and suggests policy directions to support the expansion of employers' initiatives, especially for smaller employers. Findings: Researchers and policymakers often overlook the extensive efforts and considerable impact of employer-sponsored wellness and health improvement programs. Greater focus on opportunities in the workplace is merited, however, for the evidence base supporting the economic and health impacts of employer-sponsored health promotion and wellness is growing, although not as quickly as the experience base of large employers. Conclusions: Public and private employers can serve their own economic interests by addressing obesity. Health care organizations, particularly hospitals, as well as public employers can be important role models. Policy development is needed to accelerate change, especially for smaller employers (those with fewer than 500 employees), which represent the majority of U.S. employers and are far less likely to offer health promotion programs. PMID:19298417
Evaluation of Parallel-Element, Variable-Impedance, Broadband Acoustic Liner Concepts
NASA Technical Reports Server (NTRS)
Jones, Michael G.; Howerton, Brian M.; Ayle, Earl
2012-01-01
Recent trends in aircraft engine design have highlighted the need for acoustic liners that provide broadband sound absorption with reduced liner thickness. Three such liner concepts are evaluated using the NASA normal incidence tube. Two concepts employ additive manufacturing techniques to fabricate liners with variable chamber depths. The first relies on scrubbing losses within narrow chambers to provide acoustic resistance necessary for sound absorption. The second employs wide chambers that provide minimal resistance, and relies on a perforated sheet to provide acoustic resistance. The variable-depth chambers used in both concepts result in reactance spectra near zero. The third liner concept employs mesh-caps (resistive sheets) embedded at variable depths within adjacent honeycomb chambers to achieve a desired impedance spectrum. Each of these liner concepts is suitable for use as a broadband sound absorber design, and a transmission line model is presented that provides good comparison with their respective acoustic impedance spectra. This model can therefore be used to design acoustic liners to accurately achieve selected impedance spectra. Finally, the effects of increasing the perforated facesheet thickness are demonstrated, and the validity of prediction models based on lumped element and wave propagation approaches is investigated. The lumped element model compares favorably with measured results for liners with thin facesheets, but the wave propagation model provides good comparisons for a wide range of facesheet thicknesses.
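A hedged sketch of a transmission-line-style impedance model for a parallel-element, variable-depth liner is given below: each chamber contributes a cavity reactance, the facesheet adds resistance and mass reactance, and the elements combine by area-weighted admittance. All parameter values are illustrative, not the tested NASA designs.

```python
# Parallel-element liner impedance: each chamber depth d_i contributes a
# cavity reactance -cot(k*d_i); the facesheet adds resistance and mass
# reactance; elements combine by area-weighted admittance.
import numpy as np

c, rho = 343.0, 1.2                              # air sound speed, density
depths = np.array([0.02, 0.035, 0.05, 0.08])     # chamber depths, m
frac = np.full(depths.size, 1.0 / depths.size)   # equal area fractions
R_face, m_face = 0.8, 1.5e-4                     # facesheet resistance, mass/area

f = np.linspace(500.0, 3000.0, 200)
k = 2 * np.pi * f / c
X_mass = 2 * np.pi * f * m_face / (rho * c)      # normalized mass reactance

Z_elem = R_face + 1j * (X_mass[:, None] - 1.0 / np.tan(k[:, None] * depths))
Z = 1.0 / (frac / Z_elem).sum(axis=1)            # parallel combination
alpha = 1 - np.abs((Z - 1) / (Z + 1))**2         # normal-incidence absorption
print("peak absorption %.2f at %.0f Hz" % (alpha.max(), f[alpha.argmax()]))
```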
NASA Technical Reports Server (NTRS)
Wendel, Thomas R.; Boland, Joseph R.; Hahne, David E.
1991-01-01
Flight-control laws are developed for a wind-tunnel aircraft model flying at a high angle of attack by using a synthesis technique called direct eigenstructure assignment. The method employs flight guidelines and control-power constraints to develop the control laws, and gain schedules and nonlinear feedback compensation provide a framework for considering the nonlinear nature of the attack angle. Linear and nonlinear evaluations show that the control laws are effective, a conclusion that is further confirmed by a scale model used for free-flight testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamanini, Nicola; Wright, Matthew, E-mail: nicola.tamanini@cea.fr, E-mail: matthew.wright.13@ucl.ac.uk
We investigate the cosmological dynamics of the recently proposed extended chameleon models at both background and linear perturbation levels. Dynamical systems techniques are employed to fully characterize the evolution of the universe at the largest distances, while structure formation is analysed at sub-horizon scales within the quasi-static approximation. The late time dynamical transition from dark matter to dark energy domination can be well described by almost all extended chameleon models considered, with no deviations from ΛCDM results at both background and perturbation levels. The results obtained in this work confirm the cosmological viability of extended chameleons as alternative dark energy models.
Dual-Pump CARS Measurements in the University of Virginia's Dual-Mode Scramjet: Configuration "A"
NASA Technical Reports Server (NTRS)
Cutler, Andrew D.; Magnotti, Gaetano; Gallo, Emanuela; Danehy, Paul M.; Rockwell, Robert; Goyne, Christopher P.; McDaniel, James
2012-01-01
In this paper we describe efforts to obtain canonical data sets to assist computational modelers in their development of models for the prediction of mixing and combustion in scramjet combustors operating in the ramjet-scramjet transition regime. The CARS technique is employed to acquire temporally and spatially resolved measurements of temperature and species mole-fraction at four planes, one upstream of an H2 fuel injector and three downstream. The technique is described and results are presented for cases with and without chemical reaction. The vibrational energy mode in the heated airstream of the combustor was observed to be frozen at near facility heater conditions and significant nonuniformities in temperature were observed, attributed to nonuniformities of temperature exiting the heater. The measurements downstream of fuel injection show development of mixing and combustion, and are already proving useful to the modelers.
A simple Lagrangian forecast system with aviation forecast potential
NASA Technical Reports Server (NTRS)
Petersen, R. A.; Homan, J. H.
1983-01-01
A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Dynamic modeling of GMA fillet welding using cross-correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellinga, M.; Huissoon, J.; Kerr, H.
1996-12-31
The feasibility of employing the cross-correlation system identification technique as a dynamic modeling method for the GMAW process was examined. This approach has the advantages of modeling speed, the ability to operate in low signal to noise environments, the ease of digital implementation, and the lack of model order assumption, making it ideal in a welding application. The width of the weld pool was the parameter investigated as a function of torch travel speed. Both on-line and off-line width measurements were used to identify the impulse response. Experimental results are presented and comparisons made with both step and ramp response.
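Cross-correlation identification itself is simple to sketch: with a white-noise input, the input-output cross-correlation is proportional to the impulse response, with no model-order assumption and reasonable noise tolerance. The first-order process below is a synthetic stand-in for the torch-speed-to-pool-width dynamics studied in the paper.

```python
# Impulse-response identification by cross-correlation with a white input:
# R_uy(k) = h(k) * var(u), so h is estimated from correlation sums.
import numpy as np

rng = np.random.default_rng(0)
n, dt, tau = 20000, 0.01, 0.5
h_true = np.exp(-np.arange(0.0, 3.0, dt) / tau) * dt / tau   # first-order lag

u = rng.standard_normal(n)                                   # white-noise input
y = np.convolve(u, h_true)[:n] + 0.1 * rng.standard_normal(n)  # noisy output

lags = h_true.size
h_est = np.array([u[:n - k] @ y[k:] for k in range(lags)]) / (n * u.var())
print("relative error:",
      np.linalg.norm(h_est - h_true) / np.linalg.norm(h_true))
```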
High-Fidelity Micromechanics Model Developed for the Response of Multiphase Materials
NASA Technical Reports Server (NTRS)
Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.
2002-01-01
A new high-fidelity micromechanics model has been developed under funding from the NASA Glenn Research Center for predicting the response of multiphase materials with arbitrary periodic microstructures. The model's analytical framework is based on the homogenization technique, but the method of solution for the local displacement and stress fields borrows concepts previously employed in constructing the higher-order theory for functionally graded materials. The resulting closed-form macroscopic and microscopic constitutive equations, valid for both uniaxial and multiaxial loading of periodic materials with elastic and inelastic constituent phases, can be incorporated into a structural analysis computer code. Consequently, this model now provides an accurate alternative method for analyzing such materials.
Rapid prototyping and AI programming environments applied to payload modeling
NASA Technical Reports Server (NTRS)
Carnahan, Richard S., Jr.; Mendler, Andrew P.
1987-01-01
This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in both space flight manned and unmanned payload simulation and training. Significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations and the relative inflexibility of the resulting model to accepting future modification. Results of this effort have suggested that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.
NASA Astrophysics Data System (ADS)
Sabanskis, A.; Virbulis, J.
2018-05-01
Mathematical modelling is employed to numerically analyse the dynamics of Czochralski (CZ) silicon single crystal growth. The model is axisymmetric; its thermal part describes heat transfer by conduction and thermal radiation and allows prediction of the time-dependent shape of the crystal-melt interface. Besides the thermal field, the point defect dynamics is modelled using the finite element method. The considered process consists of cone-growth and cylindrical phases, including a short period of reduced crystal pull rate and a power jump to avoid large diameter changes. The influence of thermal stresses on the point defects is also investigated.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
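As a reference point for the target problem class, here is a generic second order cone program solved with the cvxpy modeling library; the data are made up, and an off-the-shelf convex solver is used rather than the paper's neural network dynamics.

```python
import numpy as np
import cvxpy as cp

# Generic SOCP: minimize c^T x subject to x >= 0, sum(x) >= 1 and
# ||A x + b||_2 <= d^T x + e. All data are illustrative.
np.random.seed(1)
n = 4
c = np.array([1.0, 2.0, 0.5, 1.5])
A = np.random.randn(3, n)
b = np.random.randn(3)
d = np.array([0.0, 0.0, 0.0, 1.0])
e = 5.0

x = cp.Variable(n, nonneg=True)
constraints = [cp.sum(x) >= 1,
               cp.SOC(d @ x + e, A @ x + b)]   # ||Ax+b|| <= d^T x + e
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(prob.status, np.round(x.value, 4))
```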
NASA Astrophysics Data System (ADS)
Laramie, Sydney M.; Milshtein, Jarrod D.; Breault, Tanya M.; Brushett, Fikile R.; Thompson, Levi T.
2016-09-01
Non-aqueous redox flow batteries (NAqRFBs) have recently received considerable attention as promising high energy density, low cost grid-level energy storage technologies. Despite these attractive features, NAqRFBs are still at an early stage of development and innovative design techniques are necessary to improve performance and decrease costs. In this work, we investigate multi-electron transfer, common ion exchange NAqRFBs. Common ion systems decrease the supporting electrolyte requirement, which subsequently improves active material solubility and decreases electrolyte cost. Voltammetric and electrolytic techniques are used to study the electrochemical performance and chemical compatibility of model redox active materials, iron(II) tris(2,2′-bipyridine) tetrafluoroborate (Fe(bpy)3(BF4)2) and ferrocenylmethyl dimethyl ethyl ammonium tetrafluoroborate (Fc1N112-BF4). These results help disentangle complex cycling behavior observed in flow cell experiments. Further, a simple techno-economic model demonstrates the cost benefits of employing common ion exchange NAqRFBs, afforded by decreasing the salt and solvent contributions to total chemical cost. This study highlights two new concepts, common ion exchange and multi-electron transfer, for NAqRFBs through a demonstration flow cell employing model active species. In addition, the compatibility analysis developed for asymmetric chemistries can apply to other promising species, including organics, metal coordination complexes (MCCs) and mixed MCC/organic systems, enabling the design of low cost NAqRFBs.
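The cost argument can be made concrete with a back-of-the-envelope calculation in the spirit of (but not reproducing) the paper's techno-economic model; every number below is a placeholder.

```python
def chemical_cost_per_kwh(c_active, c_salt, c_solvent,
                          m_active, m_salt, m_solvent, energy_kwh):
    """Chemical cost ($/kWh) of one electrolyte batch; c_* are unit
    costs ($/kg), m_* masses (kg). All numbers used below are
    placeholders, not figures from the paper's model."""
    total = c_active * m_active + c_salt * m_salt + c_solvent * m_solvent
    return total / energy_kwh

# Common ion exchange lets the active species supply part of the
# supporting electrolyte, cutting the salt mass (here, by half).
base = chemical_cost_per_kwh(50, 20, 5, 10, 8, 40, 25)
common_ion = chemical_cost_per_kwh(50, 20, 5, 10, 4, 40, 25)
print(f"baseline {base:.2f} $/kWh -> common ion {common_ion:.2f} $/kWh")
```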
A controls engineering approach for analyzing airplane input-output characteristics
NASA Technical Reports Server (NTRS)
Arbuckle, P. Douglas
1991-01-01
An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
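A minimal numerical version of the modal analysis step might look as follows, using an invented 4-state model rather than the paper's airplane; the products of left eigenvectors with B and of C with right eigenvectors give rough measures of input-to-mode and mode-to-output coupling.

```python
import numpy as np
from scipy import linalg

# Invented 4-state perturbation model x' = Ax + Bu, y = Cx,
# assumed already transformed into scaled coordinates.
A = np.array([[-0.7,  1.0,  0.00,  0.00],
              [-5.0, -1.2,  0.90,  0.00],
              [ 0.0,  0.0, -0.02,  0.06],
              [ 0.0, -1.0, -9.80, -0.30]])
B = np.array([[ 0.0,  0.1],
              [-8.0,  0.5],
              [ 0.0,  0.0],
              [ 0.0, -2.0]])
C = np.eye(4)

w, vl, vr = linalg.eig(A, left=True, right=True)

# Rough modal measures: how strongly each input drives each mode
# (left eigenvectors times B) and how each mode appears in the
# outputs (C times right eigenvectors).
mode_input = np.abs(vl.conj().T @ B)
mode_output = np.abs(C @ vr)

for i, lam in enumerate(w):
    print(f"mode {lam: .3f}: input coupling {np.round(mode_input[i], 3)}")
```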
A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems
NASA Astrophysics Data System (ADS)
Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron
2017-12-01
This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is selected by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effects of the channel and integration period on the TOA estimation are evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that this ELM-based technique has lower TOA estimation error than the other approaches and provides robust performance with the IEEE 802.15.3c channel models.
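The basic energy-detector TOA logic can be sketched as follows; a simple normalized min-max threshold stands in for the paper's ELM-learned threshold, and the waveform is synthetic.

```python
import numpy as np

def toa_energy_detector(r, fs, t_int, alpha=0.4):
    """Crude TOA from integrated energy blocks. r: received samples,
    fs: sample rate (Hz), t_int: integration period (s). A normalized
    min-max threshold stands in for the paper's ELM-learned one."""
    n = max(1, int(t_int * fs))
    e = np.array([np.sum(r[i*n:(i+1)*n] ** 2) for i in range(len(r) // n)])
    thr = e.min() + alpha * (e.max() - e.min())
    return np.argmax(e > thr) * t_int   # first block above threshold

rng = np.random.default_rng(3)
fs, t_int = 2e9, 4e-9                      # 2 GS/s, 4 ns blocks
r = rng.normal(0.0, 0.1, 4000)             # noise-only baseline
r[1500:1600] += rng.normal(0.0, 1.0, 100)  # pulse arriving at 750 ns
print(f"TOA estimate: {toa_energy_detector(r, fs, t_int) * 1e9:.0f} ns")
```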
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
Until today a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations: first, to estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process; second, to explore the influence of landscape discretization and parameterization from multiple datasets and user decisions; and third, to employ several numerical solvers for the integration of the governing ordinary differential equations and study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
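The explicit-versus-implicit trade-off is easy to reproduce on a toy problem. The sketch below integrates a made-up nonlinear storage-discharge equation (not one of the study's process formulations) with one explicit, one switching, and one implicit SciPy solver, comparing right-hand-side evaluation counts.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy nonlinear storage-discharge equation dS/dt = P(t) - k*S**a,
# standing in for one process hypothesis of a catchment model.
k, a = 0.05, 1.5
precip = lambda t: 2.0 * (1.0 + np.sin(2.0 * np.pi * t / 10.0))
rhs = lambda t, S: precip(t) - k * S ** a

for method in ("RK45", "LSODA", "Radau"):  # explicit / switching / implicit
    sol = solve_ivp(rhs, (0.0, 200.0), [10.0], method=method, rtol=1e-6)
    print(f"{method:6s} rhs evaluations={sol.nfev:5d} "
          f"final storage={sol.y[0, -1]:.4f}")
```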
DNA-binding study of anticancer drug cytarabine by spectroscopic and molecular docking techniques.
Shahabadi, Nahid; Falsafi, Monireh; Maghsudi, Maryam
2017-01-02
The interaction of the anticancer drug cytarabine with calf thymus DNA (CT-DNA) was investigated in vitro under simulated physiological conditions by multispectroscopic techniques and a molecular modeling study. Fluorescence spectroscopy and UV absorption spectroscopy indicated that the drug interacts with CT-DNA in a groove-binding mode; the UV-vis binding constant and the number of binding sites were 4.0 ± 0.2 × 10⁴ L mol⁻¹ and 1.39, respectively. The fluorimetric studies showed that the reaction between the drug and CT-DNA is exothermic. Circular dichroism spectroscopy was employed to measure the conformational change of DNA in the presence of cytarabine. Furthermore, the drug induces detectable changes in DNA viscosity. The molecular modeling results illustrated that cytarabine binds strongly to the groove of DNA with a relative binding energy of the docked structure of −20.61 kJ mol⁻¹. This combination of multiple spectroscopic techniques and molecular modeling methods can be widely used in the investigation of the interaction of small-molecule pollutants and drugs with biomacromolecules, clarifying the molecular mechanism of toxicity or side effects in vivo.
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although the number of measurements required is larger than for the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Instrumentation and telemetry systems for free-flight drop model testing
NASA Technical Reports Server (NTRS)
Hyde, Charles R.; Massie, Jeffrey J.
1993-01-01
This paper presents instrumentation and telemetry system techniques used in free-flight research drop model testing at the NASA Langley Research Center. The free-flight drop model test technique is used to conduct flight dynamics research of high performance aircraft using dynamically scaled models. The free-flight drop model flight testing supplements research using computer analysis and wind tunnel testing. The drop models are scaled to approximately 20 percent of the size of the actual aircraft. This paper presents an introduction to the Free-Flight Drop Model Program which is followed by a description of the current instrumentation and telemetry systems used at the NASA Langley Research Center, Plum Tree Test Site. The paper describes three telemetry downlinks used to acquire the data, video, and radar tracking information from the model. Also described are two telemetry uplinks, one used to fly the model employing a ground-based flight control computer and a second to activate commands for visual tracking and parachute recovery of the model. The paper concludes with a discussion of free-flight drop model instrumentation and telemetry system development currently in progress for future drop model projects at the NASA Langley Research Center.
An Educational Technology Tool That Developed in the Natural Flow of Life among Students: WhatsApp
ERIC Educational Resources Information Center
Cetinkaya, Levent
2017-01-01
This study was carried out to identify the benefits and drawbacks of using the mobile social network application WhatsApp in the education of secondary education students. In this research, the survey model was used; an open-ended question form was administered to 145 students and a semi-structured interview technique was employed with 6 students, and answer to the…
Video methods in the quantification of children's exposures.
Ferguson, Alesia C; Canales, Robert A; Beamer, Paloma; Auyeung, Willa; Key, Maya; Munninghoff, Amy; Lee, Kevin Tse-Wing; Robertson, Alexander; Leckie, James O
2006-05-01
In 1994, Stanford University's Exposure Research Group (ERG) conducted its first pilot study to collect micro-level activity time series (MLATS) data for young children. The pilot study involved videotaping four children of farm workers in the Salinas Valley of California and converting their videotaped activities to valuable text files of contact behavior using video-translation techniques. These MLATS are especially useful for describing intermittent dermal (i.e., second-by-second account of surfaces and objects contacted) and non-dietary ingestion (second-by-second account of objects or hands placed in the mouth) contact behavior. Second-by-second records of children contact behavior are amenable to quantitative and statistical analysis and allow for more accurate model estimates of human exposure and dose to environmental contaminants. Activity patterns data for modeling inhalation exposure (i.e., accounts of microenvironments visited) can also be extracted from the MLATS data. Since the pilot study, ERG has collected an immense MLATS data set for 92 children using more developed and refined videotaping and video-translation methodologies. This paper describes all aspects required for the collection of MLATS including: subject recruitment techniques, videotaping and video-translation processes, and potential data analysis. This paper also describes the quality assurance steps employed for these new MLATS projects, including: training, data management, and the application of interobserver and intraobserver agreement during video translation. The discussion of these issues and ERG's experiences in dealing with them can assist other groups in the conduct of research that employs these more quantitative techniques.
Antimisting kerosene atomization and flammability
NASA Technical Reports Server (NTRS)
Fleeter, R.; Petersen, R. A.; Toaz, R. D.; Jakub, A.; Sarohia, V.
1982-01-01
Various parameters found to affect the flammability of antimisting kerosene (Jet A + polymer additive) are investigated. Digital image processing was integrated into a technique for measurement of fuel spray characteristics. This technique was developed to avoid many of the error sources inherent to other spray assessment techniques and was applied to the study of engine fuel nozzle atomization performance with Jet A and antimisting fuel. Aircraft accident fuel spill and ignition dynamics were modeled in a steady state simulator allowing flammability to be measured as a function of airspeed, fuel flow rate, fuel jet Reynolds number and polymer concentration. The digital imaging technique was employed to measure spray characteristics in this simulation and these results were related to flammability test results. Scaling relationships were investigated through correlation of experimental results with characteristic dimensions spanning more than two orders of magnitude.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and increase the network size while keeping the network execution speed close to real time and maintaining high precision. An implementation of a two mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on neural control of cognitive robots and systems as well.
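For reference, the CORDIC idea the authors rely on computes trigonometric functions with only shifts and adds; the sketch below is a generic floating-point illustration of the algorithm, not the authors' fixed-point FPGA implementation.

```python
import math

def cordic_sin_cos(theta, n_iter=24):
    """(sin, cos) via CORDIC rotations -- shifts and adds only, which
    is why it suits FPGAs. Python floats stand in for the fixed-point
    arithmetic of real hardware; valid for |theta| < ~1.74 rad."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0                      # aggregate gain of the rotation cascade
    for i in range(n_iter):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta      # start on the x-axis, pre-scaled by K
    for i in range(n_iter):
        d = 1.0 if z >= 0.0 else -1.0   # rotate toward residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x                  # (sin(theta), cos(theta))

s, c = cordic_sin_cos(0.6)
print(s - math.sin(0.6), c - math.cos(0.6))   # errors around 1e-8
```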
Zhang, Ziheng; Martin, Jonathan; Wu, Jinfeng; Wang, Haijiang; Promislow, Keith; Balcom, Bruce J
2008-08-01
Water management is critical to optimize the operation of polymer electrolyte membrane fuel cells. At present, numerical models are employed to guide water management in such fuel cells. Accurate measurements of water content variation in polymer electrolyte membrane fuel cells are required to validate these models and to optimize fuel cell behavior. We report a direct water content measurement across the Nafion membrane in an operational polymer electrolyte membrane fuel cell, employing double half k-space spin echo single point imaging techniques. The MRI measurements with T2 mapping were undertaken with a parallel plate resonator to avoid the effects of RF screening. The parallel plate resonator employs the electrodes inherent to the fuel cell to create a resonant circuit at RF frequencies for MR excitation and detection, while still operating as a conventional fuel cell at DC. Three stages of fuel cell operation were investigated: activation, operation and dehydration. Each profile was acquired in 6 min, with 6 μm nominal resolution and an SNR better than 15.
NASA Astrophysics Data System (ADS)
Afifi, Ahmed; Nakaguchi, Toshiya; Tsumura, Norimichi
2010-03-01
In many medical applications, the automatic segmentation of deformable organs from medical images is indispensable and its accuracy is of special interest. However, the automatic segmentation of these organs is a challenging task owing to their complex shapes. Moreover, medical images usually contain noise, clutter, or occlusion, so considering the image information alone often leads to poor segmentation. In this paper, we propose a fully automated technique for the segmentation of deformable organs from medical images. In this technique, the segmentation is performed by fitting a nonlinear shape model to pre-segmented images. Kernel principal component analysis (KPCA) is utilized to capture the complex organ deformations and to construct the nonlinear shape model. The pre-segmentation is carried out by labeling each pixel according to its high-level texture features extracted using the overcomplete wavelet packet decomposition. Furthermore, to guarantee an accurate fit between the nonlinear model and the pre-segmented images, the particle swarm optimization (PSO) algorithm is employed to adapt the model parameters for novel images. We demonstrate the competence of the proposed technique by applying it to liver segmentation from computed tomography (CT) scans of different patients.
Testing the PV-Theta Mapping Technique in a 3-D CTM Model Simulation
NASA Technical Reports Server (NTRS)
Frith, Stacey M.
2004-01-01
Mapping lower stratospheric ozone into potential vorticity (PV)- potential temperature (Theta) coordinates is a common technique employed to analyze sparse data sets. Ozone transformed into a flow-following dynamical coordinate system is insensitive to meteorological variations. Therefore data from a wide range of times/locations can be compared, so long as the measurements were made in the same airmass (as defined by PV). Moreover, once a relationship between ozone and PV/Theta is established, a full 3D ozone field can be estimated from this relationship and the 3D analyzed PV field. However, ozone data mapped in this fashion can be hampered by noisy PV fields, or "mis-matches" in the resolution and/or exact location of the ozone and PV measurements. In this study, we investigate the PV-ozone relationship using output from a recent 50-year run of the Goddard 3D chemical transport model (CTM). Model constituents are transported using off-line dynamics from the finite volume general circulation model (FVGCM). By using the internally consistent model PV and ozone fields, we minimize noise due to mis-matching and resolution issues. We calculate correlations between model ozone and PV throughout the stratosphere, and test the sensitivity of the technique to initial data resolution. To do this we degrade the model data to that of various satellite instruments, then compare the mapped fields derived from the sub-sampled data to the full resolution model data. With these studies we can determine appropriate limits for the PV-theta mapping technique in latitude, altitude, and as a function of original data resolution.
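A minimal version of such a mapping can be built by binning ozone observations in PV-theta space and reading the table back wherever analyzed PV and theta are known; the sketch below uses synthetic values, not model or satellite data.

```python
import numpy as np
from scipy.stats import binned_statistic_2d

rng = np.random.default_rng(7)

# Synthetic "observations": PV (PVU), theta (K), and an ozone mixing
# ratio depending smoothly on both -- purely illustrative numbers.
pv = rng.uniform(0.0, 10.0, 5000)
theta = rng.uniform(380.0, 600.0, 5000)
o3 = 0.3 * pv + 0.01 * (theta - 380.0) + rng.normal(0.0, 0.1, 5000)

# Mean ozone in each PV-theta bin defines the mapping table
table, pv_edges, th_edges, _ = binned_statistic_2d(
    pv, theta, o3, statistic="mean", bins=(20, 11))

def mapped_ozone(pv_val, th_val):
    """Reconstruct ozone anywhere analyzed PV and theta are known."""
    i = np.clip(np.searchsorted(pv_edges, pv_val) - 1, 0, 19)
    j = np.clip(np.searchsorted(th_edges, th_val) - 1, 0, 10)
    return table[i, j]

print(mapped_ozone(5.0, 500.0))
```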
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
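As an illustration of the surrogate idea (not the RAVEN/RELAP-7 workflow), the sketch below trains a Gaussian process on a handful of runs of a stand-in "expensive" code and then evaluates it at a hundred thousand sample points.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_code(x):
    """Stand-in for a costly simulation: peak temperature (K) as a
    function of two normalized inputs. Invented physics."""
    return 800.0 + 150.0 * x[:, 0] ** 2 + 60.0 * np.sin(3.0 * x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, (40, 2))      # 40 "expensive" runs
y_train = expensive_code(X_train)

surrogate = GaussianProcessRegressor(kernel=RBF(0.3), normalize_y=True)
surrogate.fit(X_train, y_train)

# Thousands of PRA samples now cost microseconds instead of hours
X_query = rng.uniform(0.0, 1.0, (100000, 2))
y_pred = surrogate.predict(X_query)
print(y_pred.mean(), np.quantile(y_pred, 0.95))
```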
NASA Astrophysics Data System (ADS)
Hilt, Attila; Pozsonyi, László
2012-09-01
Fixed access networks widely employ fiber-optical techniques due to the extremely wide bandwidth offered to subscribers. In the last decade, there has also been an enormous increase in user data traffic in mobile systems. The importance of fiber-optical techniques within the fixed transmission/transport networks of mobile systems is therefore inevitably increasing. This article summarizes a few reasons and gives examples of why and how fiber-optic techniques are employed efficiently in second-generation networks.
Characterization of new drug delivery nanosystems using atomic force microscopy
NASA Astrophysics Data System (ADS)
Spyratou, Ellas; Mourelatou, Elena A.; Demetzos, C.; Makropoulou, Mersini; Serafetinides, A. A.
2015-01-01
Liposomes are the most attractive lipid vesicles for targeted drug delivery in nanomedicine, also serving as cell models in biophotonics research. The characterization of the micro-mechanical properties of drug carriers is an important issue, and many analytical techniques are employed, for example, optical tweezers and atomic force microscopy. In this work, polyol hyperbranched polymers (HBPs) have been employed along with liposomes for the preparation of new chimeric advanced drug delivery nanosystems (Chi-aDDnSs). Aliphatic polyester HBPs of three different pseudogenerations, G2, G3 and G4, with 16, 32, and 64 peripheral hydroxyl groups, respectively, have been incorporated in the liposomal formulation. The atomic force microscopy (AFM) technique was used for the comparative study of the morphology and the mechanical properties of Chi-aDDnSs and conventional DDnSs. The effects of both the HBP architecture and the polyester pseudogeneration number on the stability and the stiffness of Chi-aDDnSs were examined. From the force-distance curves of AFM spectroscopy, the Young's modulus was calculated.
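Extracting a Young's modulus from the indentation branch of a force-distance curve is commonly done with a Hertzian contact fit; the sketch below uses synthetic data with an assumed tip radius and Poisson ratio, and is not the authors' analysis procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 20e-9    # assumed AFM tip radius (m)
nu = 0.5     # Poisson ratio, a common assumption for soft vesicles

def hertz(delta, E):
    """Hertzian force (N) on a spherical tip at indentation delta (m)."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

# Synthetic indentation branch of a force-distance curve (not AFM data)
rng = np.random.default_rng(2)
delta = np.linspace(0.0, 50e-9, 60)
F = hertz(delta, 15e6) + rng.normal(0.0, 5e-10, delta.size)

(E_fit,), cov = curve_fit(hertz, delta, F, p0=[1e6])
print(f"Young's modulus ~ {E_fit / 1e6:.1f} MPa "
      f"(+/- {np.sqrt(cov[0, 0]) / 1e6:.1f})")
```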
Alyami, Hamad; Dahmash, Eman; Bowen, James; Mohammed, Afzal R
2017-01-01
Powder blend homogeneity is a critical attribute in the formulation development of low-dose and potent active pharmaceutical ingredients (APIs), yet blending is a complex process with multiple contributing factors. Excipient characteristics play a key role in an efficient blending process and final product quality. In this work the effects of excipient type and properties, blending technique and processing time on content uniformity were investigated. Powder characteristics for three commonly used excipients (starch, pregelatinised starch and microcrystalline cellulose) were initially explored using a laser diffraction particle size analyser and angle of repose for flowability, followed by thorough evaluation of surface topography employing scanning electron microscopy and interferometry. Blend homogeneity was evaluated based on content uniformity analysis of the model API, ergocalciferol, using a validated analytical technique. Flowability of the powders was directly related to particle size and shape, while the surface topography results revealed the relationship between surface roughness and the ability of an excipient with high surface roughness to lodge fine API particles within surface grooves, resulting in superior uniformity of content. Of the two blending techniques, geometric blending confirmed the ability to produce homogeneous blends at low dilution when processed for longer durations, whereas manual ordered blending failed to achieve the compendial requirement for content uniformity despite mixing for 32 minutes. Employing the novel dry powder hybrid mixer device, developed at the Aston University laboratory, the results revealed the superiority of the device and enabled the production of homogeneous blends irrespective of excipient type and particle size. Lower dilutions of the API (1% and 0.5% w/w) were examined using non-sieved excipients, and the dry powder hybrid mixing device enabled the development of successful blends within compendial requirements and with low relative standard deviation.
The importance of employment status in determining exit rates from nursing.
Daniels, Frieda; Laporte, Audrey; Lemieux-Charles, Louise; Baumann, Andrea; Onate, Kanecy; Deber, Raisa
2012-01-01
To mitigate nurse shortages, health care decision makers tend to employ retention strategies that assume nurses employed in full-time, part-time, or casual positions and working in different sectors have similar preferences for work. However, this assumption has not been validated in the literature. The relationship between a nurse's propensity to exit the nursing profession in Ontario and employment status was explored by building an extended Cox proportional hazards regression model using a counting process technique. The differential exit patterns between part-time and casual nurses suggest that the common practice of treating part-time and casual nurses as equivalent is misleading. Health care decision makers should consider nurse retention strategies specifically targeting casual nurses because this segment of the profession is at the greatest risk of leaving. Nurse executives and nurse managers should investigate the different work preferences of part-time and casual nurses to devise tailored rather than "one-size-fits-all" strategies to retain casual nurses.
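For readers who want to reproduce the modeling idea, a minimal Cox proportional hazards fit with employment-status covariates can be set up with the lifelines library; the ten-row dataset below is fabricated for illustration and bears no relation to the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Fabricated ten-nurse example: time in profession (years), exit
# indicator, employment-status dummies (full-time is the reference).
df = pd.DataFrame({
    "years":     [2.0, 5.5, 1.2, 8.0, 3.3, 0.8, 6.1, 4.4, 2.9, 7.2],
    "exited":    [1,   0,   1,   0,   1,   1,   0,   0,   1,   0],
    "part_time": [0,   1,   0,   1,   0,   0,   1,   0,   1,   0],
    "casual":    [1,   0,   1,   0,   0,   1,   0,   1,   0,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="exited")
cph.print_summary()   # hazard ratios for part-time and casual status
```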
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
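One widely used way to identify Koopman spectral properties directly from data is dynamic mode decomposition (DMD); the generic sketch below recovers eigenvalues and modes of a best-fit linear operator from a synthetic two-frequency signal and is not the paper's specific model-form construction.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD: eigenvalues and modes of a best-fit linear operator
    X2 ~ A X1, a finite-dimensional proxy for the Koopman operator."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Two-frequency toy signal observed at 64 spatial points
t = np.linspace(0.0, 8.0 * np.pi, 200)
x = np.linspace(-1.0, 1.0, 64)[:, None]
X = np.sin(2.0 * t) * np.cosh(x) + 0.5 * np.cos(5.0 * t) * x ** 2

lam, modes = dmd(X, r=6)
print(np.round(np.angle(lam) / (t[1] - t[0]), 2))  # ~ +/-2, +/-5 rad/s
```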
A fast analytical undulator model for realistic high-energy FEL simulations
NASA Astrophysics Data System (ADS)
Tatchyn, R.; Cremer, T.
1997-02-01
A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
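The regression-with-positivity view can be sketched with nonnegative least squares; the code below also computes theoretical standard errors from the classical regression formula restricted to the active parameters, which is a simplification of the paper's constrained-inference treatment, on a synthetic design matrix.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative feature network setup: each row of X codes which
# features discriminate an object pair; y holds observed proximities.
rng = np.random.default_rng(5)
X = rng.integers(0, 2, (30, 6)).astype(float)
beta_true = np.array([0.8, 0.0, 1.5, 0.4, 0.0, 2.1])
y = X @ beta_true + rng.normal(0.0, 0.05, 30)

beta, rnorm = nnls(X, y)            # least squares with beta >= 0

# Standard errors via the usual formula, restricted to the active
# (nonzero) parameters -- a simplification of the paper's treatment.
active = beta > 0
Xa = X[:, active]
sigma2 = rnorm ** 2 / (len(y) - Xa.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xa.T @ Xa)))
print(np.round(beta, 3), np.round(se, 3))
```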
Fuzzy Model-based Pitch Stabilization and Wing Vibration Suppression of Flexible Wing Aircraft.
NASA Technical Reports Server (NTRS)
Ayoubi, Mohammad A.; Swei, Sean Shan-Min; Nguyen, Nhan T.
2014-01-01
This paper presents a fuzzy nonlinear controller to regulate the longitudinal dynamics of an aircraft and suppress the bending and torsional vibrations of its flexible wings. The fuzzy controller utilizes full-state feedback with an input constraint. First, a Takagi-Sugeno fuzzy linear model is developed that approximates the coupled aeroelastic aircraft model. Then, based on the fuzzy linear model, a fuzzy controller is developed that utilizes full-state feedback and stabilizes the system while satisfying the control input constraint. Linear matrix inequality (LMI) techniques are employed to solve the fuzzy control problem. Finally, the performance of the proposed controller is demonstrated on the NASA Generic Transport Model (GTM).
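A stripped-down version of the LMI step, a common quadratic Lyapunov function for a two-rule Takagi-Sugeno model with toy matrices rather than the GTM aeroelastic model, can be posed with cvxpy as follows.

```python
import numpy as np
import cvxpy as cp

# Find P > 0 with Ai^T P + P Ai < 0 for each rule's linear model Ai;
# toy matrices, not the GTM aeroelastic model.
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-3.5, -0.4]])

n, eps = 2, 1e-3
P = cp.Variable((n, n), symmetric=True)
cons = [P >> eps * np.eye(n)]
for Ai in (A1, A2):
    cons.append(Ai.T @ P + P @ Ai << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status)
print(np.round(P.value, 4))
```

If the semidefinite program is feasible, the returned P certifies stability of the fuzzy blend of the two local models; control synthesis adds further LMIs for the feedback gains and input constraint.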
FDDO and DSMC analyses of rarefied gas flow through 2D nozzles
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle and into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations are solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as a molecular model and the no time counter method is employed as a collision sampling technique. The results of both the FDDO and the DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of engineering in order to reduce the processing time of complex computational simulations. A usual approach is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy, which ensures low computational cost and small error, to deal with this problem. This work also presents a new method to select snapshots named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of Proper Orthogonal Decomposition (POD).
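For orientation, the sketch below shows the conventional POD-based baseline such methods improve upon, together with a greedy projection-error test in the spirit of online snapshot selection; this is not the paper's PSS algorithm, and all data are synthetic.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """POD basis via thin SVD; keep modes holding 1 - tol of energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

def needs_enrichment(V, u, tau=1e-3):
    """Greedy test in the spirit of snapshot selection: flag a new
    state u that the current basis V reproduces poorly."""
    err = np.linalg.norm(u - V @ (V.T @ u)) / np.linalg.norm(u)
    return err > tau

# Toy trajectory: 200-DOF states drifting slowly in character
x = np.linspace(0.0, 1.0, 200)[:, None]
snaps = np.hstack([np.sin(np.pi * x * (1.0 + 0.1 * k)) for k in range(8)])

V = pod_basis(snaps)
u_new = np.sin(3.0 * np.pi * x[:, 0])          # markedly different state
print(V.shape, needs_enrichment(V, u_new))     # -> enrich basis online
```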
NASA Astrophysics Data System (ADS)
Sharpanskykh, Alexei; Treur, Jan
Employing rich internal agent models of actors in large-scale socio-technical systems often results in scalability issues. The problem addressed in this paper is how to improve the computational properties of a complex internal agent model while preserving its behavioral properties. The problem is addressed for the case of an existing affective-cognitive decision making model instantiated for an emergency scenario. For this internal decision model an abstracted behavioral agent model is obtained, which ensures a substantial increase in computational efficiency at the cost of approximately 1% behavioral error. The abstraction technique used can be applied to a wide range of internal agent models with loops, for example those involving mutual affective-cognitive interactions.
Hot-bench simulation of the active flexible wing wind-tunnel model
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.; Houck, Jacob A.
1990-01-01
Two simulations, one batch and one real-time, of an aeroelastically-scaled wind-tunnel model were developed. The wind-tunnel model was a full-span, free-to-roll model of an advanced fighter concept. The batch simulation was used to generate and verify the real-time simulation and to test candidate control laws prior to implementation. The real-time simulation supported hot-bench testing of a digital controller, which was developed to actively control the elastic deformation of the wind-tunnel model. Time scaling was required for hot-bench testing. The wind-tunnel model, the mathematical models for the simulations, the techniques employed to reduce the hot-bench time-scale factors, and the verification procedures are described.
Modeling of thermal expansion coefficient of perovskite oxide for solid oxide fuel cell cathode
NASA Astrophysics Data System (ADS)
Heydari, F.; Maghsoudipour, A.; Alizadeh, M.; Khakpour, Z.; Javaheri, M.
2015-09-01
Artificial intelligence models have the capacity to eliminate the need for expensive experimental investigation in various areas of manufacturing processes, including materials science. This study investigates the applicability of the adaptive neuro-fuzzy inference system (ANFIS) approach for modeling the thermal expansion coefficient (TEC) of perovskite oxides for solid oxide fuel cell cathodes. Perovskite oxides (Ln = La, Nd, Sm; M = Fe, Ni, Mn) have been prepared and characterized to study the influence of the different cations on TEC. Experimental results have shown that TEC decreases favorably with substitution of Nd3+ and Mn3+ ions in the lattice. Structural parameters of the compounds have been determined by X-ray diffraction, and field emission scanning electron microscopy has been used for the morphological study. Comparison results indicated that the ANFIS technique could be employed successfully in modeling the thermal expansion coefficient of perovskite oxides for solid oxide fuel cell cathodes, and considerable savings in terms of cost and time could be obtained by using the ANFIS technique.
Portable document format file showing the surface models of cadaver whole body.
Shin, Dong Sun; Chung, Min Suk; Park, Jin Seo; Park, Hyung Seon; Lee, Sangho; Moon, Young Lae; Jang, Hae Gwon
2012-08-01
In the Visible Korean project, 642 three-dimensional (3D) surface models have been built from the sectioned images of a male cadaver. It was recently discovered that the popular PDF format enables users to access the numerous surface models conveniently in Adobe Reader. The purpose of this study was to present a PDF file including systematized surface models of the human body as beneficial content. To achieve this, suitable software packages were employed in accordance with the procedures. Two-dimensional (2D) surface models including the original sectioned images were embedded into the 3D surface models. The surface models were categorized into systems and then groups. The adjusted surface models were inserted into a PDF file, where relevant multimedia data were added. The finalized PDF file containing comprehensive data of a whole body can be explored in varying manners. The PDF file, downloadable freely from the homepage (http://anatomy.co.kr), is expected to be used as a satisfactory self-learning tool of anatomy. Raw data of the surface models can be extracted from the PDF file and employed for various simulations for clinical practice. The technique of organizing the surface models will be applied to the manufacture of other PDF files containing various multimedia contents.
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
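A bare-bones EKF over a first-order RC equivalent circuit looks like the following; all parameters, the OCV curve, and the synthetic measurement data are illustrative, and the paper's proportional integral error adjustment is not included.

```python
import numpy as np

# First-order RC battery model (illustrative parameters and OCV curve)
Q = 2.0 * 3600.0                 # capacity (As)
R0, R1, C1, dt = 0.05, 0.015, 2400.0, 1.0

def ocv(soc):
    return 3.2 + 0.7 * soc + 0.1 * np.log((soc + 1e-3) / (1.0 - soc + 1e-3))

def docv(soc):                   # dOCV/dSOC, used in the Jacobian
    return 0.7 + 0.1 * (1.0 / (soc + 1e-3) + 1.0 / (1.0 - soc + 1e-3))

A = np.diag([1.0, np.exp(-dt / (R1 * C1))])
B = np.array([-dt / Q, R1 * (1.0 - np.exp(-dt / (R1 * C1)))])
Qw, Rv = np.diag([1e-10, 1e-8]), 1e-4

def ekf_step(x, P, i_k, v_meas):
    x = A @ x + B * i_k                  # predict (coulomb count + RC)
    P = A @ P @ A.T + Qw
    H = np.array([docv(x[0]), -1.0])     # measurement Jacobian
    v_pred = ocv(x[0]) - x[1] - R0 * i_k
    K = P @ H / (H @ P @ H + Rv)
    x = x + K * (v_meas - v_pred)        # correct with terminal voltage
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

x, P = np.array([0.9, 0.0]), np.eye(2) * 1e-3  # deliberately wrong SOC
for k in range(3000):                           # 1 A constant discharge
    true_soc = 0.8 - k * dt / Q
    v_meas = ocv(true_soc) - R0 * 1.0           # simplified toy data
    x, P = ekf_step(x, P, 1.0, v_meas)
print(f"estimated SOC {x[0]:.3f}, true {true_soc:.3f}")
```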
Towards a predictive thermal explosion model for energetic materials
NASA Astrophysics Data System (ADS)
Yoh, Jack J.; McClelland, Matthew A.; Maienschein, Jon L.; Wardell, Jeffrey F.
2005-01-01
We present an overview of models and computational strategies for simulating the thermal response of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the behavior of energetic materials systems exposed to strong thermal environments such as fires. We apply these models and computational techniques to a thermal explosion experiment involving the slow heating of a confined explosive. The model includes the transition from slow heating to rapid deflagration in which the time scale decreases from days to hundreds of microseconds. Thermal, mechanical, and chemical effects are modeled during all phases of this process. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics. In addition, we investigate the sensitivity of wall expansion rates to numerical strategies and parameters. Results from a one-dimensional model show that violence is influenced by the presence of a gap between the explosive and container. In addition, a comparison is made between 2D model and measured results for the explosion temperature and tube wall expansion profiles.
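The slow-cookoff transition can be reproduced qualitatively with a lumped (0-D) Arrhenius self-heating model; the parameters below are generic placeholders, not those of the paper's explosive or of the ALE3D models.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped (0-D) Arrhenius self-heating of a confined charge; generic
# placeholder parameters, not the paper's explosive or ALE3D model.
rho_c = 1.8e6          # volumetric heat capacity (J/m^3/K)
Qv    = 4.0e9          # heat of reaction (J/m^3)
Za    = 1.0e13         # pre-exponential factor (1/s)
Ea_R  = 1.8e4          # activation temperature Ea/R (K)
hA_V  = 8.0            # wall heat-loss coefficient (W/m^3/K)
T_env = lambda t: 300.0 + 0.01 * t     # slow external heating ramp

def rhs(t, y):
    T, c = y                            # temperature, fraction unreacted
    rate = c * Za * np.exp(-Ea_R / T)
    dT = (Qv * rate - hA_V * (T - T_env(t))) / rho_c
    return [dT, -rate]

sol = solve_ivp(rhs, (0.0, 3.0e5), [300.0, 1.0], method="LSODA",
                rtol=1e-8, atol=1e-10, max_step=1000.0)
i = np.argmax(sol.y[0] > 700.0)         # crude runaway marker
print(f"runaway near t = {sol.t[i] / 3600.0:.1f} h" if i else "no runaway")
```

The hours-long heating followed by a runaway resolved on sub-second steps illustrates, in miniature, the disparity of time scales the paper's numerical strategies must bridge.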
Bosnic-Anticevich, Sinthia Z; Stuart, Meg; Mackson, Judith; Cvetkovski, Biljana; Sainsbury, Erica; Armour, Carol; Mavritsakis, Sofia; Mendrela, Gosia; Travers-Mason, Pippa; Williamson, Margaret
2014-04-07
Inter-professional learning has been promoted as the solution to many clinical management issues. One such issue is the correct use of asthma inhaler devices. Up to 80% of people with asthma use their inhaler device incorrectly. The implications of this are poor asthma control and quality of life. Correct inhaler technique can be taught, however these educational instructions need to be repeated if correct technique is to be maintained. It is important to maximise the opportunities to deliver this education in primary care. In light of this, it is important to explore how health care providers, in particular pharmacists and general medical practitioners, can work together in delivering inhaler technique education to patients, over time. Therefore, there is a need to develop and evaluate effective inter-professional education, which will address the need to educate patients in the correct use of their inhalers as well as equip health care professionals with skills to engage in collaborative relationships with each other. This mixed methods study involves the development and evaluation of three modules of continuing education, Model 1, Model 2 and Model 3, with a fourth group, Model 4, acting as a control. Model 1 consists of face-to-face continuing professional education on asthma inhaler technique, aimed at pharmacists, general medical practitioners and their practice nurses. Model 2 is an electronic online continuing education module based on Model 1 principles. Model 3 is also based on asthma inhaler technique education but employs a learning intervention targeting health care professional relationships and is based on sociocultural theory. This study took the form of a parallel group, repeated measure design. Following the completion of continuing professional education, health care professionals recruited people with asthma and followed them up for 6 months. During this period, inhaler device technique training was delivered and data on patient inhaler technique, clinical and humanistic outcomes were collected. Outcomes related to professional collaborative relationships were also measured. Challenges presented included the requirement of significant financial resources for development of study materials and limited availability of validated tools to measure health care professional collaboration over time.
NASA Astrophysics Data System (ADS)
Pham, Binh Thai; Tien Bui, Dieu; Pourghasemi, Hamid Reza; Indra, Prakash; Dholakia, M. B.
2017-04-01
The objective of this study is to compare the prediction performance of three techniques, Functional Trees (FT), Multilayer Perceptron Neural Networks (MLP Neural Nets), and Naïve Bayes (NB), for landslide susceptibility assessment in the Uttarakhand area (India). Firstly, a landslide inventory map with 430 landslide locations in the study area was constructed from various sources. Landslide locations were then randomly split into two parts: (i) 70% of the landslide locations used for training the models and (ii) 30% employed for the validation process. Secondly, a total of eleven landslide conditioning factors, including slope angle, slope aspect, elevation, curvature, lithology, soil, land cover, distance to roads, distance to lineaments, distance to rivers, and rainfall, were used in the analysis to elucidate the spatial relationship between these factors and landslide occurrences. Feature selection with the Linear Support Vector Machine (LSVM) algorithm was employed to assess the prediction capability of these conditioning factors in the landslide models. Subsequently, the NB, MLP Neural Nets, and FT models were constructed using the training dataset. Finally, success rate and prediction rate curves were employed to validate and compare the predictive capability of the three models. Overall, all three models performed very well for landslide susceptibility assessment. The MLP Neural Nets and FT models had almost the same predictive capability, with the MLP Neural Nets (AUC = 0.850) slightly better than the FT model (AUC = 0.849), while the NB model (AUC = 0.838) had the lowest predictive capability. Landslide susceptibility maps were finally developed using these three models. These maps would be helpful to planners and engineers for development activities and land-use planning.
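The model-comparison workflow can be emulated with scikit-learn on synthetic data; note that Functional Trees have no scikit-learn implementation, so a plain decision tree stands in, and the AUC values produced here are unrelated to the paper's results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Stand-in data: 430 "locations", 11 conditioning factors. A plain
# decision tree substitutes for Functional Trees, which scikit-learn
# does not implement.
X, y = make_classification(n_samples=430, n_features=11,
                           n_informative=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7,
                                          random_state=0)

models = {
    "NB": GaussianNB(),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0),
    "Tree (FT stand-in)": DecisionTreeClassifier(max_depth=5,
                                                 random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:18s} AUC = {auc:.3f}")
```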
Plank, Gernot; Zhou, Lufang; Greenstein, Joseph L; Cortassa, Sonia; Winslow, Raimond L; O'Rourke, Brian; Trayanova, Natalia A
2008-01-01
Computer simulations of electrical behaviour in the whole ventricles have become commonplace during the last few years. The goals of this article are (i) to review the techniques that are currently employed to model cardiac electrical activity in the heart, discussing the strengths and weaknesses of the various approaches, and (ii) to implement a novel modelling approach, based on physiological reasoning, that lifts some of the restrictions imposed by current state-of-the-art ionic models. To illustrate the latter approach, the present study uses a recently developed ionic model of the ventricular myocyte that incorporates an excitation–contraction coupling and mitochondrial energetics model. A paradigm to bridge the vastly disparate spatial and temporal scales, from subcellular processes to the entire organ, and from sub-microseconds to minutes, is presented. Achieving sufficient computational efficiency is the key to success in the quest to develop multiscale realistic models that are expected to lead to better understanding of the mechanisms of arrhythmia induction following failure at the organelle level, and ultimately to the development of novel therapeutic applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonnelli, Eduardo; Diniz, Ricardo
2014-11-11
This is a complementary work on the behavior of the neutron lifetimes, developed at the IPEN/MB-01 nuclear reactor facility. The macroscopic neutron noise technique was applied experimentally, using pulse-mode detectors, for two stages of control rod insertion, covering a total of twenty subcriticality levels. In addition, the neutron reflector density was treated as an additional group of delayed neutrons, a more sophisticated approach within the two-region kinetic theoretical model.
Dimensionless Analysis and Mathematical Modeling of Electromagnetic Levitation (EML) of Metals
NASA Astrophysics Data System (ADS)
Gao, Lei; Shi, Zhe; Li, Donghui; Yang, Yindong; Zhang, Guifang; McLean, Alexander; Chattopadhyay, Kinnor
2016-02-01
Electromagnetic levitation (EML), a contactless metal melting method, can be used to produce ultra-pure metals and alloys. In the EML process, the levitation force exerted on the droplet is of paramount importance and is affected by many parameters. In this paper, the relationship between the levitation force and the parameters affecting the levitation process was investigated by dimensionless analysis. The general formula developed by dimensionless analysis was tested and evaluated by numerical modeling. This technique can be employed to design levitation systems for a variety of materials.
On turbulent flows dominated by curvature effects
NASA Technical Reports Server (NTRS)
Cheng, G. C.; Farokhi, S.
1992-01-01
A technique for improving the numerical predictions of turbulent flows with the effect of streamline curvature is developed. Separated flows and the flow in a curved duct are examples of flowfields where streamline curvature plays a dominant role. New algebraic formulations for the eddy viscosity incorporating the k-epsilon turbulence model are proposed to account for various effects of streamline curvature. The loci of flow reversal of the separated flows over various backward-facing steps are employed to test the capability of the proposed turbulence model in capturing the effect of local curvature.
Use of Laboratory Data to Model Interstellar Chemistry
NASA Technical Reports Server (NTRS)
Vidali, Gianfranco; Roser, J. E.; Manico, G.; Pirronello, V.
2006-01-01
Our laboratory research program concerns the formation of molecules on dust grain analogues under conditions mimicking interstellar medium environments. Using surface science techniques, over the last ten years we have investigated the formation of molecular hydrogen and other molecules on different types of dust grain analogues. We analyzed the results to extract quantitative information on the processes of molecule formation on, and ejection from, dust grain analogues. The usefulness of these data lies in the fact that they have been employed by theoreticians in models of the chemical evolution of ISM environments.
SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows
NASA Astrophysics Data System (ADS)
Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu
2017-12-01
A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives an LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large-scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is reduced further in situations with intermittent turbulence, or where the location of the turbulence is not known a priori, because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with the numerical experiments of Legg (2014). We find good agreement with theoretical estimates. SOMAR-LES is less dissipative near the bathymetry than the closure scheme employed by Legg (2014). Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational cost is expected relative to traditional solvers.
NASA Astrophysics Data System (ADS)
Taylan, Osman
2017-02-01
High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emission. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Monitoring and controlling urban air quality is therefore very important for public health. However, air quality prediction is a highly complex and non-linear process, and several attributes usually have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an Adaptive Neuro-Fuzzy Inference System (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, a growing problem due to ozone levels in Jeddah city. The ozone concentration level was considered as a factor for predicting Air Quality (AQ) under the prevailing atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled by employing factors such as nitrogen oxides (NOx), atmospheric pressure, temperature, and relative humidity. An ANFIS model was developed to estimate the ozone concentration level, and the model performance was assessed with test data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed by fuzzy quality charts, using quality specification and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone levels and is a reliable approach for producing accurate outcomes.
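A first-order Takagi-Sugeno rule base, the structure that ANFIS tunes, can be evaluated in a few lines. The sketch below uses two toy rules over illustrative inputs (NOx, pressure, temperature, humidity) with made-up membership and consequent parameters; it shows the inference mechanics, not the fitted Jeddah model.

```python
# Minimal first-order Takagi-Sugeno fuzzy inference. All parameters are
# illustrative placeholders; in ANFIS they would be tuned from data.
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_predict(x, rules):
    """x: input vector; rules: list of (centers, sigmas, consequent coeffs)."""
    w, out = [], []
    for centers, sigmas, coeffs in rules:
        # firing strength = product of per-input memberships
        w.append(np.prod(gauss(x, centers, sigmas)))
        # first-order consequent: linear function of the inputs
        out.append(coeffs[0] + np.dot(coeffs[1:], x))
    w = np.array(w)
    return np.dot(w, out) / w.sum()

# two toy rules over [NOx, pressure, temperature, humidity]
rules = [
    (np.array([0.2, 1.0, 25.0, 40.0]), np.array([0.2, 0.1, 5.0, 15.0]),
     np.array([10.0, 30.0, 1.0, 0.5, -0.1])),
    (np.array([0.8, 1.0, 35.0, 70.0]), np.array([0.2, 0.1, 5.0, 15.0]),
     np.array([40.0, 50.0, 1.0, 0.8, -0.2])),
]
print(sugeno_predict(np.array([0.5, 1.0, 30.0, 55.0]), rules))
```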
Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.
Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J
2018-03-01
Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
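As a concrete illustration of the filtering machinery, the sketch below runs a toy extended Kalman filter on a synthetic decaying effective-temperature trace. The scalar exponential cooling law and all noise covariances are stand-in assumptions, far simpler than the TiRe-LII spectroscopic and heat-transfer submodels.

```python
# Toy extended Kalman filter over a decaying effective-temperature signal.
# State x = [T, a]: temperature and an (assumed constant) cooling rate.
import numpy as np

dt = 1e-9                       # time step [s], illustrative
Q = np.diag([1.0, 1e-6])        # process noise covariance (assumed)
R = np.array([[25.0]])          # measurement noise variance (assumed)
H = np.array([[1.0, 0.0]])      # we observe temperature only

def f(x):
    """State transition: T decays at rate a; a is constant."""
    T, a = x
    return np.array([T - a * T * dt, a])

def F(x):
    """Jacobian of f, evaluated at the current estimate."""
    T, a = x
    return np.array([[1.0 - a * dt, -T * dt], [0.0, 1.0]])

x = np.array([3000.0, 1e7])     # initial guess: T0 [K], cooling rate [1/s]
P = np.diag([1e4, 1e12])
rng = np.random.default_rng(2)
true_T, true_a = 3200.0, 2e7
for k in range(200):
    true_T -= true_a * true_T * dt
    z = true_T + rng.normal(scale=5.0)      # simulated noisy measurement
    # predict (Jacobian evaluated at the prior estimate)
    x, P = f(x), F(x) @ P @ F(x).T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
print("estimated T, a:", x)
```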
Experiment and simulation study of laser dicing silicon with water-jet
NASA Astrophysics Data System (ADS)
Bao, Jiading; Long, Yuhong; Tong, Youqun; Yang, Xiaoqing; Zhang, Bin; Zhou, Zupeng
2016-11-01
Water-jet laser processing is an internationally advanced technique which combines the advantages of laser processing with those of water-jet cutting. In this study, water-jet laser dicing experiments were conducted with a 1064 nm ns-pulsed laser, and the Smoothed Particle Hydrodynamics (SPH) technique, implemented in the AUTODYN software, was used to model the fluid dynamics of the water and the melt as the water jet impacts the molten material. The silicon surface at the irradiated spots exhibits a porous morphology, with a large number of cavities indicating bubble nucleation sites. The observed surface morphology shows that explosive melt expulsion could be the dominant process in nanosecond-pulse 1064 nm laser ablation of silicon in liquids. A self-focusing phenomenon was observed and its causes are analyzed. The SPH modeling technique was employed to understand the effect of the water and the water jet on debris removal during water-jet laser machining.
Compressor stability management
NASA Astrophysics Data System (ADS)
Dhingra, Manuj
Dynamic compressors are susceptible to aerodynamic instabilities while operating at low mass flow rates. These instabilities, rotating stall and surge, are detrimental to engine life and operational safety, and are thus undesirable. In order to prevent stability problems, a passive technique, involving fuel flow scheduling, is currently employed on gas turbines. The passive nature of this technique necessitates conservative stability margins, compromising performance and/or efficiency. In the past, model based active control has been proposed to enable reduction of margin requirements. However, available compressor stability models do not predict the different stall inception patterns, making model based control techniques practically infeasible. This research presents active stability management as a viable alternative. In particular, a limit detection and avoidance approach has been used to maintain the system free of instabilities. Simulations show significant improvements in the dynamic response of a gas turbine engine with this approach. A novel technique has been developed to enable real-time detection of stability limits in axial compressors. It employs a correlation measure to quantify the chaos in the rotor tip region. Analysis of data from four axial compressors shows that the value of the correlation measure decreases as compressor loading is increased. Moreover, sharp drops in this measure have been found to be relevant for stability limit detection. The significance of these drops can be captured by tracking events generated by the downward crossing of a selected threshold level. It has been observed that the average number of events increases as the stability limit is approached in all the compressors studied. These events appear to be randomly distributed in time. A stochastic model for the time between consecutive events has been developed and incorporated in an engine simulation. The simulation has been used to highlight the importance of the threshold level to successful stability management. The compressor stability management concepts have also been experimentally demonstrated on a laboratory axial compressor rig. The fundamental nature of correlation measure has opened avenues for its application besides limit detection. The applications presented include stage load matching in a multi-stage compressor and monitoring the aerodynamic health of rotor blades.
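The correlation-measure idea can be prototyped in a few lines. The sketch below assumes a matrix of casing-pressure samples, one row per rotor revolution, and uses a normalized inner product between successive revolutions as a plausible stand-in for the thesis's exact measure; the threshold and synthetic data are illustrative.

```python
# Correlate each revolution's pressure trace with the previous one, then
# count downward threshold crossings as "events" signalling loss of
# repeatability near the stability limit.
import numpy as np

def correlation_measure(revs):
    """revs: 2-D array, one row of pressure samples per revolution."""
    c = []
    for prev, cur in zip(revs[:-1], revs[1:]):
        num = np.dot(prev - prev.mean(), cur - cur.mean())
        den = np.linalg.norm(prev - prev.mean()) * np.linalg.norm(cur - cur.mean())
        c.append(num / den)
    return np.array(c)

def count_events(c, threshold):
    """Events = downward crossings of the threshold."""
    below = c < threshold
    return int(np.sum(~below[:-1] & below[1:]))

rng = np.random.default_rng(3)
base = np.sin(np.linspace(0, 2 * np.pi, 64))
# repeatability degrades (noise grows) as loading increases
revs = np.array([base + rng.normal(scale=0.1 + 0.01 * k, size=64) for k in range(200)])
c = correlation_measure(revs)
print("events:", count_events(c, threshold=0.8))
```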
NASA Astrophysics Data System (ADS)
Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.
2016-05-01
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex modeling technique (the random forest technique) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than that of the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VE were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest techniques. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
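The reported cut-points translate directly into a tiny classifier; the sketch below is a minimal illustration of that mapping with synthetic counts.

```python
# Map vertical-axis counts-per-minute to VE categories via the reported
# cut-points (<1381 low, 1381-3660 medium, >3660 high).
import numpy as np

def ve_category(counts_per_min):
    """0 = low, 1 = medium, 2 = high ventilation."""
    return np.digitize(counts_per_min, bins=[1381, 3661])

counts = np.array([500, 1381, 2500, 3660, 5000])
print(ve_category(counts))   # -> [0 1 1 1 2]
```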
Patel, Trushar R; Chojnowski, Grzegorz; Astha; Koul, Amit; McKenna, Sean A; Bujnicki, Janusz M
2017-04-15
The diverse functional cellular roles played by ribonucleic acids (RNA) have emphasized the need to develop rapid and accurate methodologies to elucidate the relationship between the structure and function of RNA. Structural biology tools such as X-ray crystallography and Nuclear Magnetic Resonance are highly useful methods to obtain atomic-level resolution models of macromolecules. However, both methods have sample, time, and technical limitations that prevent their application to a number of macromolecules of interest. An emerging alternative to high-resolution structural techniques is to employ a hybrid approach that combines low-resolution shape information about macromolecules and their complexes from experimental hydrodynamic (e.g. analytical ultracentrifugation) and solution scattering measurements (e.g., solution X-ray or neutron scattering), with computational modeling to obtain atomic-level models. While promising, scattering methods rely on aggregation-free, monodispersed preparations and therefore the careful development of a quality control pipeline is fundamental to an unbiased and reliable structural determination. This review article describes hydrodynamic techniques that are highly valuable for homogeneity studies, scattering techniques useful to study the low-resolution shape, and strategies for computational modeling to obtain high-resolution 3D structural models of RNAs, proteins, and RNA-protein complexes. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Santagati, C.; Inzerillo, L.; Di Paola, F.
2013-07-01
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS) and the different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer, whereas desktop systems demand long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches for verifying metric accuracy are few, and none has addressed Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.
Biomathematical modeling of pulsatile hormone secretion: a historical perspective.
Evans, William S; Farhy, Leon S; Johnson, Michael L
2009-01-01
Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitates approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data, for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
Comparing the landscapes of common retroviral insertion sites across tumor models
NASA Astrophysics Data System (ADS)
Weishaupt, Holger; Čančer, Matko; Engström, Cristopher; Silvestrov, Sergei; Swartling, Fredrik J.
2017-01-01
Retroviral tagging represents an important technique which allows researchers to screen for candidate cancer genes. The technique is based on the integration of retroviral sequences into the genome of a host organism, which might then lead to the artificial inhibition or expression of proximal genetic elements. The identification of potential cancer genes in this framework involves the detection of genomic regions (common insertion sites; CIS) which contain a number of such viral integration sites that is greater than expected by chance. During the last two decades, a number of different methods have been discussed for the identification of such loci, and the respective techniques have been applied to a variety of different retroviruses and/or tumor models. We have previously established a retrovirus-driven brain tumor model and reported the CISs found using a detection paradigm derived from Monte Carlo statistics. In this study, we consider a recently proposed alternative graph theory based method for identifying CISs and compare the resulting CIS landscape in our brain tumor dataset to the one obtained when using the Monte Carlo approach. Finally, we also employ the graph-based method to compare the CIS landscape in our brain tumor model with those of other published retroviral tumor models.
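A minimal Monte Carlo flavor of CIS detection can be sketched as follows. The scan statistic (maximum insertions in any fixed-width window) is a simplified stand-in for the published paradigms, and the genome length, window width, and insertion counts are illustrative.

```python
# Compare the observed maximum number of insertions in any window of width w
# against a null distribution from uniformly scattered insertions.
import numpy as np

def max_window_count(positions, w):
    """Largest number of insertions falling in any window [p, p + w)."""
    positions = np.sort(positions)
    return max(np.searchsorted(positions, p + w) - i
               for i, p in enumerate(positions))

rng = np.random.default_rng(4)
genome_len, w, n_ins = 10_000_000, 30_000, 300
observed = rng.integers(0, genome_len, n_ins)       # stand-in for real data
obs_stat = max_window_count(observed, w)

null = [max_window_count(rng.integers(0, genome_len, n_ins), w)
        for _ in range(1000)]
p_value = np.mean([s >= obs_stat for s in null])
print(f"max insertions in any {w} bp window: {obs_stat}, p = {p_value:.3f}")
```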
Combined use of heat and saline tracer to estimate aquifer properties in a forced gradient test
NASA Astrophysics Data System (ADS)
Colombani, N.; Giambastiani, B. M. S.; Mastrocicco, M.
2015-06-01
Electrolytic tracers are usually employed for subsurface characterization, but the interpretation of tracer test data collected by low-cost techniques, such as electrical conductivity logging, can be biased by cation exchange reactions. To characterize the aquifer transport properties, a combined saline and heat forced gradient test was employed. The field site, located near Ferrara (Northern Italy), is a well-characterized site which covers an area of 200 m² and is equipped with a grid of 13 monitoring wells. A two-well (injection and pumping) system was employed to perform the forced gradient test, and a straddle packer was installed in the injection well to avoid in-well artificial mixing. The simultaneous continuous monitoring of hydraulic head, electrical conductivity and temperature within the wells yielded a robust dataset, which was then used to accurately simulate injection conditions, to calibrate a 3D transient flow and transport model, and to obtain aquifer properties at small scale. The transient groundwater flow and solute-heat transport model was built using SEAWAT. The significance of the results was further investigated by comparing them with already published column experiments and a natural gradient tracer test performed in the same field. The test procedure shown here can provide a fast and low-cost technique to characterize coarse-grained aquifer properties, although some limitations can be highlighted, such as the small value of the dispersion coefficient compared to values obtained by the natural gradient tracer test, or the fast depletion of the heat signal due to high thermal diffusivity.
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure-activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model suggested is robust and satisfactory.
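The PCA-plus-ANN pipeline can be outlined with standard tooling. In this sketch the GA variable-selection step is replaced, purely for brevity, by keeping the first seven principal components, and the descriptor matrix is a random placeholder; it mirrors the 7-7-1 architecture but is not the authors' fitted model.

```python
# Skeleton of a PCA -> feature subset -> ANN QSAR pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(49, 120))          # 49 molecules x many descriptors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=49)

X_std = StandardScaler().fit_transform(X)
scores = PCA(n_components=7).fit_transform(X_std)   # 7 PCs, as in the paper

X_tr, X_te, y_tr, y_te = train_test_split(scores, y, test_size=0.2, random_state=1)
# 7 inputs, 7 hidden neurons, 1 output (the 7-7-1 architecture)
ann = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=1)
ann.fit(X_tr, y_tr)
print("R^2 on held-out molecules:", ann.score(X_te, y_te))
```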
Description of the GMAO OSSE for Weather Analysis Software Package: Version 3
NASA Technical Reports Server (NTRS)
Koster, Randal D. (Editor); Errico, Ronald M.; Prive, Nikki C.; Carvalho, David; Sienkiewicz, Meta; El Akkraoui, Amal; Guo, Jing; Todling, Ricardo; McCarty, Will; Putman, William M.;
2017-01-01
The Global Modeling and Assimilation Office (GMAO) at the NASA Goddard Space Flight Center has developed software and products for conducting observing system simulation experiments (OSSEs) for weather analysis applications. Such applications include estimations of potential effects of new observing instruments or data assimilation techniques on improving weather analysis and forecasts. The GMAO software creates simulated observations from nature run (NR) data sets and adds simulated errors to those observations. The algorithms employed are much more sophisticated, adding a much greater degree of realism, compared with OSSE systems currently available elsewhere. The algorithms employed, software designs, and validation procedures are described in this document. Instructions for using the software are also provided.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-03-01
As described in [27], this tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations, with surface effects captured through diffuse-interface models [27]. Numerical simulation is a practical way to evaluate this model. The present paper investigates the solution of the tumor growth model with meshless techniques. The meshless methods are applied via the collocation technique, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices stems from the natural behavior of meshless approaches: a meshless method can easily be applied to find the solution of partial differential equations in high dimensions, using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. To discretize the time variable, two procedures are used. One is a semi-implicit finite difference method based on the Crank-Nicolson scheme, and the other is based on explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations to be solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
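To make the collocation idea concrete, the sketch below applies MQ-RBF collocation with Crank-Nicolson stepping to a single 1-D diffusion equation, a scalar stand-in for the four-species system; node count, shape parameter, and diffusivity are illustrative choices.

```python
# 1-D multiquadric RBF collocation for u_t = D u_xx with homogeneous
# Dirichlet boundaries, stepped with Crank-Nicolson.
import numpy as np

n, D, dt, c = 41, 1.0, 1e-3, 0.2
x = np.linspace(0.0, 1.0, n)
r2 = (x[:, None] - x[None, :]) ** 2
A = np.sqrt(r2 + c * c)              # phi(|x_i - x_j|), MQ basis
B = c * c / A ** 3                   # d^2/dx^2 of the MQ basis

u = np.exp(-100 * (x - 0.5) ** 2)    # initial condition
lam = np.linalg.solve(A, u)          # expansion coefficients

L = A - 0.5 * dt * D * B             # Crank-Nicolson left operator
Rm = A + 0.5 * dt * D * B            # Crank-Nicolson right operator
# enforce Dirichlet BCs by replacing the boundary rows with interpolation rows
L[0, :], L[-1, :] = A[0, :], A[-1, :]
for _ in range(200):
    rhs = Rm @ lam
    rhs[0] = rhs[-1] = 0.0           # boundary values
    lam = np.linalg.solve(L, rhs)
print("peak value after diffusion:", (A @ lam).max())
```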
Constraint Logic Programming approach to protein structure prediction.
Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico
2004-11-30
The protein structure prediction problem is one of the most challenging problems in the biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. In a test implementation, the (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique, molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored methodologies resides in rapid software prototyping, in the easy encoding of heuristics, and in exploiting the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.
Managing age discrimination: an examination of the techniques used when seeking employment.
Berger, Ellie D
2009-06-01
This article examines the age-related management techniques used by older workers in their search for employment. Data are drawn from interviews with individuals aged 45-65 years (N = 30). Findings indicate that participants develop "counteractions" and "concealments" to manage perceived age discrimination. Individuals counteract employers' ageist stereotypes by maintaining their skills and changing their work-related expectations and conceal age by altering their résumés, physical appearance, and language used. This research suggests that there is a need to reexamine the hiring practices of employers and to improve legislation in relation to their accountability.
Nonlinear probabilistic finite element models of laminated composite shells
NASA Technical Reports Server (NTRS)
Engelstad, S. P.; Reddy, J. N.
1993-01-01
A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed, and results are presented in the form of the mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation from the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data are compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.
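The first-order second-moment idea reduces to a gradient-based push-through of the input covariance. The sketch below applies it to a toy response function (not the shell finite element model) and cross-checks against Monte Carlo, mirroring the verification step described above.

```python
# FOSM in miniature: linearize the response about the input means and
# propagate the covariance through the gradient.
import numpy as np

def response(v):
    """Toy structural response, e.g. a deflection as a function of (E, t)."""
    E, t = v
    return 1.0 / (E * t ** 3)

mu = np.array([70e9, 0.002])                               # input means
cov = np.diag([(0.05 * 70e9) ** 2, (0.1 * 0.002) ** 2])    # 5% and 10% c.o.v.

# central finite-difference gradient at the mean
eps = mu * 1e-6
grad = np.array([(response(mu + e) - response(mu - e)) / (2 * e[i])
                 for i, e in enumerate(np.diag(eps))])

mean_resp = response(mu)
var_resp = grad @ cov @ grad
print(f"FOSM: mean = {mean_resp:.3e}, std = {np.sqrt(var_resp):.3e}")

# Monte Carlo check
rng = np.random.default_rng(6)
samples = rng.multivariate_normal(mu, cov, size=20000)
mc = np.array([response(s) for s in samples])
print(f"MC:   mean = {mc.mean():.3e}, std = {mc.std():.3e}")
```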
PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments
NASA Astrophysics Data System (ADS)
Gaede, F.; Hegner, B.; Mato, P.
2017-10-01
PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM Toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object-hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) data structures wherever possible, while avoiding deep object-hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface towards the developer physicist, such as the support for inter-object relations and automatic memory-management, as well as a Python interface. To simplify the creation of efficient data models PODIO employs code generation from a simple yaml-based markup language. In addition, it was developed with concurrency in mind in order to support the use of modern CPU features, for example giving basic support for vectorization techniques.
Lopes, Daniela; Jakobtorweihen, Sven; Nunes, Cláudia; Sarmento, Bruno; Reis, Salette
2017-01-01
Lipid membranes work as barriers, which leads to inevitable drug-membrane interactions in vivo. These interactions affect the pharmacokinetic properties of drugs, such as their diffusion, transport, distribution, and accumulation inside the membrane. Furthermore, these interactions also affect their pharmacodynamic properties with respect to both therapeutic and toxic effects. Experimental membrane models have been used to perform in vitro assessment of the effects of drugs on the biophysical properties of membranes by employing different experimental techniques. In in silico studies, molecular dynamics simulations have been used to provide new insights at an atomistic level, which enables the study of properties that are difficult or even impossible to measure experimentally. Each model and technique has its advantages and disadvantages. Hence, combining different models and techniques is necessary for a more reliable study. In this review, the theoretical backgrounds of these (in vitro and in silico) approaches are presented, followed by a discussion of the pharmacokinetic and pharmacodynamic properties of drugs that are related to their interactions with membranes. All approaches are discussed in parallel to present a better connection between experimental and simulation studies. Finally, an overview of the molecular dynamics simulation studies used for drug-membrane interactions is provided. Copyright © 2016 Elsevier Ltd. All rights reserved.
Evaluation of support loss in micro-beam resonators: A revisit
NASA Astrophysics Data System (ADS)
Chen, S. Y.; Liu, J. Z.; Guo, F. L.
2017-12-01
This paper presents an analytical study of the evaluation of support loss in micromechanical resonators undergoing in-plane flexural vibrations. Two-dimensional elastic wave theory is used to determine the energy transmission from the vibrating resonator to the support. Fourier transform and Green's function techniques are adopted to solve the problem of wave motions on the surface of the support excited by the forces transmitted by the resonator onto the support. Analytical expressions for support loss in terms of quality factor have been derived, taking into account the distributed normal and shear stresses in the attachment region, the coupling between the normal and shear stresses, and the material disparity between the support and the resonator. Effects of the geometry of micro-beam resonators and of the material dissimilarity between support and resonator on support loss are examined. Numerical results show that a 'harder resonator' and 'softer support' combination leads to larger support loss. In addition, the Perfectly Matched Layer (PML) numerical simulation technique is employed to validate the proposed analytical model. Comparing the quality factors with those obtained by the PML technique, we find that the present model agrees well with the PML results, whereas the pure-shear model noticeably overestimates support loss, especially for resonators with small aspect ratio and large material dissimilarity between the support and the resonator.
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2011-12-01
Structures in service are often subjected to fatigue loads. Cracks would develop and lead to failure if left unnoticed after a large number of cyclic loadings. Monitoring the process of fatigue crack propagation as well as estimating the remaining useful life of a structure is thus essential to prevent catastrophe while minimizing earlier-than-required replacement. The advent of smart materials such as piezo-impedance transducers (lead zirconate titanate, PZT) has ushered in a new era of structural health monitoring (SHM) based on non-destructive evaluation (NDE). This paper presents a series of investigative studies to evaluate the feasibility of fatigue crack monitoring and estimation of remaining useful life using the electromechanical impedance (EMI) technique employing a PZT transducer. Experimental tests were conducted to study the ability of the EMI technique in monitoring fatigue crack in 1D lab-sized aluminum beams. The experimental results prove that the EMI technique is very sensitive to fatigue crack propagation. A proof-of-concept semi-analytical damage model for fatigue life estimation has been developed by incorporating the linear elastic fracture mechanics (LEFM) theory into the finite element (FE) model. The prediction of the model matches closely with the experiment, suggesting the possibility of replacing costly experiments in future.
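The LEFM ingredient of such a remaining-life model is often a Paris-law crack-growth integration; the sketch below is a generic back-of-envelope version with assumed material constants and geometry factor, not the calibrated semi-analytical model of the paper.

```python
# Remaining-life estimate via the Paris law, da/dN = C (dK)^m with
# dK = Y * dsigma * sqrt(pi * a). Constants are assumed, illustrative values.
import numpy as np

C, m, Y = 1e-11, 3.0, 1.12       # Paris constants (m/cycle, MPa*sqrt(m)) and geometry factor
dsigma = 80.0                    # applied stress range [MPa]
a, a_crit = 1e-3, 20e-3          # initial and critical crack lengths [m]
block = 100                      # integrate growth in blocks of 100 cycles
N = 0
da_dN = lambda a: C * (Y * dsigma * np.sqrt(np.pi * a)) ** m

while a < a_crit:
    a += da_dN(a) * block        # explicit Euler over one block of cycles
    N += block
print(f"estimated life to a_crit: about {N:,} cycles")
```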
Does Marital Status Influence the Parenting Styles Employed by Parents?
ERIC Educational Resources Information Center
Ashiono, Benard Litali; Mwoma, Teresa B.
2015-01-01
The current study sought to establish whether parents' marital status influences their use of specific parenting styles in Kisauni District, Kenya. A correlational research design was employed to carry out this study. Stratified sampling technique was used to select preschools while purposive sampling technique was used to select preschool…
Pedagogical Techniques Employed by the Television Show "MythBusters"
ERIC Educational Resources Information Center
Zavrel, Erik
2016-01-01
"MythBusters," the long-running though recently discontinued Discovery Channel science entertainment television program, has proven itself to be far more than just a highly rated show. While its focus is on entertainment, the show employs an array of pedagogical techniques to communicate scientific concepts to its audience. These…
Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Evaluating the impact of radio frequency transmission in vehicle fairings is important for electromagnetically sensitive spacecraft. This study employs the multilevel fast multipole method (MLFMM) from a commercial electromagnetic tool, FEKO, to model the fairing electromagnetic environment in the presence of an internal transmitter with improved accuracy over industry-applied techniques. The fairing model includes material properties representative of the acoustic blanketing commonly used in vehicles. Equivalent surface material models within FEKO were successfully applied to simulate the test case. Finally, a simplified model is presented using Nicolson-Ross-Weir derived blanket material properties. These properties are implemented with the coated metal option to reduce the model to one layer, within the accuracy of the original three-layer simulation.
Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing
2015-07-27
Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work.
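A bag-of-visual-words pipeline of the kind described can be skeletonized as follows. It assumes OpenCV's SIFT implementation (available in OpenCV >= 4.4) and uses random stand-in images, so every name and parameter here is illustrative rather than the study's prototype.

```python
# Local descriptors -> k-means vocabulary -> per-image histograms -> classifier.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def sift_descriptors(gray_images):
    """128-D SIFT descriptors per image (empty array if none found)."""
    sift = cv2.SIFT_create()
    out = []
    for img in gray_images:
        _, desc = sift.detectAndCompute(img, None)
        out.append(desc if desc is not None else np.empty((0, 128), np.float32))
    return out

def bow_histograms(per_image, kmeans):
    """Normalized visual-word histogram for each image."""
    hists = np.zeros((len(per_image), kmeans.n_clusters))
    for i, desc in enumerate(per_image):
        if len(desc):
            words, counts = np.unique(kmeans.predict(desc), return_counts=True)
            hists[i, words] = counts / counts.sum()
    return hists

# synthetic stand-ins for food photos and their category labels
rng = np.random.default_rng(7)
images = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 3, 20)

per_image = sift_descriptors(images)
kmeans = KMeans(n_clusters=50, n_init=4, random_state=1).fit(np.vstack(per_image))
X = bow_histograms(per_image, kmeans)
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```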
Nonadditivity of van der Waals forces on liquid surfaces
NASA Astrophysics Data System (ADS)
Venkataram, Prashanth S.; Whitton, Jeremy D.; Rodriguez, Alejandro W.
2016-09-01
We present an approach for modeling nanoscale wetting and dewetting of textured solid surfaces that exploits recently developed, sophisticated techniques for computing exact long-range dispersive van der Waals (vdW) or (more generally) Casimir forces in arbitrary geometries. We apply these techniques to solve the variational formulation of the Young-Laplace equation and predict the equilibrium shapes of liquid-vacuum interfaces near solid gratings. We show that commonly employed methods of computing vdW interactions based on additive Hamaker or Derjaguin approximations, which neglect important electromagnetic boundary effects, can result in large discrepancies in the shapes and behaviors of liquid surfaces compared to exact methods.
New mechanistic insights in the NH 3-SCR reactions at low temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggeri, Maria Pia; Selleri, Tomasso; Nova, Isabella
2016-05-06
The present study is focused on the investigation of the low temperature Standard SCR reaction mechanism over Fe- and Cu-promoted zeolites. Different techniques are employed, including in situ DRIFTS, transient reaction analysis and chemical trapping techniques. The results present strong evidence of nitrite formation in the oxidative activation of NO and of their role in SCR reactions. These elements lead to a deeper understanding of the standard SCR chemistry at low temperature and can potentially improve the consistency of mechanistic mathematical models. Furthermore, comprehension of the mechanism on a fundamental level can contribute to the development of improved SCR catalysts.
Monitoring of Low-Level Virus in Natural Waters
Sorber, Charles A.; Sagik, Bernard P.; Malina, Joseph F.
1971-01-01
The insoluble polyelectrolyte technique for concentrating virus is extended to extremely low virus levels. The effectiveness of this method, employing a coliphage T2 model, is a constant 20% over a range of virus levels from 10³ to 10⁻⁴ plaque-forming units/ml. The efficiency of the method is dependent upon pH control during the concentration phase. Although the study was initiated to develop a method for quantitating the effectiveness of water and wastewater treatment methods for the removal of viruses from waters at low concentrations, the potential of the technique for efficient monitoring of natural waters is apparent. PMID:4940873
OPERATIONS RESEARCH IN THE DESIGN OF MANAGEMENT INFORMATION SYSTEMS
…management information systems is concerned with the identification and detailed specification of the information and data processing… …of advanced data processing techniques in management information systems today, the close coordination of operations research and data systems activities has become a practical necessity for the modern business firm. …information systems in which mathematical models are employed as the basis for analysis and systems design. Operations research provides a…
Pulmonary Thromboembolism: Evaluation By Intravenous Angiography
NASA Astrophysics Data System (ADS)
Pond, Gerald D.; Cook, Glenn C.; Woolfenden, James M.; Dodge, Russell R.
1981-11-01
Using perfusion lung scans as a guide, digital video subtraction angiography of the pulmonary arteries was performed in human subjects suspected of having pulmonary embolism. Dogs were employed as a pulmonary embolism model and both routine pulmonary angiography and intravenous pulmonary angiograms were obtained for comparison purposes. We have shown by our preliminary results that the technique is extremely promising as a safe and accurate alternative to routine pulmonary angiography in selected patients.
Tension Cutoff and Parameter Identification for the Viscoplastic Cap Model.
1983-04-01
computer program "VPDRVR" which employs a Crank-Nicolson time integration scheme and a Newton-Raphson iterative solution procedure. Numerical studies were...parameters was illustrated for triaxial stress and uniaxial strain loading for a well- studied sand material (McCormick Ranch Sand). Lastly, a finite element...viscoplastic tension-cutoff cri- terion and to establish parameter identification techniques with experimental data. Herein lies the impetus of this study
Low Frequency Acoustic Intensity Propagation Modeling in Shallow Water Waveguides
2016-06-01
Three popular numerical techniques are employed to… …planar interfacial two-fluid transmission and reflection are used to benchmark the commercial software package COMSOL. Canonical Pekeris-type…
Semi-Markov Models for Degradation-Based Reliability
2010-01-01
…standard analysis techniques for Markov processes can be employed (cf. Whitt (1984), Altiok (1985), Perros (1994), and Osogami and Harchol-Balter…). …We want to approximate X by a PH random variable, say Y, with c.d.f. Ĥ. Marie (1980), Altiok (1985), Johnson (1993), Perros (1994), and Osogami and… …provides a minimal representation when matching only two moments. By considering the guidance provided by Marie (1980), Whitt (1984), Altiok (1985), Perros…
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Berger, Karen T.; Berry, Scott A.; Bruckmann, Gregory J.; Buck, Gregory M.; DiFulvio, Michael; Horvath, Thomas J.; Liechty, Derek S.; Merski, N. Ronald; Murphy, Kelly J.;
2014-01-01
A review is presented of recent research, development, testing and evaluation activities related to entry, descent and landing that have been conducted at the NASA Langley Research Center. An overview of the test facilities, model development and fabrication capabilities, and instrumentation and measurement techniques employed in this work is provided. Contributions to hypersonic/supersonic flight and planetary exploration programs are detailed, as are fundamental research and development activities.
Modeling and simulation of dust behaviors behind a moving vehicle
NASA Astrophysics Data System (ADS)
Wang, Jingfang
Simulation of physically realistic complex dust behaviors is a difficult and attractive problem in computer graphics. A fast, interactive and visually convincing model of dust behaviors behind moving vehicles is very useful in computer simulation, training, education, art, advertising, and entertainment. In my dissertation, an experimental interactive system has been implemented for the simulation of dust behaviors behind moving vehicles. The system includes physically-based models, particle systems, rendering engines and graphical user interface (GUI). I have employed several vehicle models including tanks, cars, and jeeps to test and simulate in different scenarios and conditions. Calm weather, winding condition, vehicle turning left or right, and vehicle simulation controlled by users from the GUI are all included. I have also tested the factors which play against the physical behaviors and graphics appearances of the dust particles through GUI or off-line scripts. The simulations are done on a Silicon Graphics Octane station. The animation of dust behaviors is achieved by physically-based modeling and simulation. The flow around a moving vehicle is modeled using computational fluid dynamics (CFD) techniques. I implement a primitive variable and pressure-correction approach to solve the three dimensional incompressible Navier Stokes equations in a volume covering the moving vehicle. An alternating-direction implicit (ADI) method is used for the solution of the momentum equations, with a successive-over-relaxation (SOR) method for the solution of the Poisson pressure equation. Boundary conditions are defined and simplified according to their dynamic properties. The dust particle dynamics is modeled using particle systems, statistics, and procedure modeling techniques. Graphics and real-time simulation techniques, such as dynamics synchronization, motion blur, blending, and clipping have been employed in the rendering to achieve realistic appearing dust behaviors. In addition, I introduce a temporal smoothing technique to eliminate the jagged effect caused by large simulation time. Several algorithms are used to speed up the simulation. For example, pre-calculated tables and display lists are created to replace some of the most commonly used functions, scripts and processes. The performance study shows that both time and space costs of the algorithms are linear in the number of particles in the system. On a Silicon Graphics Octane, three vehicles with 20,000 particles run at 6-8 frames per second on average. This speed does not include the extra calculations of convergence of the numerical integration for fluid dynamics which usually takes about 4-5 minutes to achieve steady state.
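The Poisson pressure solve mentioned above is the classic target for successive-over-relaxation; a minimal 2-D SOR sweep, with illustrative grid size, source term, and relaxation factor, looks like this:

```python
# SOR for the Poisson pressure equation  lap(p) = b  on a uniform grid,
# with p = 0 on the boundary.
import numpy as np

def sor_poisson(b, h, omega=1.7, tol=1e-6, max_iter=10000):
    """Gauss-Seidel sweeps with over-relaxation on the 5-point Laplacian."""
    p = np.zeros_like(b)
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, b.shape[0] - 1):
            for j in range(1, b.shape[1] - 1):
                new = 0.25 * (p[i+1, j] + p[i-1, j] + p[i, j+1] + p[i, j-1]
                              - h * h * b[i, j])
                change = omega * (new - p[i, j])
                p[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return p, it

n, h = 33, 1.0 / 32
b = np.zeros((n, n)); b[16, 16] = 1.0 / (h * h)   # point source
p, iters = sor_poisson(b, h)
print("converged in", iters, "sweeps; p at source =", p[16, 16])
```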
Zarabadi, Atefeh S; Pawliszyn, Janusz
2015-02-17
Analysis in the frequency domain is considered a powerful tool to elicit precise information from spectroscopic signals. In this study, the Fourier transformation technique is employed to determine the diffusion coefficient (D) of a number of proteins in the frequency domain. Analytical approaches are investigated for the determination of D from both experimental and data treatment viewpoints. The diffusion process is modeled to calculate diffusion coefficients based on the Fourier transformation solution of Fick's law, and the results are compared to time domain results. The simulations characterize optimum spatial and temporal conditions and demonstrate the noise tolerance of the method. The proposed model is validated by applying it to electropherograms from the diffusion path of a set of proteins. Real-time dynamic scanning is conducted to monitor dispersion by employing whole-column imaging detection technology in combination with capillary isoelectric focusing (CIEF) and the imaging plug flow (iPF) experiment. These experimental techniques provide different peak shapes, which are used to demonstrate the ability of the Fourier transformation to extract diffusion coefficients from irregularly shaped signals. Experimental results confirmed that the Fourier transformation procedure substantially enhances the accuracy of the determined values compared to those obtained in the time domain.
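The core frequency-domain trick is that, for free diffusion, each spatial Fourier mode decays as exp(-Dk²Δt), so D falls out of the log-amplitude ratio of a mode between two scans. The sketch below demonstrates the principle on synthetic Gaussian profiles; it is an illustration, not the paper's whole-column imaging pipeline.

```python
# Estimate D from the decay of one spatial Fourier mode between two scans.
import numpy as np

L, n, D_true, t0, t1 = 10.0, 512, 5e-2, 1.0, 3.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
profile = lambda t: np.exp(-x**2 / (4 * D_true * t)) / np.sqrt(4 * np.pi * D_true * t)

k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)      # angular wavenumbers
U0, U1 = np.fft.rfft(profile(t0)), np.fft.rfft(profile(t1))

mode = 5                                          # any well-resolved mode
ratio = np.abs(U1[mode]) / np.abs(U0[mode])
D_est = -np.log(ratio) / (k[mode] ** 2 * (t1 - t0))
print(f"true D = {D_true}, estimated D = {D_est:.4f}")
```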
Colonel Blotto Games and Lancaster's Equations: A Novel Military Modeling Combination
NASA Technical Reports Server (NTRS)
Collins, Andrew J.; Hester, Patrick T.
2012-01-01
Military strategists face a difficult task when engaged in battle against an adversarial force. They have to predict both the tactics their opponent will employ and the outcomes of any resultant conflicts in order to make the best decision about their own actions. Game theory has been the dominant technique used by analysts to investigate the possible actions an enemy may employ. Traditional game theory can be augmented with Lanchester equations, a set of differential equations used to determine the outcome of a conflict. This paper demonstrates a novel combination of game theory and Lanchester equations using Colonel Blotto games. Colonel Blotto games, one of the oldest applications of game theory to the military domain, examine the allocation of troops and resources when fighting across multiple areas of operation. This paper demonstrates that employing Lanchester equations within a game overcomes some of the practical problems faced when applying game theory.
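A minimal version of the combination is to resolve each Blotto battlefield with Lanchester "square law" attrition; the allocations and effectiveness coefficients below are illustrative, not drawn from the paper.

```python
# Lanchester square-law attrition inside each Blotto battlefield:
# dx/dt = -a*y, dy/dt = -b*x, integrated until one side is annihilated.
import numpy as np

def lanchester_square(x0, y0, a, b, dt=0.01):
    x, y, t = float(x0), float(y0), 0.0
    while x > 0 and y > 0:
        x, y = x - a * y * dt, y - b * x * dt   # explicit Euler step
        t += dt
    return max(x, 0.0), max(y, 0.0), t

# Blotto-style allocation: each colonel splits 100 units across 3 fronts
blue, red = [50, 30, 20], [34, 33, 33]
for i, (xb, yr) in enumerate(zip(blue, red)):
    x, y, t = lanchester_square(xb, yr, a=0.02, b=0.02)
    winner = "blue" if x > 0 else "red"
    print(f"front {i}: {winner} wins with {max(x, y):.1f} units left at t = {t:.1f}")
```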
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahl, D.E.; Jakowatz, C.V. Jr.; Ghiglia, D.C.
1991-01-01
Autofocus methods in SAR and self-survey techniques in SONAR have a common mathematical basis in that they both involve estimation and correction of phase errors introduced by sensor position uncertainties. Time delay estimation and correlation methods have been shown to be effective in solving the self-survey problem for towed SONAR arrays. Since it can be shown that platform motion errors introduce similar time-delay estimation problems in SAR imaging, the question arises as to whether such techniques could be effectively employed for autofocus of SAR imagery. With a simple mathematical model for motion errors in SAR, we will show why such correlation/time-delay techniques are not nearly as effective as established SAR autofocus algorithms such as phase gradient autofocus or sub-aperture based methods. This analysis forms an important bridge between signal processing methodologies for SAR and SONAR. 5 refs., 4 figs.
Fiber Optic Thermal Health Monitoring of Aerospace Structures and Materials
NASA Technical Reports Server (NTRS)
Wu, Meng-Chou; Winfree, William P.; Allison, Sidney G.
2009-01-01
A new technique is presented for thermographic detection of flaws in materials and structures by performing temperature measurements with fiber Bragg gratings. Individual optical fibers with multiple Bragg gratings employed as surface temperature sensors were bonded to the surfaces of structures with subsurface defects or thickness variations. Both during and following the application of a thermal heat flux to the surface, the individual Bragg grating sensors measured the temporal and spatial temperature variations. The investigated structures included a 10-ply composite specimen with subsurface delaminations of various sizes and depths. The data obtained from grating sensors were further analyzed with thermal modeling to reveal particular characteristics of the interested areas. These results were found to be consistent with those from conventional thermography techniques. Limitations of the technique were investigated using both experimental and numerical simulation techniques. Methods for performing in-situ structural health monitoring are discussed.
Contact thermal shock test of ceramics
NASA Technical Reports Server (NTRS)
Rogers, W. P.; Emery, A. F.
1992-01-01
A novel quantitative thermal shock test of ceramics is described. The technique employs contact between a metal-cooling rod and hot disk-shaped specimen. In contrast with traditional techniques, the well-defined thermal boundary condition allows for accurate analyses of heat transfer, stress, and fracture. Uniform equibiaxial tensile stresses are induced in the center of the test specimen. Transient specimen temperature and acoustic emission are monitored continuously during the thermal stress cycle. The technique is demonstrated with soda-lime glass specimens. Experimental results are compared with theoretical predictions based on a finite-element method thermal stress analysis combined with a statistical model of fracture. Material strength parameters are determined using concentric ring flexure tests. Good agreement is found between experimental results and theoretical predictions of failure probability as a function of time and initial specimen temperature.
Fiber Optic Thermal Health Monitoring of Composites
NASA Technical Reports Server (NTRS)
Wu, Meng-Chou; Winfree, William P.; Moore, Jason P.
2010-01-01
A recently developed technique is presented for thermographic detection of flaws in composite materials by performing temperature measurements with fiber optic Bragg gratings. Individual optical fibers with multiple Bragg gratings employed as surface temperature sensors were bonded to the surfaces of composites with subsurface defects. The investigated structures included a 10-ply composite specimen with subsurface delaminations of various sizes and depths. Both during and following the application of a thermal heat flux to the surface, the individual Bragg grating sensors measured the temporal and spatial temperature variations. The data obtained from grating sensors were analyzed with thermal modeling techniques of conventional thermography to reveal particular characteristics of the interested areas. Results were compared with the calculations using numerical simulation techniques. Methods and limitations for performing in-situ structural health monitoring are discussed.
Modal, ray, and beam techniques for analyzing the EM scattering by open-ended waveguide cavities
NASA Technical Reports Server (NTRS)
Pathak, Prabhakar H.; Burkholder, Robert J.
1989-01-01
The problem of high-frequency electromagnetic (EM) scattering by open-ended waveguide cavities with an interior termination is analyzed via three different approaches. When cavities can be adequately modeled by joining together piecewise separable waveguide sections, a hybrid combination of asymptotic high-frequency and modal techniques is employed. In the case of more arbitrarily shaped waveguide cavities for which modes cannot even be defined in the conventional sense, the geometrical optics ray approach proves to be highly useful. However, at sufficiently high frequencies, both of these approaches tend to become inefficient. Hence, a paraxial Gaussian beam technique, which retains much of the simplicity of the ray approximation but is potentially more efficient, is investigated. Typical numerical results based on the different approaches are discussed.
Fault Accommodation in Control of Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.; Lim, Kyong B.
1998-01-01
New synthesis techniques for the design of fault accommodating controllers for flexible systems are developed. Three robust control design strategies, static dissipative, dynamic dissipative and mu-synthesis, are used in the approach. The approach provides techniques for designing controllers that maximize, in some sense, the tolerance of the closed-loop system against faults in actuators and sensors, while guaranteeing performance robustness at a specified performance level, measured in terms of the proximity of the closed-loop poles to the imaginary axis (the degree of stability). For dissipative control designs, nonlinear programming is employed to synthesize the controllers, whereas in mu-synthesis, the traditional D-K iteration is used. To demonstrate the feasibility of the proposed techniques, they are applied to the control design of a structural model of a flexible laboratory test structure.
NASA Astrophysics Data System (ADS)
Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok
We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through a homogeneous Dirichlet boundary condition imposed at the edge of the obstacle domain. To treat the Dirichlet boundary condition effectively, we employ a robust and accurate numerical technique based on a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony is shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to complicated real landscape features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Fu, Haohuan; Song, Shuaiwen
2014-07-18
Wave propagation forward modeling is a widely used computational method in oil and gas exploration. The iterative stencil loops in such problems have broad applications in scientific computing. However, executing such loops can be highly time-consuming, which greatly limits the application's performance and power efficiency. In this paper, we accelerate the forward modeling technique on the latest multi-core and many-core architectures such as Intel Sandy Bridge CPUs, the NVIDIA Fermi C2070 GPU, the NVIDIA Kepler K20x GPU, and the Intel Xeon Phi co-processor. For the GPU platforms, we propose two parallel strategies to explore the performance optimization opportunities for our stencil kernels. For Sandy Bridge CPUs and MIC, we also employ various optimization techniques in order to achieve the best performance.
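The kernels in question follow a common pattern; a minimal NumPy sketch of one leapfrog update of the 2D acoustic wave equation is given below. The grid size, velocity, and time step are hypothetical, and a production code would use the blocked, explicitly parallel loops the paper optimizes rather than np.roll.

```python
import numpy as np

def wave_step(p_prev, p_cur, vel, dt, dx):
    """One leapfrog update of the 2D acoustic wave equation using a
    5-point Laplacian stencil, the kind of iterative stencil loop the
    paper accelerates on GPUs and MIC. Boundaries are held at zero
    for brevity."""
    lap = (np.roll(p_cur, 1, 0) + np.roll(p_cur, -1, 0) +
           np.roll(p_cur, 1, 1) + np.roll(p_cur, -1, 1) - 4.0 * p_cur) / dx**2
    p_next = 2.0 * p_cur - p_prev + (vel * dt)**2 * lap
    p_next[0, :] = p_next[-1, :] = p_next[:, 0] = p_next[:, -1] = 0.0
    return p_next

n, dt, dx = 256, 1e-3, 10.0                    # CFL number 0.3, stable
p0 = np.zeros((n, n)); p1 = np.zeros((n, n))
p1[n // 2, n // 2] = 1.0                       # point source
vel = np.full((n, n), 3000.0)                  # homogeneous velocity, m/s
for _ in range(100):
    p0, p1 = p1, wave_step(p0, p1, vel, dt, dx)
```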
NASA Technical Reports Server (NTRS)
Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David
1987-01-01
The capability was developed of rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets via the implementation of computer graphics modeling techniques on the Massively Parallel Processor (MPP) by employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.
Coarse Grid CFD for underresolved simulation
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.
2010-11-01
CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly huge computational resources, so this brute-force approach has not yet been pursued. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, requiring additional volumetric source terms modelling viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf
Utilization of volume correlation filters for underwater mine identification in LIDAR imagery
NASA Astrophysics Data System (ADS)
Walls, Bradley
2008-04-01
Underwater mine identification persists as a critical technology pursued aggressively by the Navy for fleet protection. As such, new and improved techniques must continue to be developed in order to provide measurable increases in mine identification performance and noticeable reductions in false alarm rates. In this paper we show how recent advances in the Volume Correlation Filter (VCF) developed for ground based LIDAR systems can be adapted to identify targets in underwater LIDAR imagery. Current automated target recognition (ATR) algorithms for underwater mine identification employ spatial based three-dimensional (3D) shape fitting of models to LIDAR data to identify common mine shapes consisting of the box, cylinder, hemisphere, truncated cone, wedge, and annulus. VCFs provide a promising alternative to these spatial techniques by correlating 3D models against the 3D rendered LIDAR data.
Detection probability in aerial surveys of feral horses
Ransom, Jason I.
2011-01-01
Observation bias pervades data collected during aerial surveys of large animals, and although some sources can be mitigated with informed planning, others must be addressed using valid sampling techniques that carefully model detection probability. Nonetheless, aerial surveys are frequently employed to count large mammals without applying such methods to account for heterogeneity in visibility of animal groups on the landscape. This often leaves managers and interest groups at odds over decisions that are not adequately informed. I analyzed detection of feral horse (Equus caballus) groups by dual independent observers from 24 fixed-wing and 16 helicopter flights using mixed-effect logistic regression models to investigate potential sources of observation bias. I accounted for observer skill, population location, and aircraft type in the model structure and analyzed the effects of group size, sun effect (position related to observer), vegetation type, topography, cloud cover, percent snow cover, and observer fatigue on detection of horse groups. The most important model-averaged effects for both fixed-wing and helicopter surveys included group size (fixed-wing: odds ratio = 0.891, 95% CI = 0.850–0.935; helicopter: odds ratio = 0.640, 95% CI = 0.587–0.698) and sun effect (fixed-wing: odds ratio = 0.632, 95% CI = 0.350–1.141; helicopter: odds ratio = 0.194, 95% CI = 0.080–0.470). Observer fatigue was also an important effect in the best model for helicopter surveys, with detection probability declining after 3 hr of survey time (odds ratio = 0.278, 95% CI = 0.144–0.537). Biases arising from sun effect and observer fatigue can be mitigated by pre-flight survey design. Other sources of bias, such as those arising from group size, topography, and vegetation can only be addressed by employing valid sampling techniques such as double sampling, mark–resight (batch-marked animals), mark–recapture (uniquely marked and identifiable animals), sightability bias correction models, and line transect distance sampling; however, some of these techniques may still only partially correct for negative observation biases.
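The odds ratios and confidence intervals quoted above are the standard outputs of a logistic regression on detection outcomes. A minimal sketch of that computation is given below, using statsmodels on synthetic data; the covariates, sample size, and coefficients are hypothetical stand-ins for the survey variables, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
group_size = rng.poisson(8, n) + 1           # horses per group
sun_effect = rng.integers(0, 2, n)           # observer looking into the sun?
# Hypothetical detection process: larger groups easier to see,
# sun glare hurts. Signs chosen only for illustration.
logit_p = 1.5 + 0.11 * group_size - 0.9 * sun_effect
detected = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(np.column_stack([group_size, sun_effect]))
fit = sm.Logit(detected.astype(float), X).fit(disp=0)
odds = np.exp(fit.params)                    # odds ratios
ci = np.exp(fit.conf_int())                  # 95% CIs on the OR scale
for name, o, (lo, hi) in zip(["const", "group_size", "sun_effect"], odds, ci):
    print(f"{name}: OR={o:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```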
Segmentation of prostate boundaries from ultrasound images using statistical shape model.
Shen, Dinggang; Zhan, Yiqiang; Davatzikos, Christos
2003-04-01
This paper presents a statistical shape model for the automatic prostate segmentation in transrectal ultrasound images. A Gabor filter bank is first used to characterize the prostate boundaries in ultrasound images in both multiple scales and multiple orientations. The Gabor features are further reconstructed to be invariant to the rotation of the ultrasound probe and incorporated in the prostate model as image attributes for guiding the deformable segmentation. A hierarchical deformation strategy is then employed, in which the model adaptively focuses on the similarity of different Gabor features at different deformation stages using a multiresolution technique, i.e., coarse features first and fine features later. A number of successful experiments validate the algorithm.
Dynamic deformable models for 3D MRI heart segmentation
NASA Astrophysics Data System (ADS)
Zhukov, Leonid; Bao, Zhaosheng; Gusikov, Igor; Wood, John; Breen, David E.
2002-05-01
Automated or semiautomated segmentation of medical images decreases interstudy variation, observer bias, and postprocessing time as well as providing clinically relevant quantitative data. In this paper we present a new dynamic deformable modeling approach to 3D segmentation. It utilizes recently developed dynamic remeshing techniques and curvature estimation methods to produce high-quality meshes. The approach has been implemented in an interactive environment that allows a user to specify an initial model and identify key features in the data. These features act as hard constraints that the model must not pass through as it deforms. We have employed the method to perform semi-automatic segmentation of heart structures from cine MRI data.
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Lakshmanan, B.
1993-01-01
A high-speed shear layer is studied using a compressibility-corrected Reynolds stress turbulence model which employs a newly developed model for the pressure-strain correlation. The MacCormack explicit predictor-corrector method is used for solving the governing equations and the turbulence transport equations. The stiffness arising from source terms in the turbulence equations is handled by a semi-implicit numerical technique. Results obtained using the new model show a sharper reduction in growth rate with increasing convective Mach number. Some improvements were also noted in the prediction of the normalized streamwise stress and Reynolds shear stress. The computed results are in good agreement with the experimental data.
NASA Astrophysics Data System (ADS)
Miura, Yasunari; Sugiyama, Yuki
2017-12-01
We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, one of the dimensionality-reduction techniques, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed from these coarse variables. We apply this method to the analysis of a traffic model, the optimal velocity model, and reveal a bifurcation structure which features a transition to the emergence of a moving cluster as a traffic jam.
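A minimal sketch of the diffusion-map construction is given below: build a Gaussian kernel over snapshots, row-normalize it into a Markov matrix, and keep the leading non-trivial eigenvectors as coarse-grained variables. The toy data and the kernel scale eps are hypothetical; a careful implementation would also symmetrize the kernel and tune eps.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Minimal diffusion-map embedding: Gaussian kernel, row-normalize
    to a Markov matrix, keep the leading non-trivial eigenvectors as
    coarse-grained variables."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)          # row-stochastic
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, order[1:n_coords + 1]].real

# toy many-body snapshot: positions of cars on a ring road
theta = np.sort(np.random.default_rng(0).uniform(0, 2 * np.pi, 200))
X = np.column_stack([np.cos(theta), np.sin(theta)])
coords = diffusion_map(X, eps=0.1)
print(coords.shape)      # (200, 2) coarse variables
```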
Designing for Damage: Robust Flight Control Design using Sliding Mode Techniques
NASA Technical Reports Server (NTRS)
Vetter, T. K.; Wells, S. R.; Hess, Ronald A.; Bacon, Barton (Technical Monitor); Davidson, John (Technical Monitor)
2002-01-01
A brief review of sliding mode control is undertaken, with particular emphasis upon the effects of neglected parasitic dynamics. Sliding mode control design is interpreted in the frequency domain. The inclusion of asymptotic observers and control 'hedging' is shown to reduce the effects of neglected parasitic dynamics. An investigation into the application of observer-based sliding mode control to the robust longitudinal control of a highly unstable aircraft is described. The sliding mode controller is shown to exhibit stability and performance robustness superior to that of a classical loop-shaped design when significant changes in vehicle and actuator dynamics are employed to model airframe damage.
NASA Technical Reports Server (NTRS)
Burns, John A.; Marrekchi, Hamadi
1993-01-01
The problem of using reduced-order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. Concentration was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order, finite-dimensional control laws by minimizing certain energy functionals. These laws were then applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used here is based on the finite-dimensional Bernstein/Hyland optimal projection theory, which yields a fixed-finite-order controller.
Optimal non-linear health insurance.
Blomqvist, A
1997-06-01
Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.
Consistent modelling of wind turbine noise propagation from source to receiver.
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick
2017-11-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
Consistent modelling of wind turbine noise propagation from source to receiver
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; ...
2017-11-28
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
Consistent modelling of wind turbine noise propagation from source to receiver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
Model-Based GN and C Simulation and Flight Software Development for Orion Missions beyond LEO
NASA Technical Reports Server (NTRS)
Odegard, Ryan; Milenkovic, Zoran; Henry, Joel; Buttacoli, Michael
2014-01-01
For Orion missions beyond low Earth orbit (LEO), the Guidance, Navigation, and Control (GN&C) system is being developed using a model-based approach for simulation and flight software. Lessons learned from the development of GN&C algorithms and flight software for the Orion Exploration Flight Test One (EFT-1) vehicle have been applied to the development of further capabilities for Orion GN&C beyond EFT-1. Continuing the use of a Model-Based Development (MBD) approach with the Matlab®/Simulink® tool suite, the process for GN&C development and analysis has been largely improved. Furthermore, a model-based simulation environment in Simulink, rather than an external C-based simulation, greatly eases the process for development of flight algorithms. The benefits seen by employing lessons learned from EFT-1 are described, as well as the approach for implementing additional MBD techniques. Also detailed are the key enablers for improvements to the MBD process, including enhanced configuration management techniques for model-based software systems, automated code and artifact generation, and automated testing and integration.
KAMINSKI, GEORGE A.; STERN, HARRY A.; BERNE, B. J.; FRIESNER, RICHARD A.; CAO, YIXIANG X.; MURPHY, ROBERT B.; ZHOU, RUHONG; HALGREN, THOMAS A.
2014-01-01
We present results of developing a methodology suitable for producing molecular mechanics force fields with explicit treatment of electrostatic polarization for proteins and other molecular systems of biological interest. The technique allows simulation of realistic-size systems. Employing high-level ab initio data as a target for fitting allows us to avoid the problem of the lack of detailed experimental data. Using fast and reliable quantum mechanical methods supplies robust fitting data for the resulting parameter sets. As a result, gas-phase many-body effects for dipeptides are captured within an average RMSD of 0.22 kcal/mol from their ab initio values, and conformational energies for the di- and tetrapeptides are reproduced within an average RMSD of 0.43 kcal/mol from their quantum mechanical counterparts. The latter is achieved in part through the application of a novel torsional fitting technique recently developed in our group, which has already been used to greatly improve the accuracy of peptide conformational equilibrium prediction with the OPLS-AA force field [1]. Finally, we have employed the newly developed first-generation model in computing gas-phase conformations of real proteins, as well as in molecular dynamics studies of the systems. The results show that, although the overall accuracy is no better than what can be achieved with a fixed-charges model, the methodology produces robust results, permits reasonably low computational cost, and avoids other computational problems typical for polarizable force fields. It can be considered as a solid basis for building a more accurate and complete second-generation model. PMID:12395421
Testing techniques for determining static mechanical properties of Pneumatic tires
NASA Technical Reports Server (NTRS)
Dodge, R. N.; Larson, R. B.; Clark, S. K.; Nybakken, G. H.
1974-01-01
Fore-aft, lateral, and vertical spring rates of model and full-scale pneumatic tires were evaluated by testing techniques generally employed by industry and various testing groups. The purpose of this experimental program was to investigate what effects the different testing techniques have on the measured values of these important static tire mechanical properties. The testing techniques included both incremental and continuous loadings applied at various rates over half, full, and repeated cycles. Of the three properties evaluated, the fore-aft stiffness was demonstrated to be the most affected by the different testing techniques used to obtain it. Appreciable differences in the fore-aft spring rates occurred using both the increment- and continuous-loading techniques; however, the most significant effect was attributed to variations in the size of the fore-aft force loop. The dependence of lateral stiffness values on testing techniques followed the same trends as that for fore-aft stiffness, except to a lesser degree. Vertical stiffness values were found to be nearly independent of testing procedures if the nonlinear portion of the vertical force-deflection curves is avoided.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
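The two-stage pattern, an early-stopped global search seeding a derivative-free local method, can be sketched with SciPy as below. The objective here is a hypothetical piecewise-smooth surrogate; dual_annealing stands in for the paper's global optimizers, and Nelder-Mead stands in for implicit filtering, which SciPy does not provide.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

def neg_log_like(x):
    """Toy multi-modal, piecewise-smooth surrogate for the
    source-inversion objective: smooth bowls plus an |.| kink."""
    return (np.sin(3 * x[0]) * np.cos(3 * x[1])
            + 0.1 * (x[0] ** 2 + x[1] ** 2)
            + 0.5 * abs(x[0] - x[1]))

bounds = [(-5, 5), (-5, 5)]
# Stage 1: early-stopped global search yields a pseudo-optimum.
coarse = dual_annealing(neg_log_like, bounds, maxiter=50, seed=0)
# Stage 2: derivative-free local polish in a narrow domain.
fine = minimize(neg_log_like, coarse.x, method="Nelder-Mead")
print(coarse.x, "->", fine.x, fine.fun)
```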
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
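The core of the BMA combination step is small enough to sketch directly: convert BIC values into posterior model weights w_k ∝ exp(-BIC_k/2), average the model outputs, and read the spread of the models about that average as the between-model variance. The predictions and BIC values below are hypothetical.

```python
import numpy as np

def bma_combine(preds, bics):
    """Combine point predictions from several models with BIC-based
    Bayesian model averaging weights: w_k ∝ exp(-BIC_k / 2)."""
    bics = np.asarray(bics, float)
    w = np.exp(-0.5 * (bics - bics.min()))     # shift for stability
    w /= w.sum()
    preds = np.asarray(preds, float)           # shape (n_models, n_points)
    mean = w @ preds
    between_var = w @ (preds - mean) ** 2      # model non-uniqueness term
    return mean, between_var, w

# three hypothetical AI-model estimates of log-conductivity at four wells
preds = [[1.2, 0.8, 1.5, 1.1],
         [1.0, 0.9, 1.7, 1.3],
         [1.4, 0.7, 1.6, 1.0]]
mean, var_b, w = bma_combine(preds, bics=[102.3, 101.9, 110.5])
print(w, mean)
```

Note how the model with the largest BIC receives an almost negligible weight, which is the parsimony-principle behavior described above.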
Gao, Chao; Sun, Hanbo; Wang, Tuo; Tang, Ming; Bohnen, Nicolaas I; Müller, Martijn L T M; Herman, Talia; Giladi, Nir; Kalinin, Alexandr; Spino, Cathie; Dauer, William; Hausdorff, Jeffrey M; Dinov, Ivo D
2018-05-08
In this study, we apply a multidisciplinary approach to investigate falls in Parkinson's disease (PD) patients using clinical, demographic and neuroimaging data from two independent initiatives (University of Michigan and Tel Aviv Sourasky Medical Center). Using machine learning techniques, we construct predictive models to discriminate fallers and non-fallers. Through controlled feature selection, we identified the most salient predictors of patient falls, including gait speed, Hoehn and Yahr stage, and postural instability and gait difficulty-related measurements. The model-based and model-free analytical methods we employed included logistic regression, random forests, support vector machines, and XGBoost. The reliability of the forecasts was assessed by internal statistical (5-fold) cross-validation as well as by external out-of-bag validation. Four specific challenges were addressed in the study: Challenge 1, develop a protocol for harmonizing and aggregating complex, multisource, and multi-site Parkinson's disease data; Challenge 2, identify salient predictive features associated with specific clinical traits, e.g., patient falls; Challenge 3, forecast patient falls and evaluate the classification performance; and Challenge 4, predict tremor dominance (TD) vs. posture instability and gait difficulty (PIGD). Our findings suggest that, compared to other approaches, model-free machine learning based techniques provide more reliable clinical outcome forecasting of falls in Parkinson's patients, for example, with a classification accuracy of about 70-80%.
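A minimal sketch of the model-free branch of this workflow, a random forest scored by internal 5-fold cross-validation and out-of-bag validation, is shown below with scikit-learn. The three synthetic features are hypothetical stand-ins for predictors such as gait speed and Hoehn and Yahr stage, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# hypothetical features standing in for gait speed, H&Y stage, PIGD score
X = rng.normal(size=(n, 3))
fall_logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = (fall_logit + rng.normal(scale=1.0, size=n)) > 0   # faller / non-faller

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)     # internal 5-fold CV
clf.fit(X, y)
print(f"5-fold accuracy: {scores.mean():.2f}, OOB: {clf.oob_score_:.2f}")
```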
Gold Nanoparticles for the Detection of DNA Adducts as Biomarkers of Exposure to Acrylamide
NASA Astrophysics Data System (ADS)
Larguinho, Miguel Angelo Rodrigues
The main objective of this thesis was the development of a gold nanoparticle-based methodology for the detection of DNA adducts as biomarkers, to try to overcome existing drawbacks in currently employed techniques. For this objective to be achieved, the experimental work was divided into three components: sample preparation, method of detection, and development of a model for exposure to acrylamide. Different techniques were employed and combined for de-complexation and purification of DNA samples (including ultrasonic energy, nuclease digestion and chromatography), resulting in a complete protocol for sample treatment prior to detection. The detection of alkylated nucleotides using gold nanoparticles was performed by two distinct methodologies: mass spectrometry and colorimetric detection. In mass spectrometry, gold nanoparticles were employed for laser desorption/ionisation instead of the organic matrix. Identification of nucleotides was possible by fingerprint; however, no specific mass signals were observed when using gold nanoparticles to analyse biological samples. An alternative method using the colorimetric properties of gold nanoparticles was employed for detection. This method, inspired by the non-cross-linking assay, allowed the identification of glycidamide-guanine adducts and DNA adducts generated in vitro. For the development of a model of exposure, two different aquatic organisms were studied: a goldfish and a mussel. Organisms were exposed to waterborne acrylamide, after which mortality was recorded and effect concentrations were estimated. In goldfish, both genotoxicity and metabolic alterations were assessed and revealed dose-effect relationships of acrylamide. Histopathological alterations were verified primarily in pancreatic cells, but also in hepatocytes. Mussels showed higher effect concentrations than goldfish. Biomarkers of oxidative stress, biotransformation and neurotoxicity were analysed after prolonged exposure, showing mild oxidative stress in mussel cells and induction of enzymes involved in the detoxification of oxygen radicals. A qualitative histopathological screening revealed gonadotoxicity in female mussels, which may present some risk to population equilibrium.
Detecting dark matter in the Milky Way with cosmic and gamma radiation
NASA Astrophysics Data System (ADS)
Carlson, Eric C.
Over the last decade, experiments in high-energy astroparticle physics have reached unprecedented precision and sensitivity which span the electromagnetic and cosmic-ray spectra. These advances have opened a new window onto the universe for which little was previously known. Such dramatic increases in sensitivity lead naturally to claims of excess emission, which call for either revised astrophysical models or the existence of exotic new sources such as particle dark matter. Here we stand firmly with Occam, sharpening his razor by (i) developing new techniques for discriminating astrophysical signatures from those of dark matter, and (ii) by developing detailed foreground models which can explain excess signals and shed light on the underlying astrophysical processes at hand. We concentrate most directly on observations of Galactic gamma and cosmic rays, factoring the discussion into three related parts which each contain significant advancements from our cumulative works. In Part I we introduce concepts which are fundamental to the Indirect Detection of particle dark matter, including motivations, targets, experiments, production of Standard Model particles, and a variety of statistical techniques. In Part II we introduce basic and advanced modelling techniques for propagation of cosmic-rays through the Galaxy and describe astrophysical gamma-ray production, as well as presenting state-of-the-art propagation models of the Milky Way. Finally, in Part III, we employ these models and techniques in order to study several indirect detection signals, including the Fermi GeV excess at the Galactic center, the Fermi 135 GeV line, the 3.5 keV line, and the WMAP-Planck haze.
40 CFR 426.135 - Standards of performance for new sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... greater than 50 gallons per day of process waste water, and employs hydrofluoric acid finishing techniques... any 1 day Average of daily values for 30 consecutive days shall not exceed— Lead 0.2 0.1 Fluoride 26.0... waste water, and employs hydrofluoric acid finishing techniques shall meet the following limitations...
40 CFR 426.135 - Standards of performance for new sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... greater than 50 gallons per day of process waste water, and employs hydrofluoric acid finishing techniques... any 1 day Average of daily values for 30 consecutive days shall not exceed— Lead 0.2 0.1 Fluoride 26.0... waste water, and employs hydrofluoric acid finishing techniques shall meet the following limitations...
40 CFR 426.135 - Standards of performance for new sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... greater than 50 gallons per day of process waste water, and employs hydrofluoric acid finishing techniques... any 1 day Average of daily values for 30 consecutive days shall not exceed— Lead 0.2 0.1 Fluoride 26.0... waste water, and employs hydrofluoric acid finishing techniques shall meet the following limitations...
Managing Age Discrimination: An Examination of the Techniques Used when Seeking Employment
ERIC Educational Resources Information Center
Berger, Ellie D.
2009-01-01
Purpose: This article examines the age-related management techniques used by older workers in their search for employment. Design and Methods: Data are drawn from interviews with individuals aged 45-65 years (N = 30). Results: Findings indicate that participants develop "counteractions" and "concealments" to manage perceived age discrimination.…
Error-in-variables models in calibration
NASA Astrophysics Data System (ADS)
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
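On the frequentist side mentioned above, a standard EIV technique is orthogonal distance regression, which accounts for error in the stimulus as well as the response. A minimal sketch with scipy.odr on a hypothetical straight-line calibration follows; the noise levels sx and sy are assumed known here.

```python
import numpy as np
from scipy import odr

# Hypothetical straight-line calibration with error in the stimulus x
# as well as the response y, the situation where ordinary least squares
# is biased and an errors-in-variables method is needed.
rng = np.random.default_rng(3)
x_true = np.linspace(0, 10, 25)
y_true = 2.0 * x_true + 1.0
x_obs = x_true + rng.normal(scale=0.3, size=x_true.size)
y_obs = y_true + rng.normal(scale=0.5, size=x_true.size)

model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x_obs, y_obs, sx=0.3, sy=0.5)   # both error budgets
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
print("slope, intercept:", fit.beta, "+/-", fit.sd_beta)
```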
3D Modelling the Invisible Using Ground Penetrating Radar
NASA Astrophysics Data System (ADS)
Agrafiotis, P.; Lampropoulos, K.; Georgopoulos, A.; Moropoulou, A.
2017-02-01
An interdisciplinary team from the National Technical University of Athens is performing the restoration of the Holy Aedicule, which covers the Tomb of Christ within the Church of the Holy Sepulchre in Jerusalem. The first important task was to geometrically document the monument for the production of the necessary base material on which the structural and material prospection studies would be based. One task of this action was to assess the structural behavior of this edifice in order to support subsequent works. It was imperative that the internal composition of the construction be documented as reliably as possible. To this end several data acquisition techniques were employed, among them ground penetrating radar. Interpretation of these measurements revealed the position of the rock, remnants of the initial cave of the burial of Christ. This paper reports on the methodology employed to construct the 3D model of the rock and introduce it into the 3D model of the whole building, thus enhancing the information about the structure. The conversion of the radargrams to horizontal sections of the rock is explained and the construction of the 3D model and its insertion into the 3D model of the Holy Aedicule is described.
Artifact-Based Transformation of IBM Global Financing
NASA Astrophysics Data System (ADS)
Chao, Tian; Cohn, David; Flatgard, Adrian; Hahn, Sandy; Linehan, Mark; Nandi, Prabir; Nigam, Anil; Pinel, Florian; Vergo, John; Wu, Frederick Y.
IBM Global Financing (IGF) is transforming its business using the Business Artifact Method, an innovative business process modeling technique that identifies key business artifacts and traces their life cycles as they are processed by the business. IGF is a complex, global business operation with many business design challenges. The Business Artifact Method is a fundamental shift in how to conceptualize, design and implement business operations. The Business Artifact Method was extended to solve the problem of designing a global standard for a complex, end-to-end process while supporting local geographic variations. Prior to employing the Business Artifact method, process decomposition, Lean and Six Sigma methods were each employed on different parts of the financing operation. Although they provided critical input to the final operational model, they proved insufficient for designing a complete, integrated, standard operation. The artifact method resulted in a business operations model that was at the right level of granularity for the problem at hand. A fully functional rapid prototype was created early in the engagement, which facilitated an improved understanding of the redesigned operations model. The resulting business operations model is being used as the basis for all aspects of business transformation in IBM Global Financing.
Normal mode analysis of the IUS/TDRS payload in a payload canister/transporter environment
NASA Technical Reports Server (NTRS)
Meyer, K. A.
1980-01-01
Special modeling techniques were developed to simulate an accurate mathematical model of the transporter/canister/payload system during ground transport of the Inertial Upper Stage/Tracking and Data Relay Satellite (IUS/TDRS) payload. The three finite element models - the transporter, the canister, and the IUS/TDRS payload - were merged into one model and used along with the NASTRAN normal mode analysis. Deficiencies were found in the NASTRAN program that make a total analysis using modal transient response impractical. It was also discovered that inaccuracies may exist for NASTRAN rigid body modes on large models when the Givens method for eigenvalue extraction is employed. These deficiencies, as well as recommendations for improving the NASTRAN program, are discussed.
Implementation of model predictive control for resistive wall mode stabilization on EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2015-10-01
A model predictive control (MPC) method for stabilization of the resistive wall mode (RWM) in the EXTRAP T2R reversed-field pinch is presented. The system identification technique is used to obtain a linearized empirical model of EXTRAP T2R. MPC employs the model for prediction and computes optimal control inputs that satisfy performance criterion. The use of a linearized form of the model allows for compact formulation of MPC, implemented on a millisecond timescale, that can be used for real-time control. The design allows the user to arbitrarily suppress any selected Fourier mode. The experimental results from EXTRAP T2R show that the designed and implemented MPC successfully stabilizes the RWM.
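The essential MPC computation, predicting the state over a horizon with the identified linear model and solving for the input sequence that minimizes a quadratic cost, can be sketched compactly for the unconstrained case. The two-state model below is a hypothetical stand-in for one unstable RWM harmonic, not the identified EXTRAP T2R model.

```python
import numpy as np

def mpc_step(A, B, x0, horizon=10, q=1.0, r=0.1):
    """One receding-horizon step of unconstrained linear MPC: stack the
    predictions x_k = A^k x0 + sum_j A^(k-1-j) B u_j, minimize
    sum(q*|x_k|^2 + r*|u_k|^2) in closed form, apply the first input."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    G = np.zeros((horizon * n, horizon * m))
    for k in range(horizon):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    H = q * (G.T @ G) + r * np.eye(horizon * m)
    U = np.linalg.solve(H, -q * G.T @ (F @ x0))   # optimal input sequence
    return U[:m]                                  # first move only

# hypothetical identified 2-state model with one unstable eigenvalue
A = np.array([[1.05, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.5]])
x = np.array([1.0, 0.0])
for _ in range(20):
    x = A @ x + B @ mpc_step(A, B, x)
print("final state norm:", np.linalg.norm(x))     # driven toward zero
```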
NASA Technical Reports Server (NTRS)
Bair, S.; Winer, W. O.
1980-01-01
Research related to the development of the limiting shear stress rheological model is reported. Techniques were developed for subjecting lubricants to isothermal compression in order to obtain relevant determinations of the limiting shear stress and elastic shear modulus. The isothermal compression limiting shear stress was found to predict very well the maximum traction for a given lubricant. Small amounts of side slip and twist incorporated in the model were shown to have great influence on the rising portion of the traction curve at low slide-roll ratio. The shear rheological model was also applied to a Grubin-like elastohydrodynamic inlet analysis for predicting film thicknesses when employing the limiting shear stress model material behavior.
Technique Developed for Optimizing Traveling-Wave Tubes
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
1999-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT, from 7.1 to 13.5 percent (ref. 1).
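The simulated annealing idea the technique is built on accepts occasional uphill moves with probability exp(-Δf/T) under a falling temperature T, which is what lets it escape the local optima that trap conventional hill-climbing. A generic sketch follows; the two-variable objective is a hypothetical stand-in for the TWT model's (negative) conversion efficiency, not the NASA code.

```python
import numpy as np

def simulated_annealing(f, x0, step=0.1, t0=1.0, cooling=0.95, iters=2000,
                        rng=np.random.default_rng(0)):
    """Textbook simulated annealing: accept uphill moves with
    probability exp(-Δf/T) so the search can escape local optima."""
    x, fx = np.asarray(x0, float), f(x0)
    best, fbest, T = x.copy(), fx, t0
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.size)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling                      # geometric cooling schedule
    return best, fbest

# toy two-parameter objective with a local and a global optimum
f = lambda v: -np.exp(-((v[0] - 1) ** 2 + (v[1] + 2) ** 2)) \
              - 0.5 * np.exp(-10 * ((v[0] + 1) ** 2 + v[1] ** 2))
print(simulated_annealing(f, [0.0, 0.0]))
```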
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
NASA Technical Reports Server (NTRS)
Bachelder, Edward; Hess, Ronald; Godfroy-Cooper, Martine; Aponso, Bimal
2017-01-01
In this study, behavioral models are developed that closely reproduce the pulsive control responses of two pilots from the experimental pool who used markedly different control techniques (styles) while conducting a tracking task. An intriguing finding was that the pilots appeared to: 1) produce a continuous, internally-generated stick signal that they integrated in time; 2) integrate the actual stick position; and 3) compare the two integrations to issue and cease pulse commands. This suggests that the pilots utilized kinesthetic feedback in order to perceive and integrate stick position, supporting the hypothesis that pilots can access and employ the proprioceptive inner feedback loop proposed by Hess' pilot Structural Model. The Pulse Models used in conjunction with the pilot Structural Model closely recreated the pilot data both in the frequency and time domains during closed-loop simulation. This indicates that for the range of tasks and control styles encountered, the models captured the fundamental mechanisms governing pulsive control processes. The pilot Pulse Models give important insight into the amount of remnant (stick output uncorrelated with the forcing function) that arises from nonlinear pilot technique, and into the remaining remnant arising from different sources unrelated to tracking control (i.e., neuromuscular tremor, reallocation of cognitive resources, etc.).
Dynamic drought risk assessment using crop model and remote sensing techniques
NASA Astrophysics Data System (ADS)
Sun, H.; Su, Z.; Lv, J.; Li, L.; Wang, Y.
2017-02-01
Drought risk assessment is of great significance for reducing agricultural drought losses and ensuring food security. The conventional drought risk assessment method evaluates a specific region's exposure to the hazard and its vulnerability to extended periods of water shortage, which is a static evaluation. Dynamic Drought Risk Assessment (DDRA) estimates drought risk according to crop growth and water stress conditions in real time. In this study, a DDRA method using a crop model and remote sensing techniques is proposed. The crop model we employed is the DeNitrification and DeComposition (DNDC) model. The drought risk was quantified by the yield losses predicted by the crop model in a scenario-based method. The crop model was re-calibrated to improve its performance using the Leaf Area Index (LAI) retrieved from MODerate Resolution Imaging Spectroradiometer (MODIS) data. The in-situ, station-based crop model was then extended to assess regional drought risk by integrating crop planted-area mapping. The crop planted area was extracted from MODIS data with the extended CPPI method. This study was implemented and validated on the maize crop in Liaoning province, China.
Resisting the "Employability" Doctrine through Anarchist Pedagogies & Prefiguration
ERIC Educational Resources Information Center
Osborne, Natalie; Grant-Smith, Deanna
2017-01-01
Increasingly those working in higher education are tasked with targeting their teaching approaches and techniques to improve the "employability" of graduates. However, this approach is promoted with little recognition that enhanced employability does not guarantee employment outcomes or the tensions inherent in pursuing this agenda. The…
Identification of cracks in thick beams with a cracked beam element model
NASA Astrophysics Data System (ADS)
Hou, Chuanchuan; Lu, Yong
2016-12-01
The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models formulated in terms of the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked-beam flexibility formulated by means of fracture mechanics, and it takes into consideration shear deformation and coupling between translational and longitudinal vibrations, and thus is particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, and there is no restriction on the location, severity, and number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
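A minimal sketch of the genetic-algorithm search is given below: candidate (location, severity) pairs evolve to minimize the misfit between measured natural frequencies and those of a forward model. The forward model here is a hypothetical smooth surrogate, not the paper's cracked beam element; note that frequency data alone cannot separate mirror-symmetric crack locations in a symmetric beam, so the GA may legitimately return the mirrored solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_freqs(params):
    """Hypothetical forward model: first three natural frequencies of a
    beam as a smooth function of crack position a in (0,1) and severity
    s in (0,1). A real implementation would evaluate the cracked beam
    finite element model described in the paper."""
    a, s = params
    base = np.array([50.0, 140.0, 270.0])
    return base * (1 - 0.3 * s * np.sin(np.pi * a * np.arange(1, 4)) ** 2)

measured = model_freqs([0.35, 0.4])          # synthetic "measurement"

def fitness(p):
    return -np.sum((model_freqs(p) - measured) ** 2)

# minimal real-coded GA: tournament selection, blend crossover, mutation
pop = rng.uniform(0, 1, (40, 2))
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    idx = [max(rng.integers(0, 40, 3), key=lambda i: scores[i])
           for _ in range(40)]                            # tournaments
    parents = pop[idx]
    alpha = rng.uniform(size=(40, 1))
    pop = alpha * parents + (1 - alpha) * parents[::-1]   # blend crossover
    pop += rng.normal(scale=0.02, size=pop.shape)         # mutation
    pop = np.clip(pop, 0, 1)
print("best estimate:", pop[np.argmax([fitness(p) for p in pop])])
```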
Animal and in silico models for the study of sarcomeric cardiomyopathies
Duncker, Dirk J.; Bakkers, Jeroen; Brundel, Bianca J.; Robbins, Jeff; Tardiff, Jil C.; Carrier, Lucie
2015-01-01
Over the past decade, our understanding of cardiomyopathies has improved dramatically, due to improvements in screening and detection of gene defects in the human genome as well as a variety of novel animal models (mouse, zebrafish, and drosophila) and in silico computational models. These novel experimental tools have created a platform that is highly complementary to the naturally occurring cardiomyopathies in cats and dogs that had been available for some time. A fully integrative approach, which incorporates all these modalities, is likely required for significant steps forward in understanding the molecular underpinnings and pathogenesis of cardiomyopathies. Finally, novel technologies, including CRISPR/Cas9, which have already been proved to work in zebrafish, are currently being employed to engineer sarcomeric cardiomyopathy in larger animals, including pigs and non-human primates. In the mouse, the increased speed with which these techniques can be employed to engineer precise ‘knock-in’ models that previously took years to make via multiple rounds of homologous recombination-based gene targeting promises multiple and precise models of human cardiac disease for future study. Such novel genetically engineered animal models recapitulating human sarcomeric protein defects will help bridging the gap to translate therapeutic targets from small animal and in silico models to the human patient with sarcomeric cardiomyopathy. PMID:25600962
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Lai, W.; Douglas, C. C.; Miller, S. N.; Zhang, Y.
2012-12-01
The CI-WATER project is a cooperative effort between the Utah and Wyoming EPSCoR jurisdictions and is funded through a cooperative agreement with the U.S. National Science Foundation EPSCoR. The CI-WATER project is acquiring hardware and developing software cyberinfrastructure (CI) to enhance the accessibility of high-performance computing for water resources modeling in the Western U.S. One component of the project is the development of a large-scale, high-resolution, physically-based, data-driven, integrated computational water resources model, which we call the CI-WATER HPC model. The objective of this model development is to enable evaluation of integrated system behavior to guide and support water system planning and management by individual users, cities, or states. The model is first being tested in the Green River basin of Wyoming, which is the largest tributary to the Colorado River. The model will ultimately be applied to simulate the entire Upper Colorado River basin for hydrological studies, watershed management, and economic analysis, as well as evaluation of potential changes in environmental policy and law, population, land use, and climate. In addition to the hydrologically important processes simulated in many hydrological models, the CI-WATER HPC model will emphasize anthropogenic influences such as land use change, water resources infrastructure, irrigation practices, trans-basin diversions, and urban/suburban development. The model operates on an unstructured mesh, employing an adaptive mesh with grid sizes as small as 10 m where needed, particularly in high-elevation snowmelt regions. Data for the model are derived from remote sensing sources, atmospheric models, and geophysical techniques. Monte Carlo techniques and ensemble Kalman filtering methodologies are employed for data assimilation. The model includes application programming interface (API) standards to allow easy substitution of alternative process-level simulation routines, and provides post-processing, visualization, and communication of massive amounts of output. The open-source CI-WATER model represents a significant advance in water resources modeling and will be useful to water managers, planners, resource economists, and the hydrologic research community in general.
Employment program for patients with severe mental illness in Malaysia: a 3-month outcome.
Wan Kasim, Syarifah Hafizah; Midin, Marhani; Abu Bakar, Abdul Kadir; Sidi, Hatta; Nik Jaafar, Nik Ruzyanei; Das, Srijit
2014-01-01
This study aimed to examine the rate and predictive factors of successful employment at 3 months upon enrolment into an employment program among patients with severe mental illness (SMI). A cross-sectional study using a universal sampling technique was conducted on patients with SMI who completed a 3-month period of being employed at Hospital Permai, Malaysia. A total of 147 patients were approached and 126 were finally included in the statistical analyses. Successful employment was defined as the ability to work 40 or more hours per month. Factors significantly associated with successful employment in bivariate analyses were entered into a multiple logistic regression analysis to identify predictors of successful employment. The rate of successful employment at 3 months was 68.3% (n=81). Significant factors associated with successful employment in bivariate analyses were having a past history of working, good family support, fewer psychiatric admissions, good compliance with medication, good interest in work, living in a hostel, being motivated to work, being satisfied with the job or salary, getting a preferred job, being in competitive or supported employment, and having higher than median scores on the PANSS positive, negative and general psychopathology scales. Significant predictors of employment from a logistic regression model were having a good past history of working (p<0.021; OR 6.12; [95% CI 2.1-11.9]) and getting a preferred job (p<0.032; [OR 4.021; 95% CI 1.83-12.1]). Results showed a high employment rate among patients with SMI. Good past history of working and getting a preferred job were significant predictors of successful employment. Copyright © 2014 Elsevier Inc. All rights reserved.
Analysis of the Harrier forebody/inlet design using computational techniques
NASA Technical Reports Server (NTRS)
Chow, Chuen-Yen
1993-01-01
Under the support of this Cooperative Agreement, computations of transonic flow past the complex forebody/inlet configuration of the AV-8B Harrier II have been performed. The actual aircraft configuration was measured, and its surface and surrounding domain were defined using computational structured grids. The thin-layer Navier-Stokes equations were used to model the flow, along with the Chimera embedded multi-grid technique. A fully conservative, alternating direction implicit (ADI), approximately-factored, partially flux-split algorithm was employed to perform the computation. An existing code was altered to conform with the needs of the study, and special engine face boundary conditions were developed. The algorithm incorporated the Chimera technique and an algebraic turbulence model in order to deal with the embedded multi-grids and the viscous governing equations. Comparison with experimental data has yielded good agreement, given the simplifications incorporated into the analysis. The aim of the present research was to provide a methodology for the numerical solution of complex, combined external/internal flows. This is the first time-dependent Navier-Stokes solution for a geometry in which the fuselage and inlet share a wall. The results indicate that the methodology used here is a viable tool for transonic aircraft modeling.
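The alternating-direction-implicit factorization named above is easiest to see on a much simpler problem. The Python sketch below applies one Peaceman-Rachford ADI step to the 2D heat equation: each half-step is implicit in one coordinate direction only, so only tridiagonal systems are solved. It is a toy analogue under assumed grid and zero-Dirichlet boundaries, not the flux-split Navier-Stokes algorithm of the study.

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_heat_step(u, alpha, dt, h):
    """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy) on a
    square grid with zero Dirichlet boundaries (toy illustration only)."""
    n = u.shape[0]
    r = alpha * dt / (2 * h * h)
    # Tridiagonal operator (I - r*D2) in banded storage for interior points
    ab = np.zeros((3, n - 2))
    ab[0, 1:] = -r          # superdiagonal
    ab[1, :] = 1 + 2 * r    # diagonal
    ab[2, :-1] = -r         # subdiagonal
    half = u.copy()
    # Sweep 1: implicit in x, explicit second difference in y
    for j in range(1, n - 1):
        rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        half[1:-1, j] = solve_banded((1, 1), ab, rhs)
    out = half.copy()
    # Sweep 2: implicit in y, explicit second difference in x
    for i in range(1, n - 1):
        rhs = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1] + half[i + 1, 1:-1])
        out[i, 1:-1] = solve_banded((1, 1), ab, rhs)
    return out

u = np.zeros((33, 33)); u[16, 16] = 1.0   # point of heat
for _ in range(10):
    u = adi_heat_step(u, alpha=1.0, dt=1e-3, h=1.0 / 32)
```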
A Numerical Model of Unsteady, Subsonic Aeroelastic Behavior. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Strganac, Thomas W.
1987-01-01
A method for predicting unsteady, subsonic aeroelastic responses was developed. The technique accounts for aerodynamic nonlinearities associated with angles of attack, vortex-dominated flow, static deformations, and unsteady behavior. The fluid and the wing together are treated as a single dynamical system, and the equations of motion for the structure and flow field are integrated simultaneously and interactively in the time domain. The method employs an iterative scheme based on a predictor-corrector technique. The aerodynamic loads are computed by the general unsteady vortex-lattice method and are determined simultaneously with the motion of the wing. Because the unsteady vortex-lattice method predicts the wake as part of the solution, the history of the motion is taken into account; hysteresis is predicted. Two models are used to demonstrate the technique: a rigid wing on an elastic support experiencing plunge and pitch about the elastic axis, and an elastic wing rigidly supported at the root chord experiencing spanwise bending and twisting. The method can be readily extended to account for structural nonlinearities and/or substitute aerodynamic load models. The time domain solution coupled with the unsteady vortex-lattice method provides the capability of graphically depicting wing and wake motion.
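As a minimal illustration of the iterative predictor-corrector coupling described above, the sketch below integrates a single-degree-of-freedom plunging section whose aerodynamic load is re-evaluated at the predicted state on each corrector pass. The toy load function stands in for the unsteady vortex-lattice evaluation, and all coefficients are hypothetical.

```python
import numpy as np

def aero_load(h, hdot, t):
    """Placeholder aerodynamic load; stands in for the unsteady
    vortex-lattice evaluation, which would depend on the wake history."""
    return -0.3 * hdot - 0.1 * h + 0.05 * np.sin(2.0 * t)

def step(h, hdot, t, dt, m=1.0, c=0.02, k=1.0, iters=5):
    """One predictor-corrector step: predict with the current load, then
    correct by re-evaluating the load at the predicted state (trapezoidal
    averaging of the end-point slopes, iterated to convergence)."""
    def accel(h_, hd_, t_):
        return (aero_load(h_, hd_, t_) - c * hd_ - k * h_) / m
    a0 = accel(h, hdot, t)
    h1, hd1 = h + dt * hdot, hdot + dt * a0     # predictor (explicit Euler)
    for _ in range(iters):                      # corrector iterations
        a1 = accel(h1, hd1, t + dt)
        h1 = h + 0.5 * dt * (hdot + hd1)
        hd1 = hdot + 0.5 * dt * (a0 + a1)
    return h1, hd1

h, hd, t, dt = 0.1, 0.0, 0.0, 0.01
for _ in range(1000):
    h, hd = step(h, hd, t, dt)
    t += dt
```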
IGA: A Simplified Introduction and Implementation Details for Finite Element Users
NASA Astrophysics Data System (ADS)
Agrawal, Vishal; Gautam, Sachin S.
2018-05-01
Isogeometric analysis (IGA) is a recently introduced technique that employs the Computer Aided Design (CAD) concept of Non-Uniform Rational B-Splines (NURBS) to bridge the substantial bottleneck between the CAD and finite element analysis (FEA) fields. The simplified transition of exact CAD models into the analysis alleviates the issues originating from geometrical discontinuities and thus significantly reduces the design-to-analysis time in comparison to the traditional FEA technique. Since its origination, research in the field of IGA has been accelerating, and the method has been applied to various problems. However, the employment of CAD tools in the area of FEA necessitates adapting the existing implementation procedure to the framework of IGA. Moreover, the use of IGA requires in-depth knowledge of both the CAD and FEA fields, which can be overwhelming for a beginner in IGA. Hence, in this paper, a simplified introduction and implementation details for incorporating the NURBS-based IGA technique within an existing FEA code are presented. It is shown that, with few modifications, the available standard code structure of FEA can be adapted for IGA. For a clear and concise explanation of these modifications, a step-by-step implementation of a benchmark plate with a circular hole under the action of in-plane tension is included.
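The key ingredient that distinguishes an IGA code from a standard FEA code is the basis. The Python sketch below evaluates NURBS basis functions via the Cox-de Boor recursion followed by rational weighting; the knot vector and weights in the example are arbitrary placeholders, and a production code would use a non-recursive evaluation.

```python
import numpy as np

def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p over knot vector U, evaluated at parameter u."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = 0.0 if U[i + p] == U[i] else \
        (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    right = 0.0 if U[i + p + 1] == U[i + 1] else \
        (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
    return left + right

def nurbs_basis(u, p, U, w):
    """Rational (NURBS) basis: weighted B-splines normalized to sum to one."""
    N = np.array([bspline_basis(i, p, u, U) for i in range(len(w))])
    return w * N / np.dot(w, N)

# Quadratic NURBS on an open knot vector; uniform weights reduce to B-splines.
U = [0, 0, 0, 0.5, 1, 1, 1]
w = np.array([1.0, 0.8, 0.8, 1.0])
print(nurbs_basis(0.3, 2, U, w))
```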
Prewarping techniques in imaging: applications in nanotechnology and biotechnology
NASA Astrophysics Data System (ADS)
Poonawala, Amyn; Milanfar, Peyman
2005-03-01
In all imaging systems, the underlying process introduces undesirable distortions that cause the output signal to be a warped version of the input. When the input to such systems can be controlled, pre-warping techniques can be employed, which consist of systematically modifying the input so that it cancels out (or compensates for) the process losses. In this paper, we focus on the mask (reticle) design problem for 'optical micro-lithography', a process similar to photographic printing used for transferring binary circuit patterns onto silicon wafers. We use a pixel-based mask representation and model the above process as a cascade of convolution (aerial image formation) and thresholding (high-contrast recording) operations. The pre-distorted mask is obtained by minimizing the norm of the difference between the 'desired' output image and the 'reproduced' output image. We employ a regularization framework to ensure that the resulting masks are close to binary as well as simple and easy to fabricate. Finally, we provide insight into two additional applications of pre-warping techniques: first, 'e-beam lithography', used for fabricating nano-scale structures, and second, 'electronic visual prosthesis', which aims at providing limited vision to the blind by using a retinally implanted prosthetic chip capable of electrically stimulating the retinal neuron cells.
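A minimal numerical sketch of this kind of mask pre-distortion is given below, assuming a Gaussian point-spread function and relaxing the hard threshold to a steep sigmoid so the objective is differentiable. The kernel, step sizes, and the simple binarity penalty are stand-ins, not the paper's exact formulation.

```python
import numpy as np
from scipy.signal import fftconvolve

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prewarp_mask(desired, psf, steps=300, lr=0.1, a=25.0, t=0.5, lam=0.01):
    """Gradient-descent mask pre-distortion for a convolve-then-threshold
    imaging model.  `lam * m*(1-m)` penalizes gray (non-binary) pixels, a
    stand-in for the paper's regularization framework."""
    m = desired.astype(float).copy()             # initialize mask at target
    for _ in range(steps):
        aerial = fftconvolve(m, psf, mode="same")  # aerial image formation
        z = sigmoid(a * (aerial - t))              # soft recording step
        err = z - desired
        # Chain rule; the psf is symmetric here, so correlation == convolution.
        grad = fftconvolve(2 * err * a * z * (1 - z), psf, mode="same")
        grad += lam * (1 - 2 * m)                  # derivative of m*(1-m)
        m = np.clip(m - lr * grad, 0.0, 1.0)
    return m

# Toy target pattern and Gaussian blur kernel (assumed process model).
x = np.linspace(-2, 2, 9)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2)); psf /= psf.sum()
desired = np.zeros((32, 32)); desired[8:24, 14:18] = 1.0
mask = prewarp_mask(desired, psf)
```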
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from the accelerometers. Several methods have been proposed that use these data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects performing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. The choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choices of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
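The pipeline compared above can be sketched as: extract a feature vector per accelerometer window, then cross-validate a classifier over activity labels. The features and the random-forest classifier below are generic illustrations run on synthetic data, not the specific feature sets or learners evaluated in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def stat_features(window):
    """Simple statistical features per accelerometer window (one axis here);
    illustrative only -- the paper evaluates several richer feature sets."""
    return [window.mean(), window.std(), window.min(), window.max(),
            np.percentile(window, 25), np.percentile(window, 75),
            np.abs(np.diff(window)).mean()]

rng = np.random.default_rng(0)
# Toy stand-in for hip-accelerometer data: 400 windows of 100 samples each,
# with labels for 8 hypothetical activity types.
windows = rng.normal(size=(400, 100))
labels = rng.integers(0, 8, size=400)
X = np.array([stat_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())
```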
Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S
2018-03-01
Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of these techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies, and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and we extract flow fields for spatial and temporal changes utilizing a gravity model. We then visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks and crime patterns. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance.
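The first stage of such a technique, estimating a continuous event density and deriving a flow-like field from its change over time, can be roughed out as follows in Python. The gradient-of-density-change proxy below merely stands in for the paper's gravity-model formulation, and all data are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_grid(points, grid_x, grid_y):
    """Continuous event density on a grid via Gaussian KDE."""
    kde = gaussian_kde(points.T)
    xx, yy = np.meshgrid(grid_x, grid_y)
    return kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

rng = np.random.default_rng(0)
gx = gy = np.linspace(0, 10, 40)
# Non-directional events at two time slices (no trajectories recorded);
# the cluster drifts to the right between slices.
pts_t0 = rng.normal(loc=[4, 5], scale=0.8, size=(300, 2))
pts_t1 = rng.normal(loc=[6, 5], scale=0.8, size=(300, 2))
d0 = density_grid(pts_t0, gx, gy)
d1 = density_grid(pts_t1, gx, gy)

# Crude flow proxy: spatial gradient of the density change, i.e. vectors
# pointing from density loss toward density gain -- a stand-in for the
# paper's gravity model.
dd = d1 - d0
v_y, v_x = np.gradient(dd, gy, gx)
print(v_x.shape, v_y.shape)   # flow components on the 40x40 grid
```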
Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.
Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante
2014-10-01
In this paper, the well-known stagewise additive modeling using a multiclass exponential loss (SAMME) boosting algorithm is extended, using a cost-sensitive approach, to address problems where there exists a natural order in the targets. The proposed ensemble model uses an extreme learning machine (ELM) model as the base classifier (with the Gaussian kernel and an additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to state-of-the-art boosting algorithms, in particular those using ELM as the base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem is presented as an unbiased alternative to the existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns according to the order of the targets further enables the classifier to tackle ordinal regression problems. The proposed method has been validated in an experimental study comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
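A standard closed form for a weighted, regularized least-squares fit of a kernel ELM is sketched below in Python; a boosting wrapper could reuse it with new pattern weights at each iteration. This is a generic formulation consistent with the abstract's description, not the paper's exact derivation, and the weighting scheme shown is illustrative.

```python
import numpy as np

def weighted_kernel_elm_fit(X, T, W, C=10.0, gamma=0.5):
    """Closed-form weighted least-squares fit of a kernel ELM.

    Solves min ||W^(1/2)(K beta - T)||^2 + (1/C)||beta||^2, giving
    beta = (K' W K + I/C)^(-1) K' W T.  Gaussian kernel and ridge term
    follow the abstract; the rest is a generic sketch."""
    K = np.exp(-gamma * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))
    A = K.T @ (W[:, None] * K) + np.eye(len(X)) / C
    beta = np.linalg.solve(A, K.T @ (W[:, None] * T))
    return beta, K

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
T = np.eye(3)[rng.integers(0, 3, 60)]   # one-hot targets, 3 ordered classes
W = np.ones(60) / 60                    # boosting pattern weights (uniform at t=0)
beta, K = weighted_kernel_elm_fit(X, T, W)
pred = (K @ beta).argmax(axis=1)        # class with the largest output
```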
Mitigating randomness of consumer preferences under certain conditional choices
NASA Astrophysics Data System (ADS)
Bothos, John M. A.; Thanos, Konstantinos-Georgios; Papadopoulou, Eirini; Daveas, Stelios; Thomopoulos, Stelios C. A.
2017-05-01
Agent-based crowd behaviour constitutes a significant field of research that has drawn considerable attention in recent years. Agent-based crowd simulation techniques have been used extensively to forecast the behaviour of larger or smaller crowds under given conditions, influenced by specific cognition models and behavioural rules and norms imposed from the beginning. Our research employs conditional event algebra, statistical methodology, and agent-based crowd simulation techniques to develop a behavioural econometric model of the selection of a certain economic behaviour by a consumer who faces a spectrum of potential choices when moving and acting in a multiplex mall. More specifically, we analyse the influence of demographic, economic, social, and cultural factors on the economic behaviour of an individual, and we then link this behaviour with the general behaviour of crowds of consumers in multiplex malls using agent-based crowd simulation techniques. We then estimate our model using Generalized Least Squares and Maximum Likelihood methods to obtain the most probable forecasts of the agent's behaviour. Our model is indicative of the formation of consumers' spectra of choices in multiplex malls under the condition of predefined preferences and can serve as a guide for further research in this area.
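As a rough illustration of the Generalized Least Squares estimation step, the Python sketch below fits a toy consumer-spending relation with heteroskedastic errors. The covariate names mirror the factor groups mentioned in the text but are hypothetical, and the assumed error covariance would in practice be estimated (feasible GLS).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Hypothetical consumer covariates (demographic, economic, social/cultural).
age = rng.uniform(18, 70, n)
income = rng.lognormal(3.0, 0.5, n)
social = rng.normal(size=n)
spend = (5 + 0.02 * age + 0.8 * np.log(income) + 0.5 * social
         + rng.normal(scale=1 + 0.02 * age))     # heteroskedastic noise

X = sm.add_constant(np.column_stack([age, np.log(income), social]))
# GLS with an (assumed) diagonal error covariance growing with age.
sigma = np.diag((1 + 0.02 * age) ** 2)
fit = sm.GLS(spend, X, sigma=sigma).fit()
print(fit.params)
```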
Kant, Nasir Ali; Dar, Mohamad Rafiq; Khanday, Farooq Ahmad
2015-01-01
The output of every neuron in a neural network is specified by the employed activation function (AF), which therefore forms the heart of the network. As far as the design of artificial neural networks (ANNs) is concerned, a hardware approach is preferred over a software one because it promises full utilization of the application potential of ANNs. Therefore, besides some arithmetic blocks, designing the AF in hardware is the most important step in designing an ANN. While attempting to design the AF in hardware, the designs should be compatible with modern Very Large Scale Integration (VLSI) design techniques. In this regard, the implemented designs should: be realized only in Metal Oxide Semiconductor (MOS) technology in order to be compatible with digital designs, provide an electronic tunability feature, and be able to operate at ultra-low voltage. Companding is one of the promising circuit design techniques for achieving these goals. In this paper, a 0.5 V design of Liao's AF using the sinh-domain technique is introduced. Furthermore, the function is tested by implementing an inertial neuron model. The performance of the AF and the inertial neuron model has been evaluated through simulation results, using the PSPICE software with the MOS transistor models provided by the 0.18-μm Taiwan Semiconductor Manufacturing Company (TSMC) CMOS process.
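The companding principle underlying sinh-domain circuits can be illustrated numerically: the input is compressed, processed at low swing, and expanded at the output. The Python sketch below is only a conceptual demonstration of that compress-process-expand identity, not a model of the 0.5 V circuit or of Liao's AF itself.

```python
import numpy as np

def sinh_domain(process, x):
    """Conceptual instantaneous companding: compress the signal with the
    inverse sinh, process it in the compressed (low-swing) domain, then
    expand with sinh at the output."""
    compressed = np.arcsinh(x)   # input compressor stage
    y = process(compressed)      # internal low-swing processing
    return np.sinh(y)            # output expander stage

x = np.linspace(-5, 5, 11)
# Identity processing recovers the input exactly, as companding requires.
print(np.allclose(sinh_domain(lambda u: u, x), x))
```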
Magnetic suspension - Today's marvel, tomorrow's tool
NASA Technical Reports Server (NTRS)
Lawing, Pierce L.
1989-01-01
Through sustained advocacy of magnetic suspension systems (MSSs) for wind-tunnel model positioning, NASA's Langley facility has brought the requisite technologies, including large magnets, computers, automatic control techniques, and apparatus configurations, to a state of development at which the construction of MSSs for large wind tunnels can be contemplated. Attention is presently given to the prospects for MSSs in wind tunnels employing superfluid helium atmospheres to obtain very high Reynolds numbers, where the MSS can yield substantial enhancements of wind-tunnel productivity.