NASA Astrophysics Data System (ADS)
Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.
2017-03-01
We have developed and implemented in software a mathematical model of the nonstationary separation processes occurring in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. With the use of this model, the parameters of the separation process of germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.
Separability of Item and Person Parameters in Response Time Models.
ERIC Educational Resources Information Center
Van Breukelen, Gerard J. P.
1997-01-01
Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric…
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
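As an illustration of the separable (variable projection) least-squares idea described above, the following minimal Python sketch fits a sum-of-exponentials time-activity curve by solving the linear amplitudes in closed form inside a nonlinear search over the rate constants. The model, data, and parameter values are synthetic placeholders, not the paper's multi-tracer compartment equations.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative time-activity model: y(t) = a1*exp(-k1*t) + a2*exp(-k2*t).
# The amplitudes enter linearly and the rate constants nonlinearly, so for any
# trial set of rate constants the best amplitudes have a closed-form solution.
t = np.linspace(0.0, 60.0, 121)                    # synthetic sampling grid (min)
rng = np.random.default_rng(0)
k_true, a_true = np.array([0.05, 0.4]), np.array([3.0, 1.5])
y = np.exp(-np.outer(t, k_true)) @ a_true + 0.05 * rng.standard_normal(t.size)

def basis(k):
    """Columns are the basis functions multiplied by the linear amplitudes."""
    return np.exp(-np.outer(t, k))

def projected_cost(k):
    """Residual norm after solving the inner linear problem for the amplitudes."""
    if np.any(k <= 0):                             # keep the search in a sane region
        return 1e12
    A = basis(k)
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ a) ** 2))

# The nonlinear search now runs over the two rate constants only.
res = minimize(projected_cost, x0=[0.1, 0.2], method="Nelder-Mead")
k_fit = res.x
a_fit, *_ = np.linalg.lstsq(basis(k_fit), y, rcond=None)
print("rate constants:", k_fit, "amplitudes:", a_fit)
```

The dimensionality reduction is the point: the outer optimizer never sees the amplitudes, which is what makes exhaustive or gradient searches over the remaining nonlinear parameters tractable.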
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCp for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of the thermodynamic parameters for a series of analytes is significantly reduced. This new method allows for precise predictions of retention time, with the average error being only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for ¹tr and 2.1% for ²tr. Copyright © 2013 Elsevier B.V. All rights reserved.
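The sketch below illustrates, under simplifying assumptions, the kind of fit the abstract describes: retention times under linear temperature programmes are predicted from a three-parameter thermodynamic model and the parameters are adjusted with the Nelder-Mead simplex. The retention model, column constants (phase ratio, hold-up time), and programmes are illustrative placeholders rather than the authors' values.

```python
import numpy as np
from scipy.optimize import minimize

R, T0 = 8.314, 363.15          # gas constant, reference temperature (placeholder)
beta, t_dead = 250.0, 1.0      # phase ratio and hold-up time in min (placeholders)
T_start, t_hold = 313.15, 1.0  # programme start temperature (K) and initial hold (min)

def k_factor(T, dH, dS, dCp):
    """Retention factor from the three-parameter thermodynamic model."""
    dG = dH + dCp * (T - T0) - T * (dS + dCp * np.log(T / T0))
    return np.exp(-dG / (R * T)) / beta

def retention_time(params, ramp, dt=0.05, t_max=100.0):
    """March the solute along the column under a linear temperature programme
    (ramp in K/min) until it has traversed the full column length."""
    dH, dS, dCp = params
    t, travelled = 0.0, 0.0
    while travelled < 1.0 and t < t_max:
        T = T_start if t < t_hold else T_start + ramp * (t - t_hold)
        travelled += dt / (t_dead * (1.0 + k_factor(T, dH, dS, dCp)))
        t += dt
    return t

ramps = [5.0, 10.0, 20.0]                      # several programmes constrain the fit
true = (-45e3, -90.0, 60.0)                    # synthetic "true" dH, dS, dCp
t_meas = [retention_time(true, r) for r in ramps]

def loss(p):
    return sum((retention_time(p, r) - tm) ** 2 for r, tm in zip(ramps, t_meas))

fit = minimize(loss, x0=(-40e3, -80.0, 50.0), method="Nelder-Mead")
print("estimated dH, dS, dCp:", fit.x)
```

Using several temperature programmes in the objective is what makes the three parameters identifiable from programmed runs instead of a set of isothermal experiments.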
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
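A minimal sketch of the regression-based global sensitivity analysis idea: demographic parameters are sampled jointly, the impact scenario is included as an explicit factor, and standardized regression coefficients rank the contributions. The toy population model and parameter ranges are invented for illustration and are unrelated to the Snowy Plover analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rep = 500                                   # replicate parameter sets

# Sample uncertain demographic parameters (uniform ranges are illustrative).
survival = rng.uniform(0.55, 0.75, n_rep)
fecundity = rng.uniform(0.8, 1.6, n_rep)
init_pop = rng.uniform(200, 600, n_rep)
scenario = rng.integers(0, 2, n_rep)          # 0 = baseline, 1 = impact scenario

def final_abundance(s, f, n0, impact, years=20):
    """Toy scalar population projection; the impact scenario reduces survival."""
    n = n0
    for _ in range(years):
        n = n * (s - 0.05 * impact) + n * f * 0.3
        n *= np.exp(rng.normal(0.0, 0.05))    # environmental stochasticity
    return n

y = np.array([final_abundance(*args)
              for args in zip(survival, fecundity, init_pop, scenario)])

# Standardized regression coefficients as a simple global importance measure.
X = np.column_stack([survival, fecundity, init_pop, scenario])
Xs = (X - X.mean(0)) / X.std(0)
ys = (np.log(y) - np.log(y).mean()) / np.log(y).std()
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n_rep), Xs]), ys, rcond=None)
for name, c in zip(["survival", "fecundity", "init_pop", "scenario"], coef[1:]):
    print(f"{name:10s} standardized coefficient: {c:+.2f}")
```

Treating the scenario as one of the regressors is a simple way to keep its effect separate from the parameter-uncertainty effects, in the spirit of the separation the abstract describes.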
NASA Astrophysics Data System (ADS)
Mamin, R. F.; Shaposhnikova, T. S.; Kabanov, V. V.
2018-03-01
We have considered a model of a second-order phase transition for a Coulomb-frustrated 2D charged system. The coupling of the order parameter with the charge was considered as the local temperature. We have found that in such a system the appearance of a phase-separated state is possible. By numerical simulation, we have obtained different types ("stripes," "rings," "snakes") of phase-separated states and determined the parameter ranges for these states. Thus the system undergoes a series of phase transitions as the temperature decreases. First, the system moves from the homogeneous state with a zero order parameter to a phase-separated state with two phases, in one of which the order parameter is zero and, in the other, it is nonzero (τ > 0). Then a first-order transition occurs to another phase-separated state, in which both phases have different, nonzero values of the order parameter (for τ < 0). Only a further decrease of temperature leads to a transition to a homogeneous ordered state.
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations
NASA Astrophysics Data System (ADS)
Romanihin, S. M.; Tronin, I. V.
2016-09-01
We present the method and results of determining optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters that provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters of the Iguassu GC for different rotor speeds.
Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle
NASA Technical Reports Server (NTRS)
Ali, Yasmin; Chuhta, Jesse D.; Hughes, Michael P.; Radke, Tara S.
2015-01-01
Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics models used to verify no re-contact. The NASA Orion Multi-Purpose Crew Vehicle (MPCV) architecture includes a highly-integrated Forward Bay Cover (FBC) jettison assembly design that combines parachutes and piston thrusters to separate the FBC from the Crew Module (CM) and avoid re-contact. A multi-disciplinary team across numerous organizations examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the FBC separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute elements, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1. Additional testing will be required to support human certification of this separation event, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust human-rated FBC separation event.
Statistical methods for the beta-binomial model in teratology.
Yamamoto, E; Yanagimoto, T
1994-01-01
The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on the model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood yields a likelihood-based inference. This leads to reduced biases of estimators and also to improved accuracy of the empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716
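A small sketch of maximum-likelihood fitting of the beta-binomial model to litter-type data, assuming a mean/overdispersion parameterization; the data are synthetic and this does not reproduce the separate-inference scheme discussed in the abstract.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

rng = np.random.default_rng(2)
# Synthetic teratology-style data: litter sizes n_i and affected counts x_i.
n = rng.integers(6, 14, size=40)
x = betabinom.rvs(n, a=2.0, b=8.0, random_state=3)

def neg_log_lik(params):
    """Beta-binomial log-likelihood in a mean/overdispersion parameterization:
    a = mu*phi, b = (1-mu)*phi with mu in (0,1) and phi > 0."""
    logit_mu, log_phi = params
    mu, phi = 1.0 / (1.0 + np.exp(-logit_mu)), np.exp(log_phi)
    return -np.sum(betabinom.logpmf(x, n, mu * phi, (1.0 - mu) * phi))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
logit_mu, log_phi = fit.x
mu_hat = 1.0 / (1.0 + np.exp(-logit_mu))
print("estimated mean incidence:", mu_hat, "overdispersion phi:", np.exp(log_phi))
```

Working on the (mean, overdispersion) scale is one common way to keep the two parameters loosely separated during estimation, which is the general concern the abstract raises.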
Applications of the solvation parameter model in reversed-phase liquid chromatography.
Poole, Colin F; Lenca, Nicole
2017-02-24
The solvation parameter model is widely used to provide insight into the retention mechanism in reversed-phase liquid chromatography, for column characterization, and in the development of surrogate chromatographic models for biopartitioning processes. The properties of the separation system are described by five system constants representing all possible intermolecular interactions for neutral molecules. The general model can be extended to include ions and enantiomers by adding new descriptors to encode the specific properties of these compounds. System maps provide a comprehensive overview of the separation system as a function of mobile phase composition and/or temperature for method development. The solvation parameter model has been applied to gradient elution separations but here theory and practice suggest a cautious approach since the interpretation of system and compound properties derived from its use are approximate. A growing application of the solvation parameter model in reversed-phase liquid chromatography is the screening of surrogate chromatographic systems for estimating biopartitioning properties. Throughout the discussion of the above topics success as well as known and likely deficiencies of the solvation parameter model are described with an emphasis on the role of the heterogeneous properties of the interphase region on the interpretation and understanding of the general retention mechanism in reversed-phase liquid chromatography for porous chemically bonded sorbents. Copyright © 2016 Elsevier B.V. All rights reserved.
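The system constants of the solvation parameter model are normally obtained by multiple linear regression of retention factors on solute descriptors, log k = c + eE + sS + aA + bB + vV. The sketch below shows that regression step with synthetic descriptor and retention values; the numbers are placeholders, not measured data, and the five system constants correspond to those mentioned in the abstract.

```python
import numpy as np

# Solvation parameter (LSER) model: log k = c + eE + sS + aA + bB + vV.
# Rows are probe solutes with Abraham-type descriptors (values are synthetic).
descriptors = np.array([
    # E     S     A     B     V
    [0.61, 0.52, 0.00, 0.14, 0.716],
    [0.80, 0.52, 0.00, 0.48, 1.057],
    [0.82, 1.01, 0.00, 0.48, 1.073],
    [0.87, 1.11, 0.26, 0.83, 1.131],
    [1.45, 0.89, 0.57, 0.22, 0.975],
    [0.66, 0.56, 0.00, 0.15, 0.998],
    [0.92, 0.60, 0.00, 0.16, 1.280],
    [0.60, 0.51, 0.37, 0.48, 0.916],
])
log_k = np.array([0.35, 0.10, -0.05, -0.60, -0.25, 0.80, 1.30, -0.40])  # synthetic

# Fit the intercept c and the system constants (e, s, a, b, v) by least squares.
X = np.column_stack([np.ones(len(log_k)), descriptors])
coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
for name, value in zip("c e s a b v".split(), coef):
    print(f"system constant {name}: {value:+.3f}")
```

Repeating this fit across mobile-phase compositions or temperatures is what produces the "system maps" the abstract refers to.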
Another convex combination of product states for the separable Werner state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azuma, Hiroo; Ban, Masashi; CREST, Japan Science and Technology Agency, 1-1-9 Yaesu, Chuo-ku, Tokyo 103-0028
2006-03-15
In this paper, we write down the separable Werner state in a two-qubit system explicitly as a convex combination of product states, which is different from the convex combination obtained by Wootters' method. The Werner state in a two-qubit system has a single real parameter and varies from inseparable to separable according to the value of its parameter. We derive a hidden variable model that is induced by our decomposed form for the separable Werner state. From our explicit form of the convex combination of product states, we understand the following: The critical point of the parameter for separability of the Werner state comes from positivity of local density operators of the qubits.
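A short sketch of the separability threshold the abstract refers to, using the Peres-Horodecki (positive partial transpose) criterion for the two-qubit Werner state in one common parameterization; the critical value p = 1/3 is the standard result for this convention, which may differ from the paper's.

```python
import numpy as np

def werner_state(p):
    """Two-qubit Werner state p*|psi-><psi-| + (1-p)*I/4 (one common convention)."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
    return p * np.outer(psi_minus, psi_minus) + (1.0 - p) * np.eye(4) / 4.0

def partial_transpose(rho):
    """Partial transpose over the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices: (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

# Peres-Horodecki: for two qubits, PPT is necessary and sufficient for separability.
for p in [0.2, 1.0 / 3.0, 0.4, 0.8]:
    eigs = np.linalg.eigvalsh(partial_transpose(werner_state(p)))
    print(f"p = {p:.3f}  min eigenvalue of rho^T_B = {eigs.min():+.4f}  "
          f"{'separable' if eigs.min() >= -1e-12 else 'entangled'}")
```

The minimum partial-transpose eigenvalue is (1 - 3p)/4 in this convention, so the sign change at p = 1/3 marks the separability boundary discussed above.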
Forward Bay Cover Separation Modeling and Testing for the Orion Multi-Purpose Crew Vehicle
NASA Technical Reports Server (NTRS)
Ali, Yasmin; Radke, Tara; Chuhta, Jesse; Hughes, Michael
2014-01-01
Spacecraft multi-body separation events during atmospheric descent require complex testing and analysis to validate the flight separation dynamics model and to verify no recontact. NASA Orion Multi-Purpose Crew Vehicle (MPCV) teams examined key model parameters and risk areas to develop a robust but affordable test campaign in order to validate and verify the Forward Bay Cover (FBC) separation event for Exploration Flight Test-1 (EFT-1). The FBC jettison simulation model is highly complex, consisting of dozens of parameters varied simultaneously, with numerous multi-parameter interactions (coupling and feedback) among the various model elements, and encompassing distinct near-field, mid-field, and far-field regimes. The test campaign was composed of component-level testing (for example gas-piston thrusters and parachute mortars), ground FBC jettison tests, and FBC jettison air-drop tests that were accomplished by a highly multi-disciplinary team. Three ground jettison tests isolated the testing of mechanisms and structures to anchor the simulation models excluding aerodynamic effects. Subsequently, two air-drop tests added aerodynamic and parachute parameters, and served as integrated system demonstrations, which had been preliminarily explored during the Orion Pad Abort-1 (PA-1) flight test in May 2010. Both ground and drop tests provided extensive data to validate analytical models and to verify the FBC jettison event for EFT-1, but more testing is required to support human certification, for which NASA and Lockheed Martin are applying knowledge from Apollo and EFT-1 testing and modeling to develop a robust but affordable human spacecraft capability.
Faghihi, Faramarz; Moustafa, Ahmed A.
2015-01-01
Information processing in the hippocampus begins by transferring spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). The activity pattern in the EC is separated by the DG, which thus plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on the encoding efficiency of its single neurons and on its pattern separation efficiency. In this study, encoding by the DG is modeled such that single-neuron and pattern separation efficiency are measured in simulations over different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of DG granule cells. Separated inputs, represented as activated EC neurons with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where a deficiency in pattern separation of the DG has been observed. PMID:25859189
NASA Astrophysics Data System (ADS)
El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.
2016-02-01
Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and hence wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (Stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often carried out by separating the update of the state and the parameters using the so-called Dual EnKF. The dual filter is computationally more expensive than the Joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, and a model propagation step is performed afterwards. We test the performance of the new smoothing-based scheme against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile data (down to 2000 m depth) and surface CO2 partial pressure measurements from the Mike weather station (66° N, 2° E) to estimate different biological parameters of phytoplankton and zooplankton. We analyze the performance of the filters in terms of the complexity and accuracy of the state and parameter estimates.
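The following heavily simplified scalar example illustrates the difference in spirit between joint and separated ("dual"-style) EnKF updates of a state and a parameter; it is not the authors' one-step-ahead smoothing scheme or the NorESM configuration, just the standard stochastic EnKF update applied in two separate steps on a toy model.

```python
import numpy as np

rng = np.random.default_rng(4)

def step(x, a):
    """Toy nonlinear 'ecosystem' model with one state and one uncertain parameter."""
    return x + 0.1 * a * x * (1.0 - x / 10.0) + 0.05 * rng.standard_normal(np.shape(x))

def enkf_update(ens, pred_obs, y, obs_var):
    """Stochastic EnKF update of a scalar ensemble given its predicted observations."""
    cov_xy = np.cov(ens, pred_obs)[0, 1]
    gain = cov_xy / (np.var(pred_obs, ddof=1) + obs_var)
    perturbed_obs = y + np.sqrt(obs_var) * rng.standard_normal(ens.shape)
    return ens + gain * (perturbed_obs - pred_obs)

# Synthetic truth and noisy observations of the state.
a_true, n_steps, obs_var = 0.8, 60, 0.05
x_true = np.empty(n_steps)
x_true[0] = 1.0
for k in range(1, n_steps):
    x_true[k] = step(x_true[k - 1], a_true)
obs = x_true + np.sqrt(obs_var) * rng.standard_normal(n_steps)

# "Dual"-style filtering: the parameter ensemble is updated first, then the state
# ensemble is re-propagated with the updated parameters and corrected separately.
n_ens = 50
x_ens = rng.normal(1.0, 0.5, n_ens)
a_ens = rng.normal(0.5, 0.3, n_ens)
for k in range(1, n_steps):
    pred = step(x_ens, a_ens)                     # predicted observation per member
    a_ens = enkf_update(a_ens, pred, obs[k], obs_var)
    x_ens = step(x_ens, a_ens)                    # propagate again with updated parameters
    x_ens = enkf_update(x_ens, x_ens.copy(), obs[k], obs_var)

print(f"estimated parameter: {a_ens.mean():.3f} +/- {a_ens.std():.3f} (truth {a_true})")
```

The extra propagation with the already-updated parameters is what makes the separated scheme more expensive than a single joint update, as the abstract notes.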
MONTAGE: A Methodology for Designing Composable End-to-End Secure Distributed Systems
2012-08-01
Formal Model of Loc Separation … Static Partitions … Next, we derive five requirements (called Loc Separation, Implicit Parameter Separation, Error Signaling Separation, Conf Separation, and Next Call … hypervisors and hardware) and a real cloud (with shared hypervisors and hardware) that satisfies these requirements. Finally we study Loc Separation
NASA Astrophysics Data System (ADS)
Skrypnyk, T.
2017-08-01
We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric non-dynamical so(3)⊗so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to limited signal-to-noise ratio (SNR) of PET measurements and high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MTDPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
Lee, Yu; Yu, Chanki; Lee, Sang Wook
2018-01-10
We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model. Nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.
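To make the model being fitted concrete, here is a hedged sketch of an isotropic single-lobe Lafortune-style BRDF and a generic least-squares fit of its parameters to synthetic reflectance samples; the paper's interval-analysis branch-and-bound and K-ordered scale estimation are not reproduced, and all values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def lafortune(params, wi, wo):
    """Isotropic Lafortune-style BRDF with a non-Lambertian diffuse term and one
    cosine lobe: rho_d/pi + [cxy*(wi_x*wo_x + wi_y*wo_y) + cz*wi_z*wo_z]^n."""
    rho_d, cxy, cz, n = params
    lobe = cxy * (wi[:, 0] * wo[:, 0] + wi[:, 1] * wo[:, 1]) + cz * wi[:, 2] * wo[:, 2]
    return rho_d / np.pi + np.clip(lobe, 0.0, None) ** n

def hemisphere_dirs(m, rng):
    """Random unit vectors on the upper hemisphere."""
    v = rng.normal(size=(m, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[:, 2] = np.abs(v[:, 2])
    return v

rng = np.random.default_rng(5)
wi, wo = hemisphere_dirs(400, rng), hemisphere_dirs(400, rng)
true = np.array([0.3, -0.6, 0.7, 10.0])                # synthetic "measured" material
samples = lafortune(true, wi, wo) * (1.0 + 0.02 * rng.standard_normal(400))

fit = least_squares(lambda p: lafortune(p, wi, wo) - samples,
                    x0=[0.2, -0.4, 0.5, 5.0],
                    bounds=([0.0, -2.0, 0.0, 1.0], [1.0, 2.0, 2.0, 50.0]))
print("fitted (rho_d, cxy, cz, n):", np.round(fit.x, 3))
```

Adding further lobes simply appends more (cxy, cz, n) triples to the parameter vector, which is where the model-selection problem the abstract addresses comes from.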
Kinetics of process of product separation in closed system with recirculation
NASA Astrophysics Data System (ADS)
Prokopenko, V. S.; Orekhova, T. N.; Goncharov, E. I.; Odobesko, I. A.
2018-03-01
The object of this article is to model the process of classifying material as it passes through a cleaning system that separates the products of milling; the system includes a separator, a concentrator, a cyclone and a recycle loop. For given parameters, the model allows the coarseness of grading of the finished product to be predicted.
QSAR modeling of flotation collectors using principal components extracted from topological indices.
Natarajan, R; Nirdosh, Inderjit; Basak, Subhash C; Mills, Denise R
2002-01-01
Several topological indices were calculated for substituted-cupferrons that were tested as collectors for the froth flotation of uranium. Principal component analysis (PCA) was used for data reduction. Seven principal components (PCs) were found to account for 98.6% of the variance among the computed indices. The principal components thus extracted were used in stepwise regression analyses to construct regression models for the prediction of separation efficiencies (Es) of the collectors. A two-parameter model with a correlation coefficient of 0.889 and a three-parameter model with a correlation coefficient of 0.913 were formed. PCs were found to be better than the partition coefficient for forming regression equations, and inclusion of an electronic parameter such as the Hammett sigma or quantum mechanically derived electronic charges on the chelating atoms did not improve the correlation coefficient significantly. The method was extended to model the separation efficiencies of mercaptobenzothiazoles (MBT) and aminothiophenols (ATP) used in the flotation of lead and zinc ores, respectively. Five principal components were found to explain 99% of the data variability in each series. A three-parameter equation with a correlation coefficient of 0.985 and a two-parameter equation with a correlation coefficient of 0.926 were obtained for MBT and ATP, respectively. The amenability of separation efficiencies of chelating collectors to QSAR modeling using PCs based on topological indices might lead to the selection of collectors for synthesis and testing from a virtual database.
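A minimal sketch of the PCA-plus-stepwise-regression workflow with placeholder data standing in for the topological indices and separation efficiencies; the forward-selection loop below is a simple stand-in for the stepwise procedure the abstract mentions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_collectors, n_indices = 25, 40

# Placeholder descriptor block standing in for computed topological indices.
X = rng.normal(size=(n_collectors, n_indices))
X[:, 1:] += 0.5 * X[:, [0]]                  # correlated indices, as in real TI sets
Es = 60 + 8 * X[:, 0] - 3 * X[:, 2] + rng.normal(0, 2, n_collectors)  # synthetic Es

# Data reduction: keep the leading principal components of the index block.
pca = PCA(n_components=7).fit(X)
pcs = pca.transform(X)
print("variance explained by 7 PCs:", pca.explained_variance_ratio_.sum())

# Simple forward selection of PCs into a regression model for Es.
selected, remaining = [], list(range(pcs.shape[1]))
for _ in range(3):                           # build up to a three-parameter model
    scores = {j: LinearRegression().fit(pcs[:, selected + [j]], Es)
                                   .score(pcs[:, selected + [j]], Es)
              for j in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)
    print(f"model with PCs {selected}: correlation coefficient r = {np.sqrt(scores[best]):.3f}")
```

Regressing on orthogonal PCs rather than the raw, highly correlated indices is the main point of the data-reduction step described above.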
NASA Astrophysics Data System (ADS)
Kameswara Rao, P. V.; Rawal, Amit; Kumar, Vijay; Rajput, Krishn Gopal
2017-10-01
Absorptive glass mat (AGM) separators play a key role in enhancing the cycle life of the valve regulated lead acid (VRLA) batteries by maintaining the elastic characteristics under a defined level of compression force with the plates of the electrodes. Inevitably, there are inherent challenges to maintain the required level of compression characteristics of AGM separators during the charge and discharge of the battery. Herein, we report a three-dimensional (3D) analytical model for predicting the compression-recovery behavior of AGM separators by formulating a direct relationship with the constituent fiber and structural parameters. The analytical model of compression-recovery behavior of AGM separators has successfully included the fiber slippage criterion and internal friction losses. The presented work uses, for the first time, 3D data of fiber orientation from X-ray micro-computed tomography, for predicting the compression-recovery behavior of AGM separators. A comparison has been made between the theoretical and experimental results of compression-recovery behavior of AGM samples with defined fiber orientation characteristics. In general, the theory agreed reasonably well with the experimental results of AGM samples in both dry and wet states. Through theoretical modeling, fiber volume fraction was established as one of the key structural parameters that modulates the compression hysteresis of an AGM separator.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to this chaotic region, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.
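As a rough illustration of how such parameter-plane diagrams can be scanned numerically, the sketch below integrates the standard three-variable Hindmarsh-Rose equations over a small (I, r) grid and counts distinct interspike intervals as a crude periodic-versus-chaotic indicator; the abstract's diagrams use other parameter pairs and a proper chaos measure, so the ranges and thresholds here are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, u, I, r, s=4.0, x_r=-1.6):
    """Standard three-variable Hindmarsh-Rose equations."""
    x, y, z = u
    return [y + 3.0 * x**2 - x**3 - z + I,
            1.0 - 5.0 * x**2 - y,
            r * (s * (x - x_r) - z)]

def distinct_isi_count(I, r, t_end=1500.0, tol=0.02):
    """Integrate, detect spikes, and count distinct interspike intervals after the
    transient: few distinct ISIs suggests regular firing or bursting, many suggests
    chaos (a rough proxy only, not a Lyapunov exponent)."""
    sol = solve_ivp(hindmarsh_rose, (0.0, t_end), [0.0, 0.0, 0.0],
                    args=(I, r), max_step=0.1)
    x, t = sol.y[0], sol.t
    peaks = (x[1:-1] > 1.0) & (x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])
    spikes = t[1:-1][peaks]
    isi = np.diff(spikes[spikes > 0.5 * t_end])        # discard the transient
    if isi.size < 3:
        return 0
    return int(np.unique(np.round(isi / (tol * isi.mean()))).size)

# Coarse scan of a small (I, r) region of the parameter plane.
for r in [0.005, 0.012]:
    row = [distinct_isi_count(I, r) for I in np.linspace(2.5, 3.3, 5)]
    print(f"r = {r:<6}: distinct-ISI counts over I in [2.5, 3.3]: {row}")
```

Refining such a grid and replacing the ISI proxy with the largest Lyapunov exponent is what produces the comb-shaped structures the abstract describes.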
Phase separation and large deviations of lattice active matter
NASA Astrophysics Data System (ADS)
Whitelam, Stephen; Klymko, Katherine; Mandal, Dibyendu
2018-04-01
Off-lattice active Brownian particles form clusters and undergo phase separation even in the absence of attractions or velocity-alignment mechanisms. Arguments that explain this phenomenon appeal only to the ability of particles to move persistently in a direction that fluctuates, but existing lattice models of hard particles that account for this behavior do not exhibit phase separation. Here we present a lattice model of active matter that exhibits motility-induced phase separation in the absence of velocity alignment. Using direct and rare-event sampling of dynamical trajectories, we show that clustering and phase separation are accompanied by pronounced fluctuations of static and dynamic order parameters. This model provides a complement to off-lattice models for the study of motility-induced phase separation.
Modeling stick-slip-separation dynamics in a bimodal standing wave ultrasonic motor
NASA Astrophysics Data System (ADS)
Li, Xiang; Yao, Zhiyuan; Lv, Qibao; Liu, Zhen
2016-11-01
Ultrasonic motor (USM) is an electromechanical coupling system with ultrasonic vibration, which is driven by the frictional contact force between the stator (vibrating body) and the rotor/slider (driven body). Stick-slip motion can occur at the contact interface when USM is operating, which may affect the performance of the motor. This paper develops a physically-based model to investigate the complex stick-slip-separation dynamics in a bimodal standing wave ultrasonic motor. The model includes both friction nonlinearity and intermittent separation nonlinearity of the system. Utilizing Hamilton's principle and assumed mode method, the dynamic equations of the stator are deduced. Based on the dynamics of the stator and the slider, sticking force during the stick phase is derived, which is used to examine the stick-to-slip transition. Furthermore, the stick-slip-separation kinematics is analyzed by establishing analytical criteria that predict the transition between stick, slip and separation of the interface. Stick-slip-separation motion is observed in the resulting model, and numerical simulations are performed to study the influence of parameters on the range of possible motions. Results show that stick-slip motion can occur with greater preload and smaller voltage amplitude. Furthermore, a dimensionless parameter is proposed to predict the occurrence of stick-slip versus slip-separation motions, and its role in designing ultrasonic motors is discussed. It is shown that slip-separation motion is favorable for the slider velocity.
Neural network river forecasting through baseflow separation and binary-coded swarm optimization
NASA Astrophysics Data System (ADS)
Taormina, Riccardo; Chau, Kwok-Wing; Sivakumar, Bellie
2015-10-01
The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are a preferred way to implement such an approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs fit separately the baseflow and excess flow components as produced by a digital filter, and reconstruct the total flow by adding these two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested only on a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MMs against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANNs, are used to drastically reduce the large computational times required to perform the experiments. The results show that there is no evidence that MMs outperform GMs for predicting the total flow. In addition, the baseflow produced by the MMs largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the values of the filter parameters maximizing overall accuracy do not reflect the geological characteristics of the river basins. The results indeed show that setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible.
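For reference, a commonly used one-parameter recursive digital filter for baseflow separation (the Lyne-Hollick form) looks like the sketch below; the abstract does not state which filter the modular models use, and the streamflow series here is synthetic.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925, passes=3):
    """One-parameter recursive digital filter; each pass removes more quick flow.
    q: streamflow series, alpha: filter parameter (typically 0.9-0.95)."""
    flow = np.asarray(q, dtype=float)
    for p in range(passes):
        series = flow if p % 2 == 0 else flow[::-1]    # alternate forward/backward passes
        quick = np.zeros_like(series)
        for i in range(1, len(series)):
            quick[i] = alpha * quick[i - 1] + 0.5 * (1 + alpha) * (series[i] - series[i - 1])
            quick[i] = min(max(quick[i], 0.0), series[i])   # keep 0 <= baseflow <= streamflow
        base = series - quick
        flow = base if p % 2 == 0 else base[::-1]
    return flow

# Synthetic streamflow: slow seasonal baseflow plus sharp storm peaks.
rng = np.random.default_rng(7)
days = np.arange(365)
streamflow = 5 + 2 * np.sin(2 * np.pi * days / 365) + rng.gamma(0.3, 8.0, days.size)
baseflow = lyne_hollick_baseflow(streamflow)
print("baseflow index (BFI):", baseflow.sum() / streamflow.sum())
```

The filter parameter alpha is exactly the kind of quantity that, as the abstract argues, can be tuned either for overall streamflow accuracy or for hydrologically plausible baseflow, with the two choices generally disagreeing.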
Basic features of boron isotope separation by SILARC method in the two-step iterative static model
NASA Astrophysics Data System (ADS)
Lyakhov, K. A.; Lee, H. J.
2013-05-01
In this paper we develop a new static model for boron isotope separation by the laser-assisted retardation of condensation (SILARC) method, on the basis of the model proposed by Jeff Eerkens. Our model is thought to be adequate for the so-called two-step iterative scheme for isotope separation. This rather simple model helps in understanding the combined action of all important parameters, and the relations between them, on boron separation by the SILARC method. These parameters include the carrier gas, the molar fraction of BCl3 molecules in the carrier gas, the laser pulse intensity, the gas pulse duration, the gas pressure and temperature in the reservoir and irradiation cells, the optimal irradiation cell and skimmer chamber volumes, and the optimal nozzle throughput. A method for finding optimal values of these parameters, based on a global minimum search of an objective function, is suggested. It turns out that the minimum of this objective function is directly related to the minimum of the total energy consumed and the total setup volume. Relations between the nozzle throat area, IC volume, laser intensity, number of nozzles, number of vacuum pumps, and required isotope production rate were derived. Two types of industrial-scale irradiation cells are compared. The first has one large-throughput slit nozzle, while the second has numerous small nozzles arranged in parallel arrays for better overlap with the laser beam. It is shown that the latter significantly outperforms the former. It is argued that NO2 is the best carrier gas for boron isotope separation from the point of view of energy efficiency, and Ar from the point of view of setup compactness.
Parametric Study of Synthetic-Jet-Based Flow Control on a Vertical Tail Model
NASA Astrophysics Data System (ADS)
Monastero, Marianne; Lindstrom, Annika; Beyar, Michael; Amitay, Michael
2015-11-01
Separation control over the rudder of the vertical tail of a commercial airplane using synthetic-jet-based flow control can lead to a reduction in tail size, with an associated decrease in drag and increase in fuel savings. A parametric, experimental study was undertaken using an array of finite span synthetic jets to investigate the sensitivity of the enhanced vertical tail side force to jet parameters, such as jet spanwise spacing and jet momentum coefficient. A generic wind tunnel model was designed and fabricated to fundamentally study the effects of the jet parameters at varying rudder deflection and model sideslip angles. Wind tunnel results obtained from pressure measurements and tuft flow visualization in the Rensselaer Polytechnic Subsonic Wind Tunnel show a decrease in separation severity and increase in model performance in comparison to the baseline, non-actuated case. The sensitivity to various parameters will be presented.
Models and parameters for environmental radiological assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, C W
1984-01-01
This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the database. (ACR)
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
Huhn, Carolin; Pyell, Ute
2008-07-11
It is investigated whether those relationships derived within an optimization scheme developed previously to optimize separations in micellar electrokinetic chromatography can be used to model effective electrophoretic mobilities of analytes strongly differing in their properties (polarity and type of interaction with the pseudostationary phase). The modeling is based on two parameter sets: (i) carbon number equivalents or octanol-water partition coefficients as analyte descriptors and (ii) four coefficients describing properties of the separation electrolyte (based on retention data for a homologous series of alkyl phenyl ketones used as reference analytes). The applicability of the proposed model is validated comparing experimental and calculated effective electrophoretic mobilities. The results demonstrate that the model can effectively be used to predict effective electrophoretic mobilities of neutral analytes from the determined carbon number equivalents or from octanol-water partition coefficients provided that the solvation parameters of the analytes of interest are similar to those of the reference analytes.
ERIC Educational Resources Information Center
Haberman, Shelby J.
2009-01-01
A regression procedure is developed to link simultaneously a very large number of item response theory (IRT) parameter estimates obtained from a large number of test forms, where each form has been separately calibrated and where forms can be linked on a pairwise basis by means of common items. An application is made to forms in which a…
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Optimal SVM parameter selection for non-separable and unbalanced datasets.
Jiang, Peng; Missoum, Samy; Chen, Zhao
2014-10-01
This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
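A brief sketch of the comparison the abstract describes: SVM hyperparameters are selected by cross-validated grid search under each of the three validation metrics on an unbalanced, overlapping synthetic dataset. The data generator and parameter grids are illustrative, not the femur finite-element dataset used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Unbalanced, overlapping two-class data standing in for a non-separable problem.
X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           weights=[0.9, 0.1], class_sep=0.8, flip_y=0.05,
                           random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
for metric in ["roc_auc", "accuracy", "balanced_accuracy"]:
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, scoring=metric, cv=5)
    search.fit(X, y)
    print(f"{metric:18s} best params: {search.best_params_}  "
          f"cv score: {search.best_score_:.3f}")
```

On strongly unbalanced data, plain accuracy tends to favor hyperparameters that ignore the minority class, which is why the comparison against AUC and balanced accuracy is informative.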
Particle acceleration at a reconnecting magnetic separator
NASA Astrophysics Data System (ADS)
Threlfall, J.; Neukirch, T.; Parnell, C. E.; Eradat Oskoui, S.
2015-02-01
Context. While the exact acceleration mechanism of energetic particles during solar flares is (as yet) unknown, magnetic reconnection plays a key role both in the release of stored magnetic energy of the solar corona and the magnetic restructuring during a flare. Recent work has shown that special field lines, called separators, are common sites of reconnection in 3D numerical experiments. To date, 3D separator reconnection sites have received little attention as particle accelerators. Aims: We investigate the effectiveness of separator reconnection as a particle acceleration mechanism for electrons and protons. Methods: We study the particle acceleration using a relativistic guiding-centre particle code in a time-dependent kinematic model of magnetic reconnection at a separator. Results: The effect upon particle behaviour of initial position, pitch angle, and initial kinetic energy are examined in detail, both for specific (single) particle examples and for large distributions of initial conditions. The separator reconnection model contains several free parameters, and we study the effect of changing these parameters upon particle acceleration, in particular in view of the final particle energy ranges that agree with observed energy spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent; Gettelman, Andrew; Morrison, Hugh
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.
Dynamics in the Parameter Space of a Neuron Model
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
Launch Vehicle Propulsion Design with Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.
2005-01-01
The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters; chamber pressure, area ratio, and oxidizer/fuel ratio, are optimized in the seven design cases and results are plotted to show impacts to engine mass and overall vehicle mass.
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
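As a much simpler stand-in for the Hamiltonian Monte Carlo scheme described above, the sketch below samples the posterior of a single drift parameter of an Ornstein-Uhlenbeck process, whose discrete-time transition density is known exactly, using a plain random-walk Metropolis sampler. It only illustrates what a posterior distribution over SDE parameters means; it is not the authors' path-space polymer formulation or multiple-time-scale integrator.

```python
import numpy as np

rng = np.random.default_rng(8)

# Ornstein-Uhlenbeck process dX = -theta * X dt + sigma dW, observed at spacing dt.
theta_true, sigma, dt, n_obs = 1.5, 0.5, 0.1, 400
x = np.empty(n_obs)
x[0] = 0.0
for k in range(1, n_obs):
    mean = x[k - 1] * np.exp(-theta_true * dt)
    var = sigma**2 * (1 - np.exp(-2 * theta_true * dt)) / (2 * theta_true)
    x[k] = rng.normal(mean, np.sqrt(var))

def log_post(theta):
    """Log posterior for theta (flat prior on theta > 0), using the exact
    Gaussian transition density of the OU process."""
    if theta <= 0:
        return -np.inf
    mean = x[:-1] * np.exp(-theta * dt)
    var = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
    return np.sum(-0.5 * np.log(2 * np.pi * var) - 0.5 * (x[1:] - mean) ** 2 / var)

# Random-walk Metropolis over the single drift parameter.
samples, theta, lp = [], 1.0, log_post(1.0)
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f} (truth {theta_true})")
```

For general nonlinear SDEs the transition density is unavailable, which is exactly where the paper's approach of augmenting the unobserved path and exploiting the time-scale separation becomes attractive.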
Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi
2017-01-01
The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, when Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) are taken separately as transition variables, all three models reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models, which separately consider Es and Ul as the transition variables, both contain one location parameter. All three models describe well the non-linear relationship between economic growth and CO2 emissions in China. It can also be seen that the conversion rate of the influence of Ul on per capita CO2 emissions is significantly higher than those of GDPpc and Es on per capita CO2 emissions. PMID:29236083
Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi
2017-12-13
The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, when Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) are taken separately as transition variables, all three models reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models, which separately consider Es and Ul as the transition variables, both contain one location parameter. All three models describe well the non-linear relationship between economic growth and CO₂ emissions in China. It can also be seen that the conversion rate of the influence of Ul on per capita CO₂ emissions is significantly higher than those of GDPpc and Es on per capita CO₂ emissions.
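For illustration, the sketch below evaluates the logistic transition function commonly used in PSTR models; the functional form, the slope gamma and the location parameters c are assumptions for demonstration, not values estimated in the study.

```python
# Logistic transition function of a PSTR model with m location parameters:
# g(q; gamma, c) = 1 / (1 + exp(-gamma * prod_j (q - c_j))). The slope gamma and
# locations c below are illustrative, not estimates from the study.
import numpy as np

def pstr_transition(q, gamma, c):
    q = np.asarray(q, dtype=float)
    c = np.atleast_1d(c)
    prod = np.prod(q[..., None] - c, axis=-1)
    return 1.0 / (1.0 + np.exp(-gamma * prod))

q = np.linspace(0.0, 10.0, 5)                        # e.g. a GDP-per-capita-like transition variable
print(pstr_transition(q, gamma=2.0, c=[4.0, 7.0]))   # two location parameters (the GDPpc case)
print(pstr_transition(q, gamma=2.0, c=[5.0]))        # one location parameter (the Es or Ul cases)
```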
Inlet Diameter and Flow Volume Effects on Separation and Energy Efficiency of Hydrocyclones
NASA Astrophysics Data System (ADS)
Erikli, Ş.; Olcay, A. B.
2015-08-01
This study investigates the hydrocyclone performance of an oil-injected screw compressor. The oil separation efficiency of a screw compressor plays a significant role in air quality, and the non-stop operating hours of compressors become an important issue when energy efficiency is considered. In this study, two separation efficiency parameters were selected: the hydrocyclone inlet diameter and the flow volume height between the oil reservoir surface and the top of the hydrocyclone. Nine different cases were studied in which the cyclone inlet diameter and the flow volume height between the oil reservoir surface and the top were investigated with regard to separation and energy performance; both parameters were found to have a strong influence on overall performance. The flow inside the hydrocyclone geometry was modelled with the Reynolds Stress Model (RSM), and the particles were tracked with the Discrete Phase Model (DPM). Particle breakup was modelled with the Taylor Analogy Breakup (TAB) model. Reversed vortex generation was observed at different planes. At the upper limit of the cyclone inlet diameter, the flow becomes slower and the centrifugal force on the particles decreases (the larger diameter implies slower flow). At the lower limit, by contrast, the increased velocity causes particle breakup, so the particle diameters become smaller and consequently harder to separate from the gas.
A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Van der Tol, C.; Berry, J. A.
2014-12-01
Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that these subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and the temperature dependence of these in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather than to fit disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
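A minimal sketch of the fitting idea described above, assuming synthetic gas-exchange data and a simple rectangular-hyperbola light-response function as a stand-in for the actual SCOPE/BIOCHEM subroutine; the parameter names alpha, Amax and Rd are illustrative.

```python
# Fit a standalone light-response function to synthetic gas-exchange data with a
# generic nonlinear least-squares tool. The rectangular hyperbola is a stand-in
# for the extracted photosynthesis subroutine; parameter names are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def light_response(par, alpha, a_max, r_d):
    """Net assimilation as a function of photosynthetically active radiation."""
    return alpha * par * a_max / (alpha * par + a_max) - r_d

par = np.linspace(0.0, 2000.0, 30)
rng = np.random.default_rng(1)
obs = light_response(par, 0.05, 25.0, 1.5) + rng.normal(0.0, 0.5, par.size)

popt, pcov = curve_fit(light_response, par, obs, p0=[0.03, 20.0, 1.0])
print("fitted alpha, Amax, Rd:", popt)
```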
Kawai, Kosuke; Huong, Luong Thi Mai
2017-03-01
Proper management of food waste, a major component of municipal solid waste (MSW), is needed, especially in developing Asian countries where most MSW is disposed of in landfill sites without any pretreatment. Source separation can contribute to solving problems derived from the disposal of food waste. An organic waste source separation and collection programme has been operated in model areas in Hanoi, Vietnam, since 2007. This study proposed three key parameters (participation rate, proper separation rate and proper discharge rate) for behaviour related to source separation of household organic waste, and monitored the progress of the programme based on the physical composition of household waste sampled from 558 households in model programme areas of Hanoi. The results showed that 13.8% of 558 households separated organic waste, and 33.0% discharged mixed (unseparated) waste improperly. About 41.5% (by weight) of the waste collected as organic waste was contaminated by inorganic waste, and one-third of the waste disposed of as organic waste by separators was inorganic waste. We proposed six hypothetical future household behaviour scenarios to help local officials identify a final or midterm goal for the programme. We also suggested that the city government take further actions to increase the number of people participating in separating organic waste, improve the accuracy of separation and prevent non-separators from discharging mixed waste improperly.
NASA Astrophysics Data System (ADS)
Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.
2015-08-01
Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared the informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
Kluters, Simon; Wittkopp, Felix; Jöhnck, Matthias; Frech, Christian
2016-02-01
The mobile phase pH is a key parameter of every ion exchange chromatography process. However, mechanistic insights into the pH influence on the ion exchange chromatography equilibrium are rare. This work describes a mechanistic model capturing salt and pH influence in ion exchange chromatography. The pH dependence of the characteristic protein charge and the equilibrium constant is introduced to the steric mass action model based on a protein net charge model considering the number of amino acids interacting with the stationary phase. This allows the description of the adsorption equilibrium of the chromatographed proteins as a function of pH. The model parameters were determined for a monoclonal antibody monomer, dimer, and a higher aggregated species based on a manageable set of pH gradient experiments. Without further modification of the model parameters the transfer to salt gradient elution at fixed pH is demonstrated. A lumped rate model was used to predict the separation of the monoclonal antibody monomer/aggregate mixture in pH gradient elution and for a pH step elution procedure, also at increased protein loadings up to 48 g/L packed resin. The presented model combines both salt and pH influence and may be useful for the development and deeper understanding of an ion exchange chromatography separation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Raj, Rahul; Hamm, Nicholas Alexander Samuel; van der Tol, Christiaan; Stein, Alfred
2016-03-01
Gross primary production (GPP) can be separated from flux tower measurements of net ecosystem exchange (NEE) of CO2. This is used increasingly to validate process-based simulators and remote-sensing-derived estimates of simulated GPP at various time steps. Proper validation includes the uncertainty associated with this separation. In this study, uncertainty assessment was done in a Bayesian framework. It was applied to data from the Speulderbos forest site, The Netherlands. We estimated the uncertainty in GPP at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements. The NRH model provides a robust empirical relationship between radiation and GPP. It includes the degree of curvature of the light response curve, radiation and temperature. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. We defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to estimate the uncertainty in the separated GPP from the posterior distribution at half-hourly time steps. This time series also allowed us to estimate the uncertainty at daily time steps. We compared the informative with the non-informative prior distributions of the NRH parameters and found that both choices produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
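A minimal sketch of the separation idea under stated assumptions: a non-rectangular hyperbola light-response model without the temperature term, flat priors on bounded ranges, synthetic half-hourly data, and a plain Metropolis sampler instead of the study's full MCMC setup.

```python
# Toy separation of GPP from NEE: NRH light response (no temperature term),
# flat priors on bounded ranges, Gaussian likelihood, plain Metropolis sampler.
# All data and tuning constants below are synthetic placeholders.
import numpy as np

def nrh_gpp(par_i, alpha, p_max, theta):
    s = alpha * par_i + p_max
    return (s - np.sqrt(s * s - 4.0 * theta * alpha * par_i * p_max)) / (2.0 * theta)

def nee_model(par_i, alpha, p_max, theta, reco):
    return reco - nrh_gpp(par_i, alpha, p_max, theta)   # NEE = respiration - GPP

def log_post(params, par_i, nee_obs, sigma=1.0):
    alpha, p_max, theta, reco = params
    if not (0 < alpha < 1 and 0 < p_max < 60 and 0 < theta < 1 and 0 < reco < 20):
        return -np.inf                                   # outside the (flat) prior bounds
    resid = nee_obs - nee_model(par_i, alpha, p_max, theta, reco)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

rng = np.random.default_rng(0)
par_i = rng.uniform(0.0, 1500.0, 200)                    # half-hourly PAR (synthetic)
nee_obs = nee_model(par_i, 0.04, 30.0, 0.7, 3.0) + rng.normal(0.0, 1.0, par_i.size)

x = np.array([0.05, 20.0, 0.5, 2.0])                     # alpha, Pmax, theta, Reco
lp = log_post(x, par_i, nee_obs)
samples = []
for _ in range(20000):
    prop = x + rng.normal(0.0, [0.005, 1.0, 0.05, 0.2])
    lp_prop = log_post(prop, par_i, nee_obs)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x.copy())

samples = np.array(samples[5000:])                       # discard burn-in
gpp_draws = nrh_gpp(1000.0, samples[:, 0], samples[:, 1], samples[:, 2])
print("GPP at PAR = 1000: mean %.2f, 95%% interval (%.2f, %.2f)"
      % (gpp_draws.mean(), *np.percentile(gpp_draws, [2.5, 97.5])))
```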
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy, and such a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy, a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
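A minimal sketch of the symbolic check that underlies such redundancy analyses, assuming a toy exhaustive summary in which a detection probability appears only in products with survival probabilities; this example is illustrative and is not one of the paper's state-space models.

```python
# Symbolic redundancy check: rank of the Jacobian of an exhaustive summary.
# Toy summary: detection p appears only in products with survivals phi1, phi2,
# so only the products are estimable and the model is parameter redundant.
import sympy as sp

p, phi1, phi2 = sp.symbols("p phi1 phi2", positive=True)
theta = [p, phi1, phi2]

kappa = sp.Matrix([p * phi1, p * phi2])    # toy exhaustive summary
D = kappa.jacobian(theta)                  # derivative matrix
rank = D.rank()

print("rank", rank, "vs", len(theta), "parameters")
print("parameter redundant" if rank < len(theta) else "full rank: all parameters estimable")
```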
NASA Astrophysics Data System (ADS)
Rawal, Amit; Rao, P. V. Kameswara; Kumar, Vijay
2018-04-01
Absorptive glass mat (AGM) separator is a vital technical component in valve regulated lead acid (VRLA) batteries that can be tailored for a desired application. To selectively design and tailor the AGM separator, the intricate three-dimensional (3D) structure needs to be unraveled. Herein, a toolkit of 3D analytical models of pore size distribution and electrolyte uptake, expressed via the wicking characteristics of AGM separators under unconfined and confined states, is presented. 3D data of fiber orientation distributions obtained previously through X-ray micro-computed tomography (microCT) analysis are used as a key set of input parameters. The predictive ability of the pore size distribution model is assessed against the commonly used experimental set-up, which typically applies high levels of compressive stress. Further, the existing analytical model of the wicking characteristics of AGM separators has been extended to account for 3D characteristics and subsequently compared with the experimental results. The good agreement between theory and experiments paves the way to simulating realistic charge-discharge modes of the battery by applying cyclic loading conditions. A threshold criterion describing the invariant behavior of pore size and wicking characteristics, in terms of the maximum permissible limits of key structural parameters during the charge-discharge mode of the battery, has also been proposed.
Lin, Mu; He, Hongjian; Schifitto, Giovanni; Zhong, Jianhui
2016-01-01
Purpose: The goal of the current study was to investigate tissue pathology at the cellular level in traumatic brain injury (TBI) as revealed by Monte Carlo simulation of diffusion tensor imaging (DTI)-derived parameters and to elucidate the possible sources of conflicting findings of DTI abnormalities as reported in the TBI literature. Methods: A model with three compartments separated by permeable membranes was employed to represent the diffusion environment of water molecules in brain white matter. The dynamic diffusion process was simulated with a Monte Carlo method using adjustable parameters of intra-axonal diffusivity, axon separation, glial cell volume fraction, and myelin sheath permeability. The effects of tissue pathology on DTI parameters were investigated by adjusting the parameters of the model corresponding to different stages of brain injury. Results: The results suggest that the model is appropriate and that the DTI-derived parameters simulate the predominant cellular pathology after TBI. Our results further indicate that when edema is not prevalent, axial and radial diffusivity have better sensitivity to axonal injury and demyelination than other DTI parameters. Conclusion: DTI is a promising biomarker to detect and stage tissue injury after TBI. The observed inconsistencies among previous studies are likely due to scanning at different stages of tissue injury after TBI. PMID:26256558
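A much-simplified, one-dimensional sketch of the Monte Carlo idea, assuming walkers that diffuse inside axon-like compartments and cross membranes with a fixed probability; the geometry, permeability and diffusivity values are illustrative only (the paper's model is three-dimensional with three compartments).

```python
# 1D toy: walkers diffuse inside axon-like compartments of width `axon_width`
# and cross the bounding membranes only with probability `p_cross`; apparent
# diffusivity is estimated from the mean squared displacement. Values are
# illustrative, not fitted to tissue.
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps, dt = 5000, 2000, 1e-5          # steps of dt seconds
d_free = 2.0e-9                                    # m^2/s intra-axonal diffusivity
step = np.sqrt(2.0 * d_free * dt)                  # 1D random-walk step length
axon_width = 4.0e-6                                # m, axon separation
p_cross = 0.02                                     # membrane crossing probability

x = rng.uniform(0.0, axon_width, n_walkers)        # start inside one compartment
x0 = x.copy()
for _ in range(n_steps):
    trial = x + rng.choice([-step, step], n_walkers)
    crossed = np.floor(trial / axon_width) != np.floor(x / axon_width)
    allowed = ~crossed | (rng.uniform(size=n_walkers) < p_cross)
    x = np.where(allowed, trial, x)                # blocked walkers stay in place

d_apparent = np.mean((x - x0) ** 2) / (2.0 * n_steps * dt)
print("apparent diffusivity %.2e m^2/s (free: %.2e)" % (d_apparent, d_free))
```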
Final Technical Report for "Reducing tropical precipitation biases in CESM"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent
In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we have created a climate model that contains a unified cloud parameterization (“CLUBB”) and a unified microphysics parameterization (“MG2”). In this model, all cloud types --- including marine stratocumulus, shallow cumulus, and deep cumulus --- are represented with a single equation set. This model improves the representation of convection in the Tropics. The model has been compared with ARM observations. The chief benefit of the project is to provide a climate model that is based on a more theoretically rigorous formulation.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
Microparticle Separation by Cyclonic Separation
NASA Astrophysics Data System (ADS)
Karback, Keegan; Leith, Alexander
2017-11-01
The ability to separate particles based on their size has wide-ranging applications, from the industrial to the medical. Currently, cyclonic separators are primarily used in agriculture and manufacturing to siphon out contaminants or products from an air supply. This led us to believe that cyclonic separation has applications beyond the agricultural and industrial. Using the OpenFoam computational package, we were able to determine the flow parameters of a vortex in a cyclonic separator needed to segregate dust particles down to a cutoff size of tens of nanometers. To test the model, we constructed an experiment to separate a test dust of various sized particles. We filled a chamber with Arizona test dust and utilized an acoustic suspension technique to segregate particles finer than a coarse cutoff size and introduce them into the cyclonic separation apparatus, where they were further separated via a vortex following our computational model. The size of the particles separated in this experiment will be used to further refine our model. Metropolitan State University of Denver, Colorado University of Denver, Dr. Randall Tagg, Dr. Richard Krantz.
Modeling spin magnetization transport in a spatially varying magnetic field
NASA Astrophysics Data System (ADS)
Picone, Rico A. R.; Garbini, Joseph L.; Sidles, John A.
2015-01-01
We present a framework for modeling the transport of any number of globally conserved quantities in any spatial configuration and apply it to obtain a model of magnetization transport for spin-systems that is valid in new regimes (including high-polarization). The framework allows an entropy function to define a model that explicitly respects the laws of thermodynamics. Three facets of the model are explored. First, it is expressed as nonlinear partial differential equations that are valid for the new regime of high dipole-energy and polarization. Second, the nonlinear model is explored in the limit of low dipole-energy (semi-linear), from which is derived a physical parameter characterizing separative magnetization transport (SMT). It is shown that the necessary and sufficient condition for SMT to occur is that the parameter is spatially inhomogeneous. Third, the high spin-temperature (linear) limit is shown to be equivalent to the model of nuclear spin transport of Genack and Redfield (1975) [1]. Differences among the three forms of the model are illustrated by numerical solution with parameters corresponding to a magnetic resonance force microscopy (MRFM) experiment (Degen et al., 2009 [2]; Kuehn et al., 2008 [3]; Sidles et al., 2003 [4]; Dougherty et al., 2000 [5]). A family of analytic, steady-state solutions to the nonlinear equation is derived and shown to be the spin-temperature analog of the Langevin paramagnetic equation and Curie's law. Finally, we analyze the separative quality of magnetization transport, and a steady-state solution for the magnetization is shown to be compatible with Fenske's separative mass transport equation (Fenske, 1932 [6]).
NASA Astrophysics Data System (ADS)
Chougule, Abhijit; Mann, Jakob; Kelly, Mark; Larsen, Gunner C.
2018-06-01
A spectral-tensor model of non-neutral, atmospheric-boundary-layer turbulence is evaluated using Eulerian statistics from single-point measurements of the wind speed and temperature at heights up to 100 m, assuming constant vertical gradients of mean wind speed and temperature. The model has been previously described in terms of the dissipation rate ε, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Γ, the Richardson number Ri, and the normalized rate of destruction of temperature variance η_θ ≡ ε_θ/ε. Here, the latter two parameters are collapsed into a single atmospheric stability parameter z/L using Monin-Obukhov similarity theory, where z is the height above the Earth's surface, and L is the Obukhov length corresponding to Ri, η_θ. Model outputs of the one-dimensional velocity spectra, as well as cospectra of the streamwise and/or vertical velocity components, and/or temperature, and cross-spectra for the spatial separation of all three velocity components and temperature, are compared with measurements. As a function of the four model parameters, spectra and cospectra are reproduced quite well, but horizontal temperature fluxes are slightly underestimated in stable conditions. In moderately unstable stratification, our model reproduces spectra only up to a scale of ~1 km. The model also overestimates coherences for vertical separations, but is less severe in unstable than in stable cases.
NASA Astrophysics Data System (ADS)
Borisevich, V. D.; Potanin, E. P.
2017-07-01
The possibility of using a rotating magnetic field (RMF) in a plasma centrifuge (PC), with axial circulation to multiply the radial separation effect in an axial direction, is considered. For the first time, a traveling magnetic field (TMF) is proposed to drive an axial circulation flow in a PC. The longitudinal separation effect is calculated for a notional model, using specified operational parameters and the properties of a plasma, comprising an isotopic mixture of 20Ne-22Ne and generated by a high frequency discharge. The optimal intensity of a circulation flow, in which the longitudinal separation effect reaches its maximum value, is studied. The optimal parameters of the RMF and TMF for effective separation, as well as the centrifuge performance, are calculated.
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
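A minimal sketch of sum of ranking differences (SRD), assuming the common choice of the row average as the reference ranking; the performance matrix is random placeholder data, not either of the QSAR case studies.

```python
# Sum of ranking differences: rank the models by each indicator and by the
# row-average consensus, then sum absolute rank differences. Smaller SRD means
# the indicator ranks the models more like the consensus. Data are random.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
perf = rng.uniform(0.5, 1.0, size=(8, 5))   # 8 models x 5 performance parameters

ref_rank = rankdata(perf.mean(axis=1))      # reference (consensus) ranking of the models
srd = {f"indicator_{j}": float(np.sum(np.abs(rankdata(perf[:, j]) - ref_rank)))
       for j in range(perf.shape[1])}
print(srd)
```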
Li, Jia; Xu, Zhenming; Zhou, Yaohe
2008-05-30
Traditionally, the metal mixture from waste printed circuit boards (PCBs) was sent to a smelting plant to refine pure copper. Some valuable metals (aluminum, zinc and tin) present at low content in PCBs were lost during smelting. A new method that uses a roll-type electrostatic separator (RES) to recover low-content metals from waste PCBs is presented in this study. A theoretical model, established by computing the electric field and analyzing the forces on the particles, was implemented as a program in the MATLAB language. The program was designed to simulate the process of separating mixed metal particles. Electrical, material and mechanical factors were analyzed to optimize the operating parameters of the separator. The experimental results for separating copper and aluminum particles with the RES were in good agreement with the computer simulation results. The model could be used to simulate the separation of other metal (tin, zinc, etc.) particles during the process of recycling waste PCBs by RES.
SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA
Fosdick, Bailey K.; Hoff, Peter D.
2014-01-01
Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations. PMID:25489353
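A minimal sketch of the separable, factor-analytic covariance idea, assuming a three-way array and arbitrary dimensions and factor ranks; the per-dimension covariances are built as loadings times loadings-transpose plus a diagonal, and the full covariance is their Kronecker product.

```python
# Separable covariance with factor-analytic structure along each array dimension:
# Sigma_k = Lambda_k Lambda_k' + diag(psi_k), full covariance = Kronecker product.
# Dimensions (e.g. 10 ages x 2 sexes x 15 years) and factor ranks are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def factor_cov(dim, rank):
    lam = rng.normal(size=(dim, rank))      # factor loadings
    psi = rng.uniform(0.5, 1.5, dim)        # unique variances
    return lam @ lam.T + np.diag(psi)

sigma_age = factor_cov(10, 2)
sigma_sex = factor_cov(2, 1)
sigma_year = factor_cov(15, 3)

full_cov = np.kron(np.kron(sigma_age, sigma_sex), sigma_year)
# 300 x 300 covariance described by ~94 loading/uniqueness values instead of
# the 45,150 free elements of an unconstrained covariance matrix.
print(full_cov.shape)
```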
Quantum correlations in a family of bipartite separable qubit states
NASA Astrophysics Data System (ADS)
Xie, Chuanmei; Liu, Yimin; Chen, Jianlan; Zhang, Zhanjun
2017-03-01
Quantum correlations (QCs) in some separable states have been proposed as a key resource for certain quantum communication tasks and quantum computational models without entanglement. In this paper, a family of nine-parameter separable states, obtained from arbitrary mixtures of two sets of bi-qubit product pure states, is considered. QCs in these separable states are studied analytically or numerically using four QC quantifiers, i.e., measurement-induced disturbance (Luo in Phys Rev A 77:022301, 2008), ameliorated MID (Girolami et al. in J Phys A Math Theor 44:352002, 2011), quantum dissonance (DN) (Modi et al. in Phys Rev Lett 104:080501, 2010), and new quantum dissonance (Rulli in Phys Rev A 84:042109, 2011), respectively. First, an inherent symmetry in the concerned separable states is revealed, namely that any nine-parameter separable state considered in this paper can be transformed to a three-parameter kernel state via a certain local unitary operation. Then, four different QC expressions are concretely derived with the four QC quantifiers. Furthermore, some comparative studies of the QCs are presented, discussed and analyzed, and some distinct features about them are exposed. We find that, in the framework of all four QC quantifiers, the more mixed the original two pure product states, the larger the QCs of the resulting separable states. Our results reveal some intrinsic features of QCs in separable systems in quantum information.
NASA Astrophysics Data System (ADS)
Colli, Pierluigi; Gilardi, Gianni; Sprekels, Jürgen
2016-06-01
This paper investigates a nonlocal version of a model for phase separation on an atomic lattice that was introduced by P. Podio-Guidugli (2006) [36]. The model consists of an initial-boundary value problem for a nonlinearly coupled system of two partial differential equations governing the evolution of an order parameter ρ and the chemical potential μ. Singular contributions to the local free energy in the form of logarithmic or double-obstacle potentials are admitted. In contrast to the local model, which was studied by P. Podio-Guidugli and the present authors in a series of recent publications, in the nonlocal case the equation governing the evolution of the order parameter contains in place of the Laplacian a nonlocal expression that originates from nonlocal contributions to the free energy and accounts for possible long-range interactions between the atoms. It is shown that just as in the local case the model equations are well posed, where the technique of proving existence is entirely different: it is based on an application of Tikhonov's fixed point theorem in a rather unusual separable and reflexive Banach space.
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
FISHERY-ORIENTED MODEL OF MARYLAND OYSTER POPULATIONS
We used time series data to calibrate a model of oyster population dynamics for Maryland's Chesapeake Bay. Model parameters were fishing mortality, natural mortality, recruitment, and carrying capacity. We calibrated for the Maryland bay as a whole and separately for 3 salinity z...
Model identification using stochastic differential equation grey-box models in diabetes.
Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik
2013-03-01
The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
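A minimal sketch of the grey-box idea, assuming a hypothetical one-state linear SDE with separate system (diffusion) and measurement noise terms, simulated with an Euler-Maruyama scheme; it is not the identified T1DM model, and the parameter values are placeholders.

```python
# One-state grey-box toy: dX = (-a (X - 100) + b u) dt + sigma_sys dW, observed
# as y = X + measurement noise. Euler-Maruyama simulation; all values are
# hypothetical and only illustrate the split between system and observation noise.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1.0 / 60.0, 600                    # time step [h], number of steps
a, b = 0.05, 5.0                           # decay rate, input gain (hypothetical)
sigma_sys, sigma_meas = 0.3, 1.0           # system vs. measurement noise levels

x = np.empty(n)
x[0] = 100.0                               # state, e.g. glucose-like signal
u = np.zeros(n)
u[120:180] = 1.0                           # input pulse (e.g. a meal)
for k in range(n - 1):
    drift = -a * (x[k] - 100.0) + b * u[k]
    x[k + 1] = x[k] + drift * dt + sigma_sys * np.sqrt(dt) * rng.normal()

y = x + sigma_meas * rng.normal(size=n)    # discrete-time noisy observations
print("simulated state range: %.1f to %.1f" % (x.min(), x.max()))
print("first observations:", np.round(y[:5], 1))
```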
Batrouni, G. G.; Rousseau, V. G.; Scalettar, R. T.; ...
2014-11-17
Here, we study the phase diagram of the one-dimensional bosonic Hubbard model with contact (U) and near-neighbor (V) interactions, focusing on the gapped Haldane insulating (HI) phase, which is characterized by an exotic nonlocal order parameter. The parameter regime (U, V and μ) where this phase exists, and how it competes with other phases such as the supersolid (SS) phase, is incompletely understood. We use the Stochastic Green Function quantum Monte Carlo algorithm as well as the density matrix renormalization group to map out the phase diagram. Our main conclusions are that the HI exists only at density ρ = 1, and that the SS phase exists for a very wide range of parameters (including commensurate fillings) and displays power-law decay in the one-body Green function. Additionally, we show that at fixed integer density, the system exhibits phase separation in the (U, V) plane.
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least square fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also NIMS spectra of Jupiter anticipated from the Galileo mission.
NASA Astrophysics Data System (ADS)
Neri, Mattia; Toth, Elena
2017-04-01
The study presents the implementation of different regionalisation approaches for transferring model parameters from similar and/or neighbouring gauged basins to an ungauged catchment; in particular, it uses a semi-distributed, continuously simulating conceptual rainfall-runoff model for simulating daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternately, as donor and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into zones of different altitude that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has so far been very limited, even though snow accumulation and melting phenomena play an important role in the study basins. Two methods, both widely applied in the recent literature, are used for regionalising the model: i) "parameters averaging", where each parameter is obtained as a weighted mean of the parameters obtained, through calibration, on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire parameter set of each donor basin and the simulated outputs are then averaged. In the first approach the parameters are regionalised independently from each other, whereas in the second the correlation among the parameters is maintained. Since the model is semi-distributed, with each elevation zone contributing separately, the study also tests a modified version of the second approach ("output averaging") in which each zone is considered an autonomous entity, whose parameters are transposed to the corresponding elevation zone of the ungauged basin. The study also explores the choice of the weights to be used for averaging the parameters (in the "parameters averaging" approach) or for averaging the simulated streamflows (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance of the ungauged basin/zone to the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
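A minimal sketch of the two regionalisation strategies, assuming a hypothetical linear placeholder in place of the semi-distributed HBV/TUWien model and similarity-based weights chosen arbitrarily; with a linear placeholder the two strategies coincide, which is why the nonlinearity of real rainfall-runoff models makes the choice matter.

```python
# Two regionalisation strategies with a hypothetical linear placeholder model.
# `run_model` stands in for a calibrated rainfall-runoff model run; the weights
# would come from catchment-descriptor similarity in practice.
import numpy as np

def run_model(params, rain):
    # placeholder "model": runoff = runoff_coefficient * rain - constant_loss
    return params[0] * rain - params[1]

donor_params = [np.array([0.6, 0.2]), np.array([0.4, 0.1]), np.array([0.7, 0.3])]
weights = np.array([0.5, 0.3, 0.2])          # similarity-based weights (assumed)
rain = np.array([5.0, 0.0, 12.0, 3.0])       # daily forcing at the ungauged site

# (i) parameters averaging: average the donor parameters, run the model once
p_avg = sum(w * p for w, p in zip(weights, donor_params))
q_param_avg = run_model(p_avg, rain)

# (ii) output averaging: run with each donor's full parameter set, average outputs
q_output_avg = sum(w * run_model(p, rain) for w, p in zip(weights, donor_params))

print(q_param_avg, q_output_avg)             # identical here only because the placeholder is linear
```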
Bauerle, William L.; Bowden, Joseph D.
2011-01-01
A spatially explicit mechanistic model, MAESTRA, was used to separate key parameters affecting transpiration to provide insights into the most influential parameters for accurate predictions of within-crown and within-canopy transpiration. Once validated among Acer rubrum L. genotypes, model responses to different parameterization scenarios were scaled up to stand transpiration (expressed per unit leaf area) to assess how transpiration might be affected by the spatial distribution of foliage properties. For example, when physiological differences were accounted for, differences in leaf width among A. rubrum L. genotypes resulted in a 25% difference in transpiration. An in silico within-canopy sensitivity analysis was conducted over the range of genotype parameter variation observed and under different climate forcing conditions. The analysis revealed that seven of 16 leaf traits had a ≥5% impact on transpiration predictions. Under sparse foliage conditions, comparisons of the present findings with previous studies were in agreement that parameters such as the maximum Rubisco-limited rate of photosynthesis can explain ∼20% of the variability in predicted transpiration. However, the spatial analysis shows how such parameters can decrease or change in importance below the uppermost canopy layer. Alternatively, model sensitivity to leaf width and minimum stomatal conductance was continuous along a vertical canopy depth profile. Foremost, transpiration sensitivity to an observed range of morphological and physiological parameters is examined and the spatial sensitivity of transpiration model predictions to vertical variations in microclimate and foliage density is identified to reduce the uncertainty of current transpiration predictions. PMID:21617246
NASA Astrophysics Data System (ADS)
Adumitroaie, V.; Oyafuso, F. A.; Levin, S.; Gulkis, S.; Janssen, M. A.; Santos-Costa, D.; Bolton, S. J.
2017-12-01
In order to obtain credible atmospheric composition retrieval values from Jupiter's observed radiative signature via Juno's MWR instrument, it is necessary to separate as robustly as possible the contributions from three emission sources: the CMB, the planet, and the synchrotron radiation belts. The numerical separation requires a refinement, based on the in-situ data, of a higher-fidelity model for the synchrotron emission, namely the multi-parameter, multi-zonal model of Levin et al. (2001). This model employs an empirical electron energy distribution which, prior to the Juno mission, had been adjusted exclusively from VLA observations. At minimum, 8 sets of perijove observations (i.e., by PJ9) have to be delivered to an inverse model for retrieval of the electron distribution parameters, with the goal of matching the synchrotron emission observed along MWR's lines of sight. The challenges and approaches taken to perform this task are discussed here. The model will be continuously improved with the availability of additional information from both the MWR and magnetometer instruments.
Boutilier, Michael S H; Sun, Chengzhen; O'Hern, Sean C; Au, Harold; Hadjiconstantinou, Nicolas G; Karnik, Rohit
2014-01-28
Gas transport through intrinsic defects and tears is a critical yet poorly understood phenomenon in graphene membranes for gas separation. We report that independent stacking of graphene layers on a porous support exponentially decreases flow through defects. On the basis of experimental results, we develop a gas transport model that elucidates the separate contributions of tears and intrinsic defects on gas leakage through these membranes. The model shows that the pore size of the porous support and its permeance critically affect the separation behavior, and reveals the parameter space where gas separation can be achieved regardless of the presence of nonselective defects, even for single-layer membranes. The results provide a framework for understanding gas transport in graphene membranes and guide the design of practical, selectively permeable graphene membranes for gas separation.
Measuring Dark Matter With MilkyWay@home
NASA Astrophysics Data System (ADS)
Shelton, Siddhartha; Newberg, Heidi Jo; Arsenault, Matthew; Bauer, Jacob; Desell, Travis; Judd, Roland; Magdon-Ismail, Malik; Newby, Matthew; Rice, Colin; Thompson, Jeffrey; Ulin, Steve; Weiss, Jake; Widrow, Larry
2016-01-01
We perform N-body simulations of two component dwarf galaxies (dark matter and stars follow separate distributions) falling into the Milky Way and the forming of tidal streams. Using MilkyWay@home we optimize the parameters of the progenitor dwarf galaxy and the orbital time to fit the simulated distribution of stars along the tidal stream to the observed distribution of stars. Our initial dwarf galaxy models are constructed with two separate Plummer profiles (one for the dark matter and one for the baryonic matter), sampled using a generalized distribution function for spherically symmetric systems. We perform rigorous testing to ensure that our simulated galaxies are in virial equilibrium, and stable over a simulation time. The N-body simulations are performed using a Barnes-Hut Tree algorithm. Optimization traverses the likelihood surface from our six model parameters using particle swarm and differential evolution methods. We have generated simulated data with known model parameters that are similar to those of the Orphan Stream. We show that we are able to recover a majority of our model parameters, and most importantly the mass-to-light ratio of the now disrupted progenitor galaxy, using MilkyWay@home. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.
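A minimal sketch of constructing one component of such a model: sampling positions from a single Plummer profile by inverting its cumulative mass fraction. The scale radius and particle number are arbitrary, velocity sampling from the distribution function is omitted, and a two-component dwarf would simply use two such profiles with different parameters.

```python
# Sampling positions from a Plummer sphere by inverse-transform sampling of the
# cumulative mass fraction M(<r)/M = r^3 / (r^2 + a^2)^(3/2). Scale radius and
# particle count are arbitrary; velocities would be drawn separately from the
# distribution function.
import numpy as np

rng = np.random.default_rng(0)
n, a = 10000, 0.2                                  # particles, scale radius [kpc]

u = rng.uniform(size=n)
r = a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)           # inverted cumulative mass fraction

cos_t = rng.uniform(-1.0, 1.0, n)                  # isotropic directions
phi = rng.uniform(0.0, 2.0 * np.pi, n)
sin_t = np.sqrt(1.0 - cos_t ** 2)
xyz = np.column_stack([r * sin_t * np.cos(phi),
                       r * sin_t * np.sin(phi),
                       r * cos_t])

print(xyz.shape, "median radius %.3f kpc (theory %.3f)"
      % (np.median(r), a / np.sqrt(2.0 ** (2.0 / 3.0) - 1.0)))
```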
D'Archivio, Angelo Antonio; Maggi, Maria Anna; Ruggieri, Fabrizio
2014-08-01
In this paper, a multilayer artificial neural network is used to model simultaneously the effect of solute structure and eluent concentration profile on the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient elution. The retention data of 24 triazines, including common herbicides and their metabolites, are collected under 13 different elution modes, covering the following experimental domain: starting acetonitrile volume fraction ranging between 40 and 60% and gradient slope ranging between 0 and 1% acetonitrile/min. The gradient parameters together with five selected molecular descriptors, identified by quantitative structure-retention relationship modelling applied to individual separation conditions, are the network inputs. Predictive performance of this model is evaluated on six external triazines and four unseen separation conditions. For comparison, retention of triazines is modelled by both quantitative structure-retention relationships and response surface methodology, which describe separately the effect of molecular structure and gradient parameters on the retention. Although applied to a wider variable domain, the network provides a performance comparable to that of the above "local" models and retention times of triazines are modelled with accuracy generally better than 7%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling and testing of a tube-in-tube separation mechanism of bodies in space
NASA Astrophysics Data System (ADS)
Michaels, Dan; Gany, Alon
2016-12-01
A tube-in-tube concept for separation of bodies in space was investigated theoretically and experimentally. The separation system is based on the generation of high-pressure gas by combustion of solid propellant and on restricting the expansion of the gas only to the ejection of the two bodies in opposite directions, in such a fashion as to maximize the generated impulse. An interior ballistics model was developed in order to investigate the potential benefits of the separation system for a large range of space body masses and for different design parameters such as geometry and propellant. The model takes into account solid propellant combustion, heat losses, and gas phase chemical reactions. The model shows that for large bodies (above 100 kg) and typical separation velocities of 5 m/s, the proposed separation mechanism may be characterized by a specific impulse of 25,000 s, two orders of magnitude larger than that of conventional solid rockets. This means that the proposed separation system requires only 1% of the propellant mass that would be needed by a conventional rocket for the same mission. Since many existing launch vehicles obtain such separation velocities by using conventional solid rocket motors (retro-rockets), the implementation of the new separation system design can dramatically reduce the mass of the separation system and increase safety. A dedicated experimental setup was built in order to demonstrate the concept and validate the model. The experimental results revealed specific impulse values of up to 27,000 s and showed good correspondence with the model.
NASA Astrophysics Data System (ADS)
Ushakov, Anton; Orlov, Alexey; Sovach, Victor P.
2018-03-01
This article presents the results of research on the filling of a gas centrifuge cascade for separation of a multicomponent isotope mixture with process gas at various feed flow rates. A mathematical model of the nonstationary hydraulic and separation processes occurring in the gas centrifuge cascade is used. The object of the research is to determine how nickel isotopes behave in the cascade during its filling. It is shown that the isotope concentrations in the cascade stages after filling depend on the variable parameters and are not equal to the concentrations in the initial isotope mixture (or in the cascade feed flow); the latter assumption has been used by earlier researchers when modelling nonstationary processes such as the establishment of steady-state isotope concentrations in the cascade. The article describes the physical laws governing the isotope distribution over the cascade stages after filling. It is shown that, by varying the cascade parameters (feed flow rate, feed stage number or number of cascade stages), it is possible to change the isotope concentrations in the cascade output flows (light or heavy fraction) so as to reduce the duration of the subsequent transition to steady-state isotope concentrations in the cascade.
Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.
Glöckner, Andreas; Pachur, Thorsten
2012-04-01
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
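A minimal sketch of the CPT building blocks, assuming the usual power value function with loss aversion and an inverse-S probability weighting function, applied to a simple two-outcome mixed gamble with a common weighting parameter for gains and losses; the parameter values are textbook-style illustrations, not the study's estimates.

```python
# CPT components for a two-outcome mixed gamble: power value function with loss
# aversion and an inverse-S probability weighting function (same gamma for gains
# and losses, a simplification). Parameter values are illustrative only.
import numpy as np

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

def weight(p, gamma=0.61):
    p = np.asarray(p, dtype=float)
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Subjective value of "win 100 with probability 0.3, otherwise lose 50":
v = weight(0.3) * value(100.0) + weight(0.7) * value(-50.0)
print(float(v))
```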
Trippa, Giuliana; Ventikos, Yiannis; Taggart, David P; Coussios, Constantin-C
2011-02-01
A computational fluid dynamics (CFD) model is presented to simulate the removal of lipid particles from blood using a novel ultrasonic quarter-wavelength separator. The Lagrangian-Eulerian CFD model accounts for conservation of mass and momentum, for the presence of lipid particles of a range of diameters, for the acoustic force as experienced by the particles in the blood, as well as for gravity and other particle-fluid interaction forces. In the separator, the liquid flows radially inward within a fluid chamber formed between a disc-shaped transducer and a disc-shaped reflector. Following separation of the lipid particles, blood exits the separator axially through a central opening on the disc-shaped reflector. Separator diameters studied varied between 12 and 18 cm, and gap sizes between the discs of 600 μm, 800 μm and 1 mm were considered. Results show a strong effect of residence time of the particles within the chamber on the separation performance. Different separator configurations were identified, which could give a lipid removal performance of 95% or higher when processing 62.5 cm³/min of blood. The developed model provides a design method for the selection of geometric and operating parameters for the ultrasonic separator.
Distributed modelling of hydrologic regime at three subcatchments of Kopaninský tok catchment
NASA Astrophysics Data System (ADS)
Žlábek, Pavel; Tachecí, Pavel; Kaplická, Markéta; Bystřický, Václav
2010-05-01
Kopaninský tok catchment is situated in the crystalline area of the Bohemo-Moravian Highlands, a hilly region with cambisol cover and prevailing agricultural land use. It has been the subject of long-term observation (since the 1980s). Time series (discharge, precipitation, climatic parameters, etc.) are now available at a 10-min time step; for water quality, average daily composite samples plus samples taken during events are available. A soil survey yielding reference soil hydraulic properties for the horizons and a vegetation cover survey including LAI measurements have been carried out. All parameters were analysed and used to establish distributed mathematical models of the P6, P52 and P53 subcatchments, using the MIKE SHE 2009 WM deterministic hydrologic modelling system. The aim is to simulate the long-term hydrologic regime as well as rainfall-runoff events, serving as the basis for modelling the nitrate regime and the influence of agricultural management in the next step. The subcatchments differ in the proportion of artificially drained area, soil types, land use and slope angle. The models are set up on a regular computational grid of 2 m size. The basic time step was set to 2 h, and the total simulated period covers 3 years. Runoff response and moisture regime are compared using spatially distributed simulation results. Sensitivity analysis revealed the most important parameters influencing the model response. The importance of the spatial distribution of initial conditions was underlined. Furthermore, different runoff components, in terms of their origin, flow paths and travel time, were separated using a combination of two runoff separation techniques (a digital filter and a simple conceptual model, GROUND) in 12 subcatchments of the Kopaninský tok catchment. These two methods were chosen after testing a number of methods. Ordination diagrams produced with the Canoco software were used to evaluate the influence of different catchment parameters on the runoff components. A canonical ordination method (redundancy analysis, RDA) was used to explain one data set (runoff components: either the volumes of each runoff component or the occurrence of baseflow) with another data set (catchment parameters: proportion of arable land, proportion of forest, proportion of vulnerable zones with high infiltration capacity, average slope, topographic index and runoff coefficient). The influence was analysed both for the long-term runoff balance and for selected rainfall-runoff events. Keywords: small catchment, water balance modelling, rainfall-runoff modelling, distributed deterministic model, runoff separation, sensitivity analysis
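A minimal sketch of recursive digital filtering for baseflow separation; the Lyne-Hollick one-parameter form is used here purely as an example, since the abstract does not state which filter was applied, and the streamflow series and alpha value are illustrative.

```python
# One-parameter recursive digital filter (Lyne-Hollick form, single pass):
# the quickflow component is filtered out and constrained to [0, Q], and the
# baseflow is the remainder. Streamflow values and alpha are illustrative.
import numpy as np

def baseflow_filter(q, alpha=0.925):
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, q.size):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # keep quickflow within [0, Q]
    return q - quick                               # baseflow component

q = np.array([0.5, 0.5, 2.0, 5.0, 3.0, 1.5, 0.9, 0.7, 0.6, 0.55])
print(baseflow_filter(q).round(2))
```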
NASA Technical Reports Server (NTRS)
Marvin, J. G.; Horstman, C. C.; Rubesin, M. W.; Coakley, T. J.; Kussoy, M. I.
1975-01-01
An experiment designed to test and guide computations of the interaction of an impinging shock wave with a turbulent boundary layer is described. Detailed mean flow-field and surface data are presented for two shock strengths which resulted in attached and separated flows, respectively. Numerical computations, employing the complete time-averaged Navier-Stokes equations along with algebraic eddy-viscosity and turbulent Prandtl number models to describe shear stress and heat flux, are used to illustrate the dependence of the computations on the particulars of the turbulence models. Models appropriate for zero-pressure-gradient flows predicted the overall features of the flow fields, but were deficient in predicting many of the details of the interaction regions. Improvements to the turbulence model parameters were sought through a combination of detailed data analysis and computer simulations which tested the sensitivity of the solutions to model parameter changes. Computer simulations using these improvements are presented and discussed.
NASA Astrophysics Data System (ADS)
Dabiri, Arman; Butcher, Eric A.; Nazari, Morad
2017-02-01
Compliant impacts can be modeled using linear viscoelastic constitutive models. While such impact models for realistic viscoelastic materials based on integer-order derivatives of force and displacement usually require a large number of parameters, compliant impact models obtained using fractional calculus can be advantageous since they use fewer parameters and successfully capture the hereditary property. In this paper, we introduce the fractional Chebyshev collocation (FCC) method as an approximation tool for the numerical simulation of several linear fractional viscoelastic compliant impact models, in which the overall coefficient of restitution for the impact is studied as a function of the fractional model parameters for the first time. Other relevant impact characteristics such as hysteresis curves, impact force gradient, and penetration and separation depths are also studied.
Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng
2015-01-26
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffuse approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. This parameterized scheme is shown to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method to established ones is demonstrated in comparison with Monte Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A; Sandison, G; Schwartz, J
Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an inverse ill-posed problem described by the Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. The variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data. Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to the development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.
Fault Detection of Bearing Systems through EEMD and Optimization Algorithm
Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan
2017-01-01
This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space. PMID:29143772
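The pipeline described in this abstract (decomposition, statistical feature extraction, weighting, low-dimensional projection) can be outlined in a few lines. The sketch below assumes the IMFs have already been obtained from an EEMD implementation and uses a generic feature set, fixed placeholder weights standing in for the PSO-selected ones, and PCA only (no Isomap); it illustrates the structure, not the authors' code.

```python
# Sketch of the feature-extraction and visualization stage described above.
# Assumptions: IMFs are already available from an EEMD implementation; the feature
# set and the fixed weights (standing in for PSO-selected weightings) are illustrative.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical stand-in: 20 vibration records, each decomposed into 6 IMFs of 2048 samples.
imfs = rng.normal(size=(20, 6, 2048))

def damage_sensitive_features(record_imfs):
    """Statistical features per IMF: RMS, kurtosis, skewness, crest factor."""
    feats = []
    for imf in record_imfs:
        rms = np.sqrt(np.mean(imf ** 2))
        feats += [rms, kurtosis(imf), skew(imf), np.max(np.abs(imf)) / rms]
    return np.array(feats)

X = np.array([damage_sensitive_features(r) for r in imfs])
weights = np.ones(X.shape[1])        # placeholder for PSO-optimized weightings
X_weighted = X * weights

# Project the weighted parameter vectors into 3D for visual separation of classes.
X3 = PCA(n_components=3).fit_transform(X_weighted)
print(X3.shape)  # (20, 3)
```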
Creating photorealistic virtual model with polarization-based vision system
NASA Astrophysics Data System (ADS)
Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi
2005-08-01
Recently, 3D models have been used in many fields such as education, medical services, entertainment, art, and digital archiving because of advances in computational power, and the demand for photorealistic virtual models keeps increasing for higher realism. In the computer vision field, a number of techniques have been developed for creating a virtual model by observing the real object. In this paper, we propose a method for creating a photorealistic virtual model by using a laser range sensor and a polarization-based image capture system. We capture the range and color images of the object, which is rotated on a rotary table. By using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. In the separation of reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
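As a rough illustration of the polarization-based separation step, the sketch below uses the common simplification that, under unpolarized illumination, the specular component is strongly polarized while the diffuse component is not, so per-pixel minima and maxima over polarizer orientations give crude diffuse and specular estimates. This is an assumed textbook approximation, not the paper's exact estimator.

```python
# Minimal sketch of separating diffuse and specular reflection from a polarizer sequence.
# Assumption: the per-pixel minimum over polarizer angles approximates half the diffuse
# intensity, and the max-min difference approximates the (polarized) specular part.
import numpy as np

def separate_reflection(images):
    """images: array (n_angles, H, W) captured at different polarizer orientations."""
    i_min = images.min(axis=0)
    i_max = images.max(axis=0)
    diffuse = 2.0 * i_min          # unpolarized part appears equally at every angle
    specular = i_max - i_min       # polarized part modulates with the polarizer
    return diffuse, specular

# Example with synthetic data
frames = np.random.rand(8, 64, 64)
d, s = separate_reflection(frames)
```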
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
NASA Astrophysics Data System (ADS)
Anikin, A. S.
2018-06-01
Conditional statistical characteristics of the phase difference are considered depending on the ratio of instantaneous output signal amplitudes of spatially separated weakly directional antennas for the normal field model for paths with radio-wave scattering. The dependences obtained are related to the physical processes on the radio-wave propagation path. The normal model parameters are established at which the statistical characteristics of the phase difference depend on the ratio of the instantaneous amplitudes and hence can be used to measure the phase difference. Using Shannon's formula, the amount of information on the phase difference of signals contained in the ratio of their amplitudes is calculated depending on the parameters of the normal field model. Approaches are suggested to reduce the shift of phase difference measured for paths with radio-wave scattering. A comparison with results of computer simulation by the Monte Carlo method is performed.
Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul
2016-01-01
Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
Korjus, Kristjan; Hebart, Martin N; Vicente, Raul
2016-01-01
Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
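For contrast, the sketch below shows the standard baseline that the proposed cross-validation-and-cross-testing scheme improves on: cross-validation on a development set for parameter selection, followed by a single evaluation on a held-out test set. The dataset, classifier, and split sizes are placeholders.

```python
# Sketch of the standard "cross-validation + separate test set" baseline discussed above.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Hold out a test set: it buys an unbiased performance estimate at the cost of
# statistical power for both parameter selection and the final test.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Parameter search via cross-validation on the development set only.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5).fit(X_dev, y_dev)

# Generalization estimate on data never touched during parameter selection.
print(search.best_params_, search.score(X_test, y_test))
```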
Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets
NASA Technical Reports Server (NTRS)
Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.
1978-01-01
A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
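A minimal sketch of the first-order form mentioned above, assuming a conventional second-order structural model; the state vector stacks displacements and velocities, and the resulting system matrix need not be symmetric.

```latex
% Sketch: a second-order system M\ddot{q} + C\dot{q} + Kq = f(t) recast in first-order
% form with the state x = [q; \dot{q}].
\[
\begin{aligned}
M\ddot{q} + C\dot{q} + Kq &= f(t), \\
\dot{x} = A x + B f(t), \qquad
x = \begin{bmatrix} q \\ \dot{q} \end{bmatrix},\quad
A &= \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix},\quad
B = \begin{bmatrix} 0 \\ M^{-1} \end{bmatrix}.
\end{aligned}
\]
```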
DOE Office of Scientific and Technical Information (OSTI.GOV)
Usmanov, Arcadi V.; Matthaeus, William H.; Goldstein, Melvyn L., E-mail: arcadi.usmanov@nasa.gov
2016-03-20
We have developed a four-fluid, three-dimensional magnetohydrodynamic model of the solar wind interaction with the local interstellar medium. The unique features of the model are: (a) a three-fluid description for the charged components of the solar wind and interstellar plasmas (thermal protons, electrons, and pickup protons), (b) the built-in turbulence transport equations based on Reynolds decomposition and coupled with the mean-flow Reynolds-averaged equations, and (c) a solar corona/solar wind model that supplies inner boundary conditions at 40 au by computing solar wind and magnetic field parameters outward from the coronal base. The three charged species are described by separate energy equations and are assumed to move with the same velocity. The fourth fluid in the model is the interstellar hydrogen which is treated by separate continuity, momentum, and energy equations and is coupled with the charged components through photoionization and charge exchange. We evaluate the effects of turbulence transport and pickup protons on the global heliospheric structure and compute the distribution of plasma, magnetic field, and turbulence parameters throughout the heliosphere for representative solar minimum and maximum conditions. We compare our results with Voyager 1 observations in the outer heliosheath and show that the relative amplitude of magnetic fluctuations just outside the heliopause is in close agreement with the value inferred from Voyager 1 measurements by Burlaga et al. The simulated profiles of magnetic field parameters in the outer heliosheath are in qualitative agreement with the Voyager 1 observations and with the analytical model of magnetic field draping around the heliopause of Isenberg et al.
Integrated direct/indirect adaptive robust motion trajectory tracking control of pneumatic cylinders
NASA Astrophysics Data System (ADS)
Meng, Deyuan; Tao, Guoliang; Zhu, Xiaocong
2013-09-01
This paper studies the precision motion trajectory tracking control of a pneumatic cylinder driven by a proportional-directional control valve. An integrated direct/indirect adaptive robust controller is proposed. The controller employs a physical model based indirect-type parameter estimation to obtain reliable estimates of unknown model parameters, and utilises a robust control method with dynamic compensation type fast adaptation to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. Due to the use of projection mapping, the robust control law and the parameter adaption algorithm can be designed separately. Since the system model uncertainties are unmatched, the recursive backstepping technology is adopted to design the robust control law. Extensive comparative experimental results are presented to illustrate the effectiveness of the proposed controller and its performance robustness to parameter variations and sudden disturbances.
Mathematical modeling of a thermovoltaic cell
NASA Technical Reports Server (NTRS)
White, Ralph E.; Kawanami, Makoto
1992-01-01
A new type of battery named 'Vaporvolt' cell is in the early stage of its development. A mathematical model of a CuO/Cu 'Vaporvolt' cell is presented that can be used to predict the potential and the transport behavior of the cell during discharge. A sensitivity analysis of the various transport and electrokinetic parameters indicates which parameters have the most influence on the predicted energy and power density of the 'Vaporvolt' cell. This information can be used to decide which parameters should be optimized or determined more accurately through further modeling or experimental studies. The optimal thicknesses of electrodes and separator, the concentration of the electrolyte, and the current density are determined by maximizing the power density. These parameter sensitivities and optimal design parameter values will help in the development of a better CuO/Cu 'Vaporvolt' cell.
Displacement-based back-analysis of the model parameters of the Nuozhadu high earth-rockfill dam.
Wu, Yongkang; Yuan, Huina; Zhang, Bingyin; Zhang, Zongliang; Yu, Yuzhen
2014-01-01
The parameters of the constitutive model, the creep model, and the wetting model of materials of the Nuozhadu high earth-rockfill dam were back-analyzed together based on field monitoring displacement data by employing an intelligent back-analysis method. In this method, an artificial neural network is used as a substitute for time-consuming finite element analysis, and an evolutionary algorithm is applied for both network training and parameter optimization. To avoid simultaneous back-analysis of many parameters, the model parameters of the three main dam materials are decoupled and back-analyzed separately in a particular order. Displacement back-analyses were performed at different stages of the construction period, with and without considering the creep and wetting deformations. Good agreement between the numerical results and the monitoring data was obtained for most observation points, which implies that the back-analysis method and decoupling method are effective for solving complex problems with multiple models and parameters. The comparison of calculation results based on different sets of back-analyzed model parameters indicates the necessity of taking the effects of creep and wetting into consideration in the numerical analyses of high earth-rockfill dams. With the resulting model parameters, the stress and deformation distributions at completion are predicted and analyzed.
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
A square-force cohesion model and its extraction from bulk measurements
NASA Astrophysics Data System (ADS)
Liu, Peiyuan; Lamarche, Casey; Kellogg, Kevin; Hrenya, Christine
2017-11-01
Cohesive particles remain poorly understood, with order-of-magnitude differences among prior physical predictions of agglomerate size. A major obstacle lies in the absence of robust models of particle-particle cohesion, thereby precluding accurate prediction of the behavior of cohesive particles. Rigorous cohesion models commonly contain parameters related to surface roughness, to which cohesion shows extreme sensitivity. However, both roughness measurement and its distillation into these model parameters are challenging. Accordingly, we propose a "square-force" model, where the cohesive force remains constant up to a cut-off separation. Via DEM simulations, we demonstrate the validity of the square-force model as a surrogate for more rigorous models, when its two parameters are selected to match the two key quantities governing dense and dilute granular flows, namely the maximum cohesive force and the critical cohesive energy, respectively. Perhaps more importantly, we establish a method to extract the parameters in the square-force model via defluidization, due to its ability to isolate the effects of the two parameters. Thus, instead of relying on complicated scans of individual grains, determination of particle-particle cohesion from simple bulk measurements becomes feasible.
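The proposed model is simple enough to state directly. The sketch below encodes a square-force law with hypothetical parameter values; the maximum cohesive force is the constant force itself, and the critical cohesive energy is that force integrated over the cut-off distance.

```python
# Minimal sketch of the "square-force" cohesion model described above: the cohesive
# force is constant up to a cut-off separation and zero beyond it. Parameter values
# (f0, d_cut) are illustrative; matching to a rigorous model would choose them so that
# the maximum cohesive force and the critical cohesive energy agree.
def square_force(d, f0=1.0e-7, d_cut=100e-9):
    """Cohesive force [N] at surface separation d [m]."""
    return f0 if 0.0 <= d <= d_cut else 0.0

def cohesive_energy(f0=1.0e-7, d_cut=100e-9):
    """Work needed to pull two particles apart from contact to beyond the cut-off."""
    return f0 * d_cut  # integral of a constant force over the cut-off distance

print(square_force(50e-9), cohesive_energy())
```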
NASA Astrophysics Data System (ADS)
Ramos, José A.; Mercère, Guillaume
2016-12-01
In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
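For reference, a sketch of the standard deterministic 2D Roesser state-space form assumed here (stochastic terms omitted); the horizontal and vertical states advance in the i and j directions, respectively.

```latex
% Standard deterministic 2D Roesser state-space model, with horizontal state x^h
% and vertical state x^v:
\[
\begin{aligned}
\begin{bmatrix} x^{h}(i+1,j) \\ x^{v}(i,j+1) \end{bmatrix}
&= \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
   \begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix}
 + \begin{bmatrix} B_{1} \\ B_{2} \end{bmatrix} u(i,j), \\
y(i,j) &= \begin{bmatrix} C_{1} & C_{2} \end{bmatrix}
   \begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix} + D\, u(i,j).
\end{aligned}
\]
```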
Herrero-Martínez, José Manuel; Izquierdo, Pere; Sales, Joaquim; Rosés, Martí; Bosch, Elisabeth
2008-10-01
The retention behavior of a series of fat-soluble vitamins has been established on the basis of a polarity retention model, log k = (log k)_0 + p (P_m^N - P_s^N), with p being the polarity of the solute, P_m^N the mobile-phase polarity, and (log k)_0 and P_s^N two parameters characterizing the stationary phase. To estimate the p-values of the solutes, two approaches have been considered. The first is based on the application of a QSPR model, derived from the molecular structure of the solutes and their log P(o/w), while in the second, the p-values are obtained from several experimental measurements. The quality of prediction of both approaches has also been evaluated, with the second one giving more accurate results for the most lipophilic vitamins. This model allows establishing the best conditions to separate and determine simultaneously some fat-soluble vitamins in dairy foods.
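A tiny numeric illustration of how the fitted parameters would be used for prediction; all values below are hypothetical.

```python
# Hypothetical numbers only: shows how the polarity retention model above predicts
# a retention factor from fitted solute and phase parameters.
def log_k(p, Pm_N, logk0, Ps_N):
    """log k = (log k)_0 + p (P_m^N - P_s^N)."""
    return logk0 + p * (Pm_N - Ps_N)

k = 10 ** log_k(p=1.8, Pm_N=0.65, logk0=-0.4, Ps_N=0.30)  # predicted retention factor
print(round(k, 2))
```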
Active-Adaptive Control of Inlet Separation Using Supersonic Microjets
NASA Technical Reports Server (NTRS)
Alvi, Farrukh S.
2007-01-01
Flow separation in internal and external flows generally results in a significant degradation in aircraft performance. For internal flows, such as inlets and transmission ducts in aircraft propulsion systems, separation is undesirable as it reduces the overall system performance. The aim of this research has been to understand the nature of separation and, more importantly, to explore techniques to actively control it. In this research, we extended our investigation of active separation control (under a previous NASA grant) where we explored the use of microjets for the control of boundary layer separation. The geometry used for the initial study was a simple diverging Stratford ramp, equipped with arrays of microjets. These early results clearly show that the activation of microjets eliminated flow separation. Furthermore, the velocity-field measurements, using PIV, also demonstrate that the gain in momentum due to the elimination of separation is at least an order of magnitude larger (two orders of magnitude larger in most cases) than the momentum injected by the microjets and is accomplished with very little mass flow through the microjets. Based on our initial promising results, this research was continued under the present grant, using a more flexible model. This model allows for the magnitude and extent of separation as well as the microjet parameters to be independently varied. The results using this model were even more encouraging and demonstrated that microjet control completely eliminated significant regions of flow separation over a wide range of conditions with almost negligible mass flow. The flowfield and its response to microjets were further examined in detail using 3-component PIV and unsteady pressure measurements, among other techniques. As the results presented in this report show, microjets were successfully used to control separation of a much larger extent and magnitude than demonstrated in our earlier experiments. In fact, using the appropriate combination of control parameters (microjet location, angle and pressure), separation was completely eliminated for the largest separated flowfield we could generate with the present model. Separation control also resulted in a significant reduction in the unsteady pressures in the flow, where the unsteady pressure field was found to be directly responsive to the state of the flow above the surface. Hence, our study indicates that the unsteady pressure signature is a strong candidate for a flow-state sensor, which can be used to estimate the location, magnitude and other properties of the separated flowfield. Once better understood and properly utilized, this behavior can be of significant practical importance for developing and implementing online control.
A Multinomial Model of Event-Based Prospective Memory
ERIC Educational Resources Information Center
Smith, Rebekah E.; Bayen, Ute J.
2004-01-01
Prospective memory is remembering to perform an action in the future. The authors introduce the 1st formal model of event-based prospective memory, namely, a multinomial model that includes 2 separate parameters related to prospective memory processes. The 1st measures preparatory attentional processes, and the 2nd measures retrospective memory…
NASA Astrophysics Data System (ADS)
Becker, M.; Bour, O.; Le Borgne, T.; Longuevergne, L.; Lavenant, N.; Cole, M. C.; Guiheneuf, N.
2017-12-01
Determining hydraulic and transport connectivity in fractured bedrock has long been an important objective in contaminant hydrogeology, petroleum engineering, and geothermal operations. A persistent obstacle to making this determination is that the characteristic length scale is nearly impossible to determine in sparsely fractured networks. Both flow and transport occur through an unknown structure of interconnected fractures and/or fracture zones, leaving the actual length that water or solutes travel undetermined. This poses difficulties for flow and transport models. For example, hydraulic equations require a separation distance between the pumping and observation wells to determine hydraulic parameters. When well pairs are close, the structure of the network can influence the interpretation of well separation and the flow dimension of the tested system. This issue is explored using hydraulic tests conducted in a shallow fractured crystalline rock. Periodic (oscillatory) slug tests were performed at the Ploemeur fractured rock test site located in Brittany, France. Hydraulic connectivity was examined between three zones in one well and four zones in another, located 6 m apart in map view. The wells are sufficiently close, however, that the tangential distance between the tested zones ranges between 6 and 30 m. Using standard periodic formulations of radial flow, estimates of storativity scale inversely with the square of the separation distance, while estimates of hydraulic diffusivity scale directly with it. Uncertainty in the connection paths between the two wells leads to an order of magnitude uncertainty in estimates of storativity and hydraulic diffusivity, although estimates of transmissivity are unaffected. The assumed flow dimension results in alternative estimates of hydraulic parameters. In general, one is faced with the prospect of assuming the hydraulic parameter and inverting the separation distance, or vice versa. Similar uncertainties exist, for instance, when trying to invert transport parameters from tracer mean residence time. This field test illustrates that when dealing with fracture networks, there is a need for analytic methods whose complexity lies between simple radial solutions and discrete fracture network models.
NASA Astrophysics Data System (ADS)
Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng
2018-02-01
The determination of the background concentration of PM2.5 is important for understanding the contribution of local emission sources to the total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques for estimating the PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, one-parameter algorithm, and Boughton two-parameter algorithm), sliding interval and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of the background PM concentration to ambient pollution was at a level comparable to that obtained in a previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background level of other air pollutants.
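Of the recursive digital filters listed, the Lyne-Hollick filter is the simplest to sketch. The snippet below applies a single forward pass to a synthetic concentration series; the filter parameter value is illustrative, whereas the paper calibrates its filters against background-site data from the APEC emission-reduction period.

```python
# Sketch of the Lyne-Hollick recursive digital filter used above as one of the
# baseline-separation techniques, here applied to a synthetic concentration series.
# The parameter alpha and the single-pass scheme are illustrative simplifications.
import numpy as np

def lyne_hollick_baseline(series, alpha=0.925, passes=1):
    """Return the slow ('background') component of a time series."""
    y = np.asarray(series, dtype=float)
    for _ in range(passes):
        quick = np.zeros_like(y)
        for t in range(1, len(y)):
            quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (y[t] - y[t - 1])
            quick[t] = max(quick[t], 0.0)          # quick component cannot be negative
        y = np.clip(y - quick, 0.0, y)             # baseline bounded by the signal
    return y

pm25 = 30 + 10 * np.abs(np.sin(np.linspace(0, 20, 365))) + np.random.rand(365) * 20
baseline = lyne_hollick_baseline(pm25)
print(float(baseline.mean()))
```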
The Baldwin-Lomax model for separated and wake flows using the entropy envelope concept
NASA Technical Reports Server (NTRS)
Brock, J. S.; Ng, W. F.
1992-01-01
Implementation of the Baldwin-Lomax algebraic turbulence model is difficult and ambiguous within flows characterized by strong viscous-inviscid interactions and flow separations. A new method of implementation is proposed which uses an entropy envelope concept and is demonstrated to ensure the proper evaluation of modeling parameters. The method is simple, computationally fast, and applicable to both wake and boundary layer flows. The method is general, making it applicable to any turbulence model which requires the automated determination of the proper maxima of a vorticity-based function. The new method is evaluated within two test cases involving strong viscous-inviscid interaction.
Blind identification of the kinetic parameters in three-compartment models
NASA Astrophysics Data System (ADS)
Riabkov, Dmitri Y.; Di Bella, Edward V. R.
2004-03-01
Quantified knowledge of tissue kinetic parameters in the regions of the brain and other organs can offer information useful in clinical and research applications. Dynamic medical imaging with injection of a radioactive or paramagnetic tracer can be used for this measurement. The kinetics of some widely used tracers such as [18F]2-fluoro-2-deoxy-D-glucose can be described by a three-compartment physiological model. The kinetic parameters of the tissue can be estimated from dynamically acquired images. Feasibility of estimation by blind identification, which does not require knowledge of the blood input, is considered analytically and numerically in this work for the three-compartment type of tissue response. The non-uniqueness of the two-region case for blind identification of kinetic parameters in the three-compartment model is shown; at least three regions are needed for the blind identification to be unique. Numerical results for the accuracy of these blind identification methods under different conditions were examined. Both a separable variables least-squares (SLS) approach and an eigenvector-based multichannel blind deconvolution approach were used. The latter showed poor accuracy. Modifications for non-uniform time sampling were also developed. In addition, another method, which uses a model for the blood input, was compared. Results for the macroparameter K, which reflects the metabolic rate of glucose usage, using three regions with noise showed comparable accuracy for the separable variables least-squares method and for the input model-based method, and slightly worse accuracy for SLS with the non-uniform sampling modification.
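The separable variables least-squares idea can be illustrated independently of the blind, multi-region setting: for fixed nonlinear rate constants the model is linear in the amplitudes, which are eliminated by a linear solve, leaving a low-dimensional nonlinear search. The sketch below fits a generic two-exponential curve and is only a stand-in for the SLS machinery used in the paper.

```python
# Sketch of separable (variable-projection) least squares: amplitudes are solved
# linearly for each trial set of nonlinear decay rates, so the outer optimizer only
# searches over the rates. Generic two-exponential example, not the blind algorithm.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 60, 61)
rng = np.random.default_rng(2)
y = 3.0 * np.exp(-0.05 * t) + 1.0 * np.exp(-0.6 * t) + 0.05 * rng.normal(size=t.size)

def residual_norm(theta):
    """theta: nonlinear decay rates; amplitudes solved linearly for each theta."""
    basis = np.exp(-np.outer(t, np.abs(theta)))        # (n_samples, n_rates)
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)   # linear sub-problem
    return np.sum((y - basis @ amps) ** 2)

fit = minimize(residual_norm, x0=[0.1, 1.0], method="Nelder-Mead")
print(np.abs(fit.x))   # recovered decay rates
```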
Least-Squares Self-Calibration of Imaging Array Data
NASA Technical Reports Server (NTRS)
Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.
2004-01-01
When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
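A compact sketch of the self-calibration idea, assuming a 1D sky, integer dithers, and a simple alternating linear solver in place of the full simultaneous least-squares solution with formal error estimates.

```python
# Sketch: dithered frames of the same sky are modeled as data = gain * sky + offset
# per detector pixel, and sky values plus per-pixel gains/offsets are recovered by
# alternating linear fits. Geometry and solver are deliberate simplifications.
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_frames, n_sky = 8, 6, 16
true_sky = rng.uniform(1, 5, n_sky)
gain = rng.uniform(0.8, 1.2, n_pix)
offset = rng.normal(0, 0.1, n_pix)
dithers = rng.integers(0, n_sky - n_pix, n_frames)

data = np.array([gain * true_sky[d:d + n_pix] + offset for d in dithers])
data += 0.01 * rng.normal(size=data.shape)

sky = np.ones(n_sky)
g, o = np.ones(n_pix), np.zeros(n_pix)
for _ in range(50):
    # Solve per-pixel gain and offset given the current sky estimate.
    for p in range(n_pix):
        A = np.column_stack([sky[dithers + p], np.ones(n_frames)])
        g[p], o[p] = np.linalg.lstsq(A, data[:, p], rcond=None)[0]
    # Update each sky sample from all frames that observed it, given gains/offsets.
    num, den = np.zeros(n_sky), np.zeros(n_sky)
    for f, d in enumerate(dithers):
        num[d:d + n_pix] += g * (data[f] - o)
        den[d:d + n_pix] += g ** 2
    sky = np.where(den > 0, num / np.maximum(den, 1e-12), sky)

print(np.corrcoef(sky, true_sky)[0, 1])  # recovered sky tracks the true sky
```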
Battat, James B R; Chandler, John F; Stubbs, Christopher W
2007-12-14
We present constraints on violations of Lorentz invariance based on archival lunar laser-ranging (LLR) data. LLR measures the Earth-Moon separation by timing the round-trip travel of light between the two bodies and is currently accurate to the equivalent of a few centimeters (parts in 10^11 of the total distance). By analyzing this LLR data under the standard-model extension (SME) framework, we derived six observational constraints on dimensionless SME parameters that describe potential Lorentz violation. We found no evidence for Lorentz violation at the 10^-6 to 10^-11 level in these parameters. This work constitutes the first LLR constraints on SME parameters.
Hey, Jody; Nielsen, Rasmus
2004-01-01
The genetic study of diverging, closely related populations is required for basic questions on demography and speciation, as well as for biodiversity and conservation research. However, it is often unclear whether divergence is due simply to separation or whether populations have also experienced gene flow. These questions can be addressed with a full model of population separation with gene flow, by applying a Markov chain Monte Carlo method for estimating the posterior probability distribution of model parameters. We have generalized this method and made it applicable to data from multiple unlinked loci. These loci can vary in their modes of inheritance, and inheritance scalars can be implemented either as constants or as parameters to be estimated. By treating inheritance scalars as parameters it is also possible to address variation among loci in the impact via linkage of recurrent selective sweeps or background selection. These methods are applied to a large multilocus data set from Drosophila pseudoobscura and D. persimilis. The species are estimated to have diverged approximately 500,000 years ago. Several loci have nonzero estimates of gene flow since the initial separation of the species, with considerable variation in gene flow estimates among loci, in both directions between the species. PMID:15238526
Evaluating vortex generator jet experiments for turbulent flow separation control
NASA Astrophysics Data System (ADS)
von Stillfried, F.; Kékesi, T.; Wallin, S.; Johansson, A. V.
2011-12-01
Separating turbulent boundary layers can be energized by streamwise vortices from vortex generators (VGs) that increase the near-wall momentum as well as the overall mixing of the flow, so that flow separation can be delayed or even prevented. In general, two different types of VGs exist: passive vane VGs (VVGs) and active VG jets (VGJs). Even though VGs are already successfully used in engineering applications, it is still time-consuming and computationally expensive to include them in a numerical analysis. Fully resolving VGs in a computational mesh leads to a very high number of grid points and thus high computational costs. In addition, computational parameter studies for such flow control devices take much time to set up. Therefore, much of the research work is still carried out experimentally. KTH Stockholm is developing a novel VGJ model that makes it possible to include only the physical influence of the VGJs, in terms of the additional stresses they generate, without the need to locally refine the computational mesh. Such a modelling strategy enables fast VGJ parameter variations and makes optimization studies easily possible. To this end, VGJ experiments are evaluated in this contribution and the results are used for developing a statistical VGJ model.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
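The BIC comparison described above reduces to a one-line criterion once the maximized log-likelihoods are available. The values below are placeholders standing in for maximum-likelihood fits of competing Markov-modulated Poisson process models.

```python
# Minimal sketch of BIC-based model selection. Log-likelihoods and parameter counts
# are placeholders for the maximum-likelihood fits of the competing kinetic models.
import numpy as np

def bic(log_likelihood, n_params, n_photons):
    """Bayesian information criterion: lower is better."""
    return n_params * np.log(n_photons) - 2.0 * log_likelihood

n_photons = 2000
candidates = {"2-state": (-5120.0, 4), "3-state": (-5085.0, 9), "4-state": (-5082.0, 16)}
scores = {name: bic(ll, k, n_photons) for name, (ll, k) in candidates.items()}
print(min(scores, key=scores.get))  # model preferred by BIC
```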
Flow separation in a computational oscillating vocal fold model
NASA Astrophysics Data System (ADS)
Alipour, Fariborz; Scherer, Ronald C.
2004-09-01
A finite-volume computational model that solves the time-dependent glottal airflow within a forced-oscillation model of the glottis was employed to study glottal flow separation. Tracheal input velocity was independently controlled with a sinusoidally varying parabolic velocity profile. Control parameters included flow rate (Reynolds number), oscillation frequency and amplitude of the vocal folds, and the phase difference between the superior and inferior glottal margins. Results for static divergent glottal shapes suggest that velocity increase caused glottal separation to move downstream, but reduction in velocity increase and velocity decrease moved the separation upstream. At the fixed frequency, an increase of amplitude of the glottal walls moved the separation further downstream during glottal closing. Increase of Reynolds number caused the flow separation to move upstream in the glottis. The flow separation cross-sectional ratio ranged from approximately 1.1 to 1.9 (average of 1.47) for the divergent shapes. Results suggest that there may be a strong interaction of rate of change of airflow, inertia, and wall movement. Flow separation appeared to be ``delayed'' during the vibratory cycle, leading to movement of the separation point upstream of the glottal end only after a significant divergent angle was reached, and to persist upstream into the convergent phase of the cycle.
Hanke, Alexander T; Tsintavi, Eleni; Ramirez Vazquez, Maria Del Pilar; van der Wielen, Luuk A M; Verhaert, Peter D E M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel
2016-09-01
Knowledge-based development of chromatographic separation processes requires efficient techniques to determine the physicochemical properties of the product and the impurities to be removed. These characterization techniques are usually divided into approaches that determine molecular properties, such as charge, hydrophobicity and size, or molecular interactions with auxiliary materials, commonly in the form of adsorption isotherms. In this study we demonstrate the application of a three-dimensional liquid chromatography approach to a clarified cell homogenate containing a therapeutic enzyme. Each separation dimension determines a molecular property relevant to the chromatographic behavior of each component. Matching of the peaks across the different separation dimensions and against a high-resolution reference chromatogram makes it possible to assign the determined parameters to pseudo-components and to identify the most promising technique for the removal of each impurity. More detailed process design using mechanistic models requires isotherm parameters. For this purpose, the second dimension consists of multiple linear gradient separations on columns in a high-throughput-screening-compatible format, which allows regression of isotherm parameters with an average standard error of 8%. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1283-1291, 2016.
Collisional considerations in axial-collection plasma mass filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ochs, I. E.; Gueroult, R.; Fisch, N. J.
The chemical inhomogeneity of nuclear waste makes chemical separations difficult, while the correlation between radioactivity and nuclear mass makes mass-based separation, and in particular plasma-based separation, an attractive alternative. Here, we examine a particular class of plasma mass filters, namely filters in which (a) species of different masses are collected along magnetic field lines at opposite ends of an open-field-line plasma device and (b) gyro-drift effects are important for the separation process. Using an idealized cylindrical model, we derive a set of dimensionless parameters which provide minimum necessary conditions for an effective mass filter function in the presence of ion-ion and ion-neutral collisions. Through simulations of constant-density-profile, turbulence-free devices, we find that these parameters accurately describe the mass filter performance in more general magnetic geometries. We then use these parameters to study the design and upgrade of current experiments, as well as to derive general scalings for the throughput of production mass filters. Most importantly, we find that ion temperatures above 3 eV and magnetic fields above 10^4 G are critical to ensure a feasible mass filter function when operating at an ion density of 10^13 cm^-3.
Collisional considerations in axial-collection plasma mass filters
Ochs, I. E.; Gueroult, R.; Fisch, N. J.; ...
2017-04-01
The chemical inhomogeneity of nuclear waste makes chemical separations difficult, while the correlation between radioactivity and nuclear mass makes mass-based separation, and in particular plasma-based separation, an attractive alternative. Here, we examine a particular class of plasma mass filters, namely filters in which (a) species of different masses are collected along magnetic field lines at opposite ends of an open-field-line plasma device and (b) gyro-drift effects are important for the separation process. Using an idealized cylindrical model, we derive a set of dimensionless parameters which provide minimum necessary conditions for an effective mass filter function in the presence of ion-ion and ion-neutral collisions. Through simulations of constant-density-profile, turbulence-free devices, we find that these parameters accurately describe the mass filter performance in more general magnetic geometries. We then use these parameters to study the design and upgrade of current experiments, as well as to derive general scalings for the throughput of production mass filters. Most importantly, we find that ion temperatures above 3 eV and magnetic fields above 10^4 G are critical to ensure a feasible mass filter function when operating at an ion density of 10^13 cm^-3.
Stability of model-based event-triggered control systems: a separation property
NASA Astrophysics Data System (ADS)
Hao, Fei; Yu, Hao
2017-04-01
To save communication resources, this paper investigates model-based event-triggered control systems. Two main problems are considered. One is, for a given plant and model, to design event conditions that guarantee the stability of the system. The other is to consider the effect of the model matrices on the stability. The results show that the closed-loop systems can be asymptotically stabilised with any model matrices in compact sets if the parameters in the event conditions are within the designed ranges. Then, a separation property of model-based event-triggered control is proposed. Namely, the design of the controller gain and the event condition can be separated from the selection of the model matrices. Based on this property, an adaptation mechanism is introduced to the model-based event-triggered control systems, which can further improve the sampling performance. Finally, a numerical example is given to show the efficiency and feasibility of the developed results.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
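To make the separation of noise sources concrete, the sketch below simulates a stochastic one-compartment model with first-order input by Euler-Maruyama, adding system noise to the state and measurement noise to the observations. Rates, noise levels, and the linear elimination term are illustrative only.

```python
# Sketch of a stochastic one-compartment model: system (diffusion) noise enters the
# state itself, representing model uncertainty, while measurement noise is added to
# the observations. All parameter values and the linear elimination are illustrative.
import numpy as np

rng = np.random.default_rng(4)
ka, ke, sigma_sys, sigma_meas = 1.0, 0.3, 0.05, 0.1   # hypothetical rates / noise levels
dt, n_steps, dose = 0.05, 200, 10.0

A = dose          # amount at the absorption site
C = 0.0           # central-compartment concentration
conc = np.zeros(n_steps)
for i in range(n_steps):
    A += -ka * A * dt
    # Euler-Maruyama step: drift plus diffusion (system noise) on the state
    C += (ka * A - ke * C) * dt + sigma_sys * np.sqrt(dt) * rng.normal()
    conc[i] = C

observations = conc + sigma_meas * rng.normal(size=n_steps)   # measurement error
print(observations[:5])
```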
Model of Energy Spectrum Parameters of Ground Level Enhancement Events in Solar Cycle 23
NASA Astrophysics Data System (ADS)
Wu, S.-S.; Qin, G.
2018-01-01
Mewaldt et al. (2012) fitted the observations of the ground level enhancement (GLE) events during solar cycle 23 to a double power law equation to obtain four spectral parameters: the normalization constant C, the low-energy power-law slope γ1, the high-energy power-law slope γ2, and the break energy E0. Of the 16 GLEs in that cycle, we select 13 for study, excluding events with complicated circumstances. We analyze the four parameters in relation to the conditions of the corresponding solar events. According to these conditions, we divide the GLEs into two groups, one with strong acceleration by interplanetary shocks and one without. By fitting the four parameters against the solar event conditions, we obtain models of the parameters for the two groups of GLEs separately. We thereby establish a model of the energy spectrum parameters of solar cycle 23 GLEs, which may be used for prediction in the future.
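For orientation, the sketch below implements a Band-type double power law in the four parameters named above; read it as an assumed illustration of how C, γ1, γ2 and E0 enter such a spectrum, not as a verbatim reproduction of the fitting function of Mewaldt et al. (2012).

```python
# Sketch of a Band-type double power law with parameters C, gamma1, gamma2, E0.
# The functional form is the one commonly used for SEP fluence spectra and is
# assumed here for illustration; the paper's exact fitting function may differ.
import numpy as np

def double_power_law(E, C, g1, g2, E0):
    E = np.asarray(E, dtype=float)
    Eb = (g2 - g1) * E0                      # energy where the two regimes join smoothly
    low = C * E ** (-g1) * np.exp(-E / E0)
    high = C * E ** (-g2) * Eb ** (g2 - g1) * np.exp(-(g2 - g1))
    return np.where(E <= Eb, low, high)

E = np.logspace(0, 3, 50)                    # MeV, illustrative grid
print(double_power_law(E, C=1e6, g1=1.2, g2=3.5, E0=30.0)[:3])
```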
Aerodynamics and Percolation: Unfolding Laminar Separation Bubble on Airfoils
NASA Astrophysics Data System (ADS)
Traphan, Dominik; Wester, Tom T. B.; Gülker, Gerd; Peinke, Joachim; Lind, Pedro G.
2018-04-01
As a fundamental phenomenon of fluid mechanics, recent studies suggested laminar-turbulent transition belonging to the universality class of directed percolation. Here, the onset of a laminar separation bubble on an airfoil is analyzed in terms of the directed percolation model using particle image velocimetry data. Our findings indicate a clear significance of percolation models in a general flow situation beyond fundamental ones. We show that our results are robust against fluctuations of the parameter, namely, the threshold of turbulence intensity, that maps velocimetry data into binary cells (turbulent or laminar). In particular, this percolation approach enables the precise determination of the transition point of the laminar separation bubble, an important problem in aerodynamics.
Effects of molecular and particle scatterings on the model parameter for remote-sensing reflectance.
Lee, ZhongPing; Carder, Kendall L; Du, KePing
2004-09-01
For optically deep waters, remote-sensing reflectance (r_rs) is traditionally expressed as the ratio of the backscattering coefficient (b_b) to the sum of the absorption and backscattering coefficients (a + b_b), multiplied by a model parameter (g, the so-called f'/Q). Parameter g is further expressed as a function of b_b/(a + b_b) (or b_b/a) to account for its variation due to multiple scattering. With such an approach, the same g value will be derived for different a and b_b values that give the same ratio. Because g is partially a measure of the angular distribution of upwelling light, and the angular distribution from molecular scattering is quite different from that of particle scattering, g values are expected to vary with different scattering distributions even if the b_b/a ratios are the same. In this study, after numerically demonstrating the effects of molecular and particle scattering on the values of g, an innovative r_rs model is developed. This new model expresses r_rs in two separate terms: one governed by the phase function of molecular scattering and one governed by the phase function of particle scattering, with a model parameter introduced for each term. In this way the phase-function effects from molecular and particle scattering are explicitly separated and accounted for. This new model provides an analytical tool to understand and quantify the phase-function effects on r_rs, and a platform to calculate the r_rs spectrum quickly and accurately, as required for remote-sensing applications.
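A sketch of the two reflectance expressions being contrasted, with g_w and g_p denoting the parameters introduced for the molecular (water) and particle terms; the exact parameterizations of g, g_w and g_p are those given in the paper and are not reproduced here.

```latex
% Sketch of the traditional single-term form and the proposed two-term form that
% separates molecular (water) and particle backscattering, each with its own parameter.
\[
r_{rs} = g\,\frac{b_b}{a+b_b},
\qquad
r_{rs} = g_w\,\frac{b_{bw}}{a+b_b} + g_p\,\frac{b_{bp}}{a+b_b}.
\]
```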
Analysis of the statistical thermodynamic model for nonlinear binary protein adsorption equilibria.
Zhou, Xiao-Peng; Su, Xue-Li; Sun, Yan
2007-01-01
The statistical thermodynamic (ST) model was used to study nonlinear binary protein adsorption equilibria on an anion exchanger. Single-component and binary protein adsorption isotherms of bovine hemoglobin (Hb) and bovine serum albumin (BSA) on DEAE Spherodex M were determined by batch adsorption experiments in 10 mM Tris-HCl buffer containing a specific NaCl concentration (0.05, 0.10, and 0.15 M) at pH 7.40. The ST model was found to depict the effect of ionic strength on the single-component equilibria well, with model parameters depending on ionic strength. Moreover, the ST model gave an acceptable fit to the binary adsorption data with the fitted single-component model parameters, leading to the estimation of the binary ST model parameter. The effects of ionic strength on the model parameters are reasonably interpreted by the electrostatic and thermodynamic theories. The effective charge of the protein in the adsorbed phase can be calculated separately from the two categories of model parameters, and the values obtained by the two methods are consistent. The results demonstrate the utility of the ST model for describing nonlinear binary protein adsorption equilibria.
VizieR Online Data Catalog: Parameters and IR excesses of Gaia DR1 stars (McDonald+, 2017)
NASA Astrophysics Data System (ADS)
McDonald, I.; Zijlstra, A. A.; Watson, R. A.
2017-08-01
Spectral energy distribution fits are presented for stars from the Tycho-Gaia Astrometric Solution (TGAS) from Gaia Data Release 1. Hipparcos-Gaia stars are presented in a separate table. Effective temperatures, bolometric luminosities, and infrared excesses are presented (alongside other parameters pertinent to the model fits), plus the source photometry used. (3 data files).
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to the modeling of gene circuits, our results suggest that more tailored approaches that exploit domain-specific information may be key to reverse engineering complex biological systems. Software is available at http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jufeng; Xia, Bing; Shang, Yunlong
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experimental results validated the feasibility of the developed estimation method.
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...
2016-12-22
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experimental results validated the feasibility of the developed estimation method.
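As an illustration of estimating dynamic-driving parameters from rest-period data, the sketch below fits a double-exponential relaxation of the terminal voltage after a current pulse is cut off. The two-RC structure, time constants and synthetic data are assumptions for illustration, not values or code from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rest_voltage(t, v_inf, dv1, tau1, dv2, tau2):
    """Voltage relaxation during a rest period for an assumed 2-RC equivalent circuit:
    the RC branch voltages decay exponentially toward the open-circuit value."""
    return v_inf - dv1 * np.exp(-t / tau1) - dv2 * np.exp(-t / tau2)

# Synthetic rest-period data (assumed values, for illustration only)
t = np.linspace(0.0, 600.0, 301)                  # seconds after the current is cut
true = rest_voltage(t, 3.65, 0.030, 20.0, 0.015, 180.0)
v_meas = true + 0.0005 * np.random.default_rng(1).standard_normal(t.size)

p0 = [3.6, 0.02, 10.0, 0.01, 100.0]               # initial guess for the fit
popt, _ = curve_fit(rest_voltage, t, v_meas, p0=p0)
v_inf, dv1, tau1, dv2, tau2 = popt
print(f"OCV={v_inf:.4f} V, tau1={tau1:.1f} s, tau2={tau2:.1f} s")
```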
Mach 10 Stage Separation Analysis for the X-43A
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Bose, David M.; Thornblom, Mark N.; Lien, J. P.; Martin, John G.
2007-01-01
This paper describes the pre-flight stage separation analysis that was conducted in support of the final flight of the X-43A. In that flight, which occurred less than eight months after the successful Mach 7 flight, the X-43A Research Vehicle attained a peak speed of Mach 9.6. Details are provided on how the lessons learned from the Mach 7 flight affected separation modeling and how adjustments were made to account for the increased flight Mach number. Also, the procedure for defining the feedback loop closure and feed-forward parameters employed in the separation control logic is described, and their effect on separation performance is explained. In addition, the range and nominal values of these parameters, which were included in the Mission Data Load, are presented. Once the updates were made, the nominal pre-flight trajectory and Monte Carlo statistical results were determined and stress tests were performed to ensure system robustness. During flight the vehicle performed within the uncertainty bounds predicted in the pre-flight analysis and ultimately set the world record for airbreathing powered flight.
Li, Jia; Lu, Hongzhou; Xu, Zhenming; Zhou, Yaohe
2008-06-15
The amount of waste printed circuit boards (PCBs) is increasing worldwide. Corona electrostatic separation (CES) is an effective and environmentally friendly way to recycle resources from waste PCBs. The aim of this paper is to analyze the main factor (rotational speed) that affects the efficiency of CES from the point of view of electrostatics and mechanics. A quantitative method for analyzing the effect of rotational speed was studied and a model for separating flat nonmetal particles in waste PCBs was established. The concepts of "charging critical rotational speed" and "detaching critical rotational speed" were introduced. Experiments with waste PCBs verified the theoretical model, and the experimental results were in good agreement with it. The results indicated that the purity and recycling percentage of the materials reached a good level when the rotational speed was about 70 rpm, and that the critical rotational speed of small particles was higher than that of big particles. The model can guide the choice of operating parameters and the design of CES, which are needed for the development of any new application of the electrostatic separation method.
Effect of atomic disorder on the magnetic phase separation.
Groshev, A G; Arzhnikov, A K
2018-05-10
The effect of disorder on the magnetic phase separation between the antiferromagnetic and incommensurate helical [Formula: see text] and [Formula: see text] phases is investigated. The study is based on the quasi-two-dimensional single-band Hubbard model in the presence of atomic disorder (the [Formula: see text] Anderson-Hubbard model). A model of binary alloy disorder is considered, in which the disorder is determined by the difference in energy between the host and impurity atomic levels at a fixed impurity concentration. The problem is solved within the theory of functional integration in static approximation. Magnetic phase diagrams are obtained as functions of the temperature, the number of electrons and impurity concentration with allowance for phase separation. It is shown that for the model parameters chosen, the disorder caused by impurities whose atomic-level energy is greater than that of the host atomic levels, leads to qualitative changes in the phase diagram of the impurity-free system. In the opposite case, only quantitative changes occur. The peculiarities of the effect of disorder on the phase separation regions of the quasi-two-dimensional Hubbard model are discussed.
Effect of atomic disorder on the magnetic phase separation
NASA Astrophysics Data System (ADS)
Groshev, A. G.; Arzhnikov, A. K.
2018-05-01
The effect of disorder on the magnetic phase separation between the antiferromagnetic and incommensurate helical phases is investigated. The study is based on the quasi-two-dimensional single-band Hubbard model in the presence of atomic disorder (the Anderson–Hubbard model). A model of binary alloy disorder is considered, in which the disorder is determined by the difference in energy between the host and impurity atomic levels at a fixed impurity concentration. The problem is solved within the theory of functional integration in the static approximation. Magnetic phase diagrams are obtained as functions of the temperature, the number of electrons and the impurity concentration with allowance for phase separation. It is shown that for the model parameters chosen, the disorder caused by impurities whose atomic-level energy is greater than that of the host atomic levels leads to qualitative changes in the phase diagram of the impurity-free system. In the opposite case, only quantitative changes occur. The peculiarities of the effect of disorder on the phase separation regions of the quasi-two-dimensional Hubbard model are discussed.
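For context, the single-band Anderson-Hubbard Hamiltonian underlying both versions of this record can be sketched in standard notation (assumed here rather than quoted from the paper):

\[
H \;=\; -\,t\sum_{\langle ij\rangle,\sigma}\bigl(c^{\dagger}_{i\sigma}c_{j\sigma}+\mathrm{h.c.}\bigr)
\;+\;U\sum_i n_{i\uparrow}n_{i\downarrow}
\;+\;\sum_{i,\sigma}\varepsilon_i\,n_{i\sigma},
\]

where the site energies ε_i take the host or impurity value in the binary-alloy disorder model, t is the hopping amplitude and U the on-site Coulomb repulsion.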
Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2018-08-10
An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of the 3W-ISMB to the non-linear adsorption region, has been introduced for the separation of a glucose, lactic acid and acetic acid ternary mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. Firstly, the real 3F-ISMB model, coupled with the transport dispersive model (TDM) and the Modified-Langmuir isotherm, was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit. The experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performance were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5% and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions. Copyright © 2018 Elsevier B.V. All rights reserved.
Thermal modeling of a pressurized air cavity receiver for solar dish Stirling system
NASA Astrophysics Data System (ADS)
Zou, Chongzhe; Zhang, Yanping; Falcoz, Quentin; Neveu, Pierre; Li, Jianlan; Zhang, Cheng
2017-06-01
A solar cavity receiver model for the dish collector system is designed in response to the growing demand for renewable energy. In the present research field, no investigations into the geometric parameters of a cavity receiver have been performed. The cylindrical receiver in this study is composed of an enclosed bottom at the back, an aperture at the front, a helical pipe inside the cavity and an insulation layer on the external surface of the cavity. The influence of two critical receiver parameters on the thermal efficiency is analyzed in this paper: the cavity inner diameter and the cavity length. The thermal model is solved considering the cavity dimensions as variables. Implementing the model in EES, the influence of each parameter is investigated separately, and a preliminary optimization method is proposed.
Darnaude, Audrey M.
2016-01-01
Background. Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, the mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods. We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five sampling scenarios, in which 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, in which the simulated data were separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results. Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased, and more uncertain, than the mixing proportion estimates across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates, but led to non-linear responses in the baseline signature parameters. Low uncertainty, but a consistent underestimation bias, affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion. ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. The method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models, within and beyond the ML framework. Large differences in bias and uncertainty found among cohorts were linked to differences in the separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate the sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
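In the fully sampled scenario described above, only the mixing proportions are unknown, and their maximum likelihood estimate has a simple EM form. The sketch below assumes Gaussian baseline signatures with known means and covariances; the data, dimensions and numbers are placeholders, not the study's ML-MM implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_mixing_proportions(x, means, covs, n_iter=200):
    """EM updates for the mixing proportions of a mixture whose component
    (baseline nursery-signature) densities are fixed and known.

    x     : (n_fish, n_elements) otolith signatures of the mixed stock
    means : list of baseline mean vectors, one per nursery source
    covs  : list of baseline covariance matrices, one per nursery source
    """
    k = len(means)
    pi = np.full(k, 1.0 / k)                       # start from equal proportions
    dens = np.column_stack([multivariate_normal(m, c).pdf(x)
                            for m, c in zip(means, covs)])
    for _ in range(n_iter):
        resp = dens * pi                           # unnormalized responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)                     # M-step: average responsibility
    return pi

# Tiny synthetic example with two well-separated sources
rng = np.random.default_rng(2)
x = np.vstack([rng.normal([0, 0], 1.0, (70, 2)),   # 70 fish from source A
               rng.normal([4, 4], 1.0, (30, 2))])  # 30 fish from source B
pi_hat = em_mixing_proportions(x, [np.zeros(2), np.full(2, 4.0)],
                               [np.eye(2), np.eye(2)])
print(np.round(pi_hat, 2))   # expected to be close to [0.7, 0.3]
```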
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Extended Kalman Filter framework for forecasting shoreline evolution
Long, Joseph; Plant, Nathaniel G.
2012-01-01
A shoreline change model incorporating both long- and short-term evolution is integrated into a data assimilation framework that uses sparse observations to generate an updated forecast of shoreline position and to estimate unobserved geophysical variables and model parameters. Application of the assimilation algorithm provides quantitative statistical estimates of combined model-data forecast uncertainty which is crucial for developing hazard vulnerability assessments, evaluation of prediction skill, and identifying future data collection needs. Significant attention is given to the estimation of four non-observable parameter values and separating two scales of shoreline evolution using only one observable morphological quantity (i.e. shoreline position).
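A minimal sketch of the state-augmentation idea described above: the shoreline position is forecast by a simple model whose uncertain trend parameter is appended to the state vector, so that sparse position observations update both. The one-line "model" and all numbers are placeholders, not the authors' shoreline change model (which is linear here, so the EKF Jacobians are exact).

```python
import numpy as np

# Augmented state: [shoreline position y, unobserved trend parameter b]
# Toy dynamics: y_{k+1} = y_k + b_k * dt
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])           # state-transition Jacobian
H = np.array([[1.0, 0.0]])           # only shoreline position is observed
Q = np.diag([0.05, 1e-4])            # process noise (assumed)
R = np.array([[1.0]])                # observation noise (assumed)

x = np.array([0.0, 0.0])             # initial state estimate
P = np.diag([4.0, 1.0])              # initial uncertainty

def ekf_step(x, P, z):
    # Forecast step
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update step with a sparse shoreline observation z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [0.8, 1.9, 3.1, 3.9]:       # synthetic shoreline observations
    x, P = ekf_step(x, P, np.array([z]))
print("estimated position and trend:", np.round(x, 2))
```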
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, according to their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
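A sketch of the classification step described above: basin-level parameter-sensitivity vectors are reduced with PCA and clustered with an EM-based Gaussian mixture. The array shapes, the random placeholder data and the number of components are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# rows = basins, columns = sensitivity indices of CLM hydrologic parameters
rng = np.random.default_rng(3)
sensitivities = rng.random((431, 12))          # placeholder for the MOPEX-derived matrix

scores = PCA(n_components=3).fit_transform(sensitivities)   # compress sensitivity patterns
gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
s_class = gmm.predict(scores)                  # sensitivity-based class (S-Class) per basin

print(np.bincount(s_class))                    # number of basins per class
```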
A seasonal Bartlett-Lewis Rectangular Pulse model
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter
2016-04-01
Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series are used instead. Poisson-cluster models are commonly applied to generate these series, and it has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. As certain model parameters have been shown to vary with season in a midlatitude moderate climate, due to different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of the parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
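The seasonal extension can be written compactly: each BLRPM parameter θ is treated not as twelve monthly constants but as a low-order harmonic of the day of year d. A first-order harmonic is shown here for illustration; the order actually used is the authors' choice.

\[
\theta(d) \;=\; \theta_0 \;+\; a_1\cos\!\left(\frac{2\pi d}{365.25}\right) \;+\; b_1\sin\!\left(\frac{2\pi d}{365.25}\right),
\]

so the whole year is fitted simultaneously with three coefficients per parameter instead of separate seasonal estimates.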
NASA Technical Reports Server (NTRS)
Michelassi, V.; Durbin, P. A.; Mansour, N. N.
1996-01-01
A four-equation model of turbulence is applied to the numerical simulation of flows with massive separation induced by a sudden expansion. The model constants are a function of the flow parameters, and two different formulations for these functions are tested. The results are compared with experimental data for a high Reynolds-number case and with experimental and DNS data for a low Reynolds-number case. The computations prove that the recovery region downstream of the massive separation is properly modeled only for the high Re case. The problems in the low Re case stem from the gradient diffusion hypothesis, which underestimates the turbulent diffusion.
NASA Astrophysics Data System (ADS)
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties, which provides critical information for decision makers in water resources management. This study aims to evaluate a data assimilation system for the Guantao groundwater flow model, coupled with a one-dimensional soil column simulation (Hydrus 1D), using an Unbiased Ensemble Square Root Filter (UnEnSRF), derived from the Ensemble Kalman Filter (EnKF), to update parameters and states separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. To keep the ensemble filter working well during the data assimilation, two factors are introduced in this study. One, called the damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, called the inflation factor, relaxes the posterior ensemble perturbations toward the prior ones to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate the further analysis. This study demonstrated that assimilating both model parameters and states gives a smaller model prediction error but larger uncertainty, while assimilating only model states provides a smaller predictive uncertainty but a larger model prediction error. Data assimilation in a groundwater flow model improves model predictions and at the same time makes the model converge toward the true parameters, providing a sound basis for applications in real-time modelling or real-time control strategies in groundwater resources management.
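To make the role of the two factors concrete, the sketch below applies a damping factor to the update of the parameter-ensemble mean and a relaxation (inflation) factor that pulls the posterior perturbations back toward the prior ones. This is a plausible reading of the abstract, not the authors' exact implementation; all variable names and values are assumed.

```python
import numpy as np

def damped_inflated_update(ens_prior, increment_mean, pert_post,
                           alpha=0.5, rho=0.8):
    """Combine a damped mean update with relaxation of the ensemble spread.

    ens_prior      : (n_members, n_par) prior parameter ensemble
    increment_mean : (n_par,) raw analysis increment of the ensemble mean
    pert_post      : (n_members, n_par) raw posterior perturbations
    alpha          : damping factor, 0 < alpha <= 1 (damps the mean update)
    rho            : inflation/relaxation factor, 0 <= rho <= 1
                     (rho=1 keeps the prior spread, rho=0 keeps the raw posterior spread)
    """
    mean_prior = ens_prior.mean(axis=0)
    pert_prior = ens_prior - mean_prior

    mean_post = mean_prior + alpha * increment_mean          # damped mean update
    pert_mixed = rho * pert_prior + (1.0 - rho) * pert_post  # relax spread toward prior
    return mean_post + pert_mixed

# Toy usage with assumed numbers
rng = np.random.default_rng(4)
ens = rng.normal(1.0, 0.2, size=(50, 3))          # 50 members, 3 parameters
ens_new = damped_inflated_update(ens,
                                 increment_mean=np.array([0.3, -0.1, 0.05]),
                                 pert_post=0.5 * (ens - ens.mean(axis=0)))
print(ens_new.mean(axis=0), ens_new.std(axis=0))
```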
3D CFD simulation of Multi-phase flow separators
NASA Astrophysics Data System (ADS)
Zhu, Zhiying
2017-10-01
During the exploitation of natural gas, some water and sand are entrained. It is better to separate the water and sand from the natural gas to ensure favourable transportation and storage. In this study, we use CFD to analyse the performance of a multi-phase flow separator whose detailed geometrical parameters are designed in advance. The VOF model and DPM are used here. From the CFD results, we can conclude that the separation of the multi-phase flow achieves good results: no solids or water are carried out through the gas outlet. CFD simulation provides an economical and efficient approach to shed more light on the details of the flow behaviour.
Note: A calibration method to determine the lumped-circuit parameters of a magnetic probe.
Li, Fuming; Chen, Zhipeng; Zhu, Lizhi; Liu, Hai; Wang, Zhijiang; Zhuang, Ge
2016-06-01
This paper describes a novel method to determine the lumped-circuit parameters of a magnetic inductive probe for calibration, using Helmholtz coils with a high-frequency power supply (frequency range: 10 kHz-400 kHz). The whole calibration circuit system can be separated into two parts: a "generator" circuit and a "receiver" circuit. By applying the Fourier transform, two analytical lumped-circuit models, one for each of these separated circuits, are constructed to obtain the transfer function between them. The precise lumped-circuit parameters (including the resistance, inductance, and capacitance) of the magnetic probe can then be determined by fitting the experimental data to the transfer function. With the fitting results, the finite impedance of the magnetic probe can be used to analyze the transmission of a high-frequency signal between magnetic probes, cables, and the acquisition system.
Nowak, Przemyslaw; Dobbins, Allan C.; Gawne, Timothy J.; Grzywacz, Norberto M.
2011-01-01
The ganglion cell output of the retina constitutes a bottleneck in sensory processing in that ganglion cells must encode multiple stimulus parameters in their responses. Here we investigate encoding strategies of On-Off directionally selective retinal ganglion cells (On-Off DS RGCs) in rabbits, a class of cells dedicated to representing motion. The exquisite axial discrimination of these cells to preferred vs. null direction motion is well documented: it is invariant with respect to speed, contrast, spatial configuration, spatial frequency, and motion extent. However, these cells have broad direction tuning curves and their responses also vary as a function of other parameters such as speed and contrast. In this study, we examined whether the variation in responses across multiple stimulus parameters is systematic, that is the same for all cells, and separable, such that the response to a stimulus is a product of the effects of each stimulus parameter alone. We extracellularly recorded single On-Off DS RGCs in a superfused eyecup preparation while stimulating them with moving bars. We found that spike count responses of these cells scaled as independent functions of direction, speed, and luminance. Moreover, the speed and luminance functions were common across the whole sample of cells. Based on these findings, we developed a model that accurately predicted responses of On-Off DS RGCs as products of separable functions of direction, speed, and luminance (r = 0.98; P < 0.0001). Such a multiplicatively separable encoding strategy may simplify the decoding of these cells' outputs by the higher visual centers. PMID:21325684
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of the binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition in which one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to become non-convergent, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in the binary probit regression model for the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are performed using a simulation method under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. For larger sample sizes, the probability decreases and is essentially identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
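A minimal sketch of Firth's penalization for a probit model: the log-likelihood is augmented with half the log-determinant of the Fisher information, which keeps the estimates finite under separation. The implementation below, via scipy, is illustrative and is not the code used in the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def firth_probit(X, y):
    """Maximize the Firth-penalized probit log-likelihood:
    l*(beta) = l(beta) + 0.5 * log det I(beta), with I(beta) = X' W X."""
    def neg_penalized_loglik(beta):
        eta = X @ beta
        p = np.clip(norm.cdf(eta), 1e-10, 1 - 1e-10)
        phi = norm.pdf(eta)
        ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        w = phi**2 / (p * (1 - p))                   # probit Fisher weights
        _, logdet = np.linalg.slogdet(X.T @ (w[:, None] * X))
        return -(ll + 0.5 * logdet)
    res = minimize(neg_penalized_loglik, np.zeros(X.shape[1]), method="BFGS")
    return res.x

# Quasi-separated toy data: the ordinary MLE diverges, the Firth estimate stays finite
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
X = np.column_stack([np.ones_like(x), x])
y = (x > 0).astype(float)                            # perfectly separated response
print(np.round(firth_probit(X, y), 3))
```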
Modeling of ion transport through a porous separator in vanadium redox flow batteries
NASA Astrophysics Data System (ADS)
Zhou, X. L.; Zhao, T. S.; An, L.; Zeng, Y. K.; Wei, L.
2016-09-01
In this work, we develop a two-dimensional, transient model to investigate the mechanisms of ion transport through a porous separator in VRFBs and their effects on battery performance. Commercially available separators with pore sizes of around 45 nm are particularly investigated, and the effects of key separator design parameters and operation modes are explored. We reveal that: i) the transport mechanism of vanadium-ion crossover through available separators is dominated by convection; ii) reducing the pore size below 15 nm effectively minimizes the convection-driven vanadium-ion crossover, while further reduction in migration- and diffusion-driven vanadium-ion crossover can be achieved only when the pore size is reduced to a level close to the sizes of the vanadium ions; and iii) operation modes that can affect the pressure at the separator/electrode interface, such as the electrolyte flow rate, exert a significant influence on the vanadium-ion crossover rate through the available separators, indicating that it is critically important to equalize the pressure on each half-cell of a power pack in practical applications.
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, the Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back to time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or to the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and to identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
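A sketch of the magnitude/sequence separation idea behind MPESA: comparing simulated to observed series directly mixes both error types, while comparing the sorted (duration-curve-like) series isolates magnitude errors, so the remaining difference can be attributed to sequence (timing) errors. The metric shown is the standard Nash-Sutcliffe efficiency; the decomposition below is an illustrative reading, not the tool's exact algorithm, and the data are invented.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 7.0, 4.0, 2.0, 1.5])
sim = np.array([1.2, 2.5, 6.0, 5.0, 2.5, 1.0])

nse_total = nse(obs, sim)                        # combined magnitude + sequence errors
nse_magnitude = nse(np.sort(obs), np.sort(sim))  # sorted series: magnitude-only comparison

print(f"NSE (time series)       = {nse_total:.3f}")
print(f"NSE (sorted magnitudes) = {nse_magnitude:.3f}")
# A much higher sorted-series NSE suggests the model gets the distribution of
# flows roughly right but mistimes them, i.e. sequence error dominates.
```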
Accounting for Ecohydrologic Separation Alters Interpreted Catchment Hydrology
NASA Astrophysics Data System (ADS)
Cain, M. R.; Ward, A. S.; Hrachowitz, M.
2017-12-01
Recent studies have demonstrated that in some catchments, compartmentalized pools of water supply either plant transpiration (poorly mobile water) or streamflow and groundwater (highly mobile water), a phenomenon referred to as ecohydrologic separation. Although the literature has acknowledged that omission of ecohydrologic separation in hydrological models may influence estimates of residence times of water and solutes, no study has investigated how and when this compartmentalization might alter interpretations of fluxes and storages within a catchment. In this study, we develop two hydrochemical lumped rainfall-runoff models, one which incorporates ecohydrologic separation and one which does not, for a watershed at the H.J. Andrews Experimental Forest (Oregon, USA), the study site where ecohydrologic separation was first observed. The models are calibrated against stream discharge as well as stream chloride concentration. The objectives of this study are (1) to compare calibrated parameters and identifiability across models, (2) to determine how and when compartmentalization of water in the vadose zone might alter interpretations of fluxes and stores within the catchment, and (3) to identify how and when these changes alter residence times. Preliminary results suggest that compartmentalization of the vadose zone alters interpretations of fluxes and storages in the catchment and improves our ability to simulate solute transport.
Impact parameter determination in experimental analysis using a neural network
NASA Astrophysics Data System (ADS)
Haddad, F.; Hagel, K.; Li, J.; Mdeiwayeh, N.; Natowitz, J. B.; Wada, R.; Xiao, B.; David, C.; Freslier, M.; Aichelin, J.
1997-03-01
A neural network is used to determine the impact parameter in 40Ca+40Ca reactions. The effect of the detection efficiency as well as the model dependence of the training procedure has been studied carefully. An overall improvement of the impact parameter determination of 25% is obtained using this technique. The analysis of Amphora 40Ca+40Ca data at 35 MeV per nucleon using a neural network shows two well-separated classes of events among the selected "complete" events.
Certainty Equivalence M-MRAC for Systems with Unmatched Uncertainties
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
The paper presents a certainty equivalence state feedback indirect adaptive control design method for the systems of any relative degree with unmatched uncertainties. The approach is based on the parameter identification (estimation) model, which is completely separated from the control design and is capable of producing parameter estimates as fast as the computing power allows without generating high frequency oscillations. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters.
The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.
St-Yves, Ghislain; Naselaris, Thomas
2017-06-20
We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while the "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
Using factorial experimental design to evaluate the separation of plastics by froth flotation.
Salerno, Davide; Jordão, Helga; La Marca, Floriana; Carvalho, M Teresa
2018-03-01
This paper proposes the use of factorial experimental design as a standard experimental method in the application of froth flotation to plastic separation instead of the commonly used OVAT method (manipulation of one variable at a time). Furthermore, as is common practice in minerals flotation, the parameters of the kinetic model were used as process responses rather than the recovery of plastics in the separation products. To explain and illustrate the proposed methodology, a set of 32 experimental tests was performed using mixtures of two polymers with approximately the same density, PVC and PS (with mineral charges), with particle size ranging from 2 to 4 mm. The manipulated variables were frother concentration, air flow rate and pH. A three-level full factorial design was conducted. The models establishing the relationships between the manipulated variables and their interactions with the responses (first order kinetic model parameters) were built. The Corrected Akaike Information Criterion was used to select the best fit model and an analysis of variance (ANOVA) was conducted to identify the statistically significant terms of the model. It was shown that froth flotation can be used to efficiently separate PVC from PS with mineral charges by reducing the floatability of PVC, which largely depends on the action of pH. Within the tested interval, this is the factor that most affects the flotation rate constants. The results obtained show that the pure error may be of the same magnitude as the sum of squares of the errors, suggesting that there is significant variability within the same experimental conditions. Thus, special care is needed when evaluating and generalizing the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
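The kinetic-model parameters used as responses above are those of the standard first-order flotation model, written here in its usual form (the exact variant fitted in the paper may differ):

\[
R(t) \;=\; R_{\infty}\left(1-e^{-kt}\right),
\]

where R(t) is the cumulative recovery of a polymer to the froth after flotation time t, R_∞ is the ultimate recovery, and k is the flotation rate constant; the factorial design then models R_∞ and k as functions of frother concentration, air flow rate, pH and their interactions.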
Lee, Yi Feng; Jöhnck, Matthias; Frech, Christian
2018-02-21
The efficiencies of mono gradient elution and dual salt-pH gradient elution for the separation of six mAb charge and size variants on a preparative-scale ion exchange chromatographic resin are compared in this study. Results showed that opposite dual salt-pH gradient elution, with an increasing pH gradient and a simultaneously decreasing salt gradient, is best suited for the separation of these mAb charge and size variants on Eshmuno® CPX. Besides giving high binding capacity, this type of opposite dual salt-pH gradient also provides better resolved mAb variant peaks and lower conductivity in the elution pools compared to single pH or salt gradients. To gain a mechanistic understanding of the differences in retention behavior of the mAb variants under mono pH gradient, parallel dual salt-pH gradient, and opposite dual salt-pH gradient elution, a linear gradient elution model was used. After determining the model parameters using the linear gradient elution model, 2D plots were used to show the pH and salt dependencies of the reciprocals of the distribution coefficient, the equilibrium constant, and the effective ionic capacity of the mAb variants in these gradient elution systems. Comparison of the 2D plots indicated that the advantage of the opposite dual salt-pH gradient system, with increasing pH gradient and simultaneously decreasing salt gradient, is the noncontinuous increased acceleration of protein migration. Furthermore, the fitted model parameters can be used for the prediction and optimization of mAb variant separation in dual salt-pH gradient and step elution. © 2018 American Institute of Chemical Engineers Biotechnol. Prog., 2018.
Multiphase-field model of small strain elasto-plasticity according to the mechanical jump conditions
NASA Astrophysics Data System (ADS)
Herrmann, Christoph; Schoof, Ephraim; Schneider, Daniel; Schwab, Felix; Reiter, Andreas; Selzer, Michael; Nestler, Britta
2018-04-01
We introduce a small strain elasto-plastic multiphase-field model according to the mechanical jump conditions. A rate-independent J_2-plasticity model with linear isotropic hardening and without kinematic hardening is applied as an example. Generally, any physically nonlinear mechanical model is compatible with the subsequently presented procedure. In contrast to models with interpolated material parameters, the proposed model is able to apply different nonlinear mechanical constitutive equations for each phase separately. The Hadamard compatibility condition and the static force balance are employed as homogenization approaches to calculate the phase-inherent stresses and strains. Several verification cases are discussed. The applicability of the proposed model is demonstrated by simulations of the martensitic transformation and quantitative parameters.
NASA Astrophysics Data System (ADS)
Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.
2010-07-01
Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFImax value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R² value and the Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and the Eckhardt digital filter with the default BFImax and filter parameter values do not reflect hydrological and hydrogeological situations in the LEC watershed. Thus, a BFImax GA-Analyzer module (BFImax Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFImax parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and the BFImax GA-Analyzer module of the WHAT system, the optimum BFImax value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using the optimized BFImax and filter parameter resulted in an R² value of 0.66 and a Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFImax and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%. This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimation by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
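For reference, the Eckhardt recursive digital filter whose two parameters (BFImax and the filter parameter α) are optimized above has the standard form

\[
b_t \;=\; \frac{(1-\mathrm{BFI}_{\max})\,\alpha\, b_{t-1} \;+\; (1-\alpha)\,\mathrm{BFI}_{\max}\, y_t}{1-\alpha\,\mathrm{BFI}_{\max}},
\qquad b_t \le y_t,
\]

where y_t is the total streamflow and b_t the filtered baseflow; the direct runoff compared with the L-THIA estimates is then y_t − b_t.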
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
Birdsall, Robert E.; Koshel, Brooke M.; Hua, Yimin; Ratnayaka, Saliya N.; Wirth, Mary J.
2013-01-01
Sieving of proteins in silica colloidal crystals of mm dimensions is characterized for particle diameters of nominally 350 and 500 nm, where the colloidal crystals are chemically modified with a brush layer of polyacrylamide. A model is developed that relates the reduced electrophoretic mobility to the experimentally measurable porosity. The model fits the data with no adjustable parameters for the case of silica colloidal crystals packed in capillaries, for which independent measurements of the pore radii were made from flow data. The model also fits the data for electrophoresis in a highly ordered colloidal crystal formed in a channel, where the unknown pore radius was used as a fitting parameter. Plate heights as small as 0.4 μm point to the potential for miniaturized separations. Band broadening increases as the pore radius approaches the protein radius, indicating that the main contribution to broadening is the spatial heterogeneity of the pore radius. The results quantitatively support the notion that sieving occurs for proteins in silica colloidal crystals, and facilitate design of new separations that would benefit from miniaturization. PMID:23229163
Nonlinear-regression groundwater flow modeling of a deep regional aquifer system
Cooley, Richard L.; Konikow, Leonard F.; Naff, Richard L.
1986-01-01
A nonlinear regression groundwater flow model, based on a Galerkin finite-element discretization, was used to analyze steady state two-dimensional groundwater flow in the areally extensive Madison aquifer in a 75,000 mi2 area of the Northern Great Plains. Regression parameters estimated include intrinsic permeabilities of the main aquifer and separate lineament zones, discharges from eight major springs surrounding the Black Hills, and specified heads on the model boundaries. Aquifer thickness and temperature variations were included as specified functions. The regression model was applied using sequential F testing so that the fewest number and simplest zonation of intrinsic permeabilities, combined with the simplest overall model, were evaluated initially; additional complexities (such as subdivisions of zones and variations in temperature and thickness) were added in stages to evaluate the subsequent degree of improvement in the model results. It was found that only the eight major springs, a single main aquifer intrinsic permeability, two separate lineament intrinsic permeabilities of much smaller values, and temperature variations are warranted by the observed data (hydraulic heads and prior information on some parameters) for inclusion in a model that attempts to explain significant controls on groundwater flow. Addition of thickness variations did not significantly improve model results; however, thickness variations were included in the final model because they are fairly well defined. Effects on the observed head distribution from other features, such as vertical leakage and regional variations in intrinsic permeability, apparently were overshadowed by measurement errors in the observed heads. Estimates of the parameters correspond well to estimates obtained from other independent sources.
Nonlinear-Regression Groundwater Flow Modeling of a Deep Regional Aquifer System
NASA Astrophysics Data System (ADS)
Cooley, Richard L.; Konikow, Leonard F.; Naff, Richard L.
1986-12-01
A nonlinear regression groundwater flow model, based on a Galerkin finite-element discretization, was used to analyze steady state two-dimensional groundwater flow in the areally extensive Madison aquifer in a 75,000 mi2 area of the Northern Great Plains. Regression parameters estimated include intrinsic permeabilities of the main aquifer and separate lineament zones, discharges from eight major springs surrounding the Black Hills, and specified heads on the model boundaries. Aquifer thickness and temperature variations were included as specified functions. The regression model was applied using sequential F testing so that the fewest number and simplest zonation of intrinsic permeabilities, combined with the simplest overall model, were evaluated initially; additional complexities (such as subdivisions of zones and variations in temperature and thickness) were added in stages to evaluate the subsequent degree of improvement in the model results. It was found that only the eight major springs, a single main aquifer intrinsic permeability, two separate lineament intrinsic permeabilities of much smaller values, and temperature variations are warranted by the observed data (hydraulic heads and prior information on some parameters) for inclusion in a model that attempts to explain significant controls on groundwater flow. Addition of thickness variations did not significantly improve model results; however, thickness variations were included in the final model because they are fairly well defined. Effects on the observed head distribution from other features, such as vertical leakage and regional variations in intrinsic permeability, apparently were overshadowed by measurement errors in the observed heads. Estimates of the parameters correspond well to estimates obtained from other independent sources.
Feasibility of Rapid Multitracer PET Tumor Imaging
NASA Astrophysics Data System (ADS)
Kadrmas, D. J.; Rust, T. C.
2005-10-01
Positron emission tomography (PET) can characterize different aspects of tumor physiology using various tracers. PET scans are usually performed using only one tracer since there is no explicit signal for distinguishing multiple tracers. We tested the feasibility of rapidly imaging multiple PET tracers using dynamic imaging techniques, where the signals from each tracer are separated based upon differences in tracer half-life, kinetics, and distribution. Time-activity curve populations for FDG, acetate, ATSM, and PTSM were simulated using appropriate compartment models, and noisy dual-tracer curves were computed by shifting and adding the single-tracer curves. Single-tracer components were then estimated from dual-tracer data using two methods: principal component analysis (PCA)-based fits of single-tracer components to multitracer data, and parallel multitracer compartment models estimating single-tracer rate parameters from multitracer time-activity curves. The PCA analysis found that there is information content present for separating multitracer data, and that tracer separability depends upon tracer kinetics, injection order and timing. Multitracer compartment modeling recovered rate parameters for individual tracers with good accuracy but somewhat higher statistical uncertainty than single-tracer results when the injection delay was >10 min. These approaches to processing rapid multitracer PET data may potentially provide a new tool for characterizing multiple aspects of tumor physiology in vivo.
Schneider, Bradley B.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.
2013-01-01
Differential mobility spectrometry (DMS) separates ions on the basis of the difference in their migration rates under high versus low electric fields. Several models describing the physical nature of this field mobility dependence have been proposed but emerging as a dominant effect is the clusterization model sometimes referred to as the dynamic cluster-decluster model. DMS resolution and peak capacity is strongly influenced by the addition of modifiers which results in the formation and dissociation of clusters. This process increases selectivity due to the unique chemical interactions that occur between an ion and neutral gas phase molecules. It is thus imperative to bring the parameters influencing the chemical interactions under control and find ways to exploit them in order to improve the analytical utility of the device. In this paper we describe three important areas that need consideration in order to stabilize and capitalize on the chemical processes that dominate a DMS separation. The first involves means of controlling the dynamic equilibrium of the clustering reactions with high concentrations of specific reagents. The second area involves a means to deal with the unwanted heterogeneous cluster ion populations emitted from the electrospray ionization process that degrade resolution and sensitivity. The third involves fine control of parameters that affect the fundamental collision processes, temperature and pressure. PMID:20065515
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories.
Hajdziona, Marta; Molski, Andrzej
2011-02-07
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
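The model-selection criteria referred to above take their usual forms, written here for completeness:

\[
\mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad \mathrm{BIC} = -2\ln\hat{L} + k\ln n,
\]

where \(\hat{L}\) is the maximized likelihood of the photon-arrival trajectory under a given Markov modulated Poisson model, k is the number of free kinetic parameters and n the number of observed photons; the stronger sample-size penalty of the BIC is what allows it to discriminate between the competing two-, three- and four-state models.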
Safari, Ashkan; Tukovic, Zeljko; Cardiff, Philip; Walter, Maik; Casey, Eoin; Ivankovic, Alojz
2016-02-01
A good understanding of the mechanical stability of biofilms is essential for biofouling management, particularly when mechanical forces are used. Previous biofilm studies lack a damage-based theoretical model to describe biofilm separation from a surface. The purpose of the current study was to investigate the interfacial separation of a mature biofilm from a rigid glass substrate using a combined experimental and numerical modelling approach. In the current work, the biofilm-glass interfacial separation process was investigated under tensile and shear stresses at the macroscale level, known as mode I and mode II failure mechanisms respectively. The numerical simulations were performed using a Finite Volume (FV)-based simulation package (OpenFOAM®) to predict the separation initiation using the cohesive zone model (CZM). Atomic force microscopy (AFM)-based retraction curves were used to obtain the separation properties between the biofilm and a glass colloid at the microscale level, and the CZM parameters were estimated using the Johnson-Kendall-Roberts (JKR) model. In this study the CZM is introduced as a reliable method for the investigation of interfacial separation between a biofilm and a rigid substrate, in which a high local stress at the interface edge acts as an ultimate stress at the crack tip. This study demonstrated that the total interfacial failure energy measured at the macroscale was significantly higher than the pure interfacial separation energy obtained by AFM at the microscale, indicating highly ductile deformation behaviour within the bulk biofilm matrix. The results of this study can significantly contribute to the understanding of biofilm detachment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Corzo, Gerald; Solomatine, Dimitri
2007-05-01
Natural phenomena are multistationary and are composed of a number of interacting processes, so one single model handling all processes often suffers from inaccuracies. A solution is to partition data in relation to such processes using the available domain knowledge or expert judgment, to train separate models for each of the processes, and to merge them in a modular model (committee). In this paper a problem of water flow forecast in watershed hydrology is considered where the flow process can be presented as consisting of two subprocesses -- base flow and excess flow, so that these two processes can be separated. Several approaches to data separation techniques are studied. Two case studies with different forecast horizons are considered. Parameters of the algorithms responsible for data partitioning are optimized using genetic algorithms and global pattern search. It was found that modularization of ANN models using domain knowledge makes models more accurate, if compared with a global model trained on the whole data set, especially when forecast horizon (and hence the complexity of the modelled processes) is increased.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
NIR light propagation in a digital head model for traumatic brain injury (TBI)
Francis, Robert; Khan, Bilal; Alexandrakis, George; Florence, James; MacFarlane, Duncan
2015-01-01
Near infrared spectroscopy (NIRS) is capable of detecting and monitoring acute changes in cerebral blood volume and oxygenation associated with traumatic brain injury (TBI). Wavelength selection, source-detector separation, optode density, and detector sensitivity are key design parameters that determine the imaging depth, chromophore separability, and, ultimately, clinical usefulness of a NIRS instrument. We present simulation results of NIR light propagation in a digital head model as it relates to the ability to detect intracranial hematomas and monitor the peri-hematomal tissue viability. These results inform NIRS instrument design specific to TBI diagnosis and monitoring. PMID:26417498
An improved method to estimate reflectance parameters for high dynamic range imaging
NASA Astrophysics Data System (ADS)
Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro
2008-01-01
Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture input images. Neither method needs to segment color areas on an image, or to reconstruct a high dynamic range (HDR) image. The second method improves on the first, bypassing the requirement for specific separation of diffuse and specular reflection components. For the latter method, diffuse and specular reflectance parameters are estimated separately, using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components, and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out using both methods, with simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and the second method, with spectral images captured by an imaging spectrograph and a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.
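A minimal sketch of the two-stage least-squares idea is given below, using a simplified Lambert term plus a Torrance-Sparrow-like specular lobe. The exact geometric factors, the handling of saturated pixels, and the variable names are assumptions rather than the authors' formulation.

```python
import numpy as np

def fit_reflectance(I, theta_i, theta_r, alpha):
    """Two-stage least-squares fit of a Lambert + Torrance-Sparrow-like model.

    I       : observed (unsaturated) reflection values
    theta_i : incident angles, theta_r : viewing angles (radians)
    alpha   : angle between surface normal and half-vector (radians)
    Returns (k_d, k_s, sigma): diffuse albedo, gloss intensity, roughness."""
    # stage 1: treat all samples as diffuse-only, I ~ k_d * cos(theta_i)
    A_d = np.cos(theta_i)[:, None]
    k_d, *_ = np.linalg.lstsq(A_d, I, rcond=None)
    k_d = float(k_d[0])

    # stage 2: specular residual, log-transformed Torrance-Sparrow lobe
    spec = I - k_d * np.cos(theta_i)
    mask = spec > 1e-6                                   # keep positive residuals only
    y = np.log(spec[mask] * np.cos(theta_r[mask]))       # ln(k_s) - alpha^2 / (2 sigma^2)
    A_s = np.column_stack([np.ones(mask.sum()), alpha[mask] ** 2])
    coef, *_ = np.linalg.lstsq(A_s, y, rcond=None)
    k_s = float(np.exp(coef[0]))
    sigma = float(np.sqrt(-1.0 / (2.0 * coef[1])))       # slope should be negative
    return k_d, k_s, sigma
```

The log transform turns the specular fit into a linear problem in ln(k_s) and 1/(2 sigma^2), which is what makes the second stage a plain least-squares step.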
Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model
NASA Astrophysics Data System (ADS)
Berglund, Nils; Landon, Damien
2012-08-01
We study the stochastic FitzHugh-Nagumo equations, modelling the dynamics of neuronal action potentials in parameter regimes characterized by mixed-mode oscillations. The interspike time interval is related to the random number of small-amplitude oscillations separating consecutive spikes. We prove that this number has an asymptotically geometric distribution, whose parameter is related to the principal eigenvalue of a substochastic Markov chain. We provide rigorous bounds on this eigenvalue in the small-noise regime and derive an approximation of its dependence on the system's parameters for a large range of noise intensities. This yields a precise description of the probability distribution of observed mixed-mode patterns and interspike intervals.
Sweeney, Lisa M.; Parker, Ann; Haber, Lynne T.; Tran, C. Lang; Kuempel, Eileen D.
2015-01-01
A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model. PMID:23454101
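As a hedged illustration of the kind of simultaneous Bayesian calibration described, the sketch below draws a Metropolis chain for a single clearance-rate parameter shared by two data sets. A toy one-compartment retention curve stands in for the actual three-compartment lung model, and the prior, likelihood and data are all illustrative assumptions.

```python
import numpy as np

def log_posterior(log_k, data_us, data_uk, sigma=0.1):
    """Gaussian likelihood of a shared log clearance rate against two data sets,
    with a flat prior on log_k inside wide bounds (all choices are assumptions)."""
    if not (-8.0 < log_k < 2.0):
        return -np.inf
    k = np.exp(log_k)
    ll = 0.0
    for t, burden in (data_us, data_uk):
        pred = np.exp(-k * t)                        # toy one-compartment retention
        ll += -0.5 * np.sum((burden - pred) ** 2) / sigma ** 2
    return ll

def metropolis(data_us, data_uk, n_iter=20000, step=0.1, seed=1):
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x = -3.0
    lp = log_posterior(x, data_us, data_uk)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = log_posterior(prop, data_us, data_uk)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return np.exp(chain[n_iter // 2:])               # discard burn-in, return rates

# hypothetical normalized lung-burden time courses (years, fraction retained)
t = np.linspace(0.0, 20.0, 15)
data_us = (t, np.exp(-0.05 * t) + 0.02 * np.random.default_rng(2).normal(size=t.size))
data_uk = (t, np.exp(-0.06 * t) + 0.02 * np.random.default_rng(3).normal(size=t.size))
print(np.percentile(metropolis(data_us, data_uk), [2.5, 50, 97.5]))
```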
Hays, Ron D; Spritzer, Karen L; Amtmann, Dagmar; Lai, Jin-Shei; Dewitt, Esi Morgan; Rothrock, Nan; Dewalt, Darren A; Riley, William T; Fries, James F; Krishnan, Eswar
2013-11-01
To create upper-extremity and mobility subdomain scores from the Patient-Reported Outcomes Measurement Information System (PROMIS) physical functioning adult item bank. Expert reviews were used to identify upper-extremity and mobility items from the PROMIS item bank. Psychometric analyses were conducted to assess empirical support for scoring upper-extremity and mobility subdomains. Data were collected from the U.S. general population and multiple disease groups via self-administered surveys. The sample (N=21,773) included 21,133 English-speaking adults who participated in the PROMIS wave 1 data collection and 640 Spanish-speaking Latino adults recruited separately. Not applicable. We used English- and Spanish-language data and existing PROMIS item parameters for the physical functioning item bank to estimate upper-extremity and mobility scores. In addition, we fit graded response models to calibrate the upper-extremity items and mobility items separately, compare separate to combined calibrations, and produce subdomain scores. After eliminating items because of local dependency, 16 items remained to assess upper extremity and 17 items to assess mobility. The estimated correlation between upper extremity and mobility was .59 using existing PROMIS physical functioning item parameters (r=.60 using parameters calibrated separately for upper-extremity and mobility items). Upper-extremity and mobility subdomains shared about 35% of the variance in common, and produced comparable scores whether calibrated separately or together. The identification of the subset of items tapping these 2 aspects of physical functioning and scored using the existing PROMIS parameters provides the option of scoring these subdomains in addition to the overall physical functioning score. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Modeling the electrophoretic separation of short biological molecules in nanofluidic devices
NASA Astrophysics Data System (ADS)
Fayad, Ghassan; Hadjiconstantinou, Nicolas
2010-11-01
Via comparisons with Brownian Dynamics simulations of the worm-like-chain and rigid-rod models, and the experimental results of Fu et al. [Phys. Rev. Lett., 97, 018103 (2006)], we demonstrate that, for the purposes of low-to-medium field electrophoretic separation in periodic nanofilter arrays, sufficiently short biomolecules can be modeled as point particles, with their orientational degrees of freedom accounted for using partition coefficients. This observation is used in the present work to build a particularly simple and efficient Brownian Dynamics simulation method. Particular attention is paid to the model's ability to quantitatively capture experimental results using realistic values of all physical parameters. A variance-reduction method is developed for efficiently simulating arbitrarily small forcing electric fields.
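A minimal sketch of the point-particle idea is given below: overdamped Brownian dynamics of a single particle driven by a constant electrophoretic force through a periodic free-energy landscape, where a sinusoidal barrier stands in for the entropic cost of entering the shallow slit (i.e. the partition coefficient). All parameter values and the landscape itself are illustrative assumptions, not the authors' nanofilter geometry.

```python
import numpy as np

def bd_mobility(barrier_kT=3.0, force_kT_per_nm=0.02, period_nm=1000.0,
                n_steps=200000, dt_ns=100.0, D_nm2_per_ns=0.1, seed=0):
    """Overdamped Brownian dynamics of a point particle in a tilted periodic
    free-energy landscape; returns the mean drift velocity (nm/ns)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    k = 2.0 * np.pi / period_nm
    for _ in range(n_steps):
        # dU/dx for U(x) = (barrier/2) * (1 - cos(k x)), in units of kT/nm
        grad = 0.5 * barrier_kT * k * np.sin(k * x)
        drift = D_nm2_per_ns * (force_kT_per_nm - grad)   # Einstein relation: mu = D/kT
        x += drift * dt_ns + np.sqrt(2.0 * D_nm2_per_ns * dt_ns) * rng.normal()
    return x / (n_steps * dt_ns)

# in the sieving regime assumed here, shorter molecules see a lower effective
# barrier (larger partition coefficient) and therefore drift faster at low fields
for barrier in (1.0, 3.0, 5.0):
    print(barrier, bd_mobility(barrier_kT=barrier))
```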
Visual-search models for location-known detection tasks
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.
2017-03-01
Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
NASA Technical Reports Server (NTRS)
Berman, A. L.
1976-01-01
In the last two decades, increasingly sophisticated deep space missions have placed correspondingly stringent requirements on navigational accuracy. As part of the effort to increase navigational accuracy, and hence the quality of radiometric data, much effort has been expended in an attempt to understand and compute the tropospheric effect on range (and hence range rate) data. The general approach adopted has been that of computing a zenith range refraction, and then mapping this refraction to any arbitrary elevation angle via an empirically derived function of elevation. The prediction of zenith range refraction derived from surface measurements of meteorological parameters is presented. Refractivity is separated into wet (water vapor pressure) and dry (atmospheric pressure) components. The integration of dry refractivity is shown to be exact. Attempts to integrate wet refractivity directly prove ineffective; however, several empirical models developed by the author and other researchers at JPL are discussed. The best current wet refraction model is here considered to be a separate day/night model, which is proportional to surface water vapor pressure and inversely proportional to surface temperature. Methods are suggested that might improve the accuracy of the wet range refraction model.
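The report above splits refractivity into dry and wet components computed from surface meteorology. As a hedged illustration of that kind of calculation (a standard Saastamoinen-style formulation, not the JPL day/night wet model developed in the report), the sketch below returns a zenith range correction in metres from surface pressure, temperature and water vapour pressure.

```python
import numpy as np

def zenith_range_refraction(P_hPa, T_K, e_hPa, lat_deg=35.0, h_m=800.0):
    """Zenith tropospheric range correction (metres) from surface meteorology.

    Saastamoinen-style split: a dry term driven by total pressure (exact
    hydrostatic integral) and a wet term proportional to water vapour pressure
    and inversely related to temperature. Illustrative only."""
    phi = np.radians(lat_deg)
    gravity_corr = 1.0 - 0.00266 * np.cos(2.0 * phi) - 2.8e-7 * h_m
    dry = 0.0022768 * P_hPa / gravity_corr            # dry (hydrostatic) component
    wet = 0.002277 * (1255.0 / T_K + 0.05) * e_hPa    # empirical wet component
    return dry + wet, dry, wet

total, dry, wet = zenith_range_refraction(P_hPa=1013.25, T_K=293.15, e_hPa=12.0)
print(f"zenith correction: {total:.3f} m (dry {dry:.3f} m, wet {wet:.3f} m)")
```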
Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System
NASA Technical Reports Server (NTRS)
Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.
2013-01-01
Space weather nowcasting and forecasting models require assimilation of near-real time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g. via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The issue of required space weather observatories to meet the spatial and temporal requirements of these models is a complex one, and we do not address that with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.
Modeling phase separation in mixtures of intrinsically-disordered proteins
NASA Astrophysics Data System (ADS)
Gu, Chad; Zilman, Anton
Phase separation in a pure or mixed solution of intrinsically-disordered proteins (IDPs) and its role in various biological processes has generated interest from the theoretical biophysics community. Phase separation of IDPs has been implicated in the formation of membrane-less organelles such as nucleoli, as well as in a mechanism of selectivity in transport through the nuclear pore complex. Based on a lattice model of polymers, we study the phase diagram of IDPs in a mixture and describe the selective exclusion of soluble proteins from the dense-phase IDP aggregates. The model captures the essential behaviour of phase separation by a minimal set of coarse-grained parameters, corresponding to the average monomer-monomer and monomer-protein attraction strength, as well as the protein-to-monomer size ratio. Contrary to the intuition that strong monomer-monomer interaction increases exclusion of soluble proteins from the dense IDP aggregates, our model predicts that the concentration of soluble proteins in the aggregate phase as a function of monomer-monomer attraction is non-monotonic. We corroborate the predictions of the lattice model using Langevin dynamics simulations of grafted polymers in planar and cylindrical geometries, mimicking various in-vivo and in-vitro conditions.
Mandujano-Ramírez, Humberto J; González-Vázquez, José P; Oskam, Gerko; Dittrich, Thomas; Garcia-Belmonte, Germa; Mora-Seró, Iván; Bisquert, Juan; Anta, Juan A
2014-03-07
Many recent advances in novel solar cell technologies are based on charge separation in disordered semiconductor heterojunctions. In this work we use the Random Walk Numerical Simulation (RWNS) method to model the dynamics of electrons and holes in two disordered semiconductors in contact. Miller-Abrahams hopping rates and a tunnelling distance-dependent electron-hole annihilation mechanism are used to model transport and recombination, respectively. To test the validity of the model, three numerical "experiments" have been devised: (1) in the absence of constant illumination, charge separation has been quantified by computing surface photovoltage (SPV) transients. (2) By applying a continuous generation of electron-hole pairs, the model can be used to simulate a solar cell under steady-state conditions. This has been exploited to calculate open-circuit voltages and recombination currents for an archetypical bulk heterojunction solar cell (BHJ). (3) The calculations have been extended to nanostructured solar cells with inorganic sensitizers to study, specifically, non-ideality in the recombination rate. The RWNS model in combination with exponential disorder and an activated tunnelling mechanism for transport and recombination is shown to reproduce correctly charge separation parameters in these three "experiments". This provides a theoretical basis to study relevant features of novel solar cell technologies.
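The Miller-Abrahams rate used in RWNS-type hopping simulations has a standard form: a distance-dependent tunnelling factor multiplied by a Boltzmann penalty for energetically uphill hops. A minimal sketch follows; the attempt frequency, localization length and temperature are assumed values, not those used in the paper.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def miller_abrahams_rate(dE_eV, r_nm, nu0=1e12, a_loc_nm=0.2, T=300.0):
    """Miller-Abrahams hopping rate between two localized states.

    dE_eV : energy of target site minus energy of origin site (eV)
    r_nm  : hop distance (nm); nu0, a_loc_nm and T are assumed values.
    Uphill hops carry a Boltzmann penalty; downhill hops do not."""
    tunnelling = np.exp(-2.0 * r_nm / a_loc_nm)
    boltzmann = np.exp(-dE_eV / (K_B * T)) if dE_eV > 0 else 1.0
    return nu0 * tunnelling * boltzmann

# example: a 1 nm hop that is 0.1 eV uphill vs. the same hop downhill
print(miller_abrahams_rate(0.1, 1.0), miller_abrahams_rate(-0.1, 1.0))
```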
Identification of a parametric, discrete-time model of ankle stiffness.
Guarin, Diego L; Jalaleddini, Kian; Kearney, Robert E
2013-01-01
Dynamic ankle joint stiffness defines the relationship between the position of the ankle and the torque acting about it and can be separated into intrinsic and reflex components. Under stationary conditions, intrinsic stiffness can be described by a linear second-order system, while reflex stiffness is described by a Hammerstein system whose input is delayed velocity. Given that reflex and intrinsic torque cannot be measured separately, there has been much interest in the development of system identification techniques to separate them analytically. To date, most methods have been nonparametric and as a result there is no direct link between the estimated parameters and those of the stiffness model. This paper presents a novel algorithm for identification of a discrete-time model of ankle stiffness. Through simulations we show that the algorithm gives unbiased results even in the presence of large, non-white noise. Application of the method to experimental data demonstrates that it produces results consistent with previous findings.
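A minimal sketch of the parallel-pathway structure described above is given below: an intrinsic second-order path acting on position, and a Hammerstein reflex path whose input is delayed velocity passed through a static nonlinearity and then second-order dynamics. The parameter values and the half-wave-rectifier nonlinearity are illustrative assumptions, not identified values.

```python
import numpy as np
from scipy.signal import lsim, TransferFunction

def simulate_ankle_torque(position, dt=0.001, delay_s=0.04,
                          K=150.0, B=1.0, I=0.01, gain=30.0, wn=20.0, zeta=1.0):
    """Intrinsic + reflex torque for a parallel-pathway stiffness model (sketch).

    Intrinsic path : I*s^2 + B*s + K acting on position.
    Reflex path    : Hammerstein system -- delayed velocity, a static
                     half-wave rectifier (assumption), then second-order dynamics."""
    t = np.arange(position.size) * dt
    velocity = np.gradient(position, dt)
    accel = np.gradient(velocity, dt)

    # intrinsic torque evaluated in the time domain
    torque_intrinsic = K * position + B * velocity + I * accel

    # reflex torque: delay, rectify, then low-pass second-order dynamics
    n_delay = int(round(delay_s / dt))
    v_delayed = np.concatenate([np.zeros(n_delay), velocity[:velocity.size - n_delay]])
    u = np.maximum(v_delayed, 0.0)                       # static nonlinearity
    reflex_tf = TransferFunction([gain * wn ** 2], [1.0, 2.0 * zeta * wn, wn ** 2])
    _, torque_reflex, _ = lsim(reflex_tf, U=u, T=t)

    return torque_intrinsic + torque_reflex

# usage: a small sinusoidal position perturbation at 1 Hz
t = np.arange(5000) * 0.001
pos = 0.02 * np.sin(2 * np.pi * 1.0 * t)
torque = simulate_ankle_torque(pos)
```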
Separation of m-cresol from neutral oils with liquid-liquid extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venter, D.L.; Nieuwoudt
Coal pyrolysis liquors are a major source of valuable phenolic compounds. In this study, the separation of m-cresol from neutral oils by means of liquid-liquid extraction is investigated. Liquid-liquid equilibria for the systems m-cresol + o-toluonitrile + hexane + water + tetraethylene glycol + undecane + dodecane and m-cresol + o-toluonitrile + hexane + water + tetraethylene glycol have been determined at 313.15 K in order to evaluate the suitability of tetraethylene glycol as a high-boiling solvent for the separation of m-cresol from neutral oils. The effect of parameters such as solvent ratios on the desired separation was investigated. These effects are illustrated on the basis of separation factors, the percentage of feed o-toluonitrile remaining in the solvent phase, and the percentage recovery of m-cresol. From the experimental results it was concluded that tetraethylene glycol is suitable for the proposed separation. The nonrandom two-liquid model fitted the experimental data satisfactorily. The model was used in the simulation of a multistage extraction column. m-Cresol recoveries of greater than 97% and m-cresol purity of greater than 99.5% were predicted.
NASA Astrophysics Data System (ADS)
O'Brien, R. J.; Deakin, J.; Misstear, B.; Gill, L.; Flynn, R. M.
2012-12-01
An appreciation of the quantity of streamflow derived from the main hydrological groundwater and surface water pathways transporting diffuse pollutants is critical when addressing a wide range of water resource management issues. The Pathways Project, funded by the Irish EPA, is developing a Catchment Management Tool (CMT) as an aid to water resource decision makers. The pollutants investigated by the CMT include phosphorus, nitrogen, sediments, pesticides and pathogens. An important first step in this process is to provide reliable estimates of the slower responding groundwater pathways in conjunction with the quicker overland and interflow pathways. Four watersheds are being investigated, with continuous rainfall, discharge, temperature and conductivity data being collected at gauging points within each of the watersheds. These datasets are being used to populate the semi-distributed, lumped flow model, NAM and also the distributed, finite difference model, MODFLOW. One of the main challenges is to achieve credible separations of the hydrograph into the main pathways in relatively small catchments (sometimes less than 5 km2) with short response times. To assist the numerical modelling, physical separation techniques have been used to constrain the separations within probable limits. Physical techniques include: Master Recession Analysis; a modified Lyne and Hollick one-parameter digital separation; an approach developed in Ireland involving the application of recharge coefficients to hydrologically effective rainfall estimates; and finally using the NAM and MODFLOW models themselves as means of investigating separations. The contribution from each of the pathways, combined with an understanding of the attenuation of the contaminants along those pathways, will inform the CMT. This understanding will lay the foundation for linking the parameters of the NAM model to watershed descriptors such as slope, drainage density, watershed area, soil type, etc., in order to predict the response of a watershed to rainfall. This is an important deliverable of this research and will be fundamental for initial investigations in ungauged watersheds. This approach to quantifying hydrological pathways will therefore have wider applicability across Ireland and in hydrological settings elsewhere internationally. The research is being carried out for the Environmental Protection Agency by a consortium involving Queen's University Belfast, University College Dublin and Trinity College Dublin.
Figure: Pathway separations in a karst watershed. Observed discharge (black) with separated pathways: quick diffuse flow (blue); slow diffuse flow (green); interflow (light blue); and overland flow (red).
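The modified Lyne and Hollick one-parameter digital filter mentioned above is a standard recursive baseflow separation; a single forward pass is sketched below. The filter parameter value (0.925 is a commonly quoted choice) and the synthetic hydrograph are assumptions, and in practice several forward/backward passes are usually applied.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick one-parameter digital filter.

    q : streamflow series (m^3/s). Returns (baseflow, quickflow)."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for k in range(1, q.size):
        f = alpha * quick[k - 1] + 0.5 * (1.0 + alpha) * (q[k] - q[k - 1])
        quick[k] = min(max(f, 0.0), q[k])    # constrain 0 <= quickflow <= q
    return q - quick, quick

# usage on a synthetic storm hydrograph
t = np.arange(200)
q = 2.0 + 8.0 * np.exp(-0.5 * ((t - 60) / 10.0) ** 2)
base, quick = lyne_hollick_baseflow(q)
print(f"baseflow index: {base.sum() / q.sum():.2f}")
```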
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Taraj; Brasseur, James; Vijayakumar, Ganesh
2016-01-04
This study is aimed at gaining insight into the nonsteady transitional boundary-layer dynamics of wind turbine blades and the predictive capabilities of URANS-based transition and turbulence models for similar physics, through the analysis of a controlled flow with similar nonsteady parameters.
NASA Astrophysics Data System (ADS)
Alekseev, Ilia M.; Makhviladze, Tariel M.; Minushev, Airat Kh.; Sarychev, Mikhail E.
2009-10-01
On the basis of a general thermodynamic approach, a model describing the influence of point defects on the work of separation at an interface between solid materials is developed. The kinetic equations describing defect exchange between the interface and the bulks of the materials are formulated. The model has been applied to the case in which the joined materials contain point defects in the form of impurity atoms (interstitial and substitutional); the main characteristic parameters required for numerical modeling are specified and their domains of variability clarified. The results of the numerical modeling, namely the dependences on impurity concentration and on temperature, are obtained and analyzed. In particular, the effects of interfacial strengthening and adhesion incompatibility predicted analytically for the case of impurity atoms are verified and analyzed.
NASA Astrophysics Data System (ADS)
Alekseev, Ilia M.; Makhviladze, Tariel M.; Minushev, Airat Kh.; Sarychev, Mikhail E.
2010-02-01
On the basis of a general thermodynamic approach, a model describing the influence of point defects on the work of separation at an interface between solid materials is developed. The kinetic equations describing defect exchange between the interface and the bulks of the materials are formulated. The model has been applied to the case in which the joined materials contain point defects in the form of impurity atoms (interstitial and substitutional); the main characteristic parameters required for numerical modeling are specified and their domains of variability clarified. The results of the numerical modeling, namely the dependences on impurity concentration and on temperature, are obtained and analyzed. In particular, the effects of interfacial strengthening and adhesion incompatibility predicted analytically for the case of impurity atoms are verified and analyzed.
The 'robust' capture-recapture design allows components of recruitment to be estimated
Pollock, K.H.; Kendall, W.L.; Nichols, J.D.; Lebreton, J.-D.; North, P.M.
1993-01-01
The 'robust' capture-recapture design (Pollock 1982) allows analyses which combine features of closed population model analyses (Otis et al., 1978; White et al., 1982) and open population model analyses (Pollock et al., 1990). Estimators obtained under these analyses are more robust to unequal catchability than traditional Jolly-Seber estimators (Pollock, 1982; Pollock et al., 1990; Kendall, 1992). The robust design also allows estimation of parameters for population size, survival rate and recruitment numbers for all periods of the study, unlike Jolly-Seber type models. The major advantage of this design that we emphasize in this short review paper is that it allows separate estimation of immigration and in situ recruitment numbers for a two or more age class model (Nichols and Pollock, 1990). This is contrasted with the age-dependent Jolly-Seber model (Pollock, 1981; Stokes, 1984; Pollock et al., 1990), which provides separate estimates of immigration and in situ recruitment for all but the first two age classes when there are at least three age classes. The ability to achieve this separation of recruitment components can be very important to population modelers and wildlife managers, as many species can only be separated into two easily identified age classes in the field.
Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.
El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher
2018-01-01
Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.
Structural Equation Model Trees
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2015-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree structures that separate a data set recursively into subsets with significantly different parameter estimates in a SEM. SEM Trees provide means for finding covariates and covariate interactions that predict differences in structural parameters in observed as well as in latent space and facilitate theory-guided exploration of empirical data. We describe the methodology, discuss theoretical and practical implications, and demonstrate applications to a factor model and a linear growth curve model. PMID:22984789
Phase Separation of Superconducting Phases in the Penson-Kolb-Hubbard Model
NASA Astrophysics Data System (ADS)
Jerzy Kapcia, Konrad; Czart, Wojciech Robert; Ptok, Andrzej
2016-04-01
In this paper, we determine the phase diagrams (for T = 0 as well as T > 0) of the Penson-Kolb-Hubbard model for the two-dimensional square lattice within Hartree-Fock mean-field theory, focusing on the superconducting phases and on the possibility of phase separation. We find that phase separation, which is a state of coexistence of two different superconducting phases (with s- and η-wave symmetries), occurs in definite ranges of the electron concentration. In addition, increasing temperature can change the symmetry of the superconducting order parameter (from η-wave into s-wave). The system considered also exhibits interesting multicritical behaviour, including bicritical points. The relevance of the results to experiments on real materials is also discussed.
Low-Order Modeling of Dynamic Stall on Airfoils in Incompressible Flow
NASA Astrophysics Data System (ADS)
Narsipur, Shreyas
Unsteady aerodynamics has been a topic of research since the late 1930's and has increased in popularity among researchers studying dynamic stall in helicopters, insect/bird flight, micro air vehicles, wind-turbine aerodynamics, and flow-energy harvesting devices. Several experimental and computational studies have helped researchers gain a good understanding of the unsteady flow phenomena, but have proved to be expensive and time-intensive for rapid design and analysis purposes. Since the early 1970's, the push to develop low-order models to solve unsteady flow problems has resulted in several semi-empirical models capable of effectively analyzing unsteady aerodynamics in a fraction of the time required by high-order methods. However, due to the various complexities associated with time-dependent flows, several empirical constants and curve fits derived from existing experimental and computational results are required by the semi-empirical models to be an effective analysis tool. The aim of the current work is to develop a low-order model capable of simulating incompressible dynamic-stall type flow problems with a focus on accurately modeling the unsteady flow physics with the aim of reducing empirical dependencies. The lumped-vortex-element (LVE) algorithm is used as the baseline unsteady inviscid model to which augmentations are applied to model unsteady viscous effects. The current research is divided into two phases. The first phase focused on augmentations aimed at modeling pure unsteady trailing-edge boundary-layer separation and stall without leading-edge vortex (LEV) formation. The second phase is targeted at including LEV shedding capabilities to the LVE algorithm and combining with the trailing-edge separation model from phase one to realize a holistic, optimized, and robust low-order dynamic stall model. In phase one, initial augmentations to theory were focused on modeling the effects of steady trailing-edge separation by implementing a non-linear decambering flap to model the effect of the separated boundary-layer. Unsteady RANS results for several pitch and plunge motions showed that the differences in aerodynamic loads between steady and unsteady flows can be attributed to the boundary-layer convection lag, which can be modeled by choosing an appropriate value of the time lag parameter, tau2. In order to provide appropriate viscous corrections to inviscid unsteady calculations, the non-linear decambering flap is applied with a time lag determined by the tau2 value, which was found to be independent of motion kinematics for a given airfoil and Reynolds number. The predictions of the aerodynamic loads, unsteady stall, hysteresis loops, and flow reattachment from the low-order model agree well with CFD and experimental results, both for individual cases and for trends between motions. The model was also found to perform as well as existing semi-empirical models while using only a single empirically defined parameter. Inclusion of LEV shedding capabilities and combining the resulting algorithm with phase one's trailing-edge separation model was the primary objective of phase two. Computational results at low and high Reynolds numbers were used to analyze the flow morphology of the LEV to identify the common surface signature associated with LEV initiation at both low and high Reynolds numbers and relate it to the critical leading-edge suction parameter (LESP) to control the initiation and termination of LEV shedding in the low-order model. 
The critical LESP, like the tau2 parameter, was found to be independent of motion kinematics for a given airfoil and Reynolds number. Results from the final low-order model compared excellently with CFD and experimental solutions, both in terms of aerodynamic loads and vortex flow pattern predictions. Overall, the final combined dynamic stall model that resulted from the current research was successful in accurately modeling the physics of unsteady flow, thereby helping restrict the number of empirical coefficients to just two while successfully modeling the aerodynamic forces and flow patterns in a simple and precise manner.
Barbagallo, Gabriele; d’Agostino, Marco Valerio; Placidi, Luca; Neff, Patrizio
2016-01-01
In this paper, we propose the first estimate of some elastic parameters of the relaxed micromorphic model on the basis of real experiments of transmission of longitudinal plane waves across an interface separating a classical Cauchy material (steel plate) and a phononic crystal (steel plate with fluid-filled holes). A procedure is set up in order to identify the parameters of the relaxed micromorphic model by superimposing the experimentally based profile of the reflection coefficient (plotted as a function of the wave frequency) with the analogous profile obtained via numerical simulations. We determine five out of the six constitutive parameters featured by the relaxed micromorphic model in the isotropic case, as well as the micro-inertia parameter. The sixth elastic parameter, namely the Cosserat couple modulus μc, still remains undetermined, since experiments on transverse incident waves are not yet available. A fundamental result of this paper is the estimate of the non-locality intrinsically associated with the underlying microstructure of the metamaterial. We show that the characteristic length Lc measuring the non-locality of the phononic crystal is of the order of one third of the diameter of its fluid-filled holes. PMID:27436984
Madeo, Angela; Barbagallo, Gabriele; d'Agostino, Marco Valerio; Placidi, Luca; Neff, Patrizio
2016-06-01
In this paper, we propose the first estimate of some elastic parameters of the relaxed micromorphic model on the basis of real experiments of transmission of longitudinal plane waves across an interface separating a classical Cauchy material (steel plate) and a phononic crystal (steel plate with fluid-filled holes). A procedure is set up in order to identify the parameters of the relaxed micromorphic model by superimposing the experimentally based profile of the reflection coefficient (plotted as a function of the wave frequency) with the analogous profile obtained via numerical simulations. We determine five out of the six constitutive parameters featured by the relaxed micromorphic model in the isotropic case, as well as the micro-inertia parameter. The sixth elastic parameter, namely the Cosserat couple modulus μc, still remains undetermined, since experiments on transverse incident waves are not yet available. A fundamental result of this paper is the estimate of the non-locality intrinsically associated with the underlying microstructure of the metamaterial. We show that the characteristic length Lc measuring the non-locality of the phononic crystal is of the order of one third of the diameter of its fluid-filled holes.
2014-01-01
Background: Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results: The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions: Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522
Hanafi, Rasha Sayed; Lämmerhofer, Michael
2018-01-26
The Quality-by-Design approach to enantioselective HPLC method development surpasses Quality-by-Testing in offering optimal separation conditions with the least number of experiments and in its ability to describe the method's Design Space visually, which helps to determine enantiorecognition to a significant extent. Although some schemes exist for enantiomeric separations on Cinchona-based zwitterionic stationary phases, the exact design space and the weights by which each of the chromatographic parameters influences the separation have not yet been statistically studied. In the current work, a screening design followed by a Response Surface Methodology optimization design was adopted for enantioseparation optimization of 3 model drugs, namely the acidic Fmoc-leucine, the amphoteric tryptophan and the basic salbutamol. The screening design proved that the acid/base additives are of utmost importance for the 3 chiral drugs, and that among 3 different pairs of acids and bases, acetic acid and diethylamine is the couple able to provide acceptable resolution over variable conditions. Visualization of the response surfaces of the retention factor, separation factor and resolution helped describe accurately the magnitude by which each chromatographic factor (% MeOH, concentration and ratio of acid-base modifiers) affects the separation while interacting with the other parameters. The global optima, combining the highest enantioresolution with the least run time, varied greatly among the 3 chiral model drugs: low % methanol with an equal ratio of acid-base modifiers was best for the acidic drug, very high % methanol with a 10-fold higher concentration of the acid for the amphoteric drug, while a 20-fold excess of the base modifier with moderate % methanol was needed for the basic drug. Considering the selected drugs as models for many series of structurally related compounds, the design space defined and the optimum conditions computed are the key for method development on cinchona-based chiral stationary phases. Copyright © 2017 Elsevier B.V. All rights reserved.
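A generic sketch of how such a response-surface optimization can be set up numerically is given below: a small full-factorial design in three coded factors, a quadratic model fitted by least squares, and a grid search of the fitted surface for the predicted optimum. The factors, levels and simulated resolutions are hypothetical and do not reproduce the authors' experimental design or data.

```python
import numpy as np
from itertools import product

# hypothetical coded factors: x1 = % MeOH, x2 = acid conc., x3 = base/acid ratio
levels = [-1.0, 0.0, 1.0]
X = np.array(list(product(levels, repeat=3)))        # 3^3 full-factorial design

def quadratic_design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# hypothetical measured enantioresolution at each design point
rng = np.random.default_rng(0)
Rs = 1.5 - 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 0] ** 2 + 0.05 * rng.normal(size=len(X))

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), Rs, rcond=None)

# search the fitted quadratic surface on a fine grid for the predicted optimum
grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
pred = quadratic_design_matrix(grid) @ beta
print("predicted optimum (coded factors):", grid[np.argmax(pred)])
```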
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
Reddy, Chinthala P; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion and density imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts.
NASA Astrophysics Data System (ADS)
Palkin, V. A.; Igoshin, I. S.
2017-01-01
The separation potentials suggested by various researchers for separating multicomponent isotopic mixtures are considered. Their applicability to determining the enrichment efficiency parameters of a ternary mixture in a cascade with an optimal scheme for connecting stages made up of elements with three takeoffs is assessed. The separation potential that most precisely characterizes the separative power and other efficiency parameters of the stages and cascade schemes has been selected on the basis of this assessment.
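For the binary case, the classical (Peierls-Dirac) value function and the resulting stage separative power are standard, and the sketch below shows that calculation; the multicomponent potentials compared in the paper generalize the value function in different, author-specific ways, which the sketch does not attempt to reproduce. The flows and concentrations are illustrative and satisfy the mass balance.

```python
import numpy as np

def value_function(c):
    """Classical binary (Peierls-Dirac) separation potential V(c)."""
    c = np.asarray(c, dtype=float)
    return (2.0 * c - 1.0) * np.log(c / (1.0 - c))

def separative_power(F, c_F, P, c_P, W, c_W):
    """Separative power of a stage: delta U = P V(c_P) + W V(c_W) - F V(c_F).

    F, P, W are feed, product and waste flows (P + W = F assumed);
    c_* are the corresponding concentrations of the light component."""
    return P * value_function(c_P) + W * value_function(c_W) - F * value_function(c_F)

# illustrative binary stage (flows in arbitrary units)
print(separative_power(F=1.0, c_F=0.0072, P=0.5, c_P=0.0080, W=0.5, c_W=0.0064))
```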
Natural extension of fast-slow decomposition for dynamical systems
NASA Astrophysics Data System (ADS)
Rubin, J. E.; Krauskopf, B.; Osinga, H. M.
2018-01-01
Modeling and parameter estimation to capture the dynamics of physical systems are often challenging because many parameters can range over orders of magnitude and are difficult to measure experimentally. Moreover, selecting a suitable model complexity requires a sufficient understanding of the model's potential use, such as highlighting essential mechanisms underlying qualitative behavior or precisely quantifying realistic dynamics. We present an approach that can guide model development and tuning to achieve desired qualitative and quantitative solution properties. It relies on the presence of disparate time scales and employs techniques of separating the dynamics of fast and slow variables, which are well known in the analysis of qualitative solution features. We build on these methods to show how it is also possible to obtain quantitative solution features by imposing designed dynamics for the slow variables in the form of specified two-dimensional paths in a bifurcation-parameter landscape.
An aerodynamic model for one and two degree of freedom wing rock of slender delta wings
NASA Technical Reports Server (NTRS)
Hong, John
1993-01-01
The unsteady aerodynamic effects due to the separated flow around slender delta wings in motion were analyzed. By combining the unsteady flow field solution with the rigid-body Euler equations of motion, self-induced wing rock motion is simulated. The aerodynamic model successfully captures the qualitative characteristics of wing rock observed in experiments. For the one-degree-of-freedom-in-roll case, the model is used to look into the mechanisms of wing rock and to investigate the effects of various parameters, such as angle of attack, yaw angle, displacement of the separation point, and wing inertia. To investigate the roll and yaw coupling of the delta wing, an additional degree of freedom is added. However, no limit cycle was observed in the two-degree-of-freedom case. Nonetheless, the model can be used to apply various control laws to actively control wing rock using, for example, the displacement of the leading-edge vortex separation point by inboard spanwise blowing.
NASA Astrophysics Data System (ADS)
Munyaneza, O.; Mukubwa, A.; Maskey, S.; Uhlenbrook, S.; Wenninger, J.
2014-12-01
In the present study, we developed a catchment hydrological model which can be used to inform water resources planning and decision making for better management of the Migina Catchment (257.4 km2). The semi-distributed hydrological model HEC-HMS (Hydrologic Engineering Center - Hydrologic Modelling System, version 3.5) was used with its soil moisture accounting, unit hydrograph, linear reservoir (for baseflow) and Muskingum-Cunge (river routing) methods. We used rainfall data from 12 stations and streamflow data from 5 stations, which were collected as part of this study over a period of 2 years (May 2009 to June 2011). The catchment was divided into five sub-catchments. The model parameters were calibrated separately for each sub-catchment using the observed streamflow data. Calibration results were found acceptable at four stations, with a Nash-Sutcliffe model efficiency index (NS) of 0.65 for daily runoff at the catchment outlet. Due to the lack of sufficient and reliable data for longer periods, a model validation was not undertaken. However, we used results from tracer-based hydrograph separation from a previous study to compare our model results in terms of the runoff components. The model performed reasonably well in simulating the total flow volume, peak flow and timing as well as the portion of direct runoff and baseflow. We observed considerable disparities in the parameters (e.g. groundwater storage) and runoff components across the five sub-catchments, which provided insights into the different hydrological processes on a sub-catchment scale. We conclude that such disparities justify the need to consider catchment subdivisions if such parameters and components of the water cycle are to form the base for decision making in water resources planning in the catchment.
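The Nash-Sutcliffe efficiency used to judge the calibration above is straightforward to compute; a short sketch with hypothetical flows follows (the values are not from the Migina Catchment).

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 means the model
    is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# hypothetical daily flows at a catchment outlet (m^3/s)
obs = np.array([1.2, 1.1, 3.5, 6.8, 4.2, 2.5, 1.8, 1.5])
sim = np.array([1.0, 1.2, 2.9, 6.1, 4.8, 2.9, 2.0, 1.4])
print(f"NS = {nash_sutcliffe(obs, sim):.2f}")
```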
Characterization of Nanofluidic Entropic Trap Array for DNA Separation
NASA Astrophysics Data System (ADS)
Han, Jongyoon
2003-03-01
Micromachined nanoscale fluidic structures can provide new opportunities in biomolecule manipulation and sorting, because their chemical and physical properties can be controlled easily, unlike random nanoporous materials. As an example of regular nanostructures used for biomolecule manipulation and sorting, a nanofluidic entropic trap array for DNA separation is presented. Nanofluidic channels as thin as 75 nm were used as a molecular sieve instead of agarose gel for DNA separation. The interaction between DNA molecules and the nanofluidic structure determines the DNA migration speed, which was used to separate DNA molecules in dc electrophoresis. Separation of long DNA (up to 200 kbp) has been achieved within 30 minutes, using less than picogram quantities of DNA, with only 1.5 cm long channels.[1] In addition to the efficiency improvement, nanofluidic DNA entropic traps have a regular structure that can be easily modeled theoretically. The theoretical model could be the basis for improving the system performance for further optimization in separation size range and resolution. The process of DNA moving out of the entropic trap was theoretically modeled, and the prediction of the theoretical model was compared with the experimental data.[2] The selectivity, resolution, and separation range of DNA for a given entropic trap separation system were discussed in terms of the number of entropic traps, various structural parameters of the system, and the electric field. It is expected that this system could be used for analyzing a small amount of ultra-long DNA molecules. (1) Han, J.; Craighead, H. G. Science 2000, 288, 1026-1029. (2) Han, J.; Craighead, H. G. Anal. Chem. 2002, 74, 394-401.
Validation of buoyancy driven spectral tensor model using HATS data
NASA Astrophysics Data System (ADS)
Chougule, A.; Mann, J.; Kelly, M.; Larsen, G. C.
2016-09-01
We present a homogeneous spectral tensor model for wind velocity and temperature fluctuations, driven by mean vertical shear and mean temperature gradient. Results from the model, including one-dimensional velocity and temperature spectra and the associated co-spectra, are shown in this paper. The model also reproduces two-point statistics, such as coherence and phases, via cross-spectra between two points separated in space. Model results are compared with observations from the Horizontal Array Turbulence Study (HATS) field program (Horst et al. 2004). The spectral velocity tensor in the model is described via five parameters: the dissipation rate (ɛ), length scale of energy-containing eddies (L), a turbulence anisotropy parameter (Γ), gradient Richardson number (Ri) representing the atmospheric stability and the rate of destruction of temperature variance (ηθ).
Transient eddy formation around headlands
Signell, Richard P.; Geyer, W. Rockwell
1991-01-01
Eddies with length scales of 1-10 km are commonly observed in coastal waters and play an important role in the dispersion of water-borne materials. The generation and evolution of these eddies by oscillatory tidal flow around coastal headlands is investigated with analytical and numerical models. Using shallow water depth-averaged vorticity dynamics, eddies are shown to form when flow separation occurs near the tip of the headland, causing intense vorticity generated along the headland to be injected into the interior. An analytic boundary layer model demonstrates that flow separation occurs when the pressure gradient along the boundary switches from favoring (accelerating) to adverse (decelerating), and its occurrence depends principally on three parameters: the aspect ratio [b/a], where b and a are characteristic width and length scales of the headland; [H/CDa], where H is the water depth and CD is the depth-averaged drag coefficient; and [U0/ωa], where U0 and ω are the magnitude and frequency of the far-field tidal flow. Simulations with a depth-averaged numerical model show a wide range of responses to changes in these parameters, including cases where no separation occurs, cases where only one eddy exists at a given time, and cases where bottom friction is weak enough that eddies produced during successive tidal cycles coexist, interacting strongly with each other. These simulations also demonstrate that in unsteady flow, a strong start-up vortex forms after the flow separates, leading to a much more intense patch of vorticity and stronger recirculation than found in steady flow.
NASA Astrophysics Data System (ADS)
Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min
The events of high ozone concentrations and meteorological conditions covering the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering the daytime and nighttime ozone concentrations on the lateral boundary conditions of the model domain. The sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for emission reductions of less than 30-40% in VOC and 30-50% in NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological parameters show that warm temperature, sufficient sunlight, low wind, and high surface pressure are distinct conditions that tend to trigger ozone episodes in polluted urban areas like Kaohsiung.
NASA Astrophysics Data System (ADS)
Lee, Kang Il
2012-08-01
The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21-0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreement with the experimental measurements.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that the point cloud can be described by a mixture of Gaussian models, so the separation of ground points and non-ground points can be recast as the separation of the components of a Gaussian mixture. Expectation-maximization (EM) is applied to perform this separation: EM computes maximum likelihood estimates of the mixture parameters, and with the estimated parameters the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled as the component with the larger likelihood. Furthermore, intensity information was also utilized to refine the filtering results acquired with the EM method. The proposed algorithm was tested on two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. For quantitative evaluation, the dataset provided by the ISPRS was adopted; the proposed algorithm obtains a 4.48% total error, which is much lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
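As a rough illustration of the separation step described above, the following is a minimal sketch of EM for a two-component, one-dimensional Gaussian mixture over point elevations; the actual feature set, the intensity-based refinement and the stopping rule used in the paper are not reproduced, and treating the lower-mean component as ground is an assumption.

```python
import numpy as np

def em_two_gaussians(z, n_iter=50):
    """Fit a two-component 1D Gaussian mixture to elevations z with EM and
    return a boolean mask for the lower-mean ('ground') component."""
    mu = np.percentile(z, [25, 75]).astype(float)   # crude initialization
    var = np.array([z.var(), z.var()])
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (pi / np.sqrt(2 * np.pi * var)) * \
               np.exp(-0.5 * (z[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: maximum-likelihood updates of weights, means and variances
        nk = resp.sum(axis=0)
        mu = (resp * z[:, None]).sum(axis=0) / nk
        var = (resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(z)

    ground = resp[:, np.argmin(mu)] > 0.5   # label by the larger likelihood
    return ground, mu, var, pi
```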
Atomistic Modeling of RuAl and (RuNi) Al Alloys
NASA Technical Reports Server (NTRS)
Gargano, Pablo; Mosca, Hugo; Bozzolo, Guillermo; Noebe, Ronald D.; Gray, Hugh R. (Technical Monitor)
2002-01-01
Atomistic modeling of RuAl and RuAlNi alloys, using the BFS (Bozzolo-Ferrante-Smith) method for alloys, is performed. The lattice parameter and energy of formation of B2 RuAl as a function of stoichiometry, and the lattice parameter of (Ru(sub 50-x)Ni(sub x)Al(sub 50)) alloys as a function of Ni concentration, are computed. BFS-based Monte Carlo simulations indicate that compositions close to Ru25Ni25Al50 are single phase with no obvious evidence of a miscibility gap and separation of the individual B2 phases.
An Empirical Bayes Approach to Spatial Analysis
NASA Technical Reports Server (NTRS)
Morris, C. N.; Kostal, H.
1983-01-01
Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.
Wang, Y; Harrison, M; Clark, B J
2006-02-10
An optimization strategy for the separation of an acidic mixture employing a monolithic stationary phase is presented, with the aid of experimental design and response surface methodology (RSM). An orthogonal array design (OAD), OA16 (2^15), was used to choose the significant parameters for the optimization. The significant factors were optimized by using a central composite design (CCD), and quadratic models between the dependent and the independent parameters were built. The mathematical models were tested on a number of simulated data sets and had a coefficient of determination R2 > 0.97 (n = 16). On applying the optimization strategy, the factor effects were visualized as three-dimensional (3D) response surfaces and contour plots. The optimal condition was achieved in less than 40 min by using the monolithic packing with a mobile phase of methanol/20 mM phosphate buffer pH 2.7 (25.5/74.5, v/v). The method showed good agreement between the experimental data and predicted values throughout the studied parameter space and was suitable for optimization studies on the monolithic stationary phase for acidic compounds.
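The quadratic response-surface step can be illustrated with a minimal least-squares sketch for two factors; the factor names and data are hypothetical placeholders, not the paper's design or measurements.

```python
import numpy as np

def fit_quadratic_rsm(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return beta, r2

# Hypothetical factors: x1 = organic modifier fraction, x2 = buffer pH; y = resolution
```

The fitted surface can then be evaluated on a grid to reproduce the kind of 3D response surfaces and contour plots mentioned in the abstract.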
Optimizing the separation performance of a gas centrifuge
NASA Astrophysics Data System (ADS)
Wood, H. G.
1997-11-01
Gas centrifuges were originally developed for the enrichment of U^235 from naturally occurring uranium for the purpose of providing fuel for nuclear power reactors and material for nuclear weapons. This required the separation of a binary mixture composed of U^235 and U^238. Since the end of the cold war, a surplus of enriched uranium exists on the world market, but many centrifuge plants exist in numerous countries. These circumstances, together with the growing demand for stable isotopes for chemical and physical research and in medical science, have led to the exploration of alternate applications of gas centrifuge technology. In order to achieve these multi-component separations, existing centrifuges must be modified or new centrifuges must be designed. In either case, it is important to have models of the internal flow fields to predict the separation performance and algorithms to seek the optimal operating conditions of the centrifuges. Here, we use the Onsager pancake model of the internal flow field, and we present an optimization strategy which exploits a similarity parameter in the pancake model. Numerical examples will be presented.
Wang, Y; Harrison, M; Clark, B J
2006-02-10
An optimization methodology is introduced for investigating the separation and the retention behavior of analytes on a new fluorinated reversed-phase packing. Ten basic compounds were selected as test probes to study the predictive models developed by using SPSS and MATLAB software. A two-level orthogonal array design (OAD) was used to extract significant parameters. The significant factors were optimised using a central composite design to obtain the quadratic relationship between the dependent and the independent variables. Using this strategy, response surfaces were derived as 3D and contour plots, and mathematical models were defined for the separation. The models had a satisfactory coefficient of determination (R2 > 0.97, n = 16). For the test compounds, the best separation condition was MeCN/30 mM phosphate buffer pH 7.1 (55.5:44.5, v/v), and the 10 basic solutes were resolved in 22 min. The significant influence of buffer concentration shows that the mechanisms of separation for basic compounds on the fluorinated packing differ from those on a common ODS stationary phase.
Cavitating flow during water hammer using a generalized interface vaporous cavitation model
NASA Astrophysics Data System (ADS)
Sadafi, Mohamadhosein; Riasi, Alireza; Nourbakhsh, Seyed Ahmad
2012-10-01
In a transient flow simulation, column separation may occur when the calculated pressure head decreases to the saturated vapor pressure head in a computational grid. Abrupt valve closure or pump failure can result in a fast transient flow with column separation, potentially causing problems such as pipe failure, hydraulic equipment damage, cavitation or corrosion. This paper reports a numerical study of water hammer with column separation in a simple reservoir-pipeline-valve system and pumping station. The governing equations for two-phase transient flow in pipes are solved based on the method of characteristics (MOC) using a generalized interface vaporous cavitating model (GIVCM). The numerical results were compared with the experimental data for validation purposes, and the comparison indicated that the GIVCM describes the experimental results more accurately than the discrete vapor cavity model (DVCM). In particular, the GIVCM correlated better with the experimental data than the DVCM in terms of timing and pressure magnitude. The effects of geometric and hydraulic parameters on flow behavior in a pumping station with column separation were also investigated in this study.
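For orientation, the single-phase core of the method of characteristics that such simulations build on can be sketched as below; the boundary conditions (reservoir, valve, pump) and the vaporous-cavitation handling of the GIVCM and DVCM are deliberately not reproduced, and the variable names are generic.

```python
import numpy as np

def moc_interior_update(H, Q, a, g, A, f, D, dx):
    """One MOC time step for interior nodes of a single-phase water-hammer grid.

    Classic C+/C- characteristic relations with steady friction; H and Q are
    numpy arrays of head [m] and flow [m^3/s] at the grid nodes, a is the wave
    speed, A and D the pipe area and diameter, f the friction factor.
    """
    B = a / (g * A)                      # characteristic impedance term
    R = f * dx / (2.0 * g * D * A ** 2)  # friction term per reach

    H_new, Q_new = H.copy(), Q.copy()
    for i in range(1, len(H) - 1):
        Cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        Cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        H_new[i] = 0.5 * (Cp + Cm)
        Q_new[i] = (Cp - Cm) / (2.0 * B)
    return H_new, Q_new
```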
Modelling aspects regarding the control in 13C isotope separation column
NASA Astrophysics Data System (ADS)
Boca, M. L.
2016-08-01
Carbon represents the fourth most abundant chemical element in the world, having two stable isotopes and one radioactive isotope. The 13C isotope, with a natural abundance of 1.1%, plays an important role in numerous applications, such as the study of human metabolism changes, molecular structure studies, non-invasive respiratory tests, Alzheimer tests, and air pollution and global warming effects on plants [9]. A manufacturing control system manages the internal logistics in a production system and determines the routings of product instances, the assignment of workers and components, and the starting of processes on not-yet-finished product instances. Manufacturing control does not control the manufacturing processes themselves, but has to cope with the consequences of the processing results (e.g. the routing of products to a repair station). In this research, UML (Unified Modelling Language) diagrams were developed for modelling the 13C isotope separation column and implemented in the StarUML program. Because the separation is a critical process requiring good control and supervision, the critical parameters in the column, temperature and pressure, were controlled using PLCs (programmable logic controllers), and graphical analyses were performed to identify critical situations that can affect the separation process. The main parameters that need to be controlled are: the liquid nitrogen (N2) level in the condenser, the electrical power supplied to the boiler, and the vacuum pressure.
The validation of a generalized Hooke's law for coronary arteries.
Wang, Chong; Zhang, Wei; Kassab, Ghassan S
2008-01-01
The exponential form of constitutive model is widely used in biomechanical studies of blood vessels. There are two main issues, however, with this model: 1) the curve fits of experimental data are not always satisfactory, and 2) the material parameters may be oversensitive. A new type of strain measure in a generalized Hooke's law for blood vessels was recently proposed by our group to address these issues. The new model has one nonlinear parameter and six linear parameters. In this study, the stress-strain equation is validated by fitting the model to experimental data of porcine coronary arteries. Material constants of left anterior descending artery and right coronary artery for the Hooke's law were computed with a separable nonlinear least-squares method with an excellent goodness of fit. A parameter sensitivity analysis shows that the stability of material constants is improved compared with the exponential model and a biphasic model. A boundary value problem was solved to demonstrate that the model prediction can match the measured arterial deformation under experimental loading conditions. The validated constitutive relation will serve as a basis for the solution of various boundary value problems of cardiovascular biomechanics.
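The separable nonlinear least-squares idea used for the parameter fit can be sketched with a minimal variable-projection routine: for each trial value of the single nonlinear parameter the linear coefficients are eliminated by ordinary linear least squares, and only the nonlinear parameter is searched. The basis functions and bounds below are placeholders, not the actual strain measures of the generalized Hooke's law.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def separable_fit(t, y, basis, bounds=(0.1, 10.0)):
    """Variable-projection fit of y ~ sum_j c_j * basis_j(t; k).

    `basis(t, k)` returns an (n_samples, n_linear) design matrix that depends
    on the single nonlinear parameter k; the linear coefficients c are
    eliminated by linear least squares inside the objective.
    """
    def residual_norm(k):
        A = basis(t, k)
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ c) ** 2)

    res = minimize_scalar(residual_norm, bounds=bounds, method="bounded")
    k_opt = res.x
    A = basis(t, k_opt)
    c_opt, *_ = np.linalg.lstsq(A, y, rcond=None)
    return k_opt, c_opt

# Placeholder basis: two exponentials sharing one nonlinear rate parameter k
basis = lambda t, k: np.column_stack([np.exp(-k * t), np.exp(-2 * k * t)])
```

Because the linear constants are recomputed in closed form at every step, the search runs over a single scalar and is far better conditioned than a joint fit of all parameters at once.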
NASA Astrophysics Data System (ADS)
Marjani, Azam
2016-07-01
For the purification and separation of biomolecules and cell particles in biological engineering, besides chromatography as the most widely applied process, aqueous two-phase systems (ATPS) are among the most favorable separation processes and are worth investigating thermodynamically. In recent years, thermodynamic calculation of ATPS properties has attracted much attention due to their great applications in chemical industries such as separation processes. These phase calculations of ATPS have inherent complexity due to the presence of ions and polymers in aqueous solution. In this work, a thermodynamic investigation was carried out for target ternary systems of polyethylene glycol (PEG4000)-salt-water with three salts (NaCl, KCl and LiCl), as PEG is the most favorable polymer in ATPS. The modified perturbed hard sphere chain (PHSC) equation of state (EOS), extended Debye-Hückel and Pitzer models were employed for calculation of activity coefficients for the considered systems. Four additional statistical parameters were considered to ensure the consistency of correlations and introduced as objective functions in the particle swarm optimization algorithm. The results showed desirable agreement with the available experimental data, and the order of recommendation of the studied models is PHSC EOS > extended Debye-Hückel > Pitzer. The concluding remark is that all the employed models are reliable in such calculations and can be used for thermodynamic correlations/predictions; however, by using an ion-based parameter calculation method, the PHSC EOS reveals both reliability and universality of application.
Entropy-based separation of yeast cells using a microfluidic system of conjoined spheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kai-Jian; Qin, S.-J., E-mail: shuijie.qin@gmail.com; Bai, Zhong-Chen
2013-11-21
A physical model is derived to create a biological cell separator that is based on controlling the entropy in a microfluidic system having conjoined spherical structures. A one-dimensional simplified model of this three-dimensional problem in terms of the corresponding effects of entropy on the Brownian motion of particles is presented. This dynamic mechanism is based on the Langevin equation from statistical thermodynamics and takes advantage of the characteristics of the Fokker-Planck equation. This mechanism can be applied to manipulate biological particles inside a microfluidic system with identical, conjoined, spherical compartments. This theoretical analysis is verified by performing a rapid and simple technique for separating yeast cells in these conjoined, spherical microfluidic structures. The experimental results basically match our theoretical model, and we further analyze the parameters which can be used to control this separation mechanism. Both numerical simulations and experimental results show that the motion of the particles depends on the geometrical boundary conditions of the microfluidic system and the initial concentration of the diffusing material. This theoretical model can be implemented in future biophysics devices for the optimized design of passive cell sorters.
Mapping the parameter space of a T2-dependent model of water diffusion MR in brain tissue.
Hansen, Brian; Vestergaard-Poulsen, Peter
2006-10-01
We present a new model for describing the diffusion-weighted (DW) proton nuclear magnetic resonance signal obtained from normal grey matter. Our model is analytical and, in some respects, is an extension of earlier model schemes. We model tissue as composed of three separate compartments with individual properties of diffusion and transverse relaxation. Our study assumes slow exchange between compartments. We attempt to take cell morphology into account, along with its effect on water diffusion in tissues. Using this model, we simulate diffusion-sensitive MR signals and compare model output to experimental data from human grey matter. In doing this comparison, we perform a global search for good fits in the parameter space of the model. The characteristic nonmonoexponential behavior of the signal as a function of experimental b value is reproduced quite well, along with established values for tissue-specific parameters such as volume fraction, tortuosity and apparent diffusion coefficient. We believe that the presented approach to modeling diffusion in grey matter adds new aspects to the treatment of a longstanding problem.
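A minimal sketch of the kind of multi-compartment signal equation the abstract describes is given below, assuming slow exchange and compartment-specific apparent diffusion coefficients and T2 values; the paper's actual parameterization (cell-morphology terms, tortuosity, compartment definitions) is not reproduced, and all numbers are placeholders.

```python
import numpy as np

def dw_signal(b, te, fractions, adcs, t2s):
    """Three-compartment DW signal with slow exchange:
    S(b, TE) = sum_i f_i * exp(-TE / T2_i) * exp(-b * D_i)."""
    fractions, adcs, t2s = map(np.asarray, (fractions, adcs, t2s))
    weights = fractions * np.exp(-te / t2s)              # T2 weighting per compartment
    return np.sum(weights * np.exp(-np.outer(b, adcs)), axis=1)

# Example: b-values in s/mm^2, TE in ms, D in mm^2/s, T2 in ms (all hypothetical)
b = np.linspace(0, 6000, 20)
signal = dw_signal(b, te=100.0,
                   fractions=[0.5, 0.3, 0.2],
                   adcs=[1.0e-3, 0.3e-3, 3.0e-3],
                   t2s=[80.0, 60.0, 500.0])
```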
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
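The linearized joint-confidence-region idea can be sketched as follows: the parameter covariance is approximated from the weighted Jacobian of the model evaluated at the fitted parameters. The routine below is generic and stands in for, rather than reproduces, the paper's four- and six-element impedance models.

```python
import numpy as np

def linearized_param_cov(jacobian, weights, residuals):
    """Approximate covariance of weighted-least-squares estimates.

    cov(theta) ~= s^2 * (J^T W J)^{-1}, with s^2 the weighted residual
    variance; `jacobian` is (n_data, n_params), `weights` is (n_data,).
    """
    J = np.asarray(jacobian)
    W = np.diag(np.asarray(weights))
    n, p = J.shape
    s2 = (residuals @ W @ residuals) / (n - p)     # weighted residual variance
    cov = s2 * np.linalg.inv(J.T @ W @ J)
    uncertainties = np.sqrt(np.diag(cov))          # 1-sigma parameter uncertainties
    return cov, uncertainties
```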
Classification Studies in an Advanced Air Classifier
NASA Astrophysics Data System (ADS)
Routray, Sunita; Bhima Rao, R.
2016-10-01
In the present paper, experiments are carried out using a VSK separator, which is an advanced air classifier, to recover heavy minerals from beach sand. In the classification experiments the cage wheel speed and the feed rate are set, and the material fed to the air cyclone is split into fine and coarse particles which are collected in separate bags. The size distribution of each fraction was measured by sieve analysis. A model is developed to predict the performance of the air classifier. The objective of the present model is to predict the grade efficiency curve for a given set of operating parameters such as cage wheel speed and feed rate. The overall experimental data with all variables studied in this investigation are fitted to several models, and the logistic model is found to give the best fit.
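A logistic grade-efficiency curve can be fitted to such classification data with a short routine like the following; the sieve sizes, recoveries, cut size d50 and sharpness k are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def grade_efficiency(d, d50, k):
    """Logistic grade-efficiency curve: fraction of size-d feed reporting to the coarse product."""
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

# Hypothetical sieve sizes (microns) and measured recoveries to the coarse product
d = np.array([45, 63, 90, 125, 180, 250, 355], dtype=float)
eff = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.98])

(d50, k), _ = curve_fit(grade_efficiency, d, eff, p0=[120.0, 0.02])
print(f"cut size d50 ~ {d50:.1f} um, sharpness k ~ {k:.3f}")
```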
Autoimmune control of lesion growth in CNS with minimal damage
NASA Astrophysics Data System (ADS)
Mathankumar, R.; Mohan, T. R. Krishna
2013-07-01
Lesions in the central nervous system (CNS) and their growth lead to debilitating diseases like Multiple Sclerosis (MS), Alzheimer's etc. We developed a model earlier [1, 2] which shows how the lesion growth can be arrested through a beneficial auto-immune mechanism. We compared some of the dynamical patterns in the model with different facets of MS. The success of the approach depends on a set of control parameters, and their phase space was shown to have a smooth manifold separating the uncontrolled lesion growth region from the controlled one. Here we show that an optimal set of parameter values exists in the model which minimizes system damage while, at once, achieving control of lesion growth.
2009-12-01
INHALATION TOXICOLOGY RESEARCH 2.1.1 Development of a Fatigue Model & Blood Oxygen-based Parameter Correlates. Liu et al. (2002) introduced a muscle ... and Stuhmiller, J.H. "Generalization of a 'phenomenological' muscle fatigue model." Technical report J0287-10-382 (in preparation). Product 3. Sih ... physiologic response to exercise and a model of muscle fatigue, which have been developed and validated separately, are integrated. Integration occurs through
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
NASA AVOSS Fast-Time Wake Prediction Models: User's Guide
NASA Technical Reports Server (NTRS)
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.
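The aircraft-dependent initial conditions mentioned above are commonly estimated from textbook elliptic-loading relations; the sketch below uses those generic relations and placeholder aircraft numbers, and does not reproduce the internals of the APA or TDP models.

```python
import math

def initial_vortex_parameters(mass_kg, wingspan_m, speed_mps, rho=1.225, g=9.81):
    """Elliptic-loading estimates of the initial wake-vortex state.

    b0: initial vortex-pair separation, gamma0: initial circulation,
    w0: initial descent velocity of the vortex pair.
    """
    b0 = math.pi * wingspan_m / 4.0
    gamma0 = mass_kg * g / (rho * speed_mps * b0)
    w0 = gamma0 / (2.0 * math.pi * b0)
    return b0, gamma0, w0

# Example for a hypothetical heavy transport on approach
b0, gamma0, w0 = initial_vortex_parameters(mass_kg=250000, wingspan_m=60.0, speed_mps=70.0)
```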
Modeling of salt and pH gradient elution in ion-exchange chromatography.
Schmidt, Michael; Hafner, Mathias; Frech, Christian
2014-01-01
The separation of proteins by internally and externally generated pH gradients in chromatofocusing on ion-exchange columns is a well-established analytical method with a large number of applications. In this work, a stoichiometric displacement model was used to describe the retention behavior of lysozyme on SP Sepharose FF and a monoclonal antibody on Fractogel SO3 (S) in linear salt and pH gradient elution. The pH dependence of the binding charge B in the linear gradient elution model is introduced using a protein net charge model, while the pH dependence of the equilibrium constant is based on a thermodynamic approach. The model parameters and their pH dependences are calculated from linear salt gradient elutions at different pH values as well as from linear pH gradient elutions at different fixed salt concentrations. The application of the model to the well-characterized protein lysozyme resulted in almost identical model parameters based on either linear salt or pH gradient elution data. For the antibody, only the approach based on linear pH gradients is feasible because of the limited pH range useful for salt gradient elution. The application of the model to the separation of an acid variant of the antibody from the major monomeric form is discussed. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
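For orientation, the isocratic form of the stoichiometric displacement relation that underlies such models is often written as log k = log K - B*log(c_salt); the small sketch below evaluates that relation only, with placeholder constants, and does not reproduce the paper's linear-gradient formulation or the pH dependence of B and K.

```python
import numpy as np

def sdm_retention_factor(c_salt, K, B):
    """Stoichiometric displacement model: log k = log K - B * log c_salt."""
    return K * np.asarray(c_salt, dtype=float) ** (-B)

# Placeholder equilibrium constant K and binding charge B for a single pH
c_salt = np.linspace(0.05, 0.5, 10)          # counter-ion concentration, mol/L
k = sdm_retention_factor(c_salt, K=2.0e-3, B=4.5)
```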
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid
2017-08-01
Real-time prediction of the battery's core temperature and terminal voltage is very crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, heat generation model, and thermal model which are coupled together in an iterative fashion through physicochemical temperature dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates, and temperature ranges [-25 °C to 45 °C].
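The iterative coupling described above can be sketched structurally as a fixed-point loop over the core temperature; the three sub-models are left as placeholder callables, so this shows only the assumed coupling pattern, not the actual electrochemical, heat-generation or thermal equations of the paper.

```python
def coupled_electro_thermal_step(i_app, dt, T, params,
                                 electrochemical_model, heat_generation, thermal_model,
                                 tol=1e-3, max_iter=20):
    """One time step of an iteratively coupled electro-thermal cell model (structural sketch).

    Each placeholder sub-model consumes temperature-dependent parameters, and
    the loop repeats until the predicted core temperature stops changing.
    """
    v_t = None
    for _ in range(max_iter):
        v_t, states = electrochemical_model(i_app, dt, T, params)   # terminal voltage
        q_gen = heat_generation(i_app, v_t, T, states, params)      # heat source [W]
        T_new = thermal_model(q_gen, dt, T, params)                 # core temperature
        converged = abs(T_new - T) < tol
        T = T_new
        if converged:
            break
    return v_t, T
```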
Single-hole spectral function and spin-charge separation in the t-J model
NASA Astrophysics Data System (ADS)
Mishchenko, A. S.; Prokof'ev, N. V.; Svistunov, B. V.
2001-07-01
Worm algorithm Monte Carlo simulations of the hole Green function with subsequent spectral analysis were performed for 0.1<=J/t<=0.4 on lattices with up to L×L=32×32 sites at a temperature as low as T=J/40, and present, apparently, the hole spectral function in the thermodynamic limit. Spectral analysis reveals a δ-function-sharp quasiparticle peak at the lower edge of the spectrum that is incompatible with the power-law singularity and thus rules out the possibility of spin-charge separation in this parameter range. The spectral continuum features two peaks separated by a gap of ~4-5 t.
NASA Astrophysics Data System (ADS)
Sheridan, T. E.
2009-12-01
A model of a dusty plasma (Yukawa) ring is presented. We consider n identical particles confined in a two-dimensional (2D) annular potential well and interacting through a Debye (i.e. Yukawa or screened Coulomb) potential. Equilibrium configurations are computed versus n, the Debye shielding parameter and the trap radius. When the particle separation exceeds a critical value the particles form a 1D chain with a ring topology. Below the critical separation the zigzag instability gives a 2D configuration. Computed critical separations are shown to agree well with a theoretical prediction for the zigzag threshold. Normal mode spectra for 1D rings are computed and found to be in excellent agreement with the longitudinal and transverse dispersion relations for unbounded straight chains. When the longitudinal and transverse dispersion relations intersect we observe a resonance due to the finite curvature of the ring.
Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen
2017-01-01
Purpose: To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Methods and Materials: The clinical data of 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies were collected for SBRT of NSCLC in the literature. The TCP data were separated for Stage T1 and T2 tumors if possible, otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ2) method with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results: The fits to the clinical data yield consistent results of large α/β ratios of about 20 Gy for all models investigated. The regrowth model that accounts for tumor repopulation and heterogeneity leads to a better fit to the data, compared to the other five models, for which the fits were indistinguishable. The models based on the fitted parameters predict that T2 tumors require about an additional 1 Gy of physical dose at the isocenter per fraction (≤5 fractions) to achieve the optimal TCP compared to T1 tumors. Conclusion: This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC has a strong dependence on BED with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
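The BED calculation that underlies the dose-response analysis is the standard linear-quadratic expression BED = n*d*(1 + d/(α/β)); the short check below uses the α/β of about 20 Gy reported in the abstract and an illustrative fractionation scheme that is not taken from the pooled data.

```python
def biologically_effective_dose(n_fractions, dose_per_fraction, alpha_beta=20.0):
    """Linear-quadratic BED = n*d*(1 + d/(alpha/beta)); alpha/beta ~ 20 Gy per the abstract."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Example: a common 3 x 18 Gy SBRT scheme (illustrative, not from the pooled data)
bed = biologically_effective_dose(3, 18.0)   # = 102.6 Gy with alpha/beta = 20
print(f"BED20 = {bed:.1f} Gy, above the ~90 Gy threshold quoted for TCP >= 95%")
```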
Whang, Min Cheol; Lim, Joa Sang; Boucsein, Wolfram
Despite rapid advances in technology, computers remain incapable of responding to human emotions. An exploratory study was conducted to find out what physiological parameters might be useful to differentiate among 4 emotional states, based on 2 dimensions: pleasantness versus unpleasantness and arousal versus relaxation. The 4 emotions were induced by exposing 26 undergraduate students to different combinations of olfactory and auditory stimuli, selected in a pretest from 12 stimuli by subjective ratings of arousal and valence. Changes in electroencephalographic (EEG), heart rate variability, and electrodermal measures were used to differentiate the 4 emotions. EEG activity separates pleasantness from unpleasantness only in the aroused but not in the relaxed domain, where electrodermal parameters are the differentiating ones. All three classes of parameters contribute to a separation between arousal and relaxation in the positive valence domain, whereas the latency of the electrodermal response is the only differentiating parameter in the negative domain. We discuss how such a psychophysiological approach may be incorporated into a systemic model of a computer responsive to affective communication from the user.
Dynamic behavior of the interaction between epidemics and cascades on heterogeneous networks
NASA Astrophysics Data System (ADS)
Jiang, Lurong; Jin, Xinyu; Xia, Yongxiang; Ouyang, Bo; Wu, Duanpo
2014-12-01
Epidemic spreading and cascading failure are two important dynamical processes on complex networks. They have been investigated separately for a long time, but in the real world these two dynamics sometimes interact with each other. In this paper, we explore a model combining the SIR epidemic spreading model and a local load-sharing cascading failure model. There exists a critical value of the tolerance parameter for which an epidemic with high infection probability can spread out and infect a fraction of the network in this model. When the tolerance parameter is smaller than the critical value, the cascading failure cuts off an abundance of paths and blocks the spreading of the epidemic locally. When the tolerance parameter is larger than the critical value, the epidemic spreads out and infects a fraction of the network. A method for estimating the critical value is proposed. In simulations, we verify the effectiveness of this method on uncorrelated configuration model (UCM) scale-free networks.
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
The behavior of complex aerospace systems is governed by numerous parameters. For safety analysis it is important to understand how the system behaves with respect to these parameter values. In particular, understanding the boundaries between safe and unsafe regions is of major importance. In this paper, we describe a hierarchical Bayesian statistical modeling approach for the online detection and characterization of such boundaries. Our method for classification with active learning uses a particle filter-based model and a boundary-aware metric for best performance. From a library of candidate shapes incorporated with domain expert knowledge, the location and parameters of the boundaries are estimated using advanced Bayesian modeling techniques. The results of our boundary analysis are then provided in a form understandable by the domain expert. We illustrate our approach using a simulation model of a NASA neuro-adaptive flight control system, as well as a system for the detection of separation violations in the terminal airspace.
An experimental and modeling study of isothermal charge/discharge behavior of commercial Ni-MH cells
NASA Astrophysics Data System (ADS)
Pan, Y. H.; Srinivasan, V.; Wang, C. Y.
In this study, a previously developed nickel-metal hydride (Ni-MH) battery model is applied in conjunction with experimental characterization. Important geometric parameters, including the active surface area and micro-diffusion length for both electrodes, are measured and incorporated in the model. The kinetic parameters of the oxygen evolution reaction are also characterized using constant potential experiments. Two separate equilibrium equations for the Ni electrode, one for charge and the other for discharge, are determined to provide a better description of the electrode hysteresis effect, and their use results in better agreement of simulation results with experimental data on both charge and discharge. The Ni electrode kinetic parameters are re-calibrated for the battery studied. The Ni-MH cell model coupled with the updated electrochemical properties is then used to simulate a wide range of experimental discharge and charge curves with satisfactory agreement. The experimentally validated model is used to predict and compare various charge algorithms so as to provide guidelines for application-specific optimization.
NASA Technical Reports Server (NTRS)
Wittmer, Kenneth S.; Devenport, William J.
1996-01-01
The perpendicular interaction of a streamwise vortex with an infinite span helicopter blade was modeled experimentally in incompressible flow. Three-component velocity and turbulence measurements were made using a sub-miniature four sensor hot-wire probe. Vortex core parameters (radius, peak tangential velocity, circulation, and centerline axial velocity deficit) were determined as functions of blade-vortex separation, streamwise position, blade angle of attack, vortex strength, and vortex size. The downstream development of the flow shows that the interaction of the vortex with the blade wake is the primary cause of the changes in the core parameters. The blade sheds negative vorticity into its wake as a result of the induced angle of attack generated by the passing vortex. Instability in the vortex core due to its interaction with this negative vorticity region appears to be the catalyst for the magnification of the size and intensity of the turbulent flowfield downstream of the interaction. In general, the core radius increases while peak tangential velocity decreases with the effect being greater for smaller separations. These effects are largely independent of blade angle of attack; and if these parameters are normalized on their undisturbed values, then the effects of the vortex strength appear much weaker. Two theoretical models were developed to aid in extending the results to other flow conditions. An empirical model was developed for core parameter prediction which has some rudimentary physical basis, implying usefulness beyond a simple curve fit. An inviscid flow model was also created to estimate the vorticity shed by the interaction blade, and to predict the early stages of its incorporation into the interacting vortex.
The NASA Low-Pressure Turbine Flow Physics Program
NASA Technical Reports Server (NTRS)
Ashpis, David E.
1998-01-01
An overview of the NASA Lewis Low-Pressure Turbine (LPT) Flow Physics Program will be presented. The program was established in response to the aero-engine industry's need for improved LPT efficiency and designs. Modern jet engines have four to seven LPT stages, significantly contributing to engine weight. In addition, there is a significant efficiency degradation between takeoff and cruise conditions, of up to 2 points. Reducing the weight and part count of the LPT and minimizing the efficiency degradation will translate into fuel savings. Accurate prediction methods of LPT flows and losses are needed to accomplish those improvements. The flow in LPT passages is at low Reynolds number, and is dominated by the interplay of three basic mechanisms: transition, separation and wake interaction. The affecting parameters traditionally considered are Reynolds number, freestream turbulence intensity, wake frequency parameter, and the pressure distribution (loading). Three-dimensional effects and additional parameters, particularly turbulence characteristics like length scales, spectra and other statistics, as well as wake turbulence intensity and properties, also play a role. The flow of most interest is on the suction surface, where large losses are generated as the flow tends to separate at the low Reynolds numbers. Ignoring wakes, a common flow scenario, there is laminar separation, followed by transition on the separation bubble and turbulent reattachment. If transition starts earlier, the separation will be eliminated and the boundary layer will remain attached, leading to the well-known bypass transition issues. In contrast, transition over a separation bubble is closer to free shear layer transition and has not been investigated as thoroughly, particularly in the turbine environment. Unsteadiness created by wakes complicates the picture. Wakes induce earlier transition, and the calmed regions trailing the induced turbulent spots can delay or eliminate separation via shear stress modification. Three-dimensional flow physics and geometry will have strong effects. Altogether, a very complex and challenging problem emerges. The objective of the program is to provide improved models and physical understanding of the complex flow, which are essential for accurate prediction of flow and losses in the LPT. Experimental, computational and analytical work are used as complementary and augmenting approaches. The program involves industry, universities and research institutes, and other government laboratories. It is characterized by strong interaction among participants, quick dissemination of results, and responsiveness to industry's needs. The presentation will describe the work elements. Activities in progress include experiments on a simulated blade suction surface in low-speed wind tunnels, on a curved wall and on a flat plate, both with pressure gradient. In the area of computation, assessment of existing models is performed using RANS (Reynolds Averaged Navier Stokes) simulations. Laminar flow DNS was completed. Analytical studies of instability and receptivity in attached and separated flows were started. In the near future the program is moving to include wake effects and the development of improved modeling. Experimental work in the preparation stage includes: (1) addition of wakes to the curved tunnel experiment; (2) a low-speed rotating rig experiment on the GE90 engine LPT; and (3) a transonic cascade. In the area of computation, it is expected to move from model assessment towards development of improved models.
In addition, a new project of Large Eddy Simulation (LES) of LPT is to begin and will provide numerical data bases. It is planned to implement the emerging improved models in a multistage turbomachinery code and to validate against the GE90 engine LPT.
NASA Astrophysics Data System (ADS)
Jensen, Kristoffer
2002-11-01
A timbre model is proposed for use in multiple applications. This model, which encompasses all voiced isolated musical instruments, has an intuitive parameter set, a fixed size, and separates the sounds into dimensions akin to the timbre dimensions proposed in timbre research. The analysis of the model parameters is fully documented, and a method is proposed, in particular, for the estimation of the difficult decay/release split-point. The main parameters of the model are the spectral envelope, the attack/release durations and relative amplitudes, and the inharmonicity and the shimmer and jitter (which provide both for the slow random variations of the frequencies and amplitudes, and also for additive noises). Some of the applications include synthesis, where a real-time application with an intuitive GUI is being developed, classification and search of sounds based on their content, and a further understanding of acoustic musical instrument behavior. In order to present the background of the model, this presentation will start with sinusoidal A/S and some timbre perception research, then present the timbre model, show its validity for individual music instrument sounds, and finally introduce some expression additions to the model.
Guo, Yanyong; Li, Zhibin; Wu, Yao; Xu, Chengcheng
2018-06-01
Bicyclists running the red light at crossing facilities increase the potential of colliding with motor vehicles. Exploring the contributing factors could improve the prediction of running red-light probability and develop countermeasures to reduce such behaviors. However, individuals could have unobserved heterogeneities in running a red light, which make the accurate prediction more challenging. Traditional models assume that factor parameters are fixed and cannot capture the varying impacts on red-light running behaviors. In this study, we employed the full Bayesian random parameters logistic regression approach to account for the unobserved heterogeneous effects. Two types of crossing facilities were considered which were the signalized intersection crosswalks and the road segment crosswalks. Electric and conventional bikes were distinguished in the modeling. Data were collected from 16 crosswalks in urban area of Nanjing, China. Factors such as individual characteristics, road geometric design, environmental features, and traffic variables were examined. Model comparison indicates that the full Bayesian random parameters logistic regression approach is statistically superior to the standard logistic regression model. More red-light runners are predicted at signalized intersection crosswalks than at road segment crosswalks. Factors affecting red-light running behaviors are gender, age, bike type, road width, presence of raised median, separation width, signal type, green ratio, bike and vehicle volume, and average vehicle speed. Factors associated with the unobserved heterogeneity are gender, bike type, signal type, separation width, and bike volume. Copyright © 2018 Elsevier Ltd. All rights reserved.
Computational assessment of model-based wave separation using a database of virtual subjects.
Hametner, Bernhard; Schneider, Magdalena; Parragh, Stephanie; Wassertheurer, Siegfried
2017-11-07
The quantification of arterial wave reflection is an important area of interest in arterial pulse wave analysis. It can be achieved by wave separation analysis (WSA) if both the aortic pressure waveform and the aortic flow waveform are known. For better applicability, several mathematical models have been established to estimate aortic flow solely based on pressure waveforms. The aim of this study is to investigate and verify the model-based wave separation of the ARCSolver method on virtual pulse wave measurements. The study is based on an open access virtual database generated via simulations. Seven cardiac and arterial parameters were varied within physiological healthy ranges, leading to a total of 3325 virtual healthy subjects. For assessing the model-based ARCSolver method computationally, this method was used to perform WSA based on the aortic root pressure waveforms of the virtual patients. As a reference, the values of WSA using both the pressure and flow waveforms provided by the virtual database were taken. The investigated parameters showed a good overall agreement between the model-based method and the reference. Mean differences and standard deviations were -0.05±0.02 AU for characteristic impedance, -3.93±1.79 mmHg for forward pressure amplitude, 1.37±1.56 mmHg for backward pressure amplitude and 12.42±4.88% for reflection magnitude. The results indicate that the mathematical blood flow model of the ARCSolver method is a feasible surrogate for a measured flow waveform and provides a reasonable way to assess arterial wave reflection non-invasively in healthy subjects. Copyright © 2017 Elsevier Ltd. All rights reserved.
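Once pressure and flow waveforms and the characteristic impedance are available, the classical wave separation step itself is a simple linear decomposition; the sketch below shows only that step, not the ARCSolver flow model, and the amplitude-ratio definition of reflection magnitude is the conventional one rather than necessarily the paper's exact implementation.

```python
import numpy as np

def wave_separation(p, q, zc):
    """Classical wave separation analysis.

    Forward and backward pressure components from pressure p and flow q
    sampled over one beat, given the characteristic impedance zc.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p_f = 0.5 * (p + zc * q)           # forward-travelling pressure component
    p_b = 0.5 * (p - zc * q)           # backward-travelling pressure component
    rm = (p_b.max() - p_b.min()) / (p_f.max() - p_f.min())   # reflection magnitude
    return p_f, p_b, rm
```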
NASA Astrophysics Data System (ADS)
You, Gexin; Liu, Xinsen; Chen, Xiri; Yang, Bo; Zhou, Xiuwen
2018-06-01
In this study, a two-element model consisting of a non-linear spring and a viscous dashpot was proposed to simulate the tensile curve of polyurethane fibers. The results showed that the two-element model simulates the tensile curve of the polyurethane fibers better than the existing three-element and four-element models, while remaining simple and easy to apply. The effects of the isocyanate index (R) on hydrogen bonding (H-bonds) and the micro-phase separation of polyurethane fibers were investigated by Fourier transform infrared spectroscopy and an x-ray pyrometer, respectively. The degree of H-bonding and micro-phase separation increased first and then decreased as the R value increased, reaching a maximum at an R value of 1.76, which is in good agreement with the model parameters, the viscosity coefficient η and the initial modulus c.
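One common reading of "a non-linear spring in parallel with a viscous dashpot" is a Kelvin-Voigt-type law with a power-law spring, σ = c·ε^n + η·ε̇; whether this matches the paper's exact formulation is an assumption, and the constants below are placeholders chosen only to illustrate the shape of a constant-rate tensile curve.

```python
import numpy as np

def two_element_stress(strain, strain_rate, c, n, eta):
    """Assumed form: nonlinear (power-law) spring in parallel with a viscous dashpot."""
    return c * np.power(strain, n) + eta * strain_rate

# Constant-rate tensile test: strain ramps linearly in time (all values hypothetical)
t = np.linspace(0.0, 10.0, 200)
rate = 0.05                          # 1/s, placeholder strain rate
strain = rate * t
stress = two_element_stress(strain, rate, c=2.5, n=0.6, eta=8.0)
```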
NASA Astrophysics Data System (ADS)
Pisarev, Gleb I.; Hoffmann, Alex C.
2011-09-01
This paper compares CFD simulations of the `end of the vortex' (EoV) behaviour in centrifugal separators with experiment. The EoV was studied in `swirl tubes', cylindrical cyclone separators with swirl vanes. We refer to the EoV as the phenomenon whereby the core of the vortex does not reach the bottom of the separator, but deviates from the swirl tube axis and attaches to the wall, where it rotates at some level above the bottom. The crucial parameters governing the EoV are geometrical, specifically the ratio of the separator length to its diameter (L/D), and operational, specifically the fluid flowrate. Swirl tubes with varying body lengths have been studied experimentally and numerically. CFD simulations were carried out using the commercial package Star-CD. The 3-D Navier-Stokes equations were solved using the finite volume method based on the SIMPLE pressure-correction algorithm and the LES turbulence model. The vortex behaviour was very similar between the experiments and the numerical simulations, this agreement being both qualitative and quantitative. However, there were some cases where the CFD predictions showed only qualitative agreement with experiments, with some of the parameter-values delimiting given types of flows being somewhat different between experiment and simulations.
Liang, Zhenwei; Li, Yaoming; Zhao, Zhan; Xu, Lizhang
2015-01-01
Grain separation loss is a key parameter for evaluating the performance of combine harvesters, and also a dominant factor for automatically adjusting their major working parameters. Traditional separation loss monitoring methods mainly rely on manual effort, which requires high labor intensity. With recent advancements in sensor technology, electronics and computational processing power, this paper presents an indirect method for monitoring grain separation losses in tangential-axial combine harvesters in real-time. Firstly, we developed a mathematical monitoring model based on a detailed comparative analysis of data for different feeding quantities. Then, we developed a grain impact piezoelectric sensor utilizing a YT-5 piezoelectric ceramic as the sensing element, with a signal processing circuit designed according to differences in voltage amplitude and rise time of collision signals. To improve the sensor performance, theoretical analysis was performed from a structural vibration point of view, and the optimal sensor structure was selected. Grain collision experiments showed that the sensor performance was greatly improved. Finally, we installed the sensor on a tangential-longitudinal axial combine harvester, and grain separation loss monitoring experiments were carried out in North China; the results showed that the monitoring method is feasible, with a largest relative measurement error of 4.63% when harvesting rice. PMID:25594592
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
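The polynomial-interpolation idea can be illustrated by fitting a low-order polynomial to the recent along-path track history and extrapolating it, compared against constant-speed dead reckoning; the track data, look-ahead time and polynomial degree below are hypothetical, and the paper's wind correction, Fourier-transform model and aircraft-type handling are not reproduced.

```python
import numpy as np

def predict_along_path(times, positions, lookahead, degree=2):
    """Extrapolate along-path position with a low-order polynomial fit to recent track data."""
    coeffs = np.polyfit(times, positions, degree)
    return np.polyval(coeffs, times[-1] + lookahead)

def dead_reckoning(times, positions, lookahead):
    """Baseline: constant ground speed from the last two track points."""
    speed = (positions[-1] - positions[-2]) / (times[-1] - times[-2])
    return positions[-1] + speed * lookahead

# Hypothetical decelerating final-approach track (seconds, nautical miles to threshold)
t = np.array([0, 12, 24, 36, 48], dtype=float)
s = np.array([8.0, 6.9, 5.9, 5.0, 4.2])
print(predict_along_path(t, s, lookahead=60), dead_reckoning(t, s, lookahead=60))
```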
1D Cole-Cole inversion of TEM transients influenced by induced polarization
NASA Astrophysics Data System (ADS)
Seidel, Marc; Tezkan, Bülent
2017-03-01
Effects of induced polarization (IP) can have an impact on time-domain electromagnetic measurements (TEM) and may lead to sign reversals in the recorded transients. To study these IP effects on TEM data, a new 1D inversion algorithm was developed for both the central-loop and separate-loop TEM configurations using the Cole-Cole relaxation model. 1D forward calculations for a homogeneous half-space were conducted with the aim of analyzing the impact of the Cole-Cole parameters on TEM transients with respect to possible sign reversals. The forward modelling showed that variations of different parameters have comparable effects on the TEM transients. This leads to an increasing number of equivalent models as a result of inversion calculations. Subsequently, 1D inversions of synthetic data were performed to study the potential and limitations of the algorithm regarding the resolution of the Cole-Cole parameters. In order to achieve optimal inversion results, it was essential to error-weight the data points in the direct vicinity of sign reversals. The findings were then applied to the inversion of real field data that contained considerable IP signatures such as sign reversals. One field data set was recorded at the Nakyn kimberlite field in Western Yakutiya, Russia, in the central-loop configuration. Another field data set originates from a waste site in Cologne, Germany, and was measured utilizing the separate-loop configuration.
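For reference, the Cole-Cole complex resistivity model used in such IP studies can be written as rho(omega) = rho0 * (1 - m * (1 - 1/(1 + (i*omega*tau)^c))); a minimal numerical sketch with illustrative parameter values follows.

```python
import numpy as np

# Minimal sketch of the Cole-Cole complex resistivity model:
# rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (1j*omega*tau)**c)))
# with chargeability m, time constant tau and frequency exponent c.
def cole_cole(omega, rho0, m, tau, c):
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

omega = np.logspace(-2, 4, 7)                 # angular frequencies in rad/s
print(cole_cole(omega, rho0=100.0, m=0.5, tau=1e-2, c=0.6))   # illustrative values
```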
NASA Astrophysics Data System (ADS)
Rafiee, Seyed Ehsan; Sadeghiazad, M. M.
2016-06-01
Air separators provide safe, clean, and appropriate air flow to engines and are widely used in vehicles with large engines such as ships and submarines. In this operational study, the separation process inside a Ranque-Hilsch vortex tube cleaning (cooling) system is investigated to analyze the impact of the operating gas type on the vortex tube performance; the operating gases used are air, nitrogen, oxygen, carbon dioxide and nitrogen dioxide. The computational fluid dynamics model is three-dimensional, and steady-state conditions are applied during the computations. The standard k-ɛ turbulence model is employed to resolve the nonlinear flow equations, and various key parameters, such as hot and cold exhaust thermal drops and power separation rates, are described numerically. The results show that nitrogen dioxide creates the greatest separation power of all gases tested, and the numerical results are validated by good agreement with available experimental data. In addition, a comparison is made between the use of two different boundary conditions, the pressure-far-field and the pressure-outlet, when analyzing complex turbulent flows inside the air separators. The results present a comprehensive and practical solution for use in future numerical studies.
NASA Astrophysics Data System (ADS)
Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu
2017-10-01
Elastic-wave reverse-time migration of inhomogeneous anisotropic media has become a research hotspot. In order to ensure the accuracy of the migration, it is necessary to separate the wavefield into P-wave and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational complexity, mixed-domain wave-mode separation can be performed on the basis of a reference model in the wave-number domain, but conventional interpolation methods and reference-model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance weighting (IDW) interpolation method that accounts for the positions of the reference points and uses a random-points scheme for reference-model selection. The method adds to the conventional IDW algorithm a spatial weight coefficient K that reflects the orientation of the reference point, so that the interpolation takes into account the combined effects of the distance and azimuth of the reference points. Numerical simulation shows that the proposed method can separate the wave modes more accurately using fewer reference models and has better practical value.
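A hedged sketch of distance-plus-direction weighted interpolation is given below; because the exact azimuth-dependent form of the coefficient K is not specified here, K is supplied by the caller, and all values are illustrative.

```python
import numpy as np

# Hedged sketch of inverse-distance weighting with an additional directional
# coefficient K per reference point.  The exact azimuth-dependent definition of
# K used in the study is not reproduced; K is passed in by the caller.
def idw_with_direction(query, ref_points, ref_values, K, power=2.0, eps=1e-12):
    dists = np.linalg.norm(ref_points - np.asarray(query, float), axis=1) + eps
    weights = np.asarray(K, float) / dists ** power     # distance and azimuth combined
    return float(weights @ ref_values / weights.sum())

refs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0])
print(idw_with_direction([0.4, 0.4], refs, vals, K=[1.0, 1.0, 1.0]))  # plain IDW
print(idw_with_direction([0.4, 0.4], refs, vals, K=[1.0, 0.5, 0.5]))  # direction-weighted
```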
Damping behavior of nano-fibrous composites with viscous interface in anti-plane shear
NASA Astrophysics Data System (ADS)
Wang, Xu
2017-06-01
By using the composite cylinder assemblage model, we derive an explicit expression of the specific damping capacity of nano-fibrous composite with viscous interface when subjected to time-harmonic anti-plane shear loads. The fiber and the matrix are first endowed with separate and distinct Gurtin-Murdoch surface elasticities, and rate-dependent sliding occurs on the fiber-matrix interface. Our analysis indicates that the effective damping of the composite depends on five dimensionless parameters: the fiber volume fraction, the stiffness ratio, two parameters arising from surface elasticity and one parameter due to interface sliding.
The Shock and Vibration Digest. Volume 18, Number 8
1986-08-01
[Fragmentary abstract excerpt, garbled in extraction. Recoverable fragments mention active vibration reduction of the swash plate by separation of the control system, structure-borne sound intensity measurements on thin-plate constructions, good agreement with a finite element program model, and two displacement-controlled laboratory tests used for the determination of the model parameters. Abstract number 86-1532.]
Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.
1994-01-01
A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of patches, however, change the geometry and material properties of the structure and involve unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.
Stepwise calibration procedure for regional coupled hydrological-hydrogeological models
NASA Astrophysics Data System (ADS)
Labarthe, Baptiste; Abasq, Lena; de Fouquet, Chantal; Flipo, Nicolas
2014-05-01
Stream-aquifer interaction is a complex process depending on regional and local processes. Indeed, the groundwater component of the hydrosystem and large-scale heterogeneities control the regional flows towards the alluvial plains and the rivers. In turn, the local distribution of the stream-bed permeabilities controls the dynamics of stream-aquifer water fluxes within the alluvial plain, and therefore the near-river piezometric head distribution. In order to better understand water circulation and pollutant transport in watersheds, these multi-dimensional processes have to be integrated into a modelling platform. Thus, the nested-interfaces concept in continental hydrosystem modelling (where regional fluxes, simulated by large-scale models, are imposed at local stream-aquifer interfaces) has been presented in Flipo et al. (2014). This concept has been implemented in the EauDyssée modelling platform for a large alluvial plain model (900 km2) that is part of an 11000 km2 multi-layer aquifer system located in the Seine basin (France). The hydrosystem modelling platform is composed of four spatially distributed modules (Surface, Sub-surface, River and Groundwater), corresponding to four components of the terrestrial water cycle. Considering the large number of parameters to be inferred simultaneously, the calibration of coupled models is highly computationally demanding and therefore hardly applicable to a real case study of 10000 km2. In order to improve the efficiency of the calibration, a stepwise calibration procedure is proposed. The stepwise methodology involves determining optimal parameters for all components of the coupled model, to provide near-optimal prior information for the global calibration. It starts with the calibration of the surface component parameters. The surface parameters are optimised based on the comparison between simulated and observed discharges (or filtered discharges) at various locations. Once the surface parameters have been determined, the groundwater component is calibrated. This calibration is performed under a steady-state hypothesis (to minimize the procedure time length) using recharge rates given by the surface component calibration and imposed-flux boundary conditions given by the regional model. The calibration uses pilot points, with the prior variogram calculated from observed transmissivity values. The procedure uses PEST (http://www.pesthomepage.org/Home.php) as the inverse modelling tool and EauDyssée as the direct model. During the stepwise calibration process each module, although the modules are actually interdependent, is run and calibrated independently; the contributions exchanged between modules therefore have to be determined. For the surface module, the groundwater and runoff contributions have been determined by hydrograph separation. Among the automated base-flow separation methods, the one-parameter Chapman filter (Chapman, 1999) has been chosen. This filter decomposes the actual base-flow into a contribution from the previous base-flow and one from the discharge gradient, weighted by functions of the recession coefficient. For the groundwater module, the recharge has been determined from the surface and sub-surface modules. References: Flipo, N., A. Mourhi, B. Labarthe, and S. Biancamaria (2014). Continental hydrosystem modelling: the concept of nested stream-aquifer interfaces. Hydrol. Earth Syst. Sci. Discuss. 11, 451-500. Chapman, T.G. (1999). A comparison of algorithms for stream flow recession and base-flow separation. Hydrological Processes 13, 701-714.
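For reference, one common form of the one-parameter base-flow filter in the Chapman family is b[i] = k/(2-k) * b[i-1] + (1-k)/(2-k) * q[i], constrained by b[i] <= q[i]; the sketch below applies it to a synthetic daily discharge series. The recession coefficient and data are illustrative, and the exact formulation used in the study is that of Chapman (1999).

```python
import numpy as np

# Hedged sketch of a one-parameter base-flow filter in the Chapman family.
def one_parameter_baseflow(q, k=0.95):
    q = np.asarray(q, float)
    b = np.zeros_like(q)
    b[0] = q[0]
    for i in range(1, q.size):
        b[i] = min(k / (2.0 - k) * b[i - 1] + (1.0 - k) / (2.0 - k) * q[i], q[i])
    return b

q = np.array([5.0, 20.0, 60.0, 35.0, 18.0, 10.0, 7.0, 6.0])  # synthetic daily discharge
print(one_parameter_baseflow(q))     # base-flow component; quick flow = q - b
```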
Dynamo onset as a first-order transition: lessons from a shell model for magnetohydrodynamics.
Sahoo, Ganapati; Mitra, Dhrubaditya; Pandit, Rahul
2010-03-01
We carry out systematic and high-resolution studies of dynamo action in a shell model for magnetohydrodynamic (MHD) turbulence over wide ranges of the magnetic Prandtl number PrM and the magnetic Reynolds number ReM. Our study suggests that it is natural to think of dynamo onset as a nonequilibrium first-order phase transition between two different turbulent, but statistically steady, states. The ratio of the magnetic and kinetic energies is a convenient order parameter for this transition. By using this order parameter, we obtain the stability diagram (or nonequilibrium phase diagram) for dynamo formation in our MHD shell model in the (PrM-1,ReM) plane. The dynamo boundary, which separates dynamo and no-dynamo regions, appears to have a fractal character. We obtain a hysteretic behavior of the order parameter across this boundary and suggestions of nucleation-type phenomena.
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
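A minimal, hypothetical sketch of the workflow described above, using scikit-learn's PLS regression on synthetic covariates and flagging predicted exceedances of an assumed decision threshold; the variables, threshold, and component count are illustrative, not the study's calibrated beach models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustrative sketch only: predict a (synthetic) log FIB concentration from
# correlated environmental covariates with PLS, then flag predicted exceedances.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # e.g. turbidity, rainfall, wave height, ...
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=200)

pls = PLSRegression(n_components=2).fit(X[:150], y[:150])   # fit on a training subset
y_hat = pls.predict(X[150:]).ravel()                        # predict held-out samples

decision_threshold = 1.0                      # tuning parameter separating exceedances
print((y_hat > decision_threshold).sum(), "predicted exceedances")
```

In the paper's framing, model selection reduces to choosing the decision threshold (and, implicitly, the number of PLS components), rather than hand-picking covariates.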
Adaptive Modal Identification for Flutter Suppression Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.
2016-01-01
In this paper, we will develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation will achieve parameter convergence in the presence of persistent excitation whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method. The least-squares modal identification is used to perform parameter estimation.
Groundwater flow simulation of the Savannah River Site general separations area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.; Bagwell, L.; Bennett, P.
The most recent groundwater flow model of the General Separations Area, Savannah River Site, is referred to as the “GSA/PORFLOW” model. GSA/PORFLOW was developed in 2004 by porting an existing General Separations Area groundwater flow model from the FACT code to the PORFLOW code. The preceding “GSA/FACT” model was developed in 1997 using characterization and monitoring data through the mid-1990’s. Both models were manually calibrated to field data. Significantly more field data have been acquired since the 1990’s and model calibration using mathematical optimization software has become routine and recommended practice. The current task involved updating the GSA/PORFLOW model using selected field data current through at least 2015, and use of the PEST code to calibrate the model and quantify parameter uncertainty. This new GSA groundwater flow model is named “GSA2016” in reference to the year in which most development occurred. The GSA2016 model update is intended to address issues raised by the DOE Low-Level Waste (LLW) Disposal Facility Federal Review Group (LFRG) in a 2008 review of the E-Area Performance Assessment, and by the Nuclear Regulatory Commission in reviews of tank closure and Saltstone Disposal Facility Performance Assessments.
NASA Astrophysics Data System (ADS)
Brehme, Marc; Koschmieder, Steffen; Montazeri, Maryam; Copland, Mhairi; Oehler, Vivian G.; Radich, Jerald P.; Brümmendorf, Tim H.; Schuppert, Andreas
2016-04-01
Modelling the parameters of multistep carcinogenesis is key for a better understanding of cancer progression, biomarker identification and the design of individualized therapies. Using chronic myeloid leukemia (CML) as a paradigm for hierarchical disease evolution we show that combined population dynamic modelling and CML patient biopsy genomic analysis enables patient stratification at unprecedented resolution. Linking CD34+ similarity as a disease progression marker to patient-derived gene expression entropy separated established CML progression stages and uncovered additional heterogeneity within disease stages. Importantly, our patient data informed model enables quantitative approximation of individual patients’ disease history within chronic phase (CP) and significantly separates “early” from “late” CP. Our findings provide a novel rationale for personalized and genome-informed disease progression risk assessment that is independent and complementary to conventional measures of CML disease burden and prognosis.
A model for the formation of the Local Group
NASA Technical Reports Server (NTRS)
Peebles, P. J. E.; Melott, A. L.; Holmes, M. R.; Jiang, L. R.
1989-01-01
Observational tests of a model for the formation of the Local Group are presented and analyzed in which the mass concentration grows by gravitational accretion of local-pressure matter onto two seed masses in an otherwise homogeneous initial mass distribution. The evolution of the mass distribution is studied in an analytic approximation and a numerical computation. The initial seed mass and separation are adjusted to produce the observed present separation and relative velocity of the Andromeda Nebula and the Galaxy. If H(0) is adjusted to about 80 km/s/Mpc with density parameter Omega = 1, then the model gives a good fit to the motions of the outer members of the Local Group. The same model gives particle orbits at radius of about 100 kpc that reasonably approximate the observed distribution of redshifts of the Galactic satellites.
Comparing methods for Earthquake Location
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin
2017-04-01
There are many methods available for locating small-magnitude point-source earthquakes. However, it is known that these different approaches produce different results. For each approach, results also depend on a number of parameters which can be separated into two main branches: (1) parameters related to the observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location, etc.). Currently, the results obtained from most location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain knowledge of fault geometry and seismotectonic processes and, ultimately, to improve seismic hazard assessment. In this work, carried out in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake location (e.g. type of phase, maximum hypocentral separation, etc.). We compare several available codes (Hypo71, HypoDD, NonLinLoc, etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).
Mobile application MDDCS for modeling the expansion dynamics of a dislocation loop in FCC metals
NASA Astrophysics Data System (ADS)
Kirilyuk, Vasiliy; Petelin, Alexander; Eliseev, Andrey
2017-11-01
A mobile version of the software package Dynamic Dislocation of Crystallographic Slip (MDDCS) designed for modeling the expansion dynamics of dislocation loops and formation of a crystallographic slip zone in FCC-metals is examined. The paper describes the possibilities for using MDDCS, the application interface, and the database scheme. The software has a simple and intuitive interface and does not require special training. The user can set the initial parameters of the experiment, carry out computational experiments, export parameters and results of the experiment into separate text files, and display the experiment results on the device screen.
NASA Technical Reports Server (NTRS)
Dalling, D. K.; Bailey, B. K.; Pugmire, R. J.
1984-01-01
A proton and carbon-13 nuclear magnetic resonance (NMR) study was conducted of Ashland shale oil refinery products, experimental referee broadened-specification jet fuels, and of related isoprenoid model compounds. Supercritical fluid chromatography techniques using carbon dioxide were developed on a preparative scale, so that samples could be quantitatively separated into saturates and aromatic fractions for study by NMR. An optimized average parameter treatment was developed, and the NMR results were analyzed in terms of the resulting average parameters; formulation of model mixtures was demonstrated. Application of novel spectroscopic techniques to fuel samples was investigated.
Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model
Long, Andrew J.
2009-01-01
Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
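The convolution idea above can be sketched as follows; the exponential impulse-response shapes, parameter values, and precipitation series are assumptions for illustration, not the parametric functions fitted to the Madison aquifer data.

```python
import numpy as np

# Illustrative two-domain convolution sketch: outflow = effective precipitation
# convolved with the sum of a fast (conduit) and a slow (diffuse-flow) IRF.
def exponential_irf(t, area, tau):
    return area / tau * np.exp(-t / tau)

t = np.arange(0.0, 1000.0)                                   # days
irf_fast = exponential_irf(t, area=0.11, tau=1.0)            # quick, conduit-dominated response
irf_slow = exponential_irf(t, area=0.89, tau=400.0)          # slow, long-memory domain

precip = np.zeros_like(t)
precip[[5, 50, 300]] = [20.0, 10.0, 30.0]                    # effective precipitation pulses

q_fast = np.convolve(precip, irf_fast)[: t.size]             # conduit (quick-flow) component
q_slow = np.convolve(precip, irf_slow)[: t.size]             # slow-flow component
q_total = q_fast + q_slow                                    # separated hydrograph is the sum
print(q_total[:10])
```

Forward modeling with each IRF component in isolation, as in the study, yields the two flow components whose sum reproduces the total simulated hydrograph.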
Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M
2014-02-01
Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model that characterizes the effective impulse response function (IRF) is introduced, allowing the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. Copyright © 2013 John Wiley & Sons, Ltd.
Bashir, Mubasher A; Radke, Wolfgang
2007-09-07
The suitability of a retention model especially designed for polymers is investigated to describe and predict the chromatographic retention behavior of poly(methyl methacrylate)s as a function of mobile phase composition and gradient steepness. It is found that three simple yet rationally chosen chromatographic experiments suffice to extract the analyte-specific model parameters necessary to calculate the retention volumes. This allows accurate retention volumes to be predicted from a minimum number of initial experiments. Therefore, methods for polymer separations can be developed in a relatively short time. The suitability of the virtual chromatography approach to predict the separation of a polymer blend is demonstrated for the first time using a blend of different polyacrylates.
Modeling pH-zone refining countercurrent chromatography: a dynamic approach.
Kotland, Alexis; Chollet, Sébastien; Autret, Jean-Marie; Diard, Catherine; Marchal, Luc; Renault, Jean-Hugues
2015-04-24
A model based on mass transfer resistances and acid-base equilibria at the liquid-liquid interface was developed for the pH-zone refining mode when it is used in countercurrent chromatography (CCC). The binary separation of catharanthine and vindoline, two alkaloids used as starting material for the semi-synthesis of chemotherapy drugs, was chosen for the model validation. Toluene/CH3CN/water (4/1/5, v/v/v) was selected as the biphasic solvent system. First, hydrodynamics and mass transfer were studied by using chemical tracers. Trypan blue, present only in the aqueous phase, allowed the determination of the parameters τextra and Pe for hydrodynamic characterization, whereas acetone, which partitioned between the two phases, allowed the determination of the transfer parameter k0a. It was shown that mass transfer was improved by increasing both flow rate and rotational speed, which is consistent with the observed mobile phase dispersion. Then, the different transfer parameters of the model (i.e. the local transfer coefficients for the different species involved in the process) were determined by fitting experimental concentration profiles. The model accurately predicted the variation of both equilibrium and dynamic factors (i.e. local mass transfer coefficients and the acid-base equilibrium constant) with the CCC operating conditions (cell number, flow rate, rotational speed and thus stationary phase retention). The initial hypotheses (the acid-base reactions occur instantaneously at the interface and the process is mainly governed by mass transfer) are thus validated. Finally, the model was used as a tool for predicting the catharanthine and vindoline separation in the whole experimental domain, corresponding to flow rates between 20 and 60 mL/min and rotational speeds from 900 to 2100 rotations per minute. Copyright © 2015 Elsevier B.V. All rights reserved.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
Modelling droplet collision outcomes for different substances and viscosities
NASA Astrophysics Data System (ADS)
Sommerfeld, Martin; Kuschel, Matthias
2016-12-01
The main objective of the present study is the derivation of models describing the outcome of binary droplet collisions for a wide range of dynamic viscosities in the well-known collision maps (i.e. normalised lateral droplet displacement at collision, called impact parameter, versus collision Weber number). Previous studies by Kuschel and Sommerfeld (Exp Fluids 54:1440, 2013) for different solution droplets having a range of solids contents and hence dynamic viscosities (here between 1 and 60 mPa s) revealed that the locations of the triple point (i.e. coincidence of bouncing, stretching separation and coalescence) and the critical Weber number (i.e. condition for the transition from coalescence to separation for head-on collisions) show a clear dependence on dynamic viscosity. In order to extend these findings also to pure liquids and to provide a broader data basis for modelling the viscosity effect, additional binary collision experiments were conducted for different alcohols (viscosity range 1.2-15.9 mPa s) and the FVA1 reference oil at different temperatures (viscosity range 3.0-28.2 mPa s). The droplet size was around 365 µm for the series of alcohols and 385 µm for the FVA1 reference oil, in each case with the diameter ratio fixed at Δ = 1. The relative velocity between the droplets was varied in the range 0.5-3.5 m/s, yielding maximum Weber numbers of around 180. Individual binary droplet collisions with defined conditions were generated by two droplet chains, each produced by a vibrating-orifice droplet generator. For recording the droplet motion and the binary collision process with good spatial and temporal resolution, high-speed shadow imaging was employed. The results for varied relative velocity and impact angle were assembled in impact parameter-Weber number maps. With increasing dynamic viscosity, a characteristic displacement of the regimes for the different collision scenarios was also observed for pure liquids, similar to that observed for solutions. This displacement could be described on a physical basis using the similarity number and structure parameter K, which was obtained through flow process evaluation and optimal proportioning of momentum and energy by Naue and Bärwolff (Transportprozesse in Fluiden. Deutscher Verlag für Grundstoffindustrie GmbH, Leipzig 1992). Two correlations including the structure parameter K could be derived which describe the location of the triple point and the critical Weber number. All fluids considered, pure liquids and solutions, are very well fitted by these physically based correlations. The boundary model of Jiang et al. (J Fluid Mech 234:171-190, 1992) for distinguishing between coalescence and stretching separation could be adapted to pass through the triple point via the two model parameters C_a and C_b, which were correlated with the relaxation velocity u_relax = σ/μ. Based on the predicted critical Weber number, denoting the onset of reflexive separation, the model of Ashgriz and Poo (J Fluid Mech 221:183-204, 1990) was adapted accordingly. The proper performance of the new generalised models was validated based on the present and previous measurements for a wide range of dynamic viscosities (i.e. 1-60 mPa s) and liquid properties. Although the model for the lower boundary of bouncing (Estrade et al. in J Heat Fluid Flow 20:486-491, 1999) could be adapted through the shape factor, it was found not to be suitable for the entire range of Weber numbers and viscosities.
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao
2018-03-01
The ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of the parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that, overall, the ILUES algorithm can quantify the parametric uncertainties of complex hydrologic models well, whether or not multimodal distributions exist.
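For reference, a sketch of the basic ES update that ILUES applies within each local ensemble is given below; the local-ensemble selection and the iteration over assimilation steps are omitted, and the matrix names and toy forward model are generic assumptions rather than the authors' notation.

```python
import numpy as np

# Sketch of the standard ensemble smoother (Kalman-type) parameter update.
def es_update(M, D, d_obs, R, rng):
    """M: (Ne, Nm) parameter ensemble; D: (Ne, Nd) simulated data;
    d_obs: (Nd,) observations; R: (Nd, Nd) observation error covariance."""
    Ne = M.shape[0]
    dM = M - M.mean(axis=0)
    dD = D - D.mean(axis=0)
    C_md = dM.T @ dD / (Ne - 1)                       # parameter-data cross-covariance
    C_dd = dD.T @ dD / (Ne - 1)                       # simulated-data covariance
    K = C_md @ np.linalg.inv(C_dd + R)                # Kalman-type gain
    d_pert = d_obs + rng.multivariate_normal(np.zeros(len(d_obs)), R, size=Ne)
    return M + (d_pert - D) @ K.T                     # updated parameter ensemble

rng = np.random.default_rng(0)
M = rng.normal(size=(100, 3))                         # prior parameter ensemble
D = M @ rng.normal(size=(3, 5))                       # toy linear forward model output
M_new = es_update(M, D, np.zeros(5), 0.01 * np.eye(5), rng)
```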
VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern
2009-08-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating “what if” scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., “reactor types” not individual reactors and “separation types” not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste. VISION comprises several Microsoft Excel input files, a Powersim Studio core, and several Microsoft Excel output files. All must be co-located in the same folder on a PC to function. We use Microsoft Excel 2003 and have not tested VISION with Microsoft Excel 2007. The VISION team uses both Powersim Studio 2005 and 2009 and it should work with either.
On splice site prediction using weight array models: a comparison of smoothing techniques
NASA Astrophysics Data System (ADS)
Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard
2007-11-01
In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
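A minimal sketch of option (b), standard pseudo counts, for a single position is shown below; the count vector and the value of the pseudo-count constant are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of pseudo-count smoothing of tuple frequencies at one
# splice-site position: counts of the possible tuples are augmented by a
# constant alpha before normalization, so no tuple gets zero probability.
def smoothed_tuple_probs(counts, alpha=1.0):
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * counts.size)

counts = np.array([12, 0, 3, 1])          # e.g. counts of A, C, G, T at one position (m = 1)
print(smoothed_tuple_probs(counts))       # unsmoothed estimation would assign C probability zero
```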
High-power CO laser with RF discharge for isotope separation employing condensation repression
NASA Astrophysics Data System (ADS)
Baranov, I. Ya.; Koptev, A. V.
2008-10-01
A high-power CO laser can be an effective tool in applications such as isotope separation using the free-jet CRISLA method. A route for scaling up from a small-scale experimental CO installation to industrial high-power CO lasers is proposed through the use of a low-current radio-frequency (RF) electric discharge in a supersonic stream without an electron gun. A calculation model for scaling a CO laser with an RF discharge in a supersonic stream was developed. The model allows the parameters of the laser installation to be calculated and optimized with the aim of achieving high efficiency and low overall cost of the installation. A technical design for an industrial CO laser for isotope separation employing condensation repression is considered. The estimated cost of the laser is a few hundred thousand US dollars, and the small size of the laser head makes it possible to install it almost anywhere.
Re-entrant phase behavior for systems with competition between phase separation and self-assembly
NASA Astrophysics Data System (ADS)
Reinhardt, Aleks; Williamson, Alexander J.; Doye, Jonathan P. K.; Carrete, Jesús; Varela, Luis M.; Louis, Ard A.
2011-03-01
In patchy particle systems where there is a competition between the self-assembly of finite clusters and liquid-vapor phase separation, re-entrant phase behavior can be observed, with the system passing from a monomeric vapor phase to a region of liquid-vapor phase coexistence and then to a vapor phase of clusters as the temperature is decreased at constant density. Here, we present a classical statistical mechanical approach to the determination of the complete phase diagram of such a system. We model the system as a van der Waals fluid, but one where the monomers can assemble into monodisperse clusters that have no attractive interactions with any of the other species. The resulting phase diagrams show a clear region of re-entrance. However, for the most physically reasonable parameter values of the model, this behavior is restricted to a certain range of density, with phase separation still persisting at high densities.
Mapping the Chevallier-Polarski-Linder parametrization onto physical dark energy models
NASA Astrophysics Data System (ADS)
Scherrer, Robert J.
2015-08-01
We examine the Chevallier-Polarski-Linder (CPL) parametrization, in the context of quintessence and barotropic dark energy models, to determine the subset of such models to which it can provide a good fit. The CPL parametrization gives the equation of state parameter w for the dark energy as a linear function of the scale factor a, namely w = w0 + wa(1 - a). In the case of quintessence models, we find that over most of the w0, wa parameter space the CPL parametrization maps onto a fairly narrow form of behavior for the potential V(ϕ), while a one-dimensional subset of parameter space, for which wa = κ(1 + w0), with κ constant, corresponds to a wide range of functional forms for V(ϕ). For barotropic models, we show that the functional dependence of the pressure on the density, up to a multiplicative constant, depends only on wi = wa + w0 and not on w0 and wa separately. Our results suggest that the CPL parametrization may not be optimal for testing either type of model.
Numerical investigation of compaction of deformable particles with bonded-particle model
NASA Astrophysics Data System (ADS)
Dosta, Maksym; Costa, Clara; Al-Qureshi, Hazim
2017-06-01
In this contribution, a novel approach developed for the microscale modelling of particles which undergo large deformations is presented. The proposed method is based on the bonded-particle model (BPM) and multi-stage strategy to adjust material and model parameters. By the BPM, modelled objects are represented as agglomerates which consist of smaller ideally spherical particles and are connected with cylindrical solid bonds. Each bond is considered as a separate object and in each time step the forces and moments acting in them are calculated. The developed approach has been applied to simulate the compaction of elastomeric rubber particles as single particles or in a random packing. To describe the complex mechanical behaviour of the particles, the solid bonds were modelled as ideally elastic beams. The functional parameters of solid bonds as well as material parameters of bonds and primary particles were estimated based on the experimental data for rubber spheres. Obtained results for acting force and for particle deformations during uniaxial compression are in good agreement with experimental data at higher strains.
Jilge, G; Unger, K K; Esser, U; Schäfer, H J; Rathgeber, G; Müller, W
1989-08-04
The linear solvent strength model of Snyder was applied to describe fast protein separations on 2.1-micron non-porous, silica-based strong anion exchangers. It was demonstrated on short columns packed with these anion exchangers that (i) a substantially higher resolution of proteins and nucleotides was obtained at gradient times of less than 5 min than on porous anion exchangers; (ii) the low external surface area of the non-porous anion exchanger is not a critical parameter in analytical separations and (iii) microgram-amounts of enzymes of high purity and full biological activity were isolated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, A.; Norcross, D.W.
1992-02-01
We report low-energy (0.001-10 eV) electron-CO scattering cross sections obtained using an exact-exchange (via a separable-exchange formulation) plus a parameter-free correlation-polarization model in the fixed-nuclei approximation (FNA). The differential, total, and momentum-transfer cross sections are reported for rotationally elastic, inelastic, and summed processes. To remove the limitations of the FNA with respect to the convergence of total and differential cross sections, the multipole-extracted-adiabatic-nuclei approximation is used. The position and width of the well-known ²Π shape-resonance structure in the cross section around 2 eV are reproduced quite well; however, some discrepancy between theory and experiment in the magnitude of the total cross section in the resonance region exists. We also present results for ²Π shape-resonance parameters as a function of internuclear separation. Differential-cross-section results agree well with the measurements of Tanaka, Srivastava, and Chutjian (J. Chem. Phys. 69, 5329 (1978)) but are about a factor of 2 larger than the results obtained by Jung et al. (J. Phys. B 15, 3535 (1982)) in the vicinity of the ²Π resonance.
Verification of reflectance models in turbid waters
NASA Technical Reports Server (NTRS)
Tanis, F. J.; Lyzenga, D. R.
1981-01-01
Inherent optical parameters of very turbid waters were used to evaluate existing water reflectance models. Measured upwelling radiance spectra and Monte Carlo simulations of the radiative transfer equations were compared with results from models based upon two-flow, quasi-single-scattering, augmented isotropic scattering, and power-series approximations. Each model was evaluated for three separate components of upwelling radiance: (1) direct sunlight; (2) diffuse skylight; and (3) internally reflected light. Limitations of existing water reflectance models as applied to turbid waters and possible applications to the extraction of water constituent information are discussed.
Nonequilibrium Phase Transition in a Model for Social Influence
NASA Astrophysics Data System (ADS)
Castellano, Claudio; Marsili, Matteo; Vespignani, Alessandro
2000-10-01
We present extensive numerical simulations of Axelrod's model for social influence, aimed at understanding the formation of cultural domains. This is a nonequilibrium model with short range interactions and a remarkably rich dynamical behavior. We study the phase diagram of the model and uncover a nonequilibrium phase transition separating an ordered (culturally polarized) phase from a disordered (culturally fragmented) one. The nature of the phase transition can be continuous or discontinuous depending on the model parameters. At the transition, the size of cultural regions is power-law distributed.
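To make the dynamics concrete, here is a compact, illustrative implementation of one Monte Carlo sweep of Axelrod's model on a periodic square lattice (not the authors' simulation code); the lattice size, number of features F, and number of traits q are arbitrary choices.

```python
import numpy as np

# Compact illustrative sketch of Axelrod's culture model: each site holds a
# culture vector of F features with q possible traits; neighbors interact with
# probability equal to their cultural overlap and copy one differing feature.
def axelrod_sweep(culture, rng):
    L = culture.shape[0]
    neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        di, dj = neighbors[rng.integers(4)]
        ni, nj = (i + di) % L, (j + dj) % L                  # periodic-boundary neighbor
        same = culture[i, j] == culture[ni, nj]
        overlap = same.mean()                                # cultural overlap in [0, 1]
        if 0.0 < overlap < 1.0 and rng.random() < overlap:   # interact with prob. = overlap
            f = rng.choice(np.flatnonzero(~same))            # pick one differing feature
            culture[i, j, f] = culture[ni, nj, f]            # adopt the neighbor's trait

rng = np.random.default_rng(1)
L, F, q = 20, 5, 10                                          # lattice size, features, traits
culture = rng.integers(q, size=(L, L, F))
for _ in range(200):
    axelrod_sweep(culture, rng)
```

Sweeping until no further changes are possible and measuring the size distribution of homogeneous cultural regions is the kind of observable used to locate the order-disorder transition described above.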
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology for quantitatively analyzing the reliability of redundant avionics systems, in general, and the dual, separated Redundant Strapdown Inertial Measurement Unit (RSDIMU), in particular, is presented. The RSDIMU is described and a candidate failure detection and isolation system presented. A Markov reliability model is employed. The operational states of the system are defined and the single-step state transition diagrams discussed. Graphical results, showing the impact of major system parameters on the reliability of the RSDIMU system, are presented and discussed.
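As a hedged illustration of the Markov approach (not the RSDIMU model itself), the sketch below propagates a three-state generator matrix with hypothetical failure-rate and coverage parameters to obtain system reliability at a few mission times.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-state Markov reliability sketch: (both channels up,
# one channel up after a detected failure, system failed).  lam is the
# per-channel failure rate and c the failure detection/isolation coverage.
lam, c = 1e-4, 0.98                        # illustrative values only
Q = np.array([[-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
              [ 0.0,     -lam,          lam],
              [ 0.0,      0.0,          0.0]])   # generator matrix (rows sum to zero)

p0 = np.array([1.0, 0.0, 0.0])             # start with both channels operational
for t in (1.0, 10.0, 100.0):               # mission times in hours
    p = p0 @ expm(Q * t)                   # state probabilities at time t
    print(t, 1.0 - p[-1])                  # probability the system has not failed
```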
Intervertebral disc response to cyclic loading--an animal model.
Ekström, L; Kaigle, A; Hult, E; Holm, S; Rostedt, M; Hansson, T
1996-01-01
The viscoelastic response of a lumbar motion segment loaded in cyclic compression was studied in an in vivo porcine model (N = 7). Using surgical techniques, a miniaturized servohydraulic exciter was attached to the L2-L3 motion segment via pedicle fixation. A dynamic loading scheme was implemented, which consisted of one hour of sinusoidal vibration at 5 Hz, 50 N peak load, followed by one hour of restitution at zero load and one hour of sinusoidal vibration at 5 Hz, 100 N peak load. The force and displacement responses of the motion segment were sampled at 25 Hz. The experimental data were used for evaluating the parameters of two viscoelastic models: a standard linear solid model (three-parameter) and a linear Burger's fluid model (four-parameter). In this study, the creep behaviour under sinusoidal vibration at 5 Hz closely resembled the creep behaviour under static loading observed in previous studies. Expanding the three-parameter solid model into a four-parameter fluid model made it possible to separate out a progressive linear displacement term. This deformation was not fully recovered during restitution and is therefore an indication of a specific effect caused by the cyclic loading. High variability was observed in the parameters determined from the 50 N experimental data, particularly for the elastic modulus E1. However, at the 100 N load level, significant differences between the models were found. Both models accurately predicted the creep response under the first 800 s of 100 N loading, as shown by mean absolute errors between the calculated and experimental deformation data of 1.26 and 0.97 percent for the solid and fluid models, respectively. The linear Burger's fluid model, however, yielded superior predictions, particularly for the initial elastic response.
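For reference, the creep response of a four-parameter Burger's fluid under a constant load can be sketched as below; the parameter values are illustrative rather than those identified from the porcine data, and the t/eta1 term is the progressive, non-recovered deformation discussed above.

```python
import numpy as np

# Sketch of the creep response of a four-parameter Burger's (fluid) model
# under a constant load sigma0 (written here in force-displacement form):
# x(t) = sigma0 * (1/E1 + t/eta1 + (1 - exp(-E2*t/eta2)) / E2)
# The t/eta1 term grows linearly and is not recovered after unloading.
def burgers_creep(t, sigma0, E1, eta1, E2, eta2):
    return sigma0 * (1.0 / E1 + t / eta1 + (1.0 - np.exp(-E2 * t / eta2)) / E2)

t = np.linspace(0.0, 800.0, 5)             # seconds of sustained loading
print(burgers_creep(t, sigma0=100.0, E1=500.0, eta1=2.0e5, E2=300.0, eta2=3.0e4))
```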
Mikaeli, S; Thorsén, G; Karlberg, B
2001-01-12
A novel approach to multivariate evaluation of separation electrolytes for micellar electrokinetic chromatography is presented. An initial screening of the experimental parameters is performed using a Plackett-Burman design. Significant parameters are further evaluated using full factorial designs. The total resolution of the separation is calculated and used as response. The proposed scheme has been applied to the optimisation of the separation of phenols and the chiral separation of (+)-1-(9-anthryl)-2-propyl chloroformate-derivatized amino acids. A total of eight experimental parameters were evaluated and optimal conditions found in less than 48 experiments.
Kinetics and mechanism of olefin catalytic hydroalumination by organoaluminum compounds
NASA Astrophysics Data System (ADS)
Koledina, K. F.; Gubaidullin, I. M.
2016-05-01
The complex reaction mechanism of α-olefin catalytic hydroalumination by alkylalanes is investigated via mathematical modeling, which involves constructing kinetic models for the individual reactions that make up the complex system and studying them separately. Kinetic parameters of olefin catalytic hydroalumination are estimated. Activation energies of the possible steps of the proposed complex reaction mechanisms are compared and possible reaction pathways are determined.
Sperm function and assisted reproduction technology
MAAß, GESA; BÖDEKER, ROLF‐HASSO; SCHEIBELHUT, CHRISTINE; STALF, THOMAS; MEHNERT, CLAAS; SCHUPPE, HANS‐CHRISTIAN; JUNG, ANDREAS; SCHILL, WOLF‐BERNHARD
2005-01-01
The evaluation of different functional sperm parameters has become a tool in andrological diagnosis. These assays determine the sperm's capability to fertilize an oocyte. It also appears that sperm functions and semen parameters are interrelated and interdependent. Therefore, the question arose whether a given laboratory test or a battery of tests can predict the outcome in in vitro fertilization (IVF). One‐hundred and sixty‐one patients who underwent an IVF treatment were selected from a database of 4178 patients who had been examined for male infertility 3 months before or after IVF. Sperm concentration, motility, acrosin activity, acrosome reaction, sperm morphology, maternal age, number of transferred embryos, embryo score, fertilization rate and pregnancy rate were determined. In addition, logistic regression models to describe fertilization rate and pregnancy were developed. All the parameters in the models were dichotomized and intra‐ and interindividual variability of the parameters were assessed. Although the sperm parameters showed good correlations with IVF when correlated separately, the only essential parameter in the multivariate model was morphology. The enormous intra‐ and interindividual variability of the values was striking. In conclusion, our data indicate that the andrological status at the end of the respective treatment does not necessarily represent the status at the time of IVF. Despite a relatively low correlation coefficient in the logistic regression model, it appears that among the parameters tested, the most reliable parameter to predict fertilization is normal sperm morphology. (Reprod Med Biol 2005; 4: 7–30) PMID:29699207
Awad, Ibrahim; Ladani, Leila
2015-12-04
Carbon nanotube (CNT)/copper (Cu) composite material is proposed to replace Cu-based through-silicon vias (TSVs) in micro-electronic packages. The proposed material is believed to offer extraordinary mechanical and electrical properties, and the presence of CNTs in Cu is believed to overcome issues associated with miniaturization of Cu interconnects, such as electromigration. This study introduces a multi-scale model of the proposed TSV in order to evaluate its mechanical integrity under mechanical and thermo-mechanical loading conditions. Molecular dynamics (MD) simulation was used to determine CNT/Cu interface adhesion properties. A cohesive zone model (CZM) was found to be most appropriate to model the interface adhesion, and CZM parameters at the nanoscale were determined using MD simulation. The CZM parameters were then used in finite element analysis in order to understand the mechanical and thermo-mechanical behavior of the composite TSV at the micro-scale. The results show that CNT/Cu separation does not take place prior to plastic deformation of Cu in bending, and that separation does not take place when standard thermal cycling is applied. Further investigation is recommended in order to alleviate the increased plastic deformation in Cu at the CNT/Cu interface in both loading conditions.
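As an illustration of the traction-separation idea (the specific CZM form and the CNT/Cu parameter values calibrated in the study are not reproduced here), a bilinear cohesive law can be sketched as follows.

```python
import numpy as np

# Hedged sketch of a bilinear cohesive zone (traction-separation) law of the
# kind calibrated from MD data: linear rise to t_max at opening delta0, linear
# softening to zero traction at delta_f, zero beyond (full separation).
def bilinear_czm(delta, t_max, delta0, delta_f):
    delta = np.asarray(delta, float)
    rise = t_max * delta / delta0
    soften = t_max * (delta_f - delta) / (delta_f - delta0)
    return np.where(delta <= delta0, rise, np.clip(soften, 0.0, None))

d = np.linspace(0.0, 3.0e-9, 7)              # opening displacement in meters (placeholder)
print(bilinear_czm(d, t_max=60e6, delta0=0.5e-9, delta_f=2.5e-9))  # placeholder parameters
```

In a finite element setting, a law of this shape is assigned to interface elements so that the area under the curve corresponds to the interface fracture energy extracted at the nanoscale.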
Dynamic rupture modeling of thrust faults with parallel surface traces.
NASA Astrophysics Data System (ADS)
Peshette, P.; Lozos, J.; Yule, D.
2017-12-01
Fold and thrust belts (such as those found in the Himalaya or California Transverse Ranges) consist of many neighboring thrust faults in a variety of geometries. Active thrusts within these belts individually contribute to regional seismic hazard, but further investigation is needed regarding the possibility of multi-fault rupture in a single event. Past analyses of historic thrust surface traces suggest that rupture within a single event can jump up to 12 km. There is also observational precedent for long distance triggering between subparallel thrusts (e.g. the 1997 Harnai, Pakistan events, separated by 50 km). However, previous modeling studies find a maximum jumping rupture distance between thrust faults of merely 200 m. Here, we present a new dynamic rupture modeling parameter study that attempts to reconcile these differences and determine which geometrical and stress conditions promote jumping rupture. We use a community verified 3D finite element method to model rupture on pairs of thrust faults with parallel surface traces. We vary stress drop and fault strength to determine which conditions produce jumping rupture at different dip angles and different separations between surface traces. This parameter study may help to understand the likelihood of jumping rupture in real-world thrust systems, and may thereby improve earthquake hazard assessment.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.
Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis
2015-01-01
Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovarian (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
Nishino, Ko; Lombardi, Stephen
2011-01-01
We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.
Study on the separation effect of high-speed ultrasonic vibration cutting.
Zhang, Xiangyu; Sui, He; Zhang, Deyuan; Jiang, Xinggang
2018-07-01
High-speed ultrasonic vibration cutting (HUVC) has been proven to be significantly effective when turning Ti-6Al-4V alloy in recent research. Besides breaking through the cutting speed restriction of the ultrasonic vibration cutting (UVC) method, HUVC also achieves a reduction of cutting force and improvements in surface quality and cutting efficiency in the high-speed machining field. These benefits all result from the separation effect that occurs during the HUVC process. Although the influences of vibration and cutting parameters have been discussed in previous studies, the separation analysis of HUVC should be conducted in detail for real cutting situations, and the tool geometry parameters should also be considered. In this paper, three situations are investigated in detail: (1) cutting without negative transient clearance angle and without tool wear, (2) cutting with negative transient clearance angle and without tool wear, and (3) cutting with tool wear. The complete separation state, partial separation state and continuous cutting state are then deduced according to real cutting processes. The analysis of the above situations demonstrates that tool-workpiece separation takes place only if appropriate cutting parameters, vibration parameters, and tool geometry parameters are set up. The best separation effect was obtained with a low feedrate and a phase shift approaching 180 degrees. Moreover, flank face interference resulting from the negative transient clearance angle and tool wear contributes to an improved separation effect that makes the workpiece and tool separate even at zero phase shift. Finally, axial and radial transient cutting forces are obtained for the first time to verify the separation effect of HUVC, and the cutting chips are collected to assess the influence of flank face interference. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Orlando, Elena
2016-04-01
Galactic synchrotron radiation observed from radio to microwaves is produced by cosmic-ray (CR) electrons propagating in magnetic fields (B-fields). The low-frequency foreground component maps separated from WMAP and Planck data depend on the assumed synchrotron spectrum. The synchrotron spectrum varies for different lines of sight as a result of changes in the CR spectrum due to propagation effects and source distributions. Our present knowledge of the CR spectrum at different locations in the Galaxy is not sufficient to distinguish various possibilities in the modeling. As a consequence, uncertainties in synchrotron emission models complicate the foreground component separation analysis with Planck and future microwave telescopes. Hence, any advancement in synchrotron modeling is important for separating the different foreground components. The first step towards a more comprehensive understanding of the degeneracy and correlation among the synchrotron model parameters is outlined in our Strong et al. 2011 and Orlando et al. 2013 papers. In the latter, the conclusion was that the CR spectrum, propagation models, B-fields, and the foreground component separation analysis need to be studied simultaneously in order to properly obtain and interpret the synchrotron foreground. Indeed, for the officially released Planck maps, we use only the best spectral model from our above paper for the component separation analysis. Here we present a collection of our latest results on synchrotron emission, CRs and B-fields in the context of CR propagation, also showing our recent work on B-fields within the Planck Collaboration. We also underline the importance of using the constraints on CRs that we obtain from gamma-ray observations. Methods and perspectives for further studies of the synchrotron foreground will be addressed.
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
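As a small illustration of the verification statistic quoted above, the sketch below computes a standard error of estimate in percent from paired measured and simulated values; the formula (RMSE as a percentage of the mean measured value) is an assumed convention, and the USGS report may define it differently (e.g. in log space). The data values are invented.

```python
import numpy as np

def standard_error_percent(measured, simulated):
    """Standard error of estimate, in percent of the mean measured value (assumed convention)."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    se = np.sqrt(np.mean((simulated - measured) ** 2))
    return 100.0 * se / measured.mean()

# Hypothetical peak-discharge pairs (cubic feet per second) for one watershed
measured = [120.0, 45.0, 201.0, 83.0, 156.0]
simulated = [108.0, 52.0, 230.0, 71.0, 140.0]
print(round(standard_error_percent(measured, simulated), 1), "%")
```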
Animal population dynamics: Identification of critical components
Emlen, J.M.; Pikitch, E.K.
1989-01-01
There is a growing interest in the use of population dynamics models in environmental risk assessment and the promulgation of environmental regulatory policies. Unfortunately, because of species and areal differences in the physical and biotic influences on population dynamics, such models must almost inevitably be both complex and species- or site-specific. Given the enormous variety of species and sites of potential concern, this fact presents a problem; it simply is not possible to construct models for all species and circumstances. Therefore, it is useful, before building predictive population models, to discover what input parameters are of critical importance to the desired output. This information should enable the construction of simpler and more generalizable models. As a first step, it is useful to consider population models as composed of two, partly separable classes, one comprising the purely mechanical descriptors of dynamics from given demographic parameter values, and the other describing the modulation of the demographic parameters by environmental factors (changes in physical environment, species interactions, pathogens, xenobiotic chemicals). This division permits sensitivity analyses to be run on the first of these classes, providing guidance for subsequent model simplification. We here apply such a sensitivity analysis to network models of mammalian and avian population dynamics.
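A minimal sketch of the kind of sensitivity analysis described (not the authors' network models): perturb each demographic parameter of a small Leslie matrix model and record the effect on the asymptotic growth rate. All parameter values are hypothetical.

```python
import numpy as np

def growth_rate(fertility, survival):
    """Dominant eigenvalue (asymptotic growth rate) of a Leslie matrix."""
    n = len(fertility)
    L = np.zeros((n, n))
    L[0, :] = fertility                    # age-specific fertilities
    for i, s in enumerate(survival):       # sub-diagonal survival rates
        L[i + 1, i] = s
    return max(abs(np.linalg.eigvals(L)))

fertility = [0.0, 1.2, 1.8]    # hypothetical demographic parameters
survival = [0.5, 0.7]

base = growth_rate(fertility, survival)
for name, vec, idx in [("f2", fertility, 1), ("f3", fertility, 2),
                       ("s1", survival, 0), ("s2", survival, 1)]:
    bumped = list(vec)
    bumped[idx] *= 1.01                    # 1% perturbation of one parameter
    lam = growth_rate(bumped if vec is fertility else fertility,
                      bumped if vec is survival else survival)
    print(name, "elasticity ~", round((lam - base) / base / 0.01, 3))
```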
Kröner, Frieder; Elsäßer, Dennis; Hubbuch, Jürgen
2013-11-29
The accelerating growth of the market for biopharmaceutical proteins, the market entry of biosimilars and the growing interest in new, more complex molecules constantly pose new challenges for bioseparation process development. In the presented work we demonstrate the application of a multidimensional, analytical separation approach to obtain the relevant physicochemical parameters of single proteins in a complex mixture for in silico chromatographic process development. A complete cell lysate containing a low titre target protein was first fractionated by multiple linear salt gradient anion exchange chromatography (AEC) with varying gradient length. The collected fractions were subsequently analysed by high-throughput capillary gel electrophoresis (HT-CGE) after being desalted and concentrated. From the obtained data of the 2D-separation the retention-volumes and the concentration of the single proteins were determined. The retention-volumes of the single proteins were used to calculate the related steric-mass action model parameters. In a final evaluation experiment the received parameters were successfully applied to predict the retention behaviour of the single proteins in salt gradient AEC. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Grigorian, H.
2007-05-01
We describe the basic formulation of the parametrization scheme for the instantaneous nonlocal chiral quark model in the three-flavor case. We choose to discuss the Gaussian, Lorentzian-type, Woods-Saxon, and sharp cutoff (NJL) functional forms of the momentum dependence for the form factor of the separable interaction. The four parameters (the light and strange quark masses, the coupling strength G_S, and the range of the interaction Λ) have been fixed by the same phenomenological inputs: the pion and kaon masses, the pion decay constant and the light quark mass in vacuum. The Woods-Saxon and Lorentzian-type form factors are suitable for an interpolation between sharp cutoff and soft momentum dependence. Results are tabulated for applications in models of hadron structure and quark matter at finite temperatures and chemical potentials, where separable models have proven successful.
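For orientation, the sketch below writes down typical functional forms of the four momentum-space form factors named above (Gaussian, Lorentzian-type, Woods-Saxon, sharp cutoff). The precise conventions, the Lorentzian power n and the Woods-Saxon smearing width a are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def gaussian(p, lam):
    return np.exp(-p**2 / lam**2)

def lorentzian(p, lam, n=2):
    # Lorentzian-type form factor; the power n is a modeling choice
    return 1.0 / (1.0 + (p / lam)**(2 * n))

def woods_saxon(p, lam, a=0.1):
    # a controls the smearing of the cutoff edge (assumed value)
    return 1.0 / (1.0 + np.exp((p - lam) / a))

def sharp_cutoff(p, lam):
    # NJL-like step function
    return np.where(p <= lam, 1.0, 0.0)

p = np.linspace(0.0, 2.0, 5)     # momenta (illustrative units)
lam = 0.9                        # interaction range parameter Lambda (illustrative)
for g in (gaussian, lorentzian, woods_saxon, sharp_cutoff):
    print(g.__name__, np.round(g(p, lam), 3))
```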
NASA Technical Reports Server (NTRS)
Fontana, R. R.; Hubbard, J. E., Jr.
1983-01-01
Mini-tuft and smoke flow visualization techniques have been developed for the investigation of model helicopter rotor blade vortex interaction noise at low tip speeds. These techniques allow the parameters required for calculation of the blade vortex interaction noise using the Widnall/Wolf model to be determined. The measured acoustics are compared with the predicted acoustics for each test condition. Under the conditions tested it is determined that the dominating acoustic pulse results from the interaction of the blade with a vortex 1-1/4 revolutions old at an interaction angle of less than 8 deg. The Widnall/Wolf model predicts the peak sound pressure level within 3 dB for blade vortex separation distances greater than 1 semichord, but it generally overpredicts the peak SPL by more than 10 dB for blade vortex separation distances of less than 1/4 semichord.
Brehme, Marc; Koschmieder, Steffen; Montazeri, Maryam; Copland, Mhairi; Oehler, Vivian G.; Radich, Jerald P.; Brümmendorf, Tim H.; Schuppert, Andreas
2016-01-01
Modelling the parameters of multistep carcinogenesis is key for a better understanding of cancer progression, biomarker identification and the design of individualized therapies. Using chronic myeloid leukemia (CML) as a paradigm for hierarchical disease evolution we show that combined population dynamic modelling and CML patient biopsy genomic analysis enables patient stratification at unprecedented resolution. Linking CD34+ similarity as a disease progression marker to patient-derived gene expression entropy separated established CML progression stages and uncovered additional heterogeneity within disease stages. Importantly, our patient data informed model enables quantitative approximation of individual patients’ disease history within chronic phase (CP) and significantly separates “early” from “late” CP. Our findings provide a novel rationale for personalized and genome-informed disease progression risk assessment that is independent and complementary to conventional measures of CML disease burden and prognosis. PMID:27048866
Zhou, Weichen; Ma, Yanyun; Zhang, Jun; Hu, Jingyi; Zhang, Menghan; Wang, Yi; Li, Yi; Wu, Lijun; Pan, Yida; Zhang, Yitong; Zhang, Xiaonan; Zhang, Xinxin; Zhang, Zhanqing; Zhang, Jiming; Li, Hai; Lu, Lungen; Jin, Li; Wang, Jiucun; Yuan, Zhenghong; Liu, Jie
2017-11-01
Liver biopsy is the gold standard to assess pathological features (e.g. inflammation grades) for hepatitis B virus-infected patients, although it is invasive and traumatic; meanwhile, several gene profiles of chronic hepatitis B (CHB) have been separately described in relatively small hepatitis B virus (HBV)-infected samples. We aimed to analyse correlations among inflammation grades, gene expressions and clinical parameters (serum alanine amino transaminase, aspartate amino transaminase and HBV-DNA) in large-scale CHB samples and to predict inflammation grades by using clinical parameters and/or gene expressions. We analysed gene expressions with three clinical parameters in 122 CHB samples by an improved regression model. Principal component analysis and machine-learning methods including Random Forest, K-nearest neighbour and support vector machine were used for analysis and further diagnosis models. Six normal samples were used to validate the predictive model. Significant genes related to clinical parameters were found to be enriched in categories including the immune system, interferon-stimulated genes, regulation of cytokine production and anti-apoptosis. A panel of these genes with clinical parameters can effectively predict binary classifications of inflammation grade (area under the ROC curve [AUC]: 0.88, 95% confidence interval [CI]: 0.77-0.93), validated by normal samples. A panel with only clinical parameters was also valuable (AUC: 0.78, 95% CI: 0.65-0.86), indicating that a liquid biopsy method for detecting the pathology of CHB is possible. This is the first study to systematically elucidate the relationships among gene expressions, clinical parameters and pathological inflammation grades in CHB, and to build models predicting inflammation grades by gene expressions and/or clinical parameters as well. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
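A hedged sketch of the type of supervised analysis described (a random forest on clinical parameters with a cross-validated ROC AUC), using synthetic data in place of the CHB cohort; the feature names, distributions and outcome rule are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 122                                          # cohort size from the abstract
# Hypothetical clinical parameters: ALT, AST, log10 HBV-DNA
X = np.column_stack([rng.lognormal(3.5, 0.6, n),
                     rng.lognormal(3.3, 0.6, n),
                     rng.normal(5.0, 1.5, n)])
# Synthetic binary inflammation grade, loosely tied to ALT for illustration
y = (np.log(X[:, 0]) + rng.normal(0, 0.8, n) > 3.6).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
prob = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, prob), 2))
```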
Park, Chanhun; Nam, Hee-Geun; Jo, Se-Hee; Wang, Nien-Hwa Linda; Mun, Sungyong
2016-02-26
The economic efficiency of valine production in related industries is largely affected by the performance of the valine separation process, in which valine is to be separated from leucine, alanine, and ammonium sulfate. Such separation is currently handled by a batch-mode hybrid process based on ion-exchange and crystallization schemes. To make a substantial improvement in the economic efficiency of industrial valine production, such a batch-mode process based on two different separation schemes needs to be converted into a continuous-mode separation process based on a single separation scheme. To address this issue, a simulated moving bed (SMB) technology was applied in this study to the development of a continuous-mode valine-separation chromatographic process with uniformity in adsorbent and liquid phases. It was first found that a Chromalite-PCG600C resin could be eligible as the adsorbent of such a process, particularly on an industrial scale. The intrinsic parameters of each component on the Chromalite-PCG600C adsorbent were determined and then utilized in selecting a proper set of configurations for SMB units, columns, and ports, under which the SMB operating parameters were optimized with a genetic algorithm. Finally, the optimized SMB based on the selected configurations was tested experimentally, which confirmed its effectiveness in continuous separation of valine from leucine, alanine, and ammonium sulfate with high purity, high yield, high throughput, and high valine product concentration. It is thus expected that the SMB process developed in this study will be able to serve as one of the trustworthy ways of improving the economic efficiency of an industrial valine production process. Copyright © 2016 Elsevier B.V. All rights reserved.
Evaluation of Industry Standard Turbulence Models on an Axisymmetric Supersonic Compression Corner
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2015-01-01
Reynolds-averaged Navier-Stokes computations of a shock-wave/boundary-layer interaction (SWBLI) created by a Mach 2.85 flow over an axisymmetric 30-degree compression corner were carried out. The objectives were to evaluate four turbulence models commonly used in industry, for SWBLIs, and to evaluate the suitability of this test case for use in further turbulence model benchmarking. The Spalart-Allmaras model, Menter's Baseline and Shear Stress Transport models, and a low-Reynolds number k- model were evaluated. Results indicate that the models do not accurately predict the separation location; with the SST model predicting the separation onset too early and the other models predicting the onset too late. Overall the Spalart-Allmaras model did the best job in matching the experimental data. However there is significant room for improvement, most notably in the prediction of the turbulent shear stress. Density data showed that the simulations did not accurately predict the thermal boundary layer upstream of the SWBLI. The effect of turbulent Prandtl number and wall temperature were studied in an attempt to improve this prediction and understand their effects on the interaction. The data showed that both parameters can significantly affect the separation size and location, but did not improve the agreement with the experiment. This case proved challenging to compute and should provide a good test for future turbulence modeling work.
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
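To make the three formulations concrete, here is a minimal numerical sketch using a toy normal-means problem rather than the plankton models: a "global" fit pools all datasets, a "separate" fit estimates each dataset independently, and a simple hierarchical fit (an empirical-Bayes shrinkage stand-in for the full hierarchical Bayesian analysis) pulls the separate estimates toward a shared mean. All data and variances are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
true_hyper_mean, true_hyper_sd, obs_sd = 0.5, 0.2, 0.3
n_datasets, n_obs = 6, 10

# Synthetic datasets whose true parameter varies around a shared value
theta_true = rng.normal(true_hyper_mean, true_hyper_sd, n_datasets)
data = [rng.normal(t, obs_sd, n_obs) for t in theta_true]

separate = np.array([d.mean() for d in data])          # independent variation
global_est = np.concatenate(data).mean()               # no variation

# Hierarchical (empirical Bayes): shrink separate estimates toward their mean
within_var = obs_sd**2 / n_obs
between_var = max(separate.var(ddof=1) - within_var, 1e-9)
shrink = between_var / (between_var + within_var)
hierarchical = separate.mean() + shrink * (separate - separate.mean())

for name, est in [("separate", separate),
                  ("global", np.full(n_datasets, global_est)),
                  ("hierarchical", hierarchical)]:
    print(f"{name:12s} RMSE vs truth:",
          round(float(np.sqrt(np.mean((est - theta_true)**2))), 3))
```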
Joint Bayesian Component Separation and CMB Power Spectrum Estimation
NASA Technical Reports Server (NTRS)
Eriksen, H. K.; Jewell, J. B.; Dickinson, C.; Banday, A. J.; Gorski, K. M.; Lawrence, C. R.
2008-01-01
We describe and implement an exact, flexible, and computationally efficient algorithm for joint component separation and CMB power spectrum estimation, building on a Gibbs sampling framework. Two essential new features are (1) conditional sampling of foreground spectral parameters and (2) joint sampling of all amplitude-type degrees of freedom (e.g., CMB, foreground pixel amplitudes, and global template amplitudes) given spectral parameters. Given a parametric model of the foreground signals, we estimate efficiently and accurately the exact joint foreground-CMB posterior distribution and, therefore, all marginal distributions such as the CMB power spectrum or foreground spectral index posteriors. The main limitation of the current implementation is the requirement of identical beam responses at all frequencies, which restricts the analysis to the lowest resolution of a given experiment. We outline a future generalization to multiresolution observations. To verify the method, we analyze simple models and compare the results to analytical predictions. We then analyze a realistic simulation with properties similar to the 3 yr WMAP data, downgraded to a common resolution of 3 deg FWHM. The results from the actual 3 yr WMAP temperature analysis are presented in a companion Letter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be
2015-06-15
We extend the construction of 2D superintegrable Hamiltonians with separation of variables in spherical coordinates using combinations of shift, ladder, and supercharge operators to models involving rational extensions of the two-parameter Lissajous systems on the sphere. These new families of superintegrable systems with integrals of arbitrary order are connected with Jacobi exceptional orthogonal polynomials of type I (or II) and supersymmetric quantum mechanics. Moreover, we present an algebraic derivation of the degenerate energy spectrum for the one- and two-parameter Lissajous systems and the rationally extended models. These results are based on finitely generated polynomial algebras, Casimir operators, realizations as deformed oscillator algebras, and finite-dimensional unitary representations. Such results have only been established so far for 2D superintegrable systems separable in Cartesian coordinates, which are related to a class of polynomial algebras that display a simpler structure. We also point out how the structure function of these deformed oscillator algebras is directly related with the generalized Heisenberg algebras spanned by the nonpolynomial integrals.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; An, Hyunuk; Kim, Sanghyun
2015-04-01
Soil moisture, a critical factor in hydrologic systems, plays a key role in synthesizing interactions among soil, climate, hydrological response, solute transport and ecosystem dynamics. The spatial and temporal distribution of soil moisture at a hillslope scale is essential for understanding hillslope runoff generation processes. In this study, we implement Monte Carlo simulations at the hillslope scale using a three-dimensional surface-subsurface integrated model (3D model). Numerical simulations are compared with soil moisture measured using TDR (Mini_TRASE) at 22 locations and 2 or 3 depths over a whole year at a hillslope (area: 2100 square meters) located in the Bongsunsa Watershed, South Korea. In the stochastic Monte Carlo simulations, uncertainty in the soil parameters and input forcing is considered, and model ensembles showing good performance are selected separately for several seasonal periods. The presentation will focus on the characterization of seasonal variations of model parameters based on simulations with field measurements. In addition, structural limitations of the contemporary modeling method will be discussed.
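A minimal sketch of the Monte Carlo workflow described (sample uncertain parameters, run the model, keep the well-performing ensemble members), using a toy linear-reservoir soil-moisture model in place of the 3D surface-subsurface model; all parameter ranges, forcing and observations are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_soil_moisture(k, s0, rain):
    """Linear-reservoir stand-in for the 3D model: s[t] = s[t-1] + rain[t] - k*s[t-1]."""
    s = np.empty(len(rain))
    s[0] = s0
    for t in range(1, len(rain)):
        s[t] = s[t - 1] + rain[t] - k * s[t - 1]
    return s

rain = rng.exponential(2.0, 120)
observed = toy_soil_moisture(0.3, 10.0, rain) + rng.normal(0, 0.5, 120)

# Monte Carlo sampling of the uncertain parameters (k, s0)
samples = rng.uniform([0.05, 5.0], [0.6, 15.0], size=(2000, 2))
rmse = np.array([np.sqrt(np.mean((toy_soil_moisture(k, s0, rain) - observed)**2))
                 for k, s0 in samples])

behavioural = samples[rmse < np.quantile(rmse, 0.05)]   # retain best 5% of members
print("retained members:", len(behavioural))
print("k range of good ensemble:", behavioural[:, 0].min().round(2),
      "-", behavioural[:, 0].max().round(2))
```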
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
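The following sketch illustrates the alternating-estimation idea in a generic form (not the authors' Cox-model implementation): the parameter vector is split into two blocks, and each block is optimized in turn with the other held fixed until the objective stops improving. The toy objective is a hypothetical nonlinear least-squares problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 4, 80)
y = 2.0 * np.exp(-0.7 * x) + 0.5 + rng.normal(0, 0.05, x.size)

def loss(a_block, b_block):
    """Toy objective: y ~ a0*exp(-b0*x) + a1, split into blocks a = (a0, a1), b = (b0,)."""
    a0, a1 = a_block
    (b0,) = b_block
    return np.mean((y - (a0 * np.exp(-b0 * x) + a1)) ** 2)

a, b = np.array([1.0, 0.0]), np.array([1.0])       # crude starting values
for it in range(20):                               # alternating conditional steps
    a = minimize(lambda p: loss(p, b), a, method="Nelder-Mead").x
    b = minimize(lambda p: loss(a, p), b, method="Nelder-Mead").x
print("estimates:", np.round(a, 3), np.round(b, 3), "loss:", round(loss(a, b), 5))
```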
NASA Astrophysics Data System (ADS)
Yamakou, Marius E.; Jost, Jürgen
2017-10-01
In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.
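For readers who want to experiment, here is a minimal Euler-Maruyama integration of a noise-driven FitzHugh-Nagumo neuron of the general form discussed. The parameter values, the placement of the noise on the voltage equation (as a simple stand-in for synaptic noise), and the crude spike detection are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def fhn_noisy(a=0.7, b=0.8, eps=0.08, I=0.25, sigma=0.05,
              dt=0.01, steps=100_000, seed=0):
    """Euler-Maruyama for dv = (v - v^3/3 - w + I)dt + sigma dW, dw = eps(v + a - b w)dt."""
    rng = np.random.default_rng(seed)
    v, w = -1.0, -0.4          # start near the stable fixed point (excitable regime)
    spikes, above = 0, False
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        v_new = v + (v - v**3 / 3 - w + I) * dt + sigma * dW
        w_new = w + eps * (v + a - b * w) * dt
        v, w = v_new, w_new
        if v > 1.0 and not above:      # crude spike detection by upward crossing
            spikes += 1
            above = True
        elif v < 0.0:
            above = False
    return spikes

# Noise level controls spiking in this excitable regime (illustrative values)
for sigma in (0.0, 0.05, 0.2):
    print("sigma =", sigma, "spikes:", fhn_noisy(sigma=sigma))
```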
Numerical modeling and optimization of the Iguassu gas centrifuge
NASA Astrophysics Data System (ADS)
Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.
2017-07-01
The full procedure of the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the problem of the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved. The separation power of the gas centrifuge is calculated after that. In the last step, the time-consuming procedure of optimization of the GC is performed, providing the maximum of the separation power. The optimization is based on the BOBYQA method, exploiting the results of numerical simulations of the hydrodynamics and diffusion of the mixture of isotopes. Fast convergence of the calculations is achieved due to the use of a direct solver for the hydrodynamical and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations taking 811 minutes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stepinski, Dominique C.; Youker, Amanda J.; Krahn, Elizabeth O.
2017-03-01
Molybdenum-99 is the parent of the most widely used medical isotope, technetium-99m. Proliferation concerns have prompted development of alternative Mo production methods utilizing low enriched uranium. Alumina and titania sorbents were evaluated for separation of Mo from concentrated uranyl nitrate solutions. System, mass transfer, and isotherm parameters were determined to enable design of Mo separation processes under a wide range of conditions. A model-based approach was utilized to design representative commercial-scale column processes. The designs and parameters were verified with bench-scale experiments. The results are essential for design of Mo separation processes from irradiated uranium solutions, selection of support material and process optimization. Mo uptake studies show that adsorption decreases with increasing concentration of uranyl nitrate; however, examination of Mo adsorption as a function of nitrate ion concentration shows no dependency, indicating that uranium competes with Mo for adsorption sites. These results are consistent with reports indicating that Mo forms inner-sphere complexes with titania and alumina surface groups.
First-principles study of metallic iron interfaces
NASA Astrophysics Data System (ADS)
Hung, A.; Yarovsky, I.; Muscat, J.; Russo, S.; Snook, I.; Watts, R. O.
2002-04-01
Adhesion between clean, bulk-terminated bcc Fe(1 0 0) and Fe(1 1 0) matched and mismatched surfaces was simulated within the theoretical framework of the density functional theory. The generalized-gradient spin approximation exchange-correlation functional was used in conjunction with a plane wave-ultrasoft pseudopotential representation. The structure and properties of bulk bcc Fe were calculated in order to establish the reliability of the methodology employed, as well as to determine suitably converged values of computational parameters to be used in subsequent surface calculations. Interfaces were modelled using a single supercell approach, with the interfacial separation distance manipulated by the size of vacuum separation between vertically adjacent surface cells. The adhesive energies at discrete interfacial separations were calculated for each interface and the resulting data fitted to the universal binding energy relation (UBER) of Rose et al. [Phys. Rev. Lett. 47 (1981) 675]. An interpretation of the values of the fitted UBER parameters for the four Fe interfaces studied is given. In addition, a discussion on the validity of the employed computational methodology is presented.
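As an illustration of the fitting step mentioned above, the sketch below fits synthetic adhesion-energy data to a UBER form E(d) = -E0 (1 + d*) exp(-d*), with d* = (d - d0)/l; the data values are made up, and the exact UBER variant and units used in the paper may differ in detail.

```python
import numpy as np
from scipy.optimize import curve_fit

def uber(d, E0, d0, l):
    """Universal binding energy relation: E(d) = -E0 (1 + d*) exp(-d*), d* = (d - d0)/l."""
    ds = (d - d0) / l
    return -E0 * (1.0 + ds) * np.exp(-ds)

# Hypothetical adhesive energies (J/m^2) vs. interfacial separation (Angstrom)
d = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0])
E = np.array([-2.9, -3.4, -3.2, -2.7, -2.1, -1.6, -0.8, -0.4])

popt, _ = curve_fit(uber, d, E, p0=[3.0, 2.0, 1.0])
E0, d0, l = popt
print(f"E0 = {E0:.2f} J/m^2, equilibrium separation d0 = {d0:.2f} A, scaling length l = {l:.2f} A")
```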
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
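To make the binomial N-mixture structure concrete, here is a minimal maximum-likelihood sketch for the Poisson case with constant λ and p (a truncated sum over the latent abundance N); it is a generic illustration on simulated counts, not the author's analysis code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

rng = np.random.default_rng(4)
n_sites, n_visits, true_lam, true_p, K = 150, 3, 4.0, 0.4, 60

N = rng.poisson(true_lam, n_sites)                          # latent abundance per site
y = rng.binomial(N[:, None], true_p, (n_sites, n_visits))   # repeated counts

def neg_log_lik(params):
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))  # log / logit scales
    Ns = np.arange(K + 1)
    prior = poisson.pmf(Ns, lam)                   # P(N) for N = 0..K (truncated)
    ll = 0.0
    for i in range(n_sites):
        # P(y_i | N) multiplied over visits, then summed over possible N
        like_N = np.prod(binom.pmf(y[i][:, None], Ns, p), axis=0)
        ll += np.log(np.sum(like_N * prior) + 1e-300)
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(2.0), 0.0], method="Nelder-Mead")
print("lambda-hat:", round(np.exp(fit.x[0]), 2),
      "p-hat:", round(1 / (1 + np.exp(-fit.x[1])), 2))
```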
Maas, Anne H; Rozendaal, Yvonne J W; van Pul, Carola; Hilbers, Peter A J; Cottaar, Ward J; Haak, Harm R; van Riel, Natal A W
2015-03-01
Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters allows to describe heterogeneity in the data and shows the capabilities of this model for individualization. We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. © 2014 Diabetes Technology Society.
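The verification criterion quoted above is easy to express directly; the sketch below checks whether at least 95% of glucose data points fall within ±20% of the corresponding model values. The glucose numbers used are invented, not taken from the OGTT datasets.

```python
import numpy as np

def passes_verification(measured, model, tolerance=0.20, required_fraction=0.95):
    """True if enough data points lie within +/- tolerance of the model values."""
    measured = np.asarray(measured, dtype=float)
    model = np.asarray(model, dtype=float)
    within = np.abs(measured - model) <= tolerance * np.abs(model)
    return within.mean() >= required_fraction, within.mean()

# Invented OGTT glucose values (mmol/L): model prediction vs. measured data
model_glucose = [5.1, 7.8, 8.9, 8.2, 7.0, 6.1, 5.4]
measured_glucose = [5.0, 8.4, 9.6, 8.0, 6.4, 5.6, 5.5]
ok, frac = passes_verification(measured_glucose, model_glucose)
print(f"{frac:.0%} of points within 20% ->", "verified" if ok else "not verified")
```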
Linking Item Parameters to a Base Scale
ERIC Educational Resources Information Center
Kang, Taehoon; Petersen, Nancy S.
2012-01-01
This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord in "Appl Psychol Measure"…
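A hedged sketch of the Stocking and Lord characteristic-curve linking criterion for 2PL items, to show the general idea only (BILOG-MG and the cited papers handle many details omitted here): find the slope A and intercept B of the scale transformation that makes the transformed test characteristic curve of the common items match the base-scale curve. The item parameter values and quadrature grid are hypothetical, and the unweighted form of the criterion is assumed.

```python
import numpy as np
from scipy.optimize import minimize

def tcc(theta, a, b, D=1.7):
    """Test characteristic curve of a 2PL form: sum of item response functions."""
    P = 1.0 / (1.0 + np.exp(-D * a[None, :] * (theta[:, None] - b[None, :])))
    return P.sum(axis=1)

# Hypothetical common-item parameter estimates on the base scale and the new scale
a_base = np.array([1.10, 0.80, 1.40, 0.90]); b_base = np.array([-0.5, 0.2, 0.8, -1.0])
a_new = np.array([0.90, 0.65, 1.15, 0.75]);  b_new = np.array([-0.2, 0.65, 1.4, -0.8])

theta = np.linspace(-4, 4, 41)              # quadrature points on the base scale

def stocking_lord(AB):
    A, B = AB
    # Rescale new-form estimates to the base scale: a -> a/A, b -> A*b + B
    return np.sum((tcc(theta, a_base, b_base) - tcc(theta, a_new / A, A * b_new + B))**2)

res = minimize(stocking_lord, x0=[1.0, 0.0], method="Nelder-Mead")
print("slope A =", round(res.x[0], 3), "intercept B =", round(res.x[1], 3))
```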
NASA Astrophysics Data System (ADS)
Alkharji, Mohammed N.
Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoir parameters; the fracture interaction and geometry within the reservoir is given less attention. T-Matrix and Linear Slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem has an ill-posed, overdetermined, underconstrained, rank-deficient system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local minimum solutions. The prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton, method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups. The first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the smallest least-squares residual correspond to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties. The results showed that the hybrid algorithm successfully predicted the fracture parametrization, geometry, and the fluid content within the modeled reservoir. The method was also applied to an elastic tensor extracted from the Weyburn field in Saskatchewan, Canada. The solution suggested no presence of fractures but only a VTI system caused by the shale layering in the targeted reservoir; this interpretation is supported by other Weyburn field data.
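Below is a generic sketch of the hybrid search strategy described (not the elastic-tensor inversion itself): group-1 parameters are enumerated over a coarse grid, and for each candidate the group-2 parameters are fitted by a Gauss-Newton-type least-squares solver; the smallest residual over all grid points selects the final model. The forward model here is a made-up two-group nonlinear function.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 50)

def forward(g1, g2, x):
    """Toy forward model: group-1 = (shape c), group-2 = (amplitude a, offset b)."""
    (c,) = g1
    a, b = g2
    return a * np.exp(-c * x) + b

g1_true, g2_true = (3.0,), (2.0, 0.5)
data = forward(g1_true, g2_true, x) + rng.normal(0, 0.02, x.size)

best = None
for c in np.linspace(0.5, 6.0, 12):            # enumerate group-1 (no prior info)
    fit = least_squares(lambda g2: forward((c,), g2, x) - data, x0=[1.0, 0.0])
    resid = np.sum(fit.fun**2)
    if best is None or resid < best[0]:
        best = (resid, c, fit.x)

print("best group-1 c:", round(best[1], 2), "group-2 (a, b):", np.round(best[2], 2))
```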
Fundamentals of capillary electrochromatography: migration behavior of ionized sample components.
Xiang, Rong; Horváth, Csaba
2002-02-15
The mechanism of separating charged species by capillary electrochromatography (CEC) was modeled with the conditions of ideal/linear chromatography by using a simple random walk. The most novel aspect of the work rests with the assumption that in a sufficiently high electric field ionized sample components can also migrate in the adsorbed state on the ionized surface of the stationary phase. This feature of CEC leads to the introduction of three dimensionless parameters: alpha, the reduced mobility of a sample component with the electroosmotic mobility as the reference; beta, the CEC retention factor; and gamma, the ratio of the electrophoretic migration velocity and the velocity of surface electrodiffusion. Since the interplay of retentive and electrophoretic forces determines the overall migration velocity, the separation mechanism in CEC is governed by the relative importance of the above parameters. The model predicts conditions under which the features of the CEC system engender migration behavior that manifests itself in a relatively narrow elution window and in a gradient-like elution pattern in the separation of peptides and proteins by using pro forma isocratic CEC. It is believed that such elution patterns, which resemble those obtained by the use of an external gradient of the eluent, are brought about by the formation of an internal gradient in the CEC system that gives rise to concomitant peak compression. The peculiarities of CEC are discussed in the three operational modalities of the technique: co-current, countercurrent, and co-counter CEC. The results suggest that CEC, which is often called "liquid chromatography on electrophoretic platform", is an analytical tool with great potential in the separation of peptides and proteins.
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e. the embedding dimension, time lag, and nearest neighbor number. The optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to local optima and thus limits the prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization can help the local model provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
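The sketch below shows the core of the local model referenced above: time-delay embedding of a scalar series followed by nearest-neighbour prediction. The embedding dimension, time lag and neighbour number are fixed by hand here; the paper's point is precisely that these should be optimized jointly (e.g. with simulated annealing) rather than chosen separately. The "streamflow-like" series is synthetic.

```python
import numpy as np

def embed(series, m, tau):
    """Time-delay embedding: row j is [x(j), x(j+tau), ..., x(j+(m-1)tau)]."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

def local_predict(series, m=3, tau=2, k=5):
    """Predict the next value from the k nearest neighbours in phase space."""
    X = embed(series, m, tau)
    targets = series[(m - 1) * tau + 1:]     # value that follows each embedded state
    X_hist, query = X[:-1], X[-1]            # last state has no known successor
    dist = np.linalg.norm(X_hist - query, axis=1)
    idx = np.argsort(dist)[:k]
    return targets[idx].mean()

# Synthetic "streamflow-like" series: noisy seasonal signal (illustrative only)
t = np.arange(500)
flow = 10 + 5 * np.sin(2 * np.pi * t / 30) + np.random.default_rng(6).normal(0, 0.5, t.size)

print("one-step forecast:", round(local_predict(flow), 2),
      "| last observed value:", round(float(flow[-1]), 2))
```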
Program for computer aided reliability estimation
NASA Technical Reports Server (NTRS)
Mathur, F. P. (Inventor)
1972-01-01
A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
The nature of the continuous non-equilibrium phase transition of Axelrod's model
NASA Astrophysics Data System (ADS)
Peres, Lucas R.; Fontanari, José F.
2015-09-01
Axelrod's model in the square lattice with nearest-neighbors interactions exhibits culturally homogeneous as well as culturally fragmented absorbing configurations. In the case in which the agents are characterized by F = 2 cultural features and each feature assumes k states drawn from a Poisson distribution of parameter q, these regimes are separated by a continuous transition at q_c = 3.10 ± 0.02. Using Monte Carlo simulations and finite-size scaling we show that the mean density of cultural domains μ is an order parameter of the model that vanishes as μ ∼ (q - q_c)^β with β = 0.67 ± 0.01 at the critical point. In addition, for the correlation length critical exponent we find ν = 1.63 ± 0.04 and for Fisher's exponent, τ = 1.76 ± 0.01. This set of critical exponents places the continuous phase transition of Axelrod's model apart from the known universality classes of non-equilibrium lattice models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Schimpe, Michael; von Kuepach, Markus Edler
For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures as well as the increased cycling degradation at high state of charge are calculated separately. For parameterization, a lifetime test study is conducted including storage and cycle tests. Additionally, the model is validated through a dynamic current profile based on real-world application in a stationary energy storage system, revealing its accuracy. At the end of testing, the model error for the cell capacity loss in the application-based tests is below 1 % of the original cell capacity.
NASA Astrophysics Data System (ADS)
Sutradhar, S.; Basu, S.; Paul, R.
2015-10-01
Cell division through proper spindle formation is one of the key puzzles in cell biology. In most mammalian cells, chromosomes spontaneously arrange to achieve a stable bipolar spindle during metaphase which eventually ensures proper segregation of the DNA into the daughter cells. In this paper, we present a robust three-dimensional mechanistic model to investigate the formation and maintenance of a bipolar mitotic spindle in mammalian cells under different physiological constraints. Using realistic parameters, we test spindle viability by measuring the spindle length and studying the chromosomal configuration. The model strikingly predicts a feature of the spindle instability arising from the insufficient intercentrosomal angular separation and impaired sliding of the interpolar microtubules. In addition, our model successfully reproduces chromosomal patterns observed in mammalian cells, when activity of different motor proteins is perturbed.
Reaction-mediated entropic effect on phase separation in a binary polymer system
NASA Astrophysics Data System (ADS)
Sun, Shujun; Guo, Miaocai; Yi, Xiaosu; Zhang, Zuoguang
2017-10-01
We present a computer simulation to study the phase separation behavior induced by polymerization in a binary system comprising polymer chains and reactive monomers. We examined the influence of interaction parameter between components and monomer concentration on the reaction-induced phase separation. The simulation results demonstrate that increasing interaction parameter (enthalpic effect) would accelerate phase separation, while entropic effect plays a key role in the process of phase separation. Furthermore, scanning electron microscopy observations illustrate identical morphologies as found in theoretical simulation. This study may enrich our comprehension of phase separation in polymer mixture.
NASA Astrophysics Data System (ADS)
Cabassi, Giovanni; Cavalli, Daniele; Borrelli, Lamberto; Degano, Luigi; Marino Gallina, Pietro
2014-05-01
The use of simulation models to study the turnover of soil organic matter (SOM) can support experimental data interpretation and the optimization of manure management. ICBM/2 (Kätterer, 2001) is a SOM simulation model that describes the turnover of SOM with three pools: one for old humified SOM (CO) and two for added manure, CL (labile "young" C) and CR (resistant "young" C). C flows out of CL and CR either to be humified (fraction h) or to be lost as CO2-C (fraction 1-h). All pools decay with first-order kinetics with parameters kYL, kYR and kO (fig. 1). With this model of SOM turnover, during manure decomposition in the soil only the evolved CO2 can be easily measured. Near infrared (NIR) spectroscopy has proved to be a useful technique for soil C evaluation. Since different soil C pools are expected to have different chemical composition, it has been shown that NIR can be used as a cheap technique to develop calibration models that estimate the amount of C belonging to different pools. The aim of this work was to compare the calibration of ICBM/2 using C respiration data with calibration using NIR predictions of the CO and CL pools. A total of six laboratory treatments were established using the same soil, corresponding to the application of five fertilisers and a control treatment: 1) control without N fertilisation; 2) ammonium sulphate; 3) anaerobically digested dairy cow slurry (Digested slurry); 4-5) the liquid (Liquid fraction) and solid (Solid fraction) fractions after mechanical separation of Digested slurry; and 6) anaerobically stored dairy cow slurry (Stored slurry). The "nursery" method was used with 12 sampling dates. NIR analyses were performed on the air-dried, ground soils. Spectra were collected using an FT-NIR spectrometer. Parameter calibration was done separately for each soil using the downhill simplex method. For each manure, a C partitioning factor (Fi) was optimised. In each optimization step, measured respiration data or NIR estimates of CL and CO were used as input for the minimisation objective function. In the end, the algorithm found the parameters that gave the lowest average RMSE in the estimation of respired C. The model parameter estimates obtained using C respiration data and NIR predictions were comparable, indicating a general ability of the NIR method to estimate model parameters together with a good prediction of C mineralisation.
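For illustration, the sketch below implements a two-pool ICBM-type carbon balance (one young and one old pool, a simplification of the three-pool ICBM/2 described above) and calibrates kY, kO and h against cumulative respired C with the downhill simplex (Nelder-Mead) method. All data, starting values and the annual time step are invented; the actual ICBM/2 equations and calibration data differ.

```python
import numpy as np
from scipy.optimize import minimize

def icbm_respiration(params, c_input, years, y0=0.0, o0=50.0, r=1.0):
    """Two-pool ICBM-type balance: dY/dt = i - kY*r*Y, dO/dt = h*kY*r*Y - kO*r*O.
    Returns cumulative respired C: the non-humified young-pool flow plus old-pool decay."""
    kY, kO, h = params
    Y, O, respired, out = y0, o0, 0.0, []
    for t in range(years):
        decomposed_Y = kY * r * Y
        decomposed_O = kO * r * O
        respired += (1.0 - h) * decomposed_Y + decomposed_O
        Y += c_input - decomposed_Y
        O += h * decomposed_Y - decomposed_O
        out.append(respired)
    return np.array(out)

# Invented cumulative respiration data (t C/ha) for a treatment with annual C input
years, c_input = 12, 2.0
observed = np.array([0.8, 2.6, 4.7, 6.8, 8.9, 11.0,
                     13.1, 15.2, 17.3, 19.4, 21.5, 23.6])

def rmse(params):
    return np.sqrt(np.mean((icbm_respiration(params, c_input, years) - observed) ** 2))

fit = minimize(rmse, x0=[0.8, 0.015, 0.3], method="Nelder-Mead")  # downhill simplex
print("kY, kO, h =", np.round(fit.x, 3), "RMSE =", round(rmse(fit.x), 3))
```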
The performance and relationship among range-separated schemes for density functional theory
NASA Astrophysics Data System (ADS)
Nguyen, Kiet A.; Day, Paul N.; Pachter, Ruth
2011-08-01
The performance and relationship among different range-separated (RS) hybrid functional schemes are examined using the Coulomb-attenuating method (CAM) with different values for the fractions of exact Hartree-Fock (HF) exchange (α), long-range HF (β), and a range-separation parameter (μ), where the cases of α + β = 1 and α + β = 0 were designated as CA and CA0, respectively. Attenuated PBE exchange-correlation functionals with α = 0.20 and μ = 0.20 (CA-PBE) and α = 0.25 and μ = 0.11 (CA0-PBE) are closely related to the LRC-ωPBEh and HSE functionals, respectively. Time-dependent density functional theory calculations were carried out for a number of classes of molecules with varying degrees of charge-transfer (CT) character to provide an assessment of the accuracy of excitation energies from the CA functionals and a number of other functionals with different exchange hole models. Functionals that provided reasonable estimates for local and short-range CT transitions were found to give large errors for long-range CT excitations. In contrast, functionals that afforded accurate long-range CT excitation energies significantly overestimated energies for short-range CT and local transitions. The effects of exchange hole models and parameters developed for RS functionals for CT excitations were analyzed in detail. The comparative analysis across compound classes provides a useful benchmark for CT excitations.
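For reference, the Coulomb-attenuating split of the interelectronic Coulomb operator that defines the α, β, and μ parameters discussed above can be written, in the usual CAM convention (a standard textbook form, not quoted from the paper), as

\[
\frac{1}{r_{12}} \;=\; \frac{1-\left[\alpha + \beta\,\operatorname{erf}(\mu r_{12})\right]}{r_{12}} \;+\; \frac{\alpha + \beta\,\operatorname{erf}(\mu r_{12})}{r_{12}},
\]

where the second term is treated with exact (HF) exchange and the first with the DFT exchange functional. The exact-exchange fraction is therefore α at short range (r_12 → 0) and α + β at long range (r_12 → ∞), so α + β = 1 (CA) restores full HF exchange asymptotically, while α + β = 0 (CA0) screens it out at long range, as in HSE-type functionals.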
The open XXX spin chain in the SoV framework: scalar product of separate states
NASA Astrophysics Data System (ADS)
Kitanine, N.; Maillet, J. M.; Niccoli, G.; Terras, V.
2017-06-01
We consider the XXX open spin-1/2 chain with the most general non-diagonal boundary terms, which we solve by means of the quantum separation of variables (SoV) approach. We compute the scalar products of separate states, a class of states which notably contains all the eigenstates of the model. As usual for models solved by SoV, these scalar products can be expressed as some determinants with a non-trivial dependence on the inhomogeneity parameters that have to be introduced for the method to be applicable. We show that these determinants can be transformed into alternative ones in which the homogeneous limit can easily be taken. These new representations can be considered as generalizations of the well-known determinant representation for the scalar products of the Bethe states of the periodic chain. In the particular case where a constraint is applied on the boundary parameters, such that the transfer matrix spectrum and eigenstates can be characterized in terms of polynomial solutions of a usual T-Q equation, the scalar product that we compute here corresponds to the scalar product between two off-shell Bethe-type states. If in addition one of the states is an eigenstate, the determinant representation can be simplified, hence leading in this boundary case to direct analogues of algebraic Bethe ansatz determinant representations of the scalar products for the periodic chain.
Samlan, Robin A.; Story, Brad H.
2011-01-01
Purpose To relate vocal fold structure and kinematics to two acoustic measures: cepstral peak prominence (CPP) and the amplitude of the first harmonic relative to the second (H1-H2). Method A computational, kinematic model of the medial surfaces of the vocal folds was used to specify features of vocal fold structure and vibration in a manner consistent with breathy voice. Four model parameters were altered: degree of vocal fold adduction, surface bulging, vibratory nodal point, and supraglottal constriction. CPP and H1-H2 were measured from simulated glottal area, glottal flow and acoustic waveforms and related to the underlying vocal fold kinematics. Results CPP decreased with increased separation of the vocal processes, whereas the nodal point location had little effect. H1-H2 increased as a function of separation of the vocal processes in the range of 1–1.5 mm and decreased with separation > 1.5 mm. Conclusions CPP is generally a function of vocal process separation. H1*-H2* will increase or decrease with vocal process separation based on vocal fold shape, pivot point for the rotational mode, and supraglottal vocal tract shape, limiting its utility as an indicator of breathy voice. Future work will relate the perception of breathiness to vocal fold kinematics and acoustic measures. PMID:21498582
Kinetics of motility-induced phase separation and swim pressure
NASA Astrophysics Data System (ADS)
Patch, Adam; Yllanes, David; Marchetti, M. Cristina
Active Brownian particles (ABPs) represent a minimal model of active matter consisting of self-propelled spheres with purely repulsive interactions and rotational noise. We correlate the time evolution of the mean pressure towards its steady state value with the kinetics of motility-induced phase separation. For parameter values corresponding to phase separated steady states, we identify two dynamical regimes. The pressure grows monotonically in time during the initial regime of rapid cluster formation, overshooting its steady state value and then quickly relaxing to it, and remains constant during the subsequent slower period of cluster coalescence and coarsening. The overshoot is a distinctive feature of active systems. NSF-DMR-1305184, NSF-DGE-1068780, ACI-1341006, FIS2015-65078-C02, BIFI-ZCAM.
Linking Item Parameters to a Base Scale. ACT Research Report Series, 2009-2
ERIC Educational Resources Information Center
Kang, Taehoon; Petersen, Nancy S.
2009-01-01
This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord (1983) characteristic curve method…
Control of unsteady separated flow associated with the dynamic stall of airfoils
NASA Technical Reports Server (NTRS)
Wilder, M. C.
1994-01-01
A unique active flow-control device is proposed for the control of unsteady separated flow associated with the dynamic stall of airfoils. The device is an adaptive-geometry leading-edge which will allow controlled, dynamic modification of the leading-edge profile of an airfoil while the airfoil is executing an angle-of-attack pitch-up maneuver. A carbon-fiber composite skin has been bench tested, and a wind tunnel model is under construction. A baseline parameter study of compressible dynamic stall was performed for flow over an NACA 0012 airfoil. Parameters included Mach number, pitch rate, pitch history, and boundary layer tripping. Dynamic stall data were recorded via point-diffraction interferometry and the interferograms were analyzed with in-house developed image processing software. A new high-speed phase-locked photographic image recording system was developed for real-time documentation of dynamic stall.
NASA Astrophysics Data System (ADS)
Tian, Ye; Yan, Chunhua; Zhang, Tianlong; Tang, Hongsheng; Li, Hua; Yu, Jialu; Bernard, Jérôme; Chen, Li; Martin, Serge; Delepine-Gilon, Nicole; Bocková, Jana; Veis, Pavel; Chen, Yanping; Yu, Jin
2017-09-01
Laser-induced breakdown spectroscopy (LIBS) has been applied to classify French wines according to their production regions. The use of a surface-assisted (or surface-enhanced) sample preparation method enabled a sub-ppm limit of detection (LOD), which led to the detection and identification of at least 22 metal and nonmetal elements in a typical wine sample, including major, minor and trace elements. An ensemble of 29 bottles of French wine, either red or white, from five production regions, Alsace, Bourgogne, Beaujolais, Bordeaux and Languedoc, was analyzed together with a wine from California, considered as an outlier. A non-supervised classification model based on principal component analysis (PCA) was first developed. The results showed a limited separation power for this model, which nevertheless allowed us, in a step-by-step approach, to understand the physical reasons behind each stage of sample separation and, in particular, to observe the influence of the matrix effect on the sample classification. A supervised classification model was then developed based on random forest (RF), a nonlinear algorithm. With the model parameters optimized, the classification results were satisfactory, reaching a classification accuracy of 100% for the tested samples. We discuss in particular the effect of spectrum normalization with an internal reference, the choice of input variables for the classification models, and the optimization of the parameters of the developed models.
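A hedged sketch of the two-stage analysis described above, using scikit-learn: unsupervised PCA for exploration, then a supervised random forest classifier. The arrays X (per-sample elemental line intensities) and y (region labels) are placeholders, not the study's data, and the hyperparameter choices are illustrative.

    # Hedged sketch: PCA exploration followed by random forest classification.
    # X and y are placeholder arrays standing in for LIBS intensities and regions.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    y = np.repeat(np.arange(5), 6)[:29]            # 29 wines, 5 regions (placeholder)
    X = rng.normal(size=(29, 22)) + 0.5 * y[:, None]  # 22 elements, class-shifted toy data

    # Unsupervised step: project onto the leading principal components.
    scores = PCA(n_components=2).fit_transform(X)

    # Supervised step: random forest, with n_estimators as one tunable parameter.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X, y, cv=3).mean()
    print("PCA scores shape:", scores.shape, "CV accuracy (toy data): %.2f" % acc)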
Seven protective miRNA signatures for prognosis of cervical cancer.
Liu, Bei; Ding, Jin-Feng; Luo, Jian; Lu, Li; Yang, Fen; Tan, Xiao-Dong
2016-08-30
Cervical cancer is the second leading cause of cancer death in females in their 20s and 30s, but studies of its prognosis have been limited. This study aims to identify miRNAs related to prognosis and to study their functions. TCGA data of patients with cervical cancer were used to build univariate Cox models with a single clinical parameter or miRNA expression level. A multivariate Cox model was built using both clinical information and miRNA expression levels. Finally, STRING was used to enrich gene ontology terms and pathways for the validated targets of significant miRNAs and to visualize the interactions among them. Using univariate Cox models with clinical parameters, we found that two clinical parameters, tobacco use and clinical stage, and seven miRNAs were highly correlated with survival status. Using only the expression levels of the miRNA signatures, the model could successfully separate patients into high-risk and low-risk groups. An optimal feature-selected model was proposed based on the two clinical parameters and seven miRNAs. Functional analysis of these seven miRNAs showed that they were associated with various cancer-related pathways, including the MAPK, VEGF and P53 pathways. These results support the identification of targets for targeted therapy, which could potentially allow tailoring of treatment for cervical cancer patients.
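The modelling step above is a standard Cox proportional-hazards analysis. Below is a hedged sketch using the lifelines package; the data frame and its column names (time, event, stage, tobacco, miR_1) are illustrative placeholders, not TCGA fields.

    # Hedged sketch: univariate and multivariate Cox proportional-hazards fits,
    # followed by a partial-hazard "risk score" that could be thresholded into
    # high-/low-risk groups. All data and column names are placeholders.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "time":    [300, 450, 120, 800, 610, 90, 1000, 250],   # survival time
        "event":   [1, 0, 1, 0, 1, 1, 0, 1],                   # 1 = death observed
        "stage":   [2, 1, 3, 2, 2, 4, 1, 1],
        "tobacco": [1, 0, 1, 1, 0, 1, 0, 1],
        "miR_1":   [0.2, 1.1, 0.4, 2.0, 0.9, 1.6, 1.8, 0.3],
    })

    # Univariate model with a single covariate.
    CoxPHFitter().fit(df[["time", "event", "miR_1"]],
                      duration_col="time", event_col="event").print_summary()

    # Multivariate model combining clinical parameters and a miRNA signature.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    risk = cph.predict_partial_hazard(df)   # risk score per patient
    print(risk.head())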
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
Estimation of Boreal Forest Biomass Using Spaceborne SAR Systems
NASA Technical Reports Server (NTRS)
Saatchi, Sassan; Moghaddam, Mahta
1995-01-01
In this paper, we report on the use of a semi-empirical algorithm derived from a two-layer radar backscatter model for forest canopies. The model stratifies the forest canopy into crown and stem layers and separates the structural and biometric attributes of the canopy. The structural parameters are estimated by training the model with polarimetric SAR (synthetic aperture radar) data acquired over homogeneous stands with known above-ground biomass. Given the structural parameters, the semi-empirical algorithm has four remaining parameters, crown biomass, stem biomass, surface soil moisture, and surface rms height, which can be estimated from at least four independent SAR measurements. The algorithm has been used to generate biomass maps over entire images acquired by the JPL AIRSAR and SIR-C SAR systems. The semi-empirical algorithms are then modified for use with single-frequency radar systems such as ERS-1, JERS-1, and Radarsat. The accuracy of biomass estimation from single-channel radars is compared with the case when the channels are used together in synergism or in a polarimetric system.
Analytical fitting model for rough-surface BRDF.
Renhorn, Ingmar G E; Boreman, Glenn D
2008-08-18
A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
NASA Astrophysics Data System (ADS)
Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.
2017-04-01
We propose a cellular automaton model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability of targeting the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which was not previously observed for the original sandpile. For this critical range of probability values, the model statistics compare remarkably well with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes with just a single additional parameter, while adding minimal complexity to the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
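For intuition, the following is a schematic sketch of a sandpile-type cellular automaton with a single targeting probability, in the spirit of the model above. The toppling rule, threshold and parameter values follow the standard BTW sandpile and are assumptions, not the authors' exact rules; avalanche size stands in for released energy.

    # Schematic sketch (not the authors' exact rules): BTW-style sandpile where each
    # added grain goes to the currently most "susceptible" (fullest) site with
    # probability p_target, otherwise to a random site. Grid size, threshold z_c
    # and p_target are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    Lside, z_c, p_target = 32, 4, 0.006
    grid = rng.integers(0, z_c, size=(Lside, Lside))

    def drive_and_relax(grid):
        if rng.random() < p_target:
            i, j = np.unravel_index(np.argmax(grid), grid.shape)  # most susceptible site
        else:
            i, j = rng.integers(0, Lside, size=2)                 # random site
        grid[i, j] += 1
        size = 0
        while True:
            unstable = np.argwhere(grid >= z_c)
            if len(unstable) == 0:
                return size
            for i, j in unstable:
                grid[i, j] -= z_c
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < Lside and 0 <= nj < Lside:       # open boundaries
                        grid[ni, nj] += 1

    sizes = np.array([drive_and_relax(grid) for _ in range(10000)])
    print("fraction of drives producing an avalanche:", (sizes > 0).mean())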
Computer Modelling of Cyclic Deformation of High-Temperature Materials
1993-06-14
…possible by multiplying the kink pair separation by the kink height a to extract the thermodynamic parameters… This latter procedure is not accurate for… related to the breaking angle α(a) (a is the radius of the circle of intersection between particle and slip plane) by F = Vb² sin α(a), where each breaking angle α(a) relates to an…
On the causes of geomagnetic activity
NASA Technical Reports Server (NTRS)
Svalgaard, L.
1975-01-01
The causes of geomagnetic activity are studied both theoretically in terms of the reconnection model and empirically using the am-index and interplanetary solar wind parameters. It is found that two separate mechanisms supply energy to the magnetosphere. One mechanism depends critically on the magnitude and direction of the interplanetary magnetic field. Both depend strongly on solar wind speed.
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.
2013-12-01
Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrologic Data) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.
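As a hedged illustration of the two-tiered hierarchical idea (not the SPARROW model itself), the sketch below treats catchment-level runoff coefficients as random variables drawn from a regional hyper-distribution and estimates both tiers jointly with PyMC. All variable names, priors and the linear runoff relation are illustrative assumptions.

    # Hedged toy sketch of a two-tier Bayesian hierarchical model in PyMC:
    # catchment coefficients ~ Normal(regional mean, regional sd), observations
    # follow a simple linear precipitation-flow relation. Everything is synthetic.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    n_catchments, n_obs = 20, 24                       # catchments x monthly observations
    precip = rng.gamma(2.0, 50.0, size=(n_catchments, n_obs))
    true_coef = rng.normal(0.4, 0.05, size=n_catchments)
    flow = true_coef[:, None] * precip + rng.normal(0, 5.0, size=precip.shape)

    with pm.Model():
        mu_c = pm.Normal("mu_coef", 0.5, 0.5)          # regional mean runoff coefficient
        sd_c = pm.HalfNormal("sd_coef", 0.5)           # between-catchment variability
        coef = pm.Normal("coef", mu_c, sd_c, shape=n_catchments)
        sigma = pm.HalfNormal("sigma", 10.0)           # observation error
        pm.Normal("flow", coef[:, None] * precip, sigma, observed=flow)
        idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

    print("posterior mean of regional coefficient:",
          float(idata.posterior["mu_coef"].mean()))

Partial pooling of this kind is what yields the lower-variance, less biased predictions mentioned above, since poorly observed catchments borrow strength from the regional distribution.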
Electrokinetic dispersion in microfluidic separation systems
NASA Astrophysics Data System (ADS)
Molho, Joshua Irving
Numerous efforts have focused on engineering miniaturized chemical analysis devices that are faster, more portable and consume smaller volumes of expensive reagents than their macroscale counterparts. Many of these analysis devices employ electrokinetic effects to transport picoliter volumes of liquids and to separate chemical species from an initially mixed sample volume. In these microfluidic separation systems, dispersion must be minimized to obtain the highest resolution separation possible. This work focuses on modeling, simulation and experimental measurement of two electrokinetic dispersion mechanisms that can reduce the effectiveness of microfluidic separation systems: dispersion resulting from non-uniform wall zeta-potential, and dispersion caused by microchannel turns. When the surface of a microchannel has non-uniform zeta-potential (e.g., if the surface charge varies along the length of the microchannel), an applied electric field creates both electroosmotic and pressure-driven flow. A caged-fluorescence imaging technique was used to visualize the dispersion caused by this electrokinetically induced pressure-driven flow. A simple model for a single channel with an axially varying surface charge is presented and compared to experimental measurements. Microchannel turns have been shown to create dispersion of electrokinetically transported analyte bands. Using a method of moments analysis, a model is developed that quantifies this dispersion and identifies the conditions under which turn dispersion limits the resolution of a microfluidic separation system. Measurements using the caged-fluorescence visualization technique were used to verify this model. New turn geometries are presented and were optimized using both a reduced parameter technique as well as a more generalized, numerical shape optimization approach. These improved turn designs were manufactured using two fabrication techniques and then tested experimentally. The turn optimization approaches and resulting turn geometries described here are shown to reduce turn dispersion to less than 1% of the dispersion caused by unoptimized, constant-width turns.
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
Impact of grade separator on pedestrian risk taking behavior.
Khatoon, Mariya; Tiwari, Geetam; Chatterjee, Niladri
2013-01-01
Pedestrians on Delhi roads are often exposed to high risks. This is because the basic needs of pedestrians are not recognized as part of urban transport infrastructure improvement projects in Delhi. Rather, an ever-increasing number of cars and motorized two-wheelers encourages the construction of large numbers of flyovers/grade separators to facilitate signal-free movement for motorized vehicles, exposing pedestrians to greater risk. This paper describes a statistical analysis of pedestrian risk-taking behavior while crossing the road, before and after the construction of a grade separator at an intersection in Delhi. A significant number of pedestrians are willing to take risks in both the before and after situations. The results indicate that the absence of signals makes pedestrians behave independently, leading to increased variability in their risk-taking behavior. Variability in the speeds of all categories of vehicles increased after the construction of grade separators. After the construction of the grade separator, the waiting time of pedestrians at the starting point of crossing increased, and the correlation between waiting times and gaps accepted by pedestrians shows that after a certain waiting time, pedestrians become impatient and accept smaller gaps to cross the road. A logistic regression model is fitted by assuming that the probability of road crossing by pedestrians depends on the gap size (in s) between the pedestrian and conflicting vehicles, sex, age, type of pedestrian (single or in a group) and type of conflicting vehicle. The logistic regression results show that before the construction of the grade separator the probability of road crossing depended only on the gap size parameter; after the construction of the grade separator, other parameters become significant in determining pedestrian risk-taking behavior. Copyright © 2012 Elsevier Ltd. All rights reserved.
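A gap-acceptance model of this kind is an ordinary binary logistic regression. The hedged sketch below uses statsmodels; the data frame and its columns (gap size in seconds, sex, group crossing, vehicle type) and the generating coefficients are placeholders, not the study's data.

    # Hedged sketch: logistic regression of crossing decision on gap size and
    # categorical covariates. All data are synthetic placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "gap":           rng.uniform(1.0, 8.0, n),   # offered gap size (s)
        "male":          rng.integers(0, 2, n),
        "in_group":      rng.integers(0, 2, n),
        "heavy_vehicle": rng.integers(0, 2, n),
    })
    # Placeholder response: crossing probability rises with gap size.
    p = 1.0 / (1.0 + np.exp(-(0.9 * df["gap"] - 3.5)))
    df["crossed"] = rng.binomial(1, p)

    X = sm.add_constant(df[["gap", "male", "in_group", "heavy_vehicle"]])
    result = sm.Logit(df["crossed"], X).fit(disp=0)
    print(result.summary())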
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
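To make the error decomposition concrete, the sketch below illustrates the histogram (binning) technique for a single input parameter: the optimal estimator is the conditional mean E[Y | inputs], the irreducible error is the mean-square residual about it, and the total error of any candidate model is at least as large. The synthetic "exact" quantity Y, input X and bin count are placeholders for DNS data and are assumptions.

    # Hedged illustration of an optimal estimator analysis with the histogram technique.
    # Irreducible error = mean squared residual about the binned conditional mean E[Y|X].
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, 200000)                                  # model input parameter
    Y = np.sin(2 * np.pi * X) + 0.2 * rng.standard_normal(X.size)      # quantity to be modelled

    nbins = 50
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.clip(np.digitize(X, edges) - 1, 0, nbins - 1)

    # Binned estimate of the optimal estimator E[Y|X].
    cond_mean = np.array([Y[idx == b].mean() for b in range(nbins)])
    irreducible = np.mean((Y - cond_mean[idx]) ** 2)

    # Total error of a deliberately crude candidate model Y_model(X) = 0.
    total = np.mean((Y - 0.0 * X) ** 2)
    print("irreducible error: %.4f  total model error: %.4f" % (irreducible, total))

With too few samples per bin, or with several input parameters, the binned estimate itself adds a spurious contribution to the irreducible error, which is precisely the effect the paper quantifies and avoids with neural networks or regression splines.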
NASA Astrophysics Data System (ADS)
Theodorsen, A.; E Garcia, O.; Rypdal, M.
2017-05-01
Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type.
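For readers unfamiliar with such reference models, the sketch below generates a synthetic filtered Poisson (shot-noise) time series with a purely additive noise term and reports its lowest-order moments. The one-sided exponential pulse shape, rates, amplitudes and noise level are illustrative assumptions.

    # Hedged sketch: synthetic filtered Poisson process plus additive Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 0.01, 1000.0                  # time step and record length
    tau_d = 1.0                           # pulse duration time
    rate = 0.5                            # pulse arrival rate

    n = int(T / dt)
    t = np.arange(n) * dt

    # Poisson arrivals with exponentially distributed amplitudes.
    n_events = rng.poisson(rate * T)
    arrivals = rng.uniform(0, T, n_events)
    amps = rng.exponential(1.0, n_events)

    spikes = np.zeros(n)
    np.add.at(spikes, (arrivals / dt).astype(int), amps)

    pulse = np.exp(-t / tau_d)            # one-sided exponential pulse shape
    signal = np.convolve(spikes, pulse)[:n]

    noisy = signal + 0.1 * rng.standard_normal(n)   # purely additive noise variant
    mean, var = noisy.mean(), noisy.var()
    skew = ((noisy - mean) ** 3).mean() / noisy.std() ** 3
    print("mean %.3f  variance %.3f  skewness %.3f" % (mean, var, skew))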
Saturn systems holddown acoustic efficiency and normalized acoustic power spectrum.
NASA Technical Reports Server (NTRS)
Gilbert, D. W.
1972-01-01
Saturn systems field acoustic data are used to derive mid- and far-field prediction parameters for rocket engine noise. The data were obtained during Saturn vehicle launches at the Kennedy Space Center. The data base is a sorted set of acoustic data measured during the period 1961 through 1971 for Saturn system launches SA-1 through AS-509. The model assumes hemispherical radiation from a simple source located at the intersection of the longitudinal axis of each booster and the engine exit plane. The model parameters are evaluated only during vehicle holddown. The acoustic normalized power spectrum and efficiency for each system are isolated as a composite from the data using linear numerical methods. The specific definitions of each allow separation. The resulting power spectra are nondimensionalized as a function of rocket engine parameters. The nondimensional Saturn system acoustic spectrum and efficiencies are compared as a function of Strouhal number with power spectra from other systems.
Locating and Modeling Regional Earthquakes with Broadband Waveform Data
NASA Astrophysics Data System (ADS)
Tan, Y.; Zhu, L.; Helmberger, D.
2003-12-01
Retrieving the source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events has reached a satisfactory stage in regions with a dense seismic network, such as TriNet in Southern California, revisiting historical events in these regions, or effectively investigating small events in real time in the many other places where normally only a few local waveforms plus some short-period recordings are available, is still a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on three-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 s for Pnl and 8-100 s for surface waves, except for a few anomalous paths involving greater structural complexity; meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for the available waveform data, which could be as few as two stations, assuming timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The detailed method requires a Green's function library constructed from a regionalized 1-D model together with the necessary calibration information, and adopts a grid search strategy for both hypocenter and focal mechanism. We show that the whole process can be easily automated and routinely provides reliable source parameter estimates with a couple of broadband stations. Two applications in the Tibet Plateau and Southern California will be presented, along with comparisons of results against other methods.
Unitarity of the Cabibbo-Kobayashi-Maskawa matrix and a nonuniversal gauge interaction model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kang Young
2007-12-01
Recent measurements of |V_us| from kaon decays strongly support the unitarity of the Cabibbo-Kobayashi-Maskawa matrix. The unitarity provides a stringent constraint on the parameter space of the nonuniversal gauge interaction model based on the separate SU(2)_L gauge group acting on the third generation fermions. I show that this constraint is stronger than those from the CERN LEP and SLC data and low-energy experiment data.
Measuring, modeling, and minimizing capacitances in heterojunction bipolar transistors
NASA Astrophysics Data System (ADS)
Anholt, R.; Bozada, C.; Dettmer, R.; Via, D.; Jenkins, T.; Barrette, J.; Ebel, J.; Havasy, C.; Sewell, J.; Quach, T.
1996-07-01
We demonstrate methods to separate junction and pad capacitances from on-wafer S-parameter measurements of HBTs with different areas and layouts. The measured junction capacitances are in good agreement with models, indicating that large-area devices are suitable for monitoring vendor epi-wafer doping. Measuring open HBTs does not give the correct pad capacitances. Finally, a capacitance comparison for a variety of layouts shows that bar-devices consistently give smaller base-collector values than multiple dot HBTs.
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
Motion control of musculoskeletal systems with redundancy.
Park, Hyunjoo; Durand, Dominique M
2008-12-01
Motion control of musculoskeletal systems for functional electrical stimulation (FES) is a challenging problem due to the inherent complexity of the systems. These include being highly nonlinear, strongly coupled, time-varying, time-delayed, and redundant. The redundancy in particular makes it difficult to find an inverse model of the system for control purposes. We have developed a control system for multiple input multiple output (MIMO) redundant musculoskeletal systems with little prior information. The proposed method separates the steady-state properties from the dynamic properties. The dynamic control uses a steady-state inverse model and is implemented with both a PID controller for disturbance rejection and an artificial neural network (ANN) feedforward controller for fast trajectory tracking. A mechanism to control the sum of the muscle excitation levels is also included. To test the performance of the proposed control system, a two degree of freedom ankle-subtalar joint model with eight muscles was used. The simulation results show that separation of steady-state and dynamic control allow small output tracking errors for different reference trajectories such as pseudo-step, sinusoidal and filtered random signals. The proposed control method also demonstrated robustness against system parameter and controller parameter variations. A possible application of this control algorithm is FES control using multiple contact cuff electrodes where mathematical modeling is not feasible and the redundancy makes the control of dynamic movement difficult.
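The control structure described above combines feedforward from a steady-state inverse model with PID feedback for disturbance rejection. The following is a minimal hedged sketch of that structure on a toy first-order plant; it is not the paper's musculoskeletal controller, and the plant, gains and reference are illustrative assumptions (the ANN feedforward is replaced here by the exact steady-state inverse for brevity).

    # Minimal hedged sketch: feedforward from a steady-state inverse model plus PID
    # feedback, driving a toy first-order plant tau*dy/dt = -y + plant_gain*u.
    dt, T = 0.001, 2.0
    kp, ki, kd = 4.0, 20.0, 0.02
    plant_gain, tau = 2.0, 0.1

    def inverse_steady_state(ref):
        return ref / plant_gain           # steady-state inverse of the toy plant

    y, integ, prev_err = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        ref = 1.0 if k * dt > 0.1 else 0.0          # step reference
        err = ref - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = inverse_steady_state(ref) + kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt / tau * (-y + plant_gain * u)       # Euler step of the toy plant

    print("final output: %.3f (reference 1.0)" % y)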
An experimental study of helicopter rotor rotational noise in a wind tunnel
NASA Technical Reports Server (NTRS)
Lee, A.; Harris, W. L.; Widnall, S. E.
1976-01-01
The rotational noise of model helicopter rotors in forward flight was studied in an anechoic wind tunnel. The parameters under study were the rotor thrust (blade loading), blade number and advance ratio. The separate effects of each parameter were identified with the other parameters being held constant. The directivity of the noise was also measured. Twelve sets of data for rotational noise as a function of frequency were compared with the theory of Lowson and Ollerhead. In general, the agreement is reasonably good, except for the cases of (1) low and high disk loadings, (2) the four bladed rotor, and (3) low advance ratios. The theory always under-estimates the rotational noise at high harmonics.
Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning
NASA Astrophysics Data System (ADS)
Zuberi, M. AH; Pratt, R. G.
2018-04-01
The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known `banana-donut' and the `rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However such model domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
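One common way to realize a Sobolev-type inner-product preconditioner (not necessarily the authors' scaled-Sobolev formulation) is to solve (I - lam * d^2/dx^2) g_s = g for the preconditioned update g_s, which damps high-wavenumber components of the raw gradient g without any Born-based prediction of scale. The 1-D sketch below illustrates this; lam, the grid and the test gradient are assumptions, and unit grid spacing is assumed in the second-difference operator.

    # Hedged 1-D illustration of a Sobolev-type gradient smoother.
    import numpy as np

    n, lam = 200, 25.0
    x = np.linspace(0.0, 1.0, n)
    g = np.sin(2 * np.pi * x) + 0.5 * np.sin(40 * np.pi * x)   # smooth + oscillatory parts

    # Second-difference operator with homogeneous Dirichlet ends (unit spacing assumed).
    D2 = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    g_s = np.linalg.solve(np.eye(n) - lam * D2, g)

    print("raw gradient roughness:   %.3f" % np.abs(np.diff(g)).sum())
    print("preconditioned roughness: %.3f" % np.abs(np.diff(g_s)).sum())

The smooth component passes through almost unchanged while the oscillatory component is strongly damped, which is the constrained scale separation the update preconditioning aims for.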
Wilson, Emma; Rustighi, Emiliano; Newland, Philip L; Mace, Brian R
2012-03-01
Muscle models are an important tool in the development of new rehabilitation and diagnostic techniques. Many models have been proposed in the past, but little work has been done on comparing the performance of models. In this paper, seven models that describe the isometric force response to pulse train inputs are investigated. Five of the models are from the literature, while two new models are also presented. Models are compared in terms of their ability to fit isometric force data, using Akaike's and Bayesian information criteria, and by examining the ability of each model to describe the underlying behaviour in response to individual pulses. Experimental data were collected by stimulating the locust extensor tibia muscle and measuring the force generated at the tibia. Parameters in each model were estimated by minimising the error between the modelled and actual force response for a set of training data. A separate set of test data, which included physiological kick-type data, was used to assess the models. It was found that a linear model performed the worst whereas a new model was found to perform the best. The parameter sensitivity of this new model was investigated using a one-at-a-time approach, and it was found that the force response is not particularly sensitive to changes in any parameter.
Koseki, Shigenobu; Nakamura, Nobutaka; Shiina, Takeo
2015-01-01
Bacterial pathogens such as Listeria monocytogenes, Escherichia coli O157:H7, Salmonella enterica, and Cronobacter sakazakii have demonstrated long-term survival in/on dry or low-water activity (aw) foods. However, there have been few comparative studies of the desiccation tolerance of these bacterial pathogens in the same food matrix. In the present study, the survival kinetics of the four bacterial pathogens, separately inoculated onto powdered infant formula as a model low-aw food, were compared during storage at 5, 22, and 35°C. No significant differences in the survival kinetics of E. coli O157:H7 and L. monocytogenes were observed. Salmonella showed significantly higher desiccation tolerance than these pathogens, and C. sakazakii demonstrated significantly higher desiccation tolerance than the other three bacteria studied. Thus, the desiccation tolerance was ranked as C. sakazakii > Salmonella > E. coli O157:H7 = L. monocytogenes. The survival kinetics of each bacterium were mathematically analyzed, and the observed kinetics were successfully described using the Weibull model. To evaluate the variability of the inactivation kinetics of the tested bacterial pathogens, a Monte Carlo simulation was performed using assumed probability distributions of the fitted parameters. The simulation results showed that storage temperature significantly influenced the survival of each bacterium in the dry environment, with bacterial inactivation becoming faster at higher storage temperatures. Furthermore, the fitted rate and shape parameters of the Weibull model were successfully modelled as functions of temperature. Numerical simulation of the bacterial inactivation under arbitrarily fluctuating temperature conditions was realized using these parameter functions.
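The Weibull survival model referred to above is commonly written as log10(N_t/N_0) = -(t/delta)^p, with scale parameter delta and shape parameter p. The hedged sketch below fits this form with scipy and propagates the fit uncertainty by Monte Carlo; the survival data are synthetic placeholders, not the study's measurements.

    # Hedged sketch: fit of the Weibull inactivation model and Monte Carlo propagation
    # of the fitted-parameter uncertainty. Data are synthetic placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_log_survival(t, delta, p):
        return -(t / delta) ** p

    t = np.array([7, 14, 28, 56, 84, 120], dtype=float)                # storage time (days)
    log_reduction = np.array([-0.3, -0.6, -1.1, -1.9, -2.4, -3.0])     # log10(N/N0)

    (delta_hat, p_hat), cov = curve_fit(weibull_log_survival, t, log_reduction,
                                        p0=(30.0, 1.0), maxfev=10000)
    print("delta = %.1f days, p = %.2f" % (delta_hat, p_hat))

    # Monte Carlo draws from the estimated parameter distribution.
    draws = np.random.default_rng(0).multivariate_normal([delta_hat, p_hat], cov, size=1000)
    pred_120 = np.array([weibull_log_survival(120.0, d, p) for d, p in draws])
    print("log10 reduction at 120 days: %.2f +/- %.2f" % (pred_120.mean(), pred_120.std()))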
Numerical modeling and preliminary validation of drag-based vertical axis wind turbine
NASA Astrophysics Data System (ADS)
Krysiński, Tomasz; Buliński, Zbigniew; Nowak, Andrzej J.
2015-03-01
The main purpose of this article is to verify and validate a mathematical description of the airflow around a wind turbine with a vertical axis of rotation, which could be considered representative of this type of device. Mathematical modeling of the airflow around wind turbines, in particular those with a vertical axis, is problematic due to the complex nature of this highly swirled flow. Moreover, the flow is turbulent and accompanied by rotation of the rotor and dynamic boundary layer separation. In such conditions, the key aspects of the mathematical model are an accurate turbulence description, the definition of circular motion along with accompanying effects such as the centrifugal and Coriolis forces, and the parameters of spatial and temporal discretization. The paper presents the impact of the different simulation parameters on the obtained results of the wind turbine simulation. The analysed models have been validated against experimental data published in the literature.
NASA Astrophysics Data System (ADS)
Kaliuzhnyi, Mykola; Bushuev, Felix; Shulga, Oleksandr; Sybiryakova, Yevgeniya; Shakun, Leonid; Bezrukovs, Vladislavs; Moskalenko, Sergiy; Kulishenko, Vladislav; Malynovskyi, Yevgen
2016-12-01
An international network of passive correlation ranging of a geostationary telecommunication satellite is considered in the article. The network is developed by the RI "MAO". The network consists of five spatially separated stations of synchronized reception of DVB-S signals of digital satellite TV. The stations are located in Ukraine and Latvia. The time difference of arrival (TDOA) on the network stations of the DVB-S signals, radiated by the satellite, is a measured parameter. The results of TDOA estimation obtained by the network in May-August 2016 are presented in the article. Orbital parameters of the tracked satellite are determined using measured values of the TDOA and two models of satellite motion: the analytical model SGP4/SDP4 and the model of numerical integration of the equations of satellite motion. Both models are realized using the free low-level space dynamics library OREKIT (ORbit Extrapolation KIT).
Mathematical model of an air-filled alpha stirling refrigerator
NASA Astrophysics Data System (ADS)
McFarlane, Patrick; Semperlotti, Fabio; Sen, Mihir
2013-10-01
This work develops a mathematical model for an alpha Stirling refrigerator with air as the working fluid and will be useful in optimizing the mechanical design of these machines. Two pistons cyclically compress and expand air while moving sinusoidally in separate chambers connected by a regenerator, thus creating a temperature difference across the system. A complete non-linear mathematical model of the machine, including air thermodynamics, and heat transfer from the walls, as well as heat transfer and fluid resistance in the regenerator, is developed. Non-dimensional groups are derived, and the mathematical model is numerically solved. The heat transfer and work are found for both chambers, and the coefficient of performance of each chamber is calculated. Important design parameters are varied and their effect on refrigerator performance determined. This sensitivity analysis, which shows what the significant parameters are, is a useful tool for the design of practical Stirling refrigeration systems.
NASA Astrophysics Data System (ADS)
Jiang, Jin-Wu
2015-08-01
We propose parametrizing the Stillinger-Weber potential for covalent materials starting from the valence force-field model. All geometrical parameters in the Stillinger-Weber potential are determined analytically according to the equilibrium condition for each individual potential term, while the energy parameters are derived from the valence force-field model. This parametrization approach transfers the accuracy of the valence force-field model to the Stillinger-Weber potential. Furthermore, the resulting Stillinger-Weber potential supports stable molecular dynamics simulations, as each potential term is at an energy-minimum state separately at the equilibrium configuration. We employ this procedure to parametrize Stillinger-Weber potentials for single-layer MoS2 and black phosphorus. The obtained Stillinger-Weber potentials predict an accurate phonon spectrum and mechanical behaviors. We also provide input scripts of these Stillinger-Weber potentials used by publicly available simulation packages including GULP and LAMMPS.
Jiang, Jin-Wu
2015-08-07
We propose parametrizing the Stillinger-Weber potential for covalent materials starting from the valence force-field model. All geometrical parameters in the Stillinger-Weber potential are determined analytically according to the equilibrium condition for each individual potential term, while the energy parameters are derived from the valence force-field model. This parametrization approach transfers the accuracy of the valence force-field model to the Stillinger-Weber potential. Furthermore, the resulting Stillinger-Weber potential supports stable molecular dynamics simulations, as each potential term is at an energy-minimum state separately at the equilibrium configuration. We employ this procedure to parametrize Stillinger-Weber potentials for single-layer MoS2 and black phosphorus. The obtained Stillinger-Weber potentials predict an accurate phonon spectrum and mechanical behaviors. We also provide input scripts of these Stillinger-Weber potentials used by publicly available simulation packages including GULP and LAMMPS.
Mathematical analysis of frontal affinity chromatography in particle and membrane configurations.
Tejeda-Mansir, A; Montesinos, R M; Guzmán, R
2001-10-30
The scaleup and optimization of large-scale affinity-chromatographic operations in the recovery, separation and purification of biochemical components is of major industrial importance. The development of mathematical models to describe affinity-chromatographic processes, and the use of these models in computer programs to predict column performance is an engineering approach that can help to attain these bioprocess engineering tasks successfully. Most affinity-chromatographic separations are operated in the frontal mode, using fixed-bed columns. Purely diffusive and perfusion particles and membrane-based affinity chromatography are among the main commercially available technologies for these separations. For a particular application, a basic understanding of the main similarities and differences between particle and membrane frontal affinity chromatography and how these characteristics are reflected in the transport models is of fundamental relevance. This review presents the basic theoretical considerations used in the development of particle and membrane affinity chromatography models that can be applied in the design and operation of large-scale affinity separations in fixed-bed columns. A transport model for column affinity chromatography that considers column dispersion, particle internal convection, external film resistance, finite kinetic rate, plus macropore and micropore resistances is analyzed as a framework for exploring further the mathematical analysis. Such models provide a general realistic description of almost all practical systems. Specific mathematical models that take into account geometric considerations and transport effects have been developed for both particle and membrane affinity chromatography systems. Some of the most common simplified models, based on linear driving-force (LDF) and equilibrium assumptions, are emphasized. Analytical solutions of the corresponding simplified dimensionless affinity models are presented. Particular methods for estimating the parameters that characterize the mass-transfer and adsorption mechanisms in affinity systems are described.
Schmidt, Irma; Minceva, Mirjana; Arlt, Wolfgang
2012-02-17
X-ray computed tomography (CT) is used to determine local parameters related to column packing homogeneity and hydrodynamics in columns packed with spherically and irregularly shaped particles of the same size. The results showed that the variation of porosity and axial dispersion coefficient along the column axis is insignificant compared to their radial distribution. The methodology of using data attained by CT measurements to perform a CFD simulation of a batch separation of model binary mixtures with different concentrations and separation factors is demonstrated. The results of the CFD simulation study show that columns packed with spherically shaped particles provide higher yield than columns packed with irregularly shaped particles only below a certain value of the separation factor. The presented methodology can be used for selecting a suitable packing material for a particular separation task. Copyright © 2012 Elsevier B.V. All rights reserved.
Zhao, Ziliang; Li, Qi; Ji, Xiangling; Dimova, Rumiana; Lipowsky, Reinhard; Liu, Yonggang
2016-06-24
Dextran and poly(ethylene glycol) (PEG) in phase-separated aqueous two-phase systems (ATPSs) of these two polymers, with a broad molar mass distribution for dextran and a narrow molar mass distribution for PEG, were separated and quantified by gel permeation chromatography (GPC). Tie lines constructed by the GPC method are in excellent agreement with those established by the previously reported approach based on density measurements of the phases. The fractionation of dextran during phase separation of the ATPS leads to the redistribution of dextran of different chain lengths between the two phases. The degree of fractionation for dextran decays exponentially as a function of chain length. The average separation parameters, for both dextran and PEG, show a crossover from mean-field behavior to Ising model behavior as the critical point is approached. Copyright © 2016 Elsevier B.V. All rights reserved.
Color separation in forensic image processing using interactive differential evolution.
Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb
2015-01-01
Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
Energy dissipation in polymer-polymer adhesion contacts
NASA Astrophysics Data System (ADS)
Garif, Yev Skip
This study focuses on self-adhesion in elastomers as a way of approaching a broader polymer adhesion problem. The model systems studied are cross-linked acrylic pressure-sensitive adhesives (PSA-LNs) synthesized to attain four surface types: neutral, acidic, basic, and polar. As the study progressed, it distinguished itself as the first of its kind to consistently report the effect of temperature on measurable intrinsic parameters of polymer adhesion. The main goal of the study was to understand why the magnitude of the practical adhesion energies of the four PSA-LN systems tested varies disproportionately greater than their respective surface energies. To achieve this goal, continuous sweeps of adhesion energy as a function of rate of interfacial separation were performed using three different adhesion-probing techniques--- peel, micro-scratch, and normal contact. The answer was found in the sub-micron-per-second limit of separation rates. In approaching this limit, the power law behavior of adhesion gradually transitioned into a linear region of markedly weaker sensitivity to rate. Referred to as the "intrinsic window", this linear region was characterized by three parameters: (1) the intrinsic adhesion energy at zero rate of separation; (2) the intrinsic rate sensitivity equal to the proportionality constant of the linear fit; and (3) the critical separation rate in the middle of the transition to the power law. All three were found to be thermally activated. Activation energies suggested that interfacial processes are attributed mainly to dispersive and electrostatic molecular interactions such as hydrogen bonding or van der Waals attraction. Comparative analysis of the intrinsic window of the four PSA-LNs tested showed that an increase in the intrinsic adhesion energy associated with higher surface energy is inherently coupled with an increase in the intrinsic rate sensitivity and reduction in the critical separation rate. When combined, the three parameters reshape the intrinsic window such that the entire power-law portion of the adhesion response is shifted to a level that appears disproportionately high based on the false assumption that there is only one intrinsic parameter contributing to the shift. Thus, the goal of explaining this disproportionality was achieved.
Modification of Gaussian mixture models for data classification in high energy physics
NASA Astrophysics Data System (ADS)
Štěpánek, Michal; Franc, Jiří; Kůs, Václav
2015-01-01
In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role in the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of the top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data set (9.7 fb-1).
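The scheme outlined above, fitting class-conditional mixtures by EM during training and applying Bayes' rule at test time, can be sketched with scikit-learn as follows. The 2-D toy discriminant variables, component counts and class priors are illustrative placeholders, not the Tevatron analysis.

    # Hedged sketch: Gaussian mixtures fitted (via EM) to signal and background
    # training samples separately, then Bayes' rule gives the signal probability.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    sig_train = rng.normal(loc=[1.0, 1.0], scale=0.7, size=(5000, 2))
    bkg_train = rng.normal(loc=[-1.0, -0.5], scale=1.0, size=(20000, 2))

    gm_sig = GaussianMixture(n_components=3, random_state=0).fit(sig_train)
    gm_bkg = GaussianMixture(n_components=3, random_state=0).fit(bkg_train)

    prior_sig = len(sig_train) / (len(sig_train) + len(bkg_train))
    prior_bkg = 1.0 - prior_sig

    def p_signal(x):
        # Bayes' rule with class-conditional mixture densities.
        ls = np.exp(gm_sig.score_samples(x)) * prior_sig
        lb = np.exp(gm_bkg.score_samples(x)) * prior_bkg
        return ls / (ls + lb)

    test = np.vstack([rng.normal([1.0, 1.0], 0.7, (10, 2)),
                      rng.normal([-1.0, -0.5], 1.0, (10, 2))])
    print(np.round(p_signal(test), 2))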
Phase separation in artificial vesicles driven by light and curvature
NASA Astrophysics Data System (ADS)
Rinaldin, Melissa; Pomp, Wim; Schmidt, Thomas; Giomi, Luca; Kraft, Daniela; Physics of Life Processes Team; Soft; Bio Mechanics Collaboration; Self-Assembly in Soft Matter Systems Collaboration
The role of phase-demixing in living cells, leading to the lipid-raft hypothesis, has been extensively studied. Lipid domains of higher lipid chain order are proposed to regulate protein spatial organization. Giant Unilamellar Vesicles provide an artificial model to study phase separation. So far temperature was used to initiate the process. Here we introduce a new methodology based on the induction of phase separation by light. To this aim, the composition of the lipid membrane is varied by photo-oxidation of lipids. The control of the process gained by using light allowed us to observe vesicle shape fluctuations during phase-demixing. The presence of fluctuations near the critical mixing point resembles features of a critical process. We quantitatively analyze these fluctuations using a 2d elastic model, from which we can estimate the material parameters such as bending rigidity and surface tension, demonstrating the non-equilibrium critical behaviour. Finally, I will describe recent attempts toward tuning the membrane composition by controlling the vesicle curvature.
Biogas desulfurization and biogas upgrading using a hybrid membrane system--modeling study.
Makaruk, A; Miltner, M; Harasek, M
2013-01-01
Membrane gas permeation using glassy membranes proved to be a suitable method for biogas upgrading and natural gas substitute production on account of low energy consumption and high compactness. Glassy membranes are very effective in the separation of bulk carbon dioxide and water from a methane-containing stream. However, the content of hydrogen sulfide can be lowered only partially. This work employs process modeling based upon the finite difference method to evaluate a hybrid membrane system built of a combination of rubbery and glassy membranes. The former are responsible for the separation of hydrogen sulfide and the latter separate carbon dioxide to produce standard-conform natural gas substitute. The evaluation focuses on the most critical upgrading parameters like achievable gas purity, methane recovery and specific energy consumption. The obtained results indicate that the evaluated hybrid membrane configuration is a potentially efficient system for the biogas processing tasks that do not require high methane recoveries, and allows effective desulfurization for medium and high hydrogen sulfide concentrations without additional process steps.
Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin
2016-12-05
Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.
van Battum, L J; Huizenga, H
2006-07-01
Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters, and this model has been applied to fit the sensitometric data. If each sensitometric curve in the dataset is fitted separately, a large variation is observed in both fit parameters. When the sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope; when each curve is fitted separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
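A minimal sketch of fitting the single-hit saturation form to net optical density versus dose data is given below, assuming the commonly cited parametrization OD(D) = ODmax(1 - exp(-aD)); the dose and OD values are synthetic, and the paper's own slope/curvature parametrization is only echoed through the derived initial slope and curvature.

```python
# Minimal sketch: fitting the single-hit saturation form OD(D) = OD_max * (1 - exp(-a*D))
# to net optical density vs dose data. The dose/OD values below are synthetic placeholders;
# the paper reparametrizes this curve in terms of sensitometric slope and curvature.
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, od_max, a):
    return od_max * (1.0 - np.exp(-a * dose))

dose = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])           # Gy (synthetic)
net_od = np.array([0.0, 0.21, 0.50, 0.90, 1.22, 1.48, 1.85, 2.08])  # synthetic readings

popt, pcov = curve_fit(single_hit, dose, net_od, p0=[3.0, 0.5])
od_max, a = popt
slope0 = od_max * a            # initial sensitometric slope dOD/dD at D = 0
curvature0 = -od_max * a**2    # second derivative at D = 0; note it is tied to the slope
print(f"OD_max = {od_max:.2f}, a = {a:.3f} 1/Gy, slope(0) = {slope0:.3f}, curvature(0) = {curvature0:.3f}")
```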
Model based optimization of driver-pickup separation for eddy current measurement of gap
NASA Astrophysics Data System (ADS)
Klein, G.; Morelli, J.; Krause, T. W.
2018-04-01
The fuel channels in CANDU® (CANada Deuterium Uranium) nuclear reactors consist of a pressure tube (PT) contained within a larger diameter calandria tube (CT). The separation between the tubes, known as the PT-CT gap, ensures that PT hydride blisters, which could lead to cracking of the PT, do not develop. Therefore, accurate measurements are required to confirm that contact between PT and CT is not imminent. Gap measurement uses an eddy current probe; however, this probe is sensitive to lift-off variations, which can adversely affect the estimated gap. A validated analytical flat plate model of eddy current response to gap was used to examine the effect of driver-pickup spacing on lift-off and response to gap at a frequency of 4 kHz, which is used for in-reactor measurements. This model was compared against, and shown to be in good agreement with, a COMSOL® finite element method (FEM) model. The optimum coil separation under the constraint of coil size was found to be 11 mm, resulting in a 66° phase separation between the lift-off response and the response to a change in gap. This work demonstrates the advantages of using analytical models for optimizing coil designs for measurement of parameters that may negatively influence the outcome of an inspection measurement.
Digital terrain modelling and industrial surface metrology - Converging crafts
Pike, R.J.
2001-01-01
Quantitative characterisation of surface form, increasingly from digital 3-D height data, is cross-disciplinary and can be applied at any scale. Thus, separation of industrial-surface metrology from its Earth-science counterpart, (digital) terrain modelling, is artificial. Their growing convergence presents an opportunity to develop in surface morphometry a unified approach to surface representation. This paper introduces terrain modelling and compares it with metrology, noting their differences and similarities. Examples of potential redundancy among parameters illustrate one of the many issues common to both disciplines. © 2001 Elsevier Science Ltd. All rights reserved.
A New Approach to Predict the Fish Fillet Shelf-Life in Presence of Natural Preservative Agents.
Giuffrida, Alessandro; Giarratana, Filippo; Valenti, Davide; Muscolino, Daniele; Parisi, Roberta; Parco, Alessio; Marotta, Stefania; Ziino, Graziella; Panebianco, Antonio
2017-04-13
Three data sets concerning the behaviour of the spoilage flora of fillets treated with natural preservative substances (NPS) were used to construct a new kind of mathematical predictive model. Unlike existing models, this model allows the antibacterial effect of the NPS to be expressed separately from the prediction of the growth rate. The approach, based on the introduction of a parameter into the primary predictive model, produced a good fit to the observed data and allowed the increase in fillet shelf-life to be characterised quantitatively.
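To illustrate the idea of expressing the preservative effect separately from the growth prediction, the sketch below uses a generic logistic primary growth model in which an assumed inhibition parameter phi scales only the maximum growth rate; the functional form, parameter values and spoilage threshold are placeholders, not the authors' fitted model.

```python
# Illustrative sketch of separating the NPS effect from the growth model: a standard
# logistic primary growth model for spoilage flora in which an assumed inhibition
# parameter phi (0 = no effect, 1 = full inhibition) scales the growth rate.
# Parameter values and the threshold are placeholders, not the paper's fitted values.
import numpy as np

def log_count(t, n0=3.0, nmax=9.0, mu_max=0.08, phi=0.0):
    """log10 CFU/g at time t (hours) for a logistic model with NPS inhibition phi."""
    mu = mu_max * (1.0 - phi)                     # NPS effect applied to the growth rate only
    return nmax - np.log10(1.0 + (10**(nmax - n0) - 1.0) * np.exp(-mu * np.log(10) * t))

def shelf_life(threshold=7.0, phi=0.0, t_max=600.0):
    """Time (h) at which the spoilage count reaches the acceptability threshold."""
    t = np.linspace(0.0, t_max, 60001)
    n = log_count(t, phi=phi)
    idx = np.argmax(n >= threshold)
    return t[idx] if n[idx] >= threshold else np.inf

for phi in (0.0, 0.2, 0.4):
    print(f"phi = {phi:.1f} -> shelf life ~ {shelf_life(phi=phi):.0f} h")
```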
NASA Technical Reports Server (NTRS)
Riddick, Stephen E.; Hinton, David A.
2000-01-01
A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate the reduced approach separation criteria needed to increase airport productivity. This report evaluates the model's sensitivity to various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option, and wake demise definition), and post-processing techniques (rounding of provided spacing values, and controller time variance).
The critical role of uncertainty in projections of hydrological extremes
NASA Astrophysics Data System (ADS)
Meresa, Hadush K.; Romanowicz, Renata J.
2017-08-01
This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: the first related to the climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters, and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes, whilst for the high-flow extremes the climate models contribute more uncertainty than the hydrological parameters and the distribution fit. This implies that ignoring any of the three uncertainty sources may pose a great risk to future adaptation to hydrological extremes and to water resource planning and management.
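One ingredient of this uncertainty chain, the distribution fit, can be sketched as follows: a GEV distribution is fitted to annual maximum flows and return levels are read off at chosen return periods. The flow series is synthetic, and the climate-ensemble and GLUE components of the analysis are not reproduced here.

```python
# Hedged sketch of the distribution-fit step: fitting a GEV distribution to annual
# maximum flows and reading off return levels. The flow series is synthetic; the paper
# additionally propagates climate-ensemble and GLUE parameter uncertainty.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_max = stats.genextreme.rvs(c=-0.1, loc=80.0, scale=25.0, size=40, random_state=rng)

# Fit GEV by maximum likelihood (scipy's shape c is the negative of the usual xi convention)
c, loc, scale = stats.genextreme.fit(annual_max)

return_periods = np.array([10, 50, 100])
# Return level = quantile with non-exceedance probability 1 - 1/T
levels = stats.genextreme.ppf(1.0 - 1.0 / return_periods, c, loc=loc, scale=scale)
for T, q in zip(return_periods, levels):
    print(f"{T:>3d}-year flow ~ {q:.1f} m3/s")
```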
NASA Astrophysics Data System (ADS)
Coman, Marius
The kaon electroproduction reaction H(e,e′K⁺)Λ was studied as a function of the four-momentum transfer, Q², for different values of the virtual photon polarization parameter. Electrons and kaons were detected in coincidence in two High Resolution Spectrometers (HRS) at Jefferson Lab. Data were taken at electron beam energies ranging from 3.4006 to 5.7544 GeV. The kaons were identified using combined time-of-flight information and two aerogel Cherenkov detectors. For different values of Q² ranging from 1.90 to 2.35 (GeV/c)², the center-of-mass cross sections for the Λ hyperon were determined for 20 kinematics, and the longitudinal, σL, and transverse, σT, terms were separated using the Rosenbluth separation technique. Comparisons between available models and the data have been studied; the comparison supports the t-channel dominance behavior of kaon electroproduction. All models seem to underpredict the transverse cross section. An estimate of the kaon form factor has been explored by determining the sensitivity of the separated cross sections to variations of the kaon EM form factor. From the comparison between models and data we conclude that interpreting the data using the Regge model is quite sensitive to the particular choice of EM form factors. The data from the E98-108 experiment extend the range of available kaon electroproduction cross section data to an unexplored region of Q² where no separations have previously been performed.
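The Rosenbluth separation step can be illustrated with a weighted straight-line fit of the measured cross section against the virtual-photon polarization, sigma = sigma_T + eps*sigma_L; the data points below are invented for illustration and are not the E98-108 measurements.

```python
# Sketch of a Rosenbluth-type L/T separation: at fixed (Q2, W), the measured cross
# section is modeled as sigma = sigma_T + eps * sigma_L, so a straight-line fit versus
# the virtual-photon polarization eps yields the two terms. Data points are invented.
import numpy as np

eps = np.array([0.35, 0.55, 0.75])          # virtual photon polarization (assumed settings)
sigma = np.array([0.212, 0.241, 0.268])     # measured cross sections (arbitrary units)
sigma_err = np.array([0.010, 0.009, 0.011]) # statistical uncertainties

# Weighted linear least squares: sigma = sigma_T + eps * sigma_L
w = 1.0 / sigma_err**2
A = np.vstack([np.ones_like(eps), eps]).T
cov = np.linalg.inv(A.T @ (w[:, None] * A))
sigma_T, sigma_L = cov @ A.T @ (w * sigma)
err_T, err_L = np.sqrt(np.diag(cov))

print(f"sigma_T = {sigma_T:.3f} +/- {err_T:.3f}")
print(f"sigma_L = {sigma_L:.3f} +/- {err_L:.3f}")
```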
Single-particle strength from nucleon transfer in oxygen isotopes: Sensitivity to model parameters
NASA Astrophysics Data System (ADS)
Flavigny, F.; Keeley, N.; Gillibert, A.; Obertelli, A.
2018-03-01
In the analysis of transfer reaction data to extract nuclear structure information, the choice of input parameters to the reaction model, such as distorting potentials and overlap functions, has a significant impact. In this paper we consider a set of data for the (d,t) and (d,³He) reactions on ¹⁴,¹⁶,¹⁸O as a well-delimited subject for a study of the sensitivity of such analyses to different choices of distorting potentials and overlap functions, with particular reference to a previous investigation of the variation of valence nucleon correlations as a function of the difference in nucleon separation energy ΔS = |Sp - Sn| [Phys. Rev. Lett. 110, 122503 (2013), 10.1103/PhysRevLett.110.122503].
Bioethanol production optimization: a thermodynamic analysis.
Alvarez, Víctor H; Rivera, Elmer Ccopa; Costa, Aline C; Filho, Rubens Maciel; Wolf Maciel, Maria Regina; Aznar, Martín
2008-03-01
In this work, the phase equilibrium of binary mixtures for bioethanol production by continuous extractive process was studied. The process is composed of four interlinked units: fermentor, centrifuge, cell treatment unit, and flash vessel (ethanol-congener separation unit). A proposal for modeling the vapor-liquid equilibrium in binary mixtures found in the flash vessel has been considered. This approach uses the Predictive Soave-Redlich-Kwong equation of state, with original and modified molecular parameters. The congeners considered were acetic acid, acetaldehyde, furfural, methanol, and 1-pentanol. The results show that the introduction of new molecular parameters r and q in the UNIFAC model gives more accurate predictions for the concentration of the congener in the gas phase for binary and ternary systems.
Analysis of screeching in a cold flow jet experiment
NASA Technical Reports Server (NTRS)
Wang, M. E.; Slone, R. M., Jr.; Robertson, J. E.; Keefe, L.
1975-01-01
The screech phenomenon observed in a one-sixtieth scale model space shuttle test of the solid rocket booster exhaust flow noise has been investigated. A critical review is given of the cold flow test data representative of Space Shuttle launch configurations to define those parameters which contribute to screech generation. An acoustic feedback mechanism is found to be responsible for the generation of screech. A simple equation is presented that permits prediction of the screech frequency in terms of basic test parameters, such as the jet exhaust Mach number and the separation distance from the nozzle exit to the surface of the model launch pad, and it is found to be in good agreement with the test data. Finally, techniques are recommended to eliminate or reduce the screech.
Modeling of sheet metal fracture via cohesive zone model and application to spot welds
NASA Astrophysics Data System (ADS)
Wu, Joseph Z.
Even though the cohesive zone model (CZM) has been widely used to analyze ductile fracture, it is not yet clearly understood how to calibrate the cohesive parameters, including the specific work of separation (the work of separation per unit crack area) and the peak stress. A systematic approach is presented to first determine the cohesive values for sheet metal and then apply the calibrated model to various structural problems including the failure of spot welds. Al5754-O was chosen for this study since it is not sensitive to heat treatment, so the effect of the heat-affected zone (HAZ) can be ignored. The CZM has been applied to successfully model both mode-I and mode-III fracture for various geometries, including Kahn specimens, single-notch specimens, and deep double-notch specimens for mode-I and trouser specimens for mode-III. The mode-I fracture of a coach-peel spot-weld nugget and the mixed-mode fracture of nugget pull-out have also been well simulated by the CZM. Using the mode-I average specific work of separation of 13 kJ/m² identified in a previous work and the mode-III specific work of separation of 38 kJ/m² found in this thesis, the cohesive peak stress has been determined to range from 285 MPa to 600 MPa for mode-I and from 165 MPa to 280 MPa for mode-III, depending on the degree of plastic deformation. The uncertainty of these cohesive values has also been examined. It is concluded that, if the specific work of separation is a material constant, the peak stress changes with the degree of plastic deformation and is therefore geometry-dependent.
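The relation between the cohesive peak stress, the critical separation and the specific work of separation can be sketched with a bilinear traction-separation law, as below; the peak stress and shape ratio are assumed values chosen only to sit within the ranges quoted above, not a calibration of the thesis model.

```python
# Hedged sketch: a bilinear traction-separation law and the consistency relation between
# the cohesive peak stress, the final separation and the specific work of separation
# (area under the curve). Values are illustrative, loosely inspired by the mode-I numbers
# quoted in the abstract; they are not a calibration of the actual model.
import numpy as np

def bilinear_traction(delta, sigma_peak, delta_0, delta_f):
    """Traction (MPa) vs opening displacement delta (mm) for a bilinear cohesive law."""
    up = sigma_peak * delta / delta_0                              # linear rise
    down = sigma_peak * (delta_f - delta) / (delta_f - delta_0)    # linear softening
    return np.where(delta <= delta_0, up, np.clip(down, 0.0, None))

sigma_peak = 400.0      # MPa (assumed, within the 285-600 MPa range quoted for mode I)
gamma = 13.0            # kJ/m^2 (= MPa*mm), the mode-I specific work of separation
delta_f = 2.0 * gamma / sigma_peak   # mm, since the triangle area = 0.5*sigma_peak*delta_f
delta_0 = 0.1 * delta_f              # assumed ratio of initiation to final separation

delta = np.linspace(0.0, delta_f, 2001)
work = np.trapz(bilinear_traction(delta, sigma_peak, delta_0, delta_f), delta)  # MPa*mm = kJ/m^2
print(f"delta_f = {delta_f*1e3:.1f} um, recovered work of separation = {work:.2f} kJ/m^2")
```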
Toward an improvement over Kerner-Klenov-Wolf three-phase cellular automaton model.
Jiang, Rui; Wu, Qing-Song
2005-12-01
The Kerner-Klenov-Wolf (KKW) three-phase cellular automaton model produces a nonrealistic velocity of the upstream front of the widening synchronized flow pattern, which separates synchronized flow downstream from free flow upstream. This paper presents an improved model, which is a combination of the initial KKW model and a modified Nagel-Schreckenberg (MNS) model. In the improved KKW model, a parameter is introduced to determine whether a vehicle moves according to the MNS model or to the initial KKW model. The improved KKW model can not only reproduce the empirical observations as the initial KKW model does, but also overcomes the nonrealistic velocity problem. The mechanism of the improvement is discussed.
The Dynamics of HPV Infection and Cervical Cancer Cells.
Asih, Tri Sri Noor; Lenhart, Suzanne; Wise, Steven; Aryati, Lina; Adi-Kusumo, F; Hardianti, Mardiah S; Forde, Jonathan
2016-01-01
The development of cervical cells from normal cells infected by human papillomavirus into invasive cancer cells can be modeled using the population dynamics of the cells and free virus. The cell populations are separated into four compartments: susceptible cells, infected cells, precancerous cells and cancer cells. The system of differential equations also includes a free-virus compartment, with the virus infecting normal cells. We analyze the local stability of the equilibrium points of the model and investigate the parameters which play an important role in the progression toward invasive cancer. By simulation, we investigate the boundary between initial conditions whose solutions tend to the stable equilibrium point, representing controlled infection, and those that lead to unbounded growth of the cancer cell population. Parameters affected by drug treatment are varied, and their effect on the risk of cancer progression is explored.
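The compartment structure described above can be sketched as a generic system of ordinary differential equations; the particular rate terms and parameter values below are illustrative virus-dynamics assumptions, not the authors' equations.

```python
# Illustrative sketch of the compartment structure described in the abstract (susceptible,
# infected, precancerous and cancer cells plus free virus). The rate terms and parameter
# values are a generic virus-dynamics construction for illustration, not the authors' model.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, lam=10.0, d=0.1, beta=0.002, d_i=0.12, rho=0.02,
        d_p=0.1, alpha=0.01, d_c=0.05, k=5.0, c=1.0):
    S, I, P, C, V = y
    dS = lam - d * S - beta * S * V          # susceptible cells: production, death, infection
    dI = beta * S * V - d_i * I - rho * I    # infected cells progress to precancerous
    dP = rho * I - d_p * P - alpha * P       # precancerous cells progress to cancer
    dC = alpha * P - d_c * C                 # cancer cells
    dV = k * I - c * V                       # free virus produced by infected cells
    return [dS, dI, dP, dC, dV]

sol = solve_ivp(rhs, (0.0, 500.0), [100.0, 0.0, 0.0, 0.0, 1.0], max_step=1.0)
S, I, P, C, V = sol.y[:, -1]
print(f"t = {sol.t[-1]:.0f}: S={S:.1f}, I={I:.2f}, P={P:.2f}, C={C:.2f}, V={V:.2f}")
```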
NASA Astrophysics Data System (ADS)
Bagchi, Manjari
2013-08-01
Luminosity is an intrinsic property of radio pulsars related to the properties of the magnetospheric plasma and the beam geometry, and it is inversely proportional to the observing frequency. In traditional models, luminosity has been considered a function of the spin parameters of pulsars. On the other hand, parameter-independent models such as power-law and lognormal distributions have also been used to fit the observed luminosities. Some of the older studies on pulsar luminosities neglected observational biases, but all of the recent studies have tried to model observational effects as accurately as possible. Luminosities of pulsars in globular clusters (GCs) and in the Galactic disk have been studied separately. Older studies concluded that these two categories of pulsars have different luminosity distributions, but the most recent study concluded that they are the same. This paper reviews all significant works on pulsar luminosities and discusses open questions.
Strange stars in f( R) theories of gravity in the Palatini formalism
NASA Astrophysics Data System (ADS)
Panotopoulos, Grigoris
2017-05-01
In the present work we study strange stars in f(R) theories of gravity in the Palatini formalism. We consider two concrete well-known cases, namely the R + R²/(6M²) model as well as the R - μ⁴/R model, for two different values of the mass parameter M or μ. We integrate the modified Tolman-Oppenheimer-Volkoff equations numerically, and we show the mass-radius diagram for each model separately. The standard case corresponding to General Relativity is also shown in the same figure for comparison. Our numerical results show that the interior solution can be vastly different depending on the model and/or the value of the parameter of each model. In addition, our findings imply that (i) for the cosmologically interesting values of the mass scales M, μ the effect of modified gravity on strange stars is negligible, while (ii) for the values predicting an observable effect, the modified gravity models discussed here would be ruled out by their cosmological effects.
Hierarchical Bayesian calibration of tidal orbit decay rates among hot Jupiters
NASA Astrophysics Data System (ADS)
Collier Cameron, Andrew; Jardine, Moira
2018-05-01
Transiting hot Jupiters occupy a wedge-shaped region in the mass ratio-orbital separation diagram. Its upper boundary is eroded by tidal spiral-in of massive, close-in planets and is sensitive to the stellar tidal dissipation parameter Q′s. We develop a simple generative model of the orbital separation distribution of the known population of transiting hot Jupiters, subject to tidal orbital decay, XUV-driven evaporation and observational selection bias. From the joint likelihood of the observed orbital separations of hot Jupiters discovered in ground-based wide-field transit surveys, measured with respect to the hyperparameters of the underlying population model, we recover narrow posterior probability distributions for Q′s in two different tidal forcing frequency regimes. We validate the method using mock samples of transiting planets with known tidal parameters. We find that Q′s and its temperature dependence are retrieved reliably over five orders of magnitude in Q′s. A large sample of hot Jupiters from small-aperture ground-based surveys yields log10 Q′s = 8.26 ± 0.14 for 223 systems in the equilibrium-tide regime. We detect no significant dependence of Q′s on stellar effective temperature. A further 19 systems in the dynamical-tide regime yield log10 Q′s = 7.3 ± 0.4, indicating stronger coupling. Detection probabilities for transiting planets at a given orbital separation scale inversely with the increase in their tidal migration rates since birth. The resulting bias towards younger systems explains why the surface gravities of hot Jupiters correlate with their host stars' chromospheric emission fluxes. We predict departures from a linear transit-timing ephemeris of less than 4 s for WASP-18 over a 20-yr baseline.
Scaling theory in a model of corrosion and passivation.
Aarão Reis, F D A; Stafiej, Janusz; Badiali, J-P
2006-09-07
We study a model for the corrosion and passivation of a metallic surface after small damage to its protective layer using scaling arguments and simulation. We focus on the transition from an initial regime of slow corrosion rate (pit nucleation) to a regime of rapid corrosion (propagation of the pit), which takes place at the so-called incubation time. The model is defined on a lattice in which the states of the sites represent the possible states of the metal (bulk, reactive, and passive) and the solution (neutral, acidic, or basic). Simple probabilistic rules describe passivation of the metal surface, dissolution of the passive layer, which is enhanced in acidic media, and spatially separated electrochemical reactions, which may create pH inhomogeneities in the solution. On the basis of a suitable matching of the characteristic times of creation and annihilation of pH inhomogeneities in the solution, our scaling theory estimates the average radius of the dissolved region at the incubation time as a function of the model parameters. Among the main consequences, that radius decreases with the rate of spatially separated reactions and the rate of dissolution in acidic media, and it increases with the diffusion coefficient of H⁺ and OH⁻ ions in solution. The average incubation time can be written as the sum of a series of characteristic times for the slow dissolution in neutral media, until significant pH inhomogeneities are observed in the dissolved cavity. Despite having a more complex dependence on the model parameters, it is shown that the average incubation time increases linearly with the rate of dissolution in neutral media, under the reasonable assumption that this is the slowest rate of the process. Our theoretical predictions are expected to apply in realistic ranges of values of the model parameters. They are confirmed by numerical simulation in two-dimensional lattices, and the expected extension of the theory to three dimensions is discussed.
ERIC Educational Resources Information Center
Arce-Ferrer, Alvaro J.; Bulut, Okan
2017-01-01
This study examines separate and concurrent approaches to combine the detection of item parameter drift (IPD) and the estimation of scale transformation coefficients in the context of the common item nonequivalent groups design with the three-parameter item response theory equating. The study uses real and synthetic data sets to compare the two…
Dynamic parameter identification of robot arms with servo-controlled electrical motors
NASA Astrophysics Data System (ADS)
Jiang, Zhao-Hui; Senda, Hiroshi
2005-12-01
This paper addresses the issue of dynamic parameter identification for robot manipulators with servo-controlled electrical motors. An assumption is made that all kinematic parameters, such as link lengths, are known, and only dynamic parameters containing mass, moment of inertia, and their functions need to be identified. First, we derive the dynamics of the robot arm in a form linear in the unknown dynamic parameters, taking the dynamic characteristics of the motor and servo unit into consideration. Then, we apply the parameter identification approach to identify the unknown parameters for each link separately. A pseudo-inverse matrix is used in the formulation of the parameter identification, and the optimal solution is guaranteed in the least-squares sense. A Direct Drive (DD) SCARA type industrial robot arm, the AdeptOne, is used as an application example of the parameter identification. Simulations and experiments for both open-loop and closed-loop control are carried out. Comparison of the results confirms the correctness and usefulness of the parameter identification and the derived dynamic model.
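The linear-in-parameters identification step can be sketched for a toy one-link regressor: with Y(q, dq, ddq)θ = τ, a pseudo-inverse (least-squares) solution over many samples recovers θ. The regressor and parameter names below are illustrative and unrelated to the AdeptOne model.

```python
# Sketch of the linear-in-parameters identification step: with dynamics written as
# Y(q, qdot, qddot) * theta = tau, the unknown dynamic parameters theta follow from a
# (pseudo-inverse) least-squares solution over many samples. The regressor is a toy
# 1-link example (theta = [inertia, viscous friction, gravity term]), not AdeptOne's model.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
q = 0.8 * np.sin(1.3 * t)                 # excitation trajectory
qd = 0.8 * 1.3 * np.cos(1.3 * t)
qdd = -0.8 * 1.3**2 * np.sin(1.3 * t)

theta_true = np.array([0.35, 0.12, 2.1])  # [J, b, m*g*l] for the toy link
Y = np.column_stack([qdd, qd, np.sin(q)]) # regressor: J*qdd + b*qd + m*g*l*sin(q) = tau
tau = Y @ theta_true + rng.normal(0.0, 0.05, size=t.size)  # "measured" torques with noise

theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)        # equivalent to pinv(Y) @ tau
print("identified parameters:", theta_hat.round(3), " true:", theta_true)
```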
Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim
2011-06-21
Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
Estimation of the ARNO model baseflow parameters using daily streamflow data
NASA Astrophysics Data System (ADS)
Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu
1999-09-01
An approach is described for estimating the baseflow parameters of the ARNO model using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and it effectively partitions the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three optimization methods are evaluated for estimating the four baseflow parameters: the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with a Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened residuals with a Box-Cox transformation. The effects of changing the seed of the random number generator for both the SA and SCE methods are also explored, as are the effects of the parameter bounds. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than both the SA and Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix was not diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments, while the maximum likelihood theory did not fail for any of the catchments.
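The first stage of such a procedure, fitting a recession model to extracted baseflow sequences, can be sketched with a Nelder-Mead (downhill simplex) search and an ordinary-least-squares objective on Box-Cox transformed flows; the exponential recession law and all values below are simplified placeholders rather than the ARNO baseflow equations.

```python
# Hedged sketch: fitting a baseflow recession model to a recession sequence with a
# Nelder-Mead (downhill simplex) search and an OLS objective on Box-Cox transformed flows.
# The recession law and parameter set are simplified stand-ins, not the ARNO equations.
import numpy as np
from scipy.optimize import minimize

def recession(t, q0, k):
    """Simple exponential recession Q(t) = q0 * exp(-t/k) as a stand-in model."""
    return q0 * np.exp(-t / k)

def boxcox(q, lam=0.3):
    return (q**lam - 1.0) / lam

# One synthetic recession sequence (days, m3/s)
t_obs = np.arange(0, 30, dtype=float)
q_obs = recession(t_obs, 12.0, 9.0) * np.exp(np.random.default_rng(2).normal(0, 0.05, t_obs.size))

def objective(p):
    q0, k = p
    if q0 <= 0 or k <= 0:
        return 1e12                       # crude bound handling for the simplex search
    res = boxcox(q_obs) - boxcox(recession(t_obs, q0, k))
    return float(np.sum(res**2))

fit = minimize(objective, x0=[10.0, 5.0], method="Nelder-Mead")
print("fitted q0, k:", np.round(fit.x, 2), " objective:", round(fit.fun, 4))
```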
Wang, Kang; Xia, Xing-Hua
2006-03-31
The end of the separation channel in a microchip was electrochemically mapped using the feedback imaging mode of scanning electrochemical microscopy (SECM). This method provides a convenient way to align the microchannel and electrode in microchip capillary electrophoresis. The influence of the electrode-to-channel position on separation parameters in capillary electrophoresis with electrochemical detection (CE-ED) was then investigated. For the trapezoid-shaped microchannel, detection in the central area gave the best apparent separation efficiency and peak shape. As the electrode-to-channel distance decreased from 65 to 15 μm, the limiting peak currents of dopamine increased, owing to the limited diffusion and convection of the sample band. The results showed that the radial position and axial distance of the detection electrode relative to the microchannel are important for improving separation parameters in CE with amperometric detection.
Evaluating performances of simplified physically based landslide susceptibility models.
NASA Astrophysics Data System (ADS)
Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale
2015-04-01
Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Predicting locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis, integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measurement data pixel by pixel. Moreover, the integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of the models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) evaluation of GOF robustness by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.
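The pixel-by-pixel verification idea can be sketched by comparing a thresholded susceptibility map with an observed landslide mask and computing a few common goodness-of-fit indices; the indices shown (accuracy, critical success index, true skill statistic) are generic examples and do not reproduce the paper's eight GOF indices or its Average Index.

```python
# Hedged sketch of pixel-by-pixel verification: compare a binary susceptibility map with
# observed landslide pixels and compute a few common GOF indices. Index names here are
# generic examples; the paper's eight indices and "Average Index" are not reproduced.
import numpy as np

rng = np.random.default_rng(3)
observed = rng.random((200, 200)) < 0.05                  # synthetic observed landslide mask
score = 0.35 * observed + 0.65 * rng.random((200, 200))   # synthetic model susceptibility
predicted = score > 0.5                                   # binary map after thresholding

tp = np.sum(predicted & observed)
fp = np.sum(predicted & ~observed)
fn = np.sum(~predicted & observed)
tn = np.sum(~predicted & ~observed)

accuracy = (tp + tn) / observed.size
csi = tp / (tp + fp + fn)                                 # critical success index
tss = tp / (tp + fn) + tn / (tn + fp) - 1.0               # true skill statistic
print(f"accuracy={accuracy:.3f}  CSI={csi:.3f}  TSS={tss:.3f}")
```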
NASA Astrophysics Data System (ADS)
Lawi, Armin; Adhitya, Yudhi
2018-03-01
The objective of this research is to determine the quality of cocoa beans from the morphology of their digital images. Samples of cocoa beans were scattered on bright white paper under a controlled lighting condition, and a compact digital camera was used to capture the images. The images were then processed to extract their morphological parameters. The classification process begins with an analysis of the cocoa bean images based on morphological feature extraction. The extracted morphological (physical) feature parameters are area, perimeter, major axis length, minor axis length, aspect ratio, circularity, roundness, and Feret diameter. The cocoa beans are classified into 4 groups: Normal Beans, Broken Beans, Fractured Beans, and Skin Damaged Beans. The classification model used in this paper is the Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM), a proposed improvement of SVM using an ensemble method in which the separate hyperplanes are obtained by a least-squares approach and the multiclass procedure uses the One-Against-All method. The results show that the proposed model classifies the four classes with an accuracy of 99.705% using the morphological features as input parameters.
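The morphological feature extraction step can be sketched with scikit-image region properties on a binarized bean image; the synthetic ellipse below stands in for a segmented bean, and the thresholding, class labels and the MELS-SVM classifier itself are not shown.

```python
# Hedged sketch of the morphological feature extraction step using scikit-image region
# properties on a binary mask. A synthetic ellipse stands in for a segmented bean;
# the MELS-SVM classifier and the actual image pipeline are not reproduced here.
import numpy as np
from skimage.measure import label, regionprops

# Synthetic binary "bean": an ellipse with semi-axes 60 and 35 pixels
yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 35.0) ** 2 <= 1.0

props = regionprops(label(mask.astype(int)))[0]
area = props.area
perimeter = props.perimeter
major = props.major_axis_length
minor = props.minor_axis_length

features = {
    "area": area,
    "perimeter": perimeter,
    "major_axis_length": major,
    "minor_axis_length": minor,
    "aspect_ratio": major / minor,
    "circularity": 4.0 * np.pi * area / perimeter**2,
    "roundness": 4.0 * area / (np.pi * major**2),
    "feret_diameter": props.feret_diameter_max,   # requires scikit-image >= 0.18
}
print({k: round(float(v), 3) for k, v in features.items()})
```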
Vacuum phase transition solves the H0 tension
NASA Astrophysics Data System (ADS)
Di Valentino, Eleonora; Linder, Eric V.; Melchiorri, Alessandro
2018-02-01
Taking the Planck cosmic microwave background data and the more direct Hubble constant measurement data as unaffected by systematic offsets, the values of the Hubble constant H0 interpreted within the ΛCDM cosmological constant and cold dark matter cosmological model are in ~3.3σ tension. We show that the Parker vacuum metamorphosis (VM) model, physically motivated by quantum gravitational effects and with the same number of parameters as ΛCDM, can remove the H0 tension and can give an improved fit to the data (up to a mean Δχ² = -7.5). It also ameliorates tensions with weak lensing data and the high-redshift Lyman alpha forest data. Considering Bayesian evidence, we find, for the Planck data set alone, positive evidence for the VM model against a cosmological constant in both the six- and nine-parameter frameworks. When the R16 data set is also considered, we find strong evidence for the VM model against a cosmological constant in the nine-parameter space. We separately consider a scale-dependent scaling of the gravitational lensing amplitude, such as provided by modified gravity, neutrino mass, or cold dark energy, motivated by the somewhat different cosmological parameter estimates for low and high CMB multipoles. We find that no such scale dependence is preferred.
Assessment of all-solid-state lithium-ion batteries
NASA Astrophysics Data System (ADS)
Braun, P.; Uhlmann, C.; Weiss, M.; Weber, A.; Ivers-Tiffée, E.
2018-07-01
All-solid-state lithium-ion batteries (ASSBs) are considered next generation energy storage systems. A model that describes all contributions to the internal cell resistance, enables optimization of the cell design, and calculates the performance of an arbitrary choice of cell architectures would therefore be very useful. A newly developed one-dimensional model for ASSBs is presented, based on a design concept which employs composite electrodes. The internal cell resistance is calculated by linking two-phase transmission line models representing the composite electrodes with an ohmic resistance representing the solid electrolyte (separator). Thereby, electrical parameters, i.e. ionic and electronic conductivity, electrochemical parameters, i.e. charge-transfer resistance at interfaces and lithium solid-state diffusion, and microstructure parameters, i.e. electrode thickness, particle size, interface area, phase composition and tortuosity, are considered the most important material and design parameters. Subsequently, discharge curves are simulated, and energy- and power-density characteristics of all-solid-state cell architectures are calculated. These model calculations are discussed and compared with experimental data from the literature for a high-power LiCoO2-Li10GeP2S12/Li10GeP2S12/Li4Ti5O12-Li10GeP2S12 cell.
Samlan, Robin A; Story, Brad H
2011-10-01
The aim was to relate vocal fold structure and kinematics to two acoustic measures: cepstral peak prominence (CPP) and the amplitude of the first harmonic relative to the second (H1-H2). The authors used a computational, kinematic model of the medial surfaces of the vocal folds to specify features of vocal fold structure and vibration in a manner consistent with breathy voice. Four model parameters were altered: degree of vocal fold adduction, surface bulging, vibratory nodal point, and supraglottal constriction. CPP and H1-H2 were measured from simulated glottal area, glottal flow, and acoustic waveforms and were related to the underlying vocal fold kinematics. CPP decreased with increased separation of the vocal processes, whereas the nodal point location had little effect. H1-H2 increased as a function of vocal process separation in the range of 1.0 mm to 1.5 mm and decreased with separation > 1.5 mm. CPP is generally a function of vocal process separation. H1*-H2* (see paragraph 6 of the article text for an explanation of the asterisks) will increase or decrease with vocal process separation on the basis of vocal fold shape, the pivot point for the rotational mode, and the supraglottal vocal tract shape, limiting its utility as an indicator of breathy voice. Future work will relate the perception of breathiness to vocal fold kinematics and acoustic measures.
Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui
2016-01-01
The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.
Buchwald, Peter
2017-06-01
A generalized model of receptor function is proposed that relies on the essential assumptions of the minimal two-state receptor theory (i.e., ligand binding followed by receptor activation), but uses a different parametrization and allows nonlinear response (transduction) for possible signal amplification. For the most general case, three parameters are used: Kd, the classic equilibrium dissociation constant to characterize binding affinity; ε, an intrinsic efficacy to characterize the ability of the bound ligand to activate the receptor (ranging from 0 for an antagonist to 1 for a full agonist); and γ, a gain (amplification) parameter to characterize the nonlinearity of postactivation signal transduction (ranging from 1 for no amplification to infinity). The obtained equation, E/Emax = εγ[L] / ((εγ + 1 - ε)[L] + Kd), resembles that of the operational (Black and Leff) or minimal two-state (del Castillo-Katz) models, E/Emax = τ[L] / ((τ + 1)[L] + Kd), with εγ playing a role somewhat similar to that of the τ efficacy parameter of those models, but has several advantages. Its parameters are more intuitive as they are conceptually clearly related to the different steps of binding, activation, and signal transduction (amplification), and they are also better suited for optimization by nonlinear regression. It allows fitting of complex data where receptor binding and response are measured separately and the fractional occupancy and response are mismatched. Unlike the previous models, it is a true generalized model as simplified forms can be reproduced with special cases of its parameters. Such simplified forms can be used on their own to characterize partial agonism, competing partial and full agonists, or signal amplification.
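The quoted response function can be written down directly and compared with the operational-model form and with fractional occupancy; the parameter values below are arbitrary illustrations of a partial agonist with signal amplification, not values from the paper.

```python
# Sketch of the response function quoted above, E/Emax = e*g*[L] / ((e*g + 1 - e)*[L] + Kd),
# compared with the operational-model form E/Emax = tau*[L] / ((tau + 1)*[L] + Kd) and with
# fractional occupancy. Parameter values are arbitrary illustrations.
import numpy as np

def generalized_response(L, Kd=1e-7, eps=0.6, gamma=5.0):
    return eps * gamma * L / ((eps * gamma + 1.0 - eps) * L + Kd)

def operational_response(L, Kd=1e-7, tau=3.0):
    return tau * L / ((tau + 1.0) * L + Kd)

def occupancy(L, Kd=1e-7):
    return L / (L + Kd)

for conc in np.logspace(-10, -4, 7):   # ligand concentrations (M)
    print(f"[L]={conc:.1e} M  occupancy={occupancy(conc):.3f}  "
          f"E/Emax(gen)={generalized_response(conc):.3f}  E/Emax(oper)={operational_response(conc):.3f}")
```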
Microphase separation in random multiblock copolymers
NASA Astrophysics Data System (ADS)
Govorun, E. N.; Chertovich, A. V.
2017-01-01
Microphase separation in random multiblock copolymers is studied with mean-field theory assuming that the long blocks of a copolymer are strongly segregated, whereas the short blocks are able to penetrate into "alien" domains and exchange between the domains and the interfacial layer. A bidisperse copolymer with blocks of only two sizes (long and short) is considered as a model of multiblock copolymers with high polydispersity in the block size. Short blocks of the copolymer play an important role in the microphase separation. First, their penetration into the "alien" domains leads to the formation of joint long blocks in their own domains. Second, short blocks localized at the interface considerably change the interfacial tension. The possibility of penetration of short blocks into the "alien" domains is controlled by the product χNsh (χ is the Flory-Huggins interaction parameter and Nsh is the short block length). At not very large χNsh, the domain size is larger than that for a regular copolymer consisting of the same long blocks as in the considered random copolymer. At a fixed mean block size, the domain size grows with an increase in the block size dispersity, with the rate of growth depending on the finer details of the block size distribution.
NASA Astrophysics Data System (ADS)
Liu, Lingling; Li, Chenxi; Zhao, Huijuan; Yi, Xi; Gao, Feng; Meng, Wei; Lu, Yiming
2014-03-01
Radiance is sensitive to variations in tissue optical parameters, such as the absorption coefficient μa, the scattering coefficient μs, and the anisotropy factor g. Therefore, like fluence, radiance can be used for tissue characterization. Compared with fluence, radiance has the advantage of offering directional information about the light intensity. Taking advantage of this, the optical parameters can be determined by rotating the detector through 360 deg with only a single optode pair. Instead of the translation mode used in fluence-based technologies, the rotation mode is less invasive in clinical diagnosis. This paper explores a new method to obtain the optical properties by measuring the distribution of light intensity in a liquid phantom with only a single optode pair and rotation of the detector through 360 deg. The angular radiance and distance-dependent radiance are verified by comparing experimental measurement data with Monte Carlo (MC) simulation for short source-detector separations and with the diffusion approximation for large source-detector separations. Detecting angular radiance with only a single optode pair at a given source-detector separation offers a route to prostate diagnosis and light dose calculation during photodynamic therapy (PDT).
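For the large-separation regime, the diffusion (P1) approximation for an isotropic point source in an infinite homogeneous medium can be sketched as below; the optical properties are typical tissue-like placeholder values, and the expressions are the textbook P1 forms rather than the calibration used in this study.

```python
# Hedged sketch of the diffusion-approximation (P1) expressions used at large source-detector
# separations in an infinite homogeneous medium: fluence of an isotropic point source and the
# angular radiance it implies. Optical properties are typical tissue-like placeholder values.
import numpy as np

mu_a = 0.01        # absorption coefficient, 1/mm (assumed)
mu_s_prime = 1.0   # reduced scattering coefficient mu_s*(1-g), 1/mm (assumed)
D = 1.0 / (3.0 * (mu_a + mu_s_prime))          # diffusion coefficient, mm
mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

def fluence(r):
    """Fluence rate of a unit-power isotropic point source at distance r (mm)."""
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

def radiance(r, theta):
    """P1 angular radiance at angle theta (rad) from the source-detector axis."""
    grad_phi = -(mu_eff + 1.0 / r) * fluence(r)          # radial derivative of the fluence
    flux = -D * grad_phi                                 # Fick's law, directed away from source
    return (fluence(r) + 3.0 * flux * np.cos(theta)) / (4.0 * np.pi)

r = 10.0  # mm, a "large" separation where diffusion theory is expected to hold
for theta_deg in (0, 90, 180):
    print(f"theta={theta_deg:>3d} deg  radiance={radiance(r, np.radians(theta_deg)):.3e}")
```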
Sweep and Compressibility Effects on Active Separation Control at High Reynolds Numbers
NASA Technical Reports Server (NTRS)
Seifert, Avi; Pack, LaTunia G.
2000-01-01
This paper explores the effects of compressibility, sweep and excitation location on active separation control at high Reynolds numbers. The model, which was tested in a cryogenic pressurized wind tunnel, simulates the upper surface of a 20% thick Glauert Goldschmied type airfoil at zero angle of attack. The flow is fully turbulent since the tunnel sidewall boundary layer flows over the model. Without control, the flow separates at the highly convex area and a large turbulent separation bubble is formed. Periodic excitation is applied to gradually eliminate the separation bubble. Two alternative blowing slot locations as well as the effect of compressibility, sweep and steady suction or blowing were studied. During the test the Reynolds numbers ranged from 2 to 40 million and Mach numbers ranged from 0.2 to 0.7. Sweep angles were 0 and 30 deg. It was found that excitation must be introduced slightly upstream of the separation region regardless of the sweep angle at low Mach number. Introduction of excitation upstream of the shock wave is more effective than at its foot. Compressibility reduces the ability of steady mass transfer and periodic excitation to control the separation bubble but excitation has an effect on the integral parameters, which is similar to that observed in low Mach numbers. The conventional swept flow scaling is valid for fully and even partially attached flow, but different scaling is required for the separated 3D flow. The effectiveness of the active control is not reduced by sweep. Detailed flow field dynamics are described in the accompanying paper.
ERIC Educational Resources Information Center
Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.
2016-01-01
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…
Multivariate system of polarization tomography of biological crystals birefringence networks
NASA Astrophysics Data System (ADS)
Zabolotna, N. I.; Pavlov, S. V.; Ushenko, A. G.; Sobko, O. V.; Savich, V. O.
2014-08-01
The results of optical modeling of multilayer polycrystalline networks of biological tissues are presented. Algorithms were determined for reconstructing the distributions of parameters that describe linear and circular birefringence. To separate the manifestations of these two mechanisms, we propose a method of space-frequency filtering. Criteria were found for differentiating benign and malignant tissues of the female reproductive system.
UTM: Universal Transit Modeller
NASA Astrophysics Data System (ADS)
Deeg, Hans J.
2014-12-01
The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.
Ultrafast photoinduced charge separation in metal-semiconductor nanohybrids.
Mongin, Denis; Shaviv, Ehud; Maioli, Paolo; Crut, Aurélien; Banin, Uri; Del Fatti, Natalia; Vallée, Fabrice
2012-08-28
Hybrid nano-objects formed from two or more disparate materials are among the most promising and versatile nanosystems. A key parameter governing their properties is the interaction between their components. In this context we have investigated ultrafast charge separation in semiconductor-metal nanohybrids using a model system of gold-tipped CdS nanorods in a matchstick architecture. Experiments are performed using an optical time-resolved pump-probe technique, exciting either the semiconductor or the metal component of the particles, and probing the light-induced change of their optical response. Electron-hole pairs photoexcited in the semiconductor part of the nanohybrids are shown to undergo rapid charge separation, with the electron transferred to the metal part on a sub-20 fs time scale. This ultrafast gold charging leads to a transient red-shift and broadening of the metal surface plasmon resonance, in agreement with results for free clusters but in contrast to observations for static charging of gold nanoparticles in liquid environments. Quantitative comparison with a theoretical model is in excellent agreement with the experimental results, confirming photoexcitation of one electron-hole pair per nanohybrid followed by ultrafast charge separation. The results also point to the utilization of such metal-semiconductor nanohybrids in light-harvesting applications and in photocatalysis.
Critical conditions of polymer adsorption and chromatography on non-porous substrates.
Cimino, Richard T; Rasmussen, Christopher J; Brun, Yefim; Neimark, Alexander V
2016-07-15
We present a novel thermodynamic theory and Monte Carlo simulation model for adsorption of macromolecules to solid surfaces that is applied for calculating the chain partition during separation on chromatographic columns packed with non-porous particles. We show that similarly to polymer separation on porous substrates, it is possible to attain three chromatographic modes: size exclusion chromatography at very weak or no adsorption, liquid adsorption chromatography when adsorption effects prevail, and liquid chromatography at critical conditions that occurs at the critical point of adsorption. The main attention is paid to the analysis of the critical conditions, at which the retention is chain length independent. The theoretical results are verified with specially designed experiments on isocratic separation of linear polystyrenes on a column packed with non-porous particles at various solvent compositions. Without invoking any adjustable parameters related to the column and particle geometry, we describe quantitatively the observed transition between the size exclusion and adsorption separation regimes upon the variation of solvent composition, with the intermediate mode occurring at a well-defined critical point of adsorption. A relationship is established between the experimental solvent composition and the effective adsorption potential used in model simulations. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.
2010-07-01
Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.
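As a small illustration of how reliability depends jointly on internal consistency and test length, the Spearman-Brown relation predicts the reliability of a lengthened test from the reliability of the original; this is a classical-test-theory identity offered for context, not the Rasch-based separation statistics the paper discusses.

```python
# A small sketch of the familiar point that reliability reflects both internal consistency
# and test length: the Spearman-Brown relation predicts reliability when a test of
# reliability r is lengthened by a factor k (equivalent items assumed).
def spearman_brown(r, k):
    return k * r / (1.0 + (k - 1.0) * r)

base_r = 0.70
for k in (0.5, 1, 2, 4):
    print(f"length factor {k:>3}: predicted reliability = {spearman_brown(base_r, k):.3f}")
```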
Flow Separation Control on A Full-Scale Vertical Tail Model Using Sweeping Jet Actuators
NASA Technical Reports Server (NTRS)
Andino, Marlyn Y.; Lin, John C.; Washburn, Anthony E.; Whalen, Edward A.; Graff, Emilio C.; Wygnanski, Israel J.
2015-01-01
This paper describes test results of a joint NASA/Boeing research effort to advance Active Flow Control (AFC) technology to enhance aerodynamic efficiency. A full-scale Boeing 757 vertical tail model equipped with sweeping jet AFC was tested at the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames Research Center. The flow separation control optimization was performed at 100 knots, a maximum rudder deflection of 30 deg, and sideslip angles of 0 deg and -7.5 deg. Greater than 20% increments in side force were achieved at the two sideslip angles with a 31-actuator AFC configuration. The flow physics and flow separation control associated with the AFC are presented in detail. AFC caused significant increases in suction pressure on the actuator side and an associated side force enhancement. The momentum coefficient (Cμ) is shown to be a useful parameter for scaling up sweeping jet AFC from sub-scale tests to full-scale applications. Reducing the number of actuators at a constant total Cμ of approximately 0.5% and tripling the actuator spacing did not significantly affect the flow separation control effectiveness.
Io's Heat Flow: A Model Including "Warm" Polar Regions
NASA Astrophysics Data System (ADS)
Veeder, G. J.; Matson, D. L.; Johnson, T. V.; Davies, A. G.; Blaney, D. L.
2002-12-01
Some 90 percent of Io's surface is thermally "passive" material. It is separate from the sites of active volcanic eruptions. Though "passive", its thermal behavior continues to be a challenge for modelers. The usual approach is to take albedo, average daytime temperature, temperature as a function of time of day, etc., and attempt to match these constraints with a uniform surface with a single value of thermal inertia. Io is a case where even globally averaged observations are inconsistent with a single-thermal-inertia model approach. The Veeder et al. (1994) model for "passive" thermal emission addressed seven constraints derived from a decade of ground-based, global observations - average albedo plus infrared fluxes at three separate wavelengths (4.8, 8.7, and 20 microns) for both daytime and eclipsed conditions. This model has only two components - a unit of infinite thermal inertia and a unit of zero thermal inertia. The free parameters are the areal coverage ratio of the two units and their relative albedos (constrained to match the known average albedo). This two-parameter model agreed with the global radiometric data and also predicted significantly higher non-volcanic nighttime temperatures than traditional ("lunar-like") single thermal inertia models. Recent observations from the Galileo infrared radiometer show relatively uniform minimum nighttime temperatures. In particular, they show little variation with either latitude or time of night (Spencer et al., 2000; Rathbun et al., 2002). Additionally, detailed analyses of Io's scattering properties and reflectance variations have led to the interesting conclusion that Io's albedo at regional scales varies little with latitude (Simonelli et al., 2001). This effectively adds four new observational constraints - lack of albedo variation with latitude, average minimum nighttime temperature, and lack of variation of temperature with either latitude or longitude. We have made the fewest modifications necessary for the Veeder et al. model to match these new constraints - we added two model parameters to characterize the volcanically heated high-latitude units. These are the latitude above which the unit exists and its nighttime temperature. The resulting four-parameter model is the first that encompasses all of the available observations of Io's thermal emission and that quantitatively satisfies all eleven observational constraints. While no model is unique, this model is significant because it is the first to accommodate widespread polar regions that are relatively "warm". This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA.
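A toy sketch of the two-component "passive" emission idea described above is given below: one unit of infinite thermal inertia (constant temperature day and night) and one of zero thermal inertia (temperature set instantaneously by insolation), combined by areal fraction. The temperatures and areal fraction are illustrative placeholders, not the fitted values of the model.

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B_lambda(T) in W m^-2 m^-1 sr^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 2.0 * H * C**2 / wavelength_m**5 / np.expm1(x)

def two_component_radiance(wavelength_m, frac_high_inertia, t_high_inertia, t_zero_inertia):
    """Area-weighted radiance of the two passive units."""
    return (frac_high_inertia * planck_radiance(wavelength_m, t_high_inertia)
            + (1.0 - frac_high_inertia) * planck_radiance(wavelength_m, t_zero_inertia))

# Daytime vs. eclipse at 8.7 microns (one of the ground-based bands cited above);
# during eclipse the zero-inertia unit cools sharply while the other does not.
wl = 8.7e-6
day = two_component_radiance(wl, frac_high_inertia=0.4, t_high_inertia=95.0, t_zero_inertia=120.0)
eclipse = two_component_radiance(wl, frac_high_inertia=0.4, t_high_inertia=95.0, t_zero_inertia=50.0)
print(day, eclipse)
```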
Loft, Shayne; Bolland, Scott; Humphreys, Michael S; Neal, Andrew
2009-06-01
A performance theory for conflict detection in air traffic control is presented that specifies how controllers adapt decisions to compensate for environmental constraints. This theory is then used as a framework for a model that can fit controller intervention decisions. The performance theory proposes that controllers apply safety margins to ensure separation between aircraft. These safety margins are formed through experience and reflect the biasing of decisions to favor safety over accuracy, as well as expectations regarding uncertainty in aircraft trajectory. In 2 experiments, controllers indicated whether they would intervene to ensure separation between pairs of aircraft. The model closely predicted the probability of controller intervention across the geometry of problems and as a function of controller experience. When controller safety margins were manipulated via task instructions, the parameters of the model changed in the predicted direction. The strength of the model over existing and alternative models is that it better captures the uncertainty and decision biases involved in the process of conflict detection. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Torque-coupled thermodynamic model for FoF1-ATPase
NASA Astrophysics Data System (ADS)
Ai, Guangkuo; Liu, Pengfei; Ge, Hao
2017-05-01
FoF1-ATPase is a motor protein complex that utilizes transmembrane ion flow to drive the synthesis of adenosine triphosphate (ATP) from adenosine diphosphate (ADP) and phosphate (Pi). While many theoretical models have been proposed to account for its rotary activity, most of them focus on the Fo or F1 portions separately rather than the complex as a whole. Here, we propose a simple but new torque-coupled thermodynamic model of FoF1-ATPase. Solving this model at steady state, we find that the monotonic variation of each portion's efficiency becomes much more robust over a wide range of parameters when the Fo and F1 portions are coupled together, as compared to cases when they are considered separately. Furthermore, the coupled model predicts the dependence of each portion's kinetic behavior on the parameters of the other. Specifically, the power and efficiency of the F1 portion are quite sensitive to the proton gradient across the membrane, while those of the Fo portion as well as the related Michaelis constants for proton concentrations respond insensitively to concentration changes in the reactants of ATP synthesis. The physiological proton gradient across the membrane in the Fo portion is also shown to be optimal for the Michaelis constants of ADP and phosphate in the F1 portion during ATP synthesis. Together, our coupled model is able to predict key dynamic and thermodynamic features of the FoF1-ATPase in vivo semiquantitatively, and suggests that such a coupling approach could be further applied to other biophysical systems.
Bayesian inference for joint modelling of longitudinal continuous, binary and ordinal events.
Li, Qiuju; Pan, Jianxin; Belcher, John
2016-12-01
In medical studies, repeated measurements of continuous, binary and ordinal outcomes are routinely collected from the same patient. Instead of modelling each outcome separately, in this study we propose to jointly model the trivariate longitudinal responses, so as to take account of the inherent association between the different outcomes and thus improve statistical inferences. This work is motivated by a large cohort study in the North West of England, involving trivariate responses from each patient: Body Mass Index, Depression (Yes/No) ascertained with cut-off score not less than 8 at the Hospital Anxiety and Depression Scale, and Pain Interference generated from the Medical Outcomes Study 36-item short-form health survey with values returned on an ordinal scale 1-5. There are some well-established methods for combined continuous and binary, or even continuous and ordinal responses, but little work was done on the joint analysis of continuous, binary and ordinal responses. We propose conditional joint random-effects models, which take into account the inherent association between the continuous, binary and ordinal outcomes. Bayesian analysis methods are used to make statistical inferences. Simulation studies show that, by jointly modelling the trivariate outcomes, standard deviations of the estimates of parameters in the models are smaller and much more stable, leading to more efficient parameter estimates and reliable statistical inferences. In the real data analysis, the proposed joint analysis yields a much smaller deviance information criterion value than the separate analysis, and shows other good statistical properties too. © The Author(s) 2014.
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ in that (1) the CSS/PCC approach can be more awkward because sensitivity and interdependence are considered separately, and (2) the identifiability statistic requires a choice of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
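A minimal sketch of the two local-sensitivity statistics discussed above is shown below, computed from a weighted Jacobian J[i, j] = d(simulated value i)/d(parameter j). In practice the Jacobian, weights, and parameter values come from model runs; here they are random placeholders.

```python
import numpy as np

def scaled_sensitivities(jacobian, params, weights):
    """Dimensionless scaled sensitivities dss_ij = J_ij * |b_j| * sqrt(w_i)."""
    return jacobian * np.abs(params)[None, :] * np.sqrt(weights)[:, None]

def composite_scaled_sensitivity(dss):
    """CSS_j = sqrt(mean over observations of dss_ij^2)."""
    return np.sqrt(np.mean(dss ** 2, axis=0))

def parameter_correlation(dss):
    """PCC from the (unscaled) approximate parameter variance-covariance matrix (X'X)^-1."""
    cov = np.linalg.inv(dss.T @ dss)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

rng = np.random.default_rng(0)
J = rng.normal(size=(50, 4))          # placeholder Jacobian: 50 observations, 4 parameters
b = np.array([0.30, 1.5, 0.02, 10.0]) # placeholder parameter values
w = np.ones(50)                       # placeholder observation weights
dss = scaled_sensitivities(J, b, w)
print("CSS:", composite_scaled_sensitivity(dss))
print("PCC:\n", parameter_correlation(dss))
```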
An action potential-driven model of soleus muscle activation dynamics for locomotor-like movements
NASA Astrophysics Data System (ADS)
Kim, Hojeong; Sandercock, Thomas G.; Heckman, C. J.
2015-08-01
Objective. The goal of this study was to develop a physiologically plausible, computationally robust model for muscle activation dynamics (A(t)) under physiologically relevant excitation and movement. Approach. The interaction of excitation and movement on A(t) was investigated by comparing the force production between a cat soleus muscle and its Hill-type model. For capturing A(t) under excitation and movement variation, a modular modeling framework was proposed comprising three compartments: (1) spikes-to-[Ca2+]; (2) [Ca2+]-to-A; and (3) A-to-force transformation. The individual signal transformations were modeled based on physiological factors so that the parameter values could be separately determined for individual modules directly based on experimental data. Main results. A strong dependency of A(t) on excitation frequency and muscle length was found during both isometric and dynamically moving contractions. The identified dependencies of A(t) under the static and dynamic conditions could be incorporated in the modular modeling framework by modulating the model parameters as a function of movement input. The new modeling approach was also applicable to cat soleus muscles producing waveforms independent of those used to set the model parameters. Significance. This study provides a modeling framework for spike-driven muscle responses during movement that is suitable not only for insights into molecular mechanisms underlying muscle behaviors but also for large scale simulations.
Kaur, Ravneet; Albano, Peter P.; Cole, Justin G.; Hagerty, Jason; LeAnder, Robert W.; Moss, Randy H.; Stoecker, William V.
2015-01-01
Background/Purpose: Early detection of malignant melanoma is an important public health challenge. In the USA, dermatologists are seeing more melanomas at an early stage, before classic melanoma features have become apparent. Pink color is a feature of these early melanomas. If rapid and accurate automatic detection of pink color in these melanomas could be accomplished, there could be significant public health benefits. Methods: Detection of three shades of pink (light pink, dark pink, and orange pink) was accomplished using color analysis techniques in five color planes (red, green, blue, hue and saturation). Color shade analysis was performed using a logistic regression model trained with an image set of 60 dermoscopic images of melanoma that contained pink areas. Detected pink shade areas were further analyzed with regard to the location within the lesion, average color parameters over the detected areas, and histogram texture features. Results: Logistic regression analysis of a separate set of 128 melanomas and 128 benign images resulted in up to 87.9% accuracy in discriminating melanoma from benign lesions measured using area under the receiver operating characteristic curve. The accuracy in this model decreased when parameters for individual shades, texture, or shade location within the lesion were omitted. Conclusion: Texture, color, and lesion location analysis applied to multiple shades of pink can assist in melanoma detection. When any of these three details (color location, shade analysis, or texture analysis) was omitted from the model, accuracy in separating melanoma from benign lesions was lowered. Separation of colors into shades and further details that enhance the characterization of these color shades are needed for optimal discrimination of melanoma from benign lesions. PMID:25809473
Kaur, R; Albano, P P; Cole, J G; Hagerty, J; LeAnder, R W; Moss, R H; Stoecker, W V
2015-11-01
Early detection of malignant melanoma is an important public health challenge. In the USA, dermatologists are seeing more melanomas at an early stage, before classic melanoma features have become apparent. Pink color is a feature of these early melanomas. If rapid and accurate automatic detection of pink color in these melanomas could be accomplished, there could be significant public health benefits. Detection of three shades of pink (light pink, dark pink, and orange pink) was accomplished using color analysis techniques in five color planes (red, green, blue, hue, and saturation). Color shade analysis was performed using a logistic regression model trained with an image set of 60 dermoscopic images of melanoma that contained pink areas. Detected pink shade areas were further analyzed with regard to the location within the lesion, average color parameters over the detected areas, and histogram texture features. Logistic regression analysis of a separate set of 128 melanomas and 128 benign images resulted in up to 87.9% accuracy in discriminating melanoma from benign lesions measured using area under the receiver operating characteristic curve. The accuracy in this model decreased when parameters for individual shades, texture, or shade location within the lesion were omitted. Texture, color, and lesion location analysis applied to multiple shades of pink can assist in melanoma detection. When any of these three details (color location, shade analysis, or texture analysis) was omitted from the model, accuracy in separating melanoma from benign lesions was lowered. Separation of colors into shades and further details that enhance the characterization of these color shades are needed for optimal discrimination of melanoma from benign lesions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
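A rough sketch of the discrimination step described in these two records is shown below: a logistic regression on per-lesion features (e.g. pink-shade areas, average color parameters, texture measures) scored by area under the ROC curve. The feature matrix here is synthetic; the actual study used features extracted from dermoscopy images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_melanoma, n_benign, n_features = 128, 128, 12
# Synthetic stand-in features: melanomas drawn with a small mean shift versus benign lesions.
X = np.vstack([rng.normal(0.3, 1.0, (n_melanoma, n_features)),
               rng.normal(0.0, 1.0, (n_benign, n_features))])
y = np.concatenate([np.ones(n_melanoma), np.zeros(n_benign)])

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"In-sample ROC AUC: {auc:.3f}")   # the paper reports ~0.879 on its own feature set
```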
NASA Astrophysics Data System (ADS)
Lu, Haibao; Yu, Kai; Huang, Wei Min; Leng, Jinsong
2016-12-01
We present an explicit model to study the mechanics and physics of the shape memory effect (SME) in polymers based on the Takayanagi principle. The molecular structural characteristics and elastic behavior of shape memory polymers (SMPs) with multi-phases are investigated in terms of the thermomechanical properties of the individual components, of which the contributions are combined by using Takayanagi’s series-parallel model and parallel-series model, respectively. After that, Boltzmann superposition principle is employed to couple the multi-SME, elastic modulus parameter (E) and temperature parameter (T) in SMPs. Furthermore, the extended Takayanagi model is proposed to separate the plasticizing effect and physical swelling effect on the thermo-/chemo-responsive SME in polymers and then compared with the available experimental data reported in the literature. This study is expected to provide a powerful simulation tool for modeling and experimental substantiation of the mechanics and working mechanism of SME in polymers.
Flickinger, Allison; Christensen, Eric D.
2017-01-01
The Little Blue River in Jackson County, Missouri, was listed as impaired in 2012 due to Escherichia coli (E. coli) from urban runoff and storm sewers. A study was initiated to characterize E. coli concentrations and loads to aid in the development of a total maximum daily load implementation plan. Longitudinal sampling along the stream revealed spatial and temporal variability in E. coli loads. Regression models were developed to better represent E. coli variability in the impaired reach using continuous hydrologic and water-quality parameters as predictive parameters. Daily loads calculated from main-stem samples were significantly higher downstream compared to upstream even though there was no significant difference between the upstream and downstream measured concentrations and no significant conclusions could be drawn from model-estimated loads due to model-associated uncertainty. Increasing sample frequency could decrease the bias and increase the accuracy of the modeled results.
Measurements in a separation bubble on an airfoil using laser velocimetry
NASA Technical Reports Server (NTRS)
Fitzgerald, Edward J.; Mueller, Thomas J.
1990-01-01
An experimental investigation was conducted to measure the reverse flow within the transitional separation bubble that forms on an airfoil at low Reynolds numbers. Measurements were used to determine the effect of the reverse flow on integrated boundary-layer parameters often used to model the bubble. Velocity profile data were obtained on an NACA 663-018 airfoil at angle of attack of 12 deg and a chord Reynolds number of 140,000 using laser Doppler and single-sensor hot-wire anemometry. A new correlation is proposed based on zero velocity position, since the Schmidt (1986) correlations fail in the turbulent portion of the bubble.
Evolutionary Calculations of Phase Separation in Crystallizing White Dwarf Stars
NASA Astrophysics Data System (ADS)
Montgomery, M. H.; Klumpe, E. W.; Winget, D. E.; Wood, M. A.
1999-11-01
We present an exploration of the significance of carbon/oxygen phase separation in white dwarf stars in the context of self-consistent evolutionary calculations. Because phase separation can potentially increase the calculated ages of the oldest white dwarfs, it can affect the age of the Galactic disk as derived from the downturn in the white dwarf luminosity function. We find that the largest possible increase in ages due to phase separation is ~1.5 Gyr, with a most likely value of approximately 0.6 Gyr, depending on the parameters of our white dwarf models. The most important factors influencing the size of this delay are the total stellar mass, the initial composition profile, and the phase diagram assumed for crystallization. We find a maximum age delay in models with masses of ~0.6 M_solar, which is near the peak in the observed white dwarf mass distribution. In addition, we note that the prescription that we have adopted for the mixing during crystallization provides an upper bound for the efficiency of this process, and hence a maximum for the age delays. More realistic treatments of the mixing process may reduce the size of this effect. We find that varying the opacities (via the metallicity) has little effect on the calculated age delays. In the context of Galactic evolution, age estimates for the oldest Galactic globular clusters range from 11.5 to 16 Gyr and depend on a variety of parameters. In addition, a 4-6 Gyr delay is expected between the formation of the globular clusters and the formation of the Galactic thin disk, while the observed white dwarf luminosity function gives an age estimate for the thin disk of 9.5 (+1.1, -0.8) Gyr, without including the effect of phase separation. Using the above numbers, we see that phase separation could add between 0 and 3 Gyr to the white dwarf ages and still be consistent with the overall picture of Galaxy formation. Our calculated maximum value of ≲1.5 Gyr fits within these bounds, as does our best-guess value of ~0.6 Gyr.
NASA Astrophysics Data System (ADS)
Riabkov, Dmitri
Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of identification that assumes that FDG blood input in the brain can be modeled as a function of time and several parameters (IFM) is analyzed also. Nonuniform sampling SLS (NSLS) is developed due to the rapid change of the FDG concentration in the blood during the early postinjection stage. Comparisons of accuracy of EVAM, SLS, NSLS and IFM identification techniques are made.
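A toy forward model for the setting described above is sketched below: each tissue region's time-activity curve is the convolution of a shared blood input with a region-specific impulse response, which for a one-tissue (two-compartment) model is R(t) = K1 exp(-k2 t). The blood input shape and parameter values are illustrative assumptions, not fitted values from the work.

```python
import numpy as np

def tissue_response(t, k1, k2):
    """Impulse response of a one-tissue (two-compartment) model."""
    return k1 * np.exp(-k2 * t)

def tissue_curve(t, blood_input, k1, k2):
    """Time-activity curve = (blood input) convolved with (tissue response)."""
    dt = t[1] - t[0]
    return np.convolve(blood_input, tissue_response(t, k1, k2))[:t.size] * dt

t = np.arange(0.0, 300.0, 1.0)                       # seconds
blood = (t / 20.0) * np.exp(1.0 - t / 20.0)          # gamma-variate-like input (assumed)
region_a = tissue_curve(t, blood, k1=0.9, k2=0.05)   # two regions with different responses;
region_b = tissue_curve(t, blood, k1=0.4, k2=0.02)   # blind methods exploit this difference
print(region_a.max(), region_b.max())
```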
NASA Astrophysics Data System (ADS)
Kim, Kunhwi; Rutqvist, Jonny; Nakagawa, Seiji; Birkholzer, Jens
2017-11-01
This paper presents coupled hydro-mechanical modeling of hydraulic fracturing processes in complex fractured media using a discrete fracture network (DFN) approach. The individual physical processes in the fracture propagation are represented by separate program modules: the TOUGH2 code for multiphase flow and mass transport based on the finite volume approach; and the rigid-body-spring network (RBSN) model for mechanical and fracture-damage behavior, which are coupled with each other. Fractures are modeled as discrete features, of which the hydrological properties are evaluated from the fracture deformation and aperture change. The verification of the TOUGH-RBSN code is performed against a 2D analytical model for single hydraulic fracture propagation. Subsequently, modeling capabilities for hydraulic fracturing are demonstrated through simulations of laboratory experiments conducted on rock-analogue (soda-lime glass) samples containing a designed network of pre-existing fractures. Sensitivity analyses are also conducted by changing the modeling parameters, such as viscosity of injected fluid, strength of pre-existing fractures, and confining stress conditions. The hydraulic fracturing characteristics attributed to the modeling parameters are investigated through comparisons of the simulation results.
Moritz, Bernd; Locatelli, Valentina; Niess, Michele; Bathke, Andrea; Kiessig, Steffen; Entler, Barbara; Finkler, Christof; Wegele, Harald; Stracke, Jan
2017-12-01
CZE is a well-established technique for charge heterogeneity testing of biopharmaceuticals. It is based on the differences between the ratios of net charge and hydrodynamic radius. In an extensive intercompany study, it was recently shown that CZE is very robust and can be easily implemented in labs that had not performed it before. However, individual characteristics of some examined proteins resulted in suboptimal resolution. Therefore, enhanced method development principles were applied here to investigate possibilities for further method optimization. For this purpose, a high number of different method parameters was evaluated with the aim of improving CZE separation. For the relevant parameters, design of experiments (DoE) models were generated and optimized in several ways for different sets of responses like resolution, peak width and number of peaks. Although the DoE optimization was product specific, the resulting combination of optimized parameters was found to significantly improve separation for 13 out of 16 different antibodies and other molecule formats. These results clearly demonstrate generic applicability of the optimized CZE method. Adaptation to individual molecular properties may sometimes still be required in order to achieve optimal separation, but the adjustable parameters discussed in this study [mainly pH, the identity of the polymer additive (HPC versus HPMC), and the concentrations of additives such as acetonitrile, butanolamine and TETA] are expected to significantly reduce the effort required for specific optimization. © 2017 The Authors. Electrophoresis published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Samlan, Robin A.; Story, Brad H.; Bunton, Kate
2014-01-01
Purpose: To determine 1) how specific vocal fold structural and vibratory features relate to breathy voice quality and 2) the relation of perceived breathiness to four acoustic correlates of breathiness. Method: A computational, kinematic model of the vocal fold medial surfaces was used to specify features of vocal fold structure and vibration in a manner consistent with breathy voice. Four model parameters were altered: vocal process separation, surface bulging, vibratory nodal point, and epilaryngeal constriction. Twelve naïve listeners rated breathiness of 364 samples relative to a reference. The degree of breathiness was then compared to 1) the underlying kinematic profile and 2) four acoustic measures: cepstral peak prominence (CPP), harmonics-to-noise ratio, and two measures of spectral slope. Results: Vocal process separation alone accounted for 61.4% of the variance in perceptual rating. Adding nodal point ratio and bulging to the equation increased the explained variance to 88.7%. The acoustic measure CPP accounted for 86.7% of the variance in perceived breathiness, and explained variance increased to 92.6% with the addition of one spectral slope measure. Conclusions: Breathiness ratings were best explained kinematically by the degree of vocal process separation and acoustically by CPP. PMID:23785184
Downscaling Smooth Tomographic Models: Separating Intrinsic and Apparent Anisotropy
NASA Astrophysics Data System (ADS)
Bodin, Thomas; Capdeville, Yann; Romanowicz, Barbara
2016-04-01
In recent years, a number of tomographic models based on full waveform inversion have been published. Due to computational constraints, the fitted waveforms are low-pass filtered, which results in an inability to map features smaller than half the shortest wavelength. However, these tomographic images are not a simple spatial average of the true model, but rather an effective, apparent, or equivalent model that provides a similar 'long-wave' data fit. For example, it can be shown that a series of horizontal isotropic layers will be seen by a 'long wave' as a smooth anisotropic medium. In this way, the observed anisotropy in tomographic models is a combination of intrinsic anisotropy produced by lattice-preferred orientation (LPO) of minerals, and apparent anisotropy resulting from the inability to map discontinuities. Interpreting observed anisotropy (e.g. in terms of mantle flow) therefore requires separating its intrinsic and apparent components. The "up-scaling" relations that link elastic properties of a rapidly varying medium to elastic properties of the effective medium as seen by long waves are strongly non-linear and their inverse highly non-unique. That is, a smooth homogenized effective model is equivalent to a large number of models with discontinuities. In the 1D case, Capdeville et al. (GJI, 2013) recently showed that a tomographic model which results from the inversion of low-pass filtered waveforms is a homogenized model, i.e. the same as the model computed by upscaling the true model. Here we propose a stochastic method to sample the ensemble of layered models equivalent to a given tomographic profile. We use a transdimensional formulation where the number of layers is variable. Furthermore, each layer may be either isotropic (1 parameter) or intrinsically anisotropic (2 parameters). The parsimonious character of the Bayesian inversion gives preference to models with the fewest parameters (i.e. the fewest layers and the maximum number of isotropic layers). The non-uniqueness of the problem can be addressed by adding high frequency data such as receiver functions, which are able to map first-order discontinuities. We show with synthetic tests that this method enables us to distinguish between intrinsic and apparent anisotropy in tomographic models, as layers with intrinsic anisotropy are only present when required by the data. A real data example is presented based on the latest global model produced at Berkeley.
Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...
2017-10-06
A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equations in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are the separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even though the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable to the interior of the canopy, to attain an analytical model capable of describing the mean flow over the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik
A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equations in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are the separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Even though the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable to the interior of the canopy, to attain an analytical model capable of describing the mean flow over the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.
User Guide for VISION 3.4.7 (Verifiable Fuel Cycle Simulation) Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern
2011-07-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters and options; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating 'what if' scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time-varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., 'reactor types' not individual reactors and 'separation types' not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separation or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. VISION comprises several Microsoft Excel input files, a Powersim Studio core, and several Microsoft Excel output files. All must be co-located in the same folder on a PC to function. You must use Powersim Studio 8 or better. We have tested VISION with the Studio 8 Expert, Executive, and Education versions. The Expert and Education versions work when the number of reactor types is 3 or fewer. For more reactor types, the Executive version is currently required. The input files are Excel2003 format (xls). The output files are macro-enabled Excel2007 format (xlsm). VISION 3.4 was designed with more flexibility than previous versions, which were structured for only three reactor types - LWRs that can use only uranium oxide (UOX) fuel, LWRs that can use multiple fuel types (LWR MF), and fast reactors. One could not have, for example, two types of fast reactors concurrently. The new version allows 10 reactor types, and any user-defined uranium-plutonium fuel is allowed. (Thorium-based fuels can be input but several features of the model would not work.) The user identifies (by year) the primary fuel to be used for each reactor type. The user can identify for each primary fuel a contingent fuel to use if the primary fuel is not available, e.g., a reactor designated as using mixed oxide fuel (MOX) would have UOX as the contingent fuel. Another example is that a fast reactor using recycled transuranic (TRU) material can be designated as either having or not having appropriately enriched uranium oxide as a contingent fuel. Because of the need to study evolution in recycling and separation strategies, the user can now select the recycling strategy and separation technology, by year.
NASA Astrophysics Data System (ADS)
Piao, Linfeng; Park, Hyungmin; Jo, Chris
2016-11-01
We present a theoretical model of the recovery rates of platelets and white blood cells in the process of centrifugal separation of platelet-rich plasma (PRP). For conditions used in practice, the separation process is modeled as one-dimensional particle sedimentation; a quasi-linear partial differential equation is derived based on kinematic-wave theory. This is solved to determine the positions of the supernatant-suspension and suspension-sediment interfaces, which are used to estimate the recovery rate of the plasma. While correcting Brown's hypothesis (1989) that platelet recovery is linearly proportional to that of plasma, we propose a new correlation model for predicting platelet recovery as a function of the volume of whole blood, centrifugal acceleration, and time. For a range of practical parameters, such as hematocrit, volume of whole blood and centrifugation (time and acceleration), the predicted recovery rate shows good agreement with available clinical data. We propose that this model be used further to optimize PRP preparation methods for individual cases. Supported by a Grant (MPSS-CG-2016-02) through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors.
Thipphavong, David P
2016-09-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
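A minimal sketch of the TOC-matching idea described above is shown below: generate candidate altitude predictions for a sweep of assumed aircraft weights and keep the candidate whose predicted TOC time is closest to the observed TOC time. The climb model here is a crude placeholder standing in for the trajectory predictor, not an actual CTAS routine, and all numbers are assumed.

```python
import numpy as np

def predict_altitude_profile(weight_kg, times_s, cruise_alt_ft=35000.0):
    """Placeholder climb model: heavier aircraft climb more slowly (assumed inverse-weight scaling)."""
    climb_rate_fps = 2.0e8 / weight_kg / 60.0
    return np.minimum(times_s * climb_rate_fps, cruise_alt_ft)

def toc_time(profile_ft, times_s, cruise_alt_ft=35000.0):
    """Time at which the predicted profile first reaches cruise altitude."""
    return times_s[np.argmax(profile_ft >= cruise_alt_ft)]

times = np.arange(0.0, 2400.0, 10.0)       # 40-minute horizon, 10 s steps
observed_toc = 900.0                       # seconds, assumed observed TOC time
candidates = {w: predict_altitude_profile(w, times) for w in np.arange(50e3, 90e3, 2e3)}
best_weight = min(candidates, key=lambda w: abs(toc_time(candidates[w], times) - observed_toc))
print(f"Selected weight parameter: {best_weight / 1000:.0f} t")
```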
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors
Thipphavong, David P.
2017-01-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%. PMID:28684883
Top-of-Climb Matching Method for Reducing Aircraft Trajectory Prediction Errors
NASA Technical Reports Server (NTRS)
Thipphavong, David P.
2016-01-01
The inaccuracies of the aircraft performance models utilized by trajectory predictors with regard to takeoff weight, thrust, climb profile, and other parameters result in altitude errors during the climb phase that often exceed the vertical separation standard of 1000 feet. This study investigates the potential reduction in altitude trajectory prediction errors that could be achieved for climbing flights if just one additional parameter is made available: top-of-climb (TOC) time. The TOC-matching method developed and evaluated in this paper is straightforward: a set of candidate trajectory predictions is generated using different aircraft weight parameters, and the one that most closely matches TOC in terms of time is selected. This algorithm was tested using more than 1000 climbing flights in Fort Worth Center. Compared to the baseline trajectory predictions of a real-time research prototype (Center/TRACON Automation System), the TOC-matching method reduced the altitude root mean square error (RMSE) for a 5-minute prediction time by 38%. It also decreased the percentage of flights with absolute altitude error greater than the vertical separation standard of 1000 ft for the same look-ahead time from 55% to 30%.
Multiparameter Estimation in Networked Quantum Sensors
NASA Astrophysics Data System (ADS)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-01
We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
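For reference, the standard multiparameter quantum Cramér-Rao bound that frames this question can be written as follows (general background, not a result specific to this work; the notation is assumed):

```latex
% For any unbiased estimator of the parameter vector \theta = (\theta_1,\dots,\theta_d)
% from \mu repetitions of the experiment,
\operatorname{Cov}\bigl(\tilde{\theta}\bigr) \;\ge\; \frac{1}{\mu}\, F_Q(\theta)^{-1},
% where F_Q is the quantum Fisher information matrix of the probe state.  The question
% addressed above is when entanglement between the spatially separated sensors can
% increase F_Q for locally encoded parameters
U(\theta) \;=\; \bigotimes_{k} \hat{U}_k(\theta_k).
```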
Supermodeling by Synchronization of Alternative SPEEDO Models
NASA Astrophysics Data System (ADS)
Duane, Gregory; Selten, Frank
2016-04-01
The supermodeling approach, wherein different imperfect models of the same objective process are dynamically combined at run time to reduce systematic error, is tested using SPEEDO - a primitive equation atmospheric model coupled to the CLIO ocean model. Three versions of SPEEDO are defined by parameters that differ in a range that arguably mimics differences among state-of-the-art climate models. A fourth model is taken to represent truth. The "true" ocean drives all three model atmospheres. The three models are also connected to one another at every level, with spatially uniform nudging coefficients that are trained so that the three models, which synchronize with one another, also synchronize with truth when data is continuously assimilated, as in weather prediction. The SPEEDO supermodel is evaluated in weather-prediction mode, with nudging to truth. It is found that the supermodel performs better than any of the three models and marginally better than the best weighted average of the outputs of the three models run separately. To evaluate the utility for climate projection, parameters corresponding to greenhouse gas levels are changed in truth and in the three models. The supermodel formed with inter-model connections from the present-CO2 runs no longer gives the optimal configuration for the supermodel in the doubled-CO2 realm, but the supermodel with the previously trained connections is still useful as compared to the separate models or averages of their outputs. In ongoing work, a training algorithm is examined that attempts to match the blocked-zonal index cycle of the SPEEDO model atmosphere to truth, rather than simply minimizing the RMS error in the various fields. Such an approach comes closer to matching the model attractor to the true attractor - the desired effect in climate projection - rather than matching instantaneous states. Gradient descent in a cost function defined over a finite temporal window can indeed be done efficiently. Preliminary results are presented for a crudely defined index cycle.
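A toy illustration of the supermodeling idea described above is sketched below, using Lorenz-63 in place of SPEEDO: two imperfect models with different parameters are nudged toward each other (and, in assimilation/training mode, toward truth), and the supermodel state is their synchronized average. The coupling coefficients and parameter perturbations are illustrative, not trained values.

```python
import numpy as np

def lorenz_rhs(state, sigma, rho, beta):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(states, params, truth, c_inter=0.5, c_truth=0.2, dt=0.01):
    """One Euler step of the connected ('super') ensemble with nudging."""
    mean = states.mean(axis=0)
    new = []
    for s, p in zip(states, params):
        tend = lorenz_rhs(s, *p)
        tend += c_inter * (mean - s)        # inter-model connections
        tend += c_truth * (truth - s)       # nudging to observations (training mode)
        new.append(s + dt * tend)
    return np.array(new)

truth_params = (10.0, 28.0, 8.0 / 3.0)
model_params = [(9.0, 30.0, 2.5), (11.0, 26.0, 3.0)]          # two imperfect models
truth = np.array([1.0, 1.0, 20.0])
states = np.array([[1.1, 0.9, 19.0], [0.8, 1.2, 21.0]])
for _ in range(5000):
    truth = truth + 0.01 * lorenz_rhs(truth, *truth_params)
    states = step(states, model_params, truth)
print("Supermodel error:", np.linalg.norm(states.mean(axis=0) - truth))
```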
Unresolved Galaxy Classifier for ESA/Gaia mission: Support Vector Machines approach
NASA Astrophysics Data System (ADS)
Bellas-Velidis, Ioannis; Kontizas, Mary; Dapergolas, Anastasios; Livanou, Evdokia; Kontizas, Evangelos; Karampelas, Antonios
A software package Unresolved Galaxy Classifier (UGC) is being developed for the ground-based pipeline of ESA's Gaia mission. It aims to provide an automated taxonomic classification and specific parameters estimation analyzing Gaia BP/RP instrument low-dispersion spectra of unresolved galaxies. The UGC algorithm is based on a supervised learning technique, the Support Vector Machines (SVM). The software is implemented in Java as two separate modules. An offline learning module provides functions for SVM-models training. Once trained, the set of models can be repeatedly applied to unknown galaxy spectra by the pipeline's application module. A library of galaxy models synthetic spectra, simulated for the BP/RP instrument, is used to train and test the modules. Science tests show a very good classification performance of UGC and relatively good regression performance, except for some of the parameters. Possible approaches to improve the performance are discussed.
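A rough sketch of the supervised-learning step described above is shown below: train an SVM classifier on a library of simulated low-dispersion spectra labelled by galaxy type, then apply it to unseen spectra. The spectra here are random placeholders rather than Gaia BP/RP simulations, and the class names are assumed.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_per_class, n_bins = 200, 120
classes = ["early", "spiral", "irregular", "quasar-like"]    # assumed labels
# Synthetic "spectra": each class gets a different mean level across the flux bins.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_bins))
               for i, _ in enumerate(classes)])
y = np.repeat(np.arange(len(classes)), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)   # offline training module
print("Held-out accuracy:", clf.score(X_test, y_test))                 # application module step
```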
Crisanti, A; Leuzzi, L; Paoluzzi, M
2011-09-01
The interrelation of dynamic processes active on separated time-scales in glasses and viscous liquids is investigated using a model displaying two time-scale bifurcations both between fast and secondary relaxation and between secondary and structural relaxation. The study of the dynamics allows for predictions on the system relaxation above the temperature of dynamic arrest in the mean-field approximation, that are compared with the outcomes of the equations of motion directly derived within the Mode Coupling Theory (MCT) for under-cooled viscous liquids. By varying the external thermodynamic parameters, a wide range of phenomenology can be represented, from a very clear separation of structural and secondary peak in the susceptibility loss to excess wing structures.
NASA Astrophysics Data System (ADS)
Munyaneza, O.; Mukubwa, A.; Maskey, S.; Wenninger, J.; Uhlenbrook, S.
2013-12-01
In the last couple of years, different hydrological research projects were undertaken in the Migina catchment (243.2 km2), a tributary of the Kagera river in Southern Rwanda. These projects aimed to understand the hydrological processes of the catchment using analytical and experimental approaches and to build a pilot case whose experience can be extended to other catchments in Rwanda. In the present study, we developed a hydrological model of the catchment, which can be used to inform water resources planning and decision making. The semi-distributed hydrological model HEC-HMS (version 3.5) was used with its soil moisture accounting, unit hydrograph, linear reservoir (for base flow) and Muskingum-Cunge (river routing) methods. We used rainfall data from 12 stations and streamflow data from 5 stations, which were collected as part of this study over a period of two years (May 2009 to June 2011). The catchment was divided into five sub-catchments, each represented by one of the five observed streamflow gauges. The model parameters were calibrated separately for each sub-catchment using the observed streamflow data. Calibration results were found acceptable at four stations, with a Nash-Sutcliffe model efficiency of 0.65 on daily runoff at the catchment outlet. Due to the lack of sufficient and reliable data for longer periods, a model validation (split sample test) was not undertaken. However, we used results from tracer-based hydrograph separation from a previous study to compare our model results in terms of the runoff components. It was shown that the model performed well in simulating the total flow volume, peak flow and timing, as well as the portion of direct runoff and base flow. We observed considerable disparities in the parameters (e.g. groundwater storage) and runoff components across the five sub-catchments, which provided insights into the different hydrological processes at sub-catchment scale. We conclude that such disparities justify the need to consider catchment subdivisions if such parameters and components of the water cycle are to form the basis for decision making in water resources planning in the Migina catchment.
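A minimal implementation of the Nash-Sutcliffe model efficiency used above to judge the calibration is given below (a value of 1 is a perfect fit, 0 means the model is no better than the mean of the observations). The flow series shown are placeholders.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([2.1, 3.4, 8.9, 6.2, 4.0, 3.1])   # daily runoff, m^3/s (assumed)
sim = np.array([2.4, 3.1, 7.5, 6.8, 4.4, 2.9])   # simulated runoff (assumed)
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```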
Dai, Sheng-Yun; Xu, Bing; Zhang, Yi; Li, Jian-Yu; Sun, Fei; Shi, Xin-Yuan; Qiao, Yan-Jiang
2016-09-01
Coptis chinensis (Huanglian) is a commonly used traditional Chinese medicine (TCM) herb and alkaloids are the most important chemical constituents in it. In the present study, an isocratic reverse phase high performance liquid chromatography (RP-HPLC) method allowing the separation of six alkaloids in Huanglian was for the first time developed under the quality by design (QbD) principles. First, five chromatographic parameters were identified to construct a Plackett-Burman experimental design. The critical resolution, analysis time, and peak width were responses modeled by multivariate linear regression. The results showed that the percentage of acetonitrile, concentration of sodium dodecyl sulfate, and concentration of potassium phosphate monobasic were statistically significant parameters (P < 0.05). Then, the Box-Behnken experimental design was applied to further evaluate the interactions between the three parameters on selected responses. Full quadratic models were built and used to establish the analytical design space. Moreover, the reliability of design space was estimated by the Bayesian posterior predictive distribution. The optimal separation was predicted at 40% acetonitrile, 1.7 g·mL(-1) of sodium dodecyl sulfate and 0.03 mol·mL(-1) of potassium phosphate monobasic. Finally, the accuracy profile methodology was used to validate the established HPLC method. The results demonstrated that the QbD concept could be efficiently used to develop a robust RP-HPLC analytical method for Huanglian. Copyright © 2016 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
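A sketch of the response-surface step described above is shown below: fit a full quadratic model (main effects, interactions, squares) of a chromatographic response such as the critical resolution to the three significant factors from a Box-Behnken design. The coded design matrix and response values are placeholders, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Box-Behnken design for three factors in coded levels (%ACN, c(SDS), c(KH2PO4)),
# with three center points.
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
rng = np.random.default_rng(3)
y = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 0] * X[:, 2] \
    - 0.4 * X[:, 0] ** 2 + rng.normal(0, 0.05, len(X))   # synthetic resolution values

quad = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression(fit_intercept=False).fit(quad.fit_transform(X), y)
print(dict(zip(quad.get_feature_names_out(["ACN", "SDS", "KH2PO4"]), model.coef_.round(3))))
```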
Ru, Nan; Liu, Sean Shih-Yao; Zhuang, Li; Li, Song; Bai, Yuxing
2013-05-01
To observe the real-time microarchitecture changes of the alveolar bone and root resorption during orthodontic treatment. A 10 g force was delivered to move the maxillary left first molars mesially in twenty 10-week-old rats for 14 days. The first molar and adjacent alveolar bone were scanned using in vivo microcomputed tomography at the following time points: days 0, 3, 7, and 14. Microarchitecture parameters, including bone volume fraction, structure model index, trabecular thickness, trabecular number, and trabecular separation of alveolar bone, were measured on the compression and tension sides. The total root volume was measured, and the resorption crater volume at each time point was calculated. Univariate repeated-measures analysis of variance with Bonferroni corrections was performed to compare the differences in each parameter between time points, with the significance level set at P < .05. From day 3 to day 7, bone volume fraction, structure model index, trabecular thickness, and trabecular separation decreased significantly on the compression side, but the same parameters increased significantly on the tension side from day 7 to day 14. Root resorption volume of the mesial root increased significantly on day 7 of orthodontic loading. Real-time root and bone resorption during orthodontic movement can be observed in 3 dimensions using in vivo micro-CT. Alveolar bone resorption and root resorption were observed mostly in the apical third on day 7 on the compression side; bone formation was observed on day 14 on the tension side during orthodontic tooth movement.
XMM-NEWTON MEASUREMENT OF THE GALACTIC HALO X-RAY EMISSION USING A COMPACT SHADOWING CLOUD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henley, David B.; Shelton, Robin L.; Cumbee, Renata S.
2015-02-01
Observations of interstellar clouds that cast shadows in the soft X-ray background can be used to separate the background Galactic halo emission from the local emission due to solar wind charge exchange (SWCX) and/or the Local Bubble (LB). We present an XMM-Newton observation of a shadowing cloud, G225.60–66.40, that is sufficiently compact that the on- and off-shadow spectra can be extracted from a single field of view (unlike previous shadowing observations of the halo with CCD-resolution spectrometers, which consisted of separate on- and off-shadow pointings). We analyzed the spectra using a variety of foreground models: one representing LB emission, and two representing SWCX emission. We found that the resulting halo model parameters (temperature T {sub h} ≈ 2 × 10{sup 6} K, emission measure E{sub h}≈4×10{sup −3} cm{sup −6} pc) were not sensitive to the foreground model used. This is likely due to the relative faintness of the foreground emission in this observation. However, the data do favor the existence of a foreground. The halo parameters derived from this observation are in good agreement with those from previous shadowing observations, and from an XMM-Newton survey of the Galactic halo emission. This supports the conclusion that the latter results are not subject to systematic errors, and can confidently be used to test models of the halo emission.
NASA Astrophysics Data System (ADS)
Newell, P. T.; Liou, K.; Zhang, Y.; Paxton, L.; Sotirelis, T.; Mitchell, E. J.
2013-12-01
OVATION Prime is an auroral precipitation model parameterized by solar wind driving. Distinguishing features of the model include an optimized solar wind-magnetosphere coupling function (dΦMP/dt), which predicts auroral power far better than Kp or other traditional parameters, the separation of the aurora into categories (diffuse aurora, monoenergetic, broadband, and ion), the inclusion of seasonal variations, and separate parameter fits for each MLATxMLT bin, thus permitting each type of aurora and each location to have differing responses to season and solar wind input (as indeed they do). We here introduce OVATION Prime-2013, an upgrade to the 2008 version currently widely available. The most notable advantage of OP-2013 is that it uses UV images from the GUVI instrument on the TIMED satellite for high disturbance levels (dΦMP/dt > 12,000 nT^2/3 (km/s)^4/3, which roughly corresponds to Kp = 5+ or 6-). The range of validity is thought to be about 0 < dΦMP/dt ≤ 30,000 (say, Kp = 8 or 8+). Other upgrades include a reduced susceptibility to salt-and-pepper noise and smoother interpolation across the postmidnight data gap. We will also provide a comparison of the advantages and disadvantages of other current precipitation models, especially OVATION-SuperMAG, which produces particularly good estimates of total auroral power, at the expense of working best on an historical basis. (Figure: OVATION Prime-2013 for high solar wind driving, as TIMED GUVI data take over from DMSP.)
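The coupling function dΦMP/dt referenced here is commonly computed from the solar wind speed and the transverse interplanetary magnetic field; a sketch using the widely cited Newell et al. (2007) form, in the nT^2/3 (km/s)^4/3 units quoted above, is given below. The exact constants and inputs used operationally by OVATION Prime are not stated in the abstract, so treat this as illustrative.

```python
import numpy as np

def coupling_function(v_kms, by_nT, bz_nT):
    """Newell et al. (2007)-style solar wind-magnetosphere coupling function
    dPhi_MP/dt = v^(4/3) * B_T^(2/3) * sin^(8/3)(theta_c / 2),
    returned in nT^(2/3) (km/s)^(4/3) units."""
    b_t = np.hypot(by_nT, bz_nT)            # transverse IMF magnitude
    theta_c = np.arctan2(by_nT, bz_nT)      # IMF clock angle
    return (v_kms ** (4.0 / 3.0) * b_t ** (2.0 / 3.0)
            * np.abs(np.sin(theta_c / 2.0)) ** (8.0 / 3.0))

# Example: strong driving, roughly the level where GUVI images take over.
print(round(coupling_function(v_kms=600.0, by_nT=5.0, bz_nT=-12.0)))
```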
The appearance, motion, and disappearance of three-dimensional magnetic null points
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Nicholas A., E-mail: namurphy@cfa.harvard.edu; Parnell, Clare E.; Haynes, Andrew L.
2015-10-15
While theoretical models and simulations of magnetic reconnection often assume symmetry such that the magnetic null point when present is co-located with a flow stagnation point, the introduction of asymmetry typically leads to non-ideal flows across the null point. To understand this behavior, we present exact expressions for the motion of three-dimensional linear null points. The most general expression shows that linear null points move in the direction along which the magnetic field and its time derivative are antiparallel. Null point motion in resistive magnetohydrodynamics results from advection by the bulk plasma flow and resistive diffusion of the magnetic field, which allows non-ideal flows across topological boundaries. Null point motion is described intrinsically by parameters evaluated locally; however, global dynamics help set the local conditions at the null point. During a bifurcation of a degenerate null point into a null-null pair or the reverse, the instantaneous velocity of separation or convergence of the null-null pair will typically be infinite along the null space of the Jacobian matrix of the magnetic field, but with finite components in the directions orthogonal to the null space. Not all bifurcating null-null pairs are connected by a separator. Furthermore, except under special circumstances, there will not exist a straight line separator connecting a bifurcating null-null pair. The motion of separators cannot be described using solely local parameters because the identification of a particular field line as a separator may change as a result of non-ideal behavior elsewhere along the field line.
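The exact expressions mentioned in the abstract follow from requiring B to remain zero at the moving null; under that assumption the null velocity is U = -(∇B)^(-1) ∂B/∂t wherever the Jacobian ∇B is non-degenerate. A minimal numerical sketch with a hypothetical linear null is shown below; it is not the authors' code.

```python
import numpy as np

def null_point_velocity(jacobian_B, dBdt):
    """Velocity U of a linear 3D null point, from d/dt B(x_null(t), t) = 0:
    (U . grad)B + dB/dt = 0  =>  U = -(grad B)^(-1) dB/dt,
    valid while the Jacobian grad B is non-degenerate."""
    return -np.linalg.solve(jacobian_B, dBdt)

# Hypothetical divergence-free linear null: B = (x, y, -2z), with a slow
# time variation dB/dt = (0.1, 0, 0) evaluated at the null.
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, -2.0]])
dBdt = np.array([0.1, 0.0, 0.0])
print(null_point_velocity(J, dBdt))   # approximately [-0.1, 0, 0]
```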
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in the Wigner-Ville Distribution (WVD) for single auto terms and cross terms can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of the BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
Extraction and Separation Modeling of Orion Test Vehicles with ADAMS Simulation
NASA Technical Reports Server (NTRS)
Fraire, Usbaldo, Jr.; Anderson, Keith; Cuthbert, Peter A.
2013-01-01
The Capsule Parachute Assembly System (CPAS) project has increased efforts to demonstrate the performance of fully integrated parachute systems at both higher dynamic pressures and in the presence of wake fields, using a Parachute Compartment Drop Test Vehicle (PCDTV) and a Parachute Test Vehicle (PTV), respectively. Modeling the extraction and separation events has proven challenging, and an understanding of the physics is required to reduce the risk of separation malfunctions. The need for extraction and separation modeling is critical to a successful CPAS test campaign. Current PTV-alone simulations, such as the Decelerator System Simulation (DSS), require accurate initial conditions (ICs) drawn from a separation model. Automatic Dynamic Analysis of Mechanical Systems (ADAMS), a Commercial off the Shelf (COTS) tool, was employed to provide insight into the multi-body six degree of freedom (DOF) interaction between parachute test hardware and external and internal forces. Components of the model include a composite extraction parachute, primary vehicle (PTV or PCDTV), platform cradle, a release mechanism, aircraft ramp, and a programmer parachute with attach points. Independent aerodynamic forces were applied to the mated test vehicle/platform cradle and to the separated test vehicle and platform cradle. The aero coefficients were determined from real-time lookup tables which were functions of both angle of attack (α) and sideslip (β). The atmospheric properties were also determined from a real-time lookup table characteristic of the Yuma Proving Ground (YPG) atmosphere for the planned test month. Representative geometries were constructed in ADAMS with measured mass properties generated for each independent vehicle. Derived smart separation parameters were included in ADAMS as sensors, with defined pitch and pitch rate criteria used to refine inputs to analogous avionics systems for optimal separation conditions. Key design variables were dispersed in a Monte Carlo analysis to provide the maximum expected range of the state variables at programmer deployment to be used as ICs in DSS. Extensive comparisons were made with the Decelerator System Simulation Application (DSSA) to validate the mated portion of the ADAMS extraction trajectory. Results of the comparisons improved the fidelity of ADAMS with a ramp pitch profile update from DSSA. Post-test reconstructions resulted in improvements to extraction parachute drag area knock-down factors, extraction line modeling, and the inclusion of ball-to-socket attachments used as a release mechanism on the PTV. Modeling of two extraction parachutes was based on United States Air Force (USAF) tow test data and integrated into ADAMS for nominal and Monte Carlo trajectory assessments. Video overlay of ADAMS animations and actual C-12 chase plane test videos supported analysis and observation efforts for extraction and separation events. The COTS ADAMS simulation has been integrated with NASA-based simulations to provide complete end-to-end trajectories with a focus on the extraction, separation, and programmer deployment sequence. The flexibility of modifying ADAMS inputs has proven useful for sensitivity studies and extraction/separation modeling efforts.
NASA Astrophysics Data System (ADS)
Dieterich, Sergio; Henry, Todd J.; Benedict, George Fritz; Jao, Wei-Chun; White, Russel; RECONS Team
2017-01-01
Mass is the most fundamental stellar parameter, and yet model-independent dynamical masses can only be obtained for a small subset of closely separated binaries. The high angular resolution needed to characterize the individual components of those systems means that little is known about the details of their atmospheric properties. We discuss the results of HST/STIS observations yielding spatially resolved optical spectra for six closely separated M dwarf systems, all of which have HST/FGS precision dynamical masses for the individual components, ranging from 0.4 to 0.076 MSol. We assume coevality and equal metallicity for the components of each system and use those constraints to perform stringent tests of the leading atmospheric and evolutionary model families throughout the M dwarf mass range. We find the latest models to be in good agreement with observations. We discuss specific spectral diagnostic features such as the well-known gravity-sensitive Na and K lines and address ways to break the temperature-metallicity-gravity degeneracy that often hinders the interpretation of these features. We single out a comparison between the systems GJ 469 AB and G 250-29 AB, which have nearly identical mass configurations but different metallicities, thus causing marked differences in atmospheric properties and overall luminosities. This work is funded by NASA grant HST-GO-12938 and by the NSF Astronomy and Astrophysics Postdoctoral Fellowship program through NSF grant AST-1400680.
Global and local threshold in a metapopulational SEIR model with quarantine
NASA Astrophysics Data System (ADS)
Gomes, Marcelo F. C.; Rossi, Luca; Pastore Y Piontti, Ana; Vespignani, Alessandro
2013-03-01
Diseases that can be transmitted before the onset of symptoms pose a challenging threat to healthcare, since it is hard to track spreaders and implement quarantine measures. More precisely, one of the main concerns regarding the pandemic spreading of diseases is the prediction, and eventually the control, of local outbreaks that will trigger a global invasion of a particular disease. We present a metapopulation disease spreading model with transmission from both symptomatic and asymptomatic agents and analyze the role of quarantine measures and mobility processes between subpopulations. We show that, depending on the disease parameters, it is possible to separate the local and global thresholds in the parameter space and to study the system behavior as a function of the fraction of asymptomatic transmissions. This means that there is a range of parameter values where, although we do not achieve local control of the outbreak, it is possible to control the global spread of the disease. We validate the analytic picture with a data-driven model that integrates commuting, air traffic flow and detailed information about population size and structure worldwide.
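A minimal single-population sketch of this kind of dynamics is given below: an SEIR model with separate asymptomatic and symptomatic infectious compartments, in which a fraction q of symptomatic cases is quarantined (removed from transmission). The compartment structure, parameter values and quarantine formulation are illustrative assumptions, not those of the paper's metapopulation model.

```python
import numpy as np
from scipy.integrate import odeint

def seir_quarantine(y, t, beta, sigma, gamma, p_a, q):
    """SEIR with asymptomatic (Ia) and symptomatic (Is) compartments;
    a fraction q of symptomatic infectious individuals is quarantined
    and removed from transmission. Rates are per day; illustrative only."""
    S, E, Ia, Is, R = y
    N = S + E + Ia + Is + R
    foi = beta * (Ia + (1.0 - q) * Is) / N        # force of infection
    dS = -foi * S
    dE = foi * S - sigma * E
    dIa = p_a * sigma * E - gamma * Ia
    dIs = (1.0 - p_a) * sigma * E - gamma * Is
    dR = gamma * (Ia + Is)
    return [dS, dE, dIa, dIs, dR]

t = np.linspace(0.0, 180.0, 721)
y0 = [9999.0, 0.0, 1.0, 0.0, 0.0]
sol = odeint(seir_quarantine, y0, t,
             args=(0.5, 1 / 3.0, 1 / 5.0, 0.4, 0.8))  # beta, sigma, gamma, p_a, q
print(f"final epidemic size: {sol[-1, 4]:.0f} of 10000")
```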
HYDRORECESSION: A toolbox for streamflow recession analysis
NASA Astrophysics Data System (ADS)
Arciniega, S.
2015-12-01
Streamflow recession curves are hydrological signatures that allow studying the relationship between groundwater storage and baseflow and/or low flows at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series with the support of different tools allowing the parameterization of linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning and mean squared error) and three different methods to extract hydrograph recession segments (Vogel, Brutsaert and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications of HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimates, catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
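The storage-outflow relationships handled by such toolboxes are often written as -dQ/dt = aQ^b; the sketch below, which is not part of HYDRORECESSION itself, estimates a and b from a single extracted recession segment by linear regression in log-log space (one of the fitting strategies listed above). The streamflow values are hypothetical.

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q^b on one recession segment by regressing
    log(-dQ/dt) against log(Q). Returns (a, b); b = 1 recovers the
    linear (Maillet) reservoir."""
    q = np.asarray(q, dtype=float)
    dq_dt = np.diff(q) / dt
    q_mid = 0.5 * (q[1:] + q[:-1])
    mask = dq_dt < 0                      # keep strictly receding steps
    b, log_a = np.polyfit(np.log(q_mid[mask]), np.log(-dq_dt[mask]), 1)
    return np.exp(log_a), b

# Hypothetical daily streamflow recession (m^3/s).
q = np.array([12.0, 9.6, 7.9, 6.6, 5.6, 4.8, 4.2, 3.7, 3.3])
a, b = fit_recession(q)
print(f"a = {a:.3f}, b = {b:.2f}")
```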
Béquin, Ph; Castor, K; Herzog, Ph; Montembault, V
2007-04-01
This paper deals with the acoustic modeling and measurement of a needle-to-grid plasma loudspeaker using a negative Corona discharge. In the first part, we summarize the model described in previous papers, where the electrode gap is divided into a charged particle production region near the needle and a drift region which occupies most of the inter-electrode gap. In each region, interactions between charged and neutral particles in the ionized gas lead to a perturbation of the surrounding air, and thus generate an acoustic field. In each region, viewed as a separate acoustic source, an acoustical model requiring only a few parameters is proposed. In the second part of the paper, an experimental setup is presented for measuring acoustic pressures and directivities. This setup was developed and used to study the evolution of the parameters with physical properties, such as the geometrical and electrical configuration and the needle material. In the last part of this paper, a study on the electroacoustic efficiency of the plasma loudspeaker is described, and differences with respect to the design parameters are analyzed. Although this work is mainly aimed at understanding transduction phenomena, it may be found useful for the development of an audio loudspeaker.
Bentahir, Mostafa; Laduron, Frederic; Irenge, Leonid; Ambroise, Jérôme; Gala, Jean-Luc
2014-01-01
Separating CBRN mixed samples that contain both chemical and biological warfare agents (CB mixed samples) in liquid and solid matrices remains a very challenging issue. Parameters were set up to assess the performance of a simple filtration-based method, first optimized on separate C- and B-agents and then assessed on a model CB mixed sample. In this model, MS2 bacteriophage, Autographa californica nuclear polyhedrosis baculovirus (AcNPV), Bacillus atrophaeus and Bacillus subtilis spores were used as biological agent simulants, whereas ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonic acid (PMPA) were used as VX and soman (GD) nerve agent surrogates, respectively. Nanoseparation centrifugal devices with various pore-size cut-offs (30 kD up to 0.45 µm) and three RNA extraction methods (Invisorb, EZ1 and Nuclisens) were compared. RNA (MS2) and DNA (AcNPV) quantification was carried out by means of specific and sensitive quantitative real-time PCRs (qPCR). Liquid chromatography coupled to time-of-flight mass spectrometry (LC/TOFMS) methods were used for quantifying EMPA and PMPA. Culture methods and qPCR demonstrated that membranes with a 30 kD cut-off retain more than 99.99% of the biological agents (MS2, AcNPV, Bacillus atrophaeus and Bacillus subtilis spores) tested separately. A rapid and reliable separation of CB mixed sample models (MS2/PEG-400 and MS2/EMPA/PMPA) contained in simple liquid or complex matrices such as sand and soil was also successfully achieved on a 30 kD filter, with more than 99.99% retention of MS2 on the filter membrane and up to 99% recovery of PEG-400, EMPA and PMPA in the filtrate. The whole separation process turnaround time (TAT) was less than 10 minutes. The filtration method appears to be rapid, versatile and extremely efficient. The separation method developed in this work therefore constitutes a useful model for further evaluating and comparing additional alternative separation procedures for the safe handling and preparation of CB mixed samples. PMID:24505375
Fourier polarimetry of the birefringence distribution of myocardium tissue
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Dubolazov, O. V.; Ushenko, V. O.; Gorsky, M. P.; Soltys, I. V.; Olar, O. V.
2015-11-01
The results of optical modeling of the polycrystalline multilayer networks of biological tissues are presented. Algorithms were determined for reconstructing the distributions of the parameters that describe linear and circular birefringence. For the separation of the manifestations of these mechanisms, we propose a method of space-frequency filtering. Criteria were found for differentiating the causes of death due to coronary heart disease (CHD) and acute coronary insufficiency (ACI).
Methods and means of laser polarimetry microscopy of optically anisotropic biological layers
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Dubolazov, A. V.; Ushenko, V. A.; Ushenko, Yu. A.; Sakhnovskiy, M. Y.; Olar, O. I.
2016-09-01
The results of optical modeling of the polycrystalline multilayer networks of biological tissues are presented. Algorithms were determined for reconstructing the distributions of the parameters that describe linear and circular birefringence. For the separation of the manifestations of these mechanisms, we propose a method of space-frequency filtering. Criteria were found for differentiating benign and malignant tissues of the women's reproductive sphere.
Methods and means of Stokes-polarimetry microscopy of optically anisotropic biological layers
NASA Astrophysics Data System (ADS)
Ushenko, A. G.; Dubolazov, A. V.; Ushenko, V. A.; Ushenko, Yu. A.; Sakhnovskiy, M. Yu.; Sidor, M.; Prydiy, O. G.; Olar, O. I.; Lakusta, I. I.
2016-12-01
The results of optical modeling of the polycrystalline multilayer networks of biological tissues are presented. Algorithms were determined for reconstructing the distributions of the parameters that describe linear and circular birefringence. For the separation of the manifestations of these mechanisms, we propose a method of space-frequency filtering. Criteria were found for differentiating benign and malignant tissues of the women's reproductive sphere.
A Study of the Interaction of Millimeter Wave Fields with Biological Systems.
1984-07-01
structurally complex proteins. The third issue is the relevance of the parameters used in previous modeling efforts. The strength of the exciton-phonon... modes of proteins in the millimeter and submillimeter regions of the electromagnetic spectrum. Specifically: four separate groups of frequencies... Rhodopseudomonas sphaeroides (4). In industrial or military environments a significant number of personnel are exposed to electromagnetic fields
Dou, Xiaorui; Su, Xin; Wang, Yue; Chen, Yadong; Shen, Weiyang
2015-11-01
Pidotimod, a synthetic dipeptide with biological and immunological activity, has two chiral centers. Its enantiomers were characterized by X-ray crystallographic analysis. A chiral stationary phase (CSP), Chiralpak-IA, based on amylose derivatized with tris-(3,5-dimethylphenyl carbamate), was used to separate the pidotimod enantiomers. The mobile phase was methyl tert-butyl ether, acetonitrile and trifluoroacetic acid in a 35:65:0.2 ratio. In addition, thermodynamic and molecular docking methods were used to explain the mechanism of the enantioseparation on Chiralpak-IA. Thermodynamic studies were carried out from 10 to 45 °C. In general, both retention and enantioselectivity decreased as the temperature increased. The thermodynamic parameters indicate that the interaction force between the (4S,2'R) pidotimod enantiomer and the IA CSP is stronger and that their complex is more stable. According to GOLD molecular docking simulations, van der Waals forces are the leading cause of the separation of the pidotimod enantiomers by the IA CSP. © 2015 Wiley Periodicals, Inc.
A PREFERENCE-OPPORTUNITY-CHOICE FRAMEWORK WITH APPLICATIONS TO INTERGROUP FRIENDSHIP*
Zeng, Zhen; Xie, Yu
2009-01-01
A longstanding objective of friendship research is to identify the effects of personal preference and structural opportunity on intergroup friendship choice. Although past studies have used various methods to separate preference from opportunity, researchers have not yet systematically compared the properties and implications of these methods. We put forward a general framework for discrete choice, where choice probability is specified as proportional to the product of preference and opportunity. To implement this framework, we propose a modification to the conditional logit model for estimating preference parameters free from the influence of opportunity structure. We then compare our approach to several alternative methods for separating preference and opportunity used in the friendship choice literature. As an empirical example, we test hypotheses of homophily and status asymmetry in friendship choice using data from the National Longitudinal Study of Adolescent Health. The example also demonstrates the approach of conducting a sensitivity analysis to examine how parameter estimates vary by specification of the opportunity structure. PMID:19569394
About problematic peculiarities of Fault Tolerance digital regulation organization
NASA Astrophysics Data System (ADS)
Rakov, V. I.; Zakharova, O. V.
2018-05-01
Solutions are offered, in three directions, to the problems of assessing the serviceability of regulation loops and of preventing situations in which it is violated. The first direction is the development of methods for representing the regulation loop as a combination of diffuse components, together with algorithmic tools for building serviceability-assessment predicates, separately for the components and for the regulation loop as a whole. The second direction is the creation of methods of fault-tolerant redundancy based on a combined assessment of the current values of the control actions, the closure errors and the regulated parameters. The third direction is the creation of methods for comparing the evolution of the control actions, closure errors and regulated parameters with their reference models or neighborhoods. This direction makes it possible to develop methods and algorithmic tools aimed at preventing the loss of serviceability and effectiveness not only of a separate digital regulator, but of the whole fault-tolerant regulation complex.
NASA Technical Reports Server (NTRS)
Badhwar, G. D.
1984-01-01
The techniques used initially for the identification of cultivated crops from Landsat imagery depended greatly on the interpretation of film products by a human analyst. This approach was neither very effective nor objective. Since 1978, new methods for crop identification have been developed. Badhwar et al. (1982) showed that multitemporal-multispectral data could be reduced to a simple feature space of alpha and beta and that these features would separate corn and soybean very well. However, there are disadvantages related to the use of the alpha and beta parameters. The present investigation is concerned with a suitable method for extracting the required features. Attention is given to a profile model for crop discrimination, corn-soybean separation using profile parameters, and an automatic labeling (target recognition) method. The developed technique is extended to obtain a procedure which makes it possible to estimate the crop proportions of corn and soybean from Landsat data early in the growing season.
Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks
Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444
Ruiz-Rizzo, Adriana L; Neitzel, Julia; Müller, Hermann J; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's "theory of visual attention" (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity.
Dabbour, Essam; Easa, Said; Haider, Murtaza
2017-10-01
This study attempts to identify significant factors that affect the severity of drivers' injuries in vehicle-train collisions at railroad grade crossings by analyzing the individual-specific heterogeneity related to those factors over a period of 15 years. Both fixed-parameter and random-parameter ordered regression models were used to analyze records of all vehicle-train collisions that occurred in the United States from January 1, 2001 to December 31, 2015. For the fixed-parameter ordered models, both probit and negative log-log link functions were used. The latter function accounts for the fact that lower injury severity levels are more probable than higher ones. Separate models were developed for heavy and light-duty vehicles. Higher train and vehicle speeds, female drivers, and young drivers (below the age of 21 years) were found to be consistently associated with higher severity of drivers' injuries for both heavy and light-duty vehicles. Furthermore, favorable weather, light-duty trucks (including pickup trucks, panel trucks, mini-vans, vans, and sports-utility vehicles), and senior drivers (above the age of 65 years) were found to be consistently associated with higher severity of drivers' injuries for light-duty vehicles only. All other factors (e.g. air temperature, the type of warning devices, darkness conditions, and highway pavement type) were found to be temporally unstable, which may explain the conflicting findings of previous studies related to those factors. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Johnson, H. R.; Krupp, B. M.
1975-01-01
An opacity sampling (OS) technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is presented. Tests show that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and that 500 frequency points are often adequate. The effects of atomic and molecular lines are studied separately. A test model computed using the OS method agrees very well with a model having identical atmospheric parameters computed by the giant-line (opacity distribution function) method.
Characterization of superconducting radiofrequency breakdown by two-mode excitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eremeev, Grigory V.; Palczewski, Ari D.
2014-01-14
We show that thermal and magnetic contributions to the breakdown of superconductivity in radiofrequency (RF) fields can be separated by applying two RF modes simultaneously to a superconducting surface. We develop a simple model that illustrates how mode-mixing RF data can be related to properties of the superconductor. Within our model the data can be described by a single parameter, which can be derived either from RF or thermometry data. Our RF and thermometry data are in good agreement with the model. We propose to use mode-mixing technique to decouple thermal and magnetic effects on RF breakdown of superconductors.
Cui, Lizhi; Poon, Josiah; Poon, Simon K; Chen, Hao; Gao, Junbin; Kwan, Paul; Fan, Kei; Ling, Zhihao
2014-01-01
The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which can be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds directly from the 3D chromatogram is needed. In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization issue. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, in which multiple areas of candidate solutions were constructed according to the fitness and the distances among the chromosomes. Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. Through simulations, it can be seen that our method can separate a 3D chromatogram into chromatographic peaks and spectra successfully, even when they severely overlap. The experiments also show that our method is effective on real HPLC-DAD data sets. Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance, and it is fast and effective.
Sylvester-Hvid, Kristian O; Ratner, Mark A
2005-01-13
An extension of our two-dimensional working model for photovoltaic behavior in binary polymer and/or molecular photoactive blends is presented. The objective is to provide a more-realistic description of the charge generation and charge separation processes in the blend system. This is achieved by assigning an energy to each of the possible occupation states, describing the system according to a simple energy model for exciton and geminate electron-hole pair configurations. The energy model takes as primary input the ionization potential, electron affinity and optical gap of the components of the blend. The underlying photovoltaic model considers a nanoscopic subvolume of a photoactive blend and represents its p- and n-type domain morphology, in terms of a two-dimensional network of donor and acceptor sites. The nearest-neighbor hopping of charge carriers in the illuminated system is described in terms of transitions between different occupation states. The equations governing the dynamics of these states are cast into a linear master equation, which can be solved for arbitrary two-dimensional donor-acceptor networks, assuming stationary conditions. The implications of incorporating the energy model into the photovoltaic model are illustrated by simulations of the short circuit current versus thickness of the photoactive blend layer for different choices of energy parameters and donor-acceptor topology. The results suggest the existence of an optimal thickness of the photoactive film in bulk heterojunctions, based on kinetic considerations alone, and that this optimal thickness is very sensitive to the choice of energy parameters. The results also indicate space-charge limiting effects for interpenetrating donor-acceptor networks with characteristic domain sizes in the nanometer range and high driving force for the photoinduced electron transfer across the donor-acceptor internal interface.
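The "linear master equation ... solved ... assuming stationary conditions" described above amounts to finding the stationary distribution of a transition-rate matrix. A generic sketch of that computation is shown below; the three-state rate matrix is hypothetical and does not reproduce the authors' occupation-state model.

```python
import numpy as np

def stationary_distribution(rates):
    """Stationary solution p of the master equation dp/dt = K p = 0,
    where rates[i, j] >= 0 is the transition rate j -> i (i != j).
    Solves K p = 0 together with the normalization sum(p) = 1."""
    K = np.array(rates, dtype=float)
    np.fill_diagonal(K, 0.0)
    K -= np.diag(K.sum(axis=0))            # columns of K sum to zero
    n = K.shape[0]
    A = np.vstack([K, np.ones(n)])         # append normalization row
    b = np.append(np.zeros(n), 1.0)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical 3-state network of occupation states.
rates = [[0.0, 0.2, 0.1],
         [0.5, 0.0, 0.3],
         [0.1, 0.4, 0.0]]
print(stationary_distribution(rates).round(3))
```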
Nava, Michele M; Raimondi, Manuela T; Pietrabissa, Riccardo
2013-11-01
The main challenge in engineered cartilage consists in understanding and controlling the growth process towards a functional tissue. Mathematical and computational modelling can help in the optimal design of the bioreactor configuration and in a quantitative understanding of important culture parameters. In this work, we present a multiphysics computational model for the prediction of cartilage tissue growth in an interstitial perfusion bioreactor. The model consists of two separate sub-models, a two-dimensional (2D) sub-model and a three-dimensional (3D) sub-model, which are coupled to each other. These sub-models account both for the hydrodynamic microenvironment imposed by the bioreactor, using a model based on the Navier-Stokes equation and the mass transport equation, and for the biomass growth. The biomass, assumed to be a phase comprising the cells and the synthesised extracellular matrix, has been modelled using a moving boundary approach. In particular, the boundary at the fluid-biomass interface moves with a velocity depending on the local oxygen concentration and viscous stress. We show that the parameters predicted by the 2D sub-model, such as oxygen concentration and wall shear stress, are systematically overestimated with respect to those predicted by the 3D sub-model, and so is the tissue growth, which directly depends on these parameters. This implies that further predictive models of tissue growth should take into account the three-dimensionality of the problem for any scaffold microarchitecture.
Further studies of iron adhesion: (1 1 1) surfaces
NASA Astrophysics Data System (ADS)
Spencer, Michelle J. S.; Hung, Andrew; Snook, Ian K.; Yarovsky, Irene
2002-08-01
Adhesion between ideal bulk-terminated bcc Fe(1 1 1) match and mismatch interfaces was simulated using density functional theory (DFT) within the plane-wave pseudopotential representation. Interfaces were modelled using the supercell approach, where the interfacial separation was varied by changing the size of the vacuum spacer between image cells in the z-direction. The adhesive energy values were calculated for discrete interfacial separations and the data were fitted to the universal binding energy relation (UBER) [Rose et al., Phys. Rev. B 28 (1983) 1835]. The parameters obtained from these fits allowed the work of separation (Wsep) to be determined and a comparison to be made of the adhesion properties of the match and mismatch interfaces. The results were also compared to those obtained previously for the (1 0 0) and (1 1 0) surfaces.
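The UBER referenced above expresses the adhesive energy versus interfacial separation in the scaled form E(d) = -Wsep (1 + d*) exp(-d*) with d* = (d - d0)/l, so the work of separation is the depth of the fitted well. A minimal curve-fit sketch with hypothetical adhesive-energy data (not the paper's DFT values) follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def uber(d, w_sep, d0, l):
    """Universal binding energy relation (Rose et al.):
    E(d) = -Wsep * (1 + d*) * exp(-d*),  d* = (d - d0) / l.
    The well depth Wsep is the work of separation per unit area."""
    d_star = (d - d0) / l
    return -w_sep * (1.0 + d_star) * np.exp(-d_star)

# Hypothetical adhesive energies (J/m^2) at discrete separations (angstrom).
d = np.array([1.6, 2.0, 2.4, 2.8, 3.2, 4.0, 5.0, 6.0])
E = np.array([-2.9, -3.6, -3.5, -3.1, -2.6, -1.7, -0.9, -0.45])
popt, _ = curve_fit(uber, d, E, p0=[3.5, 2.0, 1.0])
w_sep, d0, l = popt
print(f"Wsep = {w_sep:.2f} J/m^2 at equilibrium separation d0 = {d0:.2f} A")
```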
A transverse separate-spin-evolution streaming instability
NASA Astrophysics Data System (ADS)
Iqbal, Z.; Andreev, Pavel A.; Murtaza, G.
2018-05-01
Using the separate spin evolution quantum hydrodynamical model, the instability of the transverse mode due to electron streaming in a partially spin-polarized magnetized degenerate plasma is studied. The electron spin polarization gives birth to a new spin-dependent wave (i.e., a separate-spin-evolution streaming-driven ordinary wave) in the real wave spectrum. It is shown that the spin polarization and the streaming speed significantly affect the frequency of this new mode. Analyzing the growth rate, it is found that the electron spin effects reduce the growth rate and shift the threshold of the instability, as well as its termination point, towards higher values. Additionally, how other parameters, such as the electron streaming and the Fermi pressure, influence the growth rate is also investigated. The present study can help towards a better understanding of the existence of new waves and streaming instabilities in astrophysical plasmas.
NASA Astrophysics Data System (ADS)
Adak, Rama Prasad; Das, Supriya; Ghosh, Sanjay K.; Ray, Rajarshi; Samanta, Subhasis
2017-07-01
We estimate chemical freeze-out parameters in the Hadron Resonance Gas (HRG) and Excluded Volume HRG (EVHRG) models by fitting the experimental information on net-proton and net-charge fluctuations measured in Au + Au collisions by the STAR Collaboration at the BNL Relativistic Heavy Ion Collider (RHIC). We observe that the chemical freeze-out parameters obtained from lower- and higher-order fluctuations are almost the same for √sNN > 27 GeV, but tend to deviate from each other at lower √sNN. Moreover, these separations increase with decreasing √sNN and, for a fixed √sNN, increase towards central collisions. Furthermore, we observe an approximate scaling behavior of (μB/T)/(μB/T)central with (Npart)/(Npart)central for the parameters estimated from lower-order fluctuations for 11.5 ≤ √sNN ≤ 200 GeV. The scaling is violated for the parameters estimated from higher-order fluctuations for √sNN = 11.5 and 19.6 GeV. It is observed that a chemical freeze-out parameter set which describes σ2/M of net protons very well at all energies and centralities cannot describe Sσ equally well, and vice versa.
Bayes-Turchin analysis of x-ray absorption data above the Fe L{sub 2,3}-edges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossner, H. H.; Schmitz, D.; Imperia, P.
2006-10-01
Extended x-ray absorption fine structure (EXAFS) data and magnetic EXAFS (MEXAFS) data were measured at two temperatures (180 and 296 K) in the energy region of the overlapping L-edges of bcc Fe grown on a V(110) crystal surface. In combination with a Bayes-Turchin data analysis procedure these measurements enable the exploration of local crystallographic and magnetic structures. The analysis determined the atomic-like background together with the EXAFS parameters which consisted of ten shell radii, the Debye-Waller parameters, separated into structural and vibrational components, and the third cumulant of the first scattering path. The vibrational components for 97 different scattering paths were determined by a two parameter force-field model using a priori values adjusted to Born-von Karman parameters of inelastic neutron scattering data. The investigations of the system Fe/V(110) demonstrate that the simultaneous fitting of atomic background parameters and EXAFS parameters can be performed reliably. Using the L{sub 2}- and L{sub 3}-components extracted from the EXAFS analysis and the rigid-band model, the MEXAFS oscillations can only be described when the sign of the exchange energy is changed compared to the predictions of the Hedin Lundquist exchange and correlation functional.
NASA Astrophysics Data System (ADS)
Luo, H.; Zhang, H.; Gao, J.
2016-12-01
Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary to each other, and their integration is helpful for determining the resistivity and velocity models of the target region more reliably. Because of the difficulty of finding an empirical relationship between the resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method that enforces structural consistency between the resistivity and velocity models, realized by minimizing the cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems along with the cross-gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing the seismic and MT inversions as well as the cross-gradient minimization separately. This scheme avoids the complexity of combining two different systems, but it suffers from the issue of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, the seismic and MT inversions are still performed separately, but the cross-gradient minimization is also constrained by the model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems, and at the same time the balance between data fitting and the structure consistency constraint can be enforced. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that the joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and can improve the resolution of deeper resistivity structures. We will also show results from applying the new joint seismic and MT inversion scheme to southwest China, where several MT profiles are available and earthquakes are very active.
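The structural constraint minimized in this family of joint inversions is the cross gradient t = ∇m_s × ∇m_r between the two model fields, which vanishes wherever their gradients are parallel or zero. A minimal 2D sketch of that quantity on gridded velocity and log-resistivity models is given below; the grids are illustrative and this is not the authors' implementation.

```python
import numpy as np

def cross_gradient_2d(m1, m2, dx=1.0, dz=1.0):
    """Cross-gradient between two 2D model fields (e.g. seismic velocity
    and log-resistivity). For models varying only in the x-z plane,
    t = grad(m1) x grad(m2) has a single non-zero (y) component:
    t_y = dm1/dz * dm2/dx - dm1/dx * dm2/dz.
    t = 0 wherever the two models are structurally consistent."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dz * dm2_dx - dm1_dx * dm2_dz

# Illustrative models: a shared interface (consistent) vs. a rotated one.
z, x = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50), indexing="ij")
velocity = 3.0 + 1.5 * (z > 0.5)               # km/s, interface at z = 0.5
log_rho = 1.0 + 0.8 * (z > 0.5)                # structurally consistent
print(np.abs(cross_gradient_2d(velocity, log_rho)).max())       # ~0
log_rho_rot = 1.0 + 0.8 * (x > 0.5)            # interface rotated 90 degrees
print(np.abs(cross_gradient_2d(velocity, log_rho_rot)).max())   # > 0
```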
NASA Astrophysics Data System (ADS)
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKFOSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKFOSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25 % more accurate state and parameter estimations than the joint and dual approaches.
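Common to the joint, dual and dual-OSA schemes compared here is the stochastic EnKF analysis step applied to an augmented state-parameter ensemble. A textbook-style sketch of that update is given below; it does not reproduce the one-step-ahead smoothing formulation itself, and the ensemble, observation operator and noise level are hypothetical.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step. `ensemble` is (n_state, n_members),
    `H` the (n_obs, n_state) observation operator, `obs` the observation
    vector. Each member is updated with perturbed observations."""
    n_state, n_mem = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)    # anomalies
    HA = H @ A
    R = (obs_err_std ** 2) * np.eye(len(obs))
    P_hh = HA @ HA.T / (n_mem - 1) + R                     # innovation covariance
    P_xh = A @ HA.T / (n_mem - 1)                          # cross covariance
    K = P_xh @ np.linalg.inv(P_hh)                         # Kalman gain
    obs_pert = obs[:, None] + obs_err_std * rng.standard_normal((len(obs), n_mem))
    return ensemble + K @ (obs_pert - H @ ensemble)

# Tiny example: 2 hydraulic heads + 1 log-conductivity parameter, 50 members.
rng = np.random.default_rng(0)
ens = np.vstack([10 + rng.standard_normal((2, 50)),           # heads
                 -5 + 0.5 * rng.standard_normal((1, 50))])    # log-K parameter
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])              # heads observed
updated = enkf_update(ens, np.array([10.8, 9.5]), H, 0.2, rng)
print(updated.mean(axis=1).round(2))
```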
NASA Astrophysics Data System (ADS)
Oni, S. K.; Futter, M. N.; Buttle, J. M.; Dillon, P.
2014-12-01
Urban sprawl and regional climate variability are major stresses on surface water resources in many places. The Lake Simcoe watershed (LSW) in Ontario, Canada, is no exception. The LSW is predominantly agricultural but is experiencing rapid population growth due to its proximity to the Greater Toronto Area. This has led to extensive land use changes which have impacted its water resources and altered runoff patterns in some rivers draining to the lake. Here, we use a paired-catchment approach, hydrological change detection modelling and remote sensing analysis of satellite images to evaluate the impacts of land use change on the hydrology of the LSW (1994 to 2008). Results show that urbanization increased by up to 16% in Lovers Creek, the most urban-impacted catchment. Annual runoff from Lovers Creek increased from 239 to 442 mm/yr, in contrast to the reference catchment (Black River at Washago), where runoff was relatively stable with an annual mean of 474 mm/yr. The increased annual runoff from Lovers Creek was not accompanied by an increase in annual precipitation. Discriminant function analysis suggests that the early (1992-1997; pre-major development) and late (2004-2009; fully urbanized) periods for Lovers Creek separated mainly on model parameter sets related to runoff flashiness and evapotranspiration. As a result, parameterizations from either period cannot be used interchangeably to produce credible runoff simulations in Lovers Creek, due to the greater scatter between the parameters in canonical space. Separation of the early and late period parameter sets for the reference catchment was based on climate and snowmelt-related processes. This suggests that regional climatic variability could be influencing hydrologic change in the reference catchment, whereas urbanization amplified the regional natural hydrologic changes in the urbanizing catchments of the LSW.
Reduced-order model for dynamic optimization of pressure swing adsorption processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, A.; Biegler, L.; Zitney, S.
2007-01-01
Over the past decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas and liquid separation techniques, especially for high-purity hydrogen purification from refinery gases. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep concentration and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. Initially, a representative ensemble of solutions of the dynamic PDE system is constructed by solving a higher-order discretization of the model using the method of lines, a two-stage approach that discretizes the PDEs in space and then integrates the resulting DAEs over time. Next, the ROM method applies the Karhunen-Loeve expansion to derive a small set of empirical eigenfunctions (POD modes), which are used as basis functions within a Galerkin projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDE system. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed, four-step PSA process for the separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the number of states has been achieved. The gas-phase mole fraction, solid-state loading and temperature profiles from the low-order ROM and from the high-order simulations have been compared. Moreover, the profiles for a different set of inputs and parameter values fed to the same ROM were compared with the accurate profiles from the high-order simulations. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes. Moreover, the deviations of the ROM for different sets of inputs and parameters suggest that a recalibration of the model is required for the optimization studies. Results for these will also be presented.
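The POD modes described above can be obtained from a singular value decomposition of a snapshot matrix assembled from the high-order simulation. A minimal sketch follows, with a synthetic snapshot matrix of travelling fronts standing in for the PSA bed profiles; the energy criterion and data are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Return the leading POD modes of a snapshot matrix (n_space, n_time),
    keeping enough modes to capture the requested fraction of the snapshot
    'energy' (sum of squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], r

# Synthetic snapshots: two travelling concentration fronts plus noise.
x = np.linspace(0.0, 1.0, 200)[:, None]
t = np.linspace(0.0, 1.0, 80)[None, :]
snaps = np.tanh(20 * (x - 0.2 - 0.5 * t)) + 0.3 * np.tanh(20 * (x - 0.6 * t))
snaps += 0.01 * np.random.default_rng(1).standard_normal(snaps.shape)
modes, r = pod_basis(snaps)
print(f"{r} POD modes retain 99.9% of the snapshot energy")
coeffs = modes.T @ snaps     # Galerkin-style projection coefficients
```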
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhu, R.; Usha Devi, A. R.; Inspire Institute Inc., McLean, Virginia 22101
2007-10-15
We employ conditional Tsallis q entropies to study the separability of symmetric one parameter W and GHZ multiqubit mixed states. The strongest limitation on separability is realized in the limit q{yields}{infinity}, and is found to be much superior to the condition obtained using the von Neumann conditional entropy (q=1 case). Except for the example of two qubit and three qubit symmetric states of GHZ family, the q-conditional entropy method leads to sufficient--but not necessary--conditions on separability.
Hu, Chuanpu; Randazzo, Bruce; Sharma, Amarnath; Zhou, Honghui
2017-10-01
Exposure-response modeling plays an important role in optimizing dose and dosing regimens during clinical drug development. The modeling of multiple endpoints is made possible in part by recent progress in latent variable indirect response (IDR) modeling for ordered categorical endpoints. This manuscript aims to investigate the level of improvement achievable by jointly modeling two such endpoints in the latent variable IDR modeling framework through the sharing of model parameters. This is illustrated with an application to the exposure-response of guselkumab, a human IgG1 monoclonal antibody in clinical development that blocks IL-23. A Phase 2b study was conducted in 238 patients with psoriasis for which disease severity was assessed using Psoriasis Area and Severity Index (PASI) and Physician's Global Assessment (PGA) scores. A latent variable Type I IDR model was developed to evaluate the therapeutic effect of guselkumab dosing on 75, 90 and 100% improvement of PASI scores from baseline and PGA scores, with placebo effect empirically modeled. The results showed that the joint model is able to describe the observed data better with fewer parameters compared with the common approach of separately modeling the endpoints.
NASA Astrophysics Data System (ADS)
Ma, H.
2016-12-01
Land surface parameters derived from remote sensing observations are critical for monitoring and modeling global climate change and biogeochemical cycles. Current methods for estimating land surface parameters are generally parameter-specific algorithms based on instantaneous physical models, which results in spatial, temporal and physical inconsistencies among current global products. Moreover, optical and Thermal Infrared (TIR) remote sensing observations are usually used separately, based on different models, and Middle InfraRed (MIR) observations have received little attention due to the complexity of the radiometric signal, which mixes both reflected and emitted fluxes. In this paper, we propose a unified algorithm for simultaneously retrieving a total of seven land surface parameters, including Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), land surface albedo, Land Surface Temperature (LST), surface emissivity, and downward and upward longwave radiation, by exploiting remote sensing observations from the visible to the TIR domain based on a common physical Radiative Transfer (RT) model and a data assimilation framework. The coupled PROSPECT-VISIR and 4SAIL RT models were used for canopy reflectance modeling. First, LAI was estimated using a data assimilation method that combines MODIS daily reflectance observations with a phenology model. The estimated LAI values were then input into the RT model to simulate surface spectral emissivity and surface albedo. In addition, the background albedo, the transmittance of solar radiation, and the canopy albedo were calculated to produce FAPAR. Once the spectral emissivities of the seven MODIS MIR to TIR bands were retrieved, LST was estimated from the atmospherically corrected surface radiance using an optimization method. Finally, the upward longwave radiation was estimated using the retrieved LST, the broadband emissivity (converted from the spectral emissivity) and the downward longwave radiation (modeled by MODTRAN). The seven parameters were validated over several representative sites with different biome types and compared with the MODIS and GLASS products. Results showed that this unified inversion algorithm can retrieve temporally complete and physically consistent land surface parameters with high accuracy.
Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery
NASA Astrophysics Data System (ADS)
Kwoh, L. K.; Huang, X.; Tan, W. J.
2012-07-01
XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera, IRIS, with three spectral bands at 12 m resolution: 0.52-0.60 µm (Green), 0.63-0.69 µm (Red) and 0.76-0.89 µm (NIR). In the IRIS camera design, the three bands are acquired by three lines of CCDs (NIR, Red and Green). These CCDs are physically separated in the focal plane and their first pixels are not absolutely aligned. The micro-satellite platform was also not stable enough to allow co-registration of the three bands with a simple linear transformation. In the camera model developed, this platform instability was compensated with 3rd- to 4th-order polynomials for the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, and the focal length of the camera can be validated or calibrated. Calibration results from more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR vs Red and Green vs Red CCDs, respectively). The cross-track alignments were 0.05 pixel and 5.9 pixel for the NIR vs Red and Green vs Red CCDs, respectively. The focal length was found to be shorter by about 0.8%, which was attributed to the lower temperature at which XSAT currently operates. With the calibrated parameters and the camera model, a geometric level 1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.
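A minimal sketch of the attitude-compensation idea described above: slowly varying residual roll, pitch and yaw errors are fitted with low-order polynomials in along-track time. This is not the authors' camera model; the residual signals, noise levels and polynomial order are hypothetical placeholders.

```python
# Illustrative sketch only: polynomial compensation of residual platform attitude,
# in the spirit of the 3rd- to 4th-order terms described for the IRIS camera model.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)   # normalized image-line (along-track) time

# Hypothetical residual attitude angles (deg) left over after a rigid camera model
residuals = {
    "roll":  0.010 * np.sin(2 * np.pi * t) + 0.001 * rng.standard_normal(t.size),
    "pitch": 0.020 * t**2 - 0.005 * t      + 0.001 * rng.standard_normal(t.size),
    "yaw":   0.005 * np.cos(3.0 * t)       + 0.001 * rng.standard_normal(t.size),
}

def fit_attitude_polynomial(time, angle, order=4):
    """Least-squares polynomial (3rd- to 4th-order) model of one attitude angle."""
    return np.poly1d(np.polyfit(time, angle, deg=order))

for name, angle in residuals.items():
    poly = fit_attitude_polynomial(t, angle)
    rms_before = np.sqrt(np.mean(angle**2))
    rms_after = np.sqrt(np.mean((angle - poly(t))**2))
    print(f"{name}: RMS before {rms_before:.2e} deg -> after {rms_after:.2e} deg")
```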
Using dry and wet year hydroclimatic extremes to guide future hydrologic projections
NASA Astrophysics Data System (ADS)
Oni, Stephen; Futter, Martyn; Ledesma, Jose; Teutschbein, Claudia; Buttle, Jim; Laudon, Hjalmar
2016-07-01
There are growing numbers of studies on climate change impacts on forest hydrology, but limited attempts have been made to use current hydroclimatic variabilities to constrain projections of future climatic conditions. Here we used historical wet and dry years as a proxy for expected future extreme conditions in a boreal catchment. We showed that runoff could be underestimated by at least 35 % when dry year parameterizations were used for wet year conditions. Uncertainty analysis showed that behavioural parameter sets from wet and dry years separated mainly on precipitation-related parameters and to a lesser extent on parameters related to landscape processes, while uncertainties inherent in climate models (as opposed to differences in calibration or performance metrics) appeared to drive the overall uncertainty in runoff projections under dry and wet hydroclimatic conditions. Hydrologic model calibration for climate impact studies could be based on years that closely approximate anticipated conditions to better constrain uncertainty in projecting extreme conditions in boreal and temperate regions.
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
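As a reference point for the models discussed above, the sketch below evaluates the exponential and beta-Poisson dose-response forms, computing the exact beta-Poisson via the Kummer confluent hypergeometric function and checking it by averaging the single-hit model over a beta distribution. It is not the paper's OpenBUGS/MCMC code, and the parameter values are hypothetical.

```python
# Illustrative sketch: exponential and beta-Poisson dose-response probabilities.
import numpy as np
from scipy.special import hyp1f1
from scipy.stats import beta as beta_dist

def p_exponential(dose, r):
    """Exponential model: each ingested organism infects independently with probability r."""
    return 1.0 - np.exp(-r * dose)

def p_beta_poisson_exact(dose, a, b):
    """Exact beta-Poisson: single-hit probability r ~ Beta(a, b) across exposures."""
    return 1.0 - hyp1f1(a, a + b, -dose)

def p_beta_poisson_approx(dose, a, b):
    """Conventional approximation, valid roughly when b >> 1 and b >> a."""
    return 1.0 - (1.0 + dose / b) ** (-a)

dose = np.logspace(-1, 4, 6)
a, b, r = 0.25, 16.0, 0.005            # hypothetical parameter values
mc = 1.0 - np.exp(-beta_dist(a, b).rvs(200_000, random_state=1)[:, None] * dose).mean(axis=0)

for d, pe, pa, pm, px in zip(dose, p_beta_poisson_exact(dose, a, b),
                             p_beta_poisson_approx(dose, a, b), mc,
                             p_exponential(dose, r)):
    print(f"dose={d:9.2f}  exact BP={pe:.4f}  approx BP={pa:.4f}  MC check={pm:.4f}  exp={px:.4f}")
```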
Separation of tartronic and glyceric acids by simulated moving bed chromatography.
Coelho, Lucas C D; Filho, Nelson M L; Faria, Rui P V; Ferreira, Alexandre F P; Ribeiro, Ana M; Rodrigues, Alírio E
2018-08-17
The SMB unit developed by the Laboratory of Separation and Reaction Engineering (FlexSMB-LSRE®) was used to perform tartronic acid (TTA) and glyceric acid (GCA) separation and to validate the mathematical model in order to determine the optimum operating parameters of an industrial unit. The purities of the raffinate and extract streams in the experiments performed were 80% and 100%, respectively. The TTA and GCA productivities were 79 and 115 kg per liter of adsorbent per day, respectively, and only 0.50 cubic meters of desorbent were required per kilogram of products. Under the optimum operating conditions, which were determined through an extensive simulation study based on the mathematical model developed to predict the performance of a real SMB unit, it was possible to achieve a productivity of 86 kg of TTA and 176 kg of GCA per cubic meter of adsorbent per day (considering the typical commercial purity value of 97% for both compounds) with an eluent consumption of 0.30 cubic meters per kilogram of products. Copyright © 2018 Elsevier B.V. All rights reserved.
Visual Basic, Excel-based fish population modeling tool - The pallid sturgeon example
Moran, Edward H.; Wildhaber, Mark L.; Green, Nicholas S.; Albers, Janice L.
2016-02-10
The model presented in this report is a spreadsheet-based model using Visual Basic for Applications within Microsoft Excel (http://dx.doi.org/10.5066/F7057D0Z) prepared in cooperation with the U.S. Army Corps of Engineers and U.S. Fish and Wildlife Service. It uses the same model structure and, initially, parameters as used by Wildhaber and others (2015) for pallid sturgeon. The difference between the model structure used for this report and that used by Wildhaber and others (2015) is that variance is not partitioned. For the model of this report, all variance is applied at the iteration and time-step levels of the model. Wildhaber and others (2015) partition variance into parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level and temporal variance (uncertainty caused by random environmental fluctuations with time) applied at the time-step level. They included implicit individual variance (uncertainty caused by differences between individuals) within the time-step level. The interface developed for the model of this report is designed to allow the user the flexibility to change population model structure, parameter values, and uncertainty separately for every component of the model. This flexibility makes the modeling tool potentially applicable to any fish species; however, the flexibility inherent in this modeling tool also makes it possible for the user to obtain spurious outputs. The value and reliability of the model outputs are only as good as the model inputs. Using this modeling tool with improper or inaccurate parameter values, or for species for which the structure of the model is inappropriate, could lead to untenable management decisions. By facilitating fish population modeling, this modeling tool allows the user to evaluate a range of management options and implications. The goal is to provide a user-friendly tool for developing fish population models that help natural resource managers inform their decision-making; however, as with all population models, caution is needed, and the limitations of the model and the veracity of user-supplied parameters should always be considered when using such model output in the management of any species.
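A minimal sketch of the general structure described above: a stage-structured stochastic projection in which random variation enters at the iteration level and at the time-step level. This is not the Excel/VBA tool itself, and all rates, noise magnitudes and the matrix structure are hypothetical.

```python
# Illustrative sketch only: stochastic stage-structured population projection
# with iteration-level and time-step-level random variation.
import numpy as np

rng = np.random.default_rng(42)

mean_A = np.array([[0.0, 0.0, 20.0],     # stage-structured (Leslie-type) matrix:
                   [0.1, 0.5, 0.0],      # fecundity on the first row, survival /
                   [0.0, 0.2, 0.8]])     # transition rates below the diagonal

n0 = np.array([1000.0, 200.0, 50.0])     # initial abundance by stage
n_iter, n_years, cv = 500, 30, 0.15      # iterations, time steps, lognormal spread

final_abundance = np.empty(n_iter)
for it in range(n_iter):
    # iteration-level draw (e.g., uncertainty in mean rates for this replicate)
    A_iter = mean_A * rng.lognormal(mean=0.0, sigma=cv, size=mean_A.shape)
    n = n0.copy()
    for t in range(n_years):
        # time-step-level draw (e.g., environmental fluctuations each year)
        A_t = A_iter * rng.lognormal(mean=0.0, sigma=cv, size=mean_A.shape)
        n = A_t @ n
    final_abundance[it] = n.sum()

print(f"median final abundance: {np.median(final_abundance):.0f}")
print(f"90% interval: {np.percentile(final_abundance, [5, 95]).round(0)}")
```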
NASA Astrophysics Data System (ADS)
Matusov, Jozef; Gavlas, Stanislav
2016-06-01
One way to separate solid particulate pollutants from flue gas is to use cyclone separators. Cyclone separators are used very frequently because of their simple design and low operating costs. Separation of pollutants in the form of solids is carried out using three types of forces: inertial, centrifugal, and gravitational. The main advantage is that a cyclone contains no rotating or sliding parts, so its components are resistant to wear and have a long service life. Cyclones are mostly used as pre-separators because of their low efficiency in separating small particles. Their function is to separate larger particles from the flue gas, which is subsequently cleaned in another device capable of removing particles smaller than 1 µm, the limiting particle size for separation. The article deals with the calculation of the basic dimensions and main parameters of a cyclone separator for flue gas produced during the smelting of secondary aluminum.
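A minimal sketch of one classical sizing approach, Lapple's cut-diameter model, which relates cyclone geometry and gas properties to the particle size collected with 50% efficiency. This is not necessarily the calculation procedure used in the article, and the geometry and property values below are hypothetical.

```python
# Illustrative sketch only: Lapple cut diameter and grade efficiency for a cyclone.
import math

def lapple_cut_diameter(mu, inlet_width, n_turns, inlet_velocity, rho_p, rho_g):
    """Particle diameter collected with 50% efficiency (m), per the Lapple model."""
    return math.sqrt(9.0 * mu * inlet_width /
                     (2.0 * math.pi * n_turns * inlet_velocity * (rho_p - rho_g)))

def lapple_efficiency(d_p, d_pc):
    """Fractional collection efficiency for a particle of diameter d_p."""
    return 1.0 / (1.0 + (d_pc / d_p) ** 2)

mu = 3.0e-5           # flue-gas dynamic viscosity, Pa*s (hot gas, assumed)
rho_g = 0.6           # gas density, kg/m^3 (assumed)
rho_p = 2700.0        # particle density, kg/m^3 (oxide-like dust, assumed)
inlet_width = 0.10    # m (assumed)
inlet_velocity = 18.0 # m/s (assumed)
n_turns = 5.0         # effective number of gas revolutions (typical range 4-8)

d50 = lapple_cut_diameter(mu, inlet_width, n_turns, inlet_velocity, rho_p, rho_g)
print(f"cut diameter d50 = {d50 * 1e6:.1f} micrometres")
for d_um in (1, 5, 10, 20):
    print(f"  eta({d_um:2d} um) = {lapple_efficiency(d_um * 1e-6, d50):.2f}")
```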
Fellinger, Michael R.; Hector, Louis G.; Trinkle, Dallas R.
2016-10-28
Here, we present an efficient methodology for computing solute-induced changes in lattice parameters and elastic stiffness coefficients Cij of single crystals using density functional theory. We also introduce a solute strain misfit tensor that quantifies how solutes change lattice parameters due to the stress they induce in the host crystal. Solutes modify the elastic stiffness coefficients through volumetric changes and by altering chemical bonds. We compute each of these contributions to the elastic stiffness coefficients separately, and verify that their sum agrees with changes in the elastic stiffness coefficients computed directly using fully optimized supercells containing solutes. Computing the twomore » elastic stiffness contributions separately is more computationally efficient and provides more information on solute effects than the direct calculations. We compute the solute dependence of polycrystalline averaged shear and Young's moduli from the solute dependence of the single-crystal Cij. We then apply this methodology to substitutional Al, B, Cu, Mn, Si solutes and octahedral interstitial C and N solutes in bcc Fe. Comparison with experimental data indicates that our approach accurately predicts solute-induced changes in the lattice parameter and elastic coefficients. The computed data can be used to quantify solute-induced changes in mechanical properties such as strength and ductility, and can be incorporated into mesoscale models to improve their predictive capabilities.« less
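The step of translating single-crystal Cij into polycrystalline averaged moduli can be illustrated with the standard Voigt-Reuss-Hill scheme for cubic crystals. The sketch below is not the authors' DFT workflow, and the elastic constants used are representative literature-style values for bcc Fe plus a hypothetical solute-modified set, not the paper's results.

```python
# Illustrative sketch only: Voigt-Reuss-Hill averages from cubic elastic constants.
def voigt_reuss_hill_cubic(c11, c12, c44):
    """Return bulk, shear and Young's moduli (same units as Cij) for a cubic crystal."""
    bulk = (c11 + 2.0 * c12) / 3.0                      # exact for cubic symmetry
    g_voigt = (c11 - c12 + 3.0 * c44) / 5.0             # uniform-strain bound
    g_reuss = 5.0 * (c11 - c12) * c44 / (4.0 * c44 + 3.0 * (c11 - c12))  # uniform-stress bound
    shear = 0.5 * (g_voigt + g_reuss)                   # Hill average
    young = 9.0 * bulk * shear / (3.0 * bulk + shear)   # isotropic relation
    return bulk, shear, young

# Representative bcc Fe constants (GPa) and a hypothetical solute-modified set
cases = {"pure Fe (approx.)":          (243.0, 138.0, 122.0),
         "Fe + solute (hypothetical)": (239.0, 137.0, 119.0)}
for label, (c11, c12, c44) in cases.items():
    B, G, E = voigt_reuss_hill_cubic(c11, c12, c44)
    print(f"{label:28s}  B={B:6.1f}  G={G:6.1f}  E={E:6.1f} GPa")
```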
Characterization and recycling of cadmium from waste nickel-cadmium batteries.
Huang, Kui; Li, Jia; Xu, Zhenming
2010-11-01
Improper and inefficient recycling of waste batteries poses a severe threat in China. The present work considered the fundamental aspects of the recycling of cadmium from waste nickel-cadmium batteries by means of vacuum metallurgy separation at scale-up. In the first stage of this work, the characterization of waste nickel-cadmium batteries was carried out. Five types of batteries from different brands and models were selected and their components were characterized in terms of their elemental chemical composition and main phases. In the second stage, the parameters affecting the recycling of cadmium by means of vacuum metallurgy separation were investigated, and an L16 (4^4) orthogonal design was applied to optimize the parameters. Based on thermodynamic theory and numerical analysis, the orthogonal design proved to be an effective tool for investigating the parameters affecting the recycling of cadmium. The optimum operating parameters for the recycling of cadmium obtained by the orthogonal design and a verification test were 1073 K (temperature), 2.5 h (heating time), 2 wt.% (addition of carbon powder), and 30 mm (loaded height), respectively, with recycling efficiency approaching 99.98%. The XRD and ICP-AES results show that the condensed product was metallic cadmium, and the cadmium purity was 99.99% under the optimum conditions. Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hou, Fang
With the extensive application of fiber-reinforced composite laminates in industry, research on the fracture mechanisms of this type of material has drawn increasing attention. A variety of fracture theories and models have been developed. Among them, linear elastic fracture mechanics (LEFM) and the cohesive-zone model (CZM) are two widely accepted fracture models, which have already shown applicability in the fracture analysis of fiber-reinforced composite laminates. However, there remain challenges that prevent further applications of the two fracture models, such as the experimental measurement of fracture resistance. This dissertation primarily focused on the applicability of LEFM and the CZM for the analysis of translaminar fracture in fiber-reinforced composite laminates. The research for each fracture model consisted of two parts: the analytical characterization of crack-tip fields and the experimental measurement of fracture resistance parameters. In the study of LEFM, an experimental investigation based on full-field crack-tip displacement measurements was carried out to characterize subcritical and steady-state crack advance in translaminar fracture of fiber-reinforced composite laminates. Here, the fiber-reinforced composite laminates were approximated as anisotropic solids, and the experimental investigation relied on LEFM theory modified to account for material anisotropy. First, the full-field crack-tip displacement fields were measured by Digital Image Correlation (DIC). Then two methods, based respectively on the stress intensity approach and the energy approach, were developed to extract the crack-tip field parameters from the displacement fields. The studied crack-tip field parameters included the stress intensity factor, the energy release rate and the effective crack length. The crack-growth resistance curves (R-curves) were then constructed from the measured crack-tip field parameters. In addition, an error analysis was carried out with an emphasis on the influence of out-of-plane rotation of the specimen. In the study of the CZM, two analytical inverse methods, namely the field projection method (FPM) and the separable nonlinear least-squares method, were developed for the extraction of cohesive fracture properties from crack-tip full-field displacements. First, analytical characterizations of the elastic fields around a crack-tip cohesive zone and of the cohesive variables within the cohesive zone were derived in terms of an eigenfunction expansion. Both inverse methods were then developed based on this analytical characterization. With these analytical inverse methods, the cohesive-zone law (CZL), cohesive-zone size and position can be inversely computed from the cohesive-crack-tip displacement fields. Comprehensive numerical tests were carried out to investigate the applicability and robustness of the two inverse methods. From the numerical tests, it was found that the field projection method was very sensitive to noise and thus had limited applicability in practice. On the other hand, the separable nonlinear least-squares method was found to be more noise-resistant and less ill-conditioned. Subsequently, the applicability of the separable nonlinear least-squares method was validated with the same translaminar fracture experiment used in the LEFM study.
Ultimately, the experimental measurements of the R-curves and the CZL showed good agreement in both the fracture energy and the predicted load-carrying capability, demonstrating the validity of the present research for the translaminar fracture of fiber-reinforced composite laminates.
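The separable nonlinear least-squares (variable projection) idea used in the second inverse method, and recurring elsewhere in this collection, can be sketched generically: when a model is linear in coefficients c and nonlinear in parameters p, d = A(p) c, the linear part is eliminated by a least-squares solve inside the outer nonlinear fit. The toy basis below (a sum of two exponentials) stands in for the cohesive-zone eigenfunctions; it is not the dissertation's formulation.

```python
# Illustrative sketch only: separable nonlinear least squares via variable projection.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 200)
true_p, true_c = np.array([0.7, 2.5]), np.array([1.0, 0.4])

def design_matrix(p, t):
    """Columns of A(p): basis functions that are linear in the coefficients c."""
    return np.column_stack([np.exp(-p[0] * t), np.exp(-p[1] * t)])

data = design_matrix(true_p, t) @ true_c + 0.01 * rng.standard_normal(t.size)

def projected_residuals(p):
    """Residuals after solving the inner linear problem for c (variable projection)."""
    A = design_matrix(p, t)
    c, *_ = np.linalg.lstsq(A, data, rcond=None)
    return A @ c - data

fit = least_squares(projected_residuals, x0=np.array([0.3, 4.0]))   # only the nonlinear dims
A = design_matrix(fit.x, t)
c_hat, *_ = np.linalg.lstsq(A, data, rcond=None)                    # recover the linear part
print("nonlinear parameters (decay rates):", np.round(fit.x, 3))
print("linear coefficients (amplitudes)  :", np.round(c_hat, 3))
```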
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanemoto, S.; Andoh, Y.; Sandoz, S.A.
1984-10-01
A method for evaluating reactor stability in boiling water reactors has been developed. The method is based on multivariate autoregressive (M-AR) modeling of steady-state neutron and process noise signals. In this method, two kinds of power spectral densities (PSDs) for the measured neutron signal and the corresponding noise source signal are separately identified by the M-AR modeling. The closed- and open-loop stability parameters are evaluated from these PSDs. The method is applied to actual plant noise data that were measured together with artificial perturbation test data. Stability parameters identified from noise data are compared to those from perturbation test data, and it is shown that both results are in good agreement. In addition to these stability estimations, driving noise sources for the neutron signal are evaluated by the M-AR modeling. Contributions from void, core flow, and pressure noise sources are quantitatively evaluated, and the void noise source is shown to be the most dominant.
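A minimal sketch of the multivariate autoregressive step: fit a VAR model to multichannel noise signals and evaluate the model-implied PSD matrix S(f) = H(f) Sigma H(f)^H with H(f) = (I - sum_k A_k e^{-i 2 pi f k})^{-1}. The signals below are synthetic stand-ins, not plant noise data, and this is not the original M-AR implementation.

```python
# Illustrative sketch only: VAR fit and model-based power spectral density.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
n = 4000
e = rng.standard_normal((n, 2))
x = np.zeros((n, 2))
for t in range(2, n):                      # toy coupled autoregressive "plant noise"
    x[t, 0] = 0.6 * x[t-1, 0] - 0.3 * x[t-2, 0] + 0.2 * x[t-1, 1] + e[t, 0]
    x[t, 1] = 0.5 * x[t-1, 1] + 0.1 * x[t-1, 0] + e[t, 1]

res = VAR(x).fit(maxlags=8, ic="aic")      # model order chosen by AIC
A, sigma = res.coefs, res.sigma_u          # A: (p, k, k) AR matrices, sigma: innovation covariance

def var_psd(freq, A, sigma, dt=1.0):
    """Model PSD matrix at normalized frequency `freq` (cycles per sample)."""
    k = sigma.shape[0]
    Af = np.eye(k, dtype=complex)
    for lag, Ak in enumerate(A, start=1):
        Af -= Ak * np.exp(-2j * np.pi * freq * lag * dt)
    H = np.linalg.inv(Af)
    return (H @ sigma @ H.conj().T) * dt

for f in np.linspace(0.01, 0.5, 5):
    S = var_psd(f, A, sigma)
    print(f"f={f:.2f}  S11={S[0, 0].real:8.3f}  S22={S[1, 1].real:8.3f}")
```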
Modelling the transitional boundary layer
NASA Technical Reports Server (NTRS)
Narasimha, R.
1990-01-01
Recent developments in the modelling of the transition zone in the boundary layer are reviewed (the zone being defined as extending from the station where intermittency begins to depart from zero to that where it is nearly unity). The value of using a new non-dimensional spot formation rate parameter, and the importance of allowing for so-called subtransitions within the transition zone, are both stressed. Models do reasonably well in constant pressure 2-dimensional flows, but in the presence of strong pressure gradients further improvements are needed. The linear combination approach works surprisingly well in most cases, but would not be so successful in situations where a purely laminar boundary layer would separate but a transitional one would not. Intermittency-weighted eddy viscosity methods do not predict peak surface parameters well without the introduction of an overshooting transition function whose connection with the spot theory of transition is obscure. Suggestions are made for further work that now appears necessary for developing improved models of the transition zone.
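The intermittency-weighted "linear combination" idea mentioned above can be sketched with Narasimha's universal intermittency distribution, gamma = 1 - exp(-0.412 xi^2), blended with textbook laminar and turbulent flat-plate skin-friction laws. The transition onset location and extent scale below are hypothetical, and this is a generic illustration rather than any specific model from the review.

```python
# Illustrative sketch only: intermittency-weighted skin friction through transition.
import numpy as np

def intermittency(x, x_t, lam):
    """Fraction of time the flow is turbulent downstream of transition onset x_t."""
    xi = np.maximum(x - x_t, 0.0) / lam
    return 1.0 - np.exp(-0.412 * xi**2)

def cf_laminar(re_x):
    return 0.664 / np.sqrt(re_x)             # Blasius flat plate

def cf_turbulent(re_x):
    return 0.0592 * re_x ** (-0.2)            # power-law flat-plate correlation

u_inf, nu = 20.0, 1.5e-5                      # free-stream speed (m/s), air viscosity (m^2/s)
x = np.linspace(0.05, 2.0, 6)                 # streamwise stations (m)
x_t, lam = 0.5, 0.3                           # onset and extent scale of transition (m), assumed

re_x = u_inf * x / nu
gamma = intermittency(x, x_t, lam)
cf = (1.0 - gamma) * cf_laminar(re_x) + gamma * cf_turbulent(re_x)

for xs, g, c in zip(x, gamma, cf):
    print(f"x={xs:4.2f} m  gamma={g:4.2f}  Cf={c:.5f}")
```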
Zgheib, Sara; Méquinion, Mathieu; Lucas, Stéphanie; Leterme, Damien; Ghali, Olfa; Tolle, Virginie; Zizzari, Philippe; Bellefontaine, Nicole; Legroux-Gérot, Isabelle; Hardouin, Pierre; Broux, Odile; Viltart, Odile; Chauveau, Christophe
2014-01-01
Anorexia nervosa is a primary psychiatric disorder, with non-negligible rates of mortality and morbidity. Some of the related alterations could participate in a vicious cycle limiting the recovery. Animal models mimicking various physiological alterations related to anorexia nervosa are necessary to provide better strategies of treatment. To explore physiological alterations and recovery in a long-term mouse model mimicking numerous consequences of severe anorexia nervosa. C57Bl/6 female mice were submitted to a separation-based anorexia protocol combining separation and time-restricted feeding for 10 weeks. Thereafter, mice were housed in standard conditions for 10 weeks. Body weight, food intake, body composition, plasma levels of leptin, adiponectin, IGF-1, blood levels of GH, reproductive function and glucose tolerance were followed. Gene expression of several markers of lipid and energy metabolism was assayed in adipose tissues. Mimicking what is observed in anorexia nervosa patients, and despite a food intake close to that of control mice, separation-based anorexia mice displayed marked alterations in body weight, fat mass, lean mass, bone mass acquisition, reproductive function, GH/IGF-1 axis, and leptinemia. mRNA levels of markers of lipogenesis, lipolysis, and the brown-like adipocyte lineage in subcutaneous adipose tissue were also changed. All these alterations were corrected during the recovery phase, except for the hypoleptinemia that persisted despite the full recovery of fat mass. This study strongly supports the separation-based anorexia protocol as a valuable model of long-term negative energy balance state that closely mimics various symptoms observed in anorexia nervosa, including metabolic adaptations. Interestingly, during a recovery phase, mice showed a high capacity to normalize these parameters with the exception of plasma leptin levels. It will be interesting therefore to explore further the central and peripheral effects of the uncorrected hypoleptinemia during recovery from separation-based anorexia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schimpe, Michael; von Kuepach, M. E.; Naumann, M.
2018-01-12
For reliable lifetime predictions of lithium-ion batteries, models for cell degradation are required. A comprehensive semi-empirical model based on a reduced set of internal cell parameters and physically justified degradation functions for the capacity loss is developed and presented for a commercial lithium iron phosphate/graphite cell. One calendar and several cycle aging effects are modeled separately. Emphasis is placed on the varying degradation at different temperatures. Degradation mechanisms for cycle aging at high and low temperatures as well as the increased cycling degradation at high state of charge are calculated separately. For parameterization, a lifetime test study is conducted including storage and cycle tests. Additionally, the model is validated through a dynamic current profile based on real-world application in a stationary energy storage system, revealing its accuracy. Tests for validation are continued for up to 114 days after the longest parameterization tests. In conclusion, the model error for the cell capacity loss in the application-based tests is below 1% of the original cell capacity at the end of testing, and the maximum relative model error is below 21%.
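A minimal sketch of a generic semi-empirical capacity-fade model of the kind described above, with separate calendar and cycle aging terms, Arrhenius-like temperature dependence and a state-of-charge stress factor. The functional forms and all parameter values are hypothetical and are not those fitted in the paper.

```python
# Illustrative sketch only: generic calendar + cycle capacity-fade model.
import numpy as np

R_GAS = 8.314  # J/(mol K)

def calendar_loss(t_days, temp_k, soc, k_ref=1.0e-3, ea=40e3, t_ref=298.15, soc_sens=0.6):
    """Calendar capacity loss (fraction): ~sqrt(time), worse at high T and high SOC."""
    arrhenius = np.exp(-ea / R_GAS * (1.0 / temp_k - 1.0 / t_ref))
    soc_stress = 1.0 + soc_sens * (soc - 0.5)
    return k_ref * arrhenius * soc_stress * np.sqrt(t_days)

def cycle_loss(fec, temp_k, k_ref=1.5e-3, t_opt=298.15, curvature=6e-4):
    """Cycle capacity loss (fraction): ~sqrt(full equivalent cycles), minimal near t_opt."""
    temp_stress = 1.0 + curvature * (temp_k - t_opt) ** 2
    return k_ref * temp_stress * np.sqrt(fec)

t_days, temp_c, soc, fec = 365.0, 35.0, 0.9, 300.0   # one year of a hypothetical duty cycle
temp_k = temp_c + 273.15
loss = calendar_loss(t_days, temp_k, soc) + cycle_loss(fec, temp_k)
print(f"predicted capacity loss after one year: {100.0 * loss:.1f}% of nominal capacity")
```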
Hot kinetic model as a guide to improve organic photovoltaic materials.
Sosorev, Andrey Yu; Godovsky, Dmitry Yu; Paraschuk, Dmitry Yu
2018-01-31
The modeling of organic solar cells (OSCs) can provide a roadmap for their further improvement. Many OSC models have been proposed in recent years; however, the impact on OSC efficiency of hot charge-transfer (CT) states, the key intermediates on the path from photons to electricity, remains highly ambiguous. In this study, we suggest an analytical kinetic model for OSCs that considers two-step charge generation via hot CT states. This hot kinetic model allowed us to evaluate the impact of different material parameters on OSC performance: the driving force for charge separation, the optical bandgap, charge mobility, the geminate recombination rate, the thermalization rate, the average electron-hole separation distance in the CT state, the dielectric permittivity, the reorganization energy and charge delocalization. In contrast to the widespread trend of lowering the material bandgap, the model predicts that this approach is efficient only when combined with improvement of the other material properties. The most promising ways to increase OSC performance are decreasing the reorganization energy (the energy change accompanying CT from the donor molecule to the acceptor), increasing the dielectric permittivity and increasing charge delocalization. The model suggests that there are no fundamental limitations preventing OSC efficiencies above 20%.
Improved parameter extraction and classification for dynamic contrast enhanced MRI of prostate
NASA Astrophysics Data System (ADS)
Haq, Nandinee Fariah; Kozlowski, Piotr; Jones, Edward C.; Chang, Silvia D.; Goldenberg, S. Larry; Moradi, Mehdi
2014-03-01
Magnetic resonance imaging (MRI), particularly dynamic contrast enhanced (DCE) imaging, has shown great potential in prostate cancer diagnosis and prognosis. The time course of the DCE images provides measures of the contrast agent uptake kinetics. Also, using pharmacokinetic modelling, one can extract parameters from the DCE-MR images that characterize the tumor vascularization and can be used to detect cancer. A requirement for calculating the pharmacokinetic DCE parameters is estimating the Arterial Input Function (AIF). One needs an accurate segmentation of the cross section of the external femoral artery to obtain the AIF. In this work we report a semi-automatic method for segmentation of the cross section of the femoral artery, using a circular Hough transform, in the sequence of DCE images. We also report a machine-learning framework to combine pharmacokinetic parameters with the model-free contrast agent uptake kinetic parameters extracted from the DCE time course into a nine-dimensional feature vector. This combination of features is used with random forest and with support vector machine classification for cancer detection. The MR data is obtained from patients prior to radical prostatectomy. After the surgery, whole-mount histopathology analysis is performed and registered to the DCE-MR images as the diagnostic reference. We show that the use of a combination of pharmacokinetic parameters and the model-free empirical parameters extracted from the time course of DCE results in improved cancer detection compared to the use of each group of features separately. We also validate the proposed method for calculation of the AIF based on comparison with the manual method.
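A minimal sketch of the circular-Hough-transform step: locate a roughly circular vessel cross-section in an enhanced frame and average the voxels inside it over time to form an AIF-like curve. The "DCE series" here is a synthetic phantom, and the radii, smoothing and thresholds are guesses; this is not the paper's semi-automatic pipeline.

```python
# Illustrative sketch only: circular Hough detection of a vessel and a toy AIF.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.draw import disk

rng = np.random.default_rng(0)
h, w, n_frames = 96, 96, 20
yy, xx = np.mgrid[:h, :w]
artery = (yy - 40) ** 2 + (xx - 60) ** 2 <= 6 ** 2          # 6-pixel-radius "vessel"

series = 0.05 * rng.standard_normal((n_frames, h, w)) + 0.2
for t in range(n_frames):
    series[t][artery] += 1.0 - np.exp(-0.5 * t)             # toy contrast uptake curve

late = series[-1]                                           # enhanced frame for detection
edges = canny(late, sigma=2.0)
radii = np.arange(4, 10)
accums, cx, cy, rad = hough_circle_peaks(hough_circle(edges, radii), radii,
                                         total_num_peaks=1)
rr, cc = disk((cy[0], cx[0]), rad[0], shape=(h, w))
aif = series[:, rr, cc].mean(axis=1)                        # mean signal inside the vessel

print(f"detected centre=({cy[0]}, {cx[0]}), radius={rad[0]} px")
print("toy AIF (first 5 frames):", aif[:5].round(2))
```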
Zhang, Daqing; Xiao, Jianfeng; Zhou, Nannan; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian
2015-01-01
The blood-brain barrier (BBB) is a highly complex physical barrier that determines which substances are allowed to enter the brain. Support vector machine (SVM) is a kernel-based machine learning method that is widely used in QSAR studies. For a successful SVM model, the kernel parameters and the feature subset selection are the most important factors affecting prediction accuracy. In most studies, they are treated as two independent problems, but it has been proven that they can affect each other. We designed and implemented a genetic algorithm (GA) to optimize the kernel parameters and feature subset selection for SVM regression and applied it to BBB penetration prediction. The results show that our GA/SVM model is more accurate than other currently available log BB models. Therefore, optimizing both the SVM parameters and the feature subset simultaneously with a genetic algorithm is a better approach than methods that treat the two problems separately. Analysis of our log BB model suggests that the carboxylic acid group, polar surface area (PSA)/hydrogen-bonding ability, lipophilicity, and molecular charge play important roles in BBB penetration. Among those properties relevant to BBB penetration, lipophilicity enhances BBB penetration while all the others are negatively correlated with it. PMID:26504797
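A minimal sketch of the joint-optimization idea described above: a small hand-rolled genetic algorithm whose chromosome encodes both an SVM-regression feature mask and the kernel hyperparameters (C, gamma). It runs on a synthetic data set rather than the log BB data and is far simpler than the published GA/SVM.

```python
# Illustrative sketch only: GA over feature subset + SVR hyperparameters.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X, y = make_regression(n_samples=150, n_features=20, n_informative=6,
                       noise=10.0, random_state=1)
n_feat, pop_size, n_gen = X.shape[1], 24, 15

def random_individual():
    return {"mask": rng.random(n_feat) < 0.5,
            "logC": rng.uniform(-2, 3), "logG": rng.uniform(-4, 1)}

def fitness(ind):
    if not ind["mask"].any():
        return -np.inf                                     # penalize empty feature sets
    model = SVR(C=10.0 ** ind["logC"], gamma=10.0 ** ind["logG"])
    return cross_val_score(model, X[:, ind["mask"]], y, cv=3, scoring="r2").mean()

def mutate(ind):
    child = {"mask": ind["mask"].copy(), "logC": ind["logC"], "logG": ind["logG"]}
    flip = rng.random(n_feat) < 0.1                        # flip ~10% of feature bits
    child["mask"][flip] = ~child["mask"][flip]
    child["logC"] += 0.3 * rng.standard_normal()
    child["logG"] += 0.3 * rng.standard_normal()
    return child

population = [random_individual() for _ in range(pop_size)]
for gen in range(n_gen):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: pop_size // 2]                      # truncation selection
    population = parents + [mutate(parents[rng.integers(len(parents))])
                            for _ in range(pop_size - len(parents))]

best = max(population, key=fitness)
print(f"best CV R^2 = {fitness(best):.3f} with {int(best['mask'].sum())} features, "
      f"C={10 ** best['logC']:.2f}, gamma={10 ** best['logG']:.4f}")
```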
Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.
2012-01-01
Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
Shankle, William R; Pooley, James P; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D
2013-01-01
Determining how cognition affects functional abilities is important in Alzheimer disease and related disorders. A total of 280 patients (normal or Alzheimer disease and related disorders) received a total of 1514 assessments using the functional assessment staging test (FAST) procedure and the MCI Screen. A hierarchical Bayesian cognitive processing model was created by embedding a signal detection theory model of the MCI Screen-delayed recognition memory task into a hierarchical Bayesian framework. The signal detection theory model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the 6 FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. Hierarchical Bayesian cognitive processing models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition into a continuous measure of functional severity for both individuals and FAST groups. Such a translation links 2 levels of brain information processing and may enable more accurate correlations with other levels, such as those characterized by biomarkers.
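The latent quantities at the core of the model above come from equal-variance signal detection theory: discriminability (d') and response bias (criterion). The sketch below computes plug-in estimates from hit and false-alarm counts; the counts are hypothetical, and the published model estimates these parameters hierarchically rather than with these simple formulas.

```python
# Illustrative sketch only: plug-in signal detection theory estimates.
from scipy.stats import norm

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) with a simple 0.5-count smoothing correction."""
    hr = (hits + 0.5) / (hits + misses + 1.0)                   # smoothed hit rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_h, z_f = norm.ppf(hr), norm.ppf(far)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Hypothetical recognition-memory counts for two illustrative severity groups
groups = {"mild impairment": (18, 2, 4, 16), "moderate impairment": (12, 8, 9, 11)}
for name, counts in groups.items():
    d, c = sdt_parameters(*counts)
    print(f"{name:20s}  d'={d:5.2f}  criterion={c:5.2f}")
```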
Modelling of hydrogen permeability of membranes for high-purity hydrogen production
NASA Astrophysics Data System (ADS)
Zaika, Yury V.; Rodchenkova, Natalia I.
2017-11-01
High-purity hydrogen is required for clean energy and a variety of chemical technology processes. Different alloys, which may be well suited for use in gas-separation plants, were investigated by measuring their specific hydrogen permeability. The parameters of diffusion and sorption had to be estimated in order to numerically model different scenarios and experimental conditions of material usage (including extreme ones) and to identify the limiting factors. This paper presents a nonlinear mathematical model that takes into account the dynamics of sorption-desorption processes and the reversible capture of diffusing hydrogen by inhomogeneities of the material's structure, as well as a modification of the model for high transport rates. The results of the numerical modelling provide information on the sensitivity of the output data to variations of the material's hydrogen permeability parameters. Furthermore, it is possible to analyze the dynamics of concentrations and fluxes that cannot be measured directly. Experimental data for Ta77Nb23 and V85Ni15 alloys were used to test the model. This work is supported by the Russian Foundation for Basic Research (Project No. 15-01-00744).
Teunis, P F M; Ogden, I D; Strachan, N J C
2008-06-01
The infectivity of pathogenic microorganisms is a key factor in the transmission of an infectious disease in a susceptible population. Microbial infectivity is generally estimated from dose-response studies in human volunteers, which can only be done with mildly pathogenic organisms. Here, a hierarchical Beta-Poisson dose-response model is developed utilizing data from human outbreaks. On the lowest level, each outbreak is modelled separately, and these are then combined at a second level to produce a group dose-response relation. The distribution of foodborne pathogens often shows strong heterogeneity, and this is incorporated by introducing an additional parameter into the dose-response model, accounting for the degree of overdispersion relative to the Poisson distribution. It was found that heterogeneity considerably influences the shape of the dose-response relationship and increases uncertainty in predicted risk. This uncertainty is greater than that previously reported for surrogate and outbreak models using a single level of analysis. Monte Carlo parameter samples (alpha and beta of the Beta-Poisson model) can be readily incorporated into risk assessment models built using tools such as S-Plus and @Risk.
Reboussin, Beth A.; Ialongo, Nicholas S.
2011-01-01
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder which is most often diagnosed in childhood, with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use, in particular marijuana use, progresses, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models by using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM parameters describing the probability of transitioning between the LCA-defined stages of marijuana use and the influence of the LCA-defined ADHD subtypes on these transition rates are then estimated by using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates, and simplifying assumptions about the correlation structure at each stage reduce the computational complexity. PMID:21461139
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F.; Siebers, Jeffrey V.
2016-01-15
Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl2 aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. ["Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues," Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and the VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors' idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and the Yang tPFM models. The BVM estimation accuracy is not dependent on tissue type or proton energy range. The BVM is slightly more vulnerable to CT image intensity uncertainties than the tPFM models. Both the BVM and tPFM prediction accuracies were robust to uncertainties of tissue composition and independent of the choice of reference values. This reported accuracy does not include the impacts of I-value uncertainties and imaging artifacts and may not be achievable on current clinical CT scanners. Conclusions: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups. In contrast to the tPFM, the BVM does not require iteratively solving for effective atomic number and electron density at every voxel; this improves the computational efficiency of DECT imaging when iterative, model-based image reconstruction algorithms are used to minimize noise and systematic imaging artifacts of CT images.
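A minimal sketch of the basis-vector idea behind the BVM, not the authors' implementation: attenuation at two DECT energies is modelled as a linear combination of two basis materials, the weights are found by solving a 2x2 linear system per voxel, and electron density and mean excitation energy are then mixed from the basis properties. All numerical inputs are hypothetical, and the exact weighting used in the paper may differ from this simple mixing rule.

```python
# Illustrative sketch only: linear basis decomposition and a Bethe-type SPR estimate.
import numpy as np

# Hypothetical basis-material data: linear attenuation (1/cm) at the low and high
# DECT energies, electron density relative to water, and I-value (eV).
basis = {
    "polystyrene": {"mu": np.array([0.180, 0.160]), "rho_e": 1.01, "I": 68.7},
    "CaCl2 soln.": {"mu": np.array([0.260, 0.200]), "rho_e": 1.07, "I": 80.0},
}
M = np.column_stack([b["mu"] for b in basis.values()])   # 2x2 design matrix

def bvm_voxel(mu_low, mu_high):
    """Solve for basis weights, then mix electron density and ln(I)."""
    c = np.linalg.solve(M, np.array([mu_low, mu_high]))
    rho_e = sum(ci * b["rho_e"] for ci, b in zip(c, basis.values()))
    ln_i = sum(ci * b["rho_e"] * np.log(b["I"]) for ci, b in zip(c, basis.values())) / rho_e
    return rho_e, np.exp(ln_i)

def stopping_power_ratio(rho_e, i_ev, energy_mev=175.0, i_water=75.0):
    """Water-relative proton stopping power from the Bethe logarithm (no correction terms)."""
    mp, me_c2 = 938.272, 0.511e6                 # proton rest mass (MeV), electron rest energy (eV)
    gamma = 1.0 + energy_mev / mp
    beta2 = 1.0 - 1.0 / gamma**2
    bethe = lambda i: np.log(2.0 * me_c2 * beta2 / (i * (1.0 - beta2))) - beta2
    return rho_e * bethe(i_ev) / bethe(i_water)

rho_e, i_val = bvm_voxel(mu_low=0.210, mu_high=0.175)    # a made-up tissue voxel
print(f"rho_e (rel. water) = {rho_e:.3f},  I = {i_val:.1f} eV,"
      f"  SPR = {stopping_power_ratio(rho_e, i_val):.3f}")
```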
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras
Morris, Mark; Sellers, William I.
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.
Peyer, Kathrin E; Morris, Mark; Sellers, William I
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
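A minimal sketch of the convex-hulling step: hull a segment's 3D point cloud, then estimate volume, mass and centre of mass under a uniform-density assumption. The point cloud below is random rather than a photogrammetric scan, and the density value is a generic soft-tissue assumption, not a value from the paper.

```python
# Illustrative sketch only: convex-hull volume, mass and centre of mass of a segment.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
points = rng.uniform(low=[-0.05, -0.05, 0.0], high=[0.05, 0.05, 0.35], size=(2000, 3))
points = points[(points[:, 0] ** 2 + points[:, 1] ** 2) < 0.04 ** 2]   # crude limb-like blob

hull = ConvexHull(points)
density = 1050.0                               # kg/m^3, typical soft-tissue assumption

# Centre of mass: decompose the hull into tetrahedra sharing an interior apex point
apex = points[hull.vertices].mean(axis=0)
volume, weighted_centroid = 0.0, np.zeros(3)
for simplex in hull.simplices:
    a, b, c = points[simplex]
    v = abs(np.dot(a - apex, np.cross(b - apex, c - apex))) / 6.0       # tetrahedron volume
    volume += v
    weighted_centroid += v * (a + b + c + apex) / 4.0                   # tetrahedron centroid

com = weighted_centroid / volume
print(f"hull volume  = {hull.volume * 1e3:.2f} L   (tetrahedra sum {volume * 1e3:.2f} L)")
print(f"segment mass = {density * hull.volume:.2f} kg")
print(f"centre of mass (m) = {com.round(3)}")
```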
ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.
Earthquake source parameters underpin several aspects of nuclear explosion monitoring, including calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC), source depths, discrimination by isotropic moment tensor components, and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into five windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake and depth, and the seismic moment (or equivalently the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
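A minimal sketch of the grid-search-with-time-shift idea at the heart of the CAP approach, not the CAP code itself: for each trial mechanism, the synthetic is cross-correlated against the data window to find the best time shift before the misfit is evaluated. The "Green's function" below is a toy wavelet parameterized by strike, dip, rake and depth purely for demonstration.

```python
# Illustrative sketch only: grid search over source parameters with time-shift alignment.
import itertools
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0.0, 30.0, 600)

def toy_synthetic(strike, dip, rake, depth):
    """Stand-in for a real 1D Green's-function synthetic (entirely hypothetical)."""
    amp = 1.0 + 0.3 * np.sin(np.radians(strike)) * np.cos(np.radians(rake))
    t0 = 8.0 + 0.05 * depth
    return amp * np.sin(np.radians(dip)) * np.exp(-0.5 * ((t - t0) / 1.5) ** 2)

true = dict(strike=40.0, dip=60.0, rake=-90.0, depth=12.0)
data = np.roll(toy_synthetic(**true), 15) + 0.02 * rng.standard_normal(t.size)  # shifted "data"

def shifted_misfit(syn, data, max_shift=40):
    """Best L2 misfit over allowed lags, using cross-correlation to pick the alignment."""
    lags = np.arange(-max_shift, max_shift + 1)
    cc = [np.dot(np.roll(syn, int(k)), data) for k in lags]
    best_lag = int(lags[int(np.argmax(cc))])
    return np.sum((np.roll(syn, best_lag) - data) ** 2), best_lag

grid = itertools.product(np.arange(0, 360, 30), np.arange(10, 90, 20),
                         np.arange(-180, 180, 60), np.arange(2, 30, 4))
best = min(grid, key=lambda p: shifted_misfit(toy_synthetic(*p), data)[0])
misfit, lag = shifted_misfit(toy_synthetic(*best), data)
print("best strike/dip/rake/depth =", [float(v) for v in best],
      f"| time shift = {lag} samples | misfit = {misfit:.3f}")
```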