Baillet, S.; Mosher, J. C.; Jerbi, K.; Leahy, R. M.
2001-01-01
Reliable estimation of the local spatial extent of neural activity is a key to the quantitative analysis of MEG sources across subjects and conditions. In association with an understanding of the temporal dynamics among multiple areas, this would represent a major advance in electrophysiological source imaging. Parametric current dipole approaches to MEG (and EEG) source localization can rapidly generate a physical model of neural current generators using a limited number of parameters. However, physiological interpretation of these models is often difficult, especially in terms of the spatial extent of the true cortical activity. In new approaches using multipolar source models [3, 5], similar problems remain in the analysis of the higher-order source moments as parameters of cortical extent. Image-based approaches to the inverse problem provide a direct estimate of cortical current generators, but computationally expensive nonlinear methods are required to produce focal sources [1,4]. Recent efforts describe how a cortical patch can be grown until a best fit to the data is reached in the least-squares sense [6], but computational considerations necessitate that the growth be seeded in predefined regions of interest. In a previous study [2], a source obtained using a parametric model was remapped onto the cortex by growing a patch of cortical dipoles in the vicinity of the parametric source until the forward MEG or EEG fields of the parametric and cortical sources matched. The source models were dipoles and first-order multipoles. We propose to combine the parametric and imaging methods for MEG source characterization to take advantage of (i) the parsimonious and computationally efficient nature of parametric source localization methods and (ii) the anatomical and physiological consistency of imaging techniques that use relevant a priori information. By performing the cortical remapping imaging step by matching the multipole expansions of the original parametric
Two parametric voice source models and their asymptotic analysis
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2014-05-01
The paper studies the asymptotic behavior of the glottal area function near the moments of glottal opening and closing for two mathematical voice source models. It is shown that in the first model, the asymptotics of the area function obey a power law with an exponent of no less than 1. Detailed analysis makes it possible to refine these limits depending on the relative sizes of the intervals of a closed and open glottis. This work also studies another parametric model of the glottal area, based on a simplified physical-geometrical representation of vocal-fold vibration processes. This is a special variant of the well-known two-mass model and contains five parameters: the period of the main tone, equivalent masses on the lower and upper edges of the vocal folds, the coefficient of elastic resistance of the lower vocal fold, and the delay time between openings of the upper and lower folds. It is established that the asymptotics of the resulting glottal area function obey a power law with an exponent of 1 both at opening and at closing.
Parametric Modeling of Electron Beam Loss in Synchrotron Light Sources
Sayyar-Rodsari, B.; Schweiger, C.; Hartman, E.; Corbett, J.; Lee, M.; Lui, P.; Paterson, E.; /SLAC
2007-11-28
Synchrotron light is used for a wide variety of scientific disciplines ranging from physical chemistry to molecular biology and industrial applications. As the electron beam circulates, random single-particle collisional processes lead to decay of the beam current in time. We report a simulation study in which a combined neural network (NN) and first-principles (FP) model is used to capture the decay in beam current due to Touschek, Bremsstrahlung, and Coulomb effects. The FP block in the combined model is a parametric description of the beam current decay in which model parameters vary as a function of beam operating conditions (e.g., vertical scraper position, RF voltage, number of bunches, and total beam current). The NN block provides the parameters of the FP model and is trained (through constrained nonlinear optimization) to capture the variation in model parameters as the operating conditions of the beam change. Simulation results will be presented to demonstrate that the proposed combined framework accurately models beam decay, as well as the variation in model parameters, without direct access to parameter values in the model.
Parametric Explosion Spectral Model
Ford, S R; Walter, W R
2012-01-19
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas-porosity (low-strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
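The three-parameter spectral shape described above can be sketched as follows; the functional form is one common generalization of the Brune model, and the parameter names and values here are illustrative, not fitted to any event:

```python
import math

def brune_spectrum(f, omega0, fc, p):
    """Generalized Brune-type source spectrum (a sketch; symbols are
    illustrative): long-period level omega0, corner frequency fc, and
    high-frequency fall-off exponent p (p = 2 recovers the classic
    Brune omega-squared model)."""
    return omega0 / (1.0 + (f / fc) ** p)

# Below the corner frequency the spectrum is flat at omega0;
# far above it, the spectrum falls off as f**(-p).
low = brune_spectrum(0.01, omega0=1.0, fc=1.0, p=2.0)
high = brune_spectrum(100.0, omega0=1.0, fc=1.0, p=2.0)
```

Fitting omega0, fc, and p to observed regional phase spectra, and then correlating the fitted p with gas porosity, is the kind of workflow the abstract describes.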
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
Modeling and optimization of photon pair sources based on spontaneous parametric down-conversion
Kolenderski, Piotr; Banaszek, Konrad; Wasilewski, Wojciech
2009-07-15
We address the problem of efficient modeling of photon pairs generated in spontaneous parametric down-conversion and coupled into single-mode fibers. It is shown that when the range of relevant transverse wave vectors is restricted by the pump and fiber modes, the computational complexity can be reduced substantially with the help of the paraxial approximation, while retaining the full spectral characteristics of the source. This approach can serve as a basis for efficient numerical calculations or can be combined with analytically tractable approximations of the phase-matching function. We introduce here a cosine-Gaussian approximation of the phase-matching function that works for a broader range of parameters than the Gaussian model used previously. The developed modeling tools are used to evaluate characteristics of the photon pair sources such as the pair production rate and the spectral purity quantifying frequency correlations. Strategies to generate spectrally uncorrelated photons, necessary in multiphoton interference experiments, are analyzed with respect to trade-offs between parameters of the source.
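The idea of replacing the sinc-shaped phase-matching function with smoother analytic stand-ins can be illustrated as below; the matching constant 0.193 is a commonly quoted value for the Gaussian approximation of sinc squared, while the cosine-Gaussian coefficients here are purely illustrative placeholders (the paper's fitted form may differ):

```python
import math

def sinc(x):
    """Unnormalized sinc, sin(x)/x, the exact phase-matching shape."""
    return 1.0 if x == 0 else math.sin(x) / x

def gaussian_pm(x, gamma=0.193):
    """Gaussian stand-in for sinc(x)**2; gamma ~ 0.193 is a commonly
    quoted matching constant (treated here as an assumption)."""
    return math.exp(-gamma * x * x)

def cosine_gaussian_pm(x, a=0.5, b=0.1):
    """Cosine-Gaussian form cos(a*x) * exp(-b*x**2), with illustrative
    (not fitted) coefficients a and b."""
    return math.cos(a * x) * math.exp(-b * x * x)

# Near x = 0 the Gaussian stand-in tracks the exact sinc**2 closely.
err = abs(sinc(0.5) ** 2 - gaussian_pm(0.5))
```

The analytic forms make pair-production-rate and purity integrals tractable, which is the motivation given in the abstract.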
Wey, Andrew; Connett, John; Rudser, Kyle
2015-07-01
For estimating conditional survival functions, non-parametric estimators can be preferred to parametric and semi-parametric estimators due to relaxed assumptions that enable robust estimation. Yet, even when misspecified, parametric and semi-parametric estimators can possess better operating characteristics in small sample sizes due to smaller variance than non-parametric estimators. Fundamentally, this is a bias-variance trade-off situation in that the sample size is not large enough to take advantage of the low bias of non-parametric estimation. Stacked survival models estimate an optimally weighted combination of models that can span parametric, semi-parametric, and non-parametric models by minimizing prediction error. An extensive simulation study demonstrates that stacked survival models consistently perform well across a wide range of scenarios by adaptively balancing the strengths and weaknesses of individual candidate survival models. In addition, stacked survival models perform as well as or better than the model selected through cross-validation. Finally, stacked survival models are applied to a well-known German breast cancer study.
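The stacking idea, an optimally weighted convex combination of candidate predictors chosen by minimizing prediction error, can be sketched in one dimension with two hypothetical candidate models:

```python
def stack_two_models(y, pred_a, pred_b, step=0.01):
    """Toy stacking of two candidate survival-probability predictors:
    choose the convex weight w in [0, 1] minimizing the squared
    prediction error of w*pred_a + (1-w)*pred_b. A one-dimensional
    stand-in for the constrained optimization in stacked survival
    models; real stacking minimizes a cross-validated loss."""
    best_w, best_err = 0.0, float("inf")
    w = 0.0
    while w <= 1.0:
        err = sum((yi - (w * a + (1 - w) * b)) ** 2
                  for yi, a, b in zip(y, pred_a, pred_b))
        if err < best_err:
            best_w, best_err = w, err
        w = round(w + step, 10)
    return best_w

# The truth lies between two misspecified candidates, so the stacked
# weight lands in the interior rather than at 0 or 1.
w = stack_two_models(y=[0.5, 0.6, 0.7],
                     pred_a=[0.4, 0.5, 0.6],
                     pred_b=[0.7, 0.8, 0.9])
```

This interior solution is the bias-variance compromise the abstract describes: neither candidate wins outright, and the blend beats both.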
Parametric Modeling for Fluid Systems
NASA Technical Reports Server (NTRS)
Pizarro, Yaritzmar Rosario; Martinez, Jonathan
2013-01-01
Fluid Systems involves different projects that require parametric modeling, a technique in which a model maintains consistent relationships between elements as it is manipulated. One of these projects is the Neo Liquid Propellant Testbed, which is part of Rocket U (Rocket University). As part of Rocket U, engineers at NASA's Kennedy Space Center in Florida have the opportunity to develop critical flight skills as they design, build, and launch high-powered rockets. To build the Neo testbed, hardware from the Space Shuttle Program was repurposed. Modeling for Neo included fittings, valves, frames, and tubing, among others. These models help in the review process to make sure regulations are being followed. Another fluid systems project that required modeling is Plant Habitat's TCUI test project. Plant Habitat is a plan to develop a large growth chamber to study the effects of long-duration microgravity exposure on plants in space. Work for this project included the design and modeling of a duct vent for a flow test. Parametric modeling for these projects was done using Creo Parametric 2.0.
Parametric models for samples of random functions
Grigoriu, M.
2015-09-15
A new class of parametric models, referred to as sample parametric models, is developed for random elements that match sample rather than the first two moments and/or other global properties of these elements. The models can be used to characterize, e.g., material properties at small scale in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to solve efficiently stochastic equations.
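A sample of the kind of finite-dimensional representation described, here a truncated Karhunen-Loève-type expansion with an illustrative sine basis rather than eigenfunctions estimated from data, can be drawn as:

```python
import math
import random

def kl_sample(t_grid, eigvals, seed=None):
    """Draw one sample path from a truncated KL-type expansion,
    X(t) = sum_i sqrt(lam_i) * xi_i * sin((i+1)*pi*t), xi_i ~ N(0, 1).
    The sine modes and eigenvalues are illustrative stand-ins for the
    sample-derived basis used in the paper."""
    rng = random.Random(seed)
    xis = [rng.gauss(0.0, 1.0) for _ in eigvals]
    return [sum(math.sqrt(lam) * xi * math.sin((i + 1) * math.pi * t)
                for i, (lam, xi) in enumerate(zip(eigvals, xis)))
            for t in t_grid]

grid = [k / 10 for k in range(11)]
path = kl_sample(grid, eigvals=[1.0, 0.5, 0.25], seed=1)
```

Each draw lives in the three-dimensional span of the modes, which is the defining property of such sample parametric models.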
Parametric modelling for electrical impedance spectroscopy system.
Lu, L; Hamzaoui, L; Brown, B H; Rigaud, B; Smallwood, R H; Barber, D C; Morucci, J P
1996-03-01
Three parametric modelling approaches based on the Cole-Cole model are introduced. A comparison between modelling only the real part and modelling both the real and imaginary parts is carried out by simulations, in which random and systematic noise are considered, respectively. Results from modelling in vitro data collected from sheep are given to support the conclusions.
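The Cole-Cole model underlying all three approaches is a four-parameter complex impedance; a minimal sketch (with illustrative parameter values) shows the distinction between fitting only the real part and fitting both parts:

```python
def cole_cole(omega, r0, rinf, tau, alpha):
    """Cole-Cole impedance Z(w) = Rinf + (R0 - Rinf)/(1 + (j*w*tau)**alpha).
    Modelling only the real part means fitting Z.real against data;
    modelling both parts fits the full complex value."""
    return rinf + (r0 - rinf) / (1 + (1j * omega * tau) ** alpha)

# Z tends to R0 at low frequency and to Rinf at high frequency;
# the parameter values below are illustrative, not tissue data.
z_low = cole_cole(1e-3, r0=100.0, rinf=20.0, tau=1e-3, alpha=0.8)
z_high = cole_cole(1e6, r0=100.0, rinf=20.0, tau=1e-3, alpha=0.8)
```

Systematic phase errors in an impedance spectroscopy system corrupt the imaginary part more than the real part, which motivates the real-part-only variant compared in the paper.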
Towards an Empirically Based Parametric Explosion Spectral Model
Ford, S R; Walter, W R; Ruppert, S; Matzel, E; Hauk, T; Gok, R
2009-08-31
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
Modeling personnel turnover in the parametric organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A model is developed for simulating the dynamics of a newly formed organization that remains credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job-function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict the staffing profiles required to meet functional needs at the desired time. The model can be extended by revising the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
Parametric modelling of a knee joint prosthesis.
Khoo, L P; Goh, J C; Chow, S L
1993-01-01
This paper presents an approach for establishing a parametric model of a knee joint prosthesis. Four different sizes of a commercial prosthesis were used as an example in the study. A reverse engineering technique was employed to reconstruct the prosthesis on CATIA, a CAD (computer-aided design) system, and parametric models were established as a result of the analysis. Using the established parametric model and knee data obtained from a clinical study of 21 pairs of cadaveric Asian knees, the development of a prototype prosthesis that suits a patient with a very small knee joint is presented. However, it was found that modification of certain parameters may be inevitable due to the uniqueness of the Asian knee. An avenue for rapid modelling and, eventually, economical production of a customized knee joint prosthesis for patients is proposed and discussed.
Incorporating parametric uncertainty into population viability analysis models
McGowan, Conor P.; Runge, Michael C.; Larson, Michael A.
2011-01-01
Uncertainty in parameter estimates from sampling variation or expert judgment can introduce substantial uncertainty into ecological predictions based on those estimates. However, in standard population viability analyses, one of the most widely used tools for managing plant, fish and wildlife populations, parametric uncertainty is often ignored in or discarded from model projections. We present a method for explicitly incorporating this source of uncertainty into population models to fully account for risk in management and decision contexts. Our method involves a two-step simulation process where parametric uncertainty is incorporated into the replication loop of the model and temporal variance is incorporated into the loop for time steps in the model. Using the piping plover, a federally threatened shorebird in the USA and Canada, as an example, we compare abundance projections and extinction probabilities from simulations that exclude and include parametric uncertainty. Although final abundance was very low for all sets of simulations, estimated extinction risk was much greater for the simulation that incorporated parametric uncertainty in the replication loop. Decisions about species conservation (e.g., listing, delisting, and jeopardy) might differ greatly depending on the treatment of parametric uncertainty in population models.
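The two-step simulation structure, parameter draws in the replication loop and temporal variance in the time-step loop, can be sketched as below; all numbers are illustrative and not taken from the piping plover analysis:

```python
import math
import random

def extinction_prob(n0, mean_r, r_se, temporal_sd,
                    years=50, reps=2000, seed=42):
    """Two-step PVA sketch: parametric uncertainty is drawn once per
    replicate (outer loop), temporal variance once per year (inner
    loop). Setting r_se = 0 reproduces the standard practice of
    ignoring parametric uncertainty."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(reps):
        r_mean = rng.gauss(mean_r, r_se)            # parameter draw
        n = n0
        for _ in range(years):
            n *= math.exp(rng.gauss(r_mean, temporal_sd))  # yearly draw
            if n < 1.0:                              # quasi-extinction
                extinct += 1
                break
    return extinct / reps

p_without = extinction_prob(50, -0.02, 0.0, 0.1)
p_with = extinction_prob(50, -0.02, 0.05, 0.1)
# Ignoring parametric uncertainty understates extinction risk here.
```

The gap between the two estimates mirrors the paper's finding: the outer-loop parameter draws widen the distribution of trajectories and raise the estimated risk.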
Realization of an omnidirectional source of sound using parametric loudspeakers.
Sayin, Umut; Artís, Pere; Guasch, Oriol
2013-09-01
Parametric loudspeakers are often used in beam-forming applications where high directivity is required. In this paper, however, it is proposed to use such devices to build an omnidirectional source of sound. An initial prototype, the omnidirectional parametric loudspeaker (OPL), consisting of a sphere with hundreds of ultrasonic transducers placed on it, has been constructed. The OPL emits audible sound thanks to the parametric acoustic array phenomenon, and the close proximity and large number of transducers result in the generation of a highly omnidirectional sound field. Comparisons with conventional dodecahedron loudspeakers have been made in terms of directivity, frequency response, and applications such as the generation of diffuse acoustic fields in reverberant chambers. The OPL prototype has performed better than the conventional loudspeaker, especially for frequencies higher than 500 Hz, its main drawback being the difficulty of generating intense pressure levels at low frequencies.
THz-wave parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-12-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We have also developed a novel basic technology for THz imaging that allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
Practical quantum repeaters with parametric down-conversion sources
NASA Astrophysics Data System (ADS)
Krovi, Hari; Guha, Saikat; Dutton, Zachary; Slater, Joshua A.; Simon, Christoph; Tittel, Wolfgang
2016-03-01
Conventional wisdom suggests that realistic quantum repeaters will require quasi-deterministic sources of entangled photon pairs. In contrast, we here study a quantum repeater architecture that uses simple parametric down-conversion sources, as well as frequency-multiplexed multimode quantum memories and photon-number-resolving detectors. We show that this approach can significantly extend quantum communication distances compared to direct transmission. This shows that important trade-offs are possible between the different components of quantum repeater architectures.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model; in fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data, applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; that technology development as a function of time reduces cost at the rate of 50% per 17 years; that it costs less per square meter of collecting aperture to build a large telescope than a small one; and that increasing mass reduces cost.
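A cost estimating relationship of the kind described can be sketched as follows; the multiplier and aperture exponent are hypothetical placeholders, and only the 50%-per-17-years learning factor is taken from the abstract:

```python
def telescope_cost(aperture_m, launch_year, k=1.0, exponent=1.8,
                   ref_year=1990, halving_years=17.0):
    """Illustrative CER of the form cost = k * D**e * 0.5**((Y-Y0)/17):
    aperture diameter D as primary cost driver, plus the technology
    learning factor halving cost every 17 years. k, e, and Y0 are
    placeholders, not the published coefficients."""
    return (k * aperture_m ** exponent
            * 0.5 ** ((launch_year - ref_year) / halving_years))

c1 = telescope_cost(2.0, 1990)
c2 = telescope_cost(2.0, 2007)   # 17 years later: half the cost
c3 = telescope_cost(4.0, 1990)   # doubled aperture
```

With an exponent below 2, cost per square meter of collecting aperture falls as the telescope grows, matching the abstract's third finding.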
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
Parametric Model of an Aerospike Rocket Engine
NASA Technical Reports Server (NTRS)
Korte, J. J.
2000-01-01
A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.
Modeling Personnel Turnover in the Parametric Organization
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1991-01-01
A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would be credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
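The linear dynamical system view of such an organization, with the state vector holding headcount per job function, can be sketched with a hypothetical two-function transition matrix:

```python
def step(staff, transition, hires):
    """One period of a linear staffing model, x[t+1] = A x[t] + h:
    transition[i][j] is the fraction of function j moving to function
    i per period (column sums below 1 encode attrition), and hires is
    the per-period intake vector. All numbers are illustrative."""
    n = len(staff)
    return [sum(transition[i][j] * staff[j] for j in range(n)) + hires[i]
            for i in range(n)]

# Two job functions; 10% cross-flow and 5% attrition per period.
A = [[0.85, 0.10],
     [0.10, 0.85]]
x = [10.0, 0.0]
for _ in range(100):
    x = step(x, A, hires=[1.0, 0.5])
# x approaches the steady state (I - A)^-1 h = (16, 14) here.
```

Running the recursion forward from a proposed hiring plan yields the staffing profile over time, which is the prediction task described in the abstract.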
Acoustic intensity in the interaction region of a parametric source
NASA Astrophysics Data System (ADS)
Lauchle, G. C.; Gabrielson, T. B.; van Tol, D. J.; Kottke, N. F.; McConnell, J. A.
2003-10-01
The goal of this project was to measure acoustic intensity in the strong interaction region of a parametric source in order to obtain a clear definition of the source-generation region and to separate the local generation (the reactive field) from propagation (the real or active field). The acoustic intensity vector was mapped in the interaction region of a parametric projector at Lake Seneca. The source was driven with primary signals at 22 kHz and 27 kHz. Receiving sensors were located 8.5 meters from the projector. At that range, the secondary at 5 kHz was between 40 and 45 dB below either primary. For the primary levels used, the plane-wave shock inception distance would have been at least 14 meters. Furthermore, the Rayleigh distance for the projector was about 4 meters, so the measurements at 8.5 meters were in the strong interaction region but not in saturation. Absorption was negligible over these ranges. The intensity measurements were made at fixed range but varying azimuth angle and depth, thus developing a two-dimensional cross-section of the secondary beam. Measurements of both the active and reactive intensity vectors will be presented along with a discussion of measurement error. [Work supported by ONR Code 321SS.]
Model Comparison of Bayesian Semiparametric and Parametric Structural Equation Models
ERIC Educational Resources Information Center
Song, Xin-Yuan; Xia, Ye-Mao; Pan, Jun-Hao; Lee, Sik-Yum
2011-01-01
Structural equation models have wide applications. One of the most important issues in analyzing structural equation models is model comparison. This article proposes a Bayesian model comparison statistic, namely the "L[subscript nu]"-measure for both semiparametric and parametric structural equation models. For illustration purposes, we consider…
Analysis of surface parametrizations for modern photometric stereo modeling
NASA Astrophysics Data System (ADS)
Mecca, Roberto; Rodolà, Emanuele; Cremers, Daniel
2015-04-01
Three-dimensional shape recovery based on Photometric Stereo (PS) has recently seen strong improvement due to new mathematical models based on partial differential irradiance equation ratios [1]. This modern approach to PS accounts for more realistic physical effects, among which are light attenuation and radial light propagation from a point light source. Since the approximation of the surface is performed with a single-step method, accurate reconstruction is hindered by sensitivity to noise. In this paper we analyse a well-known parametrization [2] of the three-dimensional surface, extending it to arbitrary auxiliary convex projection functions. Experiments on synthetic data show preliminary results in which more accurate reconstruction can be achieved using a more suitable parametrization, especially in the case of noisy input images.
uvmcmcfit: Parametric models to interferometric data fitter
NASA Astrophysics Data System (ADS)
Bussmann, Shane; Leung, Tsz Kuk (Daisy); Conley, Alexander
2016-06-01
Uvmcmcfit fits parametric models to interferometric data. It is ideally suited to extract the maximum amount of information from marginally resolved observations with interferometers like the Atacama Large Millimeter Array (ALMA), Submillimeter Array (SMA), and Plateau de Bure Interferometer (PdBI). uvmcmcfit uses emcee (ascl:1303.002) to do Markov Chain Monte Carlo (MCMC) and can measure the goodness of fit from visibilities rather than deconvolved images, an advantage when there is strong gravitational lensing and in other situations. uvmcmcfit includes a pure-Python adaptation of Miriad’s (ascl:1106.007) uvmodel task to generate simulated visibilities given observed visibilities and a model image and a simple ray-tracing routine that allows it to account for both strongly lensed systems (where multiple images of the lensed galaxy are detected) and weakly lensed systems (where only a single image of the lensed galaxy is detected).
Mathematically trivial control of sound using a parametric beam focusing source.
Tanaka, Nobuo; Tanaka, Motoki
2011-01-01
By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). In a dipole model, where one monopole acts as the primary sound source and the other as the control source, the effect of control in minimizing the total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null: nothing less than the trivial case. Because of practical constraints, it is difficult to place a control source close enough to a primary source. However, by projecting the sound beam of a parametric array loudspeaker onto the target sound source (the primary source), a virtual sound source may be created on the target, thereby enabling collocation of the sources. To further ensure the feasibility of the trivial case, a PBFS is then introduced to match the size of the two sources. The reflected sound wave of the PBFS, tantamount to the virtual sound source output, aims to suppress the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology. PMID: 21302999
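The dependence of the control effect on source separation follows the classic two-monopole result, W_min/W_p = 1 - sinc²(kd); a small sketch (wavelength chosen arbitrarily) shows why collocation, the "trivial" case, yields near-total cancellation:

```python
import math

def residual_power_ratio(k, d):
    """Minimum total radiated power of a primary monopole plus an
    optimally driven control monopole at distance d, relative to the
    primary alone: W_min/W_p = 1 - sinc(k*d)**2. The trivial case is
    d -> 0, where the ratio vanishes (perfect cancellation)."""
    kd = k * d
    s = 1.0 if kd == 0 else math.sin(kd) / kd
    return 1.0 - s * s

wavelength = 0.34          # ~1 kHz in air; an illustrative choice
k = 2 * math.pi / wavelength
near = residual_power_ratio(k, 0.001)      # nearly collocated
far = residual_power_ratio(k, wavelength)  # one wavelength apart
```

At one wavelength of separation the control source no longer reduces the total power, which is why the paper works to realize a virtual, collocated control source instead.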
Parametric System Model for a Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.
2015-01-01
A Parametric System Model (PSM) was created in order to explore conceptual designs, the impact of component changes and power level on the performance of the Stirling Radioisotope Generator (SRG). Using the General Purpose Heat Source (GPHS approximately 250 Wth) modules as the thermal building block from which a SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass, and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component- and system-level trades.
Parametric System Model for a Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.
2014-01-01
A Parametric System Model (PSM) was created to explore conceptual designs and the impact of component changes and power level on the performance of the Stirling Radioisotope Generator (SRG). Using General Purpose Heat Source (GPHS, approximately 250 W thermal) modules as the thermal building block around which an SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass, and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component- and system-level trades.
Using a Parametric Solid Modeler as an Instructional Tool
ERIC Educational Resources Information Center
Devine, Kevin L.
2008-01-01
This paper presents the results of a quasi-experimental study that brought 3D constraint-based parametric solid modeling technology into the high school mathematics classroom. This study used two intact groups, a control group and an experimental group, to measure the extent to which using a parametric solid modeler during instruction affects…
Parametric Testing of Launch Vehicle FDDR Models
NASA Technical Reports Server (NTRS)
Schumann, Johann; Bajwa, Anupa; Berg, Peter; Thirumalainambi, Rajkumar
2011-01-01
For the safe operation of a complex system like a (manned) launch vehicle, real-time information about the state of the system and potential faults is extremely important. The on-board FDDR (Failure Detection, Diagnostics, and Response) system is a software system that detects and identifies failures, provides real-time diagnostics, and initiates fault recovery and mitigation. The ERIS (Evaluation of Rocket Integrated Subsystems) failure simulation is a unified Matlab/Simulink model of the Ares I Launch Vehicle with modular, hierarchical subsystems and components. With this model, the nominal flight performance characteristics can be studied, and failures can be injected to see their effects on vehicle state and behavior. A comprehensive test and analysis of such a complicated model is virtually impossible. In this paper, we describe how parametric testing (PT) can be used to support testing and analysis of the ERIS failure simulation. PT uses a combination of Monte Carlo techniques with n-factor combinatorial exploration to generate a small yet comprehensive set of parameters for the test runs. For the analysis of the high-dimensional simulation data, we use multivariate clustering to automatically find structure in this high-dimensional data space. Our tools can generate detailed HTML reports that facilitate the analysis.
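The combination of n-factor combinatorial exploration with Monte Carlo sampling can be sketched as follows. The parameter names and levels are invented, and this sketch shows 2-factor (pairwise) coverage only.

```python
import itertools
import random

# Sketch of pairwise (2-factor) combinatorial coverage filled out with Monte
# Carlo sampling, in the spirit of the parametric-testing approach above.
# The parameter names and levels are invented for illustration.

params = {
    "thrust_bias":  [-1.0, 0.0, 1.0],
    "sensor_noise": [0.0, 0.1],
    "valve_fault":  [False, True],
}

def pairwise_monte_carlo(params, seed=0):
    rng = random.Random(seed)
    names = list(params)
    runs = []
    for a, b in itertools.combinations(names, 2):
        for va, vb in itertools.product(params[a], params[b]):
            run = {n: rng.choice(params[n]) for n in names}  # Monte Carlo fill
            run[a], run[b] = va, vb                          # force this pair
            runs.append(run)
    return runs

runs = pairwise_monte_carlo(params)
```

Every value pair of every parameter pair is guaranteed to appear, while the remaining parameters are randomized, which keeps the run count small relative to the full Cartesian product.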
Energy scaling of terahertz-wave parametric sources.
Tang, Guanqi; Cong, Zhenhua; Qin, Zengguang; Zhang, Xingyu; Wang, Weitao; Wu, Dong; Li, Ning; Fu, Qiang; Lu, Qingming; Zhang, Shaojun
2015-02-23
Terahertz-wave parametric oscillators (TPOs) offer room-temperature operation, a wide tuning range, narrow linewidth, and good coherence, but suffer from small pulse energy. In this paper, several factors preventing TPOs from generating high-energy THz pulses, and the corresponding solutions, are analyzed. A scheme to generate high-energy THz pulses using the combination of a TPO and a Stokes-pulse-injected terahertz-wave parametric generator (spi-TPG) is proposed and demonstrated. A TPO is used as a source to generate a seed pulse for the surface-emitted spi-TPG. The time delay between the pump and Stokes pulses is adjusted to guarantee good temporal overlap. The pump pulses have a large pulse energy and a large beam size, and the Stokes beam is enlarged beyond the pump beam size to obtain a large effective interaction volume. The experimental results show that the generated THz pulse energy from the spi-TPG is 1.8 times that obtained from the TPO for the same pump energy density of 0.90 J/cm² and the same pump beam size of 3.0 mm. When the pump beam sizes are 5.0 and 7.0 mm, the enhancement factors are 3.7 and 7.5, respectively. The spi-TPG here is similar to a difference-frequency generator; it can also be used as a Stokes pulse amplifier. PMID:25836452
Mixing parametrizations for ocean climate modelling
NASA Astrophysics Data System (ADS)
Gusev, Anatoly; Moshonkin, Sergey; Diansky, Nikolay; Zalesny, Vladimir
2016-04-01
An algorithm is presented for splitting the total evolutionary equations for turbulence kinetic energy (TKE) and turbulence dissipation frequency (TDF), which are used to parameterize the viscosity and diffusion coefficients in ocean circulation models. The turbulence model equations are split into transport-diffusion and generation-dissipation stages. For the generation-dissipation stage, three schemes are implemented: an explicit-implicit numerical scheme, the analytical solution, and the asymptotic behavior of the analytical solution. Experiments with different mixing parameterizations were performed for modelling decadal climate variability of the Arctic and the Atlantic with the eddy-permitting circulation model INMOM (Institute of Numerical Mathematics Ocean Model), using vertical grid refinement in the zone of fully developed turbulence. In its physical formulation, the proposed model with split equations for the turbulence characteristics is similar to contemporary differential turbulence models, while its algorithm is computationally efficient. Parameterizations using the split turbulence model yield a more adequate temperature and salinity structure at decadal timescales than the simpler Pacanowski-Philander (PP) parameterization. Using the analytical solution or the numerical scheme at the generation-dissipation step leads to a better representation of ocean climate than the faster parameterization based on the asymptotic behavior of the analytical solution, while the computational cost remains almost unchanged relative to the simple PP parameterization. Use of the PP parameterization in the circulation model yields a realistic simulation of density and circulation but violates the T,S-relationships; this error is largely avoided by the proposed parameterizations containing the split turbulence model.
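As a caricature of the generation-dissipation stage, consider a single linear relaxation equation dk/dt = P − k/τ with production P and relaxation time τ held constant over one split step. This stand-in (an assumption, not the actual TKE/TDF system of INMOM) illustrates why both an exact analytical step and a cheap asymptotic step are available.

```python
import math

# Caricature of the generation-dissipation stage: dk/dt = P - k/tau, with
# production P and relaxation time tau held constant over one split step.
# This linear stand-in is illustrative, not the actual TKE/TDF equations.

def gen_diss_analytic(k0, P, tau, dt):
    """Exact solution of dk/dt = P - k/tau over one step of length dt."""
    k_inf = P * tau                          # equilibrium (asymptotic) value
    return k_inf + (k0 - k_inf) * math.exp(-dt / tau)

def gen_diss_asymptotic(k0, P, tau, dt):
    """Asymptotic variant: jump straight to the equilibrium value."""
    return P * tau
```

The asymptotic variant skips the exponential entirely, which mirrors the trade-off reported in the abstract: faster, but a cruder representation of the transient.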
Global Nonlinear Parametric Modeling with Application to F-16 Aerodynamics
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A global nonlinear parametric modeling technique is described and demonstrated. The technique uses multivariate orthogonal modeling functions generated from the data to determine nonlinear model structure, then expands each retained modeling function into an ordinary multivariate polynomial. The final model form is a finite multivariate power series expansion for the dependent variable in terms of the independent variables. Partial derivatives of the identified models can be used to assemble globally valid linear parameter varying models. The technique is demonstrated by identifying global nonlinear parametric models for nondimensional aerodynamic force and moment coefficients from a subsonic wind tunnel database for the F-16 fighter aircraft. Results show less than 10% difference between wind tunnel aerodynamic data and the nonlinear parameterized model for a simulated doublet maneuver at moderate angle of attack. Analysis indicated that the global nonlinear parametric models adequately captured the multivariate nonlinear aerodynamic functional dependence.
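The end product of such a technique, a multivariate power-series model fit by least squares, can be sketched in miniature. The real method generates orthogonal modeling functions from the data and selects the structure automatically; here the candidate terms and the synthetic "aerodynamic" data are fixed by hand.

```python
import numpy as np

# End product of the technique above in miniature: a multivariate power-series
# model fit by least squares. The real method generates orthogonal modeling
# functions from the data and selects structure automatically; here the terms
# and the synthetic "aerodynamic" data are fixed by hand.

rng = np.random.default_rng(1)
alpha = rng.uniform(-0.3, 0.3, 200)          # angle of attack (rad), synthetic
delta = rng.uniform(-0.2, 0.2, 200)          # control deflection (rad), synthetic
cz = -4.0 * alpha - 0.6 * delta + 2.5 * alpha ** 2  # synthetic coefficient

# candidate polynomial terms: 1, alpha, delta, alpha^2
X = np.column_stack([np.ones_like(alpha), alpha, delta, alpha ** 2])
coef = np.linalg.lstsq(X, cz, rcond=None)[0]
```

Partial derivatives of the fitted polynomial with respect to alpha and delta give the locally valid linear models mentioned in the abstract.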
Global Nonlinear Parametric Modeling with Application to F-16 Aerodynamics
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1998-01-01
A global nonlinear parametric modeling technique is described and demonstrated. The technique uses multivariate orthogonal modeling functions generated from the data to determine nonlinear model structure, then expands each retained modeling function into an ordinary multivariate polynomial. The final model form is a finite multivariate power series expansion for the dependent variable in terms of the independent variables. Partial derivatives of the identified models can be used to assemble globally valid linear parameter varying models. The technique is demonstrated by identifying global nonlinear parametric models for nondimensional aerodynamic force and moment coefficients from a subsonic wind tunnel database for the F-16 fighter aircraft. Results show less than 10% difference between wind tunnel aerodynamic data and the nonlinear parameterized model for a simulated doublet maneuver at moderate angle of attack. Analysis indicated that the global nonlinear parametric models adequately captured the multivariate nonlinear aerodynamic functional dependence.
Incident duration modeling using flexible parametric hazard-based models.
Li, Ruimin; Shang, Pan
2014-01-01
Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration, and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all of these models with gamma heterogeneity, and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distribution differed across phases. Given the best hazard-based model for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that reduce incident duration, and thus congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration. PMID:25530753
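A minimal sketch of fitting one of the named distributions, a Weibull, to uncensored duration data by maximum likelihood. The paper's AFT models additionally handle covariates, censoring, and heterogeneity, all omitted here, and the duration values are invented.

```python
import math

# Illustrative Weibull fit to uncensored incident durations by maximum
# likelihood (profile fixed-point iteration for the shape parameter).
# The durations (minutes) are invented, not from the Beijing dataset.

durations = [12, 25, 31, 8, 44, 60, 18, 27, 35, 52]

def weibull_mle(x, iters=200):
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)
    k = 1.0                                   # shape, initial guess
    for _ in range(iters):
        xk = [v ** k for v in x]
        wlog = sum(v ** k * math.log(v) for v in x) / sum(xk)
        k = 1.0 / (wlog - mean_log)           # profile-likelihood update
    scale = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)
    return k, scale

shape, scale = weibull_mle(durations)
```

A shape parameter above 1 would indicate a clearance hazard that rises with elapsed time, which is the kind of phase-specific behavior the study compares across distributions.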
Parametric Modeling of Transverse Phase Space of an RF Photoinjector
Hartman, E.; Sayyar-Rodsari, B.; Schweiger, C.A.; Lee, M.J.; Lui, P.; Paterson, Ewan; Schmerge, J.F.; /SLAC
2008-01-24
High-brightness electron beam sources such as rf photoinjectors, as proposed for SASE FELs, must consistently produce the desired beam quality. We report the results of a study in which a combined neural network (NN) and first-principles (FP) model is used to model the transverse phase space of the beam as a function of quadrupole strength, while beam charge, solenoid field, accelerator gradient, and linac voltage and phase are kept constant. The parametric transport matrix between the exit of the linac section and the spectrometer screen constitutes the FP component of the combined model. The NN block provides the parameters of the transport matrix as functions of quad current. Using real data from the SLAC Gun Test Facility, we highlight the significance of the constrained training of the NN block and show that the phase space of the beam is accurately modeled by the combined NN and FP model, while variations of beam-matrix parameters with the quad current are correctly captured. We plan to extend the combined model in the future to capture the effects of variations in beam charge, solenoid field, and accelerator voltage and phase.
THz-wave parametric source and its imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-08-01
Widely tunable coherent terahertz (THz) wave generation has been demonstrated based on parametric oscillation in a MgO-doped LiNbO3 crystal pumped by a Q-switched Nd:YAG laser. This method offers multiple advantages, such as wide tunability, coherence, and system compactness. We have developed a novel basic technology for THz imaging which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
Parametric models of reflectance spectra for dyed fabrics
NASA Astrophysics Data System (ADS)
Aiken, Daniel C.; Ramsey, Scott; Mayo, Troy; Lambrakos, Samuel G.; Peak, Joseph
2016-05-01
This study examines parametric modeling of NIR reflectivity spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dye considered for prototype analysis is a triarylamine dye. The fabrics considered are camouflage textiles characterized by color variations. The results of this study validate the constructed parametric models, within reasonable error tolerances for practical applications involving NIR spectral characteristics of camouflage textiles, for purposes of simulating NIR spectra corresponding to various dye concentrations in host fabrics, and potentially to mixtures of dyes.
The parametrization of radio source coordinates in VLBI and its impact on the CRF
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-04-01
dramatically in time. Hence, each source would have to be modeled individually. The sheer number of sources, more than 600 in our study, sets practical limitations. We decided to use the multivariate adaptive regression splines (MARS) procedure to parametrize the source coordinates, as it allows a great deal of automation by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and thus the best number of polynomial pieces to fit the data. We compare linear and cubic splines determined by MARS with manually determined linear splines, and investigate their impact on the CRF. Within this work we try to answer the following questions: How can we find optimal criteria for the definition of the defining and unstable sources? What are the best polynomials for the individual categories? How much can we improve the CRF by extending the parametrization of the sources?
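A toy version of the MARS knot search for one coordinate series can be sketched with hinge functions and least squares. The real procedure adds and prunes many knots recursively; the "source coordinate" series here is synthetic.

```python
import numpy as np

# Toy version of one MARS step for a single source-coordinate series: choose
# the hinge knot that best fits the data by least squares. The real procedure
# adds and prunes knots recursively; the coordinate series is synthetic.

t = np.linspace(0.0, 10.0, 101)
coord = np.where(t < 4.0, 1.0 + 0.2 * t, 1.8 - 0.1 * (t - 4.0))  # drift with a break

def best_single_knot(t, y, candidates):
    best_knot, best_rss = None, np.inf
    for knot in candidates:
        X = np.column_stack([np.ones_like(t),
                             np.maximum(0.0, t - knot),   # right hinge
                             np.maximum(0.0, knot - t)])  # left hinge
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        rss = np.sum((X @ beta - y) ** 2)
        if rss < best_rss:
            best_knot, best_rss = knot, rss
    return best_knot

knot = best_single_knot(t, coord, candidates=t[5:-5])
```

This is the piecewise-linear case; the cubic variant mentioned in the abstract replaces the hinges with truncated cubic basis functions.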
Parametric Model for Astrophysical Proton-Proton Interactions and Applications
Karlsson, Niklas
2007-01-01
Observations of gamma-rays have been made from celestial sources such as active galaxies, gamma-ray bursts and supernova remnants as well as the Galactic ridge. The study of gamma rays can provide information about production mechanisms and cosmic-ray acceleration. In the high-energy regime, one of the dominant mechanisms for gamma-ray production is the decay of neutral pions produced in interactions of ultra-relativistic cosmic-ray nuclei and interstellar matter. Presented here is a parametric model for calculations of inclusive cross sections and transverse momentum distributions for secondary particles (gamma rays, e±, νe, ν̄e, νμ, and ν̄μ) produced in proton-proton interactions. This parametric model is based on the proton-proton interaction model proposed by Kamae et al.; it includes the diffraction dissociation process, Feynman-scaling violation and the logarithmically rising inelastic proton-proton cross section. To improve fidelity to experimental data for lower energies, two baryon resonance excitation processes were added: one representing the Δ(1232) and the other multiple resonances with masses around 1600 MeV/c². The model predicts the power-law spectral index for all secondary particles to be about 0.05 lower in absolute value than that of the incident proton, and their inclusive cross sections to be larger than those predicted by previous models based on the Feynman-scaling hypothesis. The applications of the presented model in astrophysics are plentiful. It has been implemented into the Galprop code to calculate the contribution due to pion decays in the Galactic plane. The model has also been used to estimate the cosmic-ray flux in the Large Magellanic Cloud based on HI, CO and gamma-ray observations. The transverse momentum distributions enable calculations when the proton distribution is anisotropic. It is shown that the gamma-ray spectrum and flux due to a
Kang, Jiqiang; Wei, Xiaoming; Li, Bowen; Wang, Xie; Yu, Luoqin; Tan, Sisi; Jinata, Chandra; Wong, Kenneth K. Y.
2016-01-01
We propose a sensitivity-enhancement method for interference-based signal detection and apply it to a swept-source optical coherence tomography (SS-OCT) system through an all-fiber optical parametric amplifier (FOPA) and a parametric balanced detector (BD). The parametric BD was realized by combining the signal band and the phase-conjugated idler band newly generated through the FOPA, superimposing the two bands at a photodetector. The sensitivity enhancement by the FOPA and the parametric BD in SS-OCT was demonstrated experimentally. The results show that SS-OCT with the FOPA and SS-OCT with the parametric BD can provide more than 9 dB and 12 dB sensitivity improvement, respectively, compared with conventional SS-OCT over a spectral bandwidth spanning 76 nm. To further verify and elaborate their sensitivity enhancement, a bio-sample imaging experiment was conducted on loach eyes with the conventional SS-OCT setup, SS-OCT with the FOPA, and the parametric BD at different illumination power levels. All these results proved that using the FOPA and the parametric BD can significantly improve sensitivity in SS-OCT systems. PMID:27446655
Bayesian non-parametrics and the probabilistic approach to modelling
Ghahramani, Zoubin
2013-01-01
Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman’s coalescent, Dirichlet diffusion trees and Wishart processes. PMID:23277609
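One of the building blocks surveyed, the Dirichlet process, can be illustrated with its (truncated) stick-breaking construction. The standard-Gaussian base measure and the concentration value below are arbitrary illustrative choices.

```python
import random

# Truncated stick-breaking draw from a Dirichlet process, one of the Bayesian
# non-parametric building blocks surveyed above. The standard-Gaussian base
# measure and the concentration alpha are arbitrary illustrative choices.

def stick_breaking(alpha, n_sticks, seed=0):
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_sticks):
        v = rng.betavariate(1.0, alpha)       # V_k ~ Beta(1, alpha)
        weights.append(remaining * v)         # pi_k = V_k * prod_j<k (1 - V_j)
        remaining *= 1.0 - v
    atoms = [rng.gauss(0.0, 1.0) for _ in range(n_sticks)]
    return weights, atoms

w, atoms = stick_breaking(alpha=2.0, n_sticks=50)
```

The weights decay stochastically toward zero, which is why a Dirichlet process mixture can act as a clustering model with an unbounded number of components.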
Update on Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl. H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Since the June 2010 Astronomy Conference, an independent review of our cost database discovered some inaccuracies and inconsistencies which can modify our previously reported results. This paper will review changes to the database, our confidence in those changes, and their effects on various parametric cost models.
New approach in bats' sonar signals parametrization and modelling
NASA Astrophysics Data System (ADS)
Herman, Krzysztof; Gudra, Tadeusz
2010-01-01
Parameterization of bats' echolocation signals is conventionally based on determining the spectral power density by means of the classic fast Fourier transform (FFT). This study presents an alternative in this area of research: parametric and non-parametric modelling of short-time signals. These methods are based on modelling white noise with digital filters whose transfer functions are set so that the output signal is as close as possible to the modelled signal. Proper selection of the parameterization method, MA (moving average), AR (autoregressive), or ARMA (autoregressive moving average), with respect to the character of the signal spectrum (line spectrum or noise) minimizes the number of filter coefficients and improves the accuracy of modelling the bat's signal. The work also presents the possibility of using the suggested parameterization methods for automatic species identification.
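The AR alternative to FFT-based spectra can be sketched via the Yule-Walker equations. The "call" below is a synthetic tone at an assumed 250 kHz sampling rate, not a real bat recording.

```python
import numpy as np

# Sketch of AR parametrization via the Yule-Walker equations, the model-based
# alternative to FFT spectra discussed above. The signal is a synthetic tone
# at an assumed 250 kHz sampling rate, not a real bat recording.

fs = 250_000.0
t = np.arange(2048) / fs
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 40_000.0 * t) + 0.05 * rng.standard_normal(t.size)

def yule_walker(x, order):
    x = x - x.mean()
    # biased autocorrelation at lags 0..order
    r = np.correlate(x, x, mode="full")[x.size - 1 : x.size + order] / x.size
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])   # AR coefficients a_1..a_p

a = yule_walker(sig, order=4)
```

For a line spectrum like this tone, a handful of AR coefficients captures what an FFT would need many bins to represent, which is the economy the abstract describes.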
Parametric Modeling and Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Ju, Jianhong
2000-01-01
Fault-tolerant control is considered for a nonlinear aircraft model expressed as a linear parameter-varying system. By proper parameterization of foreseeable faults, the linear parameter-varying system can include fault effects as additional varying parameters. A recently developed technique in fault-effect parameter estimation allows us to assume that estimates of the fault-effect parameters are available on-line. Reconfigurability is calculated for this model with respect to the loss of control effectiveness, to assess the potential of the model to tolerate such losses prior to control design. The control design is carried out by applying a polytopic method to the aircraft model. An error bound on fault-effect parameter estimation is provided within which the Lyapunov stability of the closed-loop system is robust. Our simulation results show that, as long as the fault parameter estimates are sufficiently accurate, the polytopic controller can provide satisfactory fault tolerance.
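The polytopic idea, interpolating vertex controllers with the convex weights that express the current parameter value, can be sketched as follows. The vertex gains and the parameter range are invented for illustration.

```python
import numpy as np

# Sketch of the polytopic idea: interpolate vertex controller gains with the
# convex weights expressing the current parameter value. The two vertex gains
# and the [0, 1] parameter range are invented for illustration.

K_vertices = [np.array([[1.0, 0.5]]),    # gain designed at theta = 0
              np.array([[3.0, 2.0]])]    # gain designed at theta = 1

def polytopic_gain(theta):
    """Convex combination of vertex gains for theta clipped to [0, 1]."""
    theta = min(max(theta, 0.0), 1.0)
    return (1.0 - theta) * K_vertices[0] + theta * K_vertices[1]
```

In the paper's setting, the scheduling parameter would include the estimated fault-effect parameters, so the gain adapts continuously as fault estimates change.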
Source finding, parametrization, and classification for the extragalactic Effelsberg-Bonn H i Survey
NASA Astrophysics Data System (ADS)
Flöer, L.; Winkel, B.; Kerp, J.
2014-09-01
Context. Source extraction for large-scale H i surveys currently involves large amounts of manual labor. For data volumes expected from future H i surveys with upcoming facilities, this approach is no longer feasible. Aims: We describe the implementation of a fully automated source finding, parametrization, and classification pipeline for the Effelsberg-Bonn H i Survey (EBHIS). With future radio astronomical facilities in mind, we want to explore the feasibility of a completely automated approach to source extraction for large-scale H i surveys. Methods: Source finding is implemented using wavelet denoising methods, which previous studies show to be a powerful tool, especially in the presence of data defects. For parametrization, we automate baseline fitting, mask optimization, and other tasks based on well-established algorithms, currently used interactively. For the classification of candidates, we implement an artificial neural network, which is trained on a candidate set comprised of false positives from real data and simulated sources. Using simulated data, we perform a thorough analysis of the algorithms implemented. Results: We compare the results from our simulations to the parametrization accuracy of the H i Parkes All-Sky Survey (HIPASS). Even though HIPASS is more sensitive than EBHIS in its current state, the parametrization accuracy and classification reliability match or surpass the manual approach used for HIPASS data.
Shutts, Glenn; Pallarès, Alfons Callado
2014-06-28
The need to represent uncertainty resulting from model error in ensemble weather prediction systems has spawned a variety of ad hoc stochastic algorithms based on plausible assumptions about sub-grid-scale variability. Currently, few studies have been carried out to prove the veracity of such schemes and it seems likely that some implementations of stochastic parametrization are misrepresentations of the true source of model uncertainty. This paper describes an attempt to quantify the uncertainty in physical parametrization tendencies in the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System with respect to horizontal resolution deficiency. High-resolution truth forecasts are compared with matching target forecasts at much lower resolution after coarse-graining to a common spatial and temporal resolution. In this way, model error is defined and its probability distribution function is examined as a function of tendency magnitude. It is found that the temperature tendency error associated with convection parametrization and explicit water phase changes behaves like a Poisson process for which the variance grows in proportion to the mean, which suggests that the assumptions underpinning the Craig and Cohen statistical model of convection might also apply to parametrized convection. By contrast, radiation temperature tendency errors have a very different relationship to their mean value. These findings suggest that the ECMWF stochastic perturbed parametrization tendency scheme could be improved, since it assumes that the standard deviation of the tendency error is proportional to the mean. Using our finding that the error variance is proportional to the mean, a prototype stochastic parametrization scheme is devised for convective and large-scale condensation temperature tendencies and tested within the ECMWF Ensemble Prediction System. Significant impact on forecast skill is shown, implying its potential for further development.
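The central finding can be recast as a few lines of code: perturb a parametrized tendency with noise whose variance (not standard deviation) is proportional to the tendency magnitude, as for a Poisson-like process. The proportionality constant below is an assumed tuning parameter, not a value from the paper.

```python
import random

# Prototype of the finding above: Gaussian perturbation of a parametrized
# temperature tendency with *variance* (not standard deviation) proportional
# to the tendency magnitude, as for a Poisson-like process. The constant c
# is an assumed tuning parameter.

_rng = random.Random(42)

def perturb_tendency(tendency, c=0.1):
    sigma = (c * abs(tendency)) ** 0.5    # var = c * |mean|
    return tendency + _rng.gauss(0.0, sigma)

perturbed = [perturb_tendency(2.0) for _ in range(10_000)]
```

Under the scheme the paper criticizes, sigma itself would scale with the tendency, so large tendencies would receive disproportionately large perturbations.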
A parametric vocal fold model based on magnetic resonance imaging.
Wu, Liang; Zhang, Zhaoyan
2016-08-01
This paper introduces a parametric three-dimensional body-cover vocal fold model based on magnetic resonance imaging (MRI) of the human larynx. Major geometric features that are observed in the MRI images but missing in current vocal fold models are discussed, and their influence on vocal fold vibration is evaluated using eigenmode analysis. Proper boundary conditions for the model are also discussed. Based on control parameters corresponding to anatomic landmarks that can be easily measured, this model can be adapted toward a subject-specific vocal fold model for voice production research and clinical applications. PMID:27586774
ERIC Educational Resources Information Center
Maydeu-Olivares, Albert
2005-01-01
Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…
Cosmic star formation probed via parametric stack-fitting of known sources to radio imaging
NASA Astrophysics Data System (ADS)
Roseboom, I. G.; Best, P. N.
2014-04-01
The promise of multiwavelength astronomy has been tempered by the large disparity in sensitivity and resolution between different wavelength regimes. Here, we present a statistical approach which attempts to overcome this by fitting parametric models directly to image data. Specifically, we fit a model for the radio luminosity function (LF) of star-forming galaxies to pixel intensity distributions at 1.4 GHz coincident with near-IR selected sources in COSMOS. Taking a mass-limited sample in redshift bins across the range 0 < z < 4, we are able to fit the radio LF with ~0.2 dex precision in the key parameters (e.g. Φ*, L*). Good agreement is seen between our results and those using standard methods at radio and other wavelengths. Integrating our LFs to get the star formation rate density, we find that galaxies with M* > 10^9.5 M⊙ contribute ≳50 per cent of cosmic star formation at 0 < z < 4. The scalability of our approach is empirically estimated, with the precision in LF parameter estimates found to scale with the number of sources in the stack, Ns, as ∝ √Ns. This type of approach will be invaluable in the multiwavelength analysis of upcoming surveys with the Square Kilometre Array pathfinder facilities: LOFAR, ASKAP and MeerKAT.
Modeling of autoresonant control of a parametrically excited screen machine
NASA Astrophysics Data System (ADS)
Abolfazl Zahedi, S.; Babitsky, Vladimir
2016-10-01
Modelling of the nonlinear dynamic response of a screen machine, described by nonlinear coupled differential equations and excited by an autoresonant control system, is presented. The displacement signal of the screen is fed back to the screen excitation directly by means of positive feedback, while negative feedback is used to hold the screen amplitude response within the expected range. The screen is expected to vibrate in a parametric resonance, and the excitation, stabilization, and control response of the system are studied in the stable mode. Autoresonant control is thoroughly investigated and output tracking is reported. The control developed provides self-tuning and self-adaptation mechanisms that allow the screen machine to maintain a parametric resonant mode of oscillation under a wide range of uncertainty in mass and viscosity.
Compact model for parametric instability under arbitrary stress waveform
NASA Astrophysics Data System (ADS)
Alagi, Filippo; Rossetti, Mattia; Stella, Roberto; Viganò, Emanuele; Raynaud, Philippe
2015-11-01
A deterministic compact model of the parametric instability of elementary devices is further developed. The model addresses the class of device instabilities that can be traced back to microscopic reactions obeying reversible first-order kinetics. It can describe the response to any periodic stimulus waveform and is suitable for implementation in commercial electronic circuit simulators (Eldo UDRM). The methodology is applied to model the negative-bias-temperature threshold voltage instability of a p-channel MOSFET. A simple circuit example is shown in which simulation of threshold voltage recovery is crucial for circuit design.
Multivariable Parametric Cost Model for Ground Optical Telescope Assembly
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature were examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e. multi-telescope phased-array systems). Additionally, single variable models based on aperture diameter were derived.
Multivariable Parametric Cost Model for Ground Optical Telescope Assembly
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes; Reese, Gayle; Byberg, Alicia
2005-01-01
A parametric cost model for ground-based telescopes is developed using multivariable statistical analysis of both engineering and performance parameters. While diameter continues to be the dominant cost driver, diffraction-limited wavelength is found to be a secondary driver. Other parameters such as radius of curvature are examined. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived.
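A multivariable parametric cost model of this kind is typically a power law fitted in log space. A hedged sketch on synthetic data (the exponents and scatter below are invented for illustration, not the paper's fitted values):

```python
import numpy as np

# Synthetic sketch: cost ~ a * D^b1 * lambda^b2, fitted in log space.
# Exponents (1.8, -0.5) and the scatter are invented, not the paper's values.
rng = np.random.default_rng(2)
D = rng.uniform(1.0, 10.0, 40)            # aperture diameter [m]
lam = rng.uniform(0.5, 10.0, 40)          # diffraction-limited wavelength [um]
cost = 3.0 * D**1.8 * lam**-0.5 * np.exp(0.05 * rng.standard_normal(40))

# Ordinary least squares on log(cost) recovers the exponents
A = np.column_stack([np.ones_like(D), np.log(D), np.log(lam)])
coef, *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)
# coef[1] is the diameter exponent, coef[2] the wavelength exponent
```

Taking logs turns the multiplicative model into a linear regression, which is why diameter and wavelength "drivers" fall straight out of the fitted exponents.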
DPpackage: Bayesian Non- and Semi-parametric Modelling in R.
Jara, Alejandro; Hanson, Timothy E; Quintana, Fernando A; Müller, Peter; Rosner, Gary L
2011-04-01
Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian non- and semi-parametric models in R, DPpackage. Currently DPpackage includes models for marginal and conditional density estimation, ROC curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison, and for eliciting the precision parameter of the Dirichlet process prior. To maximize computational efficiency, the actual sampling for each model is carried out using compiled FORTRAN. PMID:21796263
On the influence of model parametrization in elastic full waveform tomography
NASA Astrophysics Data System (ADS)
Köhn, D.; De Nil, D.; Kurzmann, A.; Przebindowska, A.; Bohlen, T.
2012-10-01
Elastic Full Waveform Tomography (FWT) aims to reduce the misfit between recorded and modelled data in order to deduce a very detailed model of elastic material parameters in the subsurface. The choice of the elastic model parameters to be inverted affects the convergence and quality of the reconstructed subsurface model. Using the Cross-Triangle-Squares (CTS) model, three elastic parametrizations, Lamé parameters m1 = [λ, μ, ρ], seismic velocities m2 = [Vp, Vs, ρ] and seismic impedances m3 = [Ip, Is, ρ], are studied for far-offset reflection seismic acquisition geometries with explosive point sources and a free-surface condition. In each CTS model the three elastic parameters are assigned to three different geometrical objects that are spatially separated. The results of the CTS model study reveal a strong requirement for a sequential frequency inversion from low to high frequencies to reconstruct the density model. Using only high-frequency data, cross-talk artefacts influence the quantitative reconstruction of the material parameters, while for a sequential frequency inversion only structural artefacts, representing the boundaries of different model parameters, are present. During the inversion, the Lamé parameters, seismic velocities and impedances could be reconstructed well. However, using the Lamé parametrization, ρ-artefacts are present in the λ model, while similar artefacts are suppressed when using seismic velocities or impedances. The density inversion shows the largest ambiguity for all parametrizations; again, the artefacts are more dominant when using the Lamé parameters and suppressed for the seismic velocity and impedance parametrizations. The aforementioned results are confirmed for a geologically more realistic modified Marmousi-II model. Using a conventional streamer acquisition geometry, the P-velocity, S-velocity and density models of the subsurface were reconstructed successfully and are compared with the results of the Lamé parametrization.
A review of parametric modelling techniques for EEG analysis.
Pardey, J; Roberts, S; Tarassenko, L
1996-01-01
This review provides an introduction to the use of parametric modelling techniques for time series analysis, and in particular the application of autoregressive modelling to the analysis of physiological signals such as the human electroencephalogram. The concept of signal stationarity is considered and, in the light of this, both adaptive models, and non-adaptive models employing fixed or adaptive segmentation, are discussed. For non-adaptive autoregressive models, the Yule-Walker equations are derived and the popular Levinson-Durbin and Burg algorithms are introduced. The interpretation of an autoregressive model as a recursive digital filter and its use in spectral estimation are considered, and the important issues of model stability and model complexity are discussed.
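The Yule-Walker/Levinson-Durbin machinery named above is compact enough to sketch: given sample autocorrelations of a signal, the recursion solves the Yule-Walker equations for the AR coefficients in O(p²). A minimal illustration on a synthetic AR(2) series (the coefficients are chosen arbitrarily, not taken from the review):

```python
import numpy as np

# Sketch of AR coefficient estimation: Yule-Walker equations solved with
# the Levinson-Durbin recursion. AR(2) coefficients below are arbitrary.
def levinson_durbin(r, order):
    """AR coefficients and final prediction error from autocorrelations r[0..order]."""
    a = np.zeros(order)
    e = r[0]                                 # prediction error power
    for k in range(1, order + 1):
        lam = (r[k] - np.dot(a[:k - 1], r[k - 1:0:-1])) / e   # reflection coeff.
        a_prev = a[:k - 1].copy()
        a[:k - 1] = a_prev - lam * a_prev[::-1]
        a[k - 1] = lam
        e *= (1.0 - lam ** 2)
    return a, e

# Synthetic AR(2) series: x[t] = 0.75 x[t-1] - 0.5 x[t-2] + white noise
rng = np.random.default_rng(1)
n = 20000
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + eps[t]

# Biased sample autocorrelations for lags 0..2, then solve for the AR model
r = np.array([x[:n - k] @ x[k:] / n for k in range(3)])
a_hat, _ = levinson_durbin(r, 2)             # a_hat close to [0.75, -0.5]
```

The same recursion also yields the reflection coefficients used by lattice (Burg-type) methods mentioned in the review.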
Parametric uncertainty modeling for application to robust control
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1993-01-01
Viewgraphs and a paper on parametric uncertainty modeling for application to robust control are included. Advanced robust control system analysis and design is based on the availability of an uncertainty description which separates the uncertain system elements from the nominal system. Although this modeling structure is relatively straightforward to obtain for multiple unstructured uncertainties modeled throughout the system, it is difficult to formulate for many problems involving real parameter variations. Furthermore, it is difficult to ensure that the uncertainty model is formulated such that the dimension of the resulting model is minimal. A procedure for obtaining an uncertainty model for real uncertain parameter problems in which the uncertain parameters can be represented in a multilinear form is presented. Furthermore, the procedure is formulated such that the resulting uncertainty model is minimal (or near minimal) relative to a given state space realization of the system. The approach is demonstrated for a multivariable third-order example problem having four uncertain parameters.
Two-parametric model of electron beam in computational dosimetry for radiation processing
NASA Astrophysics Data System (ADS)
Lazurik, V. M.; Lazurik, V. T.; Popov, G.; Zimek, Z.
2016-07-01
Computer simulation of the electron beam (EB) irradiation of various materials can be applied to correct and control the performance of radiation processing installations. Electron beam energy measurement methods are described in the international standards, and the measurement results can be extended by computational dosimetry. The authors have developed a computational method for determining EB energy based on two-parametric fitting of a semi-empirical model for the depth dose distribution initiated by a mono-energetic electron beam. Analysis of a number of experiments shows that the described method can effectively account for random displacements arising from the use of an aluminum wedge with a continuous strip of dosimetric film, and can minimize the uncertainty of the electron energy evaluation calculated from the experimental data. The two-parametric fitting method determines the electron beam model parameters: E0, the energy of a mono-energetic and mono-directional electron source, and X0, the thickness of an aluminum layer located in front of the irradiated object. This provides baseline data on the characteristics of the electron beam, which can later be applied in computer modeling of the irradiation process. Model parameters defined in the international standards (such as Ep, the most probable energy, and Rp, the practical range) can be linked with the characteristics of the two-parametric model (E0, X0), which allows the electron irradiation process to be simulated. The data obtained from the semi-empirical model were checked against a set of experimental results. The proposed two-parametric model for electron beam energy evaluation, and the estimation of accuracy of computational dosimetry methods based on the developed model, are discussed.
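The two-parametric fitting step itself is an ordinary least-squares search over (E0, X0). A sketch with an invented depth-dose surrogate (the paper's semi-empirical model differs; this only illustrates the fitting procedure, and the dose shape, range scaling and all numbers are assumptions):

```python
import numpy as np

# Two-parameter least-squares fit over (E0, X0). The depth-dose shape below
# is a crude invented surrogate, not the paper's semi-empirical model.
def dose(z, E0, X0):
    r = 0.5 * E0                          # toy "range" scaling, cm per MeV (assumed)
    u = (z + X0) / r                      # X0 acts as an upstream aluminum layer
    return np.clip(1.0 + u - u**2, 0.0, None)

rng = np.random.default_rng(3)
z = np.linspace(0.0, 4.0, 60)             # depth grid [cm]
data = dose(z, 10.0, 0.3) + 0.005 * rng.standard_normal(z.size)  # noisy "film" data

# Brute-force grid search minimizing the sum of squared residuals
E0s = np.linspace(8.0, 12.0, 81)          # candidate energies [MeV]
X0s = np.linspace(0.0, 0.6, 61)           # candidate layer thicknesses [cm]
sse = np.array([[np.sum((data - dose(z, e, x0))**2) for x0 in X0s] for e in E0s])
i, j = np.unravel_index(np.argmin(sse), sse.shape)
E0_fit, X0_fit = E0s[i], X0s[j]           # should land near the true (10.0, 0.3)
```

A grid search is used here only for transparency; any two-parameter least-squares minimizer would serve.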
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated, and therefore significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
Modification of the method of parametric estimation of atmospheric distortion in MODTRAN model
NASA Astrophysics Data System (ADS)
Belov, A. M.
2015-12-01
The paper presents a modification of the method of parametric estimation of atmospheric distortion in the MODTRAN model, together with experimental research on the method. The experiments showed that the base method takes into account neither the physical meaning of the atmospheric spherical albedo parameter nor the presence of outliers in the source data, which decreases the overall accuracy of the atmospheric correction. The proposed modification improves the accuracy of atmospheric correction in comparison with the base method. The modification consists in adding a nonnegativity constraint on the estimated value of the atmospheric spherical albedo, and adding a preprocessing stage aimed at adjusting the source data.
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulty in the convergence of 3D algorithms can discourage the use of this technique to recover the depth and intensity of the source. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions, which obtains the source depth and its intensity using a pixel-based fit of source intensity. Monte Carlo (MC) simulations and experimental data were used to develop and validate the method for obtaining the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the depth of the Cerenkov luminescence source with a simple and flexible procedure.
NASA Astrophysics Data System (ADS)
Schaefer, J. F.; Boschi, L.; Kissling, E.
2011-09-01
In this study, we aim to close the gap between regional and global traveltime tomography in the context of surface wave tomography of the upper mantle implementing the principle of adaptive parametrization. Observations of seismic surface waves are a very powerful tool to constrain the 3-D structure of the Earth's upper mantle, including its anisotropy, because they sample this volume efficiently due to their sensitivity over a wide depth range along the ray path. On a global scale, surface wave tomography models are often parametrized uniformly, without accounting for inhomogeneities in data coverage and, as a result, in resolution, that are caused by effective under- or overparametrization in many areas. If the local resolving power of seismic data is not taken into account when parametrizing the model, features will be smeared and distorted in tomographic maps, with subsequent misinterpretation. Parametrization density has to change locally, for models to be robustly constrained without losing any accurate information available in the best sampled regions. We have implemented a new algorithm for upper mantle surface wave tomography, based on adaptive-voxel parametrization, with voxel size defined by both the 'hit count' (number of observations sampling the voxel) and 'azimuthal coverage' (how well different azimuths with respect to the voxel are covered by the source-station distribution). High image resolution is achieved in regions with dense data coverage, while lower image resolution is kept in regions where data coverage is poorer. This way, parametrization is everywhere tuned to optimal resolution, minimizing both the computational costs, and the non-uniqueness of the solution. The spacing of our global grid is locally as small as ˜50 km. We apply our method to identify a new global model of vertically and horizontally polarized shear velocity, with resolution particularly enhanced in the European lithosphere and upper mantle. We find our new model to
Parametric Thermal Soak Model for Earth Entry Vehicles
NASA Technical Reports Server (NTRS)
Agrawal, Parul; Samareh, Jamshid; Doan, Quy D.
2013-01-01
The analysis and design of an Earth Entry Vehicle (EEV) is multidisciplinary in nature, requiring the application of many disciplines. An integrated tool called Multi Mission System Analysis for Planetary Entry Descent and Landing (M-SAPE) is being developed as part of the Entry Vehicle Technology project under the In-Space Technology program. Integration of a multidisciplinary problem is a challenging task, and automation of the execution process and data transfer among disciplines can provide significant benefits. Thermal soak analysis and temperature predictions of various interior components of the entry vehicle, including the impact foam and payload container, are part of the solution that M-SAPE will offer to spacecraft designers. The present paper focuses on the thermal soak analysis of an entry vehicle design based on the Mars Sample Return entry vehicle geometry and discusses a technical approach to develop parametric models for thermal soak analysis that will be integrated into M-SAPE. One of the main objectives is to identify the important parameters and to develop correlation coefficients so that, for a given trajectory, one can estimate the peak payload temperature based on relevant trajectory parameters and vehicle geometry. The models are being developed for two primary thermal protection system (TPS) materials: 1) carbon phenolic, which was used for the Galileo and Pioneer Venus probes, and 2) Phenolic Impregnated Carbon Ablator (PICA), the TPS material for the Mars Science Laboratory mission. Several representative trajectories, covering a wide range of heat load and heat flux combinations, were selected from a very large trade space for inclusion in the thermal analysis in order to develop an effective parametric thermal soak model. Non-linear, fully transient, thermal finite element simulations were performed for the selected trajectories to generate the temperature histories at the interior of the vehicle. Figure 1 shows the finite element model
Lumped parametric model of the human ear for sound transmission.
Feng, Bin; Gan, Rong Z
2004-09-01
A lumped parametric model of the human auditory periphery, consisting of six masses suspended with six springs and ten dashpots, was proposed. This model will provide the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data, and then determined through a parameter optimization process. The transfer function of the middle ear, obtained from human temporal bone experiments with laser Doppler interferometers, was used to create the target function during the optimization process. It was found that, among the 14 spring and dashpot parameters, five had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of those parameters is provided, with appropriate applications to sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of a physical ear model. PMID:15300453
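The behaviour of such a lumped mass-spring-dashpot network is easiest to see in a one-element reduction: the displacement-per-force transfer function peaks near the undamped natural frequency √(k/m). A sketch with illustrative parameter values (not anatomical data from the paper):

```python
import numpy as np

# One mass-spring-dashpot element as a stand-in for one branch of the
# six-mass network; m, k, c values are illustrative, not anatomical.
m, k, c = 1e-3, 1e3, 0.05                 # kg, N/m, N*s/m
f = np.logspace(1, 4, 400)                # frequency sweep, 10 Hz .. 10 kHz
w = 2 * np.pi * f
H = 1.0 / (k - m * w**2 + 1j * c * w)     # displacement per unit force (FRF)
f_peak = f[np.argmax(np.abs(H))]          # resonance picked off the FRF
f_n = np.sqrt(k / m) / (2 * np.pi)        # undamped natural frequency
```

Parameter sensitivity of the kind the paper discusses shows up directly here: m and k move the peak, while c mainly controls its height and width.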
Parametric Thermal Models of the Transient Reactor Test Facility (TREAT)
Bradley K. Heath
2014-03-01
This work supports the restart of transient testing in the United States using the Department of Energy's Transient Reactor Test Facility (TREAT) at the Idaho National Laboratory. It also supports the Global Threat Reduction Initiative by reducing the proliferation risk of highly enriched uranium fuel. The work involves the creation of a nuclear fuel assembly model using the fuel performance code known as BISON. The model simulates the thermal behavior of a nuclear fuel assembly during steady-state and transient operational modes. Additional models of the same geometry but differing material properties were created to perform parametric studies. The results show that fuel and cladding thermal conductivity have the greatest effect on fuel temperature in the steady-state operational mode, while fuel density and fuel specific heat have the greatest effect in the transient operational mode. When considering a new fuel type, it is recommended to use materials that decrease the specific heat of the fuel and the thermal conductivity of the fuel's cladding in order to deal with the higher-density fuels that accompany the LEU conversion process. Data on the latest operating conditions of TREAT need to be obtained in order to validate BISON's results. BISON's models for TREAT (material models, boundary convection models) are modest and need additional work to ensure accuracy and confidence in the results.
Modeling Frequency Comb Sources
NASA Astrophysics Data System (ADS)
Li, Feng; Yuan, Jinhui; Kang, Zhe; Li, Qian; Wai, P. K. A.
2016-06-01
Frequency comb sources have revolutionized metrology and spectroscopy and found applications in many fields. Stable, low-cost, high-quality frequency comb sources are important to these applications. Modeling of frequency comb sources helps in understanding their operation mechanism and in optimizing their design. In this paper, we review the theoretical models used and recent progress in the modeling of frequency comb sources.
Linear-optical qubit amplification with spontaneous parametric down-conversion source
NASA Astrophysics Data System (ADS)
Ou-Yang, Yang; Feng, Zhao-Feng; Zhou, Lan; Sheng, Yu-Bo
2016-01-01
A single photon is the basic building block of quantum communication, but it is sensitive to photon loss. In this paper, we discuss a linear-optical amplification protocol for protecting a single photon with a practical spontaneous parametric down-conversion (SPDC) source. Our protocol shows that, under practical experimental conditions, amplification using entanglement as an auxiliary resource is more powerful than amplification using a single photon as an auxiliary, because the vacuum component of the SPDC source does not disturb the amplification and can be eliminated automatically. Moreover, a weak SPDC source becomes an additional advantage for the amplification, as the double-pair emission error is decreased. Our protocol may be useful in future quantum cryptography, especially in device-independent quantum key distribution.
Kaneda, Fumihiro; Garay-Palmett, Karina; U'Ren, Alfred B; Kwiat, Paul G
2016-05-16
We report on the generation of an indistinguishable heralded single-photon state, using highly nondegenerate spontaneous parametric downconversion (SPDC). Spectrally factorable photon pairs can be generated by incorporating a broadband pump pulse and a group-velocity matching (GVM) condition in a periodically-poled potassium titanyl phosphate (PPKTP) crystal. The heralding photon is in the near IR, close to the peak detection efficiency of off-the-shelf Si single-photon detectors; meanwhile, the heralded photon is in the telecom L-band where fiber losses are at a minimum. We observe spectral factorability of the SPDC source and consequently high purity (90%) of the produced heralded single photons by several different techniques. Because this source can also realize a high heralding efficiency (> 90%), it would be suitable for time-multiplexing techniques, enabling a pseudo-deterministic single-photon source, a critical resource for optical quantum information and communication technology. PMID:27409894
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with the Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with the Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions.
A Bayesian non-parametric Potts model with application to pre-surgical FMRI data.
Johnson, Timothy D; Liu, Zhuqing; Bartsch, Andreas J; Nichols, Thomas E
2013-08-01
The Potts model has enjoyed much success as a prior model for image segmentation. Given the individual classes in the model, the data are typically modeled as Gaussian random variates or as random variates from some other parametric distribution. In this article, we present a non-parametric Potts model and apply it to a functional magnetic resonance imaging study for the pre-surgical assessment of peritumoral brain activation. In our model, we assume that the Z-score image from a patient can be segmented into activated, deactivated, and null classes, or states. Conditional on the class, or state, the Z-scores are assumed to come from some generic distribution which we model non-parametrically using a mixture of Dirichlet process priors within the Bayesian framework. The posterior distribution of the model parameters is estimated with a Markov chain Monte Carlo algorithm, and Bayesian decision theory is used to make the final classifications. Our Potts prior model includes two parameters, the standard spatial regularization parameter and a parameter that can be interpreted as the a priori probability that each voxel belongs to the null, or background, state, conditional on the lack of spatial regularization. We assume that both of these parameters are unknown, and jointly estimate them along with other model parameters. We show through simulation studies that our model performs on par, in terms of posterior expected loss, with parametric Potts models when the parametric model is correctly specified, and outperforms parametric models when the parametric model is misspecified. PMID:22627277
Numerical model of solar dynamic radiator for parametric analysis
NASA Technical Reports Server (NTRS)
Rhatigan, Jennifer L.
1989-01-01
Growth power requirements for Space Station Freedom will be met through the addition of 25 kW solar dynamic (SD) power modules. The SD module rejects waste heat from the power conversion cycle to space through a pumped-loop, multi-panel, deployable radiator. The baseline radiator configuration was defined during the Space Station conceptual design phase and is a function of the state point and heat rejection requirements of the power conversion unit. Requirements determined by the overall station design, such as mass, system redundancy, micrometeoroid and space debris impact survivability, launch packaging, costs, and thermal and structural interaction with other station components, have also been design drivers for the radiator configuration. Extensive thermal and power cycle modeling capabilities have been developed which are powerful tools in Station design and analysis, but which prove cumbersome and costly for simple component preliminary design studies. In order to aid in refining the SD radiator to the mature design stage, a simple and flexible numerical model was developed. The model simulates the heat transfer and fluid flow performance of the radiator and calculates area, mass, and impact survivability for many combinations of flow tube and panel configurations, fluid and material properties, and environmental and cycle variations. A brief description and discussion of the numerical model, its capabilities and limitations, and results of the parametric studies performed are presented.
Testing wave-function-collapse models using parametric heating of a trapped nanosphere
NASA Astrophysics Data System (ADS)
Goldwater, Daniel; Paternostro, Mauro; Barker, P. F.
2016-07-01
We propose a mechanism for testing the theory of collapse models such as continuous spontaneous localization (CSL) by examining the parametric heating rate of a trapped nanosphere. The random localizations of the center of mass for a given particle predicted by the CSL model can be understood as a stochastic force embodying a source of heating for the nanosphere. We show that by utilizing a Paul trap to levitate the particle and optical cooling, it is possible to reduce environmental decoherence to such a level that CSL dominates the dynamics and contributes the main source of heating. We show that this approach allows measurements to be made on the time scale of seconds and that the free parameter λ_CSL which characterizes the model ought to be testable to values as low as 10^-12 Hz.
User's manual for heat-pump seasonal-performance model (SPM) with selected parametric examples
Not Available
1982-06-30
The Seasonal Performance Model (SPM) was developed to provide an accurate source of seasonal energy consumption and cost predictions for the evaluation of heat pump design options. The program uses steady state heat pump performance data obtained from manufacturers' or Computer Simulation Model runs. The SPM was originally developed in two forms - a cooling model for central air conditioners and heat pumps and a heating model for heat pumps. The original models have undergone many modifications, which are described, to improve the accuracy of predictions and to increase flexibility for use in parametric evaluations. Insights are provided into the theory and construction of the major options, and into the use of the available options and output variables. Specific investigations provide examples of the possible applications of the model. (LEW)
2013-01-01
Background: The validity of renal function tests as a diagnostic tool depends substantially on the Biological Reference Interval (BRI) of urea. Establishment of the BRI of urea is difficult, partly because exclusion criteria for selection of reference data are quite rigid and partly due to the compartmentalization considerations regarding age and sex of the reference individuals. Moreover, construction of a Biological Reference Curve (BRC) of urea is imperative to highlight the partitioning requirements. Materials and Methods: This a priori study examines the data collected by measuring serum urea of 3202 age- and sex-matched individuals, aged between 1 and 80 years, by a kinetic UV Urease/GLDH method on a Roche Cobas 6000 auto-analyzer. Results: A Mann-Whitney U test of the reference data confirmed the partitioning requirement by both age and sex. Further statistical analysis revealed the incompatibility of the data with a proposed parametric model; hence the data were non-parametrically analysed. The BRI was found to be identical for both sexes until the second decade, and the BRI for males increased progressively from the sixth decade onwards. Four non-parametric models were postulated for construction of the BRC: Gaussian kernel, double kernel, local mean, and local constant, of which the last one generated the best-fitting curves. Conclusion: Clinical decision making should become easier and the diagnostic implications of renal function tests should become more meaningful if this BRI is followed and the BRC is used as a desktop tool in conjunction with similar data for serum creatinine.
Parametric plate-bridge dynamic filter model of violin radiativity.
Bissinger, George
2012-07-01
A hybrid, deterministic-statistical, parametric "dynamic filter" model of the violin's radiativity profile [characterized by an averaged-over-sphere, mean-square radiativity ⟨R²(ω)⟩] is developed based on the premise that acoustic radiation depends on (1) how strongly it vibrates [characterized by the averaged-over-corpus, mean-square mobility ⟨Y²(ω)⟩] and (2) how effectively these vibrations are turned into sound, characterized by the radiation efficiency, which is proportional to ⟨R²(ω)⟩/⟨Y²(ω)⟩. Two plate mode frequencies were used to compute 1st corpus bending mode frequencies using empirical trend lines; these corpus bending modes in turn drive cavity volume flows to excite the two lowest cavity modes A0 and A1. All widely-separated, strongly-radiating corpus and cavity modes in the low frequency deterministic region are then parameterized in a dual-Helmholtz resonator model. Mid-high frequency statistical regions are parameterized with the aid of a distributed-excitation statistical mobility function (no bridge) to help extract bridge filter effects associated with (a) bridge rocking mode frequency changes and (b) bridge-corpus interactions from 14-violin-average, excited-via-bridge ⟨Y²(ω)⟩ and ⟨R²(ω)⟩. Deterministic-statistical regions are rejoined at ~630 Hz in a mobility-radiativity "trough" where all violin quality classes had a common radiativity. Simulations indicate that typical plate tuning has a significantly weaker effect on radiativity profile trends than bridge tuning.
Update to single-variable parametric cost models for space telescopes
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2013-09-01
Parametric cost models are an important tool routinely used to plan missions, compare concepts, and justify technology investments. In 2010, the article, "Single-variable parametric cost models for space telescopes," was published [H. P. Stahl et al., Opt. Eng. 49(7), 073006 (2010)]. That paper presented new single-variable cost models for space telescope optical telescope assembly. These models were created by applying standard statistical methods to data collected from 30 different space telescope missions. The results were compared with previously published models. A postpublication independent review of that paper's database identified several inconsistencies. To correct these inconsistencies, a two-year effort was undertaken to reconcile our database with source documents. This paper updates and revises the findings of our 2010 paper. As a result of the review, some telescopes' data were removed, some were revised, and data for a few new telescopes were added to the database. As a consequence, there have been changes to the 2010 published results. But our two most important findings remain unchanged: aperture diameter is the primary cost driver for large space telescopes, and it costs more per kilogram to build a low-areal-density low-stiffness telescope than a more massive high-stiffness telescope. One significant difference is that we now report telescope cost to vary linearly from 5% to 30% of total mission cost, instead of the previously reported average of 20%. To fully understand the content of this update, the authors recommend that one also read the 2010 paper.
Modeling neuron-glia interactions: from parametric model to neuromorphic hardware.
Ghaderi, Viviane S; Allam, Sushmita L; Ambert, N; Bouteiller, J-M C; Choma, J; Berger, T W
2011-01-01
Recent experimental evidence suggests that glial cells are more than just supporting cells to neurons - they play an active role in signal transmission in the brain. We herein propose to investigate the importance of these mechanisms and model neuron-glia interactions at synapses using three approaches: A parametric model that takes into account the underlying mechanisms of the physiological system, a non-parametric model that extracts its input-output properties, and an ultra-low power, fast processing, neuromorphic hardware model. We use the EONS (Elementary Objects of the Nervous System) platform, a highly elaborate synaptic modeling platform to investigate the influence of astrocytic glutamate transporters on postsynaptic responses in the detailed micro-environment of a tri-partite synapse. The simulation results obtained using EONS are then used to build a non-parametric model that captures the essential features of glutamate dynamics. The structure of the non-parametric model we use is specifically designed for efficient hardware implementation using ultra-low power subthreshold CMOS building blocks. The utilization of the approach described allows us to build large-scale models of neuron/glial interaction and consequently provide useful insights on glial modulation during normal and pathological neural function. PMID:22255113
Parametric Dielectric Model of Comet Churyumov-Gerasimenko
NASA Astrophysics Data System (ADS)
Heggy, E.; Palmer, E. M.; Kofman, W. W.; Clifford, S. M.; Righter, K.; Herique, A.
2012-12-01
In 2014, the European Space Agency's Rosetta mission is scheduled to rendezvous with Comet 67P/Churyumov-Gerasimenko (Comet 67P). Rosetta's CONSERT experiment aims to explore the cometary nucleus' geophysical properties using radar tomography. The expected scientific return and inversion algorithms are mainly dependent on our understanding of the dielectric properties of the comet nucleus and how they vary with the spatial distribution of geophysical parameters. Using observations of comets 9P/Tempel 1 and 81P/Wild 2 in combination with dielectric laboratory measurements of temperature, porosity, and dust-to-ice mass ratio dependencies for cometary analog material, we have constructed two hypothetical three-dimensional parametric dielectric models of Comet 67P's nucleus to assess different dielectric scenarios of the inner structure. Our models suggest that dust-to-ice mass ratios and porosity variations generate the most significant measurable dielectric contrast inside the comet nucleus, making it possible to explore the structural and compositional hypotheses of cometary nuclei. Surface dielectric variations, resulting from temperature changes induced by solar illumination of the comet's faces, have also been modeled and suggest that the real part of the dielectric constant varies from 1.9 to 3.0, hence changing the surface radar reflectivity. For CONSERT, this variation could be significant at low incidence angles, when the signal propagates through a length of dust mantle comparable to the wavelength. The overall modeled dielectric permittivity spatial and temporal variations are therefore consistent with the expected deep penetration of CONSERT's transmitted wave through the nucleus. It is also clear that changes in the physical properties of the nucleus induce sufficient variation in the dielectric properties of cometary material to allow their inversion from radar tomography.
Surface differentiation by parametric modeling of infrared intensity scans
NASA Astrophysics Data System (ADS)
Aytac, Tayfun; Barshan, Billur
2005-06-01
We differentiate surfaces with different properties with simple low-cost IR emitters and detectors in a location-invariant manner. The intensity readings obtained with such sensors are highly dependent on the location and properties of the surface, which complicates the differentiation and localization process. Our approach, which models IR intensity scans parametrically, can distinguish different surfaces independent of their positions. Once the surface type is identified, its position (r,θ) can also be estimated. The method is verified experimentally with wood; Styrofoam packaging material; white painted matte wall; white and black cloth; and white, brown, and violet paper. A correct differentiation rate of 100% is achieved for six surfaces, and the surfaces are localized within absolute range and azimuth errors of 0.2 cm and 1.1 deg, respectively. The differentiation rate decreases to 86% for seven surfaces and to 73% for eight surfaces. The method demonstrated shows that simple IR sensors, when coupled with appropriate signal processing, can be used to recognize different types of surfaces in a location-invariant manner.
Je, Yub; Lee, Haksue; Park, Jongkyu; Moon, Wonkyu
2010-06-01
An ultrasonic radiator is developed to generate a difference frequency sound from two frequencies of ultrasound in air with a parametric array. A design method is proposed for an ultrasonic radiator capable of generating highly directive, high-amplitude ultrasonic sound beams at two different frequencies in air based on a modification of the stepped-plate ultrasonic radiator. The stepped-plate ultrasonic radiator was introduced by Gallego-Juarez et al. [Ultrasonics 16, 267-271 (1978)] in their previous study and can effectively generate highly directive, large-amplitude ultrasonic sounds in air, but only at a single frequency. Because parametric array sources must be able to generate sounds at more than one frequency, a design modification is crucial to the application of a stepped-plate ultrasonic radiator as a parametric array source in air. The aforementioned method was employed to design a parametric radiator for use in air. A prototype of this design was constructed and tested to determine whether it could successfully generate a difference frequency sound with a parametric array. The results confirmed that the proposed single small-area transducer was suitable as a parametric radiator in air.
Free-form geometric modeling by integrating parametric and implicit PDEs.
Du, Haixia; Qin, Hong
2007-01-01
Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
Open Source Molecular Modeling
Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan
2016-01-01
The success of molecular modeling and computational chemistry efforts are, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. PMID:27631126
Parametric model of ventilators simulated in OpenFOAM and Elmer
NASA Astrophysics Data System (ADS)
Čibera, Václav; Matas, Richard; Sedláček, Jan
2016-03-01
The main goal of the presented work was to develop a parametric model of a ventilator for CFD and structural analysis. The whole model was designed and scripted in freely available open-source programs, in particular OpenFOAM and Elmer. The main script, which runs or generates the other scripts and controls the course of the simulation, was written in the bash scripting language in a Linux environment. The scripts needed for mesh generation and for running a simulation were prepared using the m4 macro pre-processor, which allowed convenient setup of the large number of scripts. The mesh was then generated for the fluid and solid parts of the ventilator within OpenFOAM. Although OpenFOAM also offers a few tools for structural analysis, the mesh of the solid parts was transferred into the Elmer mesh format so that the structural analysis could be performed in that software. This paper deals mainly with the part concerning fluid flow through the parametrized geometry with different initial conditions. As an example, two simulations were conducted for the same geometric parameters and mesh but for different angular velocities of ventilator rotation.
APT cost scaling: Preliminary indications from a Parametric Costing Model (PCM)
Krakowski, R.A.
1995-02-03
A Parametric Costing Model (PCM) has been created and evaluated as a first step toward quantitatively understanding important design options for the Accelerator Production of Tritium (APT) concept. This model couples key economic and technical elements of APT in a two-parameter search over beam energy and beam power that minimizes costs within a range of operating constraints. The costing and engineering depth of the Parametric Costing Model is minimal at the present "entry level" and is intended only to demonstrate the potential for a more detailed, cost-based integrating design tool. After describing the present basis of the Parametric Costing Model and giving an example of a single parametric scaling run derived therefrom, the impacts of choices related to resistive versus superconducting accelerator structures and cost of electricity versus plant availability ("load curve") are reported. Areas of further development and application are suggested.
Identification of the 1PL Model with Guessing Parameter: Parametric and Semi-Parametric Results
ERIC Educational Resources Information Center
San Martin, Ernesto; Rolin, Jean-Marie; Castro, Luis M.
2013-01-01
In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this model the 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is…
Open source molecular modeling.
Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan
2016-09-01
The success of molecular modeling and computational chemistry efforts are, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io.
Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael
2016-01-01
into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. In order to do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling, in which a parametric polynomial equation represents a tool. A surrogate model must be informed by data from the tool, with enough points to represent the solution space for the chosen number of variables at an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space that maximize the information gained while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work that would take an inordinate amount of time under the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, how to determine whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work the authors discuss issues with the method and their solutions.
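The DOE-plus-surrogate workflow described above can be sketched in a few lines. The response function, the factorial design, and the quadratic basis below are hypothetical stand-ins for illustration, not the actual POST metrics or the polynomial form used by ACO:

```python
import numpy as np

# Hypothetical 2-variable "tool" response standing in for an expensive
# trajectory analysis (e.g. a performance metric vs. two vehicle parameters).
def tool(x1, x2):
    return 4.0 - 1.5 * x1 + 0.8 * x2 + 0.6 * x1 * x2 - 0.3 * x1**2

# Full-factorial DOE over the unit square: 4 levels per variable.
levels = np.linspace(0.0, 1.0, 4)
X = np.array([(a, b) for a in levels for b in levels])
y = np.array([tool(a, b) for a, b in X])

# Quadratic response-surface basis: 1, x1, x2, x1*x2, x1^2, x2^2.
def basis(x1, x2):
    return [1.0, x1, x2, x1 * x2, x1**2, x2**2]

# Least-squares fit of the surrogate coefficients to the DOE sample.
A = np.array([basis(a, b) for a, b in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The surrogate now predicts the tool response anywhere in the design space.
def surrogate(x1, x2):
    return float(np.dot(basis(x1, x2), coef))
```

Because the toy response here is itself quadratic, the fit is exact; for a real tool like POST the surrogate would carry a fit error that the DOE is sized to keep acceptable.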
Accelerated Hazards Model based on Parametric Families Generalized with Bernstein Polynomials
Chen, Yuhui; Hanson, Timothy; Zhang, Jiajia
2015-01-01
A transformed Bernstein polynomial that is centered at standard parametric families, such as the Weibull or log-logistic, is proposed for use in the accelerated hazards model. This class provides a convenient way of creating a Bayesian non-parametric prior for smooth densities, blending the merits of parametric and non-parametric methods, that is amenable to standard estimation approaches; for example, optimization methods in SAS or R can yield the posterior mode and asymptotic covariance matrix. This novel nonparametric prior is employed in the accelerated hazards model, which is further generalized to time-dependent covariates. The proposed approach fares considerably better than previous approaches in simulations; data on the effectiveness of biodegradable carmustine polymers against recurrent malignant brain gliomas are investigated. PMID:24261450
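The centering idea can be illustrated numerically. The sketch below uses the common form of a transformed Bernstein polynomial density, f(x) = f0(x) Σ_j w_j β(F0(x); j, J-j+1), with a Weibull base; the shape, scale, and number of basis functions are arbitrary illustrative choices, not values from the paper:

```python
import math

J = 8                 # number of Bernstein basis functions (illustrative)
K, LAM = 1.5, 2.0     # Weibull shape and scale for the centering family

def f0(x):            # Weibull pdf (the centering density)
    return (K / LAM) * (x / LAM) ** (K - 1) * math.exp(-((x / LAM) ** K))

def F0(x):            # Weibull cdf
    return 1.0 - math.exp(-((x / LAM) ** K))

def beta_pdf(u, a, b):
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(u) + (b - 1) * math.log(1 - u) - log_b)

def density(x, w):
    """Transformed Bernstein polynomial density centered at the Weibull."""
    u = min(max(F0(x), 1e-12), 1.0 - 1e-12)
    return f0(x) * sum(w[j] * beta_pdf(u, j + 1, J - j) for j in range(J))

w = [1.0 / J] * J     # equal weights reproduce the Weibull base exactly

# Any non-negative weights summing to 1 still give a proper density;
# check by trapezoid-rule integration on [0, 20].
w2 = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
xs = [i * 0.005 for i in range(4001)]
vals = [density(x, w2) for x in xs]
total = 0.005 * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

With equal weights the Beta mixture telescopes (by the binomial theorem) to the constant 1, so the prior is centered exactly at the parametric family; a prior on the weights then lets the density flex non-parametrically around it.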
NASA Astrophysics Data System (ADS)
Daneshkhah, Alireza; Remesan, Renji; Chatrabgoun, Omid; Holman, Ian P.
2016-09-01
This paper highlights the usefulness of the minimum information and parametric pair-copula constructions (PCC) for modeling the joint distribution of flood event properties. Both of these models outperform other standard multivariate copulas in modeling multivariate flood data that exhibit complex patterns of dependence, particularly in the tails. In particular, the minimum information pair-copula model shows greater flexibility, produces a better approximation of the joint probability density, and yields corresponding dependence measures suitable for effective hazard assessment. The study demonstrates that any multivariate density can be approximated to any desired degree of precision using the minimum information pair-copula model, which can be practically used for probabilistic flood hazard assessment.
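A minimal stand-alone illustration of a parametric PCC, assuming Gaussian pair-copulas and the usual simplifying assumption; the paper's minimum information pair-copulas are more general than this sketch:

```python
import math
from statistics import NormalDist

ND = NormalDist()

def gaussian_pair_copula(u, v, rho):
    """Bivariate Gaussian copula density c(u, v; rho)."""
    x, y = ND.inv_cdf(u), ND.inv_cdf(v)
    r2 = rho * rho
    return (1.0 / math.sqrt(1.0 - r2)) * math.exp(
        (2.0 * rho * x * y - r2 * (x * x + y * y)) / (2.0 * (1.0 - r2)))

def h(u, v, rho):
    """Conditional cdf h(u|v) for the Gaussian pair-copula."""
    x, y = ND.inv_cdf(u), ND.inv_cdf(v)
    return ND.cdf((x - rho * y) / math.sqrt(1.0 - rho * rho))

def dvine3_density(u1, u2, u3, r12, r23, r13_2):
    """Three-variable D-vine PCC density under the simplifying assumption:
    c(u1,u2,u3) = c12(u1,u2) * c23(u2,u3) * c13|2(h(u1|u2), h(u3|u2))."""
    return (gaussian_pair_copula(u1, u2, r12)
            * gaussian_pair_copula(u2, u3, r23)
            * gaussian_pair_copula(h(u1, u2, r12), h(u3, u2, r23), r13_2))
```

For flood properties (e.g. peak, volume, duration), each pair-copula family and parameter can be chosen separately, which is what lets a PCC capture tail dependence that a single multivariate copula cannot.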
A convolution model for computing the far-field directivity of a parametric loudspeaker array.
Shi, Chuang; Kajikawa, Yoshinobu
2015-02-01
This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity with Westervelt's directivity is suggested, substituting for the past practice of using the product directivity only. The computed directivity of a PLA using the proposed convolution model achieves significantly improved agreement with measured directivity at a negligible computational cost.
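The suggested computation can be sketched as follows. The array geometry, the absorption coefficient, and the specific Westervelt directivity form [1 + (k_d/α)² tan⁴(θ/2)]^(-1/2) are assumptions for illustration, not values from the paper:

```python
import cmath, math

C = 343.0             # speed of sound in air (m/s)
F1, F2 = 40e3, 41e3   # hypothetical primary frequencies (Hz)
FD = F2 - F1          # difference (audio) frequency
N, D = 8, 0.005       # hypothetical line array: 8 elements, 5 mm pitch
ALPHA = 1.2           # hypothetical effective absorption coefficient (1/m)

def line_array_directivity(f, theta):
    """|Directivity| of an unsteered uniform line array at frequency f."""
    k = 2.0 * math.pi * f / C
    s = sum(cmath.exp(1j * k * n * D * math.sin(theta)) for n in range(N))
    return abs(s) / N

def westervelt_directivity(theta):
    """Assumed Westervelt parametric-array directivity."""
    kd = 2.0 * math.pi * FD / C
    t2 = math.tan(theta / 2.0) ** 2
    return 1.0 / math.sqrt(1.0 + (kd / ALPHA) ** 2 * t2 ** 2)

thetas = [math.radians(a) for a in range(-90, 91)]   # 1-degree grid
product = [line_array_directivity(F1, t) * line_array_directivity(F2, t)
           for t in thetas]
westervelt = [westervelt_directivity(t) for t in thetas]

# Proposed model: convolve the product directivity with Westervelt's
# directivity over angle (zero-padded, same-length discrete convolution).
n = len(thetas)
conv = [sum(product[j] * westervelt[i - j] for j in range(n)
            if 0 <= i - j < n) for i in range(n)]
peak = max(conv)
difference_directivity = [v / peak for v in conv]
```

The convolution widens the difference-frequency beam relative to the bare product directivity, which is the correction the paper reports as matching measurements better.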
Parametric uncertainties in global model simulations of black carbon column mass concentration
NASA Astrophysics Data System (ADS)
Pearce, Hana; Lee, Lindsay; Reddington, Carly; Carslaw, Ken; Mann, Graham
2016-04-01
Previous studies have deduced that the annual mean direct radiative forcing from black carbon (BC) aerosol may regionally be up to 5 W m^-2 larger than expected due to underestimation of global atmospheric BC absorption in models. We have identified the magnitude and important sources of parametric uncertainty in simulations of BC column mass concentration from a global aerosol microphysics model (GLOMAP-Mode). A variance-based uncertainty analysis of 28 parameters has been performed, based on statistical emulators trained on model output from GLOMAP-Mode. This is the largest number of uncertain model parameters to be considered in a BC uncertainty analysis to date and covers primary aerosol emissions, microphysical processes, and structural parameters related to the aerosol size distribution. We will present several recommendations for further research to improve the fidelity of simulated BC. In brief, we find that the standard deviation around the simulated mean annual BC column mass concentration varies globally between 2.5 × 10^-9 g cm^-2 in remote marine regions and 1.25 × 10^-6 g cm^-2 near emission sources due to parameter uncertainty. Between 60 and 90% of the variance over source regions is due to uncertainty associated with primary BC emission fluxes, including biomass burning, fossil fuel, and biofuel emissions. While the contributions to BC column uncertainty from microphysical processes, for example those related to dry and wet deposition, are increased over remote regions, we find that emissions still make an important contribution in these areas. It is likely, however, that the importance of structural model error, i.e. differences between models, is greater than parametric uncertainty. We have extended our analysis to emulate vertical BC profiles at several locations in the mid-Pacific Ocean and identify the parameters contributing to uncertainty in the vertical distribution of black carbon at these locations. We will present preliminary comparisons of
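A toy version of the variance-based analysis used here is the first-order Sobol index, which attributes output variance to individual inputs. The two-input model below is a hypothetical stand-in for the trained emulator, not GLOMAP itself:

```python
import random

random.seed(42)

# Hypothetical emulated response: a BC-column-like metric as a function of
# an emission scaling and a deposition rate, both uniform on (0, 1).
def model(emission, deposition):
    return emission + deposition ** 2

N = 400  # Monte Carlo sample size (outer and inner loops)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# First-order Sobol index of `emission` by the double-loop estimator:
# S1 = Var( E[Y | X1] ) / Var(Y).
cond_means = []
for _ in range(N):
    x1 = random.random()
    cond_means.append(sum(model(x1, random.random()) for _ in range(N)) / N)

total = [model(random.random(), random.random()) for _ in range(N * N)]
s1 = var(cond_means) / var(total)
```

For this toy model the exact index is (1/12)/(1/12 + 4/45) ≈ 0.48; in practice emulators make such estimators affordable for dozens of parameters precisely because each evaluation is cheap, which is the role they play in the study above.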
Modeling of finite-amplitude sound beams: second order fields generated by a parametric loudspeaker.
Yang, Jun; Sha, Kan; Gan, Woon-Seng; Tian, Jing
2005-04-01
The nonlinear interaction of sound waves in air has been applied to sound reproduction for audio applications. A directional audible sound can be generated by amplitude-modulating the ultrasound carrier with an audio signal, then transmitting it from a parametric loudspeaker. This brings the need of a computationally efficient model to describe the propagation of finite-amplitude sound beams for the system design and optimization. A quasilinear analytical solution capable of fast numerical evaluation is presented for the second-order fields of the sum-, difference-frequency and second harmonic components. It is based on a virtual-complex-source approach, wherein the source field is treated as an aggregation of a set of complex virtual sources located in complex distance, then the corresponding fundamental sound field is reduced to the computation of sums of simple functions by exploiting the integrability of Gaussian functions. By this result, the five-dimensional integral expressions for the second-order sound fields are simplified to one-dimensional integrals. Furthermore, a substantial analytical reduction to sums of single integrals also is derived for an arbitrary source distribution when the basis functions are expressible as a sum of products of trigonometric functions. The validity of the proposed method is confirmed by a comparison of numerical results with experimental data previously published for the rectangular ultrasonic transducer.
Parametric Modeling in the CAE Process: Creating a Family of Models
NASA Technical Reports Server (NTRS)
Brown, Christopher J.
2011-01-01
This presentation is meant as an example: it gives ideas of approaches to use, highlights the significant benefit of parametric, geometry-based modeling and the importance of planning before you build, and showcases some NX capabilities (mesh controls, associativity, divide face, offset surface). Reminder: this only had to be done once and can be used for any cabinet in that "family"; it saves a lot of time if pre-planned and allows re-use in the future.
Efficient parametric analysis of the chemical master equation through model order reduction
2012-01-01
Background: Stochastic biochemical reaction networks are commonly modelled by the chemical master equation and can be simulated as first-order linear differential equations through a finite state projection. Due to the very high state-space dimension of these equations, numerical simulations are computationally expensive. This is a particular problem for analysis tasks requiring repeated simulations for different parameter values; such tasks are computationally expensive to the point of infeasibility with the chemical master equation. Results: In this article, we apply parametric model order reduction techniques in order to construct accurate low-dimensional parametric models of the chemical master equation. These surrogate models can be used in various parametric analysis tasks such as identifiability analysis, parameter estimation, or sensitivity analysis. As biological examples, we consider two models of gene regulation networks: a bistable switch and a network displaying stochastic oscillations. Conclusions: The results show that parametric model reduction yields efficient models of stochastic biochemical reaction networks and that these models can be useful for systems biology applications involving parametric analysis problems such as parameter exploration, optimization, estimation, or sensitivity analysis. PMID:22748204
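The finite-state-projection-plus-projection idea can be sketched for a one-species birth-death process. The rates, truncation size, and POD basis size are illustrative choices, and the snapshot strategy here reuses a single trajectory rather than the parametric sampling used in the paper:

```python
import numpy as np

M = 60                      # FSP truncation: states 0..M-1 molecules
K_PROD, G_DEG = 8.0, 0.5    # hypothetical birth/degradation rates

def fsp_matrix(k, g):
    """Truncated CME generator A for a birth-death process: dp/dt = A p."""
    A = np.zeros((M, M))
    for n in range(M):
        if n + 1 < M:
            A[n + 1, n] += k        # birth  n -> n+1
            A[n, n] -= k
        if n > 0:
            A[n - 1, n] += g * n    # decay  n -> n-1
            A[n, n] -= g * n
    return A

def solve(A, p0, t_end=4.0, steps=4000, keep_every=None):
    p, dt, snaps = p0.copy(), t_end / steps, [p0.copy()]
    for i in range(steps):          # explicit Euler on dp/dt = A p
        p = p + dt * (A @ p)
        if keep_every and (i + 1) % keep_every == 0:
            snaps.append(p.copy())
    return (p, np.column_stack(snaps)) if keep_every else p

p0 = np.zeros(M); p0[0] = 1.0
A = fsp_matrix(K_PROD, G_DEG)

# Offline: run the full model once, build a POD basis from snapshots.
p_full, S = solve(A, p0, keep_every=100)
V = np.linalg.svd(S, full_matrices=False)[0][:, :10]

# Online: Galerkin-reduced model dz/dt = (V^T A V) z, with p ≈ V z.
# (A parametric study would rebuild only the small r x r matrix per k.)
z = solve(V.T @ A @ V, V.T @ p0)
p_red = V @ z
```

The reduced system is 10-dimensional instead of 60, and in a parametric sweep only the small projected matrix changes per parameter value, which is where the speedup for repeated tasks like estimation or sensitivity analysis comes from.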
Adaptivity Assessment of Regional Semi-Parametric VTEC Modeling to Different Data Distributions
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Onur Karslıoğlu, Mahmut
2014-05-01
Semi-parametric modelling of Vertical Total Electron Content (VTEC) combines parametric and non-parametric models into a single regression model for estimating the parameters and functions from Global Positioning System (GPS) observations. The parametric part is related to the Differential Code Biases (DCBs), which are fixed unknown parameters of the geometry-free linear combination (or the so-called ionospheric observable). On the other hand, the non-parametric component is referred to the spatio-temporal distribution of VTEC which is estimated by applying the method of Multivariate Adaptive Regression B-Splines (BMARS). The BMARS algorithm builds an adaptive model by using tensor products of univariate B-splines that are derived from the data. The algorithm searches for best-fitting B-spline basis functions in a scale-by-scale strategy, where it starts adding large-scale B-splines to the model and adaptively decreases the scale for including smaller-scale features through a modified Gram-Schmidt ortho-normalization process. Then, the algorithm is extended to include the receiver DCBs, where the estimates of the receiver DCBs and the spatio-temporal VTEC distribution can be obtained together in an adaptive semi-parametric model. In this work, the adaptivity of regional semi-parametric modelling of VTEC based on BMARS is assessed in different ground-station and data distribution scenarios. To evaluate the level of adaptivity, the resulting DCBs and VTEC maps from different scenarios are compared not only with each other but also with CODE-distributed GIMs and DCB estimates.
Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M
2015-10-30
Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. PMID:26078035
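As a minimal illustration of the parametric baseline being compared against, a ZIP model can be fit by maximum likelihood in a few lines. The mixture weight, Poisson rate, and sample size below are synthetic choices for the sketch:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(0)
n, pi_true, lam_true = 5000, 0.3, 2.0
at_risk = rng.random(n) > pi_true                 # non-risk group gives structural zeros
y = np.where(at_risk, rng.poisson(lam_true, n), 0)

def neg_loglik(theta):
    # Unconstrained parametrization: logit(pi) and log(lambda).
    pi, lam = expit(theta[0]), np.exp(theta[1])
    logp0 = np.log(pi + (1 - pi) * np.exp(-lam))  # zeros: structural + sampling
    logp_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, logp0, logp_pos))

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
```

On data that really are ZIP, the two parameters are recovered closely; the paper's point is that when the at-risk counts are overdispersed in ways the NB does not capture, this kind of fully parametric fit yields invalid inference.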
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.; Yool, A.
2014-09-01
Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to compensate for missing biological complexity. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely-used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of target model output. In general, chlorophyll records at a representative array of oceanic sites
Choosing a 'best' global aerosol model: Can observations constrain parametric uncertainty?
NASA Astrophysics Data System (ADS)
Browse, Jo; Reddington, Carly; Pringle, Kirsty; Regayre, Leighton; Lee, Lindsay; Schmidt, Anja; Field, Paul; Carslaw, Kenneth
2015-04-01
Anthropogenic aerosol has been shown to contribute to climate change via direct radiative forcing and cloud-aerosol interactions. While the role of aerosol as a climate agent is likely to diminish as CO2 emissions increase, recent studies suggest that uncertainty in modelled aerosol is likely to dominate uncertainty in future forcing projections. Uncertainty in modelled aerosol derives from uncertainty in the representation of emissions and aerosol processes (parametric uncertainty) as well as structural error. Here we utilise Latin hypercube sampling methods to produce an ensemble (composed of 280 runs) of a global model of aerosol processes (GLOMAP) spanning 31 parametric ranges. Using an unprecedented number of observations made available by the GASSP project, we have evaluated our ensemble model against a multi-variable (CCN, BC mass, PM2.5) dataset to determine whether an 'ideal' aerosol model exists. Ignoring structural errors, optimization of a global model against multiple datasets to within a factor of 2 is possible, with multiple model runs identified. However, even regionally, the parametric range of our 'best' model runs is very wide, with the same model skill arising from multiple parameter settings. Our results suggest that 'traditional' in-situ measurements are insufficient to constrain parametric uncertainty. Thus, to constrain aerosol in climate models, future evaluations must include process-based observations.
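The ensemble design described above (280 runs over 31 parameter ranges via Latin hypercube sampling) and the "within a factor of 2" skill measure can be sketched directly; the parameter bounds below are placeholders, not GLOMAP's actual ranges:

```python
import numpy as np
from scipy.stats import qmc

# 31 uncertain parameters, 280 ensemble members, as in the study design.
sampler = qmc.LatinHypercube(d=31, seed=1)
unit = sampler.random(n=280)                 # stratified samples in [0, 1]^31
lo, hi = np.full(31, 0.5), np.full(31, 2.0)  # illustrative bounds only
params = qmc.scale(unit, lo, hi)             # one parameter vector per run

def factor_of_two_skill(model, obs):
    """Fraction of model/observation ratios lying in [0.5, 2]."""
    ratio = model / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))
```

Each row of `params` defines one ensemble member; running the model for every row and scoring each run with `factor_of_two_skill` against each observational variable reproduces the kind of multi-variable evaluation the abstract describes.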
Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data
Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao
2012-01-01
Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice there is often prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic bias of the estimates can be reduced significantly while keeping the asymptotic variance the same as that of the unguided estimator. We observe the performance of our method via a simulation study and demonstrate it by application to a real data set on mergers and acquisitions. PMID:23645976
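The guide-then-smooth procedure can be sketched for a single covariate. A quadratic guide and a Nadaraya-Watson kernel smoother stand in for the paper's generalized additive machinery, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-2, 2, 300))
f_true = x**2 + 0.3 * np.sin(4 * x)      # mostly quadratic, plus a small wiggle
y = f_true + rng.normal(0, 0.2, x.size)

# Step 1: parametric guide (quadratic family, fit by least squares).
X = np.vander(x, 3)                      # columns: x^2, x, 1
beta = np.linalg.lstsq(X, y, rcond=None)[0]
guide = X @ beta

# Step 2: nonparametric fit of the residual (Nadaraya-Watson smoother).
def nw_smooth(x, r, h=0.15):
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

resid_fit = nw_smooth(x, y - guide)

# Step 3: add the parametric trend back.
f_hat = guide + resid_fit
```

When the guide captures most of the signal, the residual being smoothed is small and slowly varying, which is the intuition behind the bias reduction claimed in the abstract.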
SAMPLE AOR CALCULATION USING ANSYS FULL PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document presents the ANSYS parametric 360-degree model for single-shell tank SX and provides a sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric full model of the single-shell tank (SST) SX to deal with asymmetric loading conditions and to provide a sample analysis of the SST-SX tank based on analysis-of-record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS SLICE PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document presents the ANSYS slice parametric model for single-shell tank SX and provides a sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model of the single-shell tank (SST) SX and provide a sample analysis of the SST-SX tank based on analysis-of-record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
SAMPLE AOR CALCULATION USING ANSYS AXISYMMETRIC PARAMETRIC MODEL FOR TANK SST-SX
JULYK, L.J.; MACKEY, T.C.
2003-06-19
This document presents the ANSYS axisymmetric parametric model for single-shell tank SX and provides a sample calculation for analysis-of-record mechanical load conditions. The purpose of this calculation is to develop a parametric model of the single-shell tank (SST) SX and provide a sample analysis of the SST-SX tank based on analysis-of-record (AOR) loads. The SST-SX model is based on buyer-supplied as-built drawings and information for the AOR for SSTs, encompassing the existing tank load conditions, and evaluates stresses and deformations throughout the tank and surrounding soil mass.
NASA Astrophysics Data System (ADS)
Vu, H. X.; Bezzerides, B.; Dubois, D. F.
1998-11-01
A fully kinetic, reduced-description particle-in-cell (RPIC) model is presented in which deviations from quasineutrality, electron and ion kinetic effects, and nonlinear interactions between low-frequency and high-frequency parametric instabilities are modeled correctly. The model is based on a reduced description where the electromagnetic field is represented by three separate temporal WKB envelopes in order to model low-frequency and high-frequency parametric instabilities. Because temporal WKB approximations are invoked, the simulation can be performed on the electron time scale instead of the time scale of the light waves. The electrons and ions are represented by discrete finite-size particles, permitting electron and ion kinetic effects to be modeled properly. The Poisson equation is utilized to ensure that space-charge effects are included. Although RPIC is fully three-dimensional, it has been implemented in only two dimensions on a CRAY-T3D with 512 processors and on the Accelerated Strategic Computing Initiative (ASCI) parallel computer at Los Alamos National Laboratory, and the resulting simulation code has been named ASPEN. Given the computers currently available to the authors, one- and two-dimensional simulations are feasible and have been performed. Three-dimensional simulations are much more expensive and are not feasible at this time; however, with rapidly advancing computer technologies, they may become feasible in the near future. We believe this code is the first PIC code capable of simulating the interaction between low-frequency and high-frequency parametric instabilities in multiple dimensions. Test simulations of stimulated Raman scattering (SRS), stimulated Brillouin scattering (SBS), and the Langmuir decay instability (LDI) are presented.
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration-class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and key mission environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
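A sketch of the kind of parametric mass relation described, fitting a power law in a vehicle dimension and the total heat load by least squares in log space. The exponents, coefficient, and data below are synthetic, not the study's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
diam = rng.uniform(5, 15, 40)     # aeroshell diameter, m (illustrative)
heat = rng.uniform(1, 10, 40)     # total heat load, arbitrary units
# Synthetic "truth": m = a * D^b * Q^c with multiplicative noise.
mass = 12.0 * diam**1.8 * heat**0.3 * np.exp(rng.normal(0, 0.05, 40))

# Fit log(mass) = log(a) + b*log(D) + c*log(Q) by linear least squares.
X = np.column_stack([np.ones(40), np.log(diam), np.log(heat)])
coef, *_ = np.linalg.lstsq(X, np.log(mass), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
```

Because the fitted relation is a cheap closed-form function of the design variables, it can sit inside a trajectory optimizer, which is what enables the simultaneous optimization mentioned in the abstract.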
NASA Astrophysics Data System (ADS)
Venkatesan, K.; Ramanujam, R.; Kuppan, P.
2016-04-01
This paper presents the parametric effects, microstructure, micro-hardness, and optimization of laser scanning parameters (LSP) in heating experiments during laser-assisted machining of Inconel 718 alloy. The laser source used for the experiments is a continuous-wave Nd:YAG laser with a maximum power of 2 kW. The experimental parameters in the present study are cutting speed in the range of 50-100 m/min, feed rate of 0.05-0.1 mm/rev, laser power of 1.25-1.75 kW, and approach angle of 60-90° of the laser beam axis to the tool. The plan of experiments is based on a central composite rotatable design L31 (43) orthogonal array. The surface temperature is measured on-line using an infrared pyrometer. Parametric significance on surface temperature is analysed using response surface methodology (RSM), analysis of variance (ANOVA), and 3D surface graphs. The structural change of the material surface is observed using an optical microscope, and the heat-affected depth is quantified by Vickers hardness testing. The results indicate that laser power and approach angle are the most significant parameters affecting the surface temperature. The optimum ranges of laser power and approach angle were identified as 1.25-1.5 kW and 60-65° using an overlaid contour plot. The developed second-order regression model is found to be in good agreement with experimental values, with R2 values of 0.96 and 0.94 for surface temperature and heat-affected depth, respectively.
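A minimal sketch of the second-order response-surface fit underlying an RSM analysis, using two of the four factors (laser power and approach angle) and a synthetic response in place of measured temperatures:

```python
import numpy as np

rng = np.random.default_rng(0)
power = rng.uniform(1.25, 1.75, 31)   # kW, 31 runs as in the design
angle = rng.uniform(60, 90, 31)       # degrees
# Synthetic surface temperature with quadratic and interaction terms.
temp = (400 + 300 * power - 2 * angle + 50 * power**2
        + 0.01 * angle**2 - 1.5 * power * angle
        + rng.normal(0, 5, 31))

# Second-order model: T = b0 + b1*P + b2*A + b11*P^2 + b22*A^2 + b12*P*A
D = np.column_stack([np.ones(31), power, angle,
                     power**2, angle**2, power * angle])
b, *_ = np.linalg.lstsq(D, temp, rcond=None)
pred = D @ b
r2 = 1 - np.sum((temp - pred) ** 2) / np.sum((temp - temp.mean()) ** 2)
```

The fitted coefficients play the role of the regression model whose R2 values the abstract reports; ANOVA on these terms is what identifies which factors are significant.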
Discrete K-valued Logic for Multi-parametrical Modeling of a Robotic Agent
NASA Astrophysics Data System (ADS)
Bykovsky, A. Yu.
K-valued Allen-Givone algebra is potentially a good tool for multi-parametric modeling of robotic and multi-agent systems, because a multiple-valued truth table can be directly applied for the accumulation of expert knowledge and the reconstruction of switching functions. The computational cost for their minimization will limit the real information capacity of such a model.
ERIC Educational Resources Information Center
Dyehouse, Melissa A.
2009-01-01
This study compared the model-data fit of a parametric item response theory (PIRT) model to a nonparametric item response theory (NIRT) model to determine the best-fitting model for use with ordinal-level alternate assessment ratings. The PIRT Generalized Graded Unfolding Model (GGUM) was compared to the NIRT Mokken model. Chi-square statistics…
Bayesian non-parametric inference for stochastic epidemic models using Gaussian Processes
Xu, Xiaoguang; Kypraios, Theodore; O'Neill, Philip D.
2016-01-01
This paper considers novel Bayesian non-parametric methods for stochastic epidemic models. Many standard modeling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. To relax these assumptions, we develop a Bayesian non-parametric approach using Gaussian Processes, specifically to estimate the infection process. The methods are illustrated with both simulated and real data sets, the former illustrating that the methods can recover the true infection process quite well in practice, and the latter illustrating that the methods can be successfully applied in different settings. PMID:26993062
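A minimal sketch of the core idea, placing a Gaussian Process over an unknown rate function and computing its posterior from noisy observations using the standard GP conditioning formulas. The kernel, rate curve, and noise level are illustrative; inference from real epidemic data would need the paper's MCMC machinery rather than direct regression:

```python
import numpy as np

def rbf(a, b, ell=1.0, sig=1.0):
    """Squared-exponential covariance between point sets a and b."""
    return sig**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)                   # observation times
rate_true = 0.5 + 0.4 * np.sin(t)            # unknown time-varying rate
y = rate_true + rng.normal(0, 0.05, t.size)  # noisy observations

# GP posterior mean and covariance at test points.
ts = np.linspace(0, 10, 100)
K = rbf(t, t) + 0.05**2 * np.eye(t.size)     # kernel + noise variance
Ks = rbf(ts, t)
mean = Ks @ np.linalg.solve(K, y)
cov = rbf(ts, ts) - Ks @ np.linalg.solve(K, Ks.T)
```

The posterior mean recovers the rate without assuming any parametric form for it, which is the sense in which the approach "relaxes" the standard modelling assumptions.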
NASA Astrophysics Data System (ADS)
Ngo, Dung A.; Koshiba, David A.; Moses, Paul L.
1993-04-01
Detailed structural analysis/optimization is required in the conceptual design stage because of the combined aerodynamic and aerothermodynamic environment. This is a time- and manpower-consuming activity, which is exacerbated by constant vehicle moldline changes as a configuration matures. A simple parametric math model is presented that takes into consideration static loads and the geometry and structural weight of a baseline hypersonic vehicle in predicting the structural weight of a new configuration scaled from the baseline. The approach in developing the math model was to consider a generic parametric cross-sectional geometry that could approximate the baseline geometry and predict the behavior of this baseline when it is scaled to provide performance and design benefits. This mathematical model, calibrated to finite element analysis and structural optimization sizing results, provides accurate weight prediction for a new configuration that has been moderately scaled from a thoroughly analyzed baseline configuration. This paper presents the structural optimization weight results and the math model weight predictions for a baseline configuration and 15 scaled configurations.
Zhang, Yu; Manjavacas, Alejandro; Hogan, Nathaniel J; Zhou, Linan; Ayala-Orozco, Ciceron; Dong, Liangliang; Day, Jared K; Nordlander, Peter; Halas, Naomi J
2016-05-11
Active optical processes such as amplification and stimulated emission promise to play just as important a role in nanoscale optics as they have in mainstream modern optics. The ability of metallic nanostructures to enhance optical nonlinearities at the nanoscale has been shown for a number of nonlinear and active processes; however, one important process yet to be seen is optical parametric amplification. Here, we report the demonstration of surface plasmon-enhanced difference frequency generation by integration of a nonlinear optical medium, BaTiO3, in nanocrystalline form within a plasmonic nanocavity. These nanoengineered composite structures support resonances at pump, signal, and idler frequencies, providing large enhancements of the confined fields and efficient coupling of the wavelength-converted idler radiation to the far-field. This nanocomplex works as a nanoscale tunable infrared light source and paves the way for the design and fabrication of a surface plasmon-enhanced optical parametric amplifier. PMID:27089276
Data-based stochastic subgrid-scale parametrization: an approach using cluster-weighted modelling.
Kwasniok, Frank
2012-03-13
A new approach for data-based stochastic parametrization of unresolved scales and processes in numerical weather and climate prediction models is introduced. The subgrid-scale model is conditional on the state of the resolved scales, consisting of a collection of local models. A clustering algorithm in the space of the resolved variables is combined with statistical modelling of the impact of the unresolved variables. The clusters and the parameters of the associated subgrid models are estimated simultaneously from data. The method is implemented and explored in the framework of the Lorenz '96 model using discrete Markov processes as local statistical models. Performance of the cluster-weighted Markov chain scheme is investigated for long-term simulations as well as ensemble prediction. It clearly outperforms simple parametrization schemes and compares favourably with another recently proposed subgrid modelling scheme also based on conditional Markov chains.
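The cluster-then-fit idea can be sketched with a toy resolved variable and a two-state unresolved Markov chain whose transition probabilities depend on the resolved state. K-means stands in for the clustering step, and the transition probabilities below are invented for illustration:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
T = 20000
X = rng.normal(0, 1, T)                    # "resolved" variable (toy)
# Unresolved process: 2-state Markov chain whose persistence depends on sign(X).
B = np.zeros(T, dtype=int)
for t in range(1, T):
    p_stay = 0.9 if X[t - 1] > 0 else 0.6  # hidden rule, to be recovered from data
    B[t] = B[t - 1] if rng.random() < p_stay else 1 - B[t - 1]

# Cluster the resolved variable, then estimate one Markov chain per cluster.
centroids, labels = kmeans2(X.reshape(-1, 1), 2, minit="points", seed=1)
trans = np.zeros((2, 2, 2))                # counts: [cluster, from-state, to-state]
for t in range(1, T):
    trans[labels[t - 1], B[t - 1], B[t]] += 1
trans /= trans.sum(axis=2, keepdims=True)  # row-normalize to probabilities
```

The estimated per-cluster transition matrices recover the state-dependent persistence, which is the mechanism the cluster-weighted Markov chain scheme exploits in the Lorenz '96 experiments.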
Shafieloo, Arman
2012-05-01
By introducing Crossing functions and hyper-parameters I show that the Bayesian interpretation of the Crossing Statistics [1] can be used trivially for the purpose of model selection among cosmological models. In this approach, to falsify a cosmological model there is no need to compare it with other models or assume any particular form of parametrization for cosmological quantities like the luminosity distance, Hubble parameter, or equation of state of dark energy. Instead, hyper-parameters of the Crossing functions perform as discriminators between correct and wrong models. Using this approach one can falsify any assumed cosmological model without putting priors on the underlying actual model of the universe and its parameters; hence the issue of dark energy parametrization is resolved. It will also be shown that the sensitivity of the method to the intrinsic dispersion of the data is small, which is another important characteristic of the method in testing cosmological models dealing with data with high uncertainties.
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
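The greedy point-selection step shared by DEIM and its matrix variant can be sketched directly. The parametrized vector family below is illustrative; MDEIM applies the same selection to vectorized operator snapshots:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation indices for a basis U (n x m)."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the next basis vector at the current points...
        c = np.linalg.solve(U[p, :j], U[p, j])
        # ...and pick the point where the residual is largest.
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Nonaffine parametrized vector f(x; mu), to be recovered from few entries.
x = np.linspace(0.0, 1.0, 200)
f = lambda mu: 1.0 / (1.0 + mu * x**2)
S = np.column_stack([f(mu) for mu in np.linspace(1, 5, 20)])
U, _, _ = np.linalg.svd(S, full_matrices=False)
Um = U[:, :6]                         # empirical basis
p = deim_indices(Um)

# Online stage: reconstruct a new-parameter vector from 6 sampled entries.
f_new = f(2.7)
f_hat = Um @ np.linalg.solve(Um[p], f_new[p])
```

Evaluating only the selected entries at each new parameter value is what restores an affine-like, cheap online stage for nonaffine problems.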
Crash risk analysis for Shanghai urban expressways: A Bayesian semi-parametric modeling approach.
Yu, Rongjie; Wang, Xuesong; Yang, Kui; Abdel-Aty, Mohamed
2016-10-01
Urban expressway systems have developed rapidly in recent years in China and have become a key part of city roadway networks, carrying large traffic volumes at high travel speeds. Along with the increase in traffic volume, traffic safety has become a major issue for Chinese urban expressways due to frequent crashes and the non-recurrent congestion they cause. For the purpose of unveiling crash occurrence mechanisms and further developing Active Traffic Management (ATM) control strategies to improve traffic safety, this study developed disaggregate crash risk analysis models with loop detector traffic data and historical crash data. Bayesian random effects logistic regression models were utilized as they can account for the unobserved heterogeneity among crashes. However, previous crash risk analysis studies formulated random effects distributions in a parametric approach, assigning them to follow normal distributions. Since limited information is known about random effects distributions, such a subjective parametric setting may be incorrect. In order to construct more flexible and robust random effects to capture the unobserved heterogeneity, the Bayesian semi-parametric inference technique was introduced to crash risk analysis in this study. Models with both inference techniques were developed for total crashes; the semi-parametric models were shown to provide substantially better model goodness-of-fit, while the two models shared consistent coefficient estimations. Bayesian semi-parametric random effects logistic regression models were then developed for weekday peak hour crashes, weekday non-peak hour crashes, and weekend non-peak hour crashes to investigate different crash occurrence scenarios. Significant factors that affect crash risk have been revealed and crash mechanisms have been concluded. PMID:26847949
Can we construct back parametrizations of a given model structure using large sample hydrology?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Gupta, Hoshin; Hrachowitz, Markus; Fenicia, Fabrizio; Savenije, Hubert
2015-04-01
A unified strategy for the measurement of information content in hierarchical model building seems lacking. First, the model structure is built from its building blocks (control volumes or state variables) and their interconnecting fluxes. Second, parameterizations of the model are designed; for example, the effect of a specific type of stage-discharge relation for a control volume can be explored. At the final stage the parameter values are quantified. In each step, based on the assumptions made, more and more information is added to the model. In this study we relax the assumption about the shape of the parameterization and try to construct parametrizations of a hydrological model, given a specific model structure and forcing data from catchments across various climatic conditions. This study helps us to find out whether a general pattern exists for the parametrization of a given model structure.
Preliminary Multi-Variable Parametric Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Hendrichs, Todd
2010-01-01
This slide presentation reviews the creation of a preliminary multi-variable cost model for the contract costs of making a space telescope. There is discussion of the methodology for collecting the data, definition of the statistical analysis methodology, single-variable model results, testing of historical models, and an introduction of the multi-variable models.
Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana
2015-03-01
The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. PMID:25524862
Bosch-Bayard, J; Valdés-Sosa, P; Virues-Alba, T; Aubert-Vázquez, E; John, E R; Harmony, T; Riera-Díaz, J; Trujillo-Barreto, N
2001-04-01
This article describes a new method for 3D QEEG tomography in the frequency domain. A variant of Statistical Parametric Mapping is presented for source log spectra. Sources are estimated by means of a Discrete Spline EEG inverse solution known as Variable Resolution Electromagnetic Tomography (VARETA). Anatomical constraints are incorporated by the use of the Montreal Neurological Institute (MNI) probabilistic brain atlas. Efficient methods are developed for frequency domain VARETA in order to estimate the source spectra for the set of 10^3-10^5 voxels that comprise an EEG/MEG inverse solution. High-resolution source Z spectra are then defined with respect to the age-dependent mean and standard deviation of each voxel, which are summarized as regression equations calculated from the Cuban EEG normative database. The statistical issues involved are addressed by the use of extreme value statistics. Examples are shown that illustrate the potential clinical utility of the methods herein developed.
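The voxel-wise Z-spectrum construction (regress each voxel's log spectrum on age, then standardize against the age-matched norm) can be sketched with synthetic normative data; the dimensions, coefficients, and noise level are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Normative database: log source spectra for n subjects at v voxels, with ages.
n, v = 200, 500
age = rng.uniform(6, 80, n)
logS = 3.0 - 0.01 * age[:, None] + rng.normal(0, 0.3, (n, v))  # toy norm

# Per-voxel linear regression of the log spectrum on age (normative equations).
A = np.column_stack([np.ones(n), age])
coef, *_ = np.linalg.lstsq(A, logS, rcond=None)   # shape (2, v)
resid = logS - A @ coef
sd = resid.std(axis=0, ddof=2)                    # per-voxel residual spread

def z_spectrum(log_spec, subject_age):
    """Z spectrum of one subject relative to the age-matched norm."""
    mu = coef[0] + coef[1] * subject_age
    return (log_spec - mu) / sd
```

For a subject drawn from the normative population the resulting Z values are approximately standard normal, so large |Z| voxels flag deviant spectral power; handling the maximum over many voxels is where the extreme value statistics mentioned above come in.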
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
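The daily average method singled out above is easy to state in code. A sketch, assuming a lower developmental threshold and an optional horizontal upper cutoff (parameter names are illustrative):

```python
def degree_days_avg(t_min, t_max, t_base, t_upper=None):
    """Daily degree-days by the daily average method.

    Uses the mean of the daily minimum and maximum temperatures; time below
    the lower threshold t_base contributes nothing, and an optional upper
    threshold t_upper caps the usable temperature (horizontal cutoff).
    """
    t_mean = (t_min + t_max) / 2.0
    if t_upper is not None:
        t_mean = min(t_mean, t_upper)
    return max(0.0, t_mean - t_base)

def accumulate_until(daily_temps, t_base, required_dd):
    """Day index (1-based) on which accumulated degree-days first reach the
    developmental requirement, or None if it is never reached."""
    total = 0.0
    for day, (lo, hi) in enumerate(daily_temps, start=1):
        total += degree_days_avg(lo, hi, t_base)
        if total >= required_dd:
            return day
    return None
```

As the review notes, this method degrades when the thresholds fall between the daily minimum and maximum, where single-sine or other within-day methods may be preferable.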
Evaluation of whole-animal data using the ion parametric resonance model
Blanchard, J.P.; House, D.E.; Blackman, C.F.
1995-09-01
Changes observed in the behavioral response of land snails from exposure to parallel ac and dc magnetic fields demonstrate limited agreement with the predictions of an interaction model proposed by Lednev and the predictions of a recently proposed ion parametric resonance (IPR) model. However, the inadequate number of reported data points, particularly in a critical exposure range, prevents unambiguous application of either the Lednev or the IPR model.
Logistic distributed activation energy model--Part 1: Derivation and numerical parametric study.
Cai, Junmeng; Jin, Chuan; Yang, Songyuan; Chen, Yong
2011-01-01
A new distributed activation energy model is presented that uses the logistic distribution to mathematically represent the pyrolysis kinetics of complex solid fuels. A numerical parametric study of the logistic distributed activation energy model is conducted to evaluate the influence of the model parameters on the numerical results of the model. The parameters studied include the heating rate, reaction order, frequency factor, and the mean and standard deviation of the logistic activation energy distribution. The parametric study addresses the dependence of the forms of the calculated α-T and dα/dT-T curves (α: reaction conversion, T: temperature) on these parameters. The results should be helpful for applying the logistic distributed activation energy model, which is the main subject of the next part of this series.
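A distributed activation energy model of this type can be sketched numerically. The simplified isothermal, first-order case below (the paper treats nonisothermal heating-rate programs) integrates the logistic density of activation energies by trapezoidal quadrature; all parameter values are illustrative:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def logistic_pdf(E, mu, s):
    """Logistic distribution of activation energies (mean mu, scale s)."""
    z = math.exp(-(E - mu) / s)
    return z / (s * (1.0 + z) ** 2)

def daem_conversion_isothermal(t, T, A, mu, s, n_grid=2000):
    """Reaction conversion alpha(t) at constant temperature T for a
    first-order distributed activation energy model with a logistic
    E-distribution (E in J/mol), via trapezoidal quadrature over E."""
    lo, hi = mu - 10 * s, mu + 10 * s
    dE = (hi - lo) / n_grid
    acc = 0.0
    for i in range(n_grid + 1):
        E = lo + i * dE
        k = A * math.exp(-E / (R * T))          # Arrhenius rate at energy E
        f = logistic_pdf(E, mu, s) * math.exp(-k * t)
        acc += f if 0 < i < n_grid else 0.5 * f  # trapezoid end weights
    unreacted = acc * dE
    return 1.0 - unreacted
```

Replacing the logistic density with a Gaussian recovers the classical DAEM; the point of the paper is how the α-T curve shape responds to mu, s, and the other parameters.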
Parametric bootstrap methods for testing multiplicative terms in GGE and AMMI models.
Forkman, Johannes; Piepho, Hans-Peter
2014-09-01
The genotype main effects and genotype-by-environment interaction effects (GGE) model and the additive main effects and multiplicative interaction (AMMI) model are two common models for analysis of genotype-by-environment data. These models are frequently used by agronomists, plant breeders, geneticists and statisticians for analysis of multi-environment trials. In such trials, a set of genotypes, for example, crop cultivars, are compared across a range of environments, for example, locations. The GGE and AMMI models use singular value decomposition to partition genotype-by-environment interaction into an ordered sum of multiplicative terms. This article deals with the problem of testing the significance of these multiplicative terms in order to decide how many terms to retain in the final model. We propose parametric bootstrap methods for this problem. Models with fixed main effects, fixed multiplicative terms and random normally distributed errors are considered. Two methods are derived: a full and a simple parametric bootstrap method. These are compared with the alternatives of using approximate F-tests and cross-validation. In a simulation study based on four multi-environment trials, both bootstrap methods performed well with regard to Type I error rate and power. The simple parametric bootstrap method is particularly easy to use, since it only involves repeated sampling of standard normally distributed values. This method is recommended for selecting the number of multiplicative terms in GGE and AMMI models. The proposed methods can also be used for testing components in principal component analysis.
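The simple parametric bootstrap described above only requires repeated standard normal sampling. A sketch of one plausible form of the test for the first multiplicative term (the statistic is the share of interaction sum of squares in the first singular value; dimensions and the (g-1) x (e-1) null matrices follow the abstract's description, details may differ from the paper):

```python
import numpy as np

def first_term_stat(M):
    """Share of interaction sum of squares carried by the first
    multiplicative (singular-value) term."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

def simple_bootstrap_pvalue(interaction, n_boot=2000, rng=None):
    """Simple parametric bootstrap test for the first multiplicative term.

    `interaction` is the doubly centred genotype-by-environment matrix.
    Under H0 (no multiplicative term) the observed statistic is compared
    against the same statistic computed on (g-1) x (e-1) matrices of
    standard normal draws -- only standard normal sampling is needed.
    """
    rng = np.random.default_rng(rng)
    g, e = interaction.shape
    observed = first_term_stat(interaction)
    null = np.array([first_term_stat(rng.standard_normal((g - 1, e - 1)))
                     for _ in range(n_boot)])
    return (1 + np.sum(null >= observed)) / (1 + n_boot)
```

Subsequent terms would be tested analogously on the matrix with earlier terms removed.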
Numerical Models of Broad Bandwidth Nanosecond Optical Parametric Oscillators
Bowers, M.S.; Gehr, R.J.; Smith, A.V.
1998-10-14
We describe results from three new methods of numerically modeling broad-bandwidth, nanosecond OPOs in the plane-wave approximation. They account for differences in group velocities among the three mixing waves, and also include a quantum noise model.
Evaluation of wave runup predictions from numerical and parametric models
Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.
2014-01-01
Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
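The parameterized runup model described here (built from offshore wave height, period, and beach slope) is presumably the widely used Stockdon et al. (2006) formulation; a sketch under that assumption, with coefficients from that formulation rather than from this study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def runup_2pct(H0, T0, beta_f):
    """Empirical 2%-exceedance wave runup (Stockdon et al., 2006 form).

    H0: deep-water significant wave height (m), T0: peak wave period (s),
    beta_f: foreshore beach slope. Returns (R2, setup, swash) in metres.
    """
    L0 = G * T0 ** 2 / (2.0 * math.pi)  # deep-water wavelength
    hl = math.sqrt(H0 * L0)
    setup = 0.35 * beta_f * hl                              # wave-induced setup
    swash = math.sqrt(H0 * L0 * (0.563 * beta_f ** 2 + 0.004))  # total swash
    return 1.1 * (setup + swash / 2.0), setup, swash
```

The abstract's conclusion is precisely about where such a formula holds: setup and infragravity swash are captured well, while the setup coefficients may need modification under extreme, unobserved storm conditions.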
Parametric reduced models for the nonlinear Schrödinger equation
NASA Astrophysics Data System (ADS)
Harlim, John; Li, Xiantao
2015-05-01
Reduced models for the (defocusing) nonlinear Schrödinger equation are developed. In particular, we develop reduced models that involve only the low-frequency modes, given noisy observations of these modes. The ansatz of the reduced parametric models is obtained by employing a rational approximation and a colored-noise approximation, respectively, on the memory terms and the random noise of a generalized Langevin equation derived from the standard Mori-Zwanzig formalism. The parameters in the resulting reduced models are inferred from noisy observations with a recently developed ensemble Kalman filter-based parametrization method. The forecasting skill across different temperature regimes is verified by comparing the moments up to order four, a two-time correlation function statistic, and marginal densities of the coarse-grained variables.
Scalability of the muscular action in a parametric 3D model of the index finger.
Sancho-Bru, Joaquín L; Vergara, Margarita; Rodríguez-Cervantes, Pablo-Jesús; Giurintano, David J; Pérez-González, Antonio
2008-01-01
A method for scaling the muscle action is proposed and used to achieve a 3D inverse dynamic model of the human finger with all its components scalable. This method is based on scaling the physiological cross-sectional area (PCSA) in a Hill muscle model. Different anthropometric parameters and maximal grip force data have been measured, and their correlations have been analyzed and used for scaling the PCSA of each muscle. A linear relationship between the normalized PCSA and the product of the length and breadth of the hand has finally been used for scaling, with a slope of 0.01315 cm^-2 when the length and breadth of the hand are expressed in centimeters. The parametric muscle model has been included in a parametric finger model previously developed by the authors, and it has been validated by reproducing the results of an experiment in which subjects from different population groups exerted maximal voluntary forces with their index finger in a controlled posture.
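One plausible reading of the reported linear relationship can be sketched directly: the hand-size product times the quoted slope gives a dimensionless scale factor applied to a reference PCSA. How the factor is applied to each muscle is a simplifying assumption here, not a detail taken from the paper:

```python
def pcsa_scale_factor(hand_length_cm, hand_breadth_cm, slope_cm2=0.01315):
    """Subject-specific PCSA scale factor from hand dimensions.

    Implements the linear relationship quoted in the abstract:
    normalized PCSA = slope * hand length * hand breadth, with lengths
    in cm and the slope in cm^-2, giving a dimensionless factor.
    """
    return slope_cm2 * hand_length_cm * hand_breadth_cm

def scaled_muscle_pcsa(reference_pcsa_cm2, hand_length_cm, hand_breadth_cm):
    """Scale a reference muscle PCSA (cm^2) to a subject's hand size
    (assumes the factor multiplies the reference value directly)."""
    return reference_pcsa_cm2 * pcsa_scale_factor(hand_length_cm,
                                                  hand_breadth_cm)
```

Larger hands thus map monotonically to larger PCSA, and hence larger maximal muscle force in the Hill model.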
Geometric Model for a Parametric Study of the Blended-Wing-Body Airplane
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne; Smith, Robert E.; Sadrehaghighi, Ideen; Wiese, Michael R.
1996-01-01
A parametric model is presented for the blended-wing-body airplane, one concept being proposed for the next generation of large subsonic transports. The model is defined in terms of a small set of parameters which facilitates analysis and optimization during the conceptual design process. The model is generated from a preliminary CAD geometry. From this geometry, airfoil cross sections are cut at selected locations and fitted with analytic curves. The airfoils are then used as boundaries for surfaces defined as the solution of partial differential equations. Both the airfoil curves and the surfaces are generated with free parameters selected to give a good representation of the original geometry. The original surface is compared with the parametric model, and solutions of the Euler equations for compressible flow are computed for both geometries. The parametric model is a good approximation of the CAD model and the computed solutions are qualitatively similar. An optimal NURBS approximation is constructed and can be used by a CAD model for further refinement or modification of the original geometry.
Brayton power conversion system parametric design modelling for nuclear electric propulsion
NASA Astrophysics Data System (ADS)
Ashe, Thomas L.; Otting, William D.
1993-11-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
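A minimal sketch of the kind of parametric cycle relation such a design code builds on: thermal efficiency of a simple (non-recuperated) closed Brayton cycle as a function of pressure ratio, cycle temperature ratio, and component efficiencies. This is textbook thermodynamics offered for orientation, not the detailed NEP model described above; the default values are illustrative:

```python
def brayton_efficiency(r_p, t_ratio, gamma=1.667, eta_c=0.85, eta_t=0.9):
    """Simple closed Brayton cycle thermal efficiency (no recuperation).

    r_p:     compressor pressure ratio
    t_ratio: turbine-inlet to compressor-inlet temperature ratio
    gamma:   ratio of specific heats (1.667 for a monatomic He/Xe gas)
    eta_c, eta_t: compressor and turbine isentropic efficiencies
    """
    k = (gamma - 1.0) / gamma
    tau_c = r_p ** k                      # ideal compressor temperature ratio
    w_c = (tau_c - 1.0) / eta_c           # compressor work per cp*T1
    w_t = t_ratio * eta_t * (1.0 - r_p ** -k)  # turbine work per cp*T1
    q_in = t_ratio - (1.0 + w_c)          # heat added per cp*T1
    return (w_t - w_c) / q_in
```

With ideal components this reduces to the familiar 1 - r_p^(-(gamma-1)/gamma); the recuperator, ducting, and alternator losses modeled in the actual code shift the optimum pressure ratio and raise achievable efficiency.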
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
Bifurcation analysis of parametrically excited bipolar disorder model
NASA Astrophysics Data System (ADS)
Nana, Laurent
2009-02-01
Bipolar II disorder is characterized by alternating hypomanic and major depressive episodes. We model the periodic mood variations of a bipolar II patient with a negatively damped harmonic oscillator. The medications administered to the patient are modeled via a forcing function that is capable of stabilizing the mood variations and of varying their amplitude. We analyze analytically, using a perturbation method, the amplitude and stability of limit cycles and verify this analysis with numerical simulations.
Testing goodness of fit of parametric models for censored data.
Nysen, Ruth; Aerts, Marc; Faes, Christel
2012-09-20
We propose and study a goodness-of-fit test for left-censored, right-censored, and interval-censored data assuming random censorship. The main motivation comes from dietary exposure assessment in chemical risk assessment, where the determination of an appropriate distribution for concentration data is of major importance. We base the new goodness-of-fit test procedure proposed in this paper on the order selection test. As part of the testing procedure, we extend the null model to a series of nested alternative models for censored data. Then, we use a modified AIC model selection to select the best model to describe the data. If a model with one or more extra parameters is selected, we reject the null hypothesis. As an alternative to the use of the asymptotic null distribution of the test statistic, we define a bootstrap-based procedure. We illustrate the applicability of the test procedure on data of cadmium concentrations and on data from the Signal Tandmobiel study, and we demonstrate its performance characteristics through simulation studies. PMID:22714389
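The decision rule of an order selection test has a simple skeleton: fit the null model and a series of nested extensions, score each with a penalized likelihood, and reject the null when an extended model wins. A generic sketch (the flat per-parameter penalty here is a placeholder; the paper's modified AIC calibrates the penalty to the desired test level):

```python
def order_selection_test(logliks, penalty=2.0):
    """Goodness-of-fit decision by order selection with an AIC-type rule.

    logliks[k] is the maximized log-likelihood of the null model extended
    with k extra series terms (k = 0 is the null model itself). The model
    minimizing -2*loglik + penalty*k is selected; the null hypothesis is
    rejected when any extended model (k >= 1) is selected.
    Returns (selected_k, reject_null).
    """
    scores = [-2.0 * ll + penalty * k for k, ll in enumerate(logliks)]
    best_k = min(range(len(scores)), key=scores.__getitem__)
    return best_k, best_k > 0
```

The censoring enters through the likelihood itself (interval contributions for censored observations), not through this selection step.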
Modelling and validation of magnetorheological brake responses using parametric approach
NASA Astrophysics Data System (ADS)
Zainordin, A. Z.; Abdullah, M. A.; Hudha, K.
2013-12-01
A magnetorheological brake (MR brake) is an x-by-wire system that performs better than conventional brake systems. An MR brake consists of a rotating disc immersed in magnetorheological fluid (MR fluid) within an enclosure of an electromagnetic coil. The applied magnetic field increases the yield strength of the MR fluid, which is used to decrease the speed of the rotating shaft. The purpose of this paper is to develop a mathematical model to represent an MR brake with a test rig. The MR brake model is developed based on the actual torque characteristic, which is coupled with the motion of the test rig. Next, experiments are performed using the MR brake test rig to obtain three output responses: the angular velocity response, the torque response and the load displacement response. Furthermore, the MR brake was subjected to various currents. Finally, the simulation results of the MR brake model are verified against the experimental results.
NASA Astrophysics Data System (ADS)
Vu, H. X.; Bezzerides, B.; DuBois, D. F.
1999-11-01
A fully kinetic, reduced-description particle-in-cell (RPIC) model is presented in which deviations from quasineutrality, electron and ion kinetic effects, and nonlinear interactions between low-frequency and high-frequency parametric instabilities are modeled correctly. The model is based on a reduced description where the electromagnetic field is represented by three separate temporal envelopes in order to model parametric instabilities with low-frequency and high-frequency daughter waves. Because temporal envelope approximations are invoked, the simulation can be performed on the electron time scale instead of the time scale of the light waves. The electrons and ions are represented by discrete finite-size particles, permitting electron and ion kinetic effects to be modeled properly. The Poisson equation is utilized to ensure that space-charge effects are included. The RPIC model is fully three dimensional and has been implemented in two dimensions on the Accelerated Strategic Computing Initiative (ASCI) parallel computer at Los Alamos National Laboratory, and the resulting simulation code has been named ASPEN. We believe this code is the first particle-in-cell code capable of simulating the interaction between low-frequency and high-frequency parametric instabilities in multiple dimensions. Test simulations of stimulated Raman scattering, stimulated Brillouin scattering, and Langmuir decay instability are presented.
Parametric Modeling as a Technology of Rapid Prototyping in Light Industry
NASA Astrophysics Data System (ADS)
Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.
2016-04-01
The paper deals with the parametric modeling method of virtual mannequins for the purposes of design automation in clothing industry. The described approach includes the steps of generation of the basic model on the ground of the initial one (obtained in 3D-scanning process), its parameterization and deformation. The complex surfaces are presented by the wireframe model. The modeling results are evaluated with the set of similarity factors. Deformed models are compared with their virtual prototypes. The results of modeling are estimated by the standard deviation factor.
Framework for the Parametric System Modeling of Space Exploration Architectures
NASA Technical Reports Server (NTRS)
Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II
2008-01-01
This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.
Spectrally pure RF photonic source based on a resonant optical hyper-parametric oscillator
NASA Astrophysics Data System (ADS)
Liang, W.; Eliyahu, D.; Matsko, A. B.; Ilchenko, V. S.; Seidel, D.; Maleki, L.
2014-03-01
We demonstrate a free running 10 GHz microresonator-based RF photonic hyper-parametric oscillator characterized with phase noise better than -60 dBc/Hz at 10 Hz, -90 dBc/Hz at 100 Hz, and -150 dBc/Hz at 10 MHz. The device consumes less than 25 mW of optical power. A correlation between the frequency of the continuous wave laser pumping the nonlinear resonator and the generated RF frequency is confirmed. The performance of the device is compared with the performance of a standard optical fiber based coupled opto-electronic oscillator of OEwaves.
Minimalist Model for the Dynamics of Helical Polypeptides: A Statistic-Based Parametrization.
Spampinato, Giulia Lia Beatrice; Maccari, Giuseppe; Tozzini, Valentina
2014-09-01
Low-resolution models are often used to address macroscopic time and size scales in molecular dynamics simulations of biomolecular systems. Coarse graining is often coupled to knowledge-based parametrization to obtain empirical potentials able to reproduce the system thermodynamic behavior. Here, a minimalist coarse-grained (CG) model for the helical structures of proteins is reported. A knowledge-based parametrization strategy is coupled to the explicit inclusion of hydrogen-bonding-related terms, resulting in an accurate reproduction of the structure and dynamics of each single helical type, as well as the correlation of the internal conformational variables. The proposed strategy of basing the force field terms on real physicochemical interactions is transferable to different secondary structures. Thus, this work, though conclusive for helices, is to be considered the first of a series devoted to the application of the knowledge-based, physicochemical model to extended secondary structures and unstructured proteins.
Parametric Estimation in a Recurrent Competing Risks Model
Peña, Edsel A.
2014-01-01
A resource-efficient approach to making inferences about the distributional properties of the failure times in a competing risks setting is presented. Efficiency is gained by observing recurrences of the competing risks over a random monitoring period. The resulting model is called the recurrent competing risks model (RCRM) and is coupled with two repair strategies whenever the system fails. Maximum likelihood estimators of the parameters of the marginal distribution functions associated with each of the competing risks and also of the system lifetime distribution function are presented. Estimators are derived under perfect and partial repair strategies. Consistency and asymptotic properties of the estimators are obtained. The estimation methods are applied to a data set of failures for cars under warranty. Simulation studies are used to ascertain the small sample properties and the efficiency gains of the resulting estimators. PMID:25346751
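In the simplest special case of this setup (exponential inter-failure times and perfect repair, so each competing risk behaves as a Poisson process over the monitoring period), the maximum likelihood estimators have a closed form. A sketch under those assumptions only; the paper's estimators cover general marginals and partial repair:

```python
def exponential_rcrm_mle(event_counts, total_time):
    """MLE of cause-specific rates in a recurrent competing risks setting,
    assuming exponential inter-failure times and perfect repair.

    event_counts -- {risk name: number of observed recurrences}
    total_time   -- total monitored time at risk (summed over units)
    Each rate is lambda_k = n_k / total_time; the implied system failure
    rate is the sum of the cause-specific rates.
    """
    rates = {k: n / total_time for k, n in event_counts.items()}
    system_rate = sum(rates.values())
    return rates, system_rate
```

Observing recurrences rather than a single first failure is exactly where the efficiency gain described in the abstract comes from: each risk accrues many events per monitored unit.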
Fitting of Parametric Building Models to Oblique Aerial Images
NASA Astrophysics Data System (ADS)
Panday, U. S.; Gerke, M.
2011-09-01
In the literature and in photogrammetric workstations, many approaches and systems to automatically reconstruct buildings from remote sensing data are described and available. Those building models are being used, for instance, in city modeling or in a cadastre context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines, which has a negative influence on visual impression, but more seriously also represents a wrong legal boundary in the cadastre. Oblique aerial images, as opposed to nadir-view images, reveal greater detail, enabling different views of an object taken from different directions. Building walls are directly visible in oblique images, and those images are used for automated roof overhang estimation in this research. A fitting algorithm is employed to find the roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frames to their corresponding edge lines extracted from the images. Self-occlusion is detected based on the intersection of the viewing ray with the planes formed by the building, whereas occlusion from other objects is detected using an ALS point cloud. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. Experimental results are verified with high-resolution ortho-images, field survey, and ALS data. Planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23° and standard deviation of 0.96° with respect to the ortho-image. Overhang parameters were aligned to approximately 10 cm with the field survey. The ground and roof heights were accurate to means of -9 cm and 8 cm, with standard deviations of 16 cm and 8 cm, respectively, with ALS. The developed approach reconstructs 3D building models well in cases of sufficient texture. More images should be acquired for completeness of
Parametric Pattern Selection in a Reaction-Diffusion Model
Stich, Michael; Ghoshal, Gourab; Pérez-Mercader, Juan
2013-01-01
We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (feed-rate) where degenerate patterns with different numbers of spots coexist for a fixed feed-rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed-rate, exploiting hysteresis and directionality effects of the different pattern pathways. PMID:24204813
Pediatric bed fall computer simulation model: parametric sensitivity analysis.
Thompson, Angela; Bertocci, Gina
2014-01-01
Falls from beds and other household furniture are common scenarios that may result in injury and may also be stated to conceal child abuse. Knowledge of the biomechanics associated with short-distance falls may aid clinicians in distinguishing between abusive and accidental injuries. In this study, a validated bed fall computer simulation model of an anthropomorphic test device representing a 12-month-old child was used to investigate the effect of altering fall environment parameters (fall height, impact surface stiffness, initial force used to initiate the fall) and child surrogate parameters (overall mass, head stiffness, neck stiffness, stiffness for other body segments) on fall dynamics and outcomes related to injury potential. The sensitivity of head and neck injury outcome measures to model parameters was determined. Parameters associated with the greatest sensitivity values (fall height, initiating force, and surrogate mass) altered fall dynamics and impact orientation. This suggests that fall dynamics and impact orientation play a key role in head and neck injury potential. With the exception of surrogate mass, injury outcome measures tended to be more sensitive to changes in environmental parameters (bed height, impact surface stiffness, initiating force) than surrogate parameters (head stiffness, neck stiffness, body segment stiffness).
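The parametric sensitivity analysis described above can be sketched as a one-at-a-time normalized sensitivity: perturb one parameter while holding the rest at baseline, and express the output change relative to the parameter change. The toy fall model below is purely illustrative, not the validated ATD simulation in the study:

```python
def normalized_sensitivity(model, params, name, rel_step=0.05):
    """One-at-a-time normalized sensitivity of a scalar model output.

    S = (relative change in output) / (relative change in parameter),
    estimated by perturbing one parameter by rel_step with all others
    held at their baseline values (the usual local OAT measure).
    """
    base_out = model(**params)
    perturbed = dict(params)
    perturbed[name] = params[name] * (1.0 + rel_step)
    new_out = model(**perturbed)
    return ((new_out - base_out) / base_out) / rel_step

def toy_model(height, stiffness, mass):
    """Hypothetical stand-in outcome: a head-impact severity proxy that
    grows with fall height and surface stiffness and falls with mass."""
    return stiffness ** 0.5 * (2.0 * 9.81 * height) / mass
```

Ranking parameters by |S| is how one concludes, as the study does, that fall height, initiating force, and surrogate mass dominate the injury outcome measures.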
Nonlinear parametric model for Granger causality of time series
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-06-01
The notion of Granger causality between two time series examines if the prediction of one series could be improved by incorporating information of the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first one. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, still being suitable to evaluate causality. Usefulness of this measure of causality is shown in two applications. In the first application, a physiological one, we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
Microprocessor-controlled colonic peristalsis: dynamic parametric modeling in dogs.
Rashev, Peter Z; Amaris, Manuel; Bowes, Kenneth L; Mintchev, Martin P
2002-05-01
The study aimed at completing a model of functional colonic electric stimulation and testing it for artificial recreation of peristalsis in dogs. Dynamic measurements of invoked single contractions obtained from two unconscious dogs as well as previously reported static contraction properties were utilized to suggest the optimal stimulation parameters of: (1) length of the stimulating electrodes, (2) separation between the successive electrode sets, (3) duration, and (4) phase lag between the stimuli sequentially applied to the electrode sets. The derived electrode configuration and stimulation pattern were adjusted for different anatomical dimensions and tested in distended colon full of viscous content. Forward and backward propagating peristaltic waves were invoked in two other unconscious dogs, indicating that the recreation of colonic peristalsis under microprocessor control is feasible.
Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations
NASA Astrophysics Data System (ADS)
Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila
2016-07-01
We perform a numerical investigation of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC) using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex over its full life cycle. The numerical calculations show that a suitable choice of the SPM's input parameters reproduces the seasonal behavior of regional large-scale cyclogenesis for a given number of TCs during the active season. It is also shown that the SPM can describe wind speed variations inside the TC. Thus, the nonlinear small parametric model makes it possible to study the features of the RLSC's temporal dynamics during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and external factors such as space weather, including the solar activity level and cosmic-ray variations.
NASA Astrophysics Data System (ADS)
Song, Guo-Zhu; Wu, Fang-Zhou; Zhang, Mei; Yang, Guo-Jian
2016-06-01
Quantum repeater is the key element in quantum communication and quantum information processing. Here, we investigate the possibility of achieving a heralded quantum repeater based on the scattering of photons off single emitters in one-dimensional waveguides. We design the compact quantum circuits for nonlocal entanglement generation, entanglement swapping, and entanglement purification, and discuss the feasibility of our protocols with current experimental technology. In our scheme, we use a parametric down-conversion source instead of ideal single-photon sources to realize the heralded quantum repeater. Moreover, our protocols can turn faulty events into the detection of photon polarization, and the fidelity can reach 100% in principle. Our scheme is attractive and scalable, since it can be realized with artificial solid-state quantum systems. With developed experimental technique on controlling emitter-waveguide systems, the repeater may be very useful in long-distance quantum communication.
Modeling parametric scattering instabilities in large-scale expanding plasmas
NASA Astrophysics Data System (ADS)
Masson-Laborde, P. E.; Hüller, S.; Pesme, D.; Casanova, M.; Loiseau, P.; Labaune, Ch.
2006-06-01
We present results from two-dimensional simulations of long scale-length laser-plasma interaction experiments performed at LULI. With the goal of predictive modeling of such experiments with our code Harmony2D, we take into account realistic plasma density and velocity profiles, the propagation of the laser light beam and the scattered light, as well as the coupling with the ion acoustic waves, in order to describe Stimulated Brillouin Scattering (SBS). Laser pulse shaping is taken into account to follow the evolution of the SBS reflectivity as closely as possible to the experiment. The light reflectivity is analyzed by distinguishing the backscattered light confined in the solid angle defined by the aperture of the incident light beam from the scattered light outside this cone. As in the experiment, it is observed that the aperture of the scattered light tends to increase with the mean intensity of the RPP-smoothed laser beam. A further common feature between simulations and experiments is the observed localization of the SBS-driven ion acoustic waves (IAW) in the front part of the target (with respect to the incoming laser beam).
Classification performance prediction using parametric scattering feature models
NASA Astrophysics Data System (ADS)
Chiang, Hung-Chih; Moses, Randolph L.; Potter, Lee C.
2000-08-01
We consider a method for estimating classification performance of a model-based synthetic aperture radar (SAR) automatic target recognition system. Target classification is performed by comparing an unordered feature set extracted from a measured SAR image chip with an unordered feature set predicted from a hypothesized target class and pose. A Bayes likelihood metric that incorporates uncertainty in both the predicted and extracted feature vectors is used to compute the match score. Evaluation of the match likelihoods requires a correspondence between the unordered predicted and extracted feature sets. This is a bipartite graph matching problem with insertions and deletions; we show that the optimal match can be found in polynomial time. We extend the results in [1] to estimate classification performance for a ten-class SAR ATR problem. We consider a synthetic classification problem to validate the classifier and to address resolution and robustness questions in the likelihood scoring method. Specifically, we consider performance versus SAR resolution, performance degradation due to mismatch between the assumed and actual feature statistics, and performance impact of correlated feature attributes.
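The bipartite matching with insertions and deletions mentioned in this abstract is indeed solvable in polynomial time: pad the cost matrix with dummy rows and columns and run the Hungarian algorithm. A minimal sketch, assuming a simple fixed miss penalty (the paper's Bayes likelihood costs are not reproduced here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_features(cost, miss_penalty):
    """Optimal correspondence between predicted (rows) and extracted
    (columns) features, allowing insertions/deletions by padding the
    cost matrix with dummy rows and columns at a fixed miss penalty.
    linear_sum_assignment runs in polynomial time (Hungarian method)."""
    m, n = cost.shape
    big = np.full((m + n, m + n), float(miss_penalty))
    big[:m, :n] = cost
    big[m:, n:] = 0.0  # dummy-to-dummy pairings are free
    rows, cols = linear_sum_assignment(big)
    pairs = [(r, c) for r, c in zip(rows, cols) if r < m and c < n]
    return pairs, big[rows, cols].sum()
```

A predicted feature with no cheap extracted counterpart is simply assigned to a dummy column (a deletion), and vice versa for insertions.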
PARAMETRIC STUDY OF GROUND SOURCE HEAT PUMP SYSTEM FOR HOT AND HUMID CLIMATE
Jiang Zhu; Yong X. Tao
2011-11-01
U-tube sizes and the varied thermal conductivity of different grout materials are studied for a benchmark residential building in hot-humid Pensacola, Florida. The benchmark building is metered, and the data are used to validate the simulation model. A set of comparative simulation cases with varied parameter values is then run to study the influence of pipe size and grout on ground source heat pump energy consumption. The simulation software TRNSYS [1] is employed for this task. The results show preliminary energy savings for the varied parameters. Future work should address the cost analysis, including contractor installation costs and material costs.
Parametric Studies and Optimization of Eddy Current Techniques through Computer Modeling
Todorov, E. I.
2007-03-21
The paper demonstrates the use of computer models for parametric studies and optimization of surface and subsurface eddy current techniques. The study with high-frequency probe investigates the effect of eddy current frequency and probe shape on the detectability of flaws in the steel substrate. The low-frequency sliding probe study addresses the effect of conductivity between the fastener and the hole, frequency and coil separation distance on detectability of flaws in subsurface layers.
NASA Astrophysics Data System (ADS)
Saffin, Leo; Methven, John; Gray, Sue
2016-04-01
Numerical models of the atmosphere combine a dynamical core, which approximates solutions to the adiabatic and frictionless governing equations, with the tendencies arising from the parametrization of physical processes. Tracers of potential vorticity (PV) can be used to accumulate the tendencies of parametrized physical processes and diagnose their impacts on the large-scale dynamics. This is due to two key properties of PV, conservation following an air mass and invertibility which relates the PV distribution to the balanced dynamics of the atmosphere. Applying the PV tracers to many short forecasts allows for a systematic investigation of the behaviour of parametrized physical processes. The forecasts are 2.5 day lead time forecasts run using the Met Office Unified Model (MetUM) initialised at 0Z for each day in November/December/January 2013/14. The analysis of the PV tracers has been focussed on regions where diabatic processes can be important (tropopause ridges and troughs, frontal regions and the boundary layer top). The tropopause can be described as a surface of constant PV with a sharp PV gradient. Previous work using the PV tracers in individual case studies has shown that parametrized physical processes act to enhance the tropopause PV contrast which can affect the Rossby wave phase speed. The short forecasts show results consistent with a systematic enhancement of tropopause PV contrast by diabatic processes and show systematically different behaviour between ridges and troughs. The implication of this work is that a failure to correctly represent the effects of diabatic processes on the tropopause in models can lead to poor Rossby wave evolution and potentially downstream forecast busts.
Pyka, Martin; Klatt, Sebastian; Cheng, Sen
2014-01-01
Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional, but yet biologically plausible, parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort. PMID:25309338
NASA Technical Reports Server (NTRS)
Hashemi-Kia, M.; Toossi, M.
1990-01-01
As a result of this work, a reduction procedure has been developed which can be applied to large finite element models of airframe-type structures. This procedure, which is tailored for use with the MSC/NASTRAN finite element code, is applied to the full-airframe dynamic finite element model of the AH-64A Attack Helicopter. The applicability of the resulting reduced model to parametric and optimization studies is examined. Through application of design sensitivity analysis, the viability and efficiency of this reduction technique have been demonstrated in a vibration reduction study.
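The abstract does not spell out the MSC/NASTRAN-specific reduction procedure, so as a generic illustration of the idea of condensing a large finite element model, the classic Guyan (static) condensation below eliminates slave degrees of freedom while preserving the static stiffness seen at the retained master DOFs. This is a stand-in technique, not the paper's method.

```python
import numpy as np

def guyan_reduce(K, master):
    """Static (Guyan) condensation of a stiffness matrix onto the
    master DOFs: K_red = K_mm - K_ms @ inv(K_ss) @ K_sm."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    K_mm = K[np.ix_(master, master)]
    K_ms = K[np.ix_(master, slave)]
    K_sm = K[np.ix_(slave, master)]
    K_ss = K[np.ix_(slave, slave)]
    return K_mm - K_ms @ np.linalg.solve(K_ss, K_sm)
```

As a sanity check, condensing out the interior node of two springs in series (k1 = 2, k2 = 3, fixed at one end) reproduces the series stiffness k1*k2/(k1 + k2) = 1.2 at the tip DOF.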
NASA Astrophysics Data System (ADS)
Kimstrand, Peter; Traneus, Erik; Ahnesjö, Anders; Tilly, Nina
2008-07-01
Collimators are routinely used in proton radiotherapy to laterally confine the field and improve the penumbra. Collimator scatter contributes up to 15% of the local dose and is therefore important to include in treatment planning dose calculation. We present a method for reconstruction of the collimator scatter phase space based on the parametrization of pre-calculated scatter kernels. Collimator scatter distributions, generated by the Monte Carlo (MC) package GEANT4.8.2, were scored differential in direction and energy. The distributions were then parametrized so as to enable a fast reconstruction by sampling. MC calculated dose distributions in water based on the parametrized phase space were compared to full MC simulations that included the collimator in the simulation geometry, as well as to experimental data. The experiments were performed at the scanned proton beam line at the The Svedberg Laboratory (TSL) in Uppsala, Sweden. Dose calculations using the parametrization of this work and the full MC for isolated typical cases of collimator scatter were compared by means of the gamma index. The result showed that in total 96.7% (99.3%) of the voxels fulfilled the gamma 2.0%/2.0 mm (3.0%/3.0 mm) criterion. The dose distribution for a collimated field was calculated based on the phase space created by the collimator scatter model incorporated into the generation of the phase space of a scanned proton beam. Comparing these dose distributions to full MC simulations, including particle transport in the MLC, yielded that in total for 18 different collimated fields, 99.1% of the voxels satisfied the gamma 1.0%/1.0 mm criterion and no voxel exceeded the gamma 2.6%/2.6 mm criterion. The dose contribution of collimator scatter along the central axis as predicted by the model showed good agreement with experimental data.
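The gamma index used for the dose comparisons above combines a dose-difference tolerance with a distance-to-agreement tolerance. A brute-force 1D sketch of the global gamma pass rate follows; the published analyses are 3D, and the default tolerances here are illustrative, not the paper's criteria.

```python
import numpy as np

def gamma_pass_rate(ref, test, positions, dose_tol=0.02, dist_tol=2.0):
    """Global 1D gamma analysis: a test point passes if some reference
    point keeps sqrt((dx/dist_tol)**2 + (dd/(dose_tol*max_ref))**2) <= 1.
    dose_tol is relative to the reference maximum; dist_tol is in mm."""
    ref, test, positions = map(np.asarray, (ref, test, positions))
    norm = dose_tol * ref.max()
    passed = 0
    for x_i, d_i in zip(positions, test):
        gamma_sq = ((positions - x_i) / dist_tol) ** 2 \
                 + ((ref - d_i) / norm) ** 2
        passed += gamma_sq.min() <= 1.0
    return passed / len(test)
```

Identical distributions pass everywhere; a uniform dose offset far beyond the tolerance fails everywhere.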
The Parametric Model of the Human Mandible Coronoid Process Created by Method of Anatomical Features
Vitković, Nikola; Mitić, Jelena; Manić, Miodrag; Trajanović, Miroslav; Husain, Karim; Petrović, Slađana; Arsić, Stojanka
2015-01-01
Geometrically accurate and anatomically correct 3D models of the human bones are of great importance for medical research and practice in orthopedics and surgery. These geometrical models can be created by the use of techniques which can be based on input geometrical data acquired from volumetric methods of scanning (e.g., Computed Tomography (CT)) or on the 2D images (e.g., X-ray). Geometrical models of human bones created in such way can be applied for education of medical practitioners, preoperative planning, etc. In cases when geometrical data about the human bone is incomplete (e.g., fractures), it may be necessary to create its complete geometrical model. The possible solution for this problem is the application of parametric models. The geometry of these models can be changed and adapted to the specific patient based on the values of parameters acquired from medical images (e.g., X-ray). In this paper, Method of Anatomical Features (MAF) which enables creation of geometrically precise and anatomically accurate geometrical models of the human bones is implemented for the creation of the parametric model of the Human Mandible Coronoid Process (HMCP). The obtained results about geometrical accuracy of the model are quite satisfactory, as it is stated by the medical practitioners and confirmed in the literature. PMID:26064183
Third-order spontaneous parametric down-conversion in thin optical fibers as a photon-triplet source
Corona, Maria; Garay-Palmett, Karina; U'Ren, Alfred B.
2011-09-15
We study the third-order spontaneous parametric down-conversion (TOSPDC) process, as a means to generate entangled photon triplets. Specifically, we consider thin optical fibers as the nonlinear medium to be used as the basis for TOSPDC in configurations where phase matching is attained through the use of more than one fiber transverse modes. Our analysis in this paper, which follows from our earlier paper [Opt. Lett. 36, 190-192 (2011)], aims to supply experimentalists with the details required in order to design a TOSPDC photon-triplet source. Specifically, our analysis focuses on the photon triplet state, on the rate of emission, and on the TOSPDC phase-matching characteristics for the cases of frequency-degenerate and frequency nondegenerate TOSPDC.
Nusinovich, G.S.; Vlasov, A.N.
1994-03-01
In microwave sources of coherent Čerenkov radiation the electrons usually propagate near the rippled wall of a slow-wave structure. These ripples cause the periodic modulation of electron potential depression and, therefore, lead to periodic modulation of electron axial velocities. Since the period of this electrostatic pumping is the period of the slow-wave structure, the parametric coupling of electrons to originally nonsynchronous spatial harmonics of the microwave field may occur. This effect can be especially important for backward-wave oscillators (BWOs) driven by high-current relativistic electron beams. In the paper both linear and nonlinear theories of the relativistic resonant BWO with periodic modulation of electron axial velocities are developed, and results illustrating the evolution of the linear gain function and the efficiency of operation in the large-signal regime are presented.
Parametrization of orographic thermal effect on the deep convection triggering in Global Model
NASA Astrophysics Data System (ADS)
Jingmei, Y.; Jean-Yves, G.; Alain, L.
2013-05-01
The work is based on the hypothesis that anabatic winds (or valley breezes) are an important mechanism of deep convection triggering. Induced by the temperature difference between the mountain surface and the environmental air, anabatic winds carry kinetic energy which may eventually overcome the convective inhibition (CIN) of the planetary boundary layer and allow the associated convection to develop into the free troposphere. This sub-grid-scale phenomenon needs a special parametrization in general circulation models (GCMs). Its lack of representation in present GCM versions is thought to be the cause of the deficit of deep-convection genesis observed in certain orographic zones, such as Mount Cameroon in West Africa. A valley-breeze parametrization has been designed and built into a GCM (LMDZ). The model computes the kinetic energy of the valley breeze from the sub-grid-scale orographic characteristics (elevation, slope, orientation). It consists of a thin grid layer along the mountain surface, coupled with a multi-layer conductive-capacitive soil model; the coupling is accomplished through the energy budget at the mountain surface. The model was tested in dynamical mode by systematic sensitivity analysis with respect to the principal parameters and the environmental conditions. It was then implemented in the 1D version of the GCM (SCM, Single Column Model), coupled with the Emanuel deep convection scheme, and tested against a radiative-convective equilibrium case and the HAPEX campaign case. The stationary solution of the airflow part of the model has been adopted for the GCM. Finally, the parametrization has been introduced in the 3D version of the GCM in diagnostic mode (without coupling to the convection process). It gives a spatial distribution of the triggering frequency of deep convection consistent with satellite image observations in the West Africa region during the West African Monsoon.
Shang, Yaping; Xu, Jiangming; Wang, Peng; Li, Xiao; Zhou, Pu; Xu, Xiaojun
2016-09-19
The long-term stability of a laser system is very important in many applications. In this letter, an ultra-stable, broadband, mid-infrared (MIR) optical parametric oscillator (OPO) pumped by a super-fluorescent fiber source is demonstrated. An idler MIR output power of 11.3 W with excellent beam quality was obtained, and the corresponding pump-to-idler conversion slope efficiency was 15.9%. Furthermore, during a 1-h measurement at full-power operation, the peak-to-peak fluctuation of the idler output power was less than 1.9% and the corresponding standard deviation was less than 0.4% RMS, much better than that of a traditional single-mode fiber-laser-pumped OPO system (10.9% peak-to-peak fluctuation and 1.8% RMS standard deviation) in a comparison experiment. To our knowledge, this is the first demonstration of a high-power, ultra-stable OPO system using a mode-free pump source, which offers an effective approach to an ultra-stable MIR source and broadens the range of applications of super-fluorescent fiber sources.
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision-support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the valuation of water by the agricultural sector in particular. To this end, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as competition for collective resources. To identify the model input parameters that influence the results, a parametric sensitivity analysis is performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence these are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) rate of livestock reproduction, iv) maximum crop yield, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. The results indicate high uncertainty in these parameters, which can dramatically skew the model results, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
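The One-Factor-At-A-Time screening step described in this abstract can be sketched as follows; the toy gross-margin model and its parameter names are hypothetical, standing in for the paper's disaggregated farm model.

```python
def ofat_sensitivity(model, baseline, delta=0.1):
    """One-Factor-At-A-Time screening: perturb each parameter by a
    relative step delta, one at a time, and return an elasticity-style
    sensitivity index |dY/Y| / |dp/p| for each parameter."""
    y0 = model(baseline)
    indices = {}
    for name, value in baseline.items():
        perturbed = dict(baseline, **{name: value * (1.0 + delta)})
        indices[name] = abs((model(perturbed) - y0) / y0) / delta
    return indices

# Hypothetical toy gross-margin model: crop revenue minus water cost.
def margin(p):
    return p["crop_yield"] * p["price"] - p["water_supply"] * p["water_cost"]
```

Ranking the resulting indices identifies which parameters deserve the most careful estimation, as in the study's 6-of-10 screening result.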
X-1 to X-Wings: Developing a Parametric Cost Model
NASA Technical Reports Server (NTRS)
Sterk, Steve; McAtee, Aaron
2015-01-01
In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough-order-of-magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper describes the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
Automatic measurement of vertebral body deformations in CT images based on a 3D parametric model
NASA Astrophysics Data System (ADS)
Štern, Darko; Bürmen, Miran; Njagulj, Vesna; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2012-03-01
Accurate and objective evaluation of vertebral body deformations represents an important part of the clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is oriented towards three-dimensional (3D) imaging techniques, the established methods for the evaluation of vertebral body deformations are based on measurements in two-dimensional (2D) X-ray images. In this paper, we propose a method for automatic measurement of vertebral body deformations in computed tomography (CT) images that is based on efficient modeling of the vertebral body shape with a 3D parametric model. By fitting the 3D model to the vertebral body in the image, a quantitative description of normal and pathological vertebral bodies is obtained from the values of 25 parameters of the model. The evaluation of vertebral body deformations is based on the distance of the observed vertebral body from the distribution of the parameter values of normal vertebral bodies in the parametric space. The distribution is obtained from 80 normal vertebral bodies in the training data set and verified with eight normal vertebral bodies in the control data set. The statistically meaningful distance of eight pathological vertebral bodies in the study data set from the distribution of normal vertebral bodies in the parametric space shows that the parameters can be used to successfully model vertebral body deformations in 3D. The proposed method may therefore be used to assess vertebral body deformations in 3D or provide clinically meaningful observations that are not available when using 2D methods that are established in clinical practice.
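Scoring a case by its distance from the distribution of normal cases in a parameter space, as this abstract describes, is commonly done with the Mahalanobis distance. A sketch under that assumption (the paper's 25-parameter vertebral shape model is not reproduced; the 3-parameter training set below is synthetic):

```python
import numpy as np

def mahalanobis(sample, training):
    """Distance of one parameter vector from the distribution of the
    normal training cases (rows: cases, columns: model parameters),
    using the sample mean and covariance of the training set."""
    mean = training.mean(axis=0)
    cov = np.cov(training, rowvar=False)
    diff = np.asarray(sample) - mean
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```

A vector at the training mean scores zero, while a vector far outside the cloud of normal cases scores large, which is the property used to flag pathological deformations.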
Parametric-based brain Magnetic Resonance Elastography using a Rayleigh damping material model.
Petrov, Andrii Y; Sellier, Mathieu; Docherty, Paul D; Chase, J Geoffrey
2014-10-01
The three-parameter Rayleigh damping (RD) model applied to time-harmonic Magnetic Resonance Elastography (MRE) has potential to better characterise fluid-saturated tissue systems. However, it is not uniquely identifiable at a single frequency. One solution to this problem involves simultaneous inverse problem solution of multiple input frequencies over a broad range. As data is often limited, an alternative elegant solution is a parametric RD reconstruction, where one of the RD parameters (μI or ρI) is globally constrained allowing accurate identification of the remaining two RD parameters. This research examines this parametric inversion approach as applied to in vivo brain imaging. Overall, success was achieved in reconstruction of the real shear modulus (μR) that showed good correlation with brain anatomical structures. The mean and standard deviation shear stiffness values of the white and gray matter were found to be 3±0.11kPa and 2.2±0.11kPa, respectively, which are in good agreement with values established in the literature or measured by mechanical testing. Parametric results with globally constrained μI indicate that selecting a reasonable value for the μI distribution has a major effect on the reconstructed ρI image and concomitant damping ratio (ξd). More specifically, the reconstructed ρI image using a realistic μI=333Pa value representative of a greater portion of the brain tissue showed more accurate differentiation of the ventricles within the intracranial matter compared to μI=1000Pa, and ξd reconstruction with μI=333Pa accurately captured the higher damping levels expected within the vicinity of the ventricles. Parametric RD reconstruction shows potential for accurate recovery of the stiffness characteristics and overall damping profile of the in vivo living brain despite its underlying limitations. Hence, a parametric approach could be valuable with RD models for diagnostic MRE imaging with single frequency data. PMID:24986109
NASA Astrophysics Data System (ADS)
Macafee, Allan W.; Pearson, Garry M.
2006-09-01
Over the years, researchers have developed parametric wind models to depict the surface winds within a tropical cyclone (TC). Most models were developed using data from aircraft flights into low-latitude (south of 30°N) TCs in the Atlantic Ocean, Gulf of Mexico, and Caribbean Sea. Such models may not adequately reproduce the midlatitude TC wind field, where synoptic interaction and acceleration are more pronounced. To tailor these models for midlatitude application, latitude-dependent angular size and shape details were added by using new techniques to set values for model input parameters and by incorporating additional field-shaping procedures. A method to assess the different techniques and field-shaping procedures was developed, in which qualitative and quantitative assessments were performed using five parametric models and samples of buoy and 2D surface wind data. Contingency tables and statistical scores such as mean absolute error and bias were used to select the techniques and procedures that create the most realistic depiction of low- and midlatitude TC surface wind fields.
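The abstract does not name the parametric profiles it evaluates; the Holland (1980) gradient-wind profile is a common example of such a model, and mean absolute error and bias are the verification scores it quotes. A hedged sketch, where the shape parameter B, air density, and the test values are illustrative assumptions:

```python
import numpy as np

def holland_wind(r_km, dp_pa, rmax_km, B, rho=1.15):
    """Holland (1980) gradient-wind speed (m/s); Coriolis term neglected.
    dp_pa: central pressure deficit (Pa), rmax_km: radius of max wind."""
    x = (rmax_km / np.asarray(r_km, float)) ** B
    return np.sqrt(B * dp_pa / rho * x * np.exp(-x))

def mae_bias(model, obs):
    """Mean absolute error and bias of modelled vs. observed winds."""
    d = np.asarray(model, float) - np.asarray(obs, float)
    return float(np.mean(np.abs(d))), float(np.mean(d))

r = np.linspace(5.0, 300.0, 600)                       # radii in km
v = holland_wind(r, dp_pa=5000.0, rmax_km=40.0, B=1.5)
```

The profile peaks at r = rmax with V_max = sqrt(B Δp / (ρ e)), which is how the two free shape parameters are usually anchored to observed intensity.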
A Parametric Study of Erupting Flux Rope Rotation: Modeling the 'Cartwheel CME' on 9 April 2008
NASA Technical Reports Server (NTRS)
Kliem, B.; Toeroek, T.; Thompson, W. T.
2012-01-01
The rotation of erupting filaments in the solar corona is addressed through a parametric simulation study of unstable, rotating flux ropes in bipolar force-free initial equilibrium. The Lorentz force due to the external shear-field component and the relaxation of tension in the twisted field are the major contributors to the rotation in this model, while reconnection with the ambient field is of minor importance, due to the field's simple structure. In the low-beta corona, the rotation is not guided by the changing orientation of the vertical field component's polarity inversion line with height. The model yields strong initial rotations which saturate in the corona and differ qualitatively from the profile of rotation vs. height obtained in a recent simulation of an eruption without preexisting flux rope. Both major mechanisms writhe the flux rope axis, converting part of the initial twist helicity, and produce rotation profiles which, to a large part, are very similar within a range of shear-twist combinations. A difference lies in the tendency of twist-driven rotation to saturate at lower heights than shear-driven rotation. For parameters characteristic of the source regions of erupting filaments and coronal mass ejections, the shear field is found to be the dominant origin of rotations in the corona and to be required if the rotation reaches angles of order 90 degrees and higher; it dominates even if the twist exceeds the threshold of the helical kink instability. The contributions by shear and twist to the total rotation can be disentangled in the analysis of observations if the rotation and rise profiles are simultaneously compared with model calculations. The resulting twist estimate allows one to judge whether the helical kink instability occurred. This is demonstrated for the erupting prominence in the "Cartwheel CME" on 9 April 2008, which has shown a rotation of approximately 115 deg. up to a height of 1.5 Solar R above the photosphere. Out of a range of
Advanced parametrical modelling of 24 GHz radar sensor IC packaging components
NASA Astrophysics Data System (ADS)
Kazemzadeh, R.; John, W.; Wellmann, J.; Bala, U. B.; Thiede, A.
2011-08-01
This paper deals with the development of an advanced parametric modelling concept for packaging components of a 24 GHz radar sensor IC used in automotive driver assistance systems. For fast and efficient design of packages for system-in-package (SiP) modules, a simplified model for the description of parasitic electromagnetic effects within the package is desirable, as 3-D field computation becomes inefficient due to the high density of conductive elements of the various signal paths in the package. Lumped-element models for the characterization of the conductive components give a fast indication of a design's signal quality, but so far they do not offer enough flexibility to cover the whole range of geometric arrangements of signal paths in a contemporary package. This work meets that challenge by defining flexible parametric lumped-element models for all basic signal path components, e.g. bond wires, vias, strip lines, bumps and balls.
Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.
2013-10-20
We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
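The filtering step described above can be sketched in one dimension: transform the density field to Fourier space, multiply by a scale-dependent bias, and transform back to obtain a reionization-redshift fluctuation field. The power-law bias form and the parameter values below are illustrative assumptions, not the fitted values of the paper:

```python
import numpy as np

def bias(k, b0, k0, alpha):
    """Scale-dependent linear bias; tends to b0 on large scales (k -> 0)."""
    return b0 / (1.0 + np.abs(k) / k0) ** alpha

def filter_density_to_zreion(delta, dx, b0=0.6, k0=0.2, alpha=1.0):
    """Apply the bias in Fourier space: delta_z(k) = b(k) * delta_m(k)."""
    delta_k = np.fft.fft(delta)
    k = 2.0 * np.pi * np.fft.fftfreq(delta.size, d=dx)
    return np.fft.ifft(bias(k, b0, k0, alpha) * delta_k).real

rng = np.random.default_rng(1)
delta = rng.normal(0.0, 1.0, 4096)      # toy 1-D overdensity field
dz = filter_density_to_zreion(delta, dx=0.5)
```

Because b(k) ≤ b0 everywhere, the filtered field is a smoothed, rescaled copy of the density field, which is what lets the model be grafted onto large-volume N-body density fields.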
Economic policy optimization based on both one stochastic model and the parametric control theory
NASA Astrophysics Data System (ADS)
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of mappings specified by the model, and by estimation of the degree of coincidence of the effects of internal and external shocks on macroeconomic indicators between the estimated nonlinear model and its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
NASA Astrophysics Data System (ADS)
Weaver, R.; Plesko, C. S.; Gisler, G. R.
2013-12-01
We are performing detailed hydrodynamic simulations of the interaction of a strong explosion with sample asteroid objects. The purpose of these simulations is to apply modern hydrodynamic codes that have been well verified and validated (V&V) to the problem of mitigating the hazard from a potentially hazardous object (PHO), an asteroid or comet that is on an Earth-crossing orbit. The code we use for these simulations is the RAGE code from Los Alamos National Laboratory [1-6]. Initial runs were performed using a spherical object. Next we ran simulations using the shape of a known asteroid: 25143 Itokawa. This particular asteroid is not a PHO, but we use its shape to consider the influence of non-spherical objects. The initial work was performed using 2D cylindrically symmetric simulations and simple geometries. We then performed a major fully 3D simulation. For an Itokawa-size object (~500 m) and explosion energies ranging from 0.5 to 1 megaton, the velocities imparted to all of the PHO "rocks" were in all cases many m/s. The velocities calculated were much larger than escape velocity and would preclude re-assembly of the fragments. The dispersion of the asteroid remnants is very directional for a surface burst, with all fragments moving away from the point of the explosion. This detail can be used to time the intercept for maximum movement off the original orbit. Results from these previous studies will be summarized for background. In the new work presented here we show a variety of parametric studies around these initial simulations. We modified the explosion energy by +/- 20% and varied the internal composition from a few large "rocks" to several hundred smaller rocks. The results of these parametric studies will be presented. We have also extended our work [6],[7] to stand-off nuclear bursts and will present the initial results for the energy deposition by a generic source into the non-uniform composition asteroid. The goal of this new work is to
Hybrid Model of Inhomogeneous Solar Wind Plasma Heating by Alfven Wave Spectrum: Parametric Studies
NASA Technical Reports Server (NTRS)
Ofman, L.
2010-01-01
Observations of the solar wind plasma at 0.3 AU and beyond show that a turbulent spectrum of magnetic fluctuations is present. Remote sensing observations of the corona indicate that heavy ions are hotter than protons and their temperature is anisotropic (T⊥/T∥ ≫ 1). We study the heating and the acceleration of multi-ion plasma in the solar wind by a turbulent spectrum of Alfvenic fluctuations using a 2-D hybrid numerical model. In the hybrid model the protons and heavy ions are treated kinetically as particles, while the electrons are included as a neutralizing background fluid. This is the first two-dimensional hybrid parametric study of the solar wind plasma that includes an input turbulent wave spectrum guided by observation with an inhomogeneous background density. We also investigate the effects of He++ ion beams in the inhomogeneous background plasma density on the heating of the solar wind plasma. The 2-D hybrid model treats parallel and oblique waves, together with cross-field inhomogeneity, self-consistently. We investigate the parametric dependence of the perpendicular heating and the temperature anisotropy in the H+-He++ solar wind plasma. It was found that the scaling of the magnetic fluctuations power spectrum steepens in the higher-density regions, and the heating is channeled to these regions from the surrounding lower-density plasma due to wave refraction. The model parameters are applicable to the expected solar wind conditions at about 10 solar radii.
Parametric study of compound semiconductor etching utilizing inductively coupled plasma source
Constantine, C.; Johnson, D.; Barratt, C.
1996-07-01
Inductively Coupled Plasma (ICP) sources are extremely promising for large-area, high-ion density etching or deposition processes. In this review the authors compare results for GaAs and GaN etching with both ICP and Electron Cyclotron Resonance (ECR) sources on the same single-wafer platform. The ICP is shown to be capable of very high rates with excellent anisotropy for fabrication of GaAs vias or deep mesas in GaAs or GaN waveguide structures.
Parametric modeling in distributed optical fiber vibration sensing system for position determination
NASA Astrophysics Data System (ADS)
Wu, Hongyan; Wang, Jian; Jia, Bo
2016-04-01
Distributed optical fiber vibration sensing systems are widely used for monitoring long-distance communication cables and pipelines. When a vibration signal occurs at a particular position along the fiber, the response of the system in the frequency domain presents a series of periodic maxima and minima (or null frequencies). These minima depend on the position of the vibration signal along the fiber. Power spectral estimation methods are considered to denoise the power spectrum of the system and determine these minima precisely. The experimental results show that a parametric model with appropriate selection of the orders p and q locates the position more accurately than the fast Fourier transform algorithm alone.
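A common parametric spectral estimator of this kind fits an autoregressive (AR) model via the Yule-Walker equations; the orders p and q in the abstract imply a full ARMA model, so the AR-only version below is a simplified, hedged sketch. Spectral extrema are then read off the smooth model spectrum instead of a noisy periodogram (the demo locates a peak; null frequencies are located analogously):

```python
import numpy as np

def yule_walker_ar(x, p):
    """Fit AR(p): x[n] = sum_k a[k]*x[n-k] + e[n] via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    N = x.size
    r = np.array([x[: N - k] @ x[k:] / N for k in range(p + 1)])  # biased autocorr
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])          # AR coefficients
    sigma2 = r[0] - a @ r[1:]              # innovation variance
    return a, sigma2

def ar_psd(a, sigma2, freqs, fs):
    """AR model spectrum: sigma^2 / |1 - sum_k a[k] e^{-i w k}|^2."""
    w = 2.0 * np.pi * np.asarray(freqs, float) / fs
    k = np.arange(1, a.size + 1)
    A = 1.0 - np.exp(-1j * np.outer(w, k)) @ a
    return sigma2 / np.abs(A) ** 2

fs, f0 = 1000.0, 100.0                     # illustrative sampling and tone (Hz)
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
a, s2 = yule_walker_ar(x, p=4)
freqs = np.linspace(0.0, 500.0, 2001)
peak = freqs[np.argmax(ar_psd(a, s2, freqs, fs))]
```

The parametric spectrum is smooth by construction, which is why extrema can be located to a fraction of a frequency bin.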
Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System
NASA Technical Reports Server (NTRS)
Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.
2005-01-01
In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.
Animal models of source memory.
Crystal, Jonathon D
2016-01-01
Source memory is the aspect of episodic memory that encodes the origin (i.e., source) of information acquired in the past. Episodic memory (i.e., our memories for unique personal past events) typically involves source memory because those memories focus on the origin of previous events. Source memory is at work when, for example, someone tells a favorite joke to a person while avoiding retelling the joke to the friend who originally shared the joke. Importantly, source memory permits differentiation of one episodic memory from another because source memory includes features that were present when the different memories were formed. This article reviews recent efforts to develop an animal model of source memory using rats. Experiments are reviewed which suggest that source memory is dissociated from other forms of memory. The review highlights strengths and weaknesses of a number of animal models of episodic memory. Animal models of source memory may be used to probe the biological bases of memory. Moreover, these models can be combined with genetic models of Alzheimer's disease to evaluate pharmacotherapies that ultimately have the potential to improve memory.
Yang, Xue; Lauzon, Carolyn B.; Crainiceanu, Ciprian; Caffo, Brian; Resnick, Susan M.; Landman, Bennett A.
2012-01-01
Massively univariate regression and inference in the form of statistical parametric mapping have transformed the way in which multi-dimensional imaging data are studied. In functional and structural neuroimaging, the de facto standard “design matrix”-based general linear regression model and its multi-level cousins have enabled investigation of the biological basis of the human brain. With modern study designs, it is possible to acquire multi-modal three-dimensional assessments of the same individuals — e.g., structural, functional and quantitative magnetic resonance imaging, alongside functional and ligand binding maps with positron emission tomography. Largely, current statistical methods in the imaging community assume that the regressors are non-random. For more realistic multi-parametric assessment (e.g., voxel-wise modeling), distributional consideration of all observations is appropriate. Herein, we discuss two unified regression and inference approaches, model II regression and regression calibration, for use in massively univariate inference with imaging data. These methods use the design matrix paradigm and account for both random and non-random imaging regressors. We characterize these methods in simulation and illustrate their use on an empirical dataset. Both methods have been made readily available as a toolbox plug-in for the SPM software. PMID:22609453
A parametric sizing model for Molten Regolith Electrolysis reactors to produce oxygen on the Moon
NASA Astrophysics Data System (ADS)
Schreiner, Samuel S.; Sibille, Laurent; Dominguez, Jesus A.; Hoffman, Jeffrey A.
2016-04-01
We present a parametric sizing model for a Molten Regolith Electrolysis (MRE) reactor that produces oxygen and molten metals from lunar regolith. The model has a foundation of regolith material property models validated using data from Apollo samples and simulants. A multiphysics simulation of an MRE reactor is developed and leveraged to generate a database linking reactor design and performance trends. A novel design methodology is created which utilizes this database to parametrically design an MRE reactor that can (1) sustain the required current, operating temperature, and mass of molten regolith to meet a desired oxygen production level, (2) operate for long periods of time by protecting the reactor walls from the corrosive molten regolith with a layer of solid "frozen" regolith, and (3) support a range of electrode separations to enable operational flexibility. Mass, power, and performance estimates for an MRE reactor are presented for a range of oxygen production levels. Sensitivity analyses are presented for several design variables, including operating temperature, regolith feedstock composition, and the degree of operational flexibility.
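One first-order sizing relation behind such a reactor is Faraday's law: each O2 molecule evolved requires four electrons, which fixes the electrolysis current for a given oxygen production rate. A minimal sketch, where the production level and the 100% current efficiency default are illustrative assumptions:

```python
F_CONST = 96485.33        # Faraday constant, C/mol
M_O2 = 0.032              # molar mass of O2, kg/mol
YEAR_S = 365.25 * 24 * 3600.0

def electrolysis_current(o2_kg_per_year, current_efficiency=1.0):
    """Steady current (A) needed to evolve the given O2 mass per year.
    Four electrons per O2 molecule; losses folded into current_efficiency."""
    mol_per_s = o2_kg_per_year / M_O2 / YEAR_S
    return 4.0 * F_CONST * mol_per_s / current_efficiency

amps = electrolysis_current(1000.0)       # e.g. 1000 kg O2 per year
```

For 1000 kg of O2 per year this gives roughly 380 A of continuous current, before any allowance for current efficiency below unity.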
Parametric modeling of energy filtering by energy barriers in thermoelectric nanocomposites
Zianni, Xanthippi; Narducci, Dario
2015-01-21
We present a parametric modeling of the thermoelectric transport coefficients based on a model previously used to interpret experimental measurements on the conductivity, σ, and Seebeck coefficient, S, in highly Boron-doped polycrystalline Si, where a very significant thermoelectric power factor (TPF) enhancement was observed. We have derived analytical formalism for the transport coefficients in the presence of an energy barrier assuming thermionic emission over the barrier for (i) non-degenerate and (ii) degenerate one-band semiconductor. Simple generic parametric equations are found that are in agreement with the exact Boltzmann transport formalism in a wide range of parameters. Moreover, we explore the effect of energy barriers in 1-d composite semiconductors in the presence of two phases: (a) the bulk-like phase and (b) the barrier phase. It is pointed out that significant TPF enhancement can be achieved in the composite structure of two phases with different thermal conductivities. The TPF enhancement is estimated as a function of temperature, the Fermi energy position, the type of scattering, and the barrier height. The derived modeling provides guidance for experiments and device design.
NASA Astrophysics Data System (ADS)
Fischer, Cornelia; Bartlome, Richard; Sigrist, Markus W.
2005-04-01
In this paper, we present first results of a spectral characterisation of doping substances using a resonant optoacoustic cell and a Nd:YAG-laser-pumped optical parametric generation (OPG) laser source in the mid-infrared wavelength range between 3.0 and 4.0 μm, with periodically poled LiNbO3 as the nonlinear medium for the frequency conversion. Single spectra covering a wavelength range of about 220 nm can be recorded in less than 2 hours (3 s averaging time, 7 s between consecutive data points, about 0.3 nm step width). Despite the large linewidth of the OPG source of 240 GHz (8 cm-1), the laser spectrometer is well suited for the spectral analysis of these large organic molecules, as they exhibit structured continuum absorption over a wide spectral range rather than isolated absorption peaks. We present measured spectra of ephedrine, alprenolol, ethacrynic acid, etc. and discuss the potential of laser-based detection of doping substances both as a supplement to existing methods and in view of a fast in situ screening technique at sporting events.
Developing two non-parametric performance models for higher learning institutions
NASA Astrophysics Data System (ADS)
Kasim, Maznah Mat; Kashim, Rosmaini; Rahim, Rahela Abdul; Khan, Sahubar Ali Muhamed Nadhar
2016-08-01
Measuring the performance of higher learning institutions (HLIs) is essential if these institutions are to improve their excellence. This paper focuses on the formation of two performance models, an efficiency model and an effectiveness model, by utilizing a non-parametric method, Data Envelopment Analysis (DEA). The proposed models are validated by measuring the performance of 16 public universities in Malaysia for the year 2008. However, since data for one of the variables was unavailable, an estimate was used as a proxy to represent the real data. The results show that the average efficiency and effectiveness scores were 0.817 and 0.900 respectively, while six universities were fully efficient and eight universities were fully effective. A total of six universities were both efficient and effective. It is suggested that the two proposed performance models would work as complementary methods to the existing performance appraisal method or as alternative methods for monitoring the performance of HLIs, especially in Malaysia.
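A DEA efficiency score of the kind quoted above solves one small linear program per university: shrink all of a unit's inputs by a factor θ until no convex combination of peer units dominates it. A hedged sketch of the input-oriented CCR formulation, assuming SciPy is available; the toy data are illustrative, not the paper's university data:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: inputs (m x n), Y: outputs (s x n); columns are decision-making units."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                        # variables [theta, lambda_1..lambda_n]; min theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]           # sum_j lam_j x_ij <= theta * x_ij0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                 # sum_j lam_j y_rj >= y_rj0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

X = np.array([[2.0, 4.0, 2.0]])       # one input, three toy units
Y = np.array([[2.0, 2.0, 1.0]])       # one output
scores = [dea_ccr_input(X, Y, j) for j in range(3)]
```

Unit 0 produces the most output per unit input and scores 1.0 (fully efficient); the other two need only half their inputs to match their outputs via unit 0, so they score 0.5.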
NASA Technical Reports Server (NTRS)
Schreiner, Samuel S.; Dominguez, Jesus A.; Sibille, Laurent; Hoffman, Jeffrey A.
2015-01-01
We present a parametric sizing model for a Molten Regolith Electrolysis (MRE) reactor that produces oxygen and molten metals from lunar regolith. The model has a foundation of regolith material properties validated using data from Apollo samples and simulants. A multiphysics simulation of an MRE reactor is developed and leveraged to generate a vast database of reactor performance and design trends. A novel design methodology is created which utilizes this database to parametrically design an MRE reactor that can (1) sustain the required mass of molten regolith, current, and operating temperature to meet the desired oxygen production level, (2) operate for long durations via joule-heated, cold-wall operation in which molten regolith does not touch the reactor side walls, and (3) support a range of electrode separations to enable operational flexibility. Mass, power, and performance estimates for an MRE reactor are presented for a range of oxygen production levels. The effects of several design variables are explored, including operating temperature, regolith type/composition, batch time, and the degree of operational flexibility.
Developing integrated parametric planning models for budgeting and managing complex projects
NASA Technical Reports Server (NTRS)
Etnyre, Vance A.; Black, Ken U.
1988-01-01
The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing cost relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus 1-2-3 as the primary software tool. It implements a methodology for interactive project scheduling and provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
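The trapezoidal segmentation described above integrates a linearly segmented cost-loading function exactly: each segment's cost is the area of a trapezoid, and the project total is their sum. A minimal sketch with illustrative breakpoints:

```python
import numpy as np

def segment_costs(times, rates):
    """times: breakpoints (e.g. days); rates: cost-loading ($/day) at each
    breakpoint, varying linearly in between. The trapezoid area is the
    exact integral of a linear loading function over each segment."""
    times = np.asarray(times, float)
    rates = np.asarray(rates, float)
    return 0.5 * (rates[:-1] + rates[1:]) * np.diff(times)

# Ramp-up, plateau, ramp-down loading profile (illustrative numbers).
seg = segment_costs([0.0, 10.0, 30.0, 40.0], [0.0, 100.0, 100.0, 0.0])
total = seg.sum()
```

Because each segment cost is a closed-form function of its endpoints, shifting a target date or budget constraint only re-evaluates a few algebraic terms, which is what makes the model suitable for interactive replanning.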
A shape constrained parametric active contour model for breast contour detection.
Lee, Juhun; Muralidhar, Gautam S; Reece, Gregory P; Markey, Mia K
2012-01-01
Quantitative measures of breast morphology can help a breast cancer survivor understand the outcomes of reconstructive surgeries. One bottleneck in quantifying breast morphology is that there are only a few reliable automated algorithms for detecting the breast contour. This study proposes a novel approach for detecting the breast contour based on a parametric active contour model. In addition to employing the traditional parametric active contour model, the proposed approach enforces a mathematical shape constraint based on the catenary curve, which has previously been shown to capture the overall shape of the breast contour reliably. The shape constraint regulates the evolution of the active contour and helps the contour evolve towards the breast while minimizing the undesired effects of other structures such as the nipple/areola and scars. The efficacy of the proposed approach was evaluated on anterior-posterior photographs of women who underwent or were scheduled for breast reconstruction surgery, including autologous tissue reconstruction. The proposed algorithm shows promising results for detecting the breast contour.
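The catenary shape constraint can be sketched as a three-parameter fit y = a·cosh((x − b)/a) + c to candidate contour points, with the fitted curve then used to regularize the active contour. The synthetic pixel-scale data and starting values below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, b, c):
    """y = a*cosh((x - b)/a) + c; a sets the curvature, (b, c) locate the vertex."""
    return a * np.cosh((x - b) / a) + c

# Synthetic noisy contour samples in pixel coordinates (illustrative).
rng = np.random.default_rng(3)
x = np.linspace(-80.0, 80.0, 120)
y = catenary(x, 100.0, 5.0, -120.0) + rng.normal(0.0, 0.5, x.size)

popt, _ = curve_fit(catenary, x, y, p0=(80.0, 0.0, -100.0))
a_fit, b_fit, c_fit = popt
```

Near the vertex the catenary behaves like c + a + (x − b)²/(2a), so the curvature of the sampled contour determines a well even over a limited arc.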
NASA Astrophysics Data System (ADS)
Garagnani, S.; Manferdini, A. M.
2013-02-01
Since their introduction, modeling tools aimed at architectural design have evolved into today's "digital multi-purpose drawing boards" based on enhanced parametric elements able to originate whole buildings within virtual environments. Semantic splitting and element topology are features that allow objects to be "intelligent" (i.e. self-aware of what kind of element they are and with whom they can interact), representing the basics of Building Information Modeling (BIM), a coordinated, consistent and always up-to-date workflow intended to deliver higher quality, reliability and cost reductions across the design process. Even if BIM was originally intended for new architectures, its capacity to store semantically inter-related information can be successfully applied to existing buildings as well, especially if they deserve particular care, such as Cultural Heritage sites. BIM engines can easily manage simple parametric geometries, collapsing them to standard primitives connected through hierarchical relationships; however, when components are generated from existing morphologies, for example by acquiring point clouds with digital photogrammetry or laser scanning equipment, complex abstractions have to be introduced while remodeling elements by hand, since automatic feature extraction in available software is still not effective. In order to introduce a methodology for processing point cloud data in a BIM environment with high accuracy, this paper describes some experiences in documenting monumental sites, carried out with a plug-in written for Autodesk Revit and codenamed GreenSpider after its capability to lay out points in space as if they were nodes of an ideal cobweb.
A model-based parametric study of impact force during running.
Zadpoor, Amir Abbas; Nikooyan, Ali Asadi; Arshi, Ahmad Reza
2007-01-01
This paper deals with the impact force during foot-ground impact activities such as running. A previously developed model is used for this study. The model is a lumped-parameter one consisting of four masses connected to each other via linear springs and viscous dampers. A shoe-specific nonlinear function is used for representation of the ground reaction force. The authors previously showed that the earlier version of the model, as well as its simulation, was incorrect. This paper slightly modifies the previous model so that it is able to produce results in agreement with experiments. The modified model is then simulated for two typical shoe types. A parametric study is also conducted, concerning the effects of masses, mass ratios, stiffness constants, and damping coefficients on the dynamics of the impact. It is shown that the impact forces increase as the rigid and wobbling masses increase. However, the increase in the impact forces is not the same for all the masses. It is found that the impact force increases as the touchdown velocities increase. Simulations imply that variations of the damping coefficients result in larger variations of the impact force than variations of the stiffness. The effect of the variation of gravity on the simulated impact force is also explored. It is concluded that both the first and the second peaks of the impact force increase with gravity. An in-depth discussion is included to compare the results of the current paper with the results of other investigators. PMID:17092510
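As a much-simplified one-mass analogue of the four-mass model (an illustrative assumption, not the authors' model), the touchdown dynamics can be simulated by dropping a mass onto a spring-damper ground and recording the peak ground reaction force; all parameter values below are illustrative:

```python
def impact_sim(m=70.0, k=8.0e4, c=800.0, v0=1.0, g=9.81, dt=1e-5, t_end=0.15):
    """One mass touching down at y = 0 with downward speed v0.
    Ground is a spring-damper active only in compression (y < 0).
    Integrated with fixed-step RK4; returns the peak ground reaction force."""
    def grf(y, v):
        return max(0.0, -k * y - c * v) if y < 0.0 else 0.0

    def acc(y, v):
        return (grf(y, v) - m * g) / m

    y, v, peak = 0.0, -v0, 0.0
    for _ in range(int(t_end / dt)):
        k1y, k1v = v, acc(y, v)
        k2y, k2v = v + 0.5 * dt * k1v, acc(y + 0.5 * dt * k1y, v + 0.5 * dt * k1v)
        k3y, k3v = v + 0.5 * dt * k2v, acc(y + 0.5 * dt * k2y, v + 0.5 * dt * k2v)
        k4y, k4v = v + dt * k3v, acc(y + dt * k3y, v + dt * k3v)
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        peak = max(peak, grf(y, v))
    return peak

peak_force = impact_sim()
```

Even this one-mass sketch reproduces the qualitative findings quoted above: the peak force grows with the mass and with the touchdown velocity v0, and exceeds body weight by a factor of a few.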
Modeling and Simulation of a Parametrically Resonant Micromirror With Duty-Cycled Excitation
Shahid, Wajiha; Qiu, Zhen; Duan, Xiyu; Li, Haijun; Wang, Thomas D.; Oldham, Kenn R.
2014-01-01
High frequency large scanning angle electrostatically actuated microelectromechanical systems (MEMS) mirrors are used in a variety of applications involving fast optical scanning. A 1-D parametrically resonant torsional micromirror for use in biomedical imaging is analyzed here with respect to operation by duty-cycled square waves. Duty-cycled square wave excitation can have significant advantages for practical mirror regulation and/or control. The mirror’s nonlinear dynamics under such excitation is analyzed in a Hill’s equation form. This form is used to predict stability regions (the voltage-frequency relationship) of parametric resonance behavior over large scanning angles using iterative approximations for nonlinear capacitance behavior of the mirror. Numerical simulations are also performed to obtain the mirror’s frequency response over several voltages for various duty cycles. Frequency sweeps, stability results, and duty cycle trends from both analytical and simulation methods are compared with experimental results. Both analytical models and simulations show good agreement with experimental results over the range of duty cycled excitations tested. This paper discusses the implications of changing amplitude and phase with duty cycle for robust open-loop operation and future closed-loop operating strategies. PMID:25506188
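For square-wave (on-off) parametric excitation, Hill's equation reduces to the Meissner equation, whose stability can be checked exactly from the monodromy matrix over one pump period: the response is unstable (parametric resonance) when |trace| > 2. A minimal sketch; the stiffness-modulation form and parameter values are illustrative assumptions for an electrostatically driven mirror, not the paper's fitted capacitance model:

```python
import numpy as np

def phi(w, t):
    """State-transition matrix of x'' + w^2 x = 0 over time t."""
    return np.array([[np.cos(w * t), np.sin(w * t) / w],
                     [-w * np.sin(w * t), np.cos(w * t)]])

def monodromy_trace(omega0, h, pump, duty=0.5):
    """Meissner (square-wave Hill) equation: stiffness omega0^2*(1+h)
    for a fraction `duty` of the pump period T, omega0^2*(1-h) otherwise.
    |trace| > 2 of the one-period map implies parametric instability."""
    T = 2.0 * np.pi / pump
    M = phi(omega0 * np.sqrt(1.0 - h), (1.0 - duty) * T) @ \
        phi(omega0 * np.sqrt(1.0 + h), duty * T)
    return float(np.trace(M))
```

Pumping at twice the natural frequency (the principal parametric resonance) is unstable for sufficient modulation depth, while a detuned pump of the same depth remains stable; sweeping `pump` and `duty` traces out the voltage-frequency stability regions described above.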
Expert-Guided Generative Topographical Modeling with Visual to Parametric Interaction
Han, Chao; House, Leanna; Leman, Scotland C
2016-01-01
Introduced by Bishop et al. in 1996, Generative Topographic Mapping (GTM) is a powerful nonlinear latent variable modeling approach for visualizing high-dimensional data. It has proven useful when typical linear methods fail. However, GTM still suffers from drawbacks. Its complex parameterization of data makes GTM hard to fit and sensitive to slight changes in the model. For this reason, we extend GTM to a visual analytics framework so that users may guide the parameterization and assess the data from multiple GTM perspectives. Specifically, we develop the theory and methods for Visual to Parametric Interaction (V2PI) with data using GTM visualizations. The result is a dynamic version of GTM that fosters data exploration. We refer to the new version as V2PI-GTM. In this paper, we develop V2PI-GTM in stages and demonstrate its benefits within the context of a text mining case study. PMID:26905728
Martinez, L C; Calzado, A
2016-01-01
A parametric model is used for the calculation of the CT number of some selected human tissues of known compositions (H_i) in two hybrid systems, one SPECT-CT and one PET-CT. Only one well-characterized substance, not necessarily tissue-like, needs to be scanned with the protocol of interest. The linear attenuation coefficients of these tissues for some energies of interest (μ_i) have been calculated from their tabulated compositions and the NIST databases. These coefficients have been compared with those calculated with the bilinear model from the CT number (μ_B,i). No relevant differences have been found for bones and lung. In the soft tissue region, the differences can be up to 5%. These discrepancies are attributed to the different chemical compositions for the tissues assumed by both methods.
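For reference, the bilinear model mentioned above converts a CT number into a linear attenuation coefficient with two linear segments, interpolating air-to-water below 0 HU and water-to-bone above it. A minimal sketch, assuming illustrative μ values (roughly 511 keV magnitudes) and a 1000 HU bone breakpoint that would in practice be calibrated per scanner and energy:

```python
def mu_from_hu(hu, mu_water, mu_bone, hu_bone=1000.0):
    """Bilinear CT-number -> linear attenuation coefficient (cm^-1).

    Below 0 HU the curve interpolates between air (mu ~ 0 at -1000 HU)
    and water; above 0 HU it interpolates between water and a cortical
    bone reference at `hu_bone`. The mu values and breakpoint are
    assumptions, not parameters from the paper.
    """
    if hu <= 0:
        return mu_water * (1.0 + hu / 1000.0)
    return mu_water + hu * (mu_bone - mu_water) / hu_bone

# Illustrative use at ~511 keV: water ~0.096 /cm, cortical bone ~0.172 /cm
mu_soft = mu_from_hu(40.0, 0.096, 0.172)
```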
Uncertainties in volcanic plume modeling: a parametric study using FPLUME model
NASA Astrophysics Data System (ADS)
Macedonio, Giovanni; Costa, Antonio; Folch, Arnau
2016-04-01
Tephra transport and dispersal models are commonly used for volcanic hazard assessment and tephra dispersal (ash cloud) forecasts. The proper quantification of the parameters defining the source term in the dispersal models, and in particular the estimation of the mass eruption rate, plume height, and particle vertical mass distribution, is of paramount importance for obtaining reliable results in terms of particle mass concentration in the atmosphere and loading on the ground. The study builds upon numerical simulations using FPLUME, an integral steady-state model based on the Buoyant Plume Theory, generalized in order to account for volcanic processes (particle fallout and re-entrainment, water phase changes, effects of wind, etc.). As reference cases for strong and weak plumes, we consider the cases defined during the IAVCEI Commission on tephra hazard modeling inter-comparison exercise. The goal was to explore the leading-order role of each parameter in order to assess which should be better constrained to better quantify the eruption source parameters for use by the dispersal models. Moreover, a sensitivity analysis investigates the role of wind entrainment and intensity, atmospheric humidity, water phase changes, and particle fallout and re-entrainment. Results show that the leading-order parameters are the mass eruption rate and the air entrainment coefficient, especially for weak plumes.
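To fix ideas, one common schematic form of the Buoyant Plume Theory mass-conservation equation with wind entrainment (the equation in which the air entrainment coefficients identified above enter) is

```latex
\frac{d}{dz}\bigl(\rho\, u\, r^{2}\bigr) = 2\, r\, \rho_{a}\, u_{e},
\qquad
u_{e} = \alpha_{s}\,\lvert u - w\cos\theta\rvert + \alpha_{v}\,\lvert w\sin\theta\rvert ,
```

where ρ and u are the plume density and axial velocity, r the plume radius, ρ_a the ambient air density, w the wind speed, θ the plume inclination, and α_s, α_v the radial ("shear") and cross-flow ("vortex") entrainment coefficients. This is only the mass equation in schematic form; FPLUME couples it to momentum, energy, moisture, and particle balance equations not reproduced here.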
Bayesian Semi- and Non-parametric Models for Longitudinal Data with Multiple Membership Effects in R
Savitsky, Terrance D.; Paddock, Susan M.
2014-01-01
We introduce growcurves for R that performs analysis of repeated measures multiple membership (MM) data. This data structure arises in studies under which an intervention is delivered to each subject through the subject's participation in a set of multiple elements that characterize the intervention. In our motivating study design under which subjects receive a group cognitive behavioral therapy (CBT) treatment, an element is a group CBT session and each subject attends multiple sessions that, together, comprise the treatment. The sets of elements, or group CBT sessions, attended by subjects will partly overlap with some of those from other subjects to induce a dependence in their responses. The growcurves package offers two alternative sets of hierarchical models: 1. Separate terms are specified for multivariate subject and MM element random effects, where the subject effects are modeled under a Dirichlet process prior to produce a semi-parametric construction; 2. A single term is employed to model joint subject-by-MM effects. A fully non-parametric dependent Dirichlet process formulation allows exploration of differences in subject responses across different MM elements. This model allows for borrowing information among subjects who express similar longitudinal trajectories for flexible estimation. growcurves deploys “estimation” functions to perform posterior sampling under a suite of prior options. An accompanying set of “plot” functions allow the user to readily extract by-subject growth curves. The design approach intends to anticipate inferential goals with tools that fully extract information from repeated measures data. Computational efficiency is achieved by performing the sampling for estimation functions using compiled C++. PMID:25400517
Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien
2016-07-01
Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require. PMID:26972806
NASA Astrophysics Data System (ADS)
Seshadreesan, Kaushik P.; Takeoka, Masahiro; Sasaki, Masahide
2016-04-01
Device-independent quantum key distribution (DIQKD) guarantees unconditional security of a secret key without making assumptions about the internal workings of the devices used for distribution. It does so using the loophole-free violation of a Bell's inequality. The primary challenge in realizing DIQKD in practice is the detection loophole problem that is inherent to photonic tests of Bell's inequalities over lossy channels. We revisit the proposal of Curty and Moroder [Phys. Rev. A 84, 010304(R) (2011), 10.1103/PhysRevA.84.010304] to use a linear optics-based entanglement-swapping relay (ESR) to counter this problem. We consider realistic models for the entanglement sources and photodetectors: more precisely, (a) polarization-entangled states based on pulsed spontaneous parametric down-conversion sources including higher-order multiphoton components and multimode spectral structure, and (b) on-off photodetectors with nonunit efficiencies and nonzero dark-count probabilities. We show that the ESR-based scheme is robust against the above imperfections and enables positive key rates at distances much larger than what is possible otherwise.
NASA Astrophysics Data System (ADS)
Alsing, Paul M.
2015-04-01
In this paper we extend the investigation of Adami and Ver Steeg (2014 Class. Quantum Grav. 31 075015) to treat the process of black hole (BH) particle emission effectively as the analogous quantum optical process of parametric down conversion with a dynamical (depleted versus non-depleted) ‘pump’ source mode which models the evaporating BH energy degree of freedom. We investigate both the short time (non-depleted pump) and long time (depleted pump) regimes of the quantum state and its impact on the Holevo channel capacity for communicating information from the far past to the far future in the presence of Hawking radiation. The new feature introduced in this work is the coupling of the emitted Hawking radiation modes through the common BH ‘source pump’ mode which phenomenologically represents a quantized energy degree of freedom of the gravitational field. This (zero-dimensional) model serves as a simplified arena to explore BH particle production/evaporation and back-action effects under an explicitly unitary evolution that enforces quantized energy/particle conservation. Within our analogous quantum optical model we examine the entanglement between two emitted particle/anti-particle and anti-particle/particle pairs coupled via the BH evaporating ‘pump’ source. We also analytically and dynamically verify the ‘Page information time’ for our model, which refers to the conventionally held belief that the information in the BH radiation becomes significant after the BH has evaporated half its initial energy into the outgoing radiation. Lastly, we investigate the effect of BH particle production/evaporation on two modes in the exterior region of the BH event horizon that are initially maximally entangled, when one mode falls inward and interacts with the BH, and the other remains forever outside and non-interacting.
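For orientation, the standard trilinear (quantized-pump) parametric down-conversion interaction that this kind of model is built on has the form

```latex
\hat{H}_{\mathrm{int}} = i\hbar\,\kappa\left(\hat{a}^{\dagger}\hat{b}^{\dagger}\hat{c} - \hat{a}\,\hat{b}\,\hat{c}^{\dagger}\right),
```

where ĉ is the pump mode (here playing the role of the BH energy degree of freedom) and â, b̂ the signal/idler pair (here the emitted particle/anti-particle modes); the non-depleted-pump regime corresponds to replacing ĉ with a fixed classical amplitude. This is the generic quantum-optics Hamiltonian, quoted as context rather than the paper's specific model.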
Fuzzy modeling for chaotic systems via interval type-2 T-S fuzzy model with parametric uncertainty
NASA Astrophysics Data System (ADS)
Hasanifard, Goran; Gharaveisi, Ali Akbar; Vali, Mohammad Ali
2014-02-01
A motivation for using fuzzy systems stems in part from the fact that they are particularly suitable for processes where the physical systems or qualitative criteria are too complex to model, and they have provided an efficient and effective way to control complex uncertain nonlinear systems. To realize a fuzzy model-based design for chaotic systems, it is mostly preferred to represent them by T-S fuzzy models. In this paper, a new fuzzy modeling method has been introduced for chaotic systems via the interval type-2 Takagi-Sugeno (IT2 T-S) fuzzy model. An IT2 fuzzy model is proposed to represent a chaotic system subjected to parametric uncertainty, covered by the lower and upper membership functions of the interval type-2 fuzzy sets. Investigating many well-known chaotic systems, it is obvious that nonlinear terms have a single common variable or they depend only on one variable. If it is taken as the premise variable of fuzzy rules and another premise variable is defined subject to parametric uncertainties, a simple IT2 T-S fuzzy dynamical model can be obtained and will represent many well-known chaotic systems. This IT2 T-S fuzzy model can be used for physical application, chaotic synchronization, etc. The proposed approach is numerically applied to the well-known Lorenz system and Rossler system in MATLAB environment.
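As context, the generic rule form of an interval type-2 T-S fuzzy model (not the paper's specific rule base) is

```latex
\text{Rule } i:\quad
\text{IF } z_{1}(t) \text{ is } \tilde{M}_{i1} \text{ and } \ldots \text{ and } z_{p}(t) \text{ is } \tilde{M}_{ip},
\quad \text{THEN } \dot{x}(t) = A_{i}\,x(t),
```

where each antecedent set \(\tilde{M}_{ij}\) is an interval type-2 fuzzy set whose membership grade is bounded between a lower and an upper membership function, so that the parametric uncertainty of the chaotic system is absorbed into that footprint of uncertainty rather than into extra rules.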
Adams, Matthew S.; Scott, Serena J.; Salgaonkar, Vasant A.; Sommer, Graham; Diederich, Chris J.
2016-01-01
Purpose: To investigate endoluminal ultrasound applicator configurations for volumetric thermal ablation and hyperthermia of pancreatic tumors using 3D acoustic and biothermal finite element models. Materials and Methods: Parametric studies compared endoluminal heating performance for varying applicator transducer configurations (planar, curvilinear-focused, or radial-diverging), frequencies (1–5 MHz), and anatomical conditions. Patient-specific pancreatic head and body tumor models were used to evaluate feasibility of generating hyperthermia and thermal ablation using an applicator positioned in the duodenal or stomach lumen. Temperature and thermal dose were calculated to define ablation (>240 EM43°C) and moderate hyperthermia (40–45 °C) boundaries, and to assess sparing of sensitive tissues. Proportional-integral control was incorporated to regulate maximum temperature to 70–80 °C for ablation and 45 °C for hyperthermia in target regions. Results: Parametric studies indicated that 1–3 MHz planar transducers are most suitable for volumetric ablation, producing 5–8 cm3 lesion volumes for a stationary 5 minute sonication. Curvilinear-focused geometries produce more localized ablation to 20–45 mm depth from the GI tract and enhance thermal sparing (Tmax<42 °C) of the luminal wall. Patient anatomy simulations show feasibility in ablating 60.1–92.9% of head/body tumor volumes (4.3–37.2 cm3) with dose <15 EM43°C in the luminal wall for 18–48 min treatment durations, using 1–3 applicator placements in GI lumen. For hyperthermia, planar and radial-diverging transducers could maintain up to 8 cm3 and 15 cm3 of tissue, respectively, between 40–45 °C for a single applicator placement. Conclusions: Modeling studies indicate the feasibility of endoluminal ultrasound for volumetric thermal ablation or hyperthermia treatment of pancreatic tumor tissue. PMID:27097663
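The EM43°C thermal dose used above is conventionally the Sapareto–Dewey cumulative-equivalent-minutes measure. A minimal sketch of its computation from a temperature history, assuming the common R = 0.5/0.25 convention (the abstract's 240 EM43°C ablation threshold is a property of the study, not of this sketch):

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 deg C (Sapareto-Dewey).

    temps_c: sequence of tissue temperatures (deg C), one per time step
    dt_min:  time step in minutes
    R = 0.5 at or above 43 deg C, 0.25 below (common convention).
    """
    total = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        total += dt_min * r ** (43.0 - t)
    return total
```

For example, holding tissue at 44 °C doubles the dose rate relative to 43 °C, so five minutes at 44 °C accrues 10 equivalent minutes.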
Parametric geometric model and shape optimization of an underwater glider with blended-wing-body
NASA Astrophysics Data System (ADS)
Sun, Chunya; Song, Baowei; Wang, Peng
2015-11-01
The underwater glider, as a new kind of autonomous underwater vehicle, has many merits such as long range, extended duration and low cost. The shape of an underwater glider is an important factor in determining the hydrodynamic efficiency. In this paper, a high lift to drag ratio configuration, the Blended-Wing-Body (BWB), is used to design a small civilian underwater glider. In the parametric geometric model of the BWB underwater glider, the planform is defined with a Bezier curve and a linear line, and the section is defined with the symmetrical airfoil NACA 0012. Computational investigations are carried out to study the hydrodynamic performance of the glider using the commercial Computational Fluid Dynamics (CFD) code Fluent. The Kriging-based genetic algorithm, called Efficient Global Optimization (EGO), is applied to hydrodynamic design optimization. The result demonstrates that the BWB underwater glider has excellent hydrodynamic performance, and the lift to drag ratio of the initial design is increased by 7% in the EGO process.
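A Bezier-curve planform like the one described can be evaluated with the de Casteljau algorithm. A minimal sketch; the control points for the actual BWB planform are design variables not given in the abstract, so any numbers used with this function are illustrative:

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a Bezier curve at parameter values t via de Casteljau.

    ctrl: (n+1, 2) array-like of control points
    t:    iterable of parameter values in [0, 1]
    Returns an array of curve points, one per t.
    """
    pts = np.asarray(ctrl, dtype=float)
    t = np.asarray(t, dtype=float)[:, None, None]
    # One copy of the control polygon per parameter value
    p = np.broadcast_to(pts, (t.shape[0],) + pts.shape).copy()
    while p.shape[1] > 1:
        # Repeated linear interpolation between adjacent points
        p = (1 - t) * p[:, :-1, :] + t * p[:, 1:, :]
    return p[:, 0, :]

# Illustrative quadratic planform edge
edge = bezier([[0, 0], [1, 2], [2, 0]], [0.0, 0.5, 1.0])
```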
Vard, Alireza; Jamshidi, Kamal; Movahhedinia, Naser
2012-06-01
This paper presents a fully automated approach to detect the intima and media-adventitia borders in intravascular ultrasound images based on parametric active contour models. To detect the intima border, we compute a new image feature applying a combination of short-term autocorrelations calculated for the contour pixels. These feature values are employed to define an energy function of the active contour called normalized cumulative short-term autocorrelation. Exploiting this energy function, the intima border is separated accurately from the blood region contaminated by high speckle noise. To extract media-adventitia boundary, we define a new form of energy function based on edge, texture and spring forces for the active contour. Utilizing this active contour, the media-adventitia border is identified correctly even in presence of branch openings and calcifications. Experimental results indicate accuracy of the proposed methods. In addition, statistical analysis demonstrates high conformity between manual tracing and the results obtained by the proposed approaches.
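For context, parametric active contours of this kind minimize the classical snake energy

```latex
E[v] = \int_{0}^{1} \left[ \tfrac{1}{2}\left( \alpha\,\lVert v'(s)\rVert^{2} + \beta\,\lVert v''(s)\rVert^{2} \right) + E_{\mathrm{ext}}\bigl(v(s)\bigr) \right] ds ,
```

where v(s) is the contour, α and β weight elasticity and rigidity, and E_ext attracts the contour to image features. The paper's contributions — the normalized cumulative short-term autocorrelation term for the intima and the edge/texture/spring forces for the media-adventitia — enter through E_ext; the formula above is the generic framework, not their specific functional.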
Skew-Quad Parametric-Resonance Ionization Cooling: Theory and Modeling
Afanaciev, Andre; Derbenev, Yaroslav S.; Morozov, Vasiliy; Sy, Amy; Johnson, Rolland P.
2015-09-01
Muon beam ionization cooling is a key component for the next generation of high-luminosity muon colliders. To reach adequately high luminosity without excessively large muon intensities, it was proposed previously to combine ionization cooling with techniques using a parametric resonance (PIC). Practical implementation of the PIC proposal is the subject of this report. We show that an addition of skew quadrupoles to a planar PIC channel gives enough flexibility in the design to avoid unwanted resonances, while meeting the requirements of radially-periodic beam focusing at ionization-cooling plates, large dynamic aperture, and an oscillating dispersion needed for aberration corrections. Theoretical arguments are corroborated with models and a detailed numerical analysis, providing step-by-step guidance for the design of a skew-quad PIC (SPIC) beamline.
Moment stability for a predator-prey model with parametric dichotomous noises
NASA Astrophysics Data System (ADS)
Jin, Yan-Fei
2015-06-01
In this paper, we investigate the solution moment stability for a Harrison-type predator-prey model with parametric dichotomous noises. Using the Shapiro-Loginov formula, the equations for the first-order and second-order moments are obtained and the corresponding stable conditions are given. It is found that the solution moment stability depends on the noise intensity and correlation time of noise. The first-order and second-order moments become unstable with the decrease of correlation time. That is, the dichotomous noise can improve the solution moment stability with respect to Gaussian white noise. Finally, some numerical results are presented to verify the theoretical analyses. Project supported by the National Natural Science Foundation of China (Grant No. 11272051).
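The Shapiro-Loginov formula invoked above is the standard differentiation rule that closes the moment equations for dichotomous noise:

```latex
\frac{d}{dt}\bigl\langle \xi(t)\, f(t) \bigr\rangle
= \Bigl\langle \xi(t)\,\frac{df}{dt} \Bigr\rangle
- \frac{1}{\tau_{c}}\,\bigl\langle \xi(t)\, f(t) \bigr\rangle ,
```

valid for a dichotomous Markov process ξ(t) ∈ {−σ, +σ} with correlation ⟨ξ(t)ξ(t′)⟩ = σ² e^{−|t−t′|/τ_c}. Applying it to the predator-prey equations yields closed linear systems for the first- and second-order moments whose stability can then be read off; the formula itself is textbook material, quoted here for the reader's convenience.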
Parametric Packet-Layer Model for Evaluating Audio Quality in Multimedia Streaming Services
NASA Astrophysics Data System (ADS)
Egi, Noritsugu; Hayashi, Takanori; Takahashi, Akira
We propose a parametric packet-layer model for monitoring audio quality in multimedia streaming services such as Internet protocol television (IPTV). This model estimates audio quality of experience (QoE) on the basis of quality degradation due to coding and packet loss of an audio sequence. The input parameters of this model are audio bit rate, sampling rate, frame length, packet-loss frequency, and average burst length. Audio bit rate, packet-loss frequency, and average burst length are calculated from header information in received IP packets. For sampling rate, frame length, and audio codec type, the values or the names used in monitored services are input into this model directly. We performed a subjective listening test to examine the relationships between these input parameters and perceived audio quality. The codec used in this test was the Advanced Audio Codec-Low Complexity (AAC-LC), which is one of the international standards for audio coding. On the basis of the test results, we developed an audio quality evaluation model. The verification results indicate that audio quality estimated by the proposed model has a high correlation with perceived audio quality.
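The model's structure — a coding-quality term set by bitrate, degraded by a packet-loss term driven by loss frequency and burst length — can be sketched as a toy estimator. The functional form below is generic and ALL coefficients are hypothetical; the paper fits its own model from subjective AAC-LC listening tests:

```python
import math

def estimate_audio_mos(bitrate_kbps, loss_freq, burst_len):
    """Toy parametric packet-layer audio-quality estimate on a 1-5 scale.

    bitrate_kbps: audio bit rate
    loss_freq:    packet-loss frequency (0..1)
    burst_len:    average loss-burst length in packets
    The saturating coding curve and the loss-impact constants (48, 25)
    are illustrative assumptions, not the paper's fitted parameters.
    """
    # Coding quality rises with bitrate and saturates toward 5
    q_cod = 1.0 + 4.0 * (1.0 - math.exp(-bitrate_kbps / 48.0))
    # Packet loss scales the achievable quality down toward 1
    loss_impact = 1.0 / (1.0 + 25.0 * loss_freq * burst_len)
    return 1.0 + (q_cod - 1.0) * loss_impact
```

The design choice mirrors the abstract: header-derived inputs (bitrate, loss frequency, burst length) feed a closed-form curve, so quality can be monitored without decoding the audio payload.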
NASA Astrophysics Data System (ADS)
Kozlovská, Mária; Čabala, Jozef; Struková, Zuzana
2014-11-01
Information technology is becoming a strong tool in different industries, including construction. The recent trend in building design is leading to the creation of the most comprehensive virtual building model (Building Information Model) in order to solve all the problems relating to the project as early as in the designing phase. Building information modelling is a new way of approaching the design of building project documentation. Currently, the building site layout as a part of the building design documents has very little support in the BIM environment. Recently, the research of designing the construction process conditions has centred on improvement of general practice in planning and on new approaches to construction site layout planning. The state of the art in the field of designing the construction process conditions indicated an unexplored problem related to connection of a knowledge system with construction site facilities (CSF) layout through interactive modelling. The goal of the paper is to present the methodology for execution of a 3D construction site facility allocation model (3D CSF-IAM), based on the principles of parametric and interactive modelling.
The use of algorithmic behavioural transfer functions in parametric EO system performance models
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.
2015-10-01
The use of mathematical models to predict the overall performance of an electro-optic (EO) system is well-established as a methodology and is used widely to support requirements definition and system design, and to produce performance predictions. Traditionally these models have been based upon cascades of transfer functions derived from established physical theory, such as the calculation of signal levels from radiometry equations, as well as the use of statistical models. However, the performance of an EO system is increasingly being dominated by the on-board processing of the image data, and this automated interpretation of image content is complex in nature and presents significant modelling challenges. Models and simulations of EO systems tend to either involve processing of image data as part of a performance simulation (image-flow) or else a series of mathematical functions that attempt to define the overall system characteristics (parametric). The former approach is generally more accurate but statistically and theoretically weak in terms of specific operational scenarios, and is also time consuming. The latter approach is generally faster but is unable to provide accurate predictions of a system's performance under operational conditions. An alternative and novel architecture is presented in this paper which combines the processing speed attributes of parametric models with the accuracy of image-flow representations in a statistically valid framework. An additional dimension needed to create an effective simulation is a robust software design whose architecture reflects the structure of the EO system and its interfaces. As such, the design of the simulator can be viewed as a software prototype of a new EO system or an abstraction of an existing design. This new approach has been used successfully to model a number of complex military systems and has been shown to combine improved performance estimation with speed of computation. Within the paper, details of the approach are presented.
NASA Astrophysics Data System (ADS)
Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald
2016-09-01
The radio sources within the most recent celestial reference frame (CRF) catalog ICRF2 are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet, such statistics are not applicable unconditionally and are also ambiguous. However, ignoring systematics in the source positions of the datum sources inevitably leads to a degradation of the quality of the frame and, therefore, also of the derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for the station positions. We decided to use the multivariate adaptive regression splines algorithm to parametrize the source coordinates. It allows a great deal of automation, by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data autonomously. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This allows us to introduce also special handling sources into the datum definition, leading to on average 30% more sources in the datum. We find that not only can the CPO be improved by more than 10% due to the improved geometry, but also the station positions, especially in the early years of VLBI, can benefit greatly.
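The key idea — replacing a single time-invariant coordinate with a continuous piecewise-linear (hinge-basis) fit to the position time series — can be sketched with fixed knots. This is a simplified stand-in: MARS additionally selects the knots and the number of pieces automatically, which this sketch does not attempt.

```python
import numpy as np

def piecewise_linear_fit(t, y, knots):
    """Least-squares continuous piecewise-linear fit with fixed knots.

    Basis: 1, t, and hinge functions max(0, t - k) for each knot k,
    i.e. the linear spline basis that MARS builds from automatically.
    Returns the fitted values at the sample times t.
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    cols = [np.ones_like(t), t] + [np.maximum(0.0, t - k) for k in knots]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Illustrative: a coordinate series with a slope change at t = 5
t = np.linspace(0.0, 10.0, 50)
y = np.where(t < 5.0, t, 2.0 * t - 5.0)
fit = piecewise_linear_fit(t, y, [5.0])
```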
NASA Astrophysics Data System (ADS)
Wang, Bao; Zhao, Zhixiong; Wei, Guo-Wei
2016-09-01
In this work, a systematic protocol is proposed to automatically parametrize the non-polar part of implicit solvent models with polar and non-polar components. The proposed protocol utilizes either the classical Poisson model or the Kohn-Sham density functional theory based polarizable Poisson model for modeling polar solvation free energies. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the polar component. For the non-polar component, either the standard model of surface area, molecular volume, and van der Waals interactions or a model with atomic surface areas and molecular volume is employed. To automatically parametrize a non-polar model, we develop scoring and ranking algorithms to classify solute molecules. Their non-polar parametrization is obtained based on the assumption that similar molecules have similar parametrizations. A large database with 668 experimental data is collected and employed to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five subsets of the database, i.e., SAMPL0-SAMPL4, are employed to further demonstrate the proposed protocol. The optimal RMS errors are 0.93, 2.82, 1.90, 0.78, and 1.03 kcal/mol, respectively, for the SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 test sets. The corresponding RMS errors for the polarizable Poisson model with the Amber Bondi radii are 0.93, 2.89, 1.90, 1.16, and 1.07 kcal/mol, respectively.
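Schematically, the two non-polar forms described above can be written as

```latex
\Delta G_{\mathrm{np}} = \gamma\, A + p\, V + \Delta G_{\mathrm{vdW}}
\qquad \text{or} \qquad
\Delta G_{\mathrm{np}} = \sum_{i} \gamma_{i}\, A_{i} + p\, V ,
```

where A is the solvent-accessible surface area (A_i its atomic decomposition), V the molecular volume, γ (or the per-atom γ_i) a surface-tension-like coefficient, p a pressure-like coefficient, and ΔG_vdW the solute-solvent van der Waals term. This is a schematic summary of the component structure the abstract names, not the paper's exact parametrized functional.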
Parametric retrieval model for estimating aerosol size distribution via the AERONET, LAGOS station.
Emetere, Moses Eterigho; Akinyemi, Marvel Lola; Akin-Ojo, Omololu
2015-12-01
The size characteristics of atmospheric aerosol over the tropical region of Lagos, Southern Nigeria were investigated using two years of continuous spectral aerosol optical depth measurements via the AERONET station for four major bands, i.e. blue, green, red and infrared. Lagos lies at a latitude of 6.465°N and longitude of 3.406°E. A few systems of dispersion models were derived under specified conditions to solve challenges in aerosol size distribution within the Stokes regime. The dispersion model was adopted to derive an aerosol size distribution (ASD) model which is in perfect agreement with an existing model. The parametric nature of the formulated ASD model shows the independence of each band to determine the ASD over an area. The turbulent flow of particulates over the area was analyzed using the unified number (Un). A comparative study, with the aid of the Davis automatic weather station, was carried out on the Reynolds number, Knudsen number and the unified number. The Reynolds and unified numbers were more accurate in describing the atmospheric fields of the location. The aerosol loading trend in January to March (JFM) and August to October (ASO) shows a yearly 15% retention of aerosols in the atmosphere. The effect of the yearly aerosol retention can be seen to partly influence the aerosol loadings between October and February. PMID:26452005
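Two of the dimensionless numbers compared above have standard definitions that are easy to state. A minimal sketch (the paper's "unified number" is specific to that work and is not reproduced here):

```python
def reynolds(rho, v, d, mu):
    """Particle Reynolds number Re = rho * v * d / mu (SI units):
    fluid density rho (kg/m^3), relative speed v (m/s),
    particle diameter d (m), dynamic viscosity mu (Pa s)."""
    return rho * v * d / mu

def knudsen(mean_free_path, d):
    """Knudsen number Kn = lambda / d for a particle of diameter d,
    with lambda the gas mean free path (both in metres)."""
    return mean_free_path / d

# Illustrative values: a 1-micron particle in near-surface air
re = reynolds(1.2, 1.0, 1e-6, 1.8e-5)
kn = knudsen(68e-9, 1e-6)
```

Re ≪ 1 places such particles in the Stokes regime mentioned in the abstract, while Kn indicates whether slip corrections to Stokes drag are needed.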
NASA Technical Reports Server (NTRS)
Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith
2000-01-01
This paper presents an overview of a parametric cost model that has been built at JPL to estimate costs of future, deep space, robotic science missions. Due to the recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as develop new products (DNP), high-level historic cost data is no longer considered analogous to future missions. Therefore, the historic data is of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion and also for combining actual data with expert opinion to provide a cost database for future missions. In addition, the DNP cost model uses a maximum of objective cost drivers, which reduces the likelihood of model input error. Version 2 is now under development which expands the model capabilities, links it more tightly with key design technical parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model will be discussed, as well as its background, development approach, status, validation, and future plans.
Ji, Songbai; Ghadyani, Hamidreza; Bolander, Richard P; Beckwith, Jonathan G; Ford, James C; McAllister, Thomas W; Flashman, Laura A; Paulsen, Keith D; Ernstrom, Karin; Jain, Sonia; Raman, Rema; Zhang, Liying; Greenwald, Richard M
2014-01-01
A number of human head finite element (FE) models have been developed from different research groups over the years to study the mechanisms of traumatic brain injury. These models can vary substantially in model features and parameters, making it important to evaluate whether simulation results from one model are readily comparable with another, and whether response-based injury thresholds established from a specific model can be generalized when a different model is employed. The purpose of this study is to parametrically compare regional brain mechanical responses from three validated head FE models to test the hypothesis that regional brain responses are dependent on the specific head model employed as well as the region of interest (ROI). The Dartmouth Scaled and Normalized Model (DSNM), the Simulated Injury Monitor (SIMon), and the Wayne State University Head Injury Model (WSUHIM) were selected for comparisons. For model input, 144 unique kinematic conditions were created to represent the range of head impacts sustained by male collegiate hockey players during play. These impacts encompass the 50th, 95th, and 99th percentile peak linear and rotational accelerations at 16 impact locations around the head. Five mechanical variables (strain, strain rate, strain × strain rate, stress, and pressure) in seven ROIs reported from the FE models were compared using Generalized Estimating Equation statistical models. Highly significant differences existed among FE models for nearly all output variables and ROIs. The WSUHIM produced substantially higher peak values for almost all output variables regardless of the ROI compared to the DSNM and SIMon models (p < 0.05). DSNM also produced significantly different stress and pressure compared with SIMon for all ROIs (p < 0.05), but such differences were not consistent across ROIs for other variables. Regardless of FE model, most output variables were highly correlated with linear and rotational peak accelerations. The
Parametric modeling for quantitative analysis of pulmonary structure to function relationships
NASA Astrophysics Data System (ADS)
Haider, Clifton R.; Bartholmai, Brian J.; Holmes, David R., III; Camp, Jon J.; Robb, Richard A.
2005-04-01
While lung anatomy is well understood, pulmonary structure-to-function relationships such as the complex elastic deformation of the lung during respiration are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability of the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computation of dynamic anatomic parametric models of the lung during respiration, correlating high-resolution anatomy to underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the surface of the lung. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifest by alterations in normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial methods which quantitatively illustrates detailed anatomic structure to pulmonary function relationships impossible for translational methods to provide.
ERIC Educational Resources Information Center
Steinhauer, H. M.
2012-01-01
Engineering graphics has historically been viewed as a challenging course to teach as students struggle to grasp and understand the fundamental concepts and then to master their proper application. The emergence of stable, fast, affordable 3D parametric modeling platforms such as CATIA, Pro-E, and AutoCAD while providing several pedagogical…
Technology Transfer Automated Retrieval System (TEKTRAN)
Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...
Model for straight and helical solar jets. I. Parametric studies of the magnetic field geometry
NASA Astrophysics Data System (ADS)
Pariat, E.; Dalmasse, K.; DeVore, C. R.; Antiochos, S. K.; Karpen, J. T.
2015-01-01
Context. Jets are dynamic, impulsive, well-collimated plasma events developing at many different scales and in different layers of the solar atmosphere. Aims: Jets are believed to be induced by magnetic reconnection, a process central to many astrophysical phenomena. Studying their dynamics can help us to better understand the processes acting in larger eruptive events (e.g., flares and coronal mass ejections) as well as mass, magnetic helicity, and energy transfer at all scales in the solar atmosphere. The relative simplicity of their magnetic geometry and topology, compared with larger solar active events, makes jets ideal candidates for studying the fundamental role of reconnection in energetic events. Methods: In this study, using our recently developed numerical solver ARMS, we present several parametric studies of a 3D numerical magneto-hydrodynamic model of solar-jet-like events. We studied the impact of the magnetic field inclination and photospheric field distribution on the generation and properties of two morphologically different types of solar jets, straight and helical, which can account for the observed so-called standard and blowout jets. Results: Our parametric studies validate our model of jets for different geometric properties of the magnetic configuration. We find that a helical jet is always triggered for the range of parameters we tested. This demonstrates that the 3D magnetic null-point configuration is a very robust structure for the energy storage and impulsive release characteristic of helical jets. In certain regimes determined by magnetic geometry, a straight jet precedes the onset of a helical jet. We show that the reconnection occurring during the straight-jet phase influences the triggering of the helical jet. Conclusions: Our results allow us to better understand the energization, triggering, and driving processes of straight and helical jets. Our model predicts the impulsiveness and energetics of jets in terms of the surrounding
NASA Astrophysics Data System (ADS)
Wang, W. L.; Yu, D. S.; Zhou, Z.
2015-10-01
Due to the high-speed operation of modern rail vehicles and severe in-service environment of their hydraulic dampers, it has become important to establish more practical and accurate damper models and apply those models in high-speed transit problem studies. An improved full parametric model with actual in-service parameters, such as variable viscous damping, comprehensive stiffness and small mounting clearance was established for a rail vehicle's axle-box hydraulic damper. A subtle variable oil property model was built and coupled to the modelling process, which included modelling of the dynamic flow losses and the relief-valve system dynamics. The experiments validated the accuracy and robustness of the established full in-service parametric model and simulation which captured the damping characteristics over an extremely wide range of excitation speeds. Further simulations were performed using the model to uncover the effects of key in-service parameter variations on the nominal damping characteristics of the damper. The obtained in-service parametric model coupled all of the main factors that had significant impacts on the damping characteristics, so that the model could be useful in more extensive parameter effects analysis, optimal specification and product design optimisation of hydraulic dampers for track-friendliness, ride comfort and other high-speed transit problems.
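The damper model above combines velocity-dependent viscous damping with relief-valve behaviour. As a minimal, hypothetical sketch of such a blow-off characteristic (the function name, parameter values and piecewise form are illustrative assumptions, not the paper's full in-service parametric model):

```python
def damper_force(v, c=20000.0, f_relief=4000.0):
    """Toy blow-off characteristic: linear viscous damping that saturates
    once the relief valve opens. Values are illustrative assumptions.

    v        piston velocity in m/s
    c        viscous damping coefficient in N s/m
    f_relief relief-valve blow-off force limit in N
    """
    f = c * v
    # clamp the viscous force to the relief-valve limit in both directions
    return max(-f_relief, min(f, f_relief))
```

A full parametric model of the kind the paper describes would replace the constant c with oil-property-dependent flow-loss terms and the hard clamp with relief-valve dynamics.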
Solar tower power plant using a particle-heated steam generator: Modeling and parametric study
NASA Astrophysics Data System (ADS)
Krüger, Michael; Bartsch, Philipp; Pointner, Harald; Zunft, Stefan
2016-05-01
Within the framework of the project HiTExStor II, a system model for the entire power plant, consisting of a volumetric air receiver, air-sand heat exchanger, sand storage system, steam generator and water-steam cycle, was implemented in the software "Ebsilon Professional". For the steam generator, two technologies were considered: a fluidized bed cooler and a moving bed heat exchanger. Physical models for the non-conventional power plant components, such as the air-sand heat exchanger, fluidized bed cooler and moving bed heat exchanger, had to be created and implemented in the simulation environment. Using the simulation model for the power plant, the individual components and subassemblies were designed and the operating parameters were optimized in extensive parametric studies with respect to the essential degrees of freedom. The annual net electricity output for different systems was determined in annual performance calculations at a selected location (Huelva, Spain) using the optimized values for the studied parameters. The solution with moderate regenerative feed-water heating was found to be the most advantageous. Furthermore, the system with a moving bed heat exchanger prevails over the system with a fluidized bed cooler due to a 6% higher net electricity yield.
Limitations in the rapid extraction of evoked potentials using parametric modeling.
De Silva, A C; Sinclair, N C; Liley, D T J
2012-05-01
The rapid extraction of variations in evoked potentials (EPs) is of great clinical importance. Parametric modeling using autoregression with an exogenous input (ARX) and the robust evoked potential estimator (REPE) are commonly used methods for extracting EPs, preferred over the conventional moving time average. However, a systematic study of the efficacy of these methods, using known synthetic EPs, has not been performed. Therefore, the current study evaluates the restrictions of these methods in the presence of known and systematic variations in EP component latency and signal-to-noise ratio (SNR). In the context of rapid extraction, variations of wave V of the auditory brainstem response with stimulus intensity were considered. While the REPE methods were better able to recover the simulated model of the EP, the morphology and latency of the ARX-estimated EPs were a closer match to the actual EP than those of the REPE-estimated EPs. We, therefore, concluded that ARX rapid extraction would perform better with regards to the rapid tracking of latency variations. By tracking simulated and empirically induced latency variations, we conclude that rapid EP extraction using ARX modeling is only capable of extracting latency variations of an EP at relatively high SNRs and, therefore, should be used with caution in noisy (low-SNR) environments. In particular, it is not a suitable method for the rapid extraction of early EP components such as the auditory brainstem potential. PMID:22394572
Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2012-01-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
NASA Technical Reports Server (NTRS)
Splettstoesser, W. R.; Schultz, K. J.; Boxwell, D. A.; Schmitz, F. H.
1984-01-01
Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model scale data were compared with averaged full scale, inflight acoustic data under similar nondimensional test conditions. At low advance ratios (mu = 0.164 to 0.194), the data scale remarkably well in level and waveform shape, and also duplicate the directivity pattern of BVI impulsive noise. At moderate advance ratios (mu = 0.224 to 0.270), the scaling deteriorates, suggesting that the model scale rotor is not adequately simulating the full scale BVI noise; presently, no proved explanation of this discrepancy exists. Carefully performed parametric variations over a complete matrix of testing conditions have shown that BVI noise radiation is highly sensitive to all four governing nondimensional parameters - tip Mach number at hover, advance ratio, local inflow ratio, and thrust coefficient.
Towards a Multi-Variable Parametric Cost Model for Ground and Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd
2016-01-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper hypothesizes a single model, based on published models and engineering intuition, for both ground and space telescopes: OTA Cost ~ X * D^(1.75 +/- 0.05) * lambda^(-0.5 +/- 0.25) * T^(-0.25) * e^(-0.04 Y). Specific findings include: space telescopes cost 50X to 100X more than ground telescopes; diameter is the most important CER; cost is reduced by approximately 50% every 20 years (presumably because of technology advance and process improvements); and, for space telescopes, cost associated with wavelength performance is balanced by cost associated with operating temperature. Finally, duplication only reduces cost for the manufacture of identical systems (i.e. multiple aperture sparse arrays or interferometers). And, while duplication does reduce the cost of manufacturing the mirrors of a segmented primary mirror, this cost savings does not appear to manifest itself in the final primary mirror assembly (presumably because the structure for a segmented mirror is more complicated than for a monolithic mirror).
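The hypothesized scaling law can be evaluated directly. The sketch below uses the central exponent values quoted in the abstract; the normalization X, the function name and the argument names are assumptions, since the abstract does not specify them:

```python
import math

def ota_cost(D, lam, T, Y, X=1.0):
    """Hypothesized OTA cost scaling law (central exponent values only):
    cost ~ X * D^1.75 * lambda^-0.5 * T^-0.25 * exp(-0.04 * Y).
    X is an unspecified normalization; D is aperture diameter, lam the
    wavelength performance, T the operating temperature, and Y the years
    of technology advance. Illustrative sketch, not a validated estimator.
    """
    return X * D**1.75 * lam**-0.5 * T**-0.25 * math.exp(-0.04 * Y)
```

For example, doubling the aperture D multiplies cost by 2^1.75 (about 3.4), and Y = 20 years of technology advance gives e^(-0.8) (about 0.45), consistent with the quoted roughly 50% cost reduction every 20 years.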
Parametric modeling and stagger angle optimization of an axial flow fan
NASA Astrophysics Data System (ADS)
Li, M. X.; Zhang, C. H.; Liu, Y.; Zheng, S. Y.
2013-12-01
Axial flow fans are widely used in every field of industrial production, and improving their efficiency is a sustained and urgent demand of domestic industry. Optimization of the stagger angle is an important method to improve fan performance. Parametric modeling and automation of the calculation process are realized in this paper to improve optimization efficiency. Geometric modeling and mesh division are parameterized based on GAMBIT. Parameter setting and flow field calculation are completed in the batch mode of FLUENT. A control program developed in Visual C++ manages the data exchange between these tools; it also extracts calculation results for the optimization algorithm module (provided by Matlab), which generates directive optimization control parameters that are fed back to the modeling module. The center line of the blade airfoil, based on the Clark Y profile, is constructed by the non-constant circulation and triangle discharge method. Stagger angles of six airfoil sections are optimized to reduce the influence of inlet shock loss, gas leakage in the blade tip clearance, and hub resistance at the blade root. Finally, an optimal solution is obtained, which meets the total pressure requirement under the given conditions and improves total pressure efficiency by about 6%.
NASA Technical Reports Server (NTRS)
Gersh-Range, Jessica A.; Arnold, William R.; Peck, Mason A.; Stahl, H. Philip
2011-01-01
Since future astrophysics missions require space telescopes with apertures of at least 10 meters, there is a need for on-orbit assembly methods that decouple the size of the primary mirror from the choice of launch vehicle. One option is to connect the segments edgewise using mechanisms analogous to damped springs. To evaluate the feasibility of this approach, a parametric ANSYS model that calculates the mode shapes, natural frequencies, and disturbance response of such a mirror, as well as of the equivalent monolithic mirror, has been developed. This model constructs a mirror using rings of hexagonal segments that are either connected continuously along the edges (to form a monolith) or at discrete locations corresponding to the mechanism locations (to form a segmented mirror). As an example, this paper presents the case of a mirror whose segments are connected edgewise by mechanisms analogous to a set of four collocated single-degree-of-freedom damped springs. The results of a set of parameter studies suggest that such mechanisms can be used to create a 15-m segmented mirror that behaves similarly to a monolith, although fully predicting the segmented mirror performance would require incorporating measured mechanism properties into the model. Keywords: segmented mirror, edgewise connectivity, space telescope
NASA Technical Reports Server (NTRS)
Boxwell, D. A.; Schmitz, F. H.; Splettstoesser, W. R.; Schultz, K. J.
1985-01-01
Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model scale data were compared with averaged full scale, inflight acoustic data under similar nondimensional test conditions. At low advance ratios (mu = 0.164 to 0.194), the data scale remarkably well in level and waveform shape, and also duplicate the directivity pattern of BVI impulsive noise. At moderate advance ratios (mu = 0.224 to 0.270), the scaling deteriorates, suggesting that the model scale rotor is not adequately simulating the full scale BVI noise; presently, no proved explanation of this discrepancy exists. Carefully performed parametric variations over a complete matrix of testing conditions have shown that BVI noise radiation is highly sensitive to all four governing nondimensional parameters - tip Mach number at hover, advance ratio, local inflow ratio, and thrust coefficient.
NASA Astrophysics Data System (ADS)
Hong, Sung-Kwon; Epureanu, Bogdan I.; Castanier, Matthew P.
2014-09-01
The goal of this work is to develop a numerical model for the vibration of hybrid electric vehicle (HEV) battery packs to enable probabilistic forced response simulations for the effects of variations. There are two important types of variations that affect their structural response significantly: the prestress that is applied when joining the cells within a pack; and the small, random structural property discrepancies among the cells of a battery pack. The main contributions of this work are summarized as follows. In order to account for these two important variations, a new parametric reduced order model (PROM) formulation is derived by employing three key observations: (1) the stiffness matrix can be parameterized for different levels of prestress, (2) the mode shapes of a battery pack with cell-to-cell variation can be represented as a linear combination of the mode shapes of the nominal system, and (3) the frame holding each cell has vibratory motion. A numerical example of an academic battery pack with pouch cells is presented to demonstrate that the PROM captures the effects of both prestress and structural variation on battery packs. The PROM is validated numerically by comparing full-order finite element models (FEMs) of the same systems.
Bayesian model selection of template forward models for EEG source reconstruction.
Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan
2014-06-01
Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction.
Hess, Jeremy J.; Ebi, Kristie L.; Markandya, Anil; Balbus, John M.; Wilkinson, Paul; Haines, Andy; Chalabi, Zaid
2014-01-01
simultaneously improving health. Citation: Remais JV, Hess JJ, Ebi KL, Markandya A, Balbus JM, Wilkinson P, Haines A, Chalabi Z. 2014. Estimating the health effects of greenhouse gas mitigation strategies: addressing parametric, model, and valuation challenges. Environ Health Perspect 122:447–455; http://dx.doi.org/10.1289/ehp.1306744 PMID:24583270
Non-parametric frequency response function tissue modeling in bipolar electrosurgery.
Barbé, Kurt; Ford, Carolyn; Bonn, Kenlyn; Gilbert, James
2015-01-01
High-frequency radio energy is applied to tissue therapeutically in a number of different medical applications. The ability to model the effects of RF energy on the collagen, elastin, and liquid content of the target tissue would allow for refinement of the control of the energy in order to improve outcomes and reduce negative side-effects. In this paper, we study the time-varying impedance spectra of the circuit. It is expected that the collagen/elastin ratio does not change over time, such that the time-varying impedance is a function of the liquid content. We apply a non-parametric model in which we characterize the measured impedance spectra by their frequency response function. The measurements indicate that the changing impedance as a function of time exhibits a polynomial shift, which we characterize by polynomial regression. Finally, we quantify the uncertainty to obtain prediction intervals for the estimated polynomial describing the time variation of the impedance spectra. PMID:26737664
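The fit-a-drift-polynomial-then-bound-it idea can be illustrated with a first-order (straight-line) fit. This is a simplified, pure-Python stand-in for the paper's polynomial regression; the function name and the normal-approximation prediction interval are assumptions, not the authors' procedure:

```python
import math

def line_fit_with_pi(t, z, z_crit=1.96):
    """Least-squares line z ~ a + b*t plus a crude prediction interval from
    the residual spread. First-order illustration of polynomial-drift
    fitting; z_crit = 1.96 assumes approximately normal residuals."""
    n = len(t)
    tbar, zbar = sum(t) / n, sum(z) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    b = sum((ti - tbar) * (zi - zbar) for ti, zi in zip(t, z)) / sxx
    a = zbar - b * tbar
    resid = [zi - (a + b * ti) for ti, zi in zip(t, z)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual std (n-2 dof)

    def predict(tq):
        zq = a + b * tq
        return zq, zq - z_crit * s, zq + z_crit * s  # centre, lower, upper

    return a, b, predict
```

A higher-degree version of the same scheme, applied per frequency bin of the measured impedance spectra, matches the abstract's description more closely.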
NASA Astrophysics Data System (ADS)
Ding, Baocang; Pan, Hongguang
2016-08-01
The output feedback robust model predictive control (MPC) of a linear parameter varying (LPV) system with norm-bounded disturbance is addressed, where the model parametric matrices are only known to be bounded within a polytope. The previously developed techniques of norm-bounding, quadratic boundedness (QB), dynamic output feedback, and the ellipsoid (true-state bound, TSB) refreshment formula for guaranteeing recursive feasibility are fused into the newly proposed approaches. In the notion of QB, the full Lyapunov matrix is applied for the first time in this context. The single-step dynamic output feedback robust MPC, where the infinite-horizon control moves are parameterised as a dynamic output feedback law, is the main topic of this paper, while a multi-step method is also suggested. In order to strictly guarantee the physical constraints, an outer bound of the true state replaces the true state itself, so the tightness of this bound has a major effect on control performance. To tighten the TSB, a procedure for refreshing the real-time ellipsoid based on that of the last sampling instant is given. This paper consolidates past results and lays groundwork for future research. Two benchmark examples are given to show the effectiveness of the novel results.
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; de Weck, Olivier L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
Density-based load estimation using two-dimensional finite element models: a parametric study.
Bona, Max A; Martin, Larry D; Fischer, Kenneth J
2006-08-01
A parametric investigation was conducted to determine the effects on the load estimation method of varying: (1) the thickness of back-plates used in the two-dimensional finite element models of long bones, (2) the number of columns of nodes in the outer medial and lateral sections of the diaphysis to which the back-plate multipoint constraints are applied and (3) the region of bone used in the optimization procedure of the density-based load estimation technique. The study is performed using two-dimensional finite element models of the proximal femora of a chimpanzee, gorilla, lion and grizzly bear. It is shown that the density-based load estimation can be made more efficient and accurate by restricting the stimulus optimization region to the metaphysis/epiphysis. In addition, a simple method, based on the variation of diaphyseal cortical thickness, is developed for assigning the thickness to the back-plate. It is also shown that the number of columns of nodes used as multipoint constraints does not have a significant effect on the method. PMID:17132530
Bayesian kinematic earthquake source models
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
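As a minimal illustration of the sampling machinery underlying this approach, the sketch below implements a plain Metropolis sampler for a one-dimensional log posterior. The authors' actual algorithm adds tempering, many parallel chains, and evolutionary elimination of poorly fitting models, none of which is shown here; all names are illustrative:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=1.0, seed=0):
    """Plain random-walk Metropolis sampler: the elementary building block
    of tempered MCMC schemes (no tempering or parallelism here)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return chain
```

Run on a standard normal log posterior, the chain's sample mean and variance converge toward 0 and 1; the high-dimensional fault-slip case requires the more elaborate tempered, parallel scheme the abstract describes.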
NASA Astrophysics Data System (ADS)
Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei
2015-11-01
In this paper, we present a comparative study of bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations/models. In traditional bias correction schemes, the statistics of the simulated model outputs are adjusted to those of the observation data. However, the model output and the observation data are only one case (i.e., realization) out of many possibilities, rather than being sampled from the entire population of a certain distribution, due to internal climate variability. This issue has not been considered in the bias correction schemes of existing climate change studies. Here, three approaches are employed to explore this issue, with the intention of providing a practical tool for bias correction of daily rainfall for use in hydrologic models: (1) a conventional method, (2) a non-informative Bayesian method, and (3) an informative Bayesian method using Weather Generator (WG) data. The results show some plausible uncertainty ranges of precipitation after correcting for the bias of RCM precipitation. The informative Bayesian approach shows an uncertainty range approximately 25-45% narrower than that of the non-informative Bayesian method after bias correction for the baseline period. This indicates that the prior distribution derived from the WG may assist in reducing the uncertainty associated with the parameters. The implications of our results are of great importance in hydrological impact assessments of climate change because they are related to actions for mitigation and adaptation to climate change. Since this is a proof-of-concept study that mainly illustrates the logic of the analysis for uncertainty-based bias correction, future research exploring the impacts of uncertainty on climate impact assessments and how to utilize uncertainty while planning mitigation and adaptation strategies is still needed.
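For context, a conventional (non-Bayesian) bias correction of the kind such studies take as a starting point is empirical quantile mapping: each model value is replaced by the observed value at the same empirical quantile of a calibration sample. The sketch below is a generic illustration under that assumption, not the authors' implementation:

```python
import bisect

def quantile_map(model_values, model_calib_sorted, obs_calib_sorted):
    """Empirical quantile mapping: map each model value to the observed
    value at the same empirical quantile. Calibration samples must be
    sorted. Generic illustration of a conventional bias-correction scheme."""
    n_m, n_o = len(model_calib_sorted), len(obs_calib_sorted)
    corrected = []
    for v in model_values:
        # empirical quantile of v within the model calibration sample
        q = bisect.bisect_left(model_calib_sorted, v) / max(n_m - 1, 1)
        idx = min(round(q * (n_o - 1)), n_o - 1)
        corrected.append(obs_calib_sorted[idx])
    return corrected
```

The Bayesian variants discussed in the abstract replace these fixed empirical quantiles with distributions whose parameters carry uncertainty, yielding an uncertainty range on the corrected precipitation rather than a single series.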
A parametric model for reactive high-power impulse magnetron sputtering of films
NASA Astrophysics Data System (ADS)
Kozák, Tomáš; Vlček, Jaroslav
2016-02-01
We present a time-dependent parametric model for reactive HiPIMS deposition of films. Specific features of HiPIMS discharges and a possible increase in the density of the reactive gas in front of the reactive gas inlets placed between the target and the substrate are considered in the model. The model makes it possible to calculate the compound fractions in two target layers and in one substrate layer, and the deposition rate of films at fixed partial pressures of the reactive and inert gas. A simplified relation for the deposition rate of films prepared using reactive HiPIMS is presented. We used the model to simulate controlled reactive HiPIMS depositions of stoichiometric ZrO2 films, which were recently carried out in our laboratories with two different configurations of the O2 inlets in front of the sputtered target. The repetition frequency was 500 Hz at deposition-averaged target power densities of 5 W cm^-2 and 50 W cm^-2, with a pulse-averaged target power density up to 2 kW cm^-2. The pulse durations were 50 μs and 200 μs. Our model calculations show that the to-substrate O2 inlet provides systematically lower compound fractions in the target surface layer and higher compound fractions in the substrate surface layer, compared with the to-target O2 inlet. The low compound fractions in the target surface layer (approximately 10% at the deposition-averaged target power density of 50 W cm^-2 and the pulse duration of 200 μs) result in high deposition rates of the produced films, which are in agreement with experimental values.
Dynamic modelling and stability parametric analysis of a flexible spacecraft with fuel slosh
NASA Astrophysics Data System (ADS)
Gasbarri, Paolo; Sabatini, Marco; Pisculli, Andrea
2016-10-01
Modern spacecraft often contain large quantities of liquid fuel to execute station keeping and attitude manoeuvres for space missions. In general the combined liquid-structure system is very difficult to model, and the analyses are based on some assumed simplifications. A realistic representation of the liquid dynamics inside closed containers can be approximated by an equivalent mechanical system. This technique can be considered a very useful mathematical tool for solving the complete dynamics problem of a space system containing liquid. Such equivalent models are particularly useful when designing a control system or studying the stability margins of the coupled dynamics. The commonly used equivalent mechanical models are the mass-spring models and the pendulum models. As far as spacecraft modelling is concerned, the spacecraft is usually considered rigid, i.e. no flexible appendages such as solar arrays or antennas are considered when dealing with the interaction of the attitude dynamics with the fuel slosh. In the present work the interactions among the fuel slosh, the attitude dynamics and the flexible appendages of a spacecraft are first studied via a classical multi-body approach. In particular, the equations of attitude and orbit motion are first derived for the partially liquid-filled flexible spacecraft undergoing fuel slosh; then several parametric analyses are performed to study the stability conditions of the system during some assigned manoeuvres. The present study is propaedeutic to the synthesis of advanced attitude and/or station keeping control techniques able to minimize and/or reduce an undesired excitation of the satellite flexible appendages and of the fuel sloshing mass.
Haque, Md Mazharul; Washington, Simon
2014-01-01
The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q advanced driving simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a sidewalk. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free and handheld. In addition to driving the simulator each participant completed questionnaires related to driver demographics, driving history, usage of mobile phones while driving, and general mobile phone usage history. The participants were 21-26 years old and split evenly by gender. Drivers' reaction times to a pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Also tested were two different model specifications to account for the structured heterogeneity arising from the repeated measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best fitting model and identified four significant variables influencing the reaction times, including phone condition, driver's age, license type (provisional license holder or not), and self-reported frequency of usage of handheld phones while driving. The reaction times of drivers were more than 40% longer in the distracted condition compared to baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional compared to open license holders. A reduction in the ability to detect traffic events in the periphery whilst distracted
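The defining property of a Weibull AFT model is that covariates rescale event time itself, so a coefficient translates directly into an acceleration factor on reaction time. A minimal simulation sketch, not the study's fitted model: the coefficient 0.35 (an acceleration factor near 1.42, echoing the reported "more than 40% longer") and all sample sizes are hypothetical.

```python
import math
import random

random.seed(42)

def weibull_aft_sample(x_distracted, beta0=0.0, beta1=0.35, shape=2.0):
    """Draw one reaction time from a Weibull AFT model:
    T = exp(beta0 + beta1 * x) * W,  W ~ Weibull(shape).
    beta1 = 0.35 is a hypothetical, illustrative coefficient."""
    w = random.weibullvariate(1.0, shape)
    return math.exp(beta0 + beta1 * x_distracted) * w

baseline   = sorted(weibull_aft_sample(0) for _ in range(20000))
distracted = sorted(weibull_aft_sample(1) for _ in range(20000))

# Because the AFT covariate rescales time, the ratio of sample medians
# estimates the acceleration factor exp(beta1) ~ 1.42.
af = distracted[10000] / baseline[10000]
print(af)
```

The gamma-heterogeneity term in the paper's preferred specification would add an individual-level random effect on top of this; the time-rescaling interpretation of the coefficients is unchanged.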
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM) which have made process-based simulation with a high spatiotemporal resolution applicable on a large spatial scale. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherently coming from model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is, however, often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, the predictive uncertainty of DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control the uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate the parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control the uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated with DA. Lagged particle filtering is utilized to consider the response times and non-Gaussian characteristics of internal hydrologic processes. Hindcasting experiments are implemented to evaluate the impacts of the proposed DA method on streamflow predictions in multiple European river basins
An Evaluation of Parametric and Nonparametric Models of Fish Population Response.
Haas, Timothy C.; Peterson, James T.; Lee, Danny C.
1999-11-01
Predicting the distribution or status of animal populations at large scales often requires the use of broad-scale information describing landforms, climate, vegetation, etc. These data, however, often consist of mixtures of continuous and categorical covariates and nonmultiplicative interactions among covariates, complicating statistical analyses. Using data from the interior Columbia River Basin, USA, we compared four methods for predicting the distribution of seven salmonid taxa using landscape information. Subwatersheds (mean size, 7800 ha) were characterized using a set of 12 covariates describing physiography, vegetation, and current land use. The techniques included generalized logit modeling, classification trees, a nearest neighbor technique, and a modular neural network. We evaluated model performance using out-of-sample prediction accuracy via leave-one-out cross-validation and introduced a computer-intensive Monte Carlo hypothesis testing approach for examining the statistical significance of landscape covariates with the non-parametric methods. We found the modular neural network and the nearest-neighbor techniques to be the most accurate, but they were difficult to summarize in ways that provided ecological insight. The modular neural network also required the most extensive computer resources for model fitting and hypothesis testing. The generalized logit models were readily interpretable, but were the least accurate, possibly due to nonlinear relationships and nonmultiplicative interactions among covariates. Substantial overlap among the statistically significant (P<0.05) covariates for each method suggested that each is capable of detecting similar relationships between responses and covariates. Consequently, we believe that employing one or more methods may provide greater biological insight without sacrificing prediction accuracy.
Evaluation of treatment response in depression studies using a Bayesian parametric cure rate model.
Santen, Gijs; Danhof, Meindert; Della Pasqua, Oscar
2008-10-01
Efficacy trials with antidepressant drugs often fail to show significant treatment effect even though efficacious treatments are investigated. This failure can, amongst other factors, be attributed to the lack of sensitivity of the statistical method as well as of the endpoints to pharmacological activity. For regulatory purposes the most widely used efficacy endpoint is still the mean change in HAM-D score at the end of the study, despite evidence from literature showing that the HAM-D scale might not be a sensitive tool to assess drug effect and that changes from baseline at the end of treatment may not reflect the extent of response. In the current study, we evaluate the prospect of applying a Bayesian parametric cure rate model (CRM) to analyse antidepressant effect in efficacy trials with paroxetine. The model is based on a survival approach, which allows for a fraction of surviving patients indefinitely after completion of treatment. Data were extracted from GlaxoSmithKline's clinical databases. Response was defined as a 50% change from baseline HAM-D at any assessment time after start of therapy. Survival times were described by a log-normal distribution and drug effect was parameterised as a covariate on the fraction of non-responders. The model was able to fit the data from different studies accurately and results show that response to treatment does not lag for two weeks, as is commonly believed. In conclusion, we demonstrate how parameterisation of a survival model can be used to characterise treatment response in depression trials. The method contrasts with the long-established snapshot on changes from baseline, as it incorporates the time course of response throughout treatment.
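A mixture cure rate model of the kind described can be written as S(t) = pi + (1 - pi) * S_u(t), where pi is the fraction that never responds (the "cured" fraction, here the non-responders) and S_u is the log-normal survival function of the responders' time to response. A minimal sketch with hypothetical parameter values, not the fitted paroxetine model:

```python
import math

def lognormal_survival(t, mu, sigma):
    """S_u(t) = P(T > t) for a log-normal time-to-event distribution."""
    z = (math.log(t) - mu) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

def cure_rate_survival(t, pi, mu, sigma):
    """Mixture cure model: a fraction pi never experiences the event
    (non-responders); the rest follow the log-normal distribution.
    As t grows, S(t) levels off at pi rather than decaying to zero."""
    return pi + (1.0 - pi) * lognormal_survival(t, mu, sigma)

# hypothetical parameters: 30% non-responders, median response at 3 weeks
pi, mu, sigma = 0.30, math.log(3.0), 0.6
for week in (1, 3, 8, 52):
    print(week, cure_rate_survival(week, pi, mu, sigma))
```

The drug effect in the paper enters as a covariate on pi, so treatment comparisons act on the plateau of this curve rather than on a single end-of-study snapshot.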
A Parametric Model of Shoulder Articulation for Virtual Assessment of Space Suit Fit
NASA Technical Reports Server (NTRS)
Kim, K. Han; Young, Karen S.; Bernal, Yaritza; Boppana, Abhishektha; Vu, Linh Q.; Benson, Elizabeth A.; Jarvis, Sarah; Rajulu, Sudhakar L.
2016-01-01
Shoulder injury is one of the most severe risks that have the potential to impair crewmembers' performance and health in long duration space flight. Overall, 64% of crewmembers experience shoulder pain after extra-vehicular training in a space suit, and 14% of symptomatic crewmembers require surgical repair (Williams & Johnson, 2003). Suboptimal suit fit, in particular at the shoulder region, has been identified as one of the predominant risk factors. However, traditional suit fit assessments and laser scans represent only a single person's data, and thus may not be generalized across wide variations of body shapes and poses. The aim of this work is to develop a software tool based on a statistical analysis of a large dataset of crewmember body shapes. This tool can accurately predict the skin deformation and shape variations for any body size and shoulder pose for a target population, from which the geometry can be exported and evaluated against suit models in commercial CAD software. A preliminary software tool was developed by statistically analyzing 150 body shapes matched with body dimension ranges specified in the Human-Systems Integration Requirements of NASA ("baseline model"). Further, the baseline model was incorporated with shoulder joint articulation ("articulation model"), using additional subjects scanned in a variety of shoulder poses across a pre-specified range of motion. Scan data was cleaned and aligned using body landmarks. The skin deformation patterns were dimensionally reduced and the co-variation with shoulder angles was analyzed. A software tool is currently in development and will be presented in the final proceeding. This tool would allow suit engineers to parametrically generate body shapes in strategically targeted anthropometry dimensions and shoulder poses. This would also enable virtual fit assessments, with which the contact volume and clearance between the suit and body surface can be predictively quantified at reduced time and
Martinez-Murcia, Francisco J; Górriz, Juan M; Ramírez, Javier; Ortiz, Andres
2016-11-01
The usage of biomedical imaging in the diagnosis of dementia is increasingly widespread. A number of works explore the possibilities of computational techniques and algorithms in what is called computer-aided diagnosis. Our work presents an automatic parametrization of the brain structure by means of a path generation algorithm based on hidden Markov models (HMMs). The path is traced using information of intensity and spatial orientation in each node, adapting to the structure of the brain. Each path is itself a useful way to characterize the distribution of the tissue inside the magnetic resonance imaging (MRI) image by, for example, extracting the intensity levels at each node or generating statistical information of the tissue distribution. Additionally, further processing consisting of a modification of the grey level co-occurrence matrix (GLCM) can be used to characterize the textural changes that occur throughout the path, yielding more meaningful values that could be associated to Alzheimer's disease (AD), as well as providing a significant feature reduction. This methodology achieves moderate performance, up to 80.3% of accuracy using a single path in differential diagnosis involving Alzheimer-affected subjects versus controls belonging to the Alzheimer's disease neuroimaging initiative (ADNI).
2014-01-01
Background Early methods for estimating divergence times from gene sequence data relied on the assumption of a molecular clock. More sophisticated methods were created to model rate variation and used auto-correlation of rates, local clocks, or the so-called “uncorrelated relaxed clock” where substitution rates are assumed to be drawn from a parametric distribution. In the case of Bayesian inference methods, the impact of the prior on branching times is not clearly understood, and if the amount of data is limited the posterior could be strongly influenced by the prior. Results We develop a maximum likelihood method – Physher – that uses local or discrete clocks to estimate evolutionary rates and divergence times from heterochronous sequence data. Using two empirical data sets we show that our discrete clock estimates are similar to those obtained by other methods, and that Physher outperformed some methods in the estimation of the root age of an influenza virus data set. A simulation analysis suggests that Physher can outperform a Bayesian method when the real topology contains two long branches below the root node, even when evolution is strongly clock-like. Conclusions These results suggest it is advisable to use a variety of methods to estimate evolutionary rates and divergence times from heterochronous sequence data. Physher and the associated data sets used here are available online at http://code.google.com/p/physher/. PMID:25055743
NASA Astrophysics Data System (ADS)
Speyer, Gavriel; Kaczkowski, Peter; Brayman, Andrew; Crum, Lawrence
2010-03-01
Accurate monitoring of high intensity focused ultrasound (HIFU) surgery is critical to ensuring proper treatment. Pulse-echo diagnostic ultrasound (DU) is a recognized modality for identifying temperature differentials using speckle tracking between two DU radio frequency (RF) frames [2], [4]. This observation has motivated non-parametric temperature estimation, which associates temperature changes directly with the displacement estimates. We present an estimation paradigm termed displacement mode analysis (DMA), which uses physical modeling to associate particular patterns of observed displacement, called displacement modes, with corresponding modes of variation in the administered therapy. This correspondence allows DMA to estimate the therapy directly: the observed displacements are expressed as a linear combination of displacement modes, embedded into the reference frame by interpolation and, after alignment with the treatment frame, yield a therapy estimate in terms of the heating modes. Since DMA is a maximum likelihood estimation (MLE) procedure, the accuracy of its estimates can be assessed a priori, providing error bounds for estimates of applied heating, temperature, and thermal dose. Predicted performance is verified using both simulation and experiment for a point exposure of 4.2 Watts of electrical power in alginate, a tissue mimicking phantom.
Modelling of noise suppression in gain-saturated fiber optical parametric amplifiers
NASA Astrophysics Data System (ADS)
Pakarzadeh, H.; Zakery, A.
2013-11-01
Noise properties of both one-pump (1-P) and two-pump (2-P) fiber optical parametric amplifiers (FOPAs) are theoretically investigated and particularly the unique feature of FOPAs for the noise suppression in the gain-saturated regime is modeled. For the 1-P FOPAs, the simulation results are compared with the available experimental data and a very good agreement is obtained. Also, for the 2-P FOPAs, where no experimental work has been reported regarding their noise properties in the saturation regime, the noise behavior of the amplified signal is simulated for the first time. It is shown that for a specific power in the deep saturation regime, the signal noise is suppressed, and with further increase of the signal power, when the gain saturation reaches its new cycle, a periodic behavior of noise suppression is observed originating from the phase-matching condition. The existence of a negative feedback mechanism which is responsible for the suppression of the excess noise in the first cycle of the gain saturation is confirmed both for 1-P and 2-P FOPAs. Generally, it is shown that the noise suppression can be observed for several specific powers at which the slope of the output signal power versus the input one is zero. The results of this paper may have some applications in signal processing, e.g., cleaning noisy signals.
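For context, the small-signal (unsaturated, undepleted-pump) gain of a one-pump FOPA has a standard closed form, and its dependence on the phase-matching parameter is what drives the periodic behavior discussed above; the saturation and noise dynamics themselves require the full coupled-mode equations. A sketch with illustrative parameter values (not the paper's fiber):

```python
import cmath

def fopa_signal_gain(gamma, p0, length, dbeta):
    """Small-signal gain of a one-pump FOPA (undepleted pump):
    kappa = dbeta + 2*gamma*P0,  g^2 = (gamma*P0)^2 - (kappa/2)^2,
    Gs = 1 + (gamma*P0/g * sinh(g*L))^2  (standard textbook result)."""
    kappa = dbeta + 2.0 * gamma * p0
    g = cmath.sqrt((gamma * p0) ** 2 - (kappa / 2.0) ** 2)
    if g == 0:  # degenerate limit: sinh(g*L)/g -> L
        return 1.0 + (gamma * p0 * length) ** 2
    term = gamma * p0 / g * cmath.sinh(g * length)
    return 1.0 + abs(term) ** 2

# illustrative values: gamma = 10 /(W km) = 10e-3 /(W m), P0 = 1 W, L = 500 m
gamma, p0, length = 10e-3, 1.0, 500.0
# perfect phase matching (dbeta = -2*gamma*P0) gives Gs = cosh^2(gamma*P0*L)
gs = fopa_signal_gain(gamma, p0, length, dbeta=-2.0 * gamma * p0)
print(gs)
```

Away from kappa = 0 the argument g becomes imaginary and sinh turns into a sine, so the gain oscillates with signal power and phase mismatch, which is the mechanism behind the cyclic saturation behavior the abstract describes.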
Testing of the Trim Tab Parametric Model in NASA Langley's Unitary Plan Wind Tunnel
NASA Technical Reports Server (NTRS)
Murphy, Kelly J.; Watkins, Anthony N.; Korzun, Ashley M.; Edquist, Karl T.
2013-01-01
In support of NASA's Entry, Descent, and Landing technology development efforts, testing of Langley's Trim Tab Parametric Models was conducted in Test Section 2 of NASA Langley's Unitary Plan Wind Tunnel. The objectives of these tests were to generate quantitative aerodynamic data and qualitative surface pressure data for experimental and computational validation and aerodynamic database development. Six-component force-and-moment data were measured on 38 unique, blunt body trim tab configurations at Mach numbers of 2.5, 3.5, and 4.5, angles of attack from -4deg to +20deg, and angles of sideslip from 0deg to +8deg. Configuration parameters investigated in this study were forebody shape, tab area, tab cant angle, and tab aspect ratio. Pressure Sensitive Paint was used to provide qualitative surface pressure mapping for a subset of these flow and configuration variables. Over the range of parameters tested, the effects of varying tab area and tab cant angle were found to be much more significant than varying tab aspect ratio relative to key aerodynamic performance requirements. Qualitative surface pressure data supported the integrated aerodynamic data and provided information to aid in future analyses of localized phenomena for trim tab configurations.
Readout IC requirement trends based on a simplified parametric seeker model.
Osborn, Thor D.
2010-03-01
Modern space-based optical sensors place substantial demands on the focal plane array readout integrated circuit. Active pixel readout designs offer direct access to individual pixel data but require analog to digital conversion at or near each pixel. Thus, circuit designers must create precise, fundamentally analog circuitry within tightly constrained areas on the integrated circuit. Rapidly changing phenomena necessitate tradeoffs between sampling and conversion speed, data precision, and heat generation adjacent to the detector array, which is especially of concern for thermally sensitive space-grade infrared detectors. A simplified parametric model is presented that illustrates seeker system performance and analog to digital conversion requirements trends in the visible through mid-wave infrared, for varying sample rate. Notional limiting-case Earth optical backgrounds were generated using MODTRAN4 with a range of cloud extremes and approximate practical albedo limits for typical surface features from a composite of the Mosart and Aster spectral albedo databases. The dynamic range requirements imposed by these background spectra are discussed in the context of optical band selection and readout design impacts.
NASA Astrophysics Data System (ADS)
Wouters, Hendrik; Blahak, Ulrich; Helmert, Jürgen; Raschendorfer, Matthias; Demuzere, Matthias; Fay, Barbara; Trusilova, Kristina; Mironov, Dmitrii; Reinert, Daniel; Lüthi, Daniel; Machulskaya, Ekaterina
2015-04-01
In order to address urban climate at the regional scales, a new efficient urban land-surface parametrization TERRA_URB has been developed and coupled to the atmospheric numerical model COSMO-CLM. Hereby, several new advancements for urban land-surface models are introduced which are crucial for capturing the urban surface-energy balance and its seasonal dependency in the mid-latitudes. This includes a new PDF-based water-storage parametrization for impervious land, the representation of radiative absorption and emission by greenhouse gases in the infra-red spectrum in the urban canopy layer, and the inclusion of heat emission from human activity. TERRA_URB has been applied in offline urban-climate studies during European observation campaigns at Basel (BUBBLE), Toulouse (CAPITOUL), and Singapore, and is currently applied in online studies for urban areas in Belgium, Germany, Switzerland, Helsinki, Singapore, and Melbourne. Because of its computational efficiency, high accuracy and its conceptual simplicity, TERRA_URB has been selected to become the standard urban parametrization of the atmospheric numerical model COSMO(-CLM). This allows for better weather forecasts for temperature and precipitation in cities with COSMO, and an improved assessment of urban outdoor hazards in the context of global climate change and urban expansion with COSMO-CLM. We propose additional extensions to TERRA_URB towards a more robust representation of cities over the world including their structural design. In a first step, COSMO's standard EXTernal PARameter (EXTPAR) tool is updated for representing the cities into the land cover over the entire globe. Hereby, global datasets in the standard EXTPAR tool are used to retrieve the 'Paved' or 'sealed' surface Fraction (PF) referring to the presence of buildings and streets. Furthermore, new global data sets are incorporated in EXTPAR for describing the Anthropogenic Heat Flux (AHF) due to human activity, and optionally the
Refinement of numerical models and parametric study of SOFC stack performance
NASA Astrophysics Data System (ADS)
Burt, Andrew C.
The presence of multiple air and fuel channels per fuel cell and the need to combine many cells in series result in complex steady-state temperature distributions within Solid Oxide Fuel Cell (SOFC) stacks. Flow distribution in these channels, when non-uniform, has a significant effect on cell and stack performance. Large SOFC stacks are very difficult to model using full 3-D CFD codes because of the resource requirements needed to solve for the many scales involved. Studies have shown that implementations based on Reduced Order Methods (ROM), if calibrated appropriately, can provide simulations of stacks consisting of more than 20 cells with reasonable computational effort. A pseudo 2-D SOFC stack model capable of studying co-flow and counter-flow cell geometries was developed by solving multiple 1-D SOFC single cell models in parallel on a Beowulf cluster. In order to study cross-flow geometries a novel Multi-Component Multi-Physics (MCMP) scheme was instantiated to produce a Reduced Order 3-D Fuel Cell Model. A C++ implementation of the MCMP scheme developed in this study utilized geometry, control volume, component, and model structures allowing each physical model to be solved only for those components for which it is relevant. Channel flow dynamics were solved using a 1-D flow model to reduce computational effort. A parametric study was conducted to study the influence of mass flow distribution, radiation, and stack size on fuel cell stack performance. Using the pseudo 2-D planar SOFC stack model with stacks of various sizes from 2 to 40 cells it was shown that, with adiabatic wall conditions, the asymmetry of the individual cell can produce a temperature distribution where high and low temperatures are found in the top and bottom cells, respectively. Heat transfer mechanisms such as radiation were found to affect the reduction of the temperature gradient near the top and bottom cell. Results from the reduced order 3-D fuel cell model showed that greater
Application of a Momentum Source Model to the RAH-66 Comanche FANTAIL
NASA Technical Reports Server (NTRS)
Nygaard, Tor A.; Dimanlig, Arsenio C.; Meadowcroft, Edward T.
2004-01-01
A Momentum Source Model has been revised and implemented in the flow solver OVERFLOW-D. In this approach, the fan forces are evaluated from two-dimensional airfoil tables as a function of local Mach number and angle-of-attack and applied as source terms in the discretized Navier-Stokes equations. The model revisions include a new model for forces in the tip region and axial distribution of the source terms. The model revisions improve the results significantly. The Momentum Source Model agrees well with a discrete blade model for all computed collective pitch angles. The two models agree well with experimental data for thrust vs. torque. The Momentum Source Model is a good complement to Discrete Blade Models for ducted fan computations. The lower computational and labor costs make parametric studies, optimization studies and interactional aerodynamics studies feasible for cases beyond what is practical with a Discrete Blade Model today.
Foreground Bias from Parametric Models of Far-IR Dust Emission
NASA Astrophysics Data System (ADS)
Kogut, A.; Fixsen, D. J.
2016-08-01
We use simple toy models of far-IR dust emission to estimate the accuracy to which the polarization of the cosmic microwave background can be recovered using multi-frequency fits, if the parametric form chosen for the fitted dust model differs from the actual dust emission. Commonly used approximations to the far-IR dust spectrum yield CMB residuals comparable to or larger than the sensitivities expected for the next generation of CMB missions, despite fitting the combined CMB + foreground emission to precision 0.1% or better. The Rayleigh-Jeans approximation to the dust spectrum biases the fitted dust spectral index by Δβ_d = 0.2 and the inflationary B-mode amplitude by Δr = 0.03. Fitting the dust to a modified blackbody at a single temperature biases the best-fit CMB by Δr > 0.003 if the true dust spectrum contains multiple temperature components. A 13-parameter model fitting two temperature components reduces this bias by an order of magnitude if the true dust spectrum is in fact a simple superposition of emission at different temperatures, but fails at the level Δr = 0.006 for dust whose spectral index varies with frequency. Restricting the observing frequencies to a narrow region near the foreground minimum reduces these biases for some dust spectra but can increase the bias for others. Data at THz frequencies surrounding the peak of the dust emission can mitigate these biases while providing a direct determination of the dust temperature profile.
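The gap between a modified blackbody and its Rayleigh-Jeans approximation can be made concrete with a few lines. The sketch below uses representative (not fitted) dust values beta = 1.6 and T = 19.6 K, and shows that the fractional error of the RJ form grows rapidly toward the dust emission peak:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
K = 1.380649e-23     # Boltzmann constant [J/K]

def modified_blackbody(nu, beta, temp):
    """I_nu proportional to nu^beta * B_nu(T), single-temperature dust model
    (common normalization factors dropped, since only ratios are used)."""
    x = H * nu / (K * temp)
    return nu ** beta * nu ** 3 / (math.exp(x) - 1.0)

def rayleigh_jeans(nu, beta, temp):
    """RJ limit of the same model: nu^3/(e^x - 1) -> nu^2 * k * T / h,
    i.e. intensity proportional to nu^(beta+2)."""
    return nu ** beta * nu ** 2 * K * temp / H

def rj_fractional_error(nu_ghz, beta=1.6, temp=19.6):
    nu = nu_ghz * 1e9
    mbb = modified_blackbody(nu, beta, temp)
    return abs(rayleigh_jeans(nu, beta, temp) - mbb) / mbb

err_100 = rj_fractional_error(100.0)   # near the foreground minimum
err_857 = rj_fractional_error(857.0)   # toward the dust peak
print(err_100, err_857)
```

A multi-frequency fit that assumes the RJ power law absorbs this frequency-dependent error into its fitted parameters, which is the mechanism behind the Δβ_d and Δr biases quoted above.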
Towards a low-order dynamic stall model using a parametric proper orthogonal decomposition
NASA Astrophysics Data System (ADS)
Coleman, Dustin G.
Measured unsteady surface pressures, which are a function of both space and time, of a harmonically pitching airfoil are expressed in terms of a parametric proper orthogonal decomposition (PPOD) in order to obtain an optimum (in the mean-square sense) modal representation. This decomposition is formulated in such a way that the resulting spatial modes act optimally over the entirety of a parameter space defined by the airfoil pitching motion characteristics, i.e. for attached flow pitching, light stall, and deep stall. This method provides a systematic and quantitative framework by which to elucidate common and disparate features of the light and deep dynamic stall processes and provides a bridge to the development of low-order models for the prediction of unsteady airloads, such as the normal force and quarter-chord pitching moment. This work primarily focuses on the development of two low-order models, distinguished by frame of reference, used for the reconstruction of unsteady aerodynamic loads. The first model decomposes the unsteady pressure field where the steady inviscid pressure field, provided by a Smith-Hess panel method, is removed. Conversely, the second model decomposes the unsteady pressure field with the fully viscous, steady pressure field removed. In each model, the parameter-independent modal shapes are determined from unsteady surface pressures of an arbitrarily chosen reference airfoil geometry operating over a large range of pitching trajectories. It is shown that the aerodynamic loads of the reference geometry are reconstructed with as few as 5 PPOD modes. For the first model, the airloads of a candidate airfoil, one where the unsteady surface pressure field is desired for a given pitching trajectory, are shown to be reconstructed using the same 5 reference PPOD modes plus an additional spatial mode calculated from the candidate airfoil's steady pressure field. Likewise, the second model is capable of reconstructing the candidate airfoil
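Computationally, the POD step underlying the PPOD method is a singular value decomposition of a snapshot matrix of surface pressures. A minimal sketch on synthetic data (the "pressure field" below is deliberately built from three known space-time modes, purely to illustrate that a low-order reconstruction recovers it):

```python
import numpy as np

# synthetic snapshot matrix: rows = surface locations, columns = time samples,
# a stand-in for measured unsteady surface pressures on a pitching airfoil
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 2 * np.pi, 200)
snapshots = (np.outer(np.sin(np.pi * x), np.sin(t))
             + 0.5 * np.outer(np.sin(2 * np.pi * x), np.cos(3 * t))
             + 0.1 * np.outer(np.sin(3 * np.pi * x), np.sin(5 * t)))

# POD = SVD of the mean-removed snapshot matrix; columns of U are the
# spatial modes, s the modal energies, rows of Vt the temporal coefficients
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

def reconstruct(rank):
    """Low-order reconstruction keeping `rank` POD modes."""
    return mean + U[:, :rank] * s[:rank] @ Vt[:rank]

err3 = np.linalg.norm(snapshots - reconstruct(3)) / np.linalg.norm(snapshots)
print(err3)  # three modes recover the three-mode field almost exactly
```

The "parametric" extension in the thesis stacks snapshots from many pitching trajectories before the decomposition, so the resulting spatial modes are optimal over the whole parameter space rather than for a single motion.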
Moore, Julia L; Liang, Song; Akullian, Adam; Remais, Justin V
2012-12-01
Developmental models, such as degree-day models, are commonly used to predict the impact of future climate change on the intensity, distribution, and timing of the transmission of infectious diseases, particularly those caused by pathogens carried by vectors or intermediate hosts. Resulting projections can be useful in policy discussions concerning regional or national responses to future distributions of important infectious diseases. Although the simplicity of degree-day models is appealing, little work has been done to analyze their ability to make reliable projections of the distribution of important pathogens, vectors, or intermediate hosts in the presence of the often considerable parametric uncertainty common to such models. Here, a population model of Oncomelania hupensis, the intermediate host of Schistosoma japonicum, was used to investigate the sensitivity of host range predictions in Sichuan Province, China, to uncertainty in two key degree-day model parameters: delta(min) (minimum temperature threshold for development) and K (total degree-days required for completion of snail development). The intent was to examine the consequences of parametric uncertainty in a plausible biological model, rather than to generate the definitive model. Results indicate that model output, the seasonality of population dynamics, and range predictions, particularly along the edge of the range, are highly sensitive to changes in model parameters, even at levels of parametric uncertainty common to such applications. Caution should be used when interpreting the results of degree-day models used to generate predictions of disease distribution and risk under scenarios of future climate change, and predictions should be considered most reliable when the temperature ranges used in projections resemble those used to estimate model parameters. Given the potential for substantial changes in degree-day model output with modest changes in parameter values, caution is warranted when
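The computational core of a degree-day model is tiny: development completes on the first day the accumulated degree-days above delta_min reach K. The sketch below uses a hypothetical linear warming trace (all values illustrative, not the O. hupensis parameters) to show how sensitive the predicted completion day is to the threshold, which is the sensitivity the study probes:

```python
def completion_day(daily_temps, delta_min, K):
    """First day on which accumulated degree-days reach K, or None.
    Each day contributes max(T - delta_min, 0) degree-days."""
    total = 0.0
    for day, temp in enumerate(daily_temps, start=1):
        total += max(temp - delta_min, 0.0)
        if total >= K:
            return day
    return None

# hypothetical temperature trace (deg C): warming 0.1 C/day from 8 C
temps = [8.0 + 0.1 * d for d in range(365)]

base = completion_day(temps, delta_min=10.0, K=250.0)
perturbed = completion_day(temps, delta_min=11.0, K=250.0)  # threshold +1 C
print(base, perturbed)  # a 1 C threshold shift delays completion noticeably
```

Near a range edge, where accumulation barely clears K within the season, the same parameter perturbation can flip the prediction from "completes" to "does not complete", which is why edge-of-range projections are flagged as the least reliable.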
Wear of polyethylene cups in total hip arthroplasty: a parametric mathematical model.
Pietrabissa, R; Raimondi, M; Di Martino, E
1998-04-01
This paper presents a parametric mathematical model of the head-cup wear coupling in total hip arthroplasty (THA). The model evaluates the dependence of acetabular volumetric wear upon the characteristic parameters of the patient and the hip prosthesis. Archard's law is assumed in order to calculate the behaviour of the wear coupling. The wear factor is taken from pin-on-disc wear tests as a function of the materials and finishing of the articular joint. The forces acting on the hip joint are taken from experimental data found in the literature, whilst the load distribution is calculated under the hypothesis of a perfectly rigid ideal wear coupling. The sliding distance is obtained by combining the three elementary displacements, due to rotations around the three axes, at a generic bearing surface location. The simulations show that the polymeric wear volume per step cycle decreases in going from fast walking to slow running speeds; increases linearly with patient body weight and with femoral head diameter; decreases slightly for positive variations of the socket inclination angle; and increases exponentially with femoral head roughness. The volumetric wear rate per year calculated for a standard reference patient is 5.8 mm³. The relevant iso-wear maps show a marginal pattern with the maximum located near the cup's superior borderline. At the instant of peak load, the iso-stress maps show a paracentral pattern with the maximum superior to the cup polar point, and the iso-sliding-distance maps show a marginal pattern with two maxima located near the cup's superior and inferior borderlines. PMID:9690490
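Archard's law, the wear relation assumed in the model, can be sketched as follows. The annual-wear helper and its constants (the per-step sliding-distance scaling and the peak-load multiplier) are illustrative assumptions for this sketch, not the paper's parameter values:

```python
def archard_wear_volume(k_wear, load_n, sliding_m):
    """Archard's law: wear volume = wear factor * normal load * sliding distance.
    k_wear in mm^3/(N m), load in N, sliding distance in m; result in mm^3."""
    return k_wear * load_n * sliding_m

def annual_wear(k_wear, body_weight_n, head_diameter_mm, steps_per_year):
    """Illustrative yearly wear estimate for a head-cup coupling."""
    # Assumed: sliding distance per step scales with head circumference;
    # the 0.25 factor is a purely illustrative arc fraction.
    sliding_per_step_m = 3.1416 * head_diameter_mm * 1e-3 * 0.25
    # Assumed: peak hip-joint force ~2.5x body weight (a common literature value).
    load = 2.5 * body_weight_n
    return archard_wear_volume(k_wear, load, sliding_per_step_m) * steps_per_year
```

Note how the linear dependence on load and on head diameter in this sketch mirrors the linear trends with body weight and head diameter reported in the abstract.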
A Semi-parametric Multivariate Gap-filling Model for Eddy Covariance Latent Heat Flux
NASA Astrophysics Data System (ADS)
Li, M.; Chen, Y.
2010-12-01
Quantitative descriptions of latent heat fluxes are important for studying the water and energy exchanges between terrestrial ecosystems and the atmosphere. Eddy covariance approaches have been recognized as the most reliable technique for measuring surface fluxes over time scales ranging from hours to years. However, unfavorable micrometeorological conditions, instrument failures, and measurement limitations cause inevitable flux gaps in time series data, so the development and application of suitable gap-filling techniques are crucial for estimating long-term fluxes. In this study, a semi-parametric multivariate gap-filling model was developed to fill latent heat flux gaps in eddy covariance measurements. Our approach combines the advantages of a multivariate statistical analysis (principal component analysis, PCA) and a nonlinear interpolation technique (K-nearest neighbors, KNN). The PCA method was first used to resolve the multicollinearity among various hydrometeorological factors, such as radiation, soil moisture deficit, LAI, and wind speed. The KNN method was then applied as a nonlinear interpolation tool to estimate each flux gap as the weighted sum of the latent heat fluxes of the K nearest neighbors in the PCs' domain. Two years, 2008 and 2009, of eddy covariance and hydrometeorological data from a subtropical mixed evergreen forest (the Lien-Hua-Chih site) were collected to calibrate and validate the proposed approach with artificial gaps after standard QC/QA procedures. The optimal K values and weighting factors were determined by a maximum likelihood test. The gap-filled latent heat fluxes indicate that the developed model successfully preserves the energy balance at daily, monthly, and yearly time scales. Annual evapotranspiration from this forest was 747 mm and 708 mm for 2008 and 2009, respectively. Nocturnal evapotranspiration was estimated with filled gaps and the results are comparable with other studies
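A minimal NumPy-only sketch of the PCA-plus-KNN gap-filling idea (illustrative, not the authors' calibrated model): project the driver variables onto principal components, then estimate each gap as an inverse-distance-weighted mean of the fluxes of its K nearest neighbours in PC space. The inverse-distance weighting is one plausible choice; the paper determines its weights by a maximum likelihood test.

```python
import numpy as np

def pca_knn_gapfill(X, y, X_gap, k=3, n_pc=2):
    """Fill flux gaps by k-nearest-neighbour interpolation in PCA space.
    X: (n, p) hydrometeorological drivers at times with valid flux y.
    X_gap: (m, p) drivers at the gap times. Returns (m,) filled fluxes."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sd                       # standardize the drivers
    # principal directions from the SVD of the standardized drivers
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Z @ Vt[:n_pc].T                     # scores of the valid samples
    P_gap = ((X_gap - mu) / sd) @ Vt[:n_pc].T
    filled = []
    for q in P_gap:
        d = np.linalg.norm(P - q, axis=1)   # distances in the PCs' domain
        idx = np.argsort(d)[:k]             # K nearest neighbours
        w = 1.0 / (d[idx] + 1e-9)           # inverse-distance weights
        filled.append(np.sum(w * y[idx]) / np.sum(w))
    return np.array(filled)
```

Working in PC space rather than raw driver space is what resolves the multicollinearity among correlated drivers such as radiation and temperature.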
SSC models for compact sources
NASA Astrophysics Data System (ADS)
Ghisellini, Gabriele
A particular class of synchrotron self-Compton models is discussed, assuming that (1) a power-law distribution of relativistic electrons is continuously injected throughout the source, (2) the radiative cooling time is shorter than the escape time, and (3) the magnetic-to-radiation energy density ratio is greater than unity. Taking self-absorption into account, the Compton-to-synchrotron luminosity ratio greatly exceeds the radiation-to-magnetic energy density ratio if the injected power-law electron distribution is steep. As in the model proposed, in a different context, by Zdziarski and Lamb (1986), the radiation produced by the multiple Compton process dominates the emission at all frequencies, and the overall spectral slope is always flatter than unity.
Notake, Takashi; Nawata, Kouji; Kawamata, Hiroshi; Matsukawa, Takeshi; Qi, Feng; Minamide, Hiroaki
2012-11-01
We developed a difference frequency generation (DFG) source based on an organic nonlinear optical crystal, DAST or BNA, selectively excited by a dual-wavelength β-BaB(2)O(4) optical parametric oscillator (BBO-OPO). The dual-wavelength BBO-OPO can independently oscillate at two different wavelengths from 800 to 1800 nm within a single cavity. THz-wave generation using each organic crystal covers an ultrawide range from 1 to 30 THz, with inherent intensity dips caused by crystal absorption modes. The reduced outputs can be improved by switching between the crystals with adequately tuned BBO-OPO pump wavelengths.
NASA Astrophysics Data System (ADS)
Gu, Yongxian
The demand for portable power generation systems for both domestic and military applications has driven advances in mesoscale internal combustion engine systems. This dissertation was devoted to the gasdynamic modeling and parametric study of mesoscale internal combustion swing engine/generator systems. First, system-level thermodynamic modeling for the swing engine/generator systems has been developed. The system performance, as well as the potential of both two- and four-stroke swing engine systems, has been investigated based on this model. Then, through parametric studies, the parameters that have significant impacts on the system performance have been identified, among which the burn time and spark advance time are the critical factors related to the combustion process. It is found that a shorter burn time leads to higher system efficiency and power output, and that the optimal spark advance time is about half of the burn time. Secondly, turbulent combustion modeling based on the level-set method (G-equation) has been implemented in the commercial software FLUENT. Thereafter, turbulent flame propagation in a generic mesoscale combustion chamber and in realistic swing engine chambers has been studied. It is found that, in mesoscale combustion engines, the burn time is dominated by the mean turbulent kinetic energy in the chamber. It is also shown that in a generic mesoscale combustion chamber, the burn time depends on the longest distance from the initial ignition kernel to the chamber walls, and that by changing the ignition and injection locations the burn time can be reduced by a factor of two. Furthermore, studies of turbulent flame propagation in real swing engine chambers show that combustion can be enhanced through in-chamber turbulence augmentation, and that with higher engine frequency the burn time is shorter, which indicates that in-chamber turbulence can be induced by the motion of moving components as well as by the intake gas jet flow. The burn time
Non-parametric temporal modeling of the hemodynamic response function via a liquid state machine.
Avesani, Paolo; Hazan, Hananel; Koilis, Ester; Manevitz, Larry M; Sona, Diego
2015-10-01
Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched for in the BOLD signal can be described by a specific parametric model (e.g., a double-gamma); (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from a basic form assumed in advance. This approach produces a set of voxel-wise models of the HRF and, as a result, relevant voxels are filterable according to the accuracy of their prediction in a machine learning framework. The approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing, wherein a Liquid State Machine is combined with a decoding feed-forward neural network. This splits the modeling into two parts: first, a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal, essentially temporal, global "reservoir"; second, an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained on the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feed-back system, while the neural network "translates" this data to fit the specific HRF response as given, e.g., by BOLD signal measurements in fMRI. An empirical analysis on synthetic datasets shows that the learning process can
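The reservoir-plus-readout split described above can be sketched with an echo state network standing in for the liquid state machine (a simplification: a true LSM uses spiking neurons, and the paper's decoder is a trained neural network rather than the ridge-regression readout used here). All parameter values are illustrative; the target is a toy HRF-like response to a stimulus train:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=100, spectral_radius=0.9):
    """Random recurrent weights scaled to a given spectral radius,
    plus random input weights."""
    W = rng.standard_normal((n, n))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    w_in = rng.standard_normal(n)
    return W, w_in

def run_reservoir(W, w_in, u, leak=0.3):
    """Drive the reservoir with stimulus u; collect the state trajectory."""
    x = np.zeros(len(w_in))
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy "HRF": a delayed, smoothed response to a sparse stimulus train.
u = np.zeros(200)
u[[20, 80, 140]] = 1.0
h = np.exp(-0.5 * ((np.arange(30) - 6) / 3.0) ** 2)  # gamma-like kernel stand-in
y = np.convolve(u, h)[:200]

S = run_reservoir(*make_reservoir(), u)
# Linear readout ("decoder") fitted to the data by ridge regression.
lam = 1e-2
w_out = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y)
pred = S @ w_out
```

The reservoir is fixed and untrained (the "universal" temporal part); only the readout is fitted, which is the division of labour the abstract describes.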
NASA Astrophysics Data System (ADS)
Mukherjee, Sananda
In recent years, there has been great interest in the potential of green roofs as an alternative roofing option to reduce the energy consumed by individual buildings as well as mitigate large scale urban environmental problems such as the heat island effect. There is a widespread recognition and a growing literature of measured data that suggest green roofs can reduce building energy consumption. This thesis investigates the potential of green roofs in reducing building energy loads and focuses on how the different parameters of a green roof assembly affect the thermal performance of a building. A green roof assembly is modeled in Design Builder, a 3D graphical design modeling and energy-use simulation program (interface) that uses the EnergyPlus simulation engine, and the simulated data set thus obtained is compared to field experiment data to validate the roof assembly model on the basis of how accurately it simulates the behavior of a green roof. The software is then used to evaluate the thermal performance of several green roof assemblies under three different climate types, looking at the whole-building energy consumption. For the purpose of this parametric simulation study, a prototypical single-story small office building is considered and one parameter of the green roof is altered for each simulation run in order to understand its effect on the building's energy loads. These parameters include different insulation thicknesses, leaf area indices (LAI), and growing medium or soil depths, each of which are tested under the three different climate types. The energy use intensities (EUIs) and the peak and annual heating and cooling loads resulting from the use of these green roof assemblies are compared with each other and with a cool roof base case to determine the energy load reductions, if any. The heat flux through the roof is also evaluated and compared. The simulation results are then organized and finally presented as a decision support tool that would
Halevy, A; Megidish, E; Dovrat, L; Eisenberg, H S; Becker, P; Bohatý, L
2011-10-10
We describe the full characterization of the biaxial nonlinear crystal BiB₃O₆ (BiBO) as a polarization entangled photon source using non-collinear type-II parametric down-conversion. We consider the relevant parameters for crystal design, such as cutting angles, polarization of the photons, effective nonlinearity, spatial and temporal walk-offs, crystal thickness and the effect of the pump laser bandwidth. Experimental results showing entanglement generation with high rates and a comparison to the well investigated β-BaB₂O₄ (BBO) crystal are presented as well. Changing the down-conversion crystal of a polarization entangled photon source from BBO to BiBO enhances the generation rate as if the pump power was increased by 2.5 times. Such an improvement is currently required for the generation of multiphoton entangled states.
NASA Astrophysics Data System (ADS)
Ziehn, Tilo; Dixon, Nick S.; Tomlin, Alison S.
2009-12-01
A combined Lagrangian stochastic model with micro-mixing and chemical sub-models is used to investigate a reactive plume of nitrogen oxides (NOx) released into a turbulent grid flow doped with ozone (O3). Sensitivities to the model input parameters are explored for different source NOx scenarios. The wind tunnel experiments of Brown and Bilger (1996) provide the simulation conditions for the first case study, where photolysis reactions are not included and the main uncertainties occur in parameters defining the turbulence scales, the source size, and the reaction rate of NO with O3. Using nominal values of the parameters from previous studies, the model gives a good representation of the radial profile of the conserved mean scalar, although it slightly over-predicts peak mean NO2 concentrations compared to the experiments. The high dimensional model representation (HDMR) method is used to investigate the effects of uncertainties in model inputs on the simulation of chemical species concentrations. For this scenario, the Lagrangian velocity structure function coefficient has the largest impact on the simulated mean concentration profiles. Photolysis reactions are then included in a chemical scheme consisting of eight reactions between the species NO, O, O3, and NO2. Independent and interactive effects of 22 input parameters are studied for two source NOx scenarios using HDMR, including turbulence parameters, temperature-dependent rate parameters, photolysis rates, temperature, the fraction of NO in total NOx at the source, and the background ozone concentration [O3]. For this reactive case, the variance in the predicted mean plume-centre concentration is caused by parameters describing both physical (mixing time-scale coefficient) and chemical processes (activation energy for the reaction O3 + NO). The variance in the predicted plume-centre mean and root-mean-square NO2 concentrations is strongly influenced by the fraction of NO in the source NOx, and to a lesser extent by the mixing time-scale coefficient
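The idea behind variance-based, HDMR-type sensitivity measures can be sketched with a crude first-order index estimator applied to a toy response. This illustrates the general approach of attributing output variance to individual inputs; it is not the authors' HDMR implementation, and the toy model is purely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_order_indices(model, n_params, n=20000, bins=20):
    """Crude first-order sensitivity indices in the HDMR spirit:
    S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning each X_i."""
    X = rng.uniform(size=(n, n_params))
    y = model(X)
    var_y = y.var()
    S = []
    for i in range(n_params):
        edges = np.linspace(0.0, 1.0, bins + 1)
        cond_means = [y[(X[:, i] >= lo) & (X[:, i] < hi)].mean()
                      for lo, hi in zip(edges[:-1], edges[1:])]
        S.append(np.var(cond_means) / var_y)
    return S

# Toy "plume" response: strongly driven by parameter 0, weakly by parameter 1,
# not at all by parameter 2 (plus a little observation noise).
def toy_model(X):
    return 4.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(len(X))

S = first_order_indices(toy_model, 3)
```

The ranking of the indices identifies which input parameters dominate the output variance, which is how the study singles out, e.g., the velocity structure function coefficient or the source NO fraction.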
NASA Astrophysics Data System (ADS)
Wu, Zhiliang; Wang, Shuxin; Zhang, Lianhong; Hu, S. Jack
This paper presents an analytical model of the electrical contact resistance between the carbon paper gas diffusion layers (GDLs) and the graphite bipolar plates (BPPs) in a proton exchange membrane (PEM) fuel cell. The model is developed based on the classical statistical contact theory for a PEM fuel cell, using the same probability distributions of the GDL structure and BPP surface profile as previously described in Wu et al. [Z. Wu, Y. Zhou, G. Lin, S. Wang, S.J. Hu, J. Power Sources 182 (2008) 265-269] and Zhou et al. [Y. Zhou, G. Lin, A.J. Shih, S.J. Hu, J. Power Sources 163 (2007) 777-783]. Results show that estimates of the contact resistance compare favorably with experimental data by Zhou et al. [Y. Zhou, G. Lin, A.J. Shih, S.J. Hu, J. Power Sources 163 (2007) 777-783]. Factors affecting the contact behavior are systematically studied using the analytical model, including the material properties of the two contact bodies and factors arising from the manufacturing processes. The transverse Young's modulus of chopped carbon fibers in the GDL and the surface profile of the BPP are found to be significant to the contact resistance. The factor study also sheds light on the manufacturing requirements of carbon fiber GDLs for a better contact performance in PEM fuel cells.
Grating lobe elimination in steerable parametric loudspeaker.
Shi, Chuang; Gan, Woon-Seng
2011-02-01
In the past two decades, the majority of research on the parametric loudspeaker has concentrated on the nonlinear modeling of acoustic propagation and pre-processing techniques to reduce nonlinear distortion in sound reproduction. There are, however, very few studies on directivity control of the parametric loudspeaker. In this paper, we propose an equivalent circular Gaussian source array that approximates the directivity characteristics of the linear ultrasonic transducer array. By using this approximation, the directivity of the sound beam from the parametric loudspeaker can be predicted by the product directivity principle. New theoretical results, which are verified through measurements, are presented to show the effectiveness of the delay-and-sum beamsteering structure for the parametric loudspeaker. Unlike the conventional loudspeaker array, where the spacing between array elements must be less than half the wavelength to avoid spatial aliasing, the parametric loudspeaker can take advantage of grating lobe elimination to extend the spacing of the ultrasonic transducer array to more than 1.5 wavelengths in a typical application.
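A generic array-factor calculation (not the paper's Gaussian-source or product-directivity model) illustrates the grating-lobe geometry behind delay-and-sum steering: for element spacing d, lobes occur wherever sin θ differs from the steered direction by a multiple of λ/d, so spacings beyond half a wavelength admit grating lobes:

```python
import numpy as np

def array_factor(theta_deg, n_elem, d_over_lambda, steer_deg=0.0):
    """Normalized delay-and-sum array factor of an n-element uniform
    linear array; theta in degrees, spacing in wavelengths."""
    psi = 2 * np.pi * d_over_lambda * (np.sin(np.radians(theta_deg))
                                       - np.sin(np.radians(steer_deg)))
    # sum of the element phasors, normalized to 1 at the main lobe
    af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(n_elem), psi)), axis=0))
    return af / n_elem

theta = np.linspace(-90.0, 90.0, 3601)
# half-wavelength spacing: a single main lobe, no grating lobes
af_half = array_factor(theta, 8, 0.5)
# 1.5-wavelength spacing: grating lobes at sin(theta) = m / 1.5 (~±41.8 deg)
af_wide = array_factor(theta, 8, 1.5)
```

For the parametric loudspeaker, the ultrasonic carrier's grating lobes at such wide spacings are suppressed in the audible difference-frequency beam, which is the "grating lobe elimination" the paper exploits.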
NASA Astrophysics Data System (ADS)
Lange, Stefan; Rockel, Burkhardt; Volkholz, Jan; Bookhagen, Bodo
2014-05-01
This study provides a first thorough evaluation of the COnsortium for Small scale MOdeling weather prediction model in CLimate Mode (COSMO-CLM) over South America. Simulations are driven by ERA-Interim reanalysis data. Besides precipitation, we examine the surface radiation budget, cloud cover, 2 m temperatures, and the low level circulation. We evaluate against reanalysis data as well as observations from ground stations and satellites. Our analysis focuses on the sensitivity of results to the parametrization of non-precipitating subgrid-scale clouds in comparison to the sensitivity to the convective parametrization. Specifically, we compare simulations with a relative humidity versus a statistical subgrid-scale cloud scheme, in combination with convection schemes according to Tiedtke (1989) and from the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) cycle 33r1. The sensitivity of simulated tropical precipitation to the parametrizations of convection and subgrid-scale clouds is of similar magnitude. We show that model runs with different subgrid-scale cloud schemes produce substantially different cloud ice and liquid water contents. This impacts surface radiation budgets, and in turn convection and precipitation. Considering all evaluated variables in synopsis, the model performs best with the (both non-default) IFS and statistical schemes for convection and subgrid-scale clouds, respectively. Despite several remaining deficiencies, such as a poor simulation of the diurnal cycle of precipitation or a substantial austral summer warm bias in northern Argentina, this new setup considerably reduces long-standing model biases, which have been a feature of COSMO-CLM across tropical domains.
NASA Astrophysics Data System (ADS)
Lange, Stefan; Rockel, Burkhardt; Volkholz, Jan; Bookhagen, Bodo
2015-05-01
This study provides a first thorough evaluation of the COnsortium for Small scale MOdeling weather prediction model in CLimate Mode (COSMO-CLM) over South America. Simulations are driven by ERA-Interim reanalysis data. Besides precipitation, we examine the surface radiation budget, cloud cover, 2 m temperatures, and the low level circulation. We evaluate against reanalysis data as well as observations from ground stations and satellites. Our analysis focuses on the sensitivity of results to the convective parametrization in comparison to their sensitivity to the representation of non-precipitating subgrid-scale clouds in the parametrization of radiation. Specifically, we compare simulations with a relative humidity versus a statistical subgrid-scale cloud scheme, in combination with convection schemes according to Tiedtke (Mon Weather Rev 117(8):1779-1800, 1989) and from the European Centre for Medium-Range Weather Forecasts Integrated Forecasting System (IFS) cycle 33r1. The sensitivity of simulated tropical precipitation to the parametrizations of convection and subgrid-scale clouds is of similar magnitude. We show that model runs with different subgrid-scale cloud schemes produce substantially different cloud ice and liquid water contents. This impacts surface radiation budgets, and in turn convection and precipitation. Considering all evaluated variables in synopsis, the model performs best with the (both non-default) IFS and statistical schemes for convection and subgrid-scale clouds, respectively. Despite several remaining deficiencies, such as a poor simulation of the diurnal cycle of precipitation or a substantial austral summer warm bias in northern Argentina, this new setup considerably reduces long-standing model biases, which have been a feature of COSMO-CLM across tropical domains.
Yan, Huiping; Qian, Yun; Lin, Guang; Leung, Lai-Yung R.; Yang, Ben; Fu, Q.
2014-03-25
Convective parameterizations used in weather and climate models all display sensitivity to model resolution and variable skill in different climatic regimes. Although parameters in convective schemes can be calibrated using observations to reduce model errors, it is not clear whether optimal parameters calibrated from regional data can robustly improve model skill across different model resolutions and climatic regimes. In this study, this issue is investigated using a regional modeling framework based on the Weather Research and Forecasting (WRF) model. To quantify the response and sensitivity of model performance to model parameters, we identified five key input parameters and their ranges in the Kain-Fritsch (KF) convection scheme in WRF and calibrated them across different spatial resolutions, climatic regimes, and radiation schemes using observed precipitation data. Results show that the optimal values of the five input parameters in the KF scheme are close across experiments, and that model sensitivity and error exhibit similar dependence on the input parameters in all experiments conducted in this study despite differences in the precipitation climatology. The model's overall performance in simulating precipitation is most sensitive to the coefficients of downdraft (Pd) and entrainment (Pe) mass flux and the starting height of downdraft (Ph). However, rainfall biases, which are probably more related to structural errors, still exist over some regions even with the optimal parameters, suggesting that further studies are needed to identify the sources of uncertainty and to reduce the model biases or structural errors associated with missing or misrepresented physical processes and/or potential problems with the modeling framework.
Source modeling sleep slow waves
Murphy, Michael; Riedner, Brady A.; Huber, Reto; Massimini, Marcello; Ferrarelli, Fabio; Tononi, Giulio
2009-01-01
Slow waves are the most prominent electroencephalographic (EEG) feature of sleep. These waves arise from the synchronization of slow oscillations in the membrane potentials of millions of neurons. Scalp-level studies have indicated that slow waves are not instantaneous events, but rather they travel across the brain. Previous studies of EEG slow waves were limited by the poor spatial resolution of EEGs and by the difficulty of relating scalp potentials to the activity of the underlying cortex. Here we use high-density EEG (hd-EEG) source modeling to show that individual spontaneous slow waves have distinct cortical origins, propagate uniquely across the cortex, and involve unique subsets of cortical structures. However, when the waves are examined en masse, we find that there are diffuse hot spots of slow wave origins centered on the lateral sulci. Furthermore, slow wave propagation along the anterior-posterior axis of the brain is largely mediated by a cingulate highway. As a group, slow waves are associated with large currents in the medial frontal gyrus, the middle frontal gyrus, the inferior frontal gyrus, the anterior cingulate, the precuneus, and the posterior cingulate. These areas overlap with the major connectional backbone of the cortex and with many parts of the default network. PMID:19164756
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high-temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, the oxide stoichiometry, and the cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
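A uniform-thickness sketch of the growth/spall iteration (a simplification of the interfacial, thickest-segment model described above; all parameter values are illustrative) reproduces the characteristic rise and eventual decline of net specimen weight over many cycles:

```python
import math

def cyclic_oxidation(kp, spall_frac, stoich_o, n_cycles, dt=1.0):
    """Deterministic growth/spall iteration (uniform-thickness sketch).
    kp: parabolic rate constant, spall_frac: area fraction spalled per
    cycle, stoich_o: oxygen mass fraction of the oxide, dt: hot dwell
    per cycle. Returns the net specimen weight change after each cycle."""
    w_retained = 0.0      # retained oxide mass per unit area
    w_spalled = 0.0       # cumulative oxide mass spalled
    o_gained = 0.0        # cumulative oxygen uptake
    history = []
    for _ in range(n_cycles):
        # parabolic growth of the retained scale during the hot dwell
        w_new = math.sqrt(w_retained ** 2 + kp * dt)
        o_gained += (w_new - w_retained) * stoich_o
        # a constant area fraction of the scale spalls on cooldown
        spall = spall_frac * w_new
        w_spalled += spall
        w_retained = w_new - spall
        # net weight change = oxygen gained minus oxide lost
        history.append(o_gained - w_spalled)
    return history
```

Early cycles gain weight (oxygen uptake dominates); once growth and spalling balance, each cycle loses metal and the net weight change turns negative, as in typical cyclic oxidation curves.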
NASA Astrophysics Data System (ADS)
Barrientos Barria, Jessica; Dobroc, Alexandre; Coudert-Alteirac, Hélène; Raybaut, Myriam; Cézard, Nicolas; Dherbecourt, Jean-Baptiste; Schmid, Thomas; Faure, Basile; Souhaité, Grégoire; Pelon, Jacques; Melkonian, Jean-Michel; Godard, Antoine; Lefebvre, Michel
2014-10-01
We report on the remote sensing capability of an integrated path differential absorption lidar (IPDIAL) instrument, for multi-species gas detection and monitoring in the 3.3-3.7 µm range. This instrument is based on an optical parametric source composed of a master oscillator-power amplifier scheme—whose core building block is a nested cavity optical parametric oscillator—emitting up to 10 µJ at 3.3 µm. Optical pumping is realized with an innovative single-frequency, 2-kHz repetition rate, nanosecond microchip laser, amplified up to 200 µJ per pulse in a single-crystal fiber amplifier. Simultaneous monitoring of mean atmospheric water vapor and methane concentrations was performed over several days by use of a topographic target, and water vapor concentration measurements show good agreement compared with an in situ hygrometer measurement. Performances of the IPDIAL instrument are assessed in terms of concentration measurement uncertainties and maximum remote achievable range.
[Review of urban nonpoint source pollution models].
Wang, Long; Huang, Yue-Fei; Wang, Guang-Qian
2010-10-01
The development history of urban nonpoint source pollution models is reviewed. The features, applicability, and limitations of seven popular urban nonpoint source pollution models (SWMM, STORM, SLAMM, HSPF, DR3M-QUAL, MOUSE, and HydroWorks) are discussed. The methodology and research findings on uncertainty in urban nonpoint source pollution modeling are presented. Analytical probabilistic models for the estimation of urban nonpoint sources are also presented. The research achievements of urban nonpoint source pollution models in China are summarized. The shortcomings and gaps of current approaches to urban nonpoint source pollution modeling are pointed out. Improvements in the modeling of pollutant buildup and washoff, sediment and pollutant transport, and pollutant biochemical reactions are desired for the seven popular models. Most of the models developed by researchers in China are empirical, so they can be applied only to specific small areas and have inadequate accuracy. Future approaches include improving capabilities in the fate and transport simulation of sediments and pollutants, exploring methodologies for modeling urban nonpoint source pollution in regions with little data or incomplete information, developing stochastic models for urban nonpoint source pollution simulation, and applying GIS to facilitate urban nonpoint source pollution simulation.
Strauch, K; Fimmers, R; Kurz, T; Deichmann, K A; Wienker, T F; Baur, M P
2000-01-01
We present two extensions to linkage analysis for genetically complex traits. The first extension allows investigators to perform parametric (LOD-score) analysis of traits caused by imprinted genes, that is, of traits showing a parent-of-origin effect. By specifying two heterozygote penetrance parameters, paternal and maternal origin of the mutation can be treated differently in terms of the probability of expression of the trait. Therefore, a single-disease-locus imprinting model includes four penetrances instead of only three. In the second extension, parametric and nonparametric linkage analysis with two trait loci is formulated for a multimarker setting, optionally taking imprinting into account. We have implemented both methods in the program GENEHUNTER. The new tools, GENEHUNTER-IMPRINTING and GENEHUNTER-TWOLOCUS, were applied to human family data for sensitization to mite allergens. The data set comprises pedigrees from England, Germany, Italy, and Portugal. With single-disease-locus imprinting MOD-score analysis, we find several regions that show at least suggestive evidence for linkage. Most prominently, a maximum LOD score of 4.76 is obtained near D8S511, for the English population, when a model that implies complete maternal imprinting is used. Parametric two-trait-locus analysis yields a maximum LOD score of 6.09 for the German population, occurring exactly at D4S430 and D18S452. The heterogeneity model specified for analysis alludes to complete maternal imprinting at both disease loci. Altogether, our results suggest that the two novel formulations of linkage analysis provide valuable tools for genetic mapping of multifactorial traits. PMID:10796874
NASA Astrophysics Data System (ADS)
Lee, Young-Hee; Ahn, Kwang-Deuk; Lee, Yong Hee
2016-06-01
We have developed a parametrization of tidal effects for use in the Noah land-surface model and have validated the land-surface model using observations taken over a tidal flat of the western coast of South Korea. The parametrization is based on the energy budget of a water layer with varying thickness above the soil. During flood tide, heat transfer by the moving water is considered in addition to the surface energy budget. In addition, partial penetration of solar radiation through the water layer is considered in the surface energy budget, and the water thickness varying with time is used as an additional input. Seven days with clear-sky conditions and westerly winds during the daytime are selected for validation of the model. Two simulations are performed in an offline mode: a control simulation without the tidal effect (CONTROL) and a simulation with the tidal effect (TIDE). Comparisons of results have been made with eddy-covariance measurements and soil temperature data for the tidal flats. Observations show that inundation significantly reduces both sensible and latent heat fluxes during daytime, which is well simulated in the TIDE simulation. The diurnal variation and magnitude of soil temperature are better simulated in the TIDE than in the CONTROL simulation. Some underestimation of the latent heat flux over the water surface is partly attributed to the use of one layer of water and the underestimated roughness length at this site. In addition, other model deficiencies are discussed.
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; Fleming, Paul A.; Ruben, S. D.; Marden, J. R.; Pao, L. Y.
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
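As a heavily simplified illustration of yaw-based wake redirection (a Jensen-type velocity deficit with an assumed linear wake deflection, NOT the actual FLORIS model; all constants are illustrative), yawing the upstream turbine trades its own power for a larger gain downstream:

```python
import numpy as np

def power(v):
    """Turbine power proportional to wind speed cubed (arbitrary units)."""
    return v ** 3

def plant_power(yaw_deg, u_inf=8.0, spacing_d=7.0, k_wake=0.05):
    """Total power of two aligned turbines with the upstream one yawed."""
    a = 1.0 / 3.0                                 # Betz-optimal axial induction
    ct_loss = np.cos(np.radians(yaw_deg)) ** 3    # yawed rotor produces less
    # Jensen-type velocity deficit at the downstream rotor (spacing in diameters)
    deficit = 2 * a / (1 + 2 * k_wake * spacing_d) ** 2
    # assumed linear wake deflection: ~0.3 rotor diameters per 10 deg of yaw
    offset_d = 0.03 * yaw_deg
    overlap = max(0.0, 1.0 - abs(offset_d))       # crude rotor/wake overlap
    v_down = u_inf * (1 - deficit * overlap)
    return power(u_inf) * ct_loss + power(v_down)

yaws = np.linspace(0.0, 30.0, 61)
best = max(yaws, key=plant_power)                 # yaw maximizing plant power
```

The nonzero optimal yaw reflects the plant-level trade-off FLORIS captures: the upstream cosine loss is outweighed by the recovered downstream inflow when the wake is steered off the second rotor.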
Enabling Parametric Optimal Ascent Trajectory Modeling During Early Phases of Design
NASA Technical Reports Server (NTRS)
Holt, James B.; Dees, Patrick D.; Diaz, Manuel J.
2015-01-01
-modal due to the interaction of various constraints. When these obstacles are coupled with the Program to Optimize Simulated Trajectories (POST) [1], an industry-standard but difficult-to-use program for ascent trajectory optimization, expert trajectory analysts are required to effectively optimize a vehicle's ascent trajectory. As has been pointed out, the paradigm of trajectory optimization remains a very manual one, because applying modern computational resources to POST is still a challenging problem; the nuances involved in correctly utilizing, and therefore automating, the program present a large obstacle. To address these issues, the authors discuss a two-fold methodology: first, a set of heuristics, captured while working with expert analysts to replicate the current state of the art, is introduced and discussed; second, the power of modern computing is leveraged to evaluate multiple trajectories simultaneously, enabling exploration of the trajectory design space early in the pre-conceptual and conceptual phases of design. When this methodology is coupled with design of experiments to train surrogate models, the trajectory design space can be visualized, enabling parametric optimal ascent trajectory information to be combined with other pre-conceptual and conceptual design tools. The potential impact of this methodology's success would be a fully automated POST evaluation suite for conceptual and preliminary design trade studies. This will enable engineers to characterize the ascent trajectory's sensitivity to design changes in an arbitrary number of dimensions, and to find settings for trajectory-specific variables that yield optimal performance for a "dialed-in" launch vehicle design.
The effort described in this paper was developed for the Advanced Concepts Office [2] at NASA Marshall
Binding energy and structure of e⁺Li and e⁻Li using a parametric model potential
Shertzer, J.; Ward, S. J.
2006-02-15
The parametric model potential developed by Peach for describing the electron interaction with the alkali-metal ion core yields energy levels that are in excellent agreement with the measurements of the spectra. Because of its relative simplicity, the l-independent model potential is an attractive choice for studying positron-alkali-metal collisions. In order to test how well the model potential can be used to describe an effective three-body system, we use the Peach model potential to calculate the energy and geometry of the weakly bound e⁺Li and e⁻Li systems. The binding energy is in good agreement with calculations using the exact Hamiltonian.
Sources, Sinks, and Model Accuracy
Spatial demographic models are a necessary tool for understanding how to manage landscapes sustainably for animal populations. These models, therefore, must offer precise and testable predictions about animal population dynamics and how animal demographic parameters respond to ...
Learning models for multi-source integration
Tejada, S.; Knoblock, C.A.; Minton, S.
1996-12-31
Because of the growing number of information sources available through the internet, there are many cases in which the information needed to solve a problem or answer a question is spread across several sources. For example, given two sources, one about comic books and the other about super heroes, you might want to ask the question "Is Spiderman a Marvel Super Hero?" This query accesses both sources; therefore, it is necessary to have information about the relationships of the data within each source and between sources to properly access and integrate the data retrieved. The SIMS information broker captures this type of information in the form of a model. All the information sources map into the model, providing the user a single interface to multiple sources.
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Musson-Genon, Luc; Dupont, Eric; Milliez, Maya; Carissimo, Bertrand
2014-05-01
A detailed numerical simulation of a radiation fog event with a single column model is presented, which takes into account recent developments in microphysical parametrizations. One-dimensional simulations are performed using the computational fluid dynamics model Code_Saturne and the results are compared to a very detailed in situ dataset collected during the ParisFog campaign, which took place near Paris, France, during the winter 2006-2007. Special attention is given to the detailed and complete diurnal simulations and to the role of microphysics in the fog life cycle. The comparison between the simulated and the observed visibility, in the single-column model case study, shows that the evolution of radiation fog is correctly simulated. Sensitivity simulations show that fog development and dissipation are sensitive to the droplet-size distribution through sedimentation/deposition processes but the aerosol number concentration in the coarse mode has a low impact on the time of fog formation.
NASA Astrophysics Data System (ADS)
Fai, S.; Filippi, M.; Paliaga, S.
2013-07-01
Whether a house of worship or a simple farmhouse, the fabrication of a building reveals both the unspoken cultural aspirations of the builder and the inevitable exigencies of the construction process. In other words, why buildings are made is intimately and inevitably associated with how buildings are made. Nowhere is this more evident than in vernacular architecture. At the Carleton Immersive Media Studio (CIMS) we are concerned that the depopulation of Canada's rural areas, the paucity of specialized tradespersons, and the increasing complexity of building codes threaten the sustainability of this invaluable cultural resource. For current and future generations, the quantitative and qualitative values of traditional methods of construction are essential for an inclusive cultural memory. More practically, and equally pressing, an operational knowledge of these technologies is essential for the conservation of our built heritage. To address these concerns, CIMS has launched a number of research initiatives over the past five years that explore novel protocols for the documentation and dissemination of knowledge related to traditional methods of construction. Our current project, Cultural Diversity and Material Imagination in Canadian Architecture (CDMICA), made possible through funding from Canada's Social Sciences and Humanities Research Council (SSHRC), explores the potential of building information modelling (BIM) within the context of a web-based environment. In this paper, we discuss our work to date on the development of a web-based library of BIM details referenced to "typical" assemblies culled from nineteenth- and early twentieth-century construction manuals. The parametric potential of these "typical" details is further refined by evidence from the documentation of "specific" details studied during comprehensive surveys of extant heritage buildings. Here, we consider a BIM of the roof truss assembly of one of the oldest buildings in Canada's national
NASA's X-Plane Database and Parametric Cost Model v 2.0
NASA Technical Reports Server (NTRS)
Sterk, Steve; Ogluin, Anthony; Greenberg, Marc
2016-01-01
The NASA Armstrong Cost Engineering Team, with technical assistance from NASA HQ (SID), has gone through the full process of developing new CERs, from Version #1 to Version #2. We took a step backward and reexamined all of the data collected, such as dependent and independent variables: cost, dry weight, length, wingspan, manned versus unmanned, altitude, Mach number, thrust, and skin. We used a well-known statistical analysis tool called CO$TAT instead of multiple linear regression in "R" or the "Regression" tool found in Microsoft Excel™. We set up an array of data by adding 21 "dummy variables"; we analyzed the standard error (SE) and then determined the best fit. We have parametrically priced out several future X-planes and compared our results to those of other resources. More work needs to be done in obtaining accurate and traceable cost data from historical X-plane records.
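The regression setup described above, fitting a cost estimating relationship (CER) with dummy variables and judging candidates by standard error, can be sketched as follows. The data values, variable names, and log-linear form are invented for illustration; the actual X-plane dataset and CO$TAT analysis are not reproduced here.

```python
import numpy as np

# Hypothetical CER fit: ln(cost) = b0 + b1*ln(dry_weight) + b2*manned.
# All numbers below are invented placeholders, not X-plane data.
dry_weight = np.array([1200.0, 2500.0, 800.0, 4000.0, 1500.0, 3000.0])  # lb
manned     = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 0.0])                   # dummy variable
cost       = np.array([90.0, 160.0, 40.0, 250.0, 70.0, 120.0])          # $M

X = np.column_stack([np.ones_like(dry_weight), np.log(dry_weight), manned])
y = np.log(cost)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standard error of the estimate: the statistic used to compare fits
dof = len(y) - X.shape[1]
se = np.sqrt(np.sum((y - X @ beta) ** 2) / dof)
print("coefficients:", beta.round(3), "SE:", round(se, 3))
```

Adding a dummy column per category (here, manned vs unmanned) lets one regression pool all vehicles while still estimating a category-specific offset, which is the role the 21 dummy variables play in the analysis above.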
Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic
2014-06-28
The finite resolution of general circulation models of the coupled atmosphere-ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere-ocean climate system in operational forecast mode, and the latest seasonal forecasting system--System 4--has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981-2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden-Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid
Nascimento, Jacinto C; Marques, Jorge S; Lemos, João M
2013-05-01
Many approaches to trajectory analysis, such as clustering or classification, use probabilistic generative models, thus not requiring trajectory alignment/registration. Switched linear dynamical models (e.g., HMMs) have been used in this context, due to their ability to describe different motion regimes. However, these models are not suitable for handling space-dependent dynamics that are more naturally captured by nonlinear models, which, as is well known, are more difficult to identify. In this paper, we propose a new way of modeling trajectories, based on a mixture of parametric motion vector fields that depend on a small number of parameters. Switching among these fields follows a probabilistic mechanism, characterized by a field of stochastic matrices. This approach allows representing a wide variety of trajectories and modeling space-dependent behaviors without using global nonlinear dynamical models. Experimental evaluation is conducted in both synthetic and real scenarios, the latter concerning human trajectory modeling for activity classification, a central task in video surveillance.
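The generative mechanism described above, switching among parametric motion fields under a space-dependent field of stochastic matrices, can be sketched with two invented fields on the unit square. The specific fields, the switching law, and the noise level are assumptions for illustration, not the paper's learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

def field_drift(x):
    return np.array([0.05, 0.05])            # steady drift toward the upper right

def field_circulate(x):
    c = x - 0.5
    return 0.1 * np.array([-c[1], c[0]])     # rotation about the domain centre

fields = [field_drift, field_circulate]

def switching_matrix(x):
    # Space-dependent stochastic matrix: switching is more likely near the
    # centre of the unit square (an assumed dependence for this sketch).
    p = 0.05 + 0.4 * np.exp(-10.0 * np.sum((x - 0.5) ** 2))
    return np.array([[1 - p, p], [p, 1 - p]])

x, regime = np.array([0.1, 0.1]), 0
trajectory = [x.copy()]
for _ in range(100):
    regime = rng.choice(2, p=switching_matrix(x)[regime])   # sample next regime
    x = x + fields[regime](x) + 0.005 * rng.standard_normal(2)
    trajectory.append(x.copy())
trajectory = np.array(trajectory)
print(trajectory.shape)
```

Because each field is simple and parametric, the mixture can represent curved, space-dependent motion without ever fitting a single global nonlinear dynamical model, which is the point the abstract makes.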
Park, Taeyoung; Krafty, Robert T.; Sánchez, Alvaro I.
2012-01-01
A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public.
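The likelihood underlying such a model, a step-function log baseline rate plus fixed-dimension covariate effects, can be written down directly. The segment layout, data, and coefficient values below are invented for illustration; the paper's varying-dimensional MCMC sampler based on partial collapse is not reproduced here.

```python
import numpy as np
from math import lgamma

def loglik(counts, exposure, covariates, beta, breakpoints, log_base_rates):
    """Poisson log-likelihood with a piecewise-constant log baseline rate.

    breakpoints: sorted time indices where the baseline rate changes;
    log_base_rates: one value per segment (len(breakpoints) + 1 values).
    """
    t = np.arange(len(counts))
    segment = np.searchsorted(breakpoints, t, side="right")
    log_mu = (np.log(exposure)
              + np.asarray(log_base_rates)[segment]
              + covariates @ beta)
    mu = np.exp(log_mu)
    return float(np.sum(counts * log_mu - mu
                        - np.array([lgamma(c + 1.0) for c in counts])))

# Invented daily counts with a binary policy indicator as covariate
rng = np.random.default_rng(1)
n = 60
counts = rng.poisson(3.0, size=n).astype(float)
exposure = np.ones(n)
covariates = rng.integers(0, 2, size=(n, 1)).astype(float)
ll = loglik(counts, exposure, covariates, np.array([-0.2]),
            np.array([20, 40]), [1.0, 1.3, 0.9])
print(ll)
```

Because the number of breakpoints is itself unknown, the dimension of `log_base_rates` varies across models, which is exactly why the paper needs varying-dimensional inference rather than a single maximization of this likelihood.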
IMM filtering on parametric data for multi-sensor fusion
NASA Astrophysics Data System (ADS)
Shafer, Scott; Owen, Mark W.
2014-06-01
In tracking, many types of sensor data can be obtained and utilized to distinguish a particular target. Commonly, kinematic information is used for tracking, but it can be combined with identification attributes and parametric information passively collected from the target's emitters. Along with the standard tracking process (predict, associate, score, update, and initiate) that operates in all kinematic trackers, parametric data can also be utilized to perform these steps and provide a means for feature fusion. Feature fusion, utilizing parametrics from multiple sources, yields a rich data set providing many degrees of freedom to separate and correlate data into appropriate tracks. Parametric radar data can exhibit many dynamics, including stable, agile, and jittered behavior. A running sample mean and sample variance give a good estimate of radar parametrics; however, when dynamics are involved, a severe lag can occur and a non-optimal estimate results. This estimate can yield incorrect associations in feature space and cause track fragmentation or miscorrelation. In this paper we investigate the accuracy of the interacting multiple model (IMM) filter at estimating the first and second moments of radar parametrics. The algorithm is assessed by Monte Carlo simulation and compared against a running sample mean/variance technique. We find that the IMM approach yields a better result due to its ability to quickly adapt to dynamical systems with the proper model and tuning.
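The baseline that the IMM filter is compared against, a running sample mean and variance, is commonly implemented with Welford's online algorithm, sketched below. The simulated "parametric" stream is invented; the lag the abstract describes appears because every past sample keeps full weight, so the estimate adapts slowly after a regime change.

```python
import numpy as np

class RunningStats:
    """Welford's online algorithm for the running sample mean and variance."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # incremental mean update
        self.m2 += delta * (x - self.mean)   # accumulate sum of squared deviations

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# Invented example: a stable emitter parametric (e.g. pulse width) with noise
rng = np.random.default_rng(0)
samples = rng.normal(100.0, 2.0, size=500)
rs = RunningStats()
for s in samples:
    rs.update(s)
print(round(rs.mean, 2), round(rs.variance, 2))
```

For a genuinely stable parametric this matches the batch sample statistics exactly, which is why it is a fair baseline; the IMM's advantage only shows up once the emitter switches behavior.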
Essays on parametric and nonparametric modeling and estimation with applications to energy economics
NASA Astrophysics Data System (ADS)
Gao, Weiyu
My dissertation research is composed of two parts: a theoretical part on semiparametric efficient estimation and an applied part in energy economics under different dynamic settings. The essays are related in terms of their applications as well as the way in which models are constructed and estimated. In the first essay, efficient estimation of the partially linear model is studied. We work out the efficient score functions and efficiency bounds under four stochastic restrictions: independence, conditional symmetry, conditional zero mean, and partially conditional zero mean. A feasible efficient estimation method for the linear part of the model is developed based on the efficient score. A battery of specification tests that allow for choosing between the alternative assumptions is provided. A Monte Carlo simulation is also conducted. The second essay presents a dynamic optimization model for a stylized oilfield resembling the largest developed light oil field in Saudi Arabia, Ghawar. We use data from different sources to estimate the oil production cost function and the revenue function. We pay particular attention to the dynamic aspects of oil production by employing petroleum-engineering software to simulate the interaction between control variables and reservoir state variables. Optimal solutions are studied under different scenarios to account for possible changes in the exogenous variables and the uncertainty about the forecasts. The third essay examines the effect of oil price volatility on the level of innovation displayed by the U.S. economy. A measure of innovation is calculated by decomposing an output-based Malmquist index. We also construct a nonparametric measure for oil price volatility. Technical change and oil price volatility are then placed in a VAR system with oil price and a variable indicative of monetary policy. The system is estimated and analyzed for significant relationships. We find that oil price volatility displays a significant
Quarles, C Derrick; Carado, Anthony J; Barinaga, Charles J; Koppenaal, David W; Marcus, R Kenneth
2012-01-01
A new, low-power ionization source for the elemental analysis of aqueous solutions has recently been described. The liquid sampling-atmospheric pressure glow discharge (LS-APGD) source operates at relatively low currents (<20 mA) and solution flow rates (<50 μL min⁻¹), yielding a relatively simple alternative for atomic mass spectrometry applications. The LS-APGD has been interfaced to what is otherwise an organic LC-MS mass analyzer, the Thermo Scientific Exactive Orbitrap, without any modifications other than removing the electrospray ionization source supplied with that instrument. A glow discharge is initiated between the surface of the test solution exiting a glass capillary and a metallic counter electrode mounted at a 90° angle and separated by a distance of ~5 mm. As with any plasma-based ionization source, there are key discharge operation and ion sampling parameters that affect the intensity and composition of the derived mass spectra, including signal-to-background ratios. We describe here a preliminary parametric evaluation of the roles of discharge current, solution flow rate, argon sheath gas flow rate, and ion sampling distance as they apply on this mass analyzer system. A cursory evaluation of potential matrix effects due to the presence of easily ionized elements indicates that sodium concentrations of up to 50 μg mL⁻¹ generally cause suppressions of less than 50%, dependent upon the analyte species. Based on the results of this series of studies, preliminary limits of detection (LOD) have been established through the generation of calibration functions. While solution-based concentration LOD levels of 0.02-2 μg mL⁻¹ are not impressive on the surface, the fact that they are determined via discrete 5 μL injections leads to mass-based detection limits at picogram to single-nanogram levels. The overhead costs associated with source operation (10 W d.c. power, solution flow rates of <50 μL min⁻¹, and gas flow rates <10 mL min⁻¹) are
NASA Technical Reports Server (NTRS)
Hou, A. Y.
1984-01-01
A simple mechanistic model of a zonally averaged circulation forced by heat and momentum sources is developed and applied to the Venus atmosphere in the light of recent data. Basic equations for a steady-state axisymmetric circulation are discussed, and the parametric dependence of a nearly inviscid Hadley circulation in the absence of eddy forcing is examined and extended to a wide range of thermal Rossby numbers. The effect of diffusion is considered and found to be small for the Venus cloud region. The zonally averaged eddy sources and sinks required to support the zonal superrotation on Venus are determined.
Lacitignola, Deborah; Saccomandi, Giuseppe
2014-03-01
We consider a simple mesoscopic model of DNA in which the binding of the RNA polymerase enzyme molecule to the promoter sequence of the DNA is included through a substrate energy term modeling the enzymatic interaction with the DNA strands. We focus on the differential system for solitary waves and derive conditions, in terms of the model parameters, for the occurrence of the parametric resonance phenomenon. We find that what truly matters for parametric resonance is not the ratio between the strength of the stacking and the inter-strand forces but the ratio between the substrate and the inter-strand forces. On the basis of these results, the standard objection that longitudinal motion is negligible because it is of second order appears to fail, suggesting that studies involving the longitudinal degree of freedom in DNA should be reconsidered when the interaction of the RNA polymerase with the DNA macromolecule is not neglected.
Jaspers, Stijn; Verbeke, Geert; Böhning, Dankmar; Aerts, Marc
2016-01-01
In the last decades, considerable attention has been paid to the collection of antimicrobial resistance data, with the aim of monitoring non-wild-type isolates. This monitoring is performed based on minimum inhibitory concentration (MIC) values, which are collected through dilution experiments. We present a semi-parametric mixture model to estimate the entire MIC density on the continuous scale. The parametric first component is extended with a non-parametric second component, and a new back-fitting algorithm, based on the Vertex Exchange Method, is proposed. Our data example shows how to estimate the MIC density for Escherichia coli tested for ampicillin and how to use this estimate for model-based classification. A simulation study was performed, showing the promising behavior of the new method, both in terms of density estimation and classification.
Parametric studies of magnetic-optic imaging using finite-element models
NASA Astrophysics Data System (ADS)
Chao, C.; Udpa, L.; Xuan, L.; Fitzpatrick, G.; Thorne, D.; Shih, W.
2000-05-01
Magneto-optic imaging is a relatively new sensor application of bubble memory technology to NDI. The Magneto-Optic Imager (MOI) uses a magneto-optic (MO) sensor to produce analog images of magnetic flux leakage from surface and subsurface defects. The flux leakage is produced by eddy-current induction techniques in nonferrous metals, and magnetic yokes are used in ferromagnetic materials. The technique has gained acceptance in the aircraft maintenance industry for detecting surface-breaking cracks and corrosion. Until recently, much of the MOI development has been empirical in nature, since the electromagnetic processes that produce the images are rather complex. The availability of finite element techniques to numerically solve Maxwell's equations, in conjunction with MOI observations, allows greater understanding of the capabilities of the instrument. In this paper, we present a systematic set of finite element calculations, along with MOI measurements on specific defects, to quantify the current capability of the MOI as well as its desired performance. Parametric studies, including the effects of liftoff and the proximity of edges, are also presented. This material is based upon work supported by the Federal Aviation Administration under Contract #DTFA03-98-D-00008, Delivery Order #IA013, and performed at Iowa State University's Center for NDE as part of the Center for Aviation Systems Reliability program.
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation, and variance energy) was evaluated as a function of the sampling rate of the EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating performance were high statistical robustness (as determined through the sensitivity and specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance, detecting seizures with a high optimality index value and high sensitivity and specificity.
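The evaluation statistics named above, sensitivity, specificity, and detection lag, can be computed from window-level labels as sketched below. The windowing scheme and example labels are invented, and the paper's optimality index is a specific combination of these quantities that is not reproduced here.

```python
import numpy as np

def detection_stats(seizure, detected):
    """Sensitivity and specificity from boolean per-window labels."""
    seizure, detected = np.asarray(seizure), np.asarray(detected)
    tp = np.sum(seizure & detected)       # seizure windows correctly flagged
    tn = np.sum(~seizure & ~detected)     # non-seizure windows correctly passed
    fp = np.sum(~seizure & detected)
    fn = np.sum(seizure & ~detected)
    return tp / (tp + fn), tn / (tn + fp)

# Invented example: one seizure spans windows 40-49; detector fires at window 42
seizure = np.zeros(100, dtype=bool); seizure[40:50] = True
detected = np.zeros(100, dtype=bool); detected[42:50] = True
sens, spec = detection_stats(seizure, detected)
lag = int(detected.argmax() - seizure.argmax())   # windows from onset to first alarm
print(sens, spec, lag)  # 0.8 1.0 2
```

An overall index would trade these off against each other: a detector can always buy sensitivity with false alarms, so closed-loop intervention needs all three quantities jointly.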
Ding, Li; Mariñas, Benito J; Schideman, Lance C; Snoeyink, Vernon L; Li, Qilin
2006-01-01
Natural organic matter (NOM) hinders adsorption of trace organic compounds on powdered activated carbon (PAC) via two dominant mechanisms: direct site competition and pore blockage. COMPSORB, a three-component model that incorporates these two competitive mechanisms, was developed in a previous study to describe the removal of trace contaminants in continuous-flow hybrid PAC adsorption/membrane filtration systems. Synthetic solutions containing two model compounds as surrogates for NOM were used in the original study to elucidate competitive effects and to verify the model. In the present study, a quantitative method to characterize the components of NOM that are responsible for competitive adsorption effects in natural water was developed to extend the application of COMPSORB to natural water systems. Using batch adsorption data, NOM was differentiated into two fictive fractions, representing the strongly competing and pore blocking components, and each was treated as a single compound. The equilibrium and kinetic parameters for these fictive compounds were calculated using simplified adsorption models. This parametrization procedure was carried out on two different natural waters, and the model was verified with experimental data obtained for atrazine removal from natural water in a PAC/membrane system. The model predicted the system performance reasonably well and highlighted the importance of considering both direct site competition and pore blockage effects of NOM in modeling these systems.
Chemical modelling of molecular sources
NASA Astrophysics Data System (ADS)
Nejad, L. A. M.; Millar, T. J.
1987-09-01
The authors present detailed results of a chemical kinetic model of the outer envelope (10¹⁶ cm to 10¹⁸ cm) of the carbon-rich star IRC +10216. The chemistry is driven by a combination of cosmic-ray ionization and ultraviolet radiation and, starting from 7 parent molecules injected into the envelope, the authors find that a complex chemistry ensues. Ion-molecule reactions can efficiently build hydrocarbon species and account for the observed abundances of CH3CN and HNC. Reactions involving CO may lead to observable abundances of oxygen-bearing molecules such as C3O, CH2CO and HCO+.
The Commercial Open Source Business Model
NASA Astrophysics Data System (ADS)
Riehle, Dirk
Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: How can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.
Fjodorova, Natalja; Novič, Marjana
2015-09-01
Engineering optimization is a pressing goal in manufacturing and service industries. In this tutorial we present the concept of traditional parametric estimation models (Factorial Design (FD) and Central Composite Design (CCD)) for finding optimal settings of technological process parameters. We then describe the 2D mapping method based on auto-associative neural networks (ANN), in particular the Feed-Forward Bottleneck Neural Network (FFBN NN), in comparison with the traditional methods. The FFBN NN mapping technique enables visualization of all optimal solutions of the considered processes by projecting both input and output parameters onto the same coordinates of a 2D map, which supports a more efficient way of improving the performance of existing systems. The two methods were compared on the optimization of solder paste printing processes and on the optimization of cheese properties. Applying both methods enables a double check, which increases the reliability of the selected optima or specification limits. PMID:26388367
Momentum structure of the self-energy and its parametrization for the two-dimensional Hubbard model
NASA Astrophysics Data System (ADS)
Pudleiner, P.; Schäfer, T.; Rost, D.; Li, G.; Held, K.; Blümer, N.
2016-05-01
We compute the self-energy for the half-filled Hubbard model on a square lattice using lattice quantum Monte Carlo simulations and the dynamical vertex approximation. The self-energy is strongly momentum-dependent, but it can be parametrized via the noninteracting energy-momentum dispersion ɛk, except for pseudogap features right at the Fermi edge. That is, it can be written as Σ (ɛk,ω ) , with two energylike parameters (ɛ , ω ) instead of three (kx, ky, and ω ). The self-energy has two rather broad and weakly dispersing high-energy features and a sharp ω =ɛk feature at high temperatures, which turns to ω =-ɛk at low temperatures. Altogether this yields a Z - and reversed-Z -like structure, respectively, for the imaginary part of Σ (ɛk,ω ) . We attribute the change of the low-energy structure to antiferromagnetic spin fluctuations.
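The parametrization claim above, that Σ depends on momentum only through ε_k, can be checked numerically by binning self-energy values by their dispersion energy: k-points with the same ε_k should yield the same Σ. A minimal sketch with a mock self-energy (an arbitrary smooth function standing in for QMC data, not the paper's result):

```python
import math
from collections import defaultdict

t = 1.0  # hopping amplitude (sets the energy scale)

def eps(kx, ky):
    # Square-lattice tight-binding dispersion.
    return -2.0 * t * (math.cos(kx) + math.cos(ky))

def sigma_mock(kx, ky):
    # Mock "self-energy" that depends on k only via eps_k.
    e = eps(kx, ky)
    return 0.5 * e / (e * e + 1.0)

# Collect Sigma values from the Brillouin zone, grouped by eps_k.
N = 32
bins = defaultdict(list)
for i in range(N):
    for j in range(N):
        kx, ky = 2 * math.pi * i / N, 2 * math.pi * j / N
        bins[round(eps(kx, ky), 6)].append(sigma_mock(kx, ky))

# If Sigma(k) = Sigma(eps_k), values within each bin collapse to one number.
spread = max(max(v) - min(v) for v in bins.values())
print(spread)
```

Applied to actual Σ(kx, ky, ω) data, a small within-bin spread (outside pseudogap features at the Fermi edge) is what justifies the reduction from three arguments to two.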
Bim from Laser SCANS… not Just for Buildings: Nurbs-Based Parametric Modeling of a Medieval Bridge
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.; Brumana, R.; Previtali, M.; Roncoroni, F.
2016-06-01
Building Information Modelling is not limited to buildings. BIM technology includes civil infrastructures such as roads, dams, bridges, communications networks, water and wastewater networks and tunnels. This paper describes a novel methodology for the generation of a detailed BIM of a complex medieval bridge. The use of laser scans and images coupled with the development of algorithms able to handle irregular shapes allowed the creation of advanced parametric objects, which were assembled to obtain an accurate BIM. The lack of existing object libraries required the development of specific families for the different structural elements of the bridge. Finally, some applications aimed at assessing the stability and safety of the bridge are illustrated and discussed. The BIM of the bridge can incorporate this information towards a new "BIMonitoring" concept to preserve the geometric complexity provided by point clouds, obtaining a detailed BIM with object relationships and attributes.
Vertical transport and sources in flux models
Canavan, G.H.
1997-01-01
Vertical transport in flux models is examined and shown to reproduce expected limits for densities and fluxes. Disparities with catalog distributions are derived and inverted to find the sources required to rectify them.
The impact of parametrized convection on cloud feedback.
Webb, Mark J; Lock, Adrian P; Bretherton, Christopher S; Bony, Sandrine; Cole, Jason N S; Idelkadi, Abderrahmane; Kang, Sarah M; Koshiro, Tsuyoshi; Kawai, Hideaki; Ogura, Tomoo; Roehrig, Romain; Shin, Yechul; Mauritsen, Thorsten; Sherwood, Steven C; Vial, Jessica; Watanabe, Masahiro; Woelfle, Matthew D; Zhao, Ming
2015-11-13
We investigate the sensitivity of cloud feedbacks to the use of convective parametrizations by repeating the CMIP5/CFMIP-2 AMIP/AMIP + 4K uniform sea surface temperature perturbation experiments with 10 climate models which have had their convective parametrizations turned off. Previous studies have suggested that differences between parametrized convection schemes are a leading source of inter-model spread in cloud feedbacks. We find however that 'ConvOff' models with convection switched off have a similar overall range of cloud feedbacks compared with the standard configurations. Furthermore, applying a simple bias correction method to allow for differences in present-day global cloud radiative effects substantially reduces the differences between the cloud feedbacks with and without parametrized convection in the individual models. We conclude that, while parametrized convection influences the strength of the cloud feedbacks substantially in some models, other processes must also contribute substantially to the overall inter-model spread. The positive shortwave cloud feedbacks seen in the models in subtropical regimes associated with shallow clouds are still present in the ConvOff experiments. Inter-model spread in shortwave cloud feedback increases slightly in regimes associated with trade cumulus in the ConvOff experiments but is quite similar in the most stable subtropical regimes associated with stratocumulus clouds. Inter-model spread in longwave cloud feedbacks in strongly precipitating regions of the tropics is substantially reduced in the ConvOff experiments however, indicating a considerable local contribution from differences in the details of convective parametrizations. In both standard and ConvOff experiments, models with less mid-level cloud and less moist static energy near the top of the boundary layer tend to have more positive tropical cloud feedbacks. The role of non-convective processes in contributing to inter-model spread in cloud feedback
Tagging Water Sources in Atmospheric Models
NASA Technical Reports Server (NTRS)
Bosilovich, M.
2003-01-01
Tagging of water sources in atmospheric models allows for quantitative diagnostics of how water is transported from its source region to its sink region. In this presentation, we review how this methodology is applied to global atmospheric models. We will present several applications of the methodology. In one example, the regional sources of water for the North American Monsoon system are evaluated by tagging the surface evaporation. In another example, the tagged water is used to quantify the global water cycling rate and residence time. We will also discuss the need for more research and the importance of these diagnostics in water cycle studies.
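The bookkeeping behind water tagging can be sketched with a single well-mixed box: evaporation from each source region enters as a separate tag, and precipitation removes water from each tag in proportion to its share of the total. The fluxes and residence time below are illustrative placeholders, not values from any model run:

```python
# Tagged water balance in one well-mixed atmospheric box.
E = [3.0, 1.0]        # evaporation from source regions A and B [mm/day]
tau = 9.0             # bulk residence time [days], assumed
W = [0.0, 0.0]        # tagged precipitable water [mm]
dt = 0.1              # time step [days]

for _ in range(2000):  # integrate to steady state (200 days)
    total = sum(W)
    P = total / tau    # total precipitation [mm/day]
    for k in range(2):
        share = W[k] / total if total > 0 else 0.0
        W[k] += (E[k] - P * share) * dt   # source in, proportional sink out

frac_A = W[0] / sum(W)
print(frac_A, sum(W))
```

At steady state the tagged share equals the evaporation share, E_A/(E_A+E_B) = 0.75 here, and total water approaches tau times total evaporation (36 mm), which is exactly the residence-time diagnostic the abstract mentions.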
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors with long-term goals are sensitive to the stock price and its changes and react to them. In this study, we therefore used the multivariate adaptive regression splines (MARS) model and a semi-parametric smoothing-splines technique for predicting stock prices. MARS is a nonparametric, adaptive regression method well suited to high-dimensional problems with many variables; smoothing splines is likewise a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) to predict stock prices with each technique. With the MARS model, 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) were selected as influential for predicting stock prices. After fitting the semi-parametric splines, a different set of 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) was selected as effective in forecasting stock prices.
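What makes MARS "adaptive" is its piecewise-linear hinge basis, max(0, x - c), with knots c chosen greedily to reduce squared error. A minimal sketch of one forward step on synthetic data with a known kink (not the 40-variable study above):

```python
# One forward step of a MARS-style search: try each candidate knot c,
# regress y on the single hinge feature max(0, x - c), keep the best.
def fit_hinge(xs, ys):
    best = None
    for c in xs:                                   # candidate knots
        h = [max(0.0, x - c) for x in xs]          # hinge feature
        n = len(xs)
        mh, my = sum(h) / n, sum(ys) / n
        shh = sum((v - mh) ** 2 for v in h)
        if shh == 0:
            continue                               # degenerate knot
        b = sum((v - mh) * (y - my) for v, y in zip(h, ys)) / shh
        a = my - b * mh
        sse = sum((a + b * v - y) ** 2 for v, y in zip(h, ys))
        if best is None or sse < best[0]:
            best = (sse, c, a, b)
    return best

xs = [i / 10 for i in range(21)]                    # x in [0, 2]
ys = [0.0 if x < 1 else 2.0 * (x - 1) for x in xs]  # true kink at x = 1
sse, knot, a, b = fit_hinge(xs, ys)
print(knot, b)
```

The search recovers the knot at x = 1 with slope 2. The full algorithm repeats this step, adding hinge pairs and interaction terms, then prunes them by generalized cross-validation.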
García-Betances, Rebeca I; Cabrera-Umpiérrez, María Fernanda; Ottaviano, Manuel; Pastorino, Matteo; Arredondo, María T
2016-01-01
Despite the rapid evolution of Information and Computer Technology (ICT), and the growing recognition of the importance of the concept of universal design in all domains of daily living, mainstream ICT-based product designers and developers still work without any truly structured tools, guidance or support to effectively adapt their products and services to users' real needs. This paper presents the approach used to define and evaluate parametric cognitive models that describe interaction and usage of ICT by people with aging- and disability-derived functional impairments. A multisensorial training platform was used to train, based on real user measurements in real conditions, the virtual parameterized user models that act as subjects of the test-bed during all stages of simulated disabilities-friendly ICT-based products design. An analytical study was carried out to identify the relevant cognitive functions involved, together with their corresponding parameters as related to aging- and disability-derived functional impairments. Evaluation of the final cognitive virtual user models in a real application has confirmed that the use of these models produces concrete, valuable benefits to the design and testing process of accessible ICT-based applications and services. Parameterization of cognitive virtual user models allows incorporating cognitive and perceptual aspects during the design process. PMID:26907296
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; Fleming, Paul A.; Ruben, S. D.; Marden, J. R.; Pao, L. Y.
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
Characterization and modeling of the heat source
Glickstein, S.S.; Friedman, E.
1993-10-01
A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. Understandings of both the physics and the mathematical formulation of these sources are essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
Sheaffer, Jonathan; van Walstijn, Maarten; Fazenda, Bruno
2014-01-01
In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation, by first reviewing associated aims and constraints, and evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters, respectively, characterizing the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are employed for comparison with existing source formulations in terms of meeting the spectral and temporal requirements on the outward propagating wave.
Source characterization refinements for routine modeling applications
NASA Astrophysics Data System (ADS)
Paine, Robert; Warren, Laura L.; Moore, Gary E.
2016-03-01
Steady-state dispersion models recommended by various environmental agencies worldwide have generally been evaluated with traditional stack release databases, including tracer studies. The sources associated with these field data are generally those with isolated stacks or release points under relatively ideal conditions. Many modeling applications, however, involve sources that act to modify the local dispersion environment as well as the conditions associated with plume buoyancy and final plume rise. The source characterizations affecting plume rise that are introduced and discussed in this paper include: 1) sources with large fugitive heat releases that result in a local urbanized effect, 2) stacks on or near individual buildings with large fugitive heat releases that tend to result in buoyant "liftoff" effects counteracting aerodynamic downwash effects, 3) stacks with considerable moisture content, which leads to additional heat of condensation during plume rise - an effect that is not considered by most dispersion models, and 4) stacks in a line that result in at least partial plume merging and buoyancy enhancement under certain conditions. One or more of these effects are appropriate for a given modeling application. We present examples of specific applications for one or more of these procedures in the paper. This paper describes methods to introduce the four source characterization approaches to more accurately simulate plume rise to a variety of dispersion models. The authors have focused upon applying these methods to the AERMOD modeling system, which is the United States Environmental Protection Agency's preferred model in addition to being used internationally, but the techniques are applicable to dispersion models worldwide. While the methods could be installed directly into specific models such as AERMOD, the advantage of implementing them outside the model is to allow them to be applicable to numerous models immediately and also to allow them to
NASA Astrophysics Data System (ADS)
Brauer, Claudia; Torfs, Paul; Teuling, Ryan; Uijlenhoet, Remko
2014-05-01
We present the Wageningen Lowland Runoff Simulator (WALRUS), a novel rainfall-runoff model to fill the gap between complex, spatially distributed models for lowland catchments and simple, parametric models for mountainous catchments. From observations and experience from two Dutch field sites (the Hupsel Brook catchment and the Cabauw polder), we identified key processes for runoff generation in lowland catchments and important feedbacks between components in the hydrological system. We used this knowledge to design a parametric model which can be used all over the world in both freely draining lowland catchments and polders with controlled water levels. While using only four parameters which require calibration, WALRUS explicitly accounts for processes that are important in lowland areas: (1) Groundwater-unsaturated zone coupling: WALRUS contains one soil reservoir, which is divided effectively by the (dynamic) groundwater table into a groundwater zone and a vadose zone. The condition of this soil reservoir is described by two strongly dependent variables: the groundwater depth and the storage deficit (the effective thickness of empty pores). This implementation enables capillary rise when the top soil has dried through evapotranspiration. (2) Wetness-dependent flowroutes: The storage deficit determines the division of rain water between the soil reservoir (slow routes: infiltration, percolation and groundwater flow) and a quickflow reservoir (quick routes: drainpipe, macropore and overland flow). (3) Groundwater-surface water feedbacks: Surface water forms an explicit part of the model structure. Drainage depends on the difference between surface water level and groundwater level (rather than groundwater level alone), allowing for feedbacks and infiltration of surface water into the soil. (4) Seepage and surface water supply: Groundwater seepage and surface water supply or extraction (pumping) are added to or subtracted from the soil or surface water reservoir
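The wetness-dependent flow-route idea, point (2) above, can be sketched with a two-reservoir water balance in which the share of rain taking the quick route grows as the storage deficit shrinks. The reservoirs, rates, and linear split below are illustrative assumptions, not WALRUS's actual equations:

```python
# Toy lowland water balance: a soil reservoir tracked by its storage
# deficit, plus a quickflow reservoir; rain is split by wetness.
cap = 100.0                    # maximum storage deficit [mm], assumed
deficit = 80.0                 # current storage deficit [mm] (dry start)
quick = 0.0                    # quickflow reservoir [mm]
k_quick, k_slow = 0.5, 0.02    # linear outflow rates [1/day], assumed
discharge = 0.0

for rain in [0.0, 5.0, 20.0, 20.0, 5.0, 0.0, 0.0, 0.0]:  # daily rain [mm]
    wetness = 1.0 - deficit / cap        # 0 = dry soil, 1 = saturated
    to_quick = wetness * rain            # quick routes (drainpipe, overland)
    to_soil = rain - to_quick            # infiltration into the soil
    deficit = max(0.0, deficit - to_soil)
    qf = k_quick * quick                 # quickflow discharge
    bf = k_slow * (cap - deficit)        # groundwater outflow (toy)
    quick += to_quick - qf
    deficit = min(cap, deficit + bf)     # drainage re-creates deficit
    discharge += qf + bf

print(discharge, deficit, quick)
```

Because every flux either leaves as discharge or moves between the two stores, the sketch closes its water balance exactly, which is the basic property any such parametric model must preserve.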
NASA Astrophysics Data System (ADS)
Luo, Zhiwen; Li, Yuguo
2011-10-01
This paper reports the results of a parametric CFD study on idealized city models to investigate the potential of slope flow in ventilating a city located in a mountainous region when the background synoptic wind is absent. Examples of such a city include Tokyo in Japan, Los Angeles and Phoenix in the US, and Hong Kong. Two types of buoyancy-driven flow are considered, i.e., slope flow from the mountain slope (katabatic wind at night and anabatic wind in the daytime), and wall flow due to heated/cooled urban surfaces. The combined buoyancy-driven flow system can serve the purpose of dispersing the accumulated urban air pollutants when the background wind is weak or absent. The microscopic picture of ventilation performance within the urban structures was evaluated in terms of air change rate (ACH) and age of air. The simulation results reveal that the slope flow plays an important role in ventilating the urban area, especially in calm conditions. Katabatic flow at night is conducive to mitigating the nocturnal urban heat island. In the present parametric study, the mountain slope angle and mountain height are assumed to be constant, and the changing variables are heating/cooling intensity and building height. For a typical mountain of 500 m inclined at an angle of 20° to the horizontal level, the interactive structure is very much dependent on the ratio of heating/cooling intensity as well as building height. When the building is lower than 60 m, the slope wind dominates. When the building is as high as 100 m, the contribution from the urban wall flow cannot be ignored. It is found that katabatic wind can be very beneficial to the thermal environment as well as air quality at the pedestrian level. The air change rate for the pedestrian volume can be as high as 300 ACH.
ON THE ROBUSTNESS OF z = 0-1 GALAXY SIZE MEASUREMENTS THROUGH MODEL AND NON-PARAMETRIC FITS
Mosleh, Moein; Franx, Marijn; Williams, Rik J.
2013-11-10
We present the size-stellar mass relations of nearby (z = 0.01-0.02) Sloan Digital Sky Survey galaxies, for samples selected by color, morphology, Sérsic index n, and specific star formation rate. Several commonly employed size measurement techniques are used, including single Sérsic fits, two-component Sérsic models, and a non-parametric method. Through simple simulations, we show that the non-parametric and two-component Sérsic methods provide the most robust effective radius measurements, while those based on single Sérsic profiles are often overestimates, especially for massive red/early-type galaxies. Using our robust sizes, we show for all sub-samples that the mass-size relations are shallow at low stellar masses and steepen above ∼3-4 × 10^10 M_☉. The mass-size relations for galaxies classified as late-type, low-n, and star-forming are consistent with each other, while blue galaxies follow a somewhat steeper relation. The mass-size relations of early-type, high-n, red, and quiescent galaxies all agree with each other but are somewhat steeper at the high-mass end than previous results. To test potential systematics at high redshift, we artificially redshifted our sample (including surface brightness dimming and degraded resolution) to z = 1 and re-fit the galaxies using single Sérsic profiles. The sizes of these galaxies before and after redshifting are consistent and we conclude that systematic effects in sizes and the size-mass relation at z ∼ 1 are negligible. Interestingly, since the poorer physical resolution at high redshift washes out bright galaxy substructures, single Sérsic fitting appears to provide more reliable and unbiased effective radius measurements at high z than for nearby, well-resolved galaxies.
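The effective radius being measured above is defined by the Sérsic profile I(r) = I_e exp(-b_n[(r/r_e)^(1/n) - 1]), where b_n is fixed so that r_e encloses half the total light; b_n ≈ 2n - 1/3 is the standard approximation. A quadrature sketch verifying this for n = 4 (a de Vaucouleurs profile):

```python
import math

# Half-light radius of a Sersic n = 4 profile, checked numerically.
n = 4.0
b = 2.0 * n - 1.0 / 3.0        # standard approximation for b_n

# In the substitution u = b * (r/r_e)^(1/n), the enclosed luminosity
# is proportional to the incomplete gamma integral of u^(2n-1) e^-u.
du, u, cum = 1e-3, 0.0, 0.0
total = math.gamma(2.0 * n)    # full integral, Gamma(2n)
half_u = None
while u < 60.0:                # upper limit where the tail is negligible
    cum += (u + du / 2) ** (2 * n - 1) * math.exp(-(u + du / 2)) * du
    u += du
    if half_u is None and cum >= total / 2:
        half_u = u

r_half = (half_u / b) ** n     # half-light radius in units of r_e
print(r_half)
```

The half-light radius comes out within a fraction of a percent of r_e, confirming the approximation for b_n; fitting codes invert this same profile against galaxy images to report r_e and n.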
Phenomenological Modeling of Infrared Sources: Recent Advances
NASA Technical Reports Server (NTRS)
Leung, Chun Ming; Kwok, Sun (Editor)
1993-01-01
Infrared observations from planned space facilities (e.g., ISO (Infrared Space Observatory), SIRTF (Space Infrared Telescope Facility)) will yield a large and uniform sample of high-quality data from both photometric and spectroscopic measurements. To maximize the scientific returns of these space missions, complementary theoretical studies must be undertaken to interpret these observations. A crucial step in such studies is the construction of phenomenological models in which we parameterize the observed radiation characteristics in terms of the physical source properties. In the last decade, models with increasing degree of physical realism (in terms of grain properties, physical processes, and source geometry) have been constructed for infrared sources. Here we review current capabilities available in the phenomenological modeling of infrared sources and discuss briefly directions for future research in this area.
Edwin A. Harvego; Michael G. McKellar; James E. O'Brien; J. Stephen Herring
2009-09-01
High Temperature Electrolysis (HTE), when coupled to an advanced nuclear reactor capable of operating at reactor outlet temperatures of 800 °C to 950 °C, has the potential to efficiently produce the large quantities of hydrogen needed to meet future energy and transportation needs. To evaluate the potential benefits of nuclear-driven hydrogen production, the UniSim process analysis software was used to evaluate different reactor concepts coupled to a reference HTE process design concept. The reference HTE concept included an Intermediate Heat Exchanger and intermediate helium loop to separate the reactor primary system from the HTE process loops and additional heat exchangers to transfer reactor heat from the intermediate loop to the HTE process loops. The two process loops consisted of the water/steam loop feeding the cathode side of a HTE electrolysis stack, and the sweep gas loop used to remove oxygen from the anode side. The UniSim model of the process loops included pumps to circulate the working fluids and heat exchangers to recover heat from the oxygen and hydrogen product streams to improve the overall hydrogen production efficiencies. The reference HTE process loop model was coupled to separate UniSim models developed for three different advanced reactor concepts (a high-temperature helium cooled reactor concept and two different supercritical CO2 reactor concepts). Sensitivity studies were then performed to evaluate the effect of reactor outlet temperature on the power cycle efficiency and overall hydrogen production efficiency for each of the reactor power cycles. The results of these sensitivity studies showed that overall power cycle and hydrogen production efficiencies increased with reactor outlet temperature, but the power cycles producing the highest efficiencies varied depending on the temperature range considered.
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar
2016-04-01
A wealth of X-ray and radio observations has revealed in the past decade a growing diversity of neutron stars (NSs) with properties spanning orders of magnitude in magnetic field strength and ages, and with emission processes explained by a range of mechanisms dictating their radiation properties. However, serious difficulties exist with the magneto-dipole model of isolated NS fields and their inferred ages, such as a large range of observed braking indices (n, with values often <3) and a mismatch between the NS and associated supernova remnant (SNR) ages. This problem arises primarily from the assumptions of a constant magnetic field with n = 3, and an initial spin period that is much smaller than the observed current period. It has been suggested that a solution to this problem involves magnetic field evolution, with some NSs having magnetic fields buried within the crust by accretion of fall-back supernova material following their birth. In this work, we explore a parametric phenomenological model for magnetic field growth that generalizes previous suggested field evolution functions, and apply it to a variety of NSs with both secure SNR associations and known ages. We explore the flexibility of the model by recovering the results of previous work on buried magnetic fields in young NSs. Our model fits suggest that apparently disparate classes of NSs may be related to one another through the time evolution of the magnetic field.
NASA Astrophysics Data System (ADS)
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2016-08-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0 , is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0 ) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0 . We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
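As a rough sketch of the moment-based modelling discussed above, the snippet below computes the first topographic moments of a synthetic height field and applies a Flack-and-Schultz-type correlation. The gamma-distributed surface and the coefficients (4.43, 1.37, and the fully rough factor of 30) are commonly quoted illustrative values, treated here as assumptions rather than results of this study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic urban-like height field (placeholder for a real elevation map).
h = rng.gamma(shape=2.0, scale=1.5, size=(256, 256))

dh = h - h.mean()
sigma = dh.std()                        # standard deviation of heights
skew = (dh ** 3).mean() / sigma ** 3    # skewness
kurt = (dh ** 4).mean() / sigma ** 4    # kurtosis (found negligible above)

# Flack & Schultz (2010)-type correlation: equivalent sand-grain roughness
# from rms height and skewness; coefficients below are assumptions.
a, b = 4.43, 1.37
ks = a * sigma * (1.0 + skew) ** b
z0 = ks / 30.0   # fully rough flow approximation

print(f"sigma={sigma:.3f}  Sk={skew:.3f}  Ku={kurt:.3f}  z0={z0:.3f}")
```

The study's finding that the skewness coefficient is not constant would correspond to making `b` (or `a`) a function of `sigma` over certain ranges.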
NASA Astrophysics Data System (ADS)
Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.
2014-06-01
This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
NASA Astrophysics Data System (ADS)
Meneguz, Elena; Thomson, David; Witham, Claire; Kusmierczyk-Michulec, Jolanta
2015-04-01
NAME is a Lagrangian atmospheric dispersion model used by the Met Office to predict the dispersion of both natural and man-made contaminants in the atmosphere, e.g. volcanic ash, radioactive particles and chemical species. Atmospheric convection is responsible for transport and mixing of air resulting in a large exchange of heat and energy above the boundary layer. Although convection can transport material through the whole troposphere, convective clouds have a small horizontal length scale (of the order of a few kilometres). Therefore, for large-scale transport the horizontal scale on which the convection exists is below the global NWP resolution used as input to NAME and convection must be parametrized. Prior to the work presented here, the enhanced vertical mixing generated by non-resolved convection was reproduced by randomly redistributing Lagrangian particles between the cloud base and cloud top with probability equal to 1/25th of the NWP predicted convective cloud fraction. Such a scheme is essentially diffusive and it does not make optimal use of all the information provided by the driving meteorological model. To make up for these shortcomings and make the parametrization more physically based, the convection scheme has recently been revised. The resulting version, presented in this paper, is now based on the balance equation between upward, entrainment and detrainment fluxes. In particular, upward mass fluxes are calculated with empirical formulas derived from Cloud Resolving Models and using the NWP convective precipitation diagnostic as closure. The fluxes are used to estimate how many particles entrain, move upward and detrain. Lastly, the scheme is completed by applying a compensating subsidence flux. The performance of the updated convection scheme is benchmarked against available observational data of passive tracers. In particular, radioxenon is a noble gas that can undergo significant long range transport: this study makes use of observations of
SWQM: Source Water Quality Modeling Software
2008-01-08
The Source Water Quality Modeling software (SWQM) simulates the water quality conditions that reflect properties of water generated by water treatment facilities. SWQM consists of a set of Matlab scripts that model the statistical variation that is expected in a water treatment facility's water, such as pH and chlorine levels.
Meyer, Swen; Blaschek, Michael; Duttmann, Rainer; Ludwig, Ralf
2016-02-01
According to current climate projections, Mediterranean countries are at high risk for an even more pronounced susceptibility to changes in the hydrological budget and extremes. These changes are expected to have severe direct impacts on the management of water resources, agricultural productivity and drinking water supply. Current projections of future hydrological change, based on regional climate model results and subsequent hydrological modeling schemes, are very uncertain and poorly validated. The Rio Mannu di San Sperate Basin, located in Sardinia, Italy, is one test site of the CLIMB project. The Water Simulation Model (WaSiM) was set up to model current and future hydrological conditions. The availability of measured meteorological and hydrological data is poor, as is common for many Mediterranean catchments. In this study we conducted a soil sampling campaign in the Rio Mannu catchment. We tested different deterministic and hybrid geostatistical interpolation methods on soil textures and tested the performance of the applied models. We calculated a new soil texture map based on the best prediction method. The soil model in WaSiM was set up with the improved new soil information. The simulation results were compared to a standard soil parametrization. WaSiM was validated against spatial evapotranspiration rates using the triangle method (Jiang and Islam, 1999). WaSiM was driven with the meteorological forcing taken from 4 different ENSEMBLES climate projections for a reference (1971-2000) and a future (2041-2070) time series. The climate change impact was assessed based on differences between reference and future time series. The simulated results show a reduction of all hydrological quantities in the future in the spring season. Furthermore, simulation results reveal an earlier onset of dry conditions in the catchment. We show that a solid soil model setup based on short-term field measurements can improve long-term modeling results, which is especially important
Stimulated parametric emission microscopy
NASA Astrophysics Data System (ADS)
Isobe, Keisuke; Kataoka, Shogo; Murase, Rena; Watanabe, Wataru; Higashi, Tsunehito; Kawakami, Shigeki; Matsunaga, Sachihiro; Fukui, Kiichi; Itoh, Kazuyoshi
2006-01-01
We propose a novel microscopy technique based on the four-wave mixing (FWM) process that is enhanced by two-photon electronic resonance induced by a pump pulse along with stimulated emission induced by a dump pulse. A Ti:sapphire laser and an optical parametric oscillator are used as light sources for the pump and dump pulses, respectively. We demonstrate that our proposed FWM technique can be used to obtain a one-dimensional image of ethanol-thinned Coumarin 120 solution sandwiched between a hole-slide glass and a cover slip, and a two-dimensional image of a leaf of Camellia sinensis.
Mohseny, Maryam; Amanpour, Farzaneh; Mosavi-Jarrahi, Alireza; Jafari, Hossein; Moradi-Joo, Mohammad; Davoudi Monfared, Esmat
2016-01-01
Breast cancer is one of the most common causes of cancer mortality in Iran. Social determinants of health are among the key factors affecting the pathogenesis of diseases. This cross-sectional study aimed to determine the social determinants of breast cancer survival time with parametric and semi-parametric regression models. It was conducted on male and female patients diagnosed with breast cancer presenting to the Cancer Research Center of Shohada-E-Tajrish Hospital from 2006 to 2010. The Cox proportional hazards model and parametric models including the Weibull, log-normal and log-logistic models were applied to determine the social determinants of survival time of breast cancer patients. The Akaike information criterion (AIC) was used to assess the best fit. Statistical analysis was performed with STATA (version 11) software. This study was performed on 797 breast cancer patients, aged 25-93 years with a mean age of 54.7 (±11.9) years. In both semi-parametric and parametric models, the three-year survival was related to level of education and municipal district of residence (P<0.05). The AIC suggested that the log-normal distribution was the best fit for the three-year survival time of breast cancer patients. Social determinants of health such as level of education and municipal district of residence affect the survival of breast cancer cases. Future studies must focus on the effect of childhood social class on the survival times of cancers, which have hitherto received only limited attention. PMID:27165244
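The model-comparison step described above (fit several parametric survival models, pick the smallest AIC) can be sketched with scipy on synthetic, fully observed times. Real survival data would require censoring to be handled, which scipy's `fit()` does not do; the sample below is illustrative only, and scipy's `fisk` distribution is its log-logistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic, fully observed survival times in years (illustrative only).
t = rng.weibull(1.5, size=797) * 4.0

def aic(loglik, k):
    # Akaike information criterion: 2k - 2*log-likelihood
    return 2 * k - 2 * loglik

candidates = {
    "Weibull":      (stats.weibull_min, stats.weibull_min.fit(t, floc=0)),
    "log-normal":   (stats.lognorm,     stats.lognorm.fit(t, floc=0)),
    "log-logistic": (stats.fisk,        stats.fisk.fit(t, floc=0)),
}
scores = {}
for name, (dist, params) in candidates.items():
    ll = np.sum(dist.logpdf(t, *params))
    scores[name] = aic(ll, len(params) - 1)  # floc fixed -> one fewer free parameter
    print(name, scores[name])

best = min(scores, key=scores.get)
print("best fit by AIC:", best)
```

In the study, the same comparison (with censoring handled, e.g. in STATA's `streg`) favoured the log-normal distribution.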
NASA Astrophysics Data System (ADS)
Casarini, L.; Bonometto, S. A.; Tessarotto, E.; Corasaniti, P.-S.
2016-08-01
We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale factor dependent equation of state of the form w = w0+(1-a)wa. The extension is based on the mapping rule between non-linear spectra of DE models with constant equation of state and those with time varying one originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with w0-wa parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, that can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
Shah, Anoop D; Bartlett, Jonathan W; Carpenter, James; Nicholas, Owen; Hemingway, Harry
2014-03-15
Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
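A single-imputation analogue of the random-forest chained-equations approach can be sketched with scikit-learn's `IterativeImputer`. Note this is one imputation pass, not full multiple imputation with Rubin's combining rules, and it is not the authors' exact algorithm; the nonlinear data-generating process below is invented for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 ** 2 + 0.1 * rng.normal(size=n)    # nonlinear dependence on x1
X = np.column_stack([x1, x2])
X[rng.random(n) < 0.3, 1] = np.nan          # ~30% missing at random in x2

# Chained-equations imputation with a random-forest conditional model:
# each incomplete variable is regressed on the others, iteratively.
imp = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0)
X_imp = imp.fit_transform(X)
print(np.isnan(X_imp).sum())   # 0 remaining missing values
```

A default (linear) `IterativeImputer` on the same data would miss the quadratic relationship, which is the kind of misspecification the abstract's second simulation study probes.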
Rigatos, G; Rigatou, E; Djida, J D
2015-01-01
The derivative-free nonlinear Kalman filter is proposed for state estimation and fault diagnosis in distributed parameter systems of the wave-type and particularly in the Peyrard-Bishop-Dauxois model of DNA dynamics. At a first stage, a nonlinear filtering approach is introduced for estimating the dynamics of the Peyrard-Bishop-Dauxois 1D nonlinear wave equation, through the processing of a small number of measurements. It is shown that the numerical solution of the associated partial differential equation results in a set of nonlinear ordinary differential equations. With the application of a diffeomorphism that is based on differential flatness theory it is shown that an equivalent description of the system is obtained in the linear canonical (Brunovsky) form. This transformation makes it possible to obtain local estimates of the state vector of the DNA model through the application of the standard Kalman filter recursion. At a second stage, the local statistical approach to fault diagnosis is used to perform fault diagnosis for this distributed parameter system by processing with statistical tools the differences (residuals) between the output of the Kalman filter and the measurements obtained from the distributed parameter system. Optimal selection of the fault threshold is achieved by using the local statistical approach to fault diagnosis. The efficiency of the proposed filtering approach in the problem of fault diagnosis for parametric change detection, in nonlinear wave-type models of DNA dynamics, is confirmed through simulation experiments.
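The residual-based idea (monitor filter innovations and flag statistically large ones) can be illustrated with a minimal scalar Kalman filter. The state model, noise levels and fault size below are arbitrary assumptions for illustration, not the derivative-free filter or the DNA model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar random-walk state, noisy measurements, additive sensor fault at k=200.
A, H, Q, R = 1.0, 1.0, 1e-3, 1e-2
x_hat, P, x_true = 0.0, 1.0, 0.0
residuals = []
for k in range(400):
    x_true = A * x_true + rng.normal(0.0, np.sqrt(Q))
    bias = 1.0 if k >= 200 else 0.0             # the injected fault
    z = H * x_true + bias + rng.normal(0.0, np.sqrt(R))
    # predict
    x_hat, P = A * x_hat, A * P * A + Q
    # innovation (residual) and its variance
    S = H * P * H + R
    nu = z - H * x_hat
    residuals.append(nu / np.sqrt(S))           # normalized residual
    # update
    Kk = P * H / S
    x_hat, P = x_hat + Kk * nu, (1.0 - Kk * H) * P

residuals = np.abs(residuals)
# The fault shows up as a burst of large normalized residuals after k=200,
# which a chi-square or CUSUM test on the innovations would flag.
print(residuals[:200].max(), residuals[200:].max())
```

The local statistical approach in the paper formalizes exactly this comparison, including the optimal choice of the detection threshold.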
Bayesian Kinematic Finite Fault Source Models (Invited)
NASA Astrophysics Data System (ADS)
Minson, S. E.; Simons, M.; Beck, J. L.
2010-12-01
Finite fault earthquake source models are inherently under-determined: there is no unique solution to the inverse problem of determining the rupture history at depth as a function of time and space when our data are only limited observations at the Earth's surface. Traditional inverse techniques rely on model constraints and regularization to generate one model from the possibly broad space of all possible solutions. However, Bayesian methods allow us to determine the ensemble of all possible source models which are consistent with the data and our a priori assumptions about the physics of the earthquake source. Until now, Bayesian techniques have been of limited utility because they are computationally intractable for problems with as many free parameters as kinematic finite fault models. We have developed a methodology called Cascading Adaptive Tempered Metropolis In Parallel (CATMIP) which allows us to sample very high-dimensional problems in a parallel computing framework. The CATMIP algorithm combines elements of simulated annealing and genetic algorithms with the Metropolis algorithm to dynamically optimize the algorithm's efficiency as it runs. We will present synthetic performance tests of finite fault models made with this methodology as well as a kinematic source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. This earthquake was well recorded by multiple ascending and descending interferograms and a network of high-rate GPS stations whose records can be used as near-field seismograms.
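The Metropolis ingredient of CATMIP can be shown in a few lines on a toy one-parameter "source model" posterior. The tempering, resampling and parallelism that let CATMIP scale to high-dimensional kinematic fault models are omitted, and the Gaussian target below is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(m):
    # Toy 1-D posterior: Gaussian with mean 2.0 and std 0.5
    return -0.5 * ((m - 2.0) / 0.5) ** 2

m, samples = 0.0, []
for _ in range(20000):
    prop = m + rng.normal(0.0, 0.5)            # random-walk proposal
    # Metropolis accept/reject on the log-posterior ratio
    if np.log(rng.random()) < log_post(prop) - log_post(m):
        m = prop
    samples.append(m)

samples = np.array(samples[5000:])             # discard burn-in
print(samples.mean(), samples.std())           # should recover ~2.0 and ~0.5
```

The payoff of the Bayesian formulation described above is that the retained ensemble characterizes all models consistent with the data, rather than a single regularized solution.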
A Semi-Parametric Bayesian Mixture Modeling Approach for the Analysis of Judge Mediated Data
ERIC Educational Resources Information Center
Muckle, Timothy Joseph
2010-01-01
Existing methods for the analysis of ordinal-level data arising from judge ratings, such as the Multi-Facet Rasch model (MFRM, or the so-called Facets model) have been widely used in assessment in order to render fair examinee ability estimates in situations where the judges vary in their behavior or severity. However, this model makes certain…
Parametric Hazard Function Estimation.
1999-09-13
Version 00 Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
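The three hazard forms and the exponential-model inference can be sketched as follows. The failure times are invented, and Phaze's actual interface and estimators are not reproduced; only the standard maximum-likelihood estimate and exact chi-square confidence interval for an exponential rate are shown.

```python
import numpy as np
from scipy import stats

# The three hazard-function forms h(t) supported by Phaze:
hazards = {
    "exponential": lambda t, lam: np.full_like(t, lam),
    "linear":      lambda t, a, b: a + b * t,
    "weibull":     lambda t, lam, beta: lam * beta * t ** (beta - 1),
}

# MLE for the exponential model from n complete failure times (hypothetical data).
times = np.array([120.0, 340.0, 95.0, 410.0, 270.0, 180.0])   # hours
n, total = len(times), times.sum()
lam_hat = n / total
# Exact two-sided 90% confidence interval via the chi-square relation
lo = stats.chi2.ppf(0.05, 2 * n) / (2 * total)
hi = stats.chi2.ppf(0.95, 2 * n) / (2 * total)
print(f"lambda_hat = {lam_hat:.5f}/h, 90% CI = ({lo:.5f}, {hi:.5f})")

t_grid = np.array([50.0, 100.0, 200.0])
print(hazards["weibull"](t_grid, lam_hat, 1.0))  # beta=1 reduces to the exponential hazard
```

Testing for an increasing failure rate, as Phaze does, amounts to testing b > 0 in the linear model or beta > 1 in the Weibull model.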
Snorradóttir, Bergthóra S; Jónsdóttir, Fjóla; Sigurdsson, Sven Th; Másson, Már
2014-08-01
A model is presented for transdermal drug delivery from single-layered silicone matrix systems. The work is based on our previous results that, in particular, extend the well-known Higuchi model. Recently, we have introduced a numerical transient model describing matrix systems where the drug dissolution can be non-instantaneous. Furthermore, our model can describe complex interactions within a multi-layered matrix and the matrix to skin boundary. The power of the modelling approach presented here is further illustrated by allowing the possibility of a donor solution. The model is validated by a comparison with experimental data, as well as validating the parameter values against each other, using various configurations with donor solution, silicone matrix and skin. Our results show that the model is a good approximation to real multi-layered delivery systems. The model offers the ability of comparing drug release for ibuprofen and diclofenac, which cannot be analysed by the Higuchi model because the dissolution in the latter case turns out to be limited. The experiments and numerical model outlined in this study could also be adjusted to more general formulations, which enhances the utility of the numerical model as a design tool for the development of drug-loaded matrices for trans-membrane and transdermal delivery.
Nonlinear model of elastic field sources
NASA Astrophysics Data System (ADS)
Lev, B. I.; Zagorodny, A. G.
2016-09-01
A general concept of long-range elastic interactions in a continuous medium is proposed. The interaction arises as a consequence of symmetry breaking of the elastic field distribution produced by topological defects treated as isolated inclusions. It is proposed to treat topological defects as sources of the elastic field and to describe them in terms of this field. The source is treated as a nonlinear object which determines the effective charge of the field at large distances in the linear theory. Models of the nonlinear source are proposed.
NASA Astrophysics Data System (ADS)
Wouters, Hendrik; Demuzere, Matthias; Blahak, Ulrich; Fortuniak, Krzysztof; Maiheu, Bino; Camps, Johan; Tielemans, Daniël; van Lipzig, Nicole P. M.
2016-09-01
This paper presents the Semi-empirical URban canopY parametrization (SURY) v1.0, which bridges the gap between bulk urban land-surface schemes and explicit-canyon schemes. Based on detailed observational studies, modelling experiments and available parameter inventories, it offers a robust translation of urban canopy parameters - containing the three-dimensional information - into bulk parameters. As a result, it brings canopy-dependent urban physics to existing bulk urban land-surface schemes of atmospheric models. At the same time, SURY preserves a low computational cost of bulk schemes for efficient numerical weather prediction and climate modelling at the convection-permitting scales. It offers versatility and consistency for employing both urban canopy parameters from bottom-up inventories and bulk parameters from top-down estimates. SURY is tested for Belgium at 2.8 km resolution with the COSMO-CLM model (v5.0_clm6) that is extended with the bulk urban land-surface scheme TERRA_URB (v2.0). The model reproduces very well the urban heat islands observed from in situ urban-climate observations, satellite imagery and tower observations, which is in contrast to the original COSMO-CLM model without an urban land-surface scheme. As an application of SURY, the sensitivity of atmospheric modelling with the COSMO-CLM model is addressed for the urban canopy parameter ranges from the local climate zones of http://WUDAPT.org. City-scale effects are found in modelling the land-surface temperatures, air temperatures and associated urban heat islands. Recommendations are formulated for more precise urban atmospheric modelling at the convection-permitting scales. It is concluded that urban canopy parametrizations including SURY, combined with the deployment of the WUDAPT urban database platform and advancements in atmospheric modelling systems, are essential.
An automated shell for management of parametric dispersion/deposition modeling
Paddock, R.A.; Absil, M.J.G.; Peerenboom, J.P.; Newsom, D.E.; North, M.J.; Coskey, R.J. Jr.
1994-03-01
In 1993, the US Army tasked Argonne National Laboratory to perform a study of chemical agent dispersion and deposition for the Chemical Stockpile Emergency Preparedness Program using an existing Army computer model. The study explored a wide range of situations in terms of six parameters: agent type, quantity released, liquid droplet size, release height, wind speed, and atmospheric stability. A number of discrete values of interest were chosen for each parameter resulting in a total of 18,144 possible different combinations of parameter values. Therefore, the need arose for a systematic method to assemble the large number of input streams for the model, filter out unrealistic combinations of parameter values, run the model, and extract the results of interest from the extensive model output. To meet these needs, we designed an automated shell for the computer model. The shell processed the inputs, ran the model, and reported the results of interest. By doing so, the shell compressed the time needed to perform the study and freed the researchers to focus on the evaluation and interpretation of the model predictions. The results of the study are still under review by the Army and other agencies; therefore, it would be premature to discuss the results in this paper. However, the design of the shell could be applied to other hazards for which multiple-parameter modeling is performed. This paper describes the design and operation of the shell as an example for other hazards and models.
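The shell's core bookkeeping (enumerate all parameter combinations, filter out unrealistic ones, then feed each survivor to the model) is easy to sketch. The parameter values and the filter rule below are invented placeholders, not the Army study's actual grid (which totalled 18,144 combinations).

```python
import itertools

# Illustrative parameter grid (values are placeholders, not the study's).
grid = {
    "agent":       ["GB", "VX", "HD"],
    "quantity_kg": [10, 100, 1000],
    "droplet_um":  [100, 500, 1000],
    "height_m":    [0, 10, 100],
    "wind_mps":    [1, 3, 6],
    "stability":   ["A", "D", "F"],
}

def realistic(combo):
    # Example filter: strongly unstable air (class A) rarely coexists with
    # high wind speeds, so drop those combinations.
    return not (combo["stability"] == "A" and combo["wind_mps"] >= 6)

runs = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
runs = [r for r in runs if realistic(r)]
print(len(runs))   # filtered subset of the 3**6 = 729 raw combinations
```

Each surviving dictionary would then be rendered into one model input stream, with the shell harvesting the quantities of interest from each output.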
Kulmala, A; Tenhunen, M
2012-11-01
The signal of a dosimetric detector is generally dependent on the shape and size of the sensitive volume of the detector. In order to optimize the performance of the detector and the reliability of the output signal, the effect of the detector size should be corrected or, at least, taken into account. The response of the detector can be modelled using the convolution theorem, which connects the system input (actual dose), output (measured result) and the effect of the detector (response function) by a linear convolution operator. We have developed a super-resolution, non-parametric deconvolution method for determination of the radial response function of a cylindrically symmetric ionization chamber. We have demonstrated that the presented deconvolution method is able to determine the radial response of the Roos parallel plate ionization chamber with better than 0.5 mm correspondence with the physical dimensions of the chamber. In addition, the performance of the method was confirmed by the excellent agreement between the output factors of the stereotactic conical collimators (4-20 mm diameter) measured by the Roos chamber, where the detector size is larger than the measured field, and the reference detector (diode). The presented deconvolution method has potential for providing reference data for more accurate physical models of the ionization chamber as well as for improving and enhancing the performance of the detectors in specific dosimetric problems.
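The convolution picture above can be sketched numerically: blur a narrow "true" profile with an assumed Gaussian detector response, then invert with a regularized Fourier division. The field width, response width and regularization constant are all assumptions for illustration, not the paper's super-resolution method.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 601)               # position, mm
true = np.where(np.abs(x) < 2.0, 1.0, 0.0)      # narrow 4 mm field
resp = np.exp(-0.5 * (x / 2.0) ** 2)            # assumed Gaussian radial response
resp /= resp.sum()                              # unit-area response

measured = np.convolve(true, resp, mode="same") # detector blurs and lowers the peak

# Regularized (Wiener-style) Fourier deconvolution; eps is an assumed
# noise-regularization constant.
eps = 1e-3
M = np.fft.fft(measured)
R = np.fft.fft(np.fft.ifftshift(resp))
recovered = np.real(np.fft.ifft(M * np.conj(R) / (np.abs(R) ** 2 + eps)))

print(measured.max(), recovered.max())          # deconvolution sharpens the blurred peak
```

This is the mechanism behind the cone output-factor result above: when the detector is wider than the field, the measured peak is depressed, and knowing the response function lets one correct for it.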
NASA Astrophysics Data System (ADS)
Yamamoto, Yu; Yamada, Shoichi
2016-02-01
We conducted one-dimensional and two-dimensional hydrodynamic simulations of post-shock revival evolutions in core-collapse supernovae, employing the simple neutrino light bulb approximation to produce explosions rather easily. In order to estimate the explosion energy, we properly accounted for nuclear recombinations and fusions, consistently with an equation of state for matter not in statistical equilibrium in general. The methodology is similar to that of our previous work, but has been improved in some respects. In this paper, we studied the influence of the progenitor structure on the dynamics systematically. In order to expedite our understanding of the systematics, we constructed six parametric progenitor models, which differ in the masses of the iron core and the Si+S layer, instead of employing realistic models provided by stellar evolution calculations, which are sometimes of stochastic nature as a function of stellar mass on the main sequence. We found that the explosion energy is tightly correlated with the mass accretion rate at shock revival irrespective of dimension, and that progenitors with light iron cores but rather high entropies, which have yet to be produced by realistic stellar evolution calculations, may reproduce the canonical values of explosion energy and nickel mass. The mass of the Si+S layer is also important in the mass accretion history after bounce; on the other hand, higher mass accretion rates and the resultant heavier cores tend to hamper strong explosions.
Modeling the Soil Moisture Parametrization in a Snow Dominated Mountainous Region
NASA Astrophysics Data System (ADS)
Kikine, Daniel; Sensoy, Aynur; Sorman, Arda
2016-04-01
The study quantifies the effects of both the soil moisture accounting and the temperature index in event-based as well as continuous simulations of a model in a snow dominated basin. Physically based watershed model parameters are required to reproduce the historical flows and forecast the stream flows. This study demonstrates that parameterization of a hydrological model is a favorable approach to perform forecasting because it employs the relationship between the calibrated model parameters and the watershed's physical properties. With this consideration, the temperature index (degree-day) snowmelt and the soil moisture accounting models within the Hydrologic Engineering Center's hydrologic modeling system (HEC-HMS) are applied to the Upper Euphrates watershed. The versatile 14-parameter soil moisture accounting (SMA) algorithm is utilized for a better simulation and parameterization of the watershed. The methodology was exemplified by performing various independent simulations using the meteorological data and the observed stream discharges. The soil moisture parameters were calibrated and modified according to their statistical relationships with the land use for the 2002 - 2008 period, and the obtained parameter set is then validated for the 2009 - 2012 period. Model outputs are evaluated in comparison to satellite derived soil moisture and snow water equivalent data. Deterministic Numerical Weather Prediction data are used together with the conceptual model to forecast runoff for the melting period of the year 2015.
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrologic models are used to simulate the responses of agricultural systems to different inputs and management strategies to identify alternative management practices to cope with future climate and/or geophysical changes. The Agricultural Policy/Environmental eXtender (APEX) is a model develope...
Hamby, D M
2002-01-01
Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-averaged concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (sigma-z), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
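The sector-averaged relative concentration the abstract refers to has a standard closed form. The snippet below implements it for one wind-speed/stability bin of a 16-sector joint frequency distribution, with an assumed power-law sigma-z(x); the coefficients are placeholders rather than the study's meteorological inputs.

```python
import numpy as np

def chi_over_q(x, u, H, a, b):
    """Sector-averaged ground-level relative concentration chi/Q (s/m^3).

    x: downwind distance (m), u: wind speed (m/s), H: effective stack
    height (m); sigma_z = a * x**b (m) is an assumed power-law fit for
    the given stability class."""
    sz = a * x ** b
    dtheta = 2.0 * np.pi / 16.0      # 16-sector wind rose (22.5 degrees)
    return (np.sqrt(2.0 / np.pi) / (u * sz * x * dtheta)
            ) * np.exp(-H ** 2 / (2.0 * sz ** 2))

# Example: neutral-stability-like coefficients (assumed), 50 m stack, 3 m/s wind
print(chi_over_q(x=1000.0, u=3.0, H=50.0, a=0.06, b=0.92))
```

An annual-average estimate then weights such terms by the joint frequency of each (direction, speed, stability) bin, which is why the result is sensitive to wind speed and sigma-z but, for a given sector, independent of lateral dispersion.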
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
Development, Validation and Parametric study of a 3-Year-Old Child Head Finite Element Model
NASA Astrophysics Data System (ADS)
Cui, Shihai; Chen, Yue; Li, Haiyan; Ruan, ShiJie
2015-12-01
Traumatic brain injury caused by falls and traffic accidents is an important cause of death and disability in children. Recently, computer finite element (FE) head models have been developed to investigate brain injury mechanisms and biomechanical responses. Based on CT data of a healthy 3-year-old child head, an FE head model with detailed anatomical structure was developed. Deep brain structures, such as the white matter, gray matter, cerebral ventricles and hippocampus, were created for the first time in this FE model. The FE model was validated by reconstructing child and adult cadaver experiments and comparing the simulation results with the experimental data. In addition, the effects of skull stiffness on the child head dynamic responses were further investigated. All the simulation results confirmed the good biofidelity of the FE model.
NASA Astrophysics Data System (ADS)
Frick, Maximilian; Sippel, Judith; Cacace, Mauro; Scheck-Wenderoth, Magdalena
2016-04-01
The goal of this study was to quantify the influence of the geological structure and geophysical parametrization of model units on the geothermal field, as calculated by 3D numerical simulations of coupled fluid and heat transport for the subsurface of Berlin, Germany. The study area is located in the Northeast German Basin, which is filled with several kilometers of sediments. This sedimentary infill includes the clastic sedimentary units of the Middle Buntsandstein and the Sedimentary Rotliegend, which are of particular interest for geothermal exploration. Previous studies conducted in the Northeast German Basin have already shown that the geometries and properties of the geological units majorly control the distribution of subsurface temperatures. In this study we followed a two-step approach: we first improved an existing structural model by integrating 57 newly available geological cross-sections, well data, and deep seismic data (down to ~4 km); secondly, we performed a sensitivity analysis investigating the effects of varying physical fluid and rock properties on the subsurface temperature field. The results of this study show that the structural configuration of model units exerts the highest influence on the geothermal field (up to ±23 K at 1000 m below sea level). Here, the Rupelian clay aquitard, which displays a heterogeneous thickness distribution and is locally characterized by hydrogeological windows (i.e., domains of zero thickness) enabling intra-aquifer groundwater circulation, has been identified as the major controlling factor. The new structural configuration of this unit (more continuous, with fewer hydrogeological windows) also reduces the influence of the different boundary conditions and heat transport mechanisms considered. Additionally, the model results show that calculated temperatures depend strongly on the geophysical properties of model units, of which the hydraulic conductivity of the Cenozoic succession was identified as the most dominant, leading to changes
A Bayesian long-term survival model parametrized in the cured fraction.
de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar
2009-06-01
The main goal of this paper is to investigate a cure rate model that encompasses some well-known proposals found in the literature. In our work the number of competing causes of the event of interest follows the negative binomial distribution. The model is conveniently reparametrized through the cured fraction, which is then linked to covariates by means of the logistic link. We explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis for the proposed model. The procedure is illustrated with a numerical example.
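The negative binomial cure rate structure can be sketched as follows; the (mean θ, dispersion φ) parametrization and the exponential baseline used in the demo are assumptions for illustration, not necessarily the paper's exact choices.

```python
import numpy as np

def pop_survival(t, theta, phi, base_cdf):
    """Population (long-term) survival when the number of competing causes
    is negative binomial with mean theta and dispersion phi:
        S_pop(t) = (1 + phi * theta * F(t))^(-1/phi)."""
    return (1.0 + phi * theta * base_cdf(t)) ** (-1.0 / phi)

def cured_fraction(theta, phi):
    """Limit of S_pop as t -> infinity (F -> 1); this is the quantity the
    model reparametrizes and links to covariates via a logistic link."""
    return (1.0 + phi * theta) ** (-1.0 / phi)

F = lambda t: 1.0 - np.exp(-t)   # exponential baseline CDF for the demo
p0 = cured_fraction(2.0, 1.0)    # phi = 1 (geometric case): p0 = 1/3
```

With φ = 1 the latent count is geometric, while φ → 0 approaches the Poisson (promotion-time) cure model, illustrating how the formulation encompasses earlier proposals.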
Shukla, Mukesh Kumar; Maji, Partha Sona; Das, Ritwick
2016-07-01
We present an efficient and tunable source generating multi-watt single-frequency red radiation by intra-cavity frequency doubling of the signal in a MgO-doped periodically poled LiNbO_{3} (MgO:PPLN)-based singly resonant optical parametric oscillator (SRO). By optimally designing the SRO cavity in a six-mirror configuration, we generate idler radiation tunable over ≈276 nm in the mid-infrared with a maximum power of P_{i}=2.05 W at a pump power of P_{p}=14.0 W. The resonant signal is frequency doubled using a 10 mm-long BiB_{3}O_{6} (BiBO) crystal, which results in a red beam tunable over the ≈753-780 nm band with a maximum power of P_{r}≈4.0 W recorded at λ_{r}≈756 nm. The deployment of a six-mirror SRO ensures single-frequency generation of red light across the entire tuning range by inducing additional losses for the Raman modes of LiNbO_{3} and thus inhibiting their oscillation. Using a scanning Fabry-Perot interferometer (FPI), the nominal linewidth of the red beam is measured to be ≈3 MHz, which changes only marginally over the entire tuning range. The long-term (over 1 h) peak-to-peak frequency fluctuation of the generated red beam is estimated to be about 3.3 GHz under free-running conditions at P_{p}=14.0 W. The generated red beam is delivered in a TEM_{00} mode profile with M^{2}≤1.32 at maximum red power.
Devarajan, Karthik; Ebrahimi, Nader
2010-01-01
The assumption of proportional hazards (PH) fundamental to the Cox PH model sometimes may not hold in practice. In this paper, we propose a generalization of the Cox PH model in terms of the cumulative hazard function, taking a form similar to the Cox PH model with the extension that the baseline cumulative hazard function is raised to a power function. Our model allows for interaction between covariates and the baseline hazard, and for the two-sample problem it also includes the case of two Weibull distributions and two extreme value distributions differing in both scale and shape parameters. The partial likelihood approach cannot be applied here to estimate the model parameters. We use the full likelihood approach via a cubic B-spline approximation for the baseline hazard to estimate the model parameters. A semi-automatic procedure for knot selection based on Akaike's Information Criterion is developed. We illustrate the applicability of our approach using real-life data. PMID:21076652
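A minimal sketch of the survival function under such a generalization; the specific covariate-dependent power exp(γ·z) below is an assumed parametrization, and the B-spline baseline estimation is omitted.

```python
import numpy as np

def survival(t, z, beta, gamma, H0):
    """S(t|z) = exp( -(H0(t) ** exp(gamma.z)) * exp(beta.z) ).
    H0 is the baseline cumulative hazard; with gamma = 0 this reduces to
    the ordinary Cox proportional hazards model."""
    z = np.asarray(z, dtype=float)
    power = np.exp(np.dot(np.asarray(gamma, dtype=float), z))
    scale = np.exp(np.dot(np.asarray(beta, dtype=float), z))
    return np.exp(-(H0(t) ** power) * scale)

H0 = lambda t: t  # unit-exponential baseline for the demo
```

For a binary covariate, a nonzero γ changes both the scale and the shape of the implied Weibull distribution, matching the two-sample case mentioned in the abstract.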
Modelling of amorphous cellulose depolymerisation by cellulases, parametric studies and optimisation
Niu, Hongxing; Shah, Nilay; Kontoravdi, Cleo
2016-01-01
Improved understanding of heterogeneous cellulose hydrolysis by cellulases is the basis for optimising enzymatic catalysis-based cellulosic biorefineries. A detailed mechanistic model is developed to describe the dynamic adsorption/desorption and synergistic chain-end scissions of cellulases (endoglucanase, exoglucanase, and β-glucosidase) upon amorphous cellulose. The model can predict the evolution of the chain lengths of insoluble cellulose polymers and the production of soluble sugars during hydrolysis. Simultaneously, a modelling framework for uncertainty analysis is built based on a quasi-Monte-Carlo method and global sensitivity analysis, which can systematically identify key parameters, help refine the model, and improve its identifiability. The model, initially comprising 27 parameters, is found to be over-parameterized, with structural and practical identification problems under usual operating conditions (low enzyme loadings). The parameter estimation problem is therefore mathematically ill-posed. The framework allows us, on the one hand, to identify a subset of 13 crucial parameters, for which more accurate confidence intervals are estimated using a given experimental dataset, and, on the other hand, to overcome the identification problems. The model's predictive capability is checked against an independent set of experimental data. Finally, the optimal composition of the cellulase cocktail is obtained by model-based optimisation, both for enzymatic hydrolysis and for the process of simultaneous saccharification and fermentation. PMID:26865832
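The quasi-Monte-Carlo sampling plus global-sensitivity step can be sketched as below on a made-up three-parameter surrogate (the paper's mechanistic model has 27 parameters); standardized regression coefficients stand in for a full variance-based analysis.

```python
import numpy as np
from scipy.stats import qmc

# Toy surrogate for hydrolysis yield; parameter names are illustrative.
def yield_model(k_ads, k_cat, k_inh):
    return k_ads * k_cat / (1.0 + k_inh)

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
u = sampler.random_base2(m=10)                      # 1024 quasi-random points
lo = np.array([0.1, 0.5, 0.0])
hi = np.array([1.0, 5.0, 2.0])
x = qmc.scale(u, lo, hi)
y = yield_model(x[:, 0], x[:, 1], x[:, 2])

# Standardized regression coefficients as a cheap global-sensitivity proxy
X = (x - x.mean(axis=0)) / x.std(axis=0)
Y = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(X, Y, rcond=None)
ranking = np.argsort(-np.abs(src))                  # most influential first
```

Parameters with near-zero coefficients are candidates for fixing, which is one practical way to shrink an over-parameterized model toward an identifiable subset.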
Fitting parametric models of diffusion MRI in regions of partial volume
NASA Astrophysics Data System (ADS)
Eaton-Rosen, Zach; Cardoso, M. J.; Melbourne, Andrew; Orasanu, Eliza; Bainbridge, Alan; Kendall, Giles S.; Robertson, Nicola J.; Marlow, Neil; Ourselin, Sebastien
2016-03-01
Regional analysis is normally done by fitting models per voxel and then averaging over a region, accounting for partial volume (PV) only to some degree. In thin, folded regions such as the cerebral cortex, such methods do not work well, as the partial volume confounds parameter estimation. Instead, we propose to fit the models per region directly with explicit PV modeling. In this work we robustly estimate region-wise parameters whilst explicitly accounting for partial volume effects. We use a high-resolution segmentation from a T1 scan to assign each voxel in the diffusion image a probabilistic membership to each of k tissue classes. We rotate the DW signal at each voxel so that it aligns with the z-axis, then model the signal at each voxel as a linear superposition of a representative signal from each of the k tissue types. Fitting involves optimising these representative signals to best match the data, given the known probabilities of belonging to each tissue type that we obtained from the segmentation. We demonstrate this method improves parameter estimation in digital phantoms for the diffusion tensor (DT) and `Neurite Orientation Dispersion and Density Imaging' (NODDI) models. The method provides accurate parameter estimates even in regions where the normal approach fails completely, for example where partial volume is present in every voxel. Finally, we apply this model to brain data from preterm infants, where the thin, convoluted, maturing cortex necessitates such an approach.
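The core linear step above (each voxel's signal as a membership-weighted mix of per-class representative signals) can be sketched as a least-squares problem; variable names are illustrative, and the signal rotation and NODDI machinery are omitted.

```python
import numpy as np

def fit_representative_signals(P, S):
    """P: (n_voxels, k) probabilistic tissue memberships from segmentation
       S: (n_voxels, n_gradients) diffusion-weighted signals
       Returns (k, n_gradients) representative per-class signals by
       solving P @ R ~= S in the least-squares sense."""
    R, *_ = np.linalg.lstsq(P, S, rcond=None)
    return R

# Synthetic phantom in which *every* voxel is partial-volumed:
rng = np.random.default_rng(0)
P = rng.dirichlet([2.0, 2.0], size=200)            # two tissue classes
R_true = np.array([[1.0, 0.8, 0.5],                # class-1 signal
                   [1.0, 0.3, 0.1]])               # class-2 signal
S = P @ R_true                                     # noiseless mixed data
R_hat = fit_representative_signals(P, S)
```

Per-voxel fitting would fail here because no voxel contains a pure tissue class, yet the region-wise fit recovers both representative signals exactly in the noiseless case.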
Smoothed Particle Inference: A Kilo-Parametric Method for X-ray Galaxy Cluster Modeling
Peterson, John R.; Marshall, P.J.; Andersson, K.; /Stockholm U. /SLAC
2005-08-05
We propose an ambitious new method that models the intracluster medium in clusters of galaxies as a set of X-ray emitting smoothed particles of plasma. Each smoothed particle is described by a handful of parameters including temperature, location, size, and elemental abundances. Hundreds to thousands of these particles are used to construct a model cluster of galaxies, with the appropriate complexity estimated from the data quality. This model is then compared iteratively with X-ray data in the form of adaptively binned photon lists via a two-sample likelihood statistic and iterated via Markov chain Monte Carlo. The complex cluster model is propagated through the X-ray instrument response using direct-sampling Monte Carlo methods. Using this approach the method can reproduce many of the features observed in the X-ray emission in a less assumption-dependent way than traditional analyses, and it allows for a more detailed characterization of the density, temperature, and metal abundance structure of clusters. Multi-instrument X-ray analyses and simultaneous X-ray, Sunyaev-Zeldovich (SZ), and lensing analyses are a straightforward extension of this methodology. Significant challenges still exist in understanding the degeneracy in these models and the statistical noise induced by the complexity of the models.
Jovian S emission: Model of radiation source
NASA Astrophysics Data System (ADS)
Ryabov, B. P.
1994-04-01
A physical model of the radiation source and an excitation mechanism have been suggested for the S component in Jupiter's sporadic radio emission. The model provides a unique explanation for most of the interrelated phenomena observed, allowing a consistent interpretation of the emission cone structure, behavior of the integrated radio spectrum, occurrence probability of S bursts, location and size of the radiation source, and fine structure of the dynamic spectra. The mechanism responsible for the S bursts is also discussed in connection with the L type emission. Relations are traced between parameters of the radio emission and geometry of the Io flux tube. Fluctuations in the current amplitude through the tube are estimated, along with the refractive index value and mass density of the plasma near the radiation source.
Study of parametrized dark energy models with a general non-canonical scalar field
NASA Astrophysics Data System (ADS)
Mamon, Abdulla Al; Das, Sudipta
2016-03-01
In this paper, we consider various dark energy models in the framework of a non-canonical scalar field with a Lagrangian density of the form L(φ, X) = f(φ) X (X/M_Pl^4)^(α-1) - V(φ), which reduces to the standard canonical scalar field model for α = 1 and f(φ) = 1. In this particular non-canonical scalar field model, we carry out the analysis for α = 2. We then obtain cosmological solutions for a constant as well as a variable equation of state parameter ω_φ(z) for dark energy. We also perform the data analysis for three different functional forms of ω_φ(z) by using the combination of SN Ia, BAO, and CMB datasets. We have found that for all the choices of ω_φ(z), the SN Ia + CMB/BAO dataset favors a past decelerated and recently accelerated expansion phase of the universe. Furthermore, using the combined dataset, we have observed that the reconstructed results for ω_φ(z) and q(z) are almost independent of the choice of parametrization, and the resulting cosmological scenarios are in good agreement with the ΛCDM model (within the 1σ confidence contour). We have also derived the form of the potential for each model; the resulting potentials are found to be a quartic potential for constant ω_φ and a polynomial in φ for variable ω_φ.
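For a flat universe and a constant equation of state, the deceleration parameter q(z) discussed above follows directly from the Friedmann equations; this generic sketch is not the paper's non-canonical reconstruction.

```python
import numpy as np

def q_of_z(z, omega_m0=0.3, w=-1.0):
    """Deceleration parameter q(z) = (1/2) * sum_i Omega_i(z) * (1 + 3 w_i)
    for a flat matter + dark-energy cosmology with constant w."""
    rho_m = omega_m0 * (1.0 + z) ** 3
    rho_de = (1.0 - omega_m0) * (1.0 + z) ** (3.0 * (1.0 + w))
    E2 = rho_m + rho_de
    return 0.5 * (rho_m + (1.0 + 3.0 * w) * rho_de) / E2
```

q < 0 marks acceleration today, while q → 1/2 at high redshift recovers matter domination, i.e., the past-decelerated and recently accelerated behaviour the dataset favours.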
NASA Technical Reports Server (NTRS)
Boxwell, D. A.; Schmitz, F. H.; Splettstoesser, W. R.; Schultz, K. J.
1987-01-01
Acoustic data taken in the anechoic Deutsch-Niederlaendischer Windkanal (DNW) have documented the blade-vortex interaction (BVI) impulsive noise radiated from a 1/7-scale model main rotor of the AH-1 series helicopter. Averaged model-scale data were compared with averaged full-scale, in-flight acoustic data under similar non-dimensional test conditions using an improved data analysis technique. At low advance ratios (mu = 0.164-0.194), the BVI impulsive noise data scale remarkably well in level, waveform, and directivity patterns. At moderate advance ratios (mu = 0.224-0.270), the scaling deteriorates, suggesting that the model-scale rotor is not adequately simulating the full-scale BVI noise. Presently, no proven explanation of this discrepancy exists. Measured BVI noise radiation is highly sensitive to all four governing nondimensional parameters: hover tip Mach number, advance ratio, local inflow ratio, and thrust coefficient.
A modified ICRP 66 iodine gas uptake model and its parametric uncertainty.
Harvey, R P; Hamby, D M; Palmer, T S
2004-11-01
Intakes via inhalation may occur from radionuclides released in the form of a gas. The chemical characteristics pertaining to the release influence the intake and subsequent dose to an exposed individual. Gases are taken up or absorbed in the entire respiratory tract and the associated uptake mechanisms are quite different from deposition of particulates. Gaseous iodine can exist in various chemical forms, e.g., elemental iodine, inorganic, and organic iodine compounds. These different chemical species play an integral role in the gaseous uptake of iodine in the respiratory tract. Gas uptake in the various regions of the respiratory tract results in the intake of iodinated material into the body. The radioactive iodine taken up in the gas-exchange tissues is absorbed into the bloodstream of an individual and subsequently transferred to other organs. Iodine in the circulatory system can then be taken up by the thyroid gland, with resulting dose to the thyroid. The magnitude and uncertainty in regional gas uptake is important in the assessment of individuals exposed to airborne releases of radioiodine. The current ICRP 66 model is rudimentary and estimates regional gas uptake based on solubility and reactivity of the different radionuclides entering the respiratory tract. The modified model proposed here employs methodology and a mathematical structure to determine estimates of fractional gas uptake rather than defaulting to literature values, as in the current ICRP model. Model parameters have been assigned input distributions and estimates of uncertainty have been determined. A sensitivity analysis of these parameters has been performed to demonstrate the importance of each of these parameters. The sensitivity analysis ranks the model-input parameters by their importance to estimates of regional gas uptake. The model developed herein may be used for improved estimation of gas uptake in the respiratory tract and subsequent dose estimates from the different
NASA Astrophysics Data System (ADS)
Johnson, Helen; Best, Martin
2015-04-01
It is well established that atmospheric behaviour is affected by land surface processes; modelling this relationship, however, still presents challenges. Most numerical weather prediction (NWP) models couple an atmospheric model to a land surface model in order to forecast the weather and/or climate. The Global Land-Atmosphere Coupling Experiment (GLACE) demonstrated that soil moisture variability has considerable control over atmospheric behaviour, particularly impacting precipitation and temperature variability. The study also suggested that differences in coupling strengths between models may be due to differences in atmospheric parametrizations. There have since been other studies which support this claim, but it is not yet clear which parameters control the land-atmosphere coupling strength or indeed what that strength should be. In this study we investigate whether certain atmospheric parameters hold more control than others over model sensitivity to land surface changes. We focus on the interaction of the JULES (Joint UK Land Environment Simulator) land surface model with the Met Office Unified Model (UM) that is used for operational NWP and climate prediction. For computational efficiency we ran the UM at a single site using a single column model (SCM) rather than running a global model simulation. A site in the Sahel region of West Africa was chosen, as this is an area that was identified by GLACE as being especially responsive to changes in soil moisture. JULES was run several times with various initial soil moisture profiles to create an ensemble of surface sensible and latent heat fluxes that could be used to force a set of different SCM runs, in order to simulate a range of different atmospheric conditions. Various atmospheric parameters in the SCM were then perturbed to create additional sets of SCM runs with different sensitivities to soil moisture changes. By analysing the difference in spread between the standard configuration and the
Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S
2012-12-01
Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at the feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner. Self-Organizing Maps (SOMs) model the spatial aspect of the problem, while Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. PMID:23137923
Kafarov, V.V.; Pisarenko, V.N.; Usacheva, I.I.
1986-04-01
A description is given of a pulse method for the investigation of heterogeneous catalytic processes, through which the parameters of a model can be evaluated with high accuracy. An example is given of the application of the procedure to an alloy catalyst.
Technology Transfer Automated Retrieval System (TEKTRAN)
Surface soil moisture is an important parameter in hydrology and climate investigations. Current and future satellite missions with L-band passive microwave radiometers can provide valuable information for monitoring the global soil moisture. A factor that can play a significant role in the modeling...
A parametric study of the drift-tearing mode using an extended-magnetohydrodynamic model
King, Jacob R.; Kruger, S. E.
2014-10-24
The linear, collisional, constant-ψ drift-tearing mode is analyzed for different regimes of the plasma-β, ion-skin-depth parameter space with an unreduced, extended-magnetohydrodynamic model. Here, new dispersion relations are found at moderate plasma β and previous drift-tearing results are classified as applicable at small plasma β.
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with the nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that three nonparametric procedures implemented produced ICCs that are similar to that of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
A virtual source model for Kilo-voltage cone beam CT: Source characteristics and model validation
Spezi, E.; Volken, W.; Frei, D.; Fix, M. K.
2011-09-15
Purpose: The purpose of this investigation was to study the source characteristics of a clinical kilo-voltage cone beam CT unit and to develop and validate a virtual source model that could be used for treatment planning purposes. Methods: We used a previously commissioned full Monte Carlo model and new bespoke software to study the source characteristics of a clinical kilo-voltage cone beam CT (CBCT) unit. We identified the main particle sources and their spatial, energy, and angular distributions for all the image acquisition presets currently used in our clinical practice. This includes a combination of two energies (100 and 120 kVp), two filters (neutral and bowtie), and eight different x-ray beam apertures. We subsequently built a virtual source model which we validated against full Monte Carlo calculations. Results: We found that the radiation output of the clinical kilo-voltage cone beam CT unit investigated in this study could be reproduced with a virtual model comprising two sources (target and filtration cone) or three sources (target, filtration cone, and bowtie filter) when additional filtration was used. With this model, we accounted for more than 97% of the photons exiting the unit. Each source in our model was characterised by an origin distribution in both the X and Y directions, a fluence map, a single energy spectrum for unfiltered beams, and a two-dimensional energy spectrum for bowtie-filtered beams. The percentage dose difference between full Monte Carlo and virtual source model based dose distributions was well within the statistical uncertainty associated with the calculations (±2%, one standard deviation) in all cases studied. Conclusions: The virtual source model that we developed is accurate in calculating the dose delivered from a commercial kilo-voltage cone beam CT unit operating with routine clinical image acquisition settings. Our data have also shown that the target, filtration cone, and bowtie filter sources all needed to be included in the model.
A comparison of NEAR actual spacecraft costs with three parametric cost models
NASA Astrophysics Data System (ADS)
Mosher, Todd J.; Lao, Norman Y.; Davalos, Evelyn T.; Bearden, David A.
1999-11-01
Costs for modern (post-1990) U.S.-built small planetary spacecraft have been shown to exhibit significantly different trends from those of larger spacecraft. These differences cannot be accounted for by the change in size alone. Some have attributed this departure to NASA's "faster, better, cheaper" design approach, embodied by the efficiency of smaller teams, reduced government oversight, increased focus on cost, and short development periods. With the Discovery, Mars Surveyor, and New Millennium programs representing the new approach to planetary exploration, it is important to understand these current cost trends and to be able to estimate costs of future proposed missions. To address this issue, The Aerospace Corporation (hereafter referred to as Aerospace) performed a study to compare the actual costs of the Near Earth Asteroid Rendezvous (NEAR) spacecraft bus (instruments were not estimated) using three different cost models: the U.S. Air Force Unmanned Spacecraft Cost Model, Version 7 (USCM-7); the Science Applications International Corporation (SAIC) NASA/Air Force Cost Model 1996 (NAFCOM96); and The Aerospace Corporation's Small Satellite Cost Model 1998 (SSCM98). The NEAR spacecraft was chosen for comparison because it was the first Discovery mission launched and was recently recognized with a Laurel award by Aviation Week and Space Technology as a benchmark for NASA's Discovery program [North, 1997]. It was also selected because the cost data has been released into the public domain [Hemmings, 1996], which makes it easy to discuss in a public forum. This paper summarizes the NEAR program, provides a short synopsis of each of the three cost models, and demonstrates how they were applied for this study.
Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images
NASA Astrophysics Data System (ADS)
Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2011-12-01
Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
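The superquadric shape model at the heart of the method can be sketched via its standard inside-outside function; the paper's 25 additional shape parameters and 6 pose parameters are omitted, and the parameter names below are the conventional ones.

```python
import numpy as np

def superquadric_F(x, y, z, a, b, c, e1, e2):
    """Inside-outside function of a superquadric centred at the origin:
    F < 1 inside, F = 1 on the surface, F > 1 outside.
    a, b, c are the semi-axes; e1, e2 control squareness (a small e1
    yields an elliptical-cylinder-like shape, as in the initialization)."""
    xy = np.abs(x / a) ** (2.0 / e2) + np.abs(y / b) ** (2.0 / e2)
    return xy ** (e2 / e1) + np.abs(z / c) ** (2.0 / e1)
```

Fitting then amounts to minimizing a cost built from F over the vertebral body surface voxels while optimizing the shape and rigid-body pose parameters.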
Nanoscale electromechanical parametric amplifier
Aleman, Benjamin Jose; Zettl, Alexander
2016-09-20
This disclosure provides systems, methods, and apparatus related to a parametric amplifier. In one aspect, a device includes an electron source electrode, a counter electrode, and a pumping electrode. The electron source electrode may include a conductive base and a flexible conductor. The flexible conductor may have a first end and a second end, with the second end of the flexible conductor being coupled to the conductive base. A cross-sectional dimension of the flexible conductor may be less than about 100 nanometers. The counter electrode may be disposed proximate the first end of the flexible conductor and spaced a first distance from the first end of the flexible conductor. The pumping electrode may be disposed proximate a length of the flexible conductor and spaced a second distance from the flexible conductor.
Parametric Modeling of the Safety Effects of NextGen Terminal Maneuvering Area Conflict Scenarios
NASA Technical Reports Server (NTRS)
Rogers, William H.; Waldron, Timothy P.; Stroiney, Steven R.
2011-01-01
The goal of this work was to analytically identify and quantify the issues, challenges, technical hurdles, and pilot-vehicle interface issues associated with conflict detection and resolution (CD&R)in emerging operational concepts for a NextGen terminal aneuvering area, including surface operations. To this end, the work entailed analytical and trade studies focused on modeling the achievable safety benefits of different CD&R strategies and concepts in the current and future airport environment. In addition, crew-vehicle interface and pilot performance enhancements and potential issues were analyzed based on review of envisioned NextGen operations, expected equipage advances, and human factors expertise. The results of perturbation analysis, which quantify the high-level performance impact of changes to key parameters such as median response time and surveillance position error, show that the analytical model developed could be useful in making technology investment decisions.
A parametric model and estimation techniques for the inharmonicity and tuning of the piano.
Rigaud, François; David, Bertrand; Daudet, Laurent
2013-05-01
Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. It is proposed in this paper to jointly model the inharmonicity and tuning of pianos on the whole compass. While using a small number of parameters, these models are able to reflect both the specificities of instrument design and tuner's practice. An estimation algorithm is derived that can run either on a set of isolated note recordings, but also on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting some tuner's choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers. PMID:23654413
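The stiff-string inharmonicity law underlying such models can be sketched as the classical relation f_n = n·f0·sqrt(1 + B·n²); the paper's contribution, modelling how f0 and B vary across the compass, is not reproduced here.

```python
import numpy as np

def partial_frequencies(f0, B, n_partials=16):
    """Frequencies of the first partials of a stiff string:
        f_n = n * f0 * sqrt(1 + B * n**2)
    f0: nominal fundamental (Hz), B: inharmonicity coefficient."""
    n = np.arange(1, n_partials + 1)
    return n * f0 * np.sqrt(1.0 + B * n ** 2)
```

A nonzero B sharpens the upper partials, which is what drives the octave stretching mentioned above.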
Phase diagram of Model C in the parametric space of order parameter and space dimensions
NASA Astrophysics Data System (ADS)
Dudka, M.; Folk, R.; Holovatch, Yu.
2016-03-01
The scaling behavior of Model C, describing the dynamical behavior of the n-component nonconserved order parameter coupled statically to a scalar conserved density, is considered in d-dimensional space. Conditions for the realization of different types of scaling regimes in the (n, d) plane are studied within the field-theoretical renormalization group approach. Borders separating these regions are calculated on the basis of high-order RG functions, using ε expansions as well as the fixed-dimension d approach with resummation.
Savitsky, Terrance D; Paddock, Susan M
2013-06-01
We develop a dependent Dirichlet process (DDP) model for repeated measures multiple membership (MM) data. This data structure arises in studies under which an intervention is delivered to each client through a sequence of elements which overlap with those of other clients on different occasions. Our interest concentrates on study designs for which the overlaps of sequences occur for clients who receive an intervention in a shared or grouped fashion whose memberships may change over multiple treatment events. Our motivating application focuses on evaluation of the effectiveness of a group therapy intervention with treatment delivered through a sequence of cognitive behavioral therapy session blocks, called modules. An open-enrollment protocol permits entry of clients at the beginning of any new module in a manner that may produce unique MM sequences across clients. We begin with a model that sums client and multiple membership module random effect terms, which are assumed independent. Our MM DDP model relaxes the assumption of conditionally independent client and module random effects by specifying a collection of random distributions for the client effect parameters that are indexed by the unique set of module attendances. We demonstrate how this construction facilitates examining heterogeneity in the relative effectiveness of group therapy modules over repeated measurement occasions. PMID:24273629
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.; Najm, Habib N.; Debusschere, Bert
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
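The polynomial chaos step can be illustrated in one dimension: project a model output u(ξ), ξ ~ N(0,1), onto probabilists' Hermite polynomials by Gauss-Hermite quadrature. The model function and truncation order below are hypothetical stand-ins, not the paper's coupled MD-continuum setup:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def model(xi):
    """Hypothetical stand-in for the expensive atomistic response."""
    return np.exp(0.3 * xi)

nodes, weights = He.hermegauss(20)        # rule for weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize to the N(0,1) measure

order = 4
coeffs = []
for k in range(order + 1):
    Hk = He.hermeval(nodes, [0.0] * k + [1.0])   # He_k at the quadrature nodes
    # Galerkin projection: c_k = E[u(xi) He_k(xi)] / E[He_k^2], E[He_k^2] = k!
    coeffs.append(np.sum(weights * model(nodes) * Hk) / math.factorial(k))

def surrogate(xi):
    """Evaluate the truncated PCE sum_k c_k He_k(xi)."""
    return He.hermeval(xi, coeffs)
```

Once built, the cheap surrogate replaces the atomistic component inside the coupling iteration, with its coefficients carrying the propagated uncertainty.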
Parametric Study of Plasma Torch Operation Using a MHD Model Coupling the Arc and Electrodes
NASA Astrophysics Data System (ADS)
Alaya, M.; Chazelas, C.; Vardelle, A.
2016-01-01
Coupling of the electromagnetic and heat transfer phenomena in a non-transferred arc plasma torch is generally based on a current density profile and a temperature imposed on the cathode surface. However, the current density profile cannot be observed experimentally, so such computations rest on an estimate of the current distribution at the cathode tip. To eliminate this boundary condition and be able to predict the arc dynamics in the plasma torch, the cathode was included in the computational domain, the arc current was imposed on the rear surface of the cathode, and the electromagnetism and energy conservation equations for the fluid and the electrode were coupled and solved. The solution of this system of equations was implemented in a CFD computer code to model various plasma torch operating conditions. The model predictions for various arc currents were consistent and indicated that such a model could be applied with confidence to plasma torches of different geometries, such as cascaded-anode plasma torches.
Vehicle Sketch Pad: a Parametric Geometry Modeler for Conceptual Aircraft Design
NASA Technical Reports Server (NTRS)
Hahn, Andrew S.
2010-01-01
The conceptual aircraft designer is faced with a dilemma: how to strike the best balance between productivity and fidelity? Historically, handbook methods have required only the coarsest of geometric parameterizations in order to perform analysis. Increasingly, there has been a drive to upgrade analysis methods, but these require considerably more precise and detailed geometry. Attempts have been made to use computer-aided design packages to fill this void, but their cost and steep learning curve have made them unwieldy at best. Vehicle Sketch Pad (VSP) has been developed over several years to better fill this void. While no substitute for the full feature set of computer-aided design packages, VSP allows even novices to quickly become proficient in defining three-dimensional, watertight aircraft geometries that are adequate for producing multi-disciplinary meta-models for higher-order analysis methods, wind tunnel and display models, as well as a starting point for animation models. This paper will give an overview of the development and future course of VSP.
NASA Astrophysics Data System (ADS)
Brown, James; Seo, Dong-Jun
2010-05-01
Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results
Modeling the dynamic operation of a small fin plate heat exchanger - parametric analysis
NASA Astrophysics Data System (ADS)
Motyliński, Konrad; Kupecki, Jakub
2015-09-01
Given their high efficiency, low emissions, and multiple fuelling options, solid oxide fuel cells (SOFCs) offer a promising alternative for stationary power generators, especially when integrated into micro-combined heat and power (μ-CHP) units. Despite the fact that the fuel cells are the key component in such power systems, other auxiliaries of the system can play a critical role and therefore require significant attention. Since SOFCs use a ceramic material as the electrolyte, a high operating temperature (typically of the order of 700-900 °C) is required to achieve sufficient performance. For that reason both the fuel and the oxidant have to be preheated before entering the SOFC stack. Hot gases exiting the fuel cell stack transport a substantial amount of energy, part of which has to be recovered for preheating the streams entering the stack and for heating purposes. Effective thermal integration of the μ-CHP unit can be achieved only when proper technical measures are used. The ability to efficiently preheat the streams of oxidant and fuel relies on heat exchangers, which are present in all possible configurations of power systems with solid oxide fuel cells. In this work a compact fin plate heat exchanger operating in the high-temperature regime was considered. A dynamic model was proposed for investigating its performance under the transitional states of the fuel cell system. The heat exchanger was simulated using commercial modeling software. The model includes key geometrical and functional parameters. The working conditions of the power unit with SOFC vary due to several factors, such as load changes, heating and cooling procedures of the stack, and others. These issues affect the parameters of the streams entering the heat exchanger. The mathematical model of the heat exchanger is based on a set of equations which are solved simultaneously in an iterative process. It enables defining the conditions at the outlets of both the hot and the cold sides.
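As a steady-state reference point for such models, the textbook effectiveness-NTU relation for a counter-flow exchanger gives the outlet temperatures in closed form. This is a generic sketch, not the paper's transient fin plate model:

```python
import math

def counterflow_effectiveness(UA, C_hot, C_cold):
    """Effectiveness of a counter-flow heat exchanger from the
    standard NTU relation (textbook form, not the paper's model)."""
    C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
    Cr = C_min / C_max
    NTU = UA / C_min
    if abs(Cr - 1.0) < 1e-12:          # balanced-stream limit
        return NTU / (1.0 + NTU)
    e = math.exp(-NTU * (1.0 - Cr))
    return (1.0 - e) / (1.0 - Cr * e)

def outlet_temperatures(UA, C_hot, C_cold, T_hot_in, T_cold_in):
    """Outlet temperatures from an energy balance q = eps * C_min * dT_max."""
    eps = counterflow_effectiveness(UA, C_hot, C_cold)
    q = eps * min(C_hot, C_cold) * (T_hot_in - T_cold_in)
    return T_hot_in - q / C_hot, T_cold_in + q / C_cold
```

A dynamic model such as the paper's adds thermal capacitance terms on top of this balance, which is why an iterative solution over time steps is needed.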
Wang, Junmei; Cieplak, Piotr; Li, Jie; Cai, Qin; Hsieh, Meng-Juei; Luo, Ray; Duan, Yong
2012-06-21
In the previous publications of this series, we presented a set of Thole induced dipole interaction models using four types of screening functions. In this work, we document our effort to refine the van der Waals parameters for the Thole polarizable models. Following the philosophy of AMBER force field development, the van der Waals (vdW) parameters were tuned for the Thole model with linear screening function to reproduce both the ab initio interaction energies and the experimental densities of pure liquids. An in-house genetic algorithm was applied to maximize the fitness of "chromosomes," defined as a function of the root-mean-square errors (RMSE) of interaction energy and liquid density. To efficiently explore the vdW parameter space, a novel approach was developed to estimate the liquid densities for a given vdW parameter set using the mean residue-residue interaction energies through interpolation/extrapolation. This approach allowed the costly molecular dynamics simulations to be performed only at the end of each optimization cycle, eliminating the simulations during the cycle. Test results show notable improvements over the original AMBER FF99 vdW parameter set, as indicated by the reduction in errors of the calculated pure liquid densities (d), heats of vaporization (H(vap)), and hydration energies. The average percent error (APE) of the densities of 59 pure liquids was reduced from 5.33 to 2.97%; the RMSE of H(vap) was reduced from 1.98 to 1.38 kcal/mol; the RMSE of solvation free energies of 15 compounds was reduced from 1.56 to 1.38 kcal/mol. For the interaction energies of 1639 dimers, the overall performance of the optimized vdW set is slightly better than the original FF99 vdW set (RMSE of 1.56 versus 1.63 kcal/mol). The optimized vdW parameter set was also evaluated for the exponential screening function used in the Amoeba force field to assess its applicability for different types of screening functions. Encouragingly, comparable performance was
NASA Astrophysics Data System (ADS)
Dubinin, M. N.; Petrova, E. Yu.
2016-07-01
Constraints on the parameter space of the Minimal Supersymmetric Standard Model (MSSM) that are imposed by the experimentally observed mass of the Higgs boson (m_H = 125 GeV) upon taking into account radiative corrections within an effective theory for the Higgs sector in the decoupling limit are examined. It is also shown that simplified approximations for radiative corrections in the MSSM Higgs sector could reduce, to a rather high degree of precision, the dimensionality of the multidimensional MSSM parameter space to two.
Parametrization of the relativistic σ-ω model for nuclear matter
Dadi, Anis ben Ali
2010-08-15
We have investigated the zero-temperature equation of state (EoS) for infinite nuclear matter within the σ-ω model at all densities n_B and different proton-neutron asymmetry η ≡ (N-Z)/A. We have presented an analytical expression for the compression modulus and found that nuclear matter ceases to saturate at η slightly larger than 0.8. Afterward, we have developed an analytical method to determine the strong coupling constants from the EoS for isospin-symmetric nuclear matter, which allows us to reproduce all the saturation properties with high accuracy. For various values of the nucleon effective mass and the compression modulus, we have found that the quartic self-coupling constant G_4 is negative, or positive and very large. Furthermore, we have demonstrated that it is possible (a) to investigate the EoS in terms of n_B and η; and (b) to reproduce all the known saturation properties without G_4. We have thus concluded that the latter is not necessary in the σ-ω model.
Parametric behaviors of CLUBB in simulations of low clouds in the Community Atmosphere Model (CAM)
Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven; Ovchinnikov, Mikhail; A. Bogenschutz, Peter; Gettelman, Andrew; Zhou, Tianjun
2015-07-03
In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher order closure (HOC) scheme, and 4 parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space and a generalized linear model is applied to study the responses of simulated cloud fields to tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two different ways: macrophysics itself and its interaction with microphysics. First, the parameters related to dynamic and thermodynamic turbulent structure and double Gaussians closure are found to be the most influential parameters for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. Furthermore, this study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights for the interaction of macrophysics and microphysics.
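The QMC-plus-linear-model workflow above can be sketched in miniature. The example below uses a 3-parameter toy response with hypothetical weights in place of the 18-parameter CAM5 ensemble:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# Quasi-Monte Carlo (Sobol) design over a 3-parameter unit hypercube;
# the paper's space has 14 CLUBB + 4 ZM tunable parameters.
sampler = qmc.Sobol(d=3, scramble=True, seed=0)
X = sampler.random_base2(m=7)                 # 2^7 = 128 parameter sets

# Toy stand-in for a simulated low-cloud property; the weights 2.0 and
# -0.5 are hypothetical, with a small noise term for internal variability.
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=len(X))

# Generalized-linear-model step: least-squares fit over the design, then
# rank parameters by the magnitude of their fitted coefficients.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
sensitivity = np.abs(coef[1:])
ranking = np.argsort(sensitivity)[::-1]       # most influential first
```

The ranking recovers which inputs dominate the variance of the response, which is the information used to attribute simulated cloud variability to individual parameters.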
Mathematical models for non-parametric inferences from line transect data
Burnham, K.P.; Anderson, D.R.
1976-01-01
A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown that there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires that we know the transformation of r given by f(0 | r).
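A concrete instance of this framework: with g(0) = 1, animal density follows from the right-angle-distance pdf at zero via D = n·f(0)/(2L). The kernel estimator of f(0) below is one common nonparametric choice for illustration, not the paper's own construction:

```python
import numpy as np

def density_estimate(distances, line_length, bandwidth):
    """Line-transect density estimate D = n * f(0) / (2L), where f(0),
    the perpendicular-distance pdf at zero, is estimated with a Gaussian
    kernel reflected about the transect line (an illustrative choice)."""
    d = np.asarray(distances, dtype=float)
    n = len(d)
    # Reflecting the data about 0 removes boundary bias; at x = 0 the
    # reflected estimate reduces to doubling each kernel contribution.
    k = np.exp(-0.5 * (d / bandwidth) ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    f0 = 2.0 * np.mean(k)
    return n * f0 / (2.0 * line_length)
```

For detection distances uniform on [0, 1] (so f(0) = 1), the estimate approaches n/(2L), the intuitive strip-count density.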
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. The multispectral data, being multiclass in nature as well, required a Bayes error estimation procedure that was dependent on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant, multiple-port system where the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.
A parametric model for seismic wavelets—with estimation and uncertainty quantification
NASA Astrophysics Data System (ADS)
Skauvold, Jacob; Eidsvik, Jo; Theune, Ulrich
2016-05-01
Wavelet estimation is an essential step in qualitatively and quantitatively analysing and interpreting seismic data. Applications span from seismic data quality assessment to well ties and seismic inversion. Wavelet estimation methods can be roughly separated into two approaches: data-driven inversion methods and analytical definitions. We present a new analytical wavelet definition, which is based on Hermite basis functions. This wavelet model contains four parameters, which correspond to wavelet magnitude, phase, wavelet length and bandwidth. One of our main motivations for this development was to define a compact wavelet representation and an intrinsic parameter uncertainty assessment workflow, which allows us to quantify uncertainties in estimated wavelets, as well as the generation of wavelet realizations to be used, for example, in statistical seismic amplitude inversions. We present a statistical workflow to estimate the model parameters and to explore their posterior uncertainties given well log data and seismic amplitude data. This includes sampling the posterior distribution of the four wavelet parameters using Markov chain Monte Carlo methods. We then discuss the applicability, limitations and challenges of the approach with the help of synthetic data and a North Sea data set with well logs and processed seismic amplitudes, where we also compare our method to Bayesian least-squares and a commercial wavelet estimation routine. Realizations of wavelets based on the optimized parameters and their uncertainties appear to sample the wavelet space well with reasonable variations in wavelet length, phase and amplitude while not introducing random fluctuations or wavelet lobes. The results indicate that the compact wavelet representation allows for an efficient and rather stable wavelet estimation workflow that achieves useful results in the presence of noisy data.
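The idea of a compact four-parameter wavelet can be sketched with the first two Hermite functions, using the phase parameter to rotate between the even and odd components. This is one plausible reading for illustration; the paper's exact basis and parameterization differ in detail:

```python
import numpy as np

def hermite_wavelet(t, amplitude, phase, length):
    """Sketch of a Hermite-basis wavelet: a phase rotation between the
    0th (even) and 1st (odd) Hermite functions, time-scaled by `length`.
    Illustrative only, not the paper's exact four-parameter model."""
    x = t / length
    h0 = np.exp(-0.5 * x**2)          # 0th Hermite function (even, zero-phase)
    h1 = x * np.exp(-0.5 * x**2)      # 1st Hermite function (odd)
    return amplitude * (np.cos(phase) * h0 + np.sin(phase) * h1)

t = np.linspace(-0.1, 0.1, 201)       # time axis in seconds
w = hermite_wavelet(t, amplitude=1.0, phase=0.0, length=0.02)
```

Because each parameter has a direct physical meaning (magnitude, phase, length), posterior samples of the four parameters map straight onto wavelet realizations, which is the workflow the abstract describes.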
Parametric modeling of exhaust gas emission from natural gas fired gas turbines
Bakken, L.E.; Skogly, L.
1996-07-01
Increased focus on air pollution from gas turbines in the Norwegian sector of the North Sea has resulted in taxes on CO₂. Statements made by the Norwegian authorities imply regulations and/or taxes on NOₓ emissions in the near future. The existing CO₂ tax of NOK 0.82/Sm³ (US$0.12/Sm³) and a possible future tax on NOₓ are analyzed mainly with respect to operating and maintenance costs for the gas turbine. Depending on actual tax levels, the machine should be operated at full load/optimum thermal efficiency or at part load to reduce specific exhaust emissions. Based on field measurements, exhaust emissions (CO₂, CO, NOₓ, N₂O, UHC, etc.) are established with respect to load and gas turbine performance, including performance degradation. Different NOₓ emission correlations are analyzed based on test results, and a proposed prediction model is presented. The impact of machinery performance degradation on emission levels is analyzed in particular. Good agreement is achieved between measured and predicted NOₓ emissions from the proposed correlation. To achieve continuous exhaust emission control, the proposed NOₓ model is implemented in the on-line condition monitoring system on the Sleipner A platform, rather than introducing sensitive emission sensors in the exhaust gas stack. The on-line condition monitoring system forms an important tool in detecting machinery condition/degradation and air pollution, and in achieving optimum energy conservation.
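Correlations of the kind discussed typically scale NOₓ with combustor pressure and flame temperature. The functional form and all coefficients below are hypothetical placeholders for illustration, not the paper's fitted model:

```python
import math

def nox_correlation(p_comb, t_flame, a=0.5, b=250.0, c=1e-3):
    """Generic gas-turbine NOx correlation of the form often fitted to
    field data: NOx ~ c * P^a * exp(T / b). All coefficients here are
    hypothetical placeholders, not the paper's fitted values.

    p_comb  : combustor pressure (bar)
    t_flame : flame temperature (K)
    """
    return c * p_comb ** a * math.exp(t_flame / b)
```

Once fitted to measured stack data, such a correlation lets an on-line condition monitoring system infer emissions from performance parameters it already tracks, avoiding dedicated exhaust sensors.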
New depositional models for Cretaceous source rocks
Kauffman, E.G.; Villamil, T.
1993-02-01
The Cretaceous marks one of the greatest periods of source rock development in geologic history, especially in coastal and epi-continental marine basins, where the number, duration, and geographic extent of Corg-rich intervals exceed those of oceanic basins. Large-scale factors regulating Cretaceous source rocks include sea level, sedimentation rate/type, paleoclimate and marine thermal gradients, paleoceanography (circulation, stratification, chemistry, upwelling, nutrient supply), and surface water productivity. Marine depositional settings favored as models for Corg concentration include silled and tectonically depressed basins, intersection of OMZs with shallow continental seas, coastal upwelling, highly stratified shallow seas, and oceanic anoxic events (OAEs). All of these settings are thought to be characterized by stagnant, anoxic/highly dysoxic water masses above the sediment-water interface, and highly stressed benthic environments. This is seemingly supported by the fine lamination, sparse bioturbation, and high pyrite and Corg content of most source rocks. But high-resolution (cm-scale) sedimentologic, paleobiologic, and geochemical analyses of Jurassic-Cretaceous source rocks reveal, instead, dynamic benthic environments with active currents, episodically crowded with diverse life in event communities, and persistently characterized by longer-term, low-diversity resident benthic communities. These characteristics indicate rapidly fluctuating, predominantly dysoxic to oxic waters at and above the sediment-water interface for most Corg-rich black shales. A new model for source rock generation is proposed which retains the redox boundary at or near the sediment-water interface over large areas of seafloor, aided in part by extensive development of benthic microbial mats, which may contribute up to 30% of the Corg in marine source rocks.
NASA Astrophysics Data System (ADS)
Hemmings, J. C. P.; Challenor, P. G.; Yool, A.
2015-03-01
Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to capture the dominant biogeochemical dynamics of a complex biological system. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of the target model output. In general, chlorophyll records at a
Comparison of a beach parametric morphodynamic model results with in situ measurements
NASA Astrophysics Data System (ADS)
Ferreira, Caroline; Silva, Paulo A.; Baptista, Paulo; Abreu, Tiago
2014-05-01
The south coastal stretch of the Aveiro inlet on the northwest coast of Portugal is subject to a highly energetic wave climate and presents generalized erosion. To characterize the morphodynamic behavior of this coastal stretch it is important to establish the relationship between the hydrodynamic forcing and beach topography changes. Furthermore, it is necessary to develop methods that enable its behavior to be estimated at short and medium term. This work presents a model which estimates the cross-shore sediment transport from the shoaling zone into the swash zone. The transformation of the waves (shoaling and refraction) as they propagate towards the shore is computed from the incident wave field assuming conservation of the wave energy flux, taking into account the tidal level and the beach bathymetry and topography. Wave breaking is described according to Battjes & Janssen (1978) and wave dissipation follows Baldock et al.'s (1998) formulation. The cross-shore sediment transport rates in the shoaling, surf and swash zones are computed from Tinker et al.'s (2009) suspended-load shape function as a function of the normalized depth, h/hb, where hb represents the water depth at wave breaking. The performance of the model was assessed by comparing the computed significant wave height and sediment fluxes with water-level measurements and morphological variations at a transect in the coastal stretch. The hydrodynamic measurements were obtained with pressure transducers placed in the inter-tidal zone during one tidal cycle, and topographic surveys were made with the INSHORE system (Baptista et al., 2011a,b). The results show that the computed sediment fluxes are qualitatively in agreement with the topographic observations, meaning that the parameterized sediment-flux shape function provides a good basis for predicting the beach's morphodynamic behavior at low computational cost. References: Baldock, TE, Holmes, P, Bunker, S, Van Weert, P, 1998. Cross-shore hydrodynamics within
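The wave-transformation step described above can be sketched with linear wave theory: solve the dispersion relation for the wavenumber, shoal the wave height by conservation of energy flux, and cap it with a depth-limited breaking criterion. This is a minimal illustration under stated assumptions (no refraction, and a simple breaker index gamma = 0.78 standing in for the Battjes & Janssen formulation used in the paper), not the authors' implementation.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(T, h, tol=1e-12):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) for k
    with Newton's method, starting from the shallow-water guess."""
    omega = 2 * math.pi / T
    k = omega / math.sqrt(G * h)
    for _ in range(50):
        t = math.tanh(k * h)
        f = G * k * t - omega ** 2
        df = G * t + G * k * h * (1 - t * t)
        dk = f / df
        k -= dk
        if abs(dk) < tol:
            break
    return k

def group_velocity(T, h):
    """Group velocity cg = n*c from linear wave theory."""
    k = wavenumber(T, h)
    c = (2 * math.pi / T) / k
    n = 0.5 * (1 + 2 * k * h / math.sinh(2 * k * h))
    return n * c

def shoaled_height(H0, T, h0, h, gamma=0.78):
    """Shoal a wave of height H0 from depth h0 to depth h by conservation
    of energy flux, capped at the depth-limited breaking height gamma*h."""
    H = H0 * math.sqrt(group_velocity(T, h0) / group_velocity(T, h))
    return min(H, gamma * h)

# Example: a 1 m, 10 s wave shoaling from 20 m to 3 m depth
H_nearshore = shoaled_height(1.0, 10.0, 20.0, 3.0)
```

The min() cap is the crudest possible breaking model; the paper's dissipation-based treatment distributes breaking over the surf zone instead.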
Guo, Zhun; Wang, Minghuai; Qian, Yun; Larson, Vincent E.; Ghan, Steven; Ovchinnikov, Mikhail; Bogenschutz, Peter A.; Gettelman, Andrew; Zhou, Tianjun
2015-07-03
In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher-order closure (HOC) scheme, and 4 parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is applied to study the responses of simulated cloud fields to tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two different ways: through macrophysics itself and through its interaction with microphysics. First, the parameters related to dynamic and thermodynamic turbulent structure and double-Gaussian closure are found to be the most influential parameters for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. This study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights for the interaction of macrophysics and microphysics.
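The variance-attribution idea above can be sketched in miniature: draw low-discrepancy samples over the parameter space, evaluate a response, and apportion its variance among the inputs. Everything here is a hedged stand-in: a Halton sequence replaces the study's QMC sampler, a synthetic linear response replaces CAM5 output, and squared correlations (valid for near-independent inputs and a linear response) replace the fitted generalized linear model.

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_points(n, bases=(2, 3, 5)):
    """n low-discrepancy points in the unit cube, one prime base per dimension."""
    return [[halton(i + 1, b) for b in bases] for i in range(n)]

def variance_shares(X, y):
    """Fraction of var(y) linearly attributable to each input column
    (squared correlation coefficient)."""
    n = len(y)
    ym = sum(y) / n
    vy = sum((v - ym) ** 2 for v in y) / n
    shares = []
    for j in range(len(X[0])):
        xj = [row[j] for row in X]
        xm = sum(xj) / n
        vx = sum((v - xm) ** 2 for v in xj) / n
        cov = sum((a - xm) * (b - ym) for a, b in zip(xj, y)) / n
        shares.append(cov * cov / (vx * vy))
    return shares

# Synthetic response standing in for a simulated cloud field: the first
# "tunable parameter" dominates, the third barely matters.
X = halton_points(512)
y = [4.0 * p1 + 1.0 * p2 + 0.2 * p3 for p1, p2, p3 in X]
shares = variance_shares(X, y)
```

With independent uniform inputs the shares approach 16/17.04, 1/17.04 and 0.04/17.04, so the dominant parameter is recovered cleanly.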
Jensen, Benjamin D; Bandyopadhyay, Ananyo; Wise, Kristopher E; Odegard, Gregory M
2012-09-11
The development of innovative carbon-based materials can be greatly facilitated by molecular modeling techniques. Although the Reax Force Field (ReaxFF) can be used to simulate the chemical behavior of carbon-based systems, the simulation settings required for accurate predictions have not been fully explored. Using the ReaxFF, molecular dynamics (MD) simulations are used to simulate the chemical behavior of pure carbon and hydrocarbon reactive gases that are involved in the formation of carbon structures such as graphite, buckyballs, amorphous carbon, and carbon nanotubes. It is determined that the maximum simulation time step that can be used in MD simulations with the ReaxFF is dependent on the simulated temperature and selected parameter set, as are the predicted reaction rates. It is also determined that different carbon-based reactive gases react at different rates, and that the predicted equilibrium structures are generally the same for the different ReaxFF parameter sets, except in the case of the predicted formation of large graphitic structures with the Chenoweth parameter set under specific conditions.
Parametric Modeling Investigation of a Radially-Staged Low-Emission Aviation Combustor
NASA Technical Reports Server (NTRS)
Heath, Christopher M.
2016-01-01
Aviation gas-turbine combustion demands high efficiency, wide operability and minimal trace gas emissions. Performance critical design parameters include injector geometry, combustor layout, fuel-air mixing and engine cycle conditions. The present investigation explores these factors and their impact on a radially staged low-emission aviation combustor sized for a next-generation 24,000-lbf-thrust engine. By coupling multi-fidelity computational tools, a design exploration was performed using a parameterized annular combustor sector at projected 100% takeoff power conditions. Design objectives included nitrogen oxide emission indices and overall combustor pressure loss. From the design space, an optimal configuration was selected and simulated at 7.1, 30 and 85% part-power operation, corresponding to landing-takeoff cycle idle, approach and climb segments. All results were obtained by solution of the steady-state Reynolds-averaged Navier-Stokes equations. Species concentrations were solved directly using a reduced 19-step reaction mechanism for Jet-A. Turbulence closure was obtained using a nonlinear K-epsilon model. This research demonstrates revolutionary combustor design exploration enabled by multi-fidelity physics-based simulation.
NASA Astrophysics Data System (ADS)
Rothfuss, Youri; Vereecken, Harry; Brüggemann, Nicolas
2015-04-01
water processes. An important challenge is to provide models with non-destructive and high-resolution isotope data, both in space and time (e.g., using available microporous tubing or membrane-based setups). Moreover, parallel to field studies, efforts should be made to design specific experiments under controlled conditions, allowing the underlying hypotheses of the above-mentioned isotope-enabled SVAT models to be tested. Using isotope data obtained from these controlled experiments will improve the characterization of evaporation processes within the soil profile and ameliorate the parametrization of the respective isotope modules.
Light-pollution model for cloudy and cloudless night skies with ground-based light sources.
Kocifaj, Miroslav
2007-05-20
A scalable theoretical model of light pollution from ground-based sources is presented. The model is successfully employed for simulating the angular behavior of the spectral and integral sky radiance and/or luminance during nighttime. There is no restriction on the number of ground-based light sources or on their spatial distribution in the vicinity of the measuring point (i.e., both the distances and the azimuth angles of the light sources are configurable). The model is applicable to real finite-dimensional surface sources with defined spectral and angular radiating properties, in contrast to the frequently used point-source approximation. The influence of the atmosphere on the transmitted radiation is formulated in terms of aerosol and molecular optical properties. The altitude and spectral reflectance of a cloud layer are the main factors introduced for simulating cloudy and/or overcast conditions. The derived equations are translated into numerically fast code, making it possible to repeat the entire set of calculations in real time. The parametric character of the model enables its efficient use by illuminating engineers and/or astronomers in the study of various light-pollution situations. Some examples of numerical runs are presented in the form of graphical results. PMID:17514252
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1995-01-01
Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
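A classic instance of the idea above is a cost estimating relationship (CER) that links cost to a non-cost parameter through a power law, fitted by least squares in log-log space. The sketch below is generic, and the historical data points and the 800 kg query are entirely hypothetical.

```python
import math

def fit_power_cer(drivers, costs):
    """Fit cost = a * driver^b by ordinary least squares on the logs."""
    lx = [math.log(w) for w in drivers]
    ly = [math.log(c) for c in costs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical historical program data: (dry mass in kg, cost in $M)
hist = [(100, 12.0), (250, 24.0), (500, 41.0), (1200, 80.0)]
a, b = fit_power_cer([w for w, _ in hist], [c for _, c in hist])

# Estimate the cost of a hypothetical new 800 kg system from the CER
estimate = a * 800 ** b
```

The exponent b < 1 recovered here expresses the economy of scale such power-law CERs are typically used to capture.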
Update on the Electron Source Model
Cowee, Misa; Winske, Dan
2012-07-17
We summarize work done in FY12 on the Los Alamos Electron Source Model (ESM), which predicts the distribution of beta-decay electrons after a high altitude nuclear explosion (HANE) as a function of L, energy, and pitch angle. In the last year we have compared model results with data taken after the Russian 2 HANE test of 1962 and presented results at the HEART conference. We discuss our future plans to continue comparison with HANE data and to develop the code to allow a more complex set of initial conditions.
Parametric nanomechanical amplification at very high frequency.
Karabalin, R B; Feng, X L; Roukes, M L
2009-09-01
Parametric resonance and amplification are important in both fundamental physics and technological applications. Here we report very high frequency (VHF) parametric resonators and mechanical-domain amplifiers based on nanoelectromechanical systems (NEMS). Compound mechanical nanostructures patterned by multilayer, top-down nanofabrication are read out by a novel scheme that parametrically modulates longitudinal stress in doubly clamped beam NEMS resonators. Parametric pumping and signal amplification are demonstrated for VHF resonators up to approximately 130 MHz and provide useful enhancement of both resonance signal amplitude and quality factor. We find that Joule heating and reduced thermal conductance in these nanostructures ultimately impose an upper limit to device performance. We develop a theoretical model to account for both the parametric response and nonequilibrium thermal transport in these composite nanostructures. The results closely conform to our experimental observations, elucidate the frequency and threshold-voltage scaling in parametric VHF NEMS resonators and sensors, and establish the ultimate sensitivity limits of this approach. PMID:19736969
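The essence of parametric pumping in a resonator can be seen in a damped Mathieu oscillator: modulating the stiffness at twice the natural frequency amplifies the motion once the pump depth exceeds a damping-set threshold. This is a generic textbook sketch with arbitrary parameter values, not a model of the NEMS devices or the thermal effects reported in the paper.

```python
import math

def peak_response(eps, gamma=0.01, w0=1.0, x0=1e-3, t_end=100.0, dt=0.005):
    """Integrate x'' + 2*gamma*x' + w0^2*(1 + eps*cos(2*w0*t))*x = 0
    with classical RK4 and return the peak |x| over the run."""
    def acc(t, x, v):
        return -2 * gamma * v - w0 * w0 * (1 + eps * math.cos(2 * w0 * t)) * x
    x, v, t = x0, 0.0, 0.0
    peak = abs(x)
    for _ in range(int(t_end / dt)):
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + 0.5 * dt * k1v, acc(t + 0.5 * dt, x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, acc(t + 0.5 * dt, x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, acc(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        peak = max(peak, abs(x))
    return peak

# Pump depth eps = 0.2 is far above the threshold ~4*gamma/w0 = 0.04,
# so the pumped motion grows; with no pump (eps = 0) it simply decays.
```

The approximate instability condition eps > 4*gamma/w0 is the small-pump averaged result for the first Mathieu tongue; real devices add nonlinearity and, as the paper shows, thermal limits.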
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-01
A recent series of papers by Charles T. Perretti and collaborators has shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled dynamics, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed with simple Bayesian model-checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts of unknown reliability. This argument is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting.
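The theta-logistic workflow referred to above (simulate, fit, forecast) can be sketched as follows. This is a deliberately crude stand-in: a grid-search least-squares fit replaces the Bayesian inference and model-checking the paper advocates, and all parameter values are invented for illustration.

```python
import math, random

def step(n, r, K, theta):
    """One deterministic step of the theta-logistic model."""
    return n * math.exp(r * (1.0 - (n / K) ** theta))

def simulate(n0, r, K, theta, T, sigma, seed=0):
    """Simulate T steps with lognormal process noise of log-sd `sigma`."""
    rng = random.Random(seed)
    xs = [n0]
    for _ in range(T):
        xs.append(step(xs[-1], r, K, theta) * math.exp(rng.gauss(0.0, sigma)))
    return xs

def fit(series, theta=1.0):
    """Grid-search (r, K) minimising one-step-ahead squared log error --
    a crude stand-in for likelihood-based parametric inference."""
    best = None
    for ri in range(5, 151, 5):
        r = ri / 100.0
        for K in range(50, 201, 5):
            sse = sum((math.log(series[t + 1])
                       - math.log(step(series[t], r, K, theta))) ** 2
                      for t in range(len(series) - 1))
            if best is None or sse < best[0]:
                best = (sse, r, K)
    return best[1], best[2]

obs = simulate(10.0, r=0.5, K=100.0, theta=1.0, T=50, sigma=0.02)
r_hat, K_hat = fit(obs)
forecast = step(obs[-1], r_hat, K_hat, 1.0)  # one-step-ahead forecast
```

Because the fitted model matches the data-generating process, the forecast is reliable here; the paper's point is precisely that this match must be diagnosed rather than assumed.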
Häme, Yrjö; Pollari, Mika
2012-01-01
A novel liver tumor segmentation method for CT images is presented. The aim of this work was to reduce the manual labor and time required in the treatment planning of radiofrequency ablation (RFA) by reliably providing accurate, automated tumor segmentations. The developed method is semi-automatic, requiring only minimal user interaction. The segmentation is based on non-parametric intensity distribution estimation and a hidden Markov measure field model, with application of a spherical shape prior. A post-processing operation is also presented to remove the overflow to adjacent tissue. In addition to the conventional approach of using a single image as input data, an approach using images from multiple contrast phases was developed. The accuracy of the method was validated with two sets of patient data and artificially generated samples. The patient data included preoperative RFA images and a public data set from the "3D Liver Tumor Segmentation Challenge 2008". The method achieved very high accuracy with the RFA data, and outperformed other methods evaluated with the public data set, receiving an average overlap error of 30.3%, an improvement of 2.3 percentage points over the previously best-performing semi-automatic method. The average volume difference was 23.5%, and the average, RMS, and maximum surface distance errors were 1.87, 2.43, and 8.09 mm, respectively. The method produced good results even for tumors with very low contrast and ambiguous borders, and the performance remained high with noisy image data.
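The non-parametric intensity estimation underlying the method can be illustrated in one dimension with a kernel density estimate per tissue class and a likelihood-ratio decision per pixel. The hidden Markov measure field and spherical shape prior of the actual method are omitted, and the intensity samples and bandwidth below are invented for illustration.

```python
import math

def kde(samples, h):
    """1-D Gaussian kernel density estimate with bandwidth h."""
    c = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    def pdf(x):
        return c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return pdf

# Hypothetical intensity samples drawn from user-seeded regions
tumor_pdf = kde([40, 45, 50, 48, 42], h=4.0)
liver_pdf = kde([90, 95, 100, 98, 104], h=4.0)

def classify(x):
    """Assign a pixel intensity to the class with the higher estimated density."""
    return "tumor" if tumor_pdf(x) > liver_pdf(x) else "liver"
```

In the full method this per-pixel likelihood is regularized spatially by the measure field model rather than applied independently as here.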
Probing the matter and dark energy sources in a viable Big Rip model of the Universe
NASA Astrophysics Data System (ADS)
Kumar, Suresh
2014-08-01
The Chevallier-Polarski-Linder (CPL) parametrization for the equation of state (EoS) of dark energy in terms of cosmic redshift or scale factor has been frequently studied in the literature. In this study, we consider a cosmic time-based CPL parametrization for the EoS parameter of the effective cosmic fluid that fills the fabric of spatially flat and homogeneous Robertson-Walker (RW) spacetime in General Relativity. The model exhibits two noteworthy features: (i) it fits the observational data from the latest H(z) and Union 2.1 SN Ia compilations, matching the success of the ΛCDM model; (ii) it describes the evolution of the Universe from the matter-dominated phase to the recent accelerating phase similarly to the ΛCDM model, but leads to a Big Rip end of the Universe, contrary to the everlasting de Sitter expansion in the ΛCDM model. We investigate the matter and dark energy sources in the model, in particular the behavior of the dynamical dark energy responsible for the Big Rip end of the Universe.
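The standard CPL form on which such studies build writes the EoS as w(a) = w0 + wa*(1 - a), for which the dark-energy density in a flat RW background evolves analytically as rho_DE(a)/rho_DE(1) = a^(-3*(1+w0+wa)) * exp(-3*wa*(1-a)). The sketch below implements this standard scale-factor form, not the authors' cosmic time-based variant; it also shows how a phantom EoS (w < -1) makes the density grow toward a Big Rip.

```python
import math

def de_density_ratio(a, w0, wa):
    """rho_DE(a)/rho_DE(1) for CPL w(a) = w0 + wa*(1 - a) in a flat RW model."""
    return a ** (-3 * (1 + w0 + wa)) * math.exp(-3 * wa * (1 - a))

def hubble_ratio(z, om=0.3, w0=-1.0, wa=0.0):
    """E(z) = H(z)/H0 for flat matter plus CPL dark energy."""
    a = 1.0 / (1.0 + z)
    return math.sqrt(om * (1 + z) ** 3 + (1 - om) * de_density_ratio(a, w0, wa))
```

Setting (w0, wa) = (-1, 0) recovers a constant dark-energy density (ΛCDM); w0 < -1 makes de_density_ratio increase for a > 1, i.e. into the future, which is the Big Rip behavior the abstract contrasts with de Sitter expansion.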
Light source modeling for automotive lighting devices
NASA Astrophysics Data System (ADS)
Zerhau-Dreihoefer, Harald; Haack, Uwe; Weber, Thomas; Wendt, Dierk
2002-08-01
Automotive lighting devices generally have to meet high standards. For example, to avoid discomfort glare for the oncoming traffic, the luminous intensities of a low-beam headlight must decrease by more than one order of magnitude within a fraction of a degree along the horizontal cutoff line. At the same time, a comfortable, homogeneous illumination of the road requires slowly varying luminous intensities below the cutoff line. All this has to be realized taking into account both the legal requirements and the customer's stylistic specifications. In order to simulate and optimize devices with good optical performance, different light source models are required. In the early stages of, e.g., reflector development, simple unstructured models allow very fast development of the reflector's shape. On the other hand, the final simulation of a complex headlamp or signal light requires a sophisticated model of the spectral luminance. In addition to theoretical models based on the light source's geometry, measured luminance data can also be used in the simulation and optimization process.
Markov source model for printed music decoding
NASA Astrophysics Data System (ADS)
Kopec, Gary E.; Chou, Philip A.; Maltz, David A.
1995-03-01
This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
Source term evaluation for combustion modeling
NASA Technical Reports Server (NTRS)
Sussman, Myles A.
1993-01-01
A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method with this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.
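One way to see why a source-term modification helps in exponential-growth regions: explicit Euler badly underestimates the scalar model problem du/dt = k*u over a step, while rescaling the source by (exp(k*dt) - 1)/(k*dt) makes each step exact for that problem. This is a generic illustration of the idea, not the paper's specific modification.

```python
import math

def integrate(u0, k, dt, n, correction=False):
    """Integrate du/dt = k*u with explicit Euler; optionally rescale the
    source term by (exp(k*dt) - 1)/(k*dt), which makes each step exact
    for this scalar model problem."""
    u = u0
    for _ in range(n):
        s = k * u
        if correction:
            s *= (math.exp(k * dt) - 1.0) / (k * dt)
        u += dt * s
    return u

exact = 1.0 * math.exp(5.0 * 1.0)              # u0 = 1, k = 5, t = 1
plain = integrate(1.0, 5.0, 0.1, 10)           # plain explicit Euler
fixed = integrate(1.0, 5.0, 0.1, 10, correction=True)
```

With k*dt = 0.5 the uncorrected scheme captures barely 40% of the exact growth after ten steps, which is exactly the kind of error that matters during chain-branching induction.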
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, as well as programs that use features (such as pointers, structures, and object orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
A Black-box Modelling Engine for Discharge Produced Plasma Radiation Sources
NASA Astrophysics Data System (ADS)
Zakharov, S. V.; Choi, P.; Krukovskiy, A. Y.; Novikov, V. G.; Zakharov, V. S.; Zhang, Q.
2006-01-01
The Black-box Modelling Engine (BME) is an instrument based on an adaptation of the RMHD code Z*, integrated into a specific computation environment to provide a turnkey simulation instrument and to enable routine plasma modelling without specialist knowledge of numerical computation. Two operating modes are provided: a Detailed Physics mode and a Fast Numerics mode. In the Detailed Physics mode, non-stationary, non-equilibrium radiation physics has been introduced to allow the modelling of transient plasmas in experimental geometry. In the Fast Numerics mode, the system architecture and the radiation transport are simplified to significantly accelerate the computation rate. The Fast Numerics mode allows the BME to be used realistically in parametric scanning to explore a complex physical set-up before using the Detailed Physics mode. As an example of results from BME modelling, the EUV source plasma dynamics in a pulsed capillary discharge are presented.
A uniform parametrization of moment tensors
NASA Astrophysics Data System (ADS)
Tape, Walter; Tape, Carl
2015-09-01
A moment tensor is a 3 × 3 symmetric matrix that expresses an earthquake source. We construct a parametrization of the 5-D space of all moment tensors of unit norm. The coordinates associated with the parametrization are closely related to moment tensor orientations and source types. The parametrization is uniform, in the sense that equal volumes in the coordinate domain of the parametrization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favour double couples.
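Uniformly distributed unit-norm moment tensors, as described above, can be drawn by normalizing an isotropic Gaussian vector in an orthonormal basis of the 6-D space of symmetric 3 × 3 matrices (the off-diagonal basis elements carry a factor 1/sqrt(2) so the Euclidean norm of the coefficient vector equals the Frobenius norm of the matrix). This reproduces the uniform distribution on the unit sphere, but it is not the explicit coordinate parametrization constructed by the authors.

```python
import math, random

def random_unit_moment_tensor(rng=None):
    """Draw a moment tensor uniformly from the unit sphere (Frobenius
    norm 1) in the 6-D space of symmetric 3x3 matrices: sample an
    isotropic Gaussian 6-vector in an orthonormal basis and normalise."""
    rng = rng or random.Random()
    g = [rng.gauss(0.0, 1.0) for _ in range(6)]
    norm = math.sqrt(sum(v * v for v in g))
    d1, d2, d3, o1, o2, o3 = (v / norm for v in g)
    s = 1.0 / math.sqrt(2.0)   # off-diagonal entries enter the norm twice
    return [[d1,     o1 * s, o2 * s],
            [o1 * s, d2,     o3 * s],
            [o2 * s, o3 * s, d3]]
```

By the rotational invariance of the isotropic Gaussian, the normalized coefficient vector is uniform on the 5-sphere, which is exactly the "equal volumes" property the paper's coordinates are designed to provide in closed form.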
Numerical models of extragalactic radio sources
NASA Technical Reports Server (NTRS)
Burns, Jack O.; Norman, Michael L.; Clarke, David A.
1991-01-01
When supercomputer-implemented numerical simulations analyzing the nonlinear physics inherent in the hydrodynamic and MHD equations are applied to extragalactic radio sources, many of the complex structures observed on telescopic images are reproduced. Attention is presently given to recently obtained results from 2D and 3D numerical simulations of the formation and evolution of extended radio morphologies; these numerical models allow the exploration of such physical phenomena as the role of magnetic fields in the dynamics and emissivity of extended radio galaxies, intermittent outflow from the cores of active galaxies, fluid-jet instabilities, and the bending of collimated outflows by motion through the intergalactic medium.
The Open Source Snowpack modelling ecosystem
NASA Astrophysics Data System (ADS)
Bavay, Mathias; Fierz, Charles; Egger, Thomas; Lehning, Michael
2016-04-01
As a large number of numerical snow models are available, a few stand out as quite mature and widespread. One such model is SNOWPACK, the Open Source model developed at the WSL Institute for Snow and Avalanche Research SLF. Over the years, various tools have been developed around SNOWPACK in order to expand its use or to integrate additional features. Today, the model is part of a whole ecosystem that has evolved to both offer seamless integration and high modularity, so each tool can easily be used outside the ecosystem. Many of these Open Source tools experience their own, autonomous development and are successfully used in their own right in other models and applications. There is Alpine3D, the spatially distributed version of SNOWPACK, that forces it with terrain-corrected radiation fields and optionally with blowing and drifting snow. This model can be used on parallel systems (either with OpenMP or MPI) and has been used for applications ranging from climate change to reindeer herding. There is the MeteoIO pre-processing library that offers fully integrated data access, data filtering, data correction, data resampling and spatial interpolations. This library is now used by several other models and applications. There is the SnopViz snow profile visualization library and application that supports both measured and simulated snow profiles (relying on the CAAML standard) as well as time series. This JavaScript application can be used standalone without any internet connection or served on the web together with simulation results. There is the OSPER data platform effort with a data management service (built on the Global Sensor Network (GSN) platform) as well as a data documenting system (metadata management as a wiki). There are several distributed hydrological models for mountainous areas in ongoing development that require very little information about the soil structure, based on the assumption that in steep terrain, the most relevant information is
Asteroid Models from Multiple Data Sources
NASA Astrophysics Data System (ADS)
Durech, J.; Carry, B.; Delbo, M.; Kaasalainen, M.; Viikinkoski, M.
In the past decade, hundreds of asteroid shape models have been derived using the lightcurve inversion method. At the same time, a new framework of three-dimensional shape modeling based on the combined analysis of widely different data sources -- such as optical lightcurves, disk-resolved images, stellar occultation timings, mid-infrared thermal radiometry, optical interferometry, and radar delay-Doppler data -- has been developed. This multi-data approach allows the determination of most of the physical and surface properties of asteroids in a single, coherent inversion, with spectacular results. We review the main results of asteroid lightcurve inversion and also recent advances in multi-data modeling. We show that models based on remote sensing data were confirmed by spacecraft encounters with asteroids, and we discuss how the proliferation of highly detailed three-dimensional models will help to refine our general knowledge of the asteroid population. The physical and surface properties of asteroids, i.e., their spin, three-dimensional shape, density, thermal inertia, and surface roughness, are among the least known of all asteroid properties. Apart from the albedo and diameter, we have access to the whole picture for only a few hundred asteroids. These quantities are nevertheless very important to understand, as they affect the nongravitational Yarkovsky effect responsible for meteorite delivery to Earth, as well as the bulk composition and internal structure of asteroids.
NASA Astrophysics Data System (ADS)
Yoshida, Takato O.; Matsuzawa, Eiji; Matsuo, Tetsumichi; Koide, Yukio; Terakawa, Susumu; Yokokura, Teruo; Hirano, Toru
1995-03-01
A new cancer-treatment model, photodynamic therapy (PDT) combined with a type I topoisomerase inhibitor, the camptothecin derivative CPT-11, against HeLa cell tumors in BALB/c nude mice has been developed using a wide-band tunable coherent light source based on optical parametric oscillation (an OPO tunable laser). The Photosan-3 PDT and CPT-11 combined therapy was remarkably effective in vivo, with an inhibition rate (I.R.) of 40-80% compared to PDT alone. The effects of HpD (Photosan-3) and CPT-11 on cultured HeLa cells in vitro were studied with a video-enhanced contrast differential interference contrast microscope (VEC-DIC). Photosan-3 with 600 nm light killed cells by mitochondrial damage within 50 min, but not with 700 nm light. CPT-11 with 700-400 nm light killed cells within 50 min, with nucleolar damage appearing after around 30 min. The localization of CPT-11 in cells was observed as fluorescence images in the nucleus, particularly in the nucleolar area, which produced clear images using an Argus 100.
ENKI - An Open Source environmental modelling platform
NASA Astrophysics Data System (ADS)
Kolberg, S.; Bruland, O.
2012-04-01
The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user-specified subroutines into a complete simulation model and provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration and uncertainty estimation, etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines built as dynamic-link libraries (DLLs). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, time series exist for input variables, states are initialised, GIS data sets exist for static map data, manually or automatically calibrated values exist for parameters, etc. By using function calls and memory data structures to invoke routines and facilitate information flow, ENKI provides good performance. For a typical distributed hydrological model setup in a spatial domain of 25000 grid cells, 3-4 time steps simulated per second should be expected. Future adaptation to parallel processing may further increase this speed. New modifications to ENKI include a full separation of API and user interface
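The framework-routine pattern described above (a registry of plug-in routines wired together over shared named variables) can be sketched in a few lines. This is a Python stand-in for the idea only, since ENKI itself loads separately compiled DLLs; the routine names, the degree-day factor, and the recession coefficient are all invented for illustration.

```python
REGISTRY = {}

def register(cls):
    """Framework-side plug-in registry (stand-in for ENKI's DLL loading)."""
    REGISTRY[cls.name] = cls
    return cls

@register
class DegreeDayMelt:
    name = "melt"
    DDF = 3.0  # hypothetical degree-day factor, mm / (degC * day)
    def run(self, s):
        m = min(s["swe"], max(0.0, self.DDF * s["temp"]))
        s["melt"] = m
        s["swe"] -= m

@register
class LinearReservoir:
    name = "runoff"
    K = 0.2    # hypothetical recession coefficient, per day
    def run(self, s):
        s["storage"] += s["melt"]
        s["runoff"] = self.K * s["storage"]
        s["storage"] -= s["runoff"]

def run_composition(composition, forcing, state):
    """Run a user-chosen chain of registered routines over a forcing
    series, passing all variables through a shared named state."""
    out = []
    for temp in forcing:
        state["temp"] = temp
        for name in composition:
            REGISTRY[name]().run(state)
        out.append(state["runoff"])
    return out
```

Swapping a routine for an alternative implementation only requires registering another class under a new name and naming it in the composition, which is the exchange-and-test workflow the abstract describes.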
Self-seeding ring optical parametric oscillator
Smith, Arlee V.; Armstrong, Darrell J.
2005-12-27
An optical parametric oscillator apparatus utilizing self-seeding with an external nanosecond-duration pump source to generate a seed pulse, resulting in increased conversion efficiency. An optical parametric oscillator with a ring configuration is combined with a pump that injection-seeds the oscillator with a nanosecond-duration, mJ pulse travelling in the direction opposite to the main pulse. A retroreflecting means outside the cavity injects the seed pulse back into the cavity in the direction of the main pulse to seed the main pulse, resulting in higher conversion efficiency.
Wareham, Alice; Lewandowski, Kuiama S.; Williams, Ann; Dennis, Michael J.; Sharpe, Sally; Vipond, Richard; Silman, Nigel; Ball, Graham
2016-01-01
A temporal study of gene expression in peripheral blood leukocytes (PBLs) from a Mycobacterium tuberculosis primary, pulmonary challenge model in Macaca fascicularis has been conducted. PBL samples were taken prior to challenge and at one, two, four and six weeks post-challenge, and labelled, purified RNAs were hybridised to Operon Human Genome AROS V4.0 slides. Data analyses revealed a large number of differentially regulated gene entities, which exhibited temporal profiles of expression across the time course study. Further data refinements identified groups of key markers showing group-specific expression patterns, with a substantial reprogramming event evident at the four to six week interval. Selected statistically significant gene entities from this study and other immune and apoptotic markers were validated using qPCR, which confirmed many of the results obtained using microarray hybridisation. These showed evidence of a step-change in gene expression from an ‘early’ FOS-associated response to a ‘late’, predominantly type I interferon-driven response, with coincident reduction of expression of other markers. Loss of T-cell-associated marker expression was observed in responsive animals, with concordant elevation of markers which may be associated with a myeloid suppressor cell phenotype, e.g. CD163. The animals in the study were of different lineages, and these Chinese and Mauritian cynomolgus macaque lines showed clear evidence of differing susceptibilities to tuberculosis challenge. We determined a number of key differences in response profiles between the groups, particularly in expression of T-cell and apoptotic markers, amongst others. These have provided interesting insights into innate susceptibility related to different host phenotypes. Using a combination of parametric and non-parametric artificial neural network analyses we have identified key genes and regulatory pathways which may be important in early and adaptive responses to TB. Using comparisons
Seo, Seongho; Kim, Su J; Kim, Yu K; Lee, Jee-Young; Jeong, Jae M; Lee, Dong S; Lee, Jae S
2015-12-01
In recent years, several linearized model approaches for fast and reliable parametric neuroreceptor mapping based on dynamic nuclear imaging have been developed from the simplified reference tissue model (SRTM) equation. All the methods share the basic SRTM assumptions, but use different schemes to alleviate the effect of noise in dynamic-image voxels. Thus, this study aimed to compare those approaches in terms of their performance in parametric image generation. We used the basis function method and MRTM2 (multilinear reference tissue model with two parameters), which require a division process to obtain the distribution volume ratio (DVR). In addition, a linear model with the DVR as a model parameter (multilinear SRTM) was used in two forms: one based on linear least squares and the other based on an extension of total least squares (TLS). Assessment using simulated and actual dynamic [(11)C]ABP688 positron emission tomography data revealed their equivalence with the SRTM, except for different noise susceptibilities. In the DVR image production, the two multilinear SRTM approaches achieved better image quality and regional compatibility with the SRTM than the others, with slightly better performance in the TLS-based method. PMID:26243707
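The ordinary-versus-total least squares contrast in the abstract above can be illustrated with a generic errors-in-variables toy problem (hypothetical data, not the pharmacokinetic model itself): when the regressors are themselves noisy, as with noisy dynamic-image voxels, ordinary least squares is biased toward zero, while total least squares stays close to the true slope.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: both x and y carry equal-variance noise.
n, true_slope = 2000, 2.0
x_true = rng.uniform(0, 10, n)
x = x_true + rng.normal(0.0, 1.0, n)          # noise in the regressor
y = true_slope * x_true + rng.normal(0.0, 1.0, n)

# OLS slope through centered data: attenuated by regressor noise.
xc, yc = x - x.mean(), y - y.mean()
slope_ols = (xc @ yc) / (xc @ xc)

# TLS slope: direction orthogonal to the smallest right singular
# vector of the centered data matrix (orthogonal regression).
_, _, Vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
v = Vt[-1]
slope_tls = -v[0] / v[1]

print(slope_ols, slope_tls)   # OLS below 2.0; TLS near 2.0
```

TLS is the consistent estimator here only because the noise variances in the two coordinates are equal; unequal variances require a weighted (Deming-style) variant.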
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. © 2008, The International Biometric Society.
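The "natural clustering" a Dirichlet process prior induces can be sketched via its partition distribution, the Chinese restaurant process. This is an illustrative draw from the prior (hypothetical `alpha`, not the authors' fitted model): each site joins an existing abundance cluster in proportion to its size, or opens a new one with probability proportional to `alpha`.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw a random partition of n sites from the Chinese restaurant
    process, the clustering implied by a Dirichlet process prior."""
    rng = random.Random(seed)
    counts = []        # sites per cluster ("customers per table")
    assignment = []    # cluster label of each site
    for i in range(n):
        # New cluster with prob alpha/(i+alpha), else an existing
        # cluster chosen in proportion to its current size.
        r = rng.uniform(0, i + alpha)
        if r < alpha:
            assignment.append(len(counts))
            counts.append(1)
        else:
            acc = alpha
            for k, c in enumerate(counts):
                acc += c
                if r < acc:
                    assignment.append(k)
                    counts[k] += 1
                    break
    return assignment, counts

labels, sizes = crp_partition(50, alpha=2.0)
print(len(sizes), sum(sizes))   # number of clusters drawn, total sites (= 50)
```

In the full model each cluster would additionally carry a latent abundance value drawn from the base distribution; the sketch shows only the partition mechanism.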
Soulami, Ayoub; Lavender, Curt A.; Paxton, Dean M.; Burkes, Douglas
2015-06-15
Pacific Northwest National Laboratory (PNNL) has been investigating manufacturing processes for the uranium-10% molybdenum alloy plate-type fuel for high-performance research reactors in the United States. This work supports the U.S. Department of Energy National Nuclear Security Administration’s Office of Material Management and Minimization Reactor Conversion Program. This report documents modeling results of PNNL’s efforts to perform finite-element simulations to predict roll-separating forces for various rolling mill geometries for PNNL, Babcock & Wilcox Co., Y-12 National Security Complex, Los Alamos National Laboratory, and Idaho National Laboratory. The model developed and presented in a previous report has been subjected to further validation study using new sets of experimental data generated from a rolling mill at PNNL. Simulation results of both hot rolling and cold rolling of uranium-10% molybdenum coupons have been compared with experimental results. The model was used to predict roll-separating forces at different temperatures and reductions for five rolling mills within the National Nuclear Security Administration Fuel Fabrication Capability project. This report also presents initial results of a finite-element model microstructure-based approach to study the surface roughness at the interface between zirconium and uranium-10% molybdenum.
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentration to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells, the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective to provide support in
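The core null-space Monte Carlo step can be sketched on a toy linearized model (hypothetical Jacobian, not the site's hydrological model): with more parameters than observations, calibration leaves a null space, and perturbing the calibrated parameters along it yields parameter sets that reproduce the calibration fit to first order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized model: 3 observations, 5 parameters,
# so calibration leaves a 2-dimensional null space unconstrained.
J = rng.standard_normal((3, 5))      # Jacobian at the calibrated optimum
p_cal = rng.standard_normal(5)       # calibrated parameter vector

# Orthonormal basis of the Jacobian's null space via SVD.
_, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10))
V_null = Vt[rank:].T                 # columns span null(J)

# Null-space Monte Carlo: random perturbations that leave the
# simulated observations unchanged to first order.
samples = [p_cal + V_null @ rng.standard_normal(V_null.shape[1])
           for _ in range(1000)]

# Each sample reproduces the calibration fit to first order:
residual = max(np.linalg.norm(J @ (p - p_cal)) for p in samples)
print(residual)   # ~0, at machine precision
```

In practice (e.g. with PEST-style tools) each perturbed set is re-checked against the calibration objective and re-calibrated if needed, since the real model is nonlinear; the sketch shows only the linear sampling step.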
Optimal Parametric Feedback Excitation of Nonlinear Oscillators
NASA Astrophysics Data System (ADS)
Braun, David J.
2016-01-01
An optimal parametric feedback excitation principle is sought, found, and investigated. The principle is shown to provide an adaptive resonance condition that enables unprecedentedly robust movement generation in a large class of oscillatory dynamical systems. Experimental demonstration of the theory is provided by a nonlinear electronic circuit that realizes self-adaptive parametric excitation without model information, signal processing, and control computation. The observed behavior dramatically differs from the one achievable using classical parametric modulation, which is fundamentally limited by uncertainties in model information and nonlinear effects inevitably present in real world applications.
An open source business model for malaria.
Årdal, Christine; Røttingen, John-Arne
2015-01-01
Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assessed research articles, patents, and clinical trials, and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden-away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach be taken by making the entire value chain more efficient through greater transparency, which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S.' President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria
Constraining Emission Models of Luminous Blazar Sources
Sikora, Marek; Stawarz, Lukasz; Moderski, Rafal; Nalewajko, Krzysztof; Madejski, Greg; /KIPAC, Menlo Park /SLAC
2009-10-30
Many luminous blazars, which are associated with quasar-type active galactic nuclei, display broad-band spectra characterized by a large luminosity ratio of their high-energy ({gamma}-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also typically have very hard X-ray spectra, which in turn seem to challenge hadronic scenarios for the high-energy blazar emission. As shown in this paper, no such problems are faced by the models which involve Comptonization of radiation provided by a broad-line region or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of {gamma}-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars to be days/weeks, consistent with the variability patterns observed in such systems at infrared, optical and {gamma}-ray frequencies. We also propose that the parsec-scale blazar activity can be occasionally accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.
Combining sources in stable isotope mixing models: alternative methods.
Phillips, Donald L; Newsome, Seth D; Gregg, Jillian W
2005-08-01
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many sources to allow a unique solution. We discuss two alternative procedures for addressing this problem. One option is a priori to combine sources with similar signatures so the number of sources is small enough to provide a unique solution. Aggregation should be considered only when isotopic signatures of clustered sources are not significantly different, and sources are related so the combined source group has some functional significance. For example, in a food web analysis, lumping several species within a trophic guild allows more interpretable results than lumping disparate food sources, even if they have similar isotopic signatures. One result of combining mixing model sources is increased uncertainty of the combined end-member isotopic signatures and consequently the source contribution estimates; this effect can be quantified using the IsoError model (http://www.epa.gov/wed/pages/models/isotopes/isoerror1_04.htm). As an alternative to lumping sources before a mixing analysis, the IsoSource mixing model (http://www.epa.gov/wed/pages/models/isosource/isosource.htm) can be used to find all feasible solutions of source contributions consistent with isotopic mass balance. While ranges of feasible contributions for each individual source can often be quite broad, contributions from functionally related groups of sources can be summed a posteriori, producing a range of solutions for the aggregate source that may be considerably narrower. A paleo-human dietary analysis example illustrates this method, which involves a terrestrial meat food source, a combination of three terrestrial plant foods, and a combination of three marine foods. In this case, a posteriori aggregation of sources allowed
NASA Technical Reports Server (NTRS)
Stewart, R. B.; Grose, W. L.
1975-01-01
Parametric studies were made with a multilayer atmospheric diffusion model to place quantitative limits on the uncertainty of predicting ground-level toxic rocket-fuel concentrations. Exhaust distributions in the ground cloud, cloud stabilized geometry, atmospheric coefficients, the effects of exhaust plume afterburning of carbon monoxide (CO), assumed surface mixing-layer division in the model, and model sensitivity to different meteorological regimes were studied. Large-scale differences in ground-level predictions are quantitatively described. Cloud alongwind growth for several meteorological conditions is shown to be in error because of incorrect application of previous diffusion theory. In addition, rocket-plume calculations indicate that almost all of the rocket-motor carbon monoxide is afterburned to carbon dioxide (CO2), thus reducing toxic hazards due to CO. The afterburning is also shown to have a significant effect on cloud stabilization height and on ground-level concentrations of exhaust products.
NASA Astrophysics Data System (ADS)
Zhao, Yingru; Chen, Jincan
A theoretical modeling approach is presented, which describes the behavior of a typical fuel cell-heat engine hybrid system under steady-state operating conditions based on an existing solid oxide fuel cell model, to provide useful fundamental design characteristics as well as potential critical problems. The different sources of irreversible losses, such as the electrochemical reaction, electric resistances, finite-rate heat transfer between the fuel cell and the heat engine, and heat leak from the fuel cell to the environment, are specified and investigated. Energy and entropy analyses are used to indicate the multiple irreversible losses and to assess the work potentials of the hybrid system. Expressions for the power output and efficiency of the hybrid system are derived, and the performance characteristics of the system are presented and discussed in detail. The effects of the design parameters and operating conditions on the system performance are studied numerically. It is found that there exist certain optimum criteria for some important parameters. The results obtained here may provide a theoretical basis for both the optimal design and operation of real fuel cell-heat engine hybrid systems. This new approach can be easily extended to other fuel cell hybrid systems to develop irreversible models suitable for the investigation and optimization of similar energy conversion settings and electrochemistry systems.
Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K
2015-10-15
We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use.
NASA Astrophysics Data System (ADS)
Delogu, A.; Furini, F.
1991-09-01
Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphical techniques for calculating scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency, and of achieving RCS control with modest structural changes, are becoming of paramount importance in stealth design. A computer code evaluating the RCS of arbitrarily shaped metallic objects that are computer-aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
Tong, Zhi; Bogris, Adonis; Lundström, Carl; McKinstrie, C J; Vasilyev, Michael; Karlsson, Magnus; Andrekson, Peter A
2010-07-01
Semi-classical noise characteristics are derived for the cascade of a non-degenerate phase-insensitive (PI) and a phase-sensitive (PS) fiber optical parametric amplifier (FOPA). The analysis is shown to be consistent with the quantum theory under the large-photon-number assumption. Based on this, we show that the noise figure (NF) of the PS-FOPA at the second stage can be obtained via a relative-intensity-noise (RIN) subtraction method after averaging the signal and idler NFs. Negative signal and idler NFs are measured, and <2 dB NF at >16 dB PS gain is estimated when considering the combined signal and idler input, which is believed to be the lowest measured NF of a non-degenerate PS amplifier to date. The limitation of the RIN subtraction method attributed to pump-transferred noise and Raman phonon induced noise is also discussed.
NASA Technical Reports Server (NTRS)
Brown, James L.
2014-01-01
Examined is the sensitivity of separation extent, wall pressure, and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and axisymmetric hypersonic shock-wave/turbulent-boundary-layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in the predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence modeling approaches.
Extended source model for diffusive coupling.
González-Ochoa, Héctor O; Flores-Moreno, Roberto; Reyes, Luz M; Femat, Ricardo
2016-01-01
Motivated by the prevailing approach to diffusion coupling phenomena, which considers point-like diffusing sources, we derived an analogous expression for the concentration rate of change of diffusively coupled extended containers. The proposed equation, together with expressions based on solutions to the diffusion equation, is intended to be applied to the numerical solution of systems exclusively composed of ordinary differential equations; it nevertheless accounts for effects due to the finite size of the coupled sources.
NASA Astrophysics Data System (ADS)
Li, Yun-He; Zhang, Jing-Fei; Zhang, Xin
2014-12-01
Dark energy can modify the dynamics of dark matter if there exists a direct interaction between them. Thus, a measurement of the structure growth, e.g., redshift-space distortions (RSDs), can provide a powerful tool to constrain the interacting dark energy (IDE) models. For the widely studied Q = 3βHρ_de model, previous works showed that only a very small coupling [β ~ O(10^-3)] can survive in current RSD data. However, all of these analyses had to assume w > -1 and β > 0 due to the existence of the large-scale instability in the IDE scenario. In our recent work [Phys. Rev. D 90, 063005 (2014)], we successfully solved this large-scale instability problem by establishing a parametrized post-Friedmann framework for the IDE scenario. So we, for the first time, have the ability to explore the full parameter space of the IDE models. In this work, we re-examine the observational constraints on the Q = 3βHρ_de model within the parametrized post-Friedmann framework. By using the Planck data, the baryon acoustic oscillation data, the JLA sample of supernovae, and the Hubble constant measurement, we get β = -0.010 +0.037/-0.033 (1σ). The fit result becomes β = -0.0148 +0.0100/-0.0089 (1σ) once we further incorporate the RSD data in the analysis. The error of β is substantially reduced with the help of the RSD data. Compared with the previous results, our results show that a negative β is favored by current observations, and a relatively larger interaction rate is permitted by current RSD data.
Stimulated Parametric Emission Microscope Systems
NASA Astrophysics Data System (ADS)
Itoh, Kazuyoshi; Isobe, Keisuke
2006-10-01
We present a novel microscopy technique based on the four-wave mixing (FWM) process that is enhanced by two-photon electronic resonance induced by a pump pulse, along with stimulated emission induced by a dump pulse. A Ti:sapphire laser and an optical parametric oscillator are used as light sources for the pump and dump pulses, respectively. We demonstrate that our FWM technique can be used to obtain two-dimensional microscopic images of an unstained leaf of Camellia sinensis and an unlabeled tobacco BY-2 cell.
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines, which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for the best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using a local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
Spectral brilliance of parametric X-rays at the FAST facility
Sen, Tanaji; Seiss, Todd
2015-06-22
We discuss the generation of parametric X-rays in the new photoinjector at the FAST (Fermilab Accelerator Science and Technology) facility at Fermilab. These experiments will be conducted in addition to channeling X-ray radiation experiments. The low-emittance electron beam makes this facility a promising source for creating brilliant X-rays. We discuss the theoretical model and present detailed calculations of the intensity spectrum, energy and angular widths, and spectral brilliance under different conditions. Furthermore, we report on expected results with parametric X-rays generated while under channeling conditions.
On enhancement of vibration-based energy harvesting by a random parametric excitation
NASA Astrophysics Data System (ADS)
Bobryk, Roman V.; Yurchenko, Daniil
2016-03-01
An electromechanical linear oscillator with a random ambient excitation and telegraphic noise parametric excitation is considered as an energy harvester model. It is shown that a parametric colored excitation can have a dramatic effect on the enhancement of the energy harvesting. A close relation with mean-square stability of the oscillator is established. Four sources of the ambient excitation are considered: the white noise, the Ornstein-Uhlenbeck noise, the harmonic noise and the periodic function. Analytical expressions for stationary electrical net mean power are presented for all the considered cases, confirming the proposed approach.
Parametric Mass Reliability Study
NASA Technical Reports Server (NTRS)
Holt, James P.
2014-01-01
The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass-such as computer housings, pump casings, and the silicon board of PCBs-typically are the most reliable. Meanwhile components that tend to fail the earliest-such as seals or gaskets-typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.
Discussion of Source Reconstruction Models Using 3D MCG Data
NASA Astrophysics Data System (ADS)
Melis, Massimo De; Uchikawa, Yoshinori
In this study we performed the source reconstruction of magnetocardiographic signals generated by the human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model has the better accuracy in performing the source reconstructions, and that 3D MCG data allow finding smaller differences between the different source models.
COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS
Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
A Simple Double-Source Model for Interference of Capillaries
ERIC Educational Resources Information Center
Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua
2012-01-01
A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An…
NASA Astrophysics Data System (ADS)
Baker, Kirk R.; Kelly, James T.
2014-10-01
Some sources may need to estimate ozone and secondarily formed PM2.5 as part of the permit application process under the Clean Air Act New Source Review program. Photochemical grid models represent state-of-the-science gas- and particle-phase chemistry and provide a realistic chemical and physical environment for assessing changes in air quality resulting from changes in emissions. When using these tools for single source impact assessments, it is important to differentiate a single source impact from other emissions sources and to understand how well contemporary grid model applications capture near-source transport and chemistry. Here for the first time, both source apportionment and source sensitivity approaches (brute-force changes and high-order direct decoupled method) are used in a photochemical grid model to isolate impacts of a specific facility. These single source impacts are compared with in-plume measurements made as part of a well-characterized 1999 TVA Cumberland aircraft plume transect field study. The techniques were able to isolate the impacts of the TVA plume in a manner consistent with observations. The model predicted in-plume concentrations well when the observations were averaged to the grid scale, although peak concentrations of primary pollutants were generally underestimated near the source, possibly due to dilution in the 4-km grid cell.
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Trott, C. M.; Hurley-Walker, N.; Johnston-Hollitt, M.; McKinley, B.; Barry, N.; Beardsley, A. P.; Bowman, J. D.; Briggs, F.; Carroll, P.; Dillon, J. S.; Ewall-Wice, A.; Feng, L.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Hewitt, J. N.; Jacobs, D. C.; Kim, H.-S.; Kittiwisit, P.; Lenc, E.; Line, J.; Loeb, A.; Mitchell, D. A.; Morales, M. F.; Neben, A. R.; Paul, S.; Pindor, B.; Pober, J. C.; Procopio, P.; Riding, J.; Sethi, S. K.; Shankar, N. U.; Subrahmanyan, R.; Sullivan, I. S.; Tegmark, M.; Thyagarajan, N.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Wyithe, J. S. B.
2016-05-01
Experiments that pursue detection of signals from the Epoch of Reionization (EoR) are relying on spectral smoothness of source spectra at low frequencies. This article empirically explores the effect of foreground spectra on EoR experiments by measuring high-resolution full-polarization spectra for the 586 brightest unresolved sources in one of the Murchison Widefield Array (MWA) EoR fields using 45 h of observation. A novel peeling scheme is used to subtract 2500 sources from the visibilities with ionospheric and beam corrections, resulting in the deepest, confusion-limited MWA image so far. The resulting spectra are found to be affected by instrumental effects, which limit the constraints that can be set on source-intrinsic spectral structure. The sensitivity and power-spectrum of the spectra are analysed, and it is found that the spectra of residuals are dominated by point spread function sidelobes from nearby undeconvolved sources. We release a catalogue describing the spectral parameters for each measured source.
Receptor modeling application framework for particle source apportionment.
Watson, John G; Zhu, Tan; Chow, Judith C; Engelbrecht, Johann; Fujita, Eric M; Wilson, William E
2002-12-01
Receptor models infer contributions from particulate matter (PM) source types using multivariate measurements of particle chemical and physical properties. Receptor models complement source models that estimate concentrations from emissions inventories and transport meteorology. Enrichment factor, chemical mass balance, multiple linear regression, eigenvector. edge detection, neural network, aerosol evolution, and aerosol equilibrium models have all been used to solve particulate air quality problems, and more than 500 citations of their theory and application document these uses. While elements, ions, and carbons were often used to apportion TSP, PM10, and PM2.5 among many source types, many of these components have been reduced in source emissions such that more complex measurements of carbon fractions, specific organic compounds, single particle characteristics, and isotopic abundances now need to be measured in source and receptor samples. Compliance monitoring networks are not usually designed to obtain data for the observables, locations, and time periods that allow receptor models to be applied. Measurements from existing networks can be used to form conceptual models that allow the needed monitoring network to be optimized. The framework for using receptor models to solve air quality problems consists of: (1) formulating a conceptual model; (2) identifying potential sources; (3) characterizing source emissions; (4) obtaining and analyzing ambient PM samples for major components and source markers; (5) confirming source types with multivariate receptor models; (6) quantifying source contributions with the chemical mass balance; (7) estimating profile changes and the limiting precursor gases for secondary aerosols; and (8) reconciling receptor modeling results with source models, emissions inventories, and receptor data analyses.
Data Series as a Source for Modelling
NASA Astrophysics Data System (ADS)
Bezruchko, Boris P.; Smirnov, Dmitry A.
When a model is constructed from "first principles", its variables inherit the sense implied in those principles which can be general laws or derived equations, e.g., like Kirchhoff's laws in the theory of electric circuits. When an empirical model is constructed from a time realisation, it is a separate task to reveal relationships between model parameters and object characteristics. It is not always possible to measure all variables entering model equations either in principle or due to technical reasons. So, one has to deal with available data and, probably, perform additional data transformations before constructing a model.
Validation of a rodent model of source memory.
Crystal, Jonathon D; Alford, Wesley T
2014-03-01
Source memory represents the origin (source) of information. Recently, we proposed that rats (Rattus norvegicus) remember the source of information. However, an alternative to source memory is the possibility that rats selectively encoded some, but not all, information rather than retrieving an episodic memory. We directly tested this 'encoding failure' hypothesis. Here, we show that rats remember the source of information, under conditions that cannot be attributed to encoding failure. Moreover, source memory lasted at least seven days but was no longer present 14 days after studying. Our findings suggest that long-lasting source memory may be modelled in non-humans. Our model should facilitate attempts to elucidate the biological underpinnings of source memory impairments in human memory disorders such as Alzheimer's disease.
An optoacoustic point source for acoustic scale model measurements.
Bolaños, Javier Gómez; Pulkki, Ville; Karppinen, Pasi; Hæggström, Edward
2013-04-01
A massless acoustic source is proposed for scale model work. This source is generated by focusing a pulsed laser beam to rapidly heat the air at the focal point. This produces an expanding small plasma ball which generates a sonic impulse that may be used as an acoustic point source. Repeatability, frequency response, and directivity of the source were measured to show that it can serve as a massless point source. The impulse response of a rectangular space was determined using this type of source. A good match was found between the predicted and the measured impulse responses of the space.
Modeling a Common-Source Amplifier Using a Ferroelectric Transistor
NASA Technical Reports Server (NTRS)
Sayyah, Rana; Hunt, Mitchell; MacLeond, Todd C.; Ho, Fat D.
2010-01-01
This paper presents a mathematical model characterizing the behavior of a common-source amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the common-source amplifier is the most widely used amplifier in MOS technology, understanding and modeling the behavior of the FeFET-based common-source amplifier will help in the integration of FeFETs into many circuits.
NASA Astrophysics Data System (ADS)
David, William I. F.; Evans, John S. O.
The rapidity with which powder diffraction data may be collected, not only at neutron and X-ray synchrotron facilities but also in the laboratory, means that the collection of a single diffraction pattern is now the exception rather than the rule. Many experiments involve the collection of hundreds and perhaps many thousands of datasets where a parameter such as temperature or pressure is varied or where time is the variable and life-cycle, synthesis or decomposition processes are monitored or three-dimensional space is scanned and the three-dimensional internal structure of an object is elucidated. In this paper, the origins of parametric diffraction are discussed and the techniques and challenges of parametric powder diffraction analysis are presented. The first parametric measurements were performed around 50 years ago with the development of a modified Guinier camera but it was the automation afforded by neutron diffraction combined with increases in computer speed and memory that established parametric diffraction on a strong footing initially at the ILL, Grenoble in France. The theoretical parameterisation of quantities such as lattice constants and atomic displacement parameters will be discussed and selected examples of parametric diffraction over the past 20 years will be reviewed that highlight the power of the technique.
NASA Astrophysics Data System (ADS)
Pedinotti, Vanessa; Boone, Aaron; Mognard, Nelly; Ricci, Sophie; Biancamaria, Sylvain; Lion, Christine
2013-04-01
Satellite measurements are used for hydrological investigations, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas above 250 x 250 m over continental surfaces between 78°S and 78°N. The purpose of the study presented here is to use SWOT virtual data for the optimization of the parameters of a large scale river routing model, typically employed for global scale applications. The method consists in applying a data assimilation approach, the Best Linear Unbiased Estimator (BLUE) algorithm, to correct uncertain input parameters of the ISBA-TRIP Continental Hydrologic System. In Land Surface Models (LSMs), parameters used to describe hydrological basin characteristics are generally derived from geomorphologic relationships, which might not always be realistic. The study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries and where geopolitical issues restrict the exchange of hydrological data. As a preparation for this study, the model was first evaluated against in-situ and satellite derived datasets within the framework of the AMMA project. Since the SWOT observations are not available yet and also to assess the skills of the assimilation method, the study is carried out in the framework of an Observing System Simulation Experiment (OSSE). Here, we assume that modeling errors are only due to uncertainties in Manning coefficient field. The true Manning coefficient is then supposed to be known and is used to generate synthetic SWOT observations over the period 2002-2003. The satellite measurement errors are estimated using a simple instrument simulator. The impact of the assimilation system on the Niger basin hydrological cycle is then quantified
Data Sources Available for Modeling Environmental Exposures in Older Adults
This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...
Modeling of non-Lambertian sources in lighting applications
NASA Astrophysics Data System (ADS)
Bennahmias, Mark; Arik, Engin; Yu, Kevin; Voloshenko, Dmitry; Chua, Kangbin; Pradhan, Ranjit; Forrester, Thomas; Jannson, Tomasz
2007-09-01
The photometric modeling of LEDs as generalized Lambertian sources (GL-Sources) is discussed. Non-Lambertian LED sources, with axial symmetry, have important real-world applications in general lighting. In particular, so-called generalized Lambertian sources, following a cosine to the nth power distribution (n>=1), can be used to describe the luminous output profiles from solid-state lighting devices like LEDs. For such sources, the knowledge of total power (in Lumens [Lms]), the knowledge of the output angular characteristics, as well as source area, is sufficient information to determine all other critical photometric quantities such as: maximum radiant intensity (in Candelas [Cd = Lm/Sr]) and maximum luminance (in nits [nts = Cd/m2]), as well as illuminance (in lux [lx = Lm/m2]). In this paper, we analyze this approach to modeling LEDs in terms of its applicability to real sources.
Eberhard, B.J.; Harbour, J.R.; Plodinec, M.J.
1994-06-01
As part of the DWPF Startup Test Program, a parametric study has been performed to determine a range of welder operating parameters which will produce acceptable final welds for canistered waste forms. The parametric window of acceptable welds defined by this study is 90,000 {plus_minus} 15,000 lb of force, 248,000 {plus_minus} 22,000 amps of current, and 95 {plus_minus} 15 cycles (@ 60 cops) for the time of application of the current.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize the neural electric activities within brain effectively and precisely from the scalp electroencephalogram (EEG) recordings is a critical issue for current study in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, proposed is a new maximum neighbor weight based iterative sparse source imaging method, termed as CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in focal underdetermined system solver (FOCUSS) where the weight for each point in the discrete solution space is independently updated in iterations, the new designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration may have a bigger chance to rectify the local source location bias existed in the previous iteration solution. The simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validation of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with those source areas involved in visual processing reported in previous studies.
NASA Astrophysics Data System (ADS)
Hutcheon, Richard J.; Perrett, Brian J.; Mason, Paul D.
2004-12-01
Optical parametric oscillators (OPOs) using zinc germanium phosphide (ZGP) crystals as the active non-linear medium are important devices for wavelength conversion into the 3 to 5 μm mid-infrared waveband. However, the presence of optical absorption within ZGP at the pump wavelength can lead to detrimental thermo-optic effects (thermal lensing and dephasing) when operated under high average power conditions. In order to characterise the strength of thermal effects within ZGP OPOs a theoretical model is under development based on the commercially available software package GLAD. Pump, signal and idler beams are represented by transverse arrays of complex amplitudes and propagated according to diffraction and kinetics algorithms. The ZGP crystal is modelled as a series of crystal slices, using a split-step technique, with the effects of non-linear conversion, absorption and thermal effects applied to each step in turn. We report modelling predictions obtained to date for the strength of the thermal lens induced in a ZGP crystal on exposure to a 5 Watt Q-switch pulsed high-repetition rate (10 kHz) wavelength doubled Nd:YLF laser at 2.094 μm. Predicted steady-state thermal focal lengths and time constants are compared to experimental results measured for two ZGP crystals, with high and low pump absorption levels. GLAD model predictions for a singly-resonant ZGP OPO in the absence of thermal effects are also compared to predictions from the widely available software package SNLO.
Belloso, Alicia Bueno; García-Bellido, Juan; Sapone, Domenico E-mail: juan.garciabellido@uam.es
2011-10-01
We provide exact solutions to the cosmological matter perturbation equation in a homogeneous FLRW universe with a vacuum energy that can be parametrized by a constant equation of state parameter w and a very accurate approximation for the Ansatz w(a) = w{sub 0}+w{sub a}(1−a). We compute the growth index γ = log f(a)/log Ω{sub m}(a), and its redshift dependence, using the exact and approximate solutions in terms of Legendre polynomials and show that it can be parametrized as γ(a) = γ{sub 0}+γ{sub a}(1−a) in most cases. We then compare four different types of dark energy (DE) models: wΛCDM, DGP, f(R) and a LTB-large-void model, which have very different behaviors at z∼>1. This allows us to study the possibility to differentiate between different DE alternatives using wide and deep surveys like Euclid, which will measure both photometric and spectroscopic redshifts for several hundreds of millions of galaxies up to redshift z ≅ 2. We do a Fisher matrix analysis for the prospects of differentiating among the different DE models in terms of the growth index, taken as a given function of redshift or with a principal component analysis, with a value for each redshift bin for a Euclid-like survey. We use as observables the complete and marginalized power spectrum of galaxies P(k) and the Weak Lensing (WL) power spectrum. We find that, using P(k), one can reach (2%, 5%) errors in (w{sub 0},w{sub a}), and (4%, 12%) errors in (γ{sub 0},γ{sub a}), while using WL we get errors at least twice as large. These estimates allow us to differentiate easily between DGP, f(R) models and ΛCDM, while it would be more difficult to distinguish the latter from a variable equation of state parameter or LTB models using only the growth index.
Nuisance Source Population Modeling for Radiation Detection System Analysis
Sokkappa, P; Lange, D; Nelson, K; Wheeler, R
2009-10-05
A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial source frequencies, but leave the task of
Source apportionment of fine particles in Tennessee using a source-oriented model.
Doraiswamy, Prakash; Davis, Wayne T; Miller, Terry L; Fu, Joshua S
2007-04-01
Source apportionment of fine particles (PM2.5, particulate matter < 2 microm in aerodynamic diameter) is important to identify the source categories that are responsible for the concentrations observed at a particular receptor. Although receptor models have been used to do source apportionment, they do not fully take into account the chemical reactions (including photochemical reactions) involved in the formation of secondary fine particles. Secondary fine particles are formed from photochemical and other reactions involving precursor gases, such as sulfur dioxide, oxides of nitrogen, ammonia, and volatile organic compounds. This paper presents the results of modeling work aimed at developing a source apportionment of primary and secondary PM2.5. On-road mobile source and point source inventories for the state of Tennessee were estimated and compiled. The national emissions inventory for the year 1999 was used for the other states. U.S. Environmental Protection Agency Models3/Community Multi-Scale Air Quality modeling system was used for the photochemical/secondary particulate matter modeling. The modeling domain consisted of a nested 36-12-4-km domain. The 4-km domain covered the entire state of Tennessee. The episode chosen for the modeling runs was August 29 to September 9, 1999. This paper presents the approach used and the results from the modeling and attempts to quantify the contribution of major source categories, such as the on-road mobile sources (including the fugitive dust component) and coal-fired power plants, to observed PM2.5 concentrations in Tennessee. The results of this work will be helpful in policy issues targeted at designing control strategies to meet the PM2.5 National Ambient Air Quality Standards in Tennessee.
Meta-Analysis of Candidate Gene Effects Using Bayesian Parametric and Non-Parametric Approaches
Wu, Xiao-Lin; Gianola, Daniel; Rosa, Guilherme J. M.; Weigel, Kent A.
2014-01-01
Candidate gene (CG) approaches provide a strategy for identification and characterization of major genes underlying complex phenotypes such as production traits and susceptibility to diseases, but the conclusions tend to be inconsistent across individual studies. Meta-analysis approaches can deal with these situations, e.g., by pooling effect-size estimates or combining P values from multiple studies. In this paper, we evaluated the performance of two types of statistical models, parametric and non-parametric, for meta-analysis of CG effects using simulated data. Both models estimated a “central” effect size while taking into account heterogeneity over individual studies. The empirical distribution of study-specific CG effects was multi-modal. The parametric model assumed a normal distribution for the study-specific CG effects whereas the non-parametric model relaxed this assumption by posing a more general distribution with a Dirichlet process prior (DPP). Results indicated that the meta-analysis approaches could reduce false positive or false negative rates by pooling strengths from multiple studies, as compared to individual studies. In addition, the non-parametric, DPP model captured the variation of the “data” better than its parametric counterpart. PMID:25057320
Ying, Qi; Mysliwiec, Mitchell; Kleeman, Michael J
2004-02-15
A three-dimensional source-oriented Eulerian air quality model is developed that can predict source contributions to the visibility reduction. Particulate matter and precursor gases from 14 different sources (crustal material, paved road dust, diesel engines, meat cooking, noncatalyst-equipped gasoline engines, catalyst-equipped gasoline engines, high-sulfur fuel, sea salt, refrigerant losses, residential production, animals, soil and fertilizer application, other anthropogenic sources, and background sources) are tracked though a mathematical simulation of emission, chemical reaction, gas-to-particle conversion, transport, and deposition. A visibility model based on Mie theory is modified to use the calculated source contributions to airborne particulate matter size and composition as well as gas-phase pollutant concentrations to quantify total source contributions to visibility impairment. The combined air quality-visibility model is applied to predict source contributions to visibility reduction in southern California for a typical air pollution episode (September 23-25, 1996). The model successfully predicts a severe visibility reduction in the eastern portion of the South Coast Air Basin where the average daytime visibility is measured to be less than 10 km. In the relatively clean coastal portion of the domain, the model successfully predicts that the average daytime visibility is greater than 65 km. Transportation-related sources directly account for approximately 50% of the visibility reduction (diesel engines approximately 15-20%, catalyst-equipped gasoline engines approximately 10-20%, noncatalyst-equipped gasoline engines approximately 3-5%, crustal and paved road dust approximately 5%) in the region with the most severe visibility impairment. Ammonia emissions from animal sources account for approximately 10-15% of the visibility reduction. PMID:14998023
Modeling the reversible, diffusive sink effect in response to transient contaminant sources.
Zhao, D; Little, J C; Hodgson, A T
2002-09-01
A physically based diffusion model is used to evaluate the sink effect of diffusion-controlled indoor materials and to predict the transient contaminant concentration in indoor air in response to several time-varying contaminant sources. For simplicity, it is assumed the predominant indoor material is a homogeneous slab, initially free of contaminant, and the air within the room is well mixed. The model enables transient volatile organic compound (VOC) concentrations to be predicted based on the material/air partition coefficient (K) and the material-phase diffusion coefficient (D) of the sink. Model predictions are made for three scenarios, each mimicking a realistic situation in a building. Styrene, phenol, and naphthalene are used as representative VOCs. A styrene butadiene rubber (SBR) backed carpet, vinyl flooring (VF), and a polyurethane foam (PUF) carpet cushion are considered as typical indoor sinks. In scenarios involving a sinusoidal VOC input and a double exponential decaying input, the model predicts the sink has a modest impact for SBR/styrene, but the effect increases for VF/phenol and PUF/naphthalene. In contrast, for an episodic chemical spill, SBR is predicted to reduce the peak styrene concentration considerably. A parametric study reveals for systems involving a large equilibrium constant (K), the kinetic constant (D) will govern the shape of the resulting gasphase concentration profile. On the other hand, for systems with a relaxed mass transfer resistance, K will dominate the profile. PMID:12244748
Modeling the reversible sink effect in response to transient contaminant sources
Zhao, Dongye; Little, John C.; Hodgson, Alfred T.
2001-02-01
A physically based diffusion model is used to evaluate the sink effect of diffusion-controlled indoor materials and to predict the transient contaminant concentration in indoor air in response to several time-varying contaminant sources. For simplicity, it is assumed the predominant indoor material is a homogeneous slab, initially free of contaminant, and the air within the room is well mixed. The model enables transient volatile organic compound (VOC) concentrations to be predicted based on the material/air partition coefficient (K) and the material-phase diffusion coefficient (D) of the sink. Model predictions are made for three scenarios, each mimicking a realistic situation in a building. Styrene, phenol, and naphthalene are used as representative VOCs. A styrene butadiene rubber (SBR) backed carpet, vinyl flooring (VF), and a polyurethane foam (PUF) carpet cushion are considered as typical indoor sinks. In scenarios involving a sinusoidal VOC input and a double exponential decaying input, the model predicts the sink has a modest impact for SBR/styrene, but the effect increases for VF/phenol and PUF/naphthalene. In contrast, for an episodic chemical spill, SBR is predicted to reduce the peak styrene concentration considerably. A parametric study reveals for systems involving a large equilibrium constant (K), the kinetic constant (D) will govern the shape of the resulting gas-phase concentration profile. On the other hand, for systems with a relaxed mass transfer resistance, K will dominate the profile.
Neuromagnetic source reconstruction
Lewis, P.S.; Mosher, J.C.; Leahy, R.M.
1994-12-31
In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum-norm-based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
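The "implicit Gaussian assumption" the abstract refers to can be made concrete: the minimum-norm reconstruction is the MAP estimate under an i.i.d. Gaussian source prior. A minimal sketch with a random stand-in lead-field matrix (a real one would come from a forward head model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 500

# Stand-in lead-field matrix; in practice it comes from a forward head model.
L = rng.standard_normal((n_sensors, n_sources))
x_true = np.zeros(n_sources)
x_true[40] = 1.0                                   # a single focal source
b = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Minimum-norm estimate = MAP under an i.i.d. Gaussian source prior:
#   x_hat = argmin_x ||L x - b||^2 + lam ||x||^2 = L^T (L L^T + lam I)^{-1} b
lam = 1.0
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
```

The Gaussian prior spreads the energy of the single focal source over many elements, which is exactly the nonphysical smoothing behavior the review highlights.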
Wolfenstein parametrization reexamined
Xing, Z.
1995-04-01
The Wolfenstein parametrization of the 3×3 Kobayashi-Maskawa (KM) matrix V is modified by keeping its unitarity up to an accuracy of O(λ^6). This modification can self-consistently lead to the off-diagonal asymmetry of V, |V_ij
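For reference, the original Wolfenstein form of the KM matrix that the paper modifies, in terms of the parameters λ, A, ρ, η (this standard form is unitary only up to higher-order corrections in λ, which is what the paper's O(λ^6) extension addresses):

```latex
V \simeq \begin{pmatrix}
1 - \lambda^2/2 & \lambda & A\lambda^3(\rho - i\eta) \\
-\lambda & 1 - \lambda^2/2 & A\lambda^2 \\
A\lambda^3(1 - \rho - i\eta) & -A\lambda^2 & 1
\end{pmatrix} + O(\lambda^4)
```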
NASA Astrophysics Data System (ADS)
Ševecek, Pavel; Broz, Miroslav; Nesvorny, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.
2016-10-01
Detailed models of asteroid collisions can yield important constraints for the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities and impact angles. The rheology used in the physical model includes neither friction nor crushing; this allows for a direct comparison to results of Durda et al. (2007). Resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, although they may be even more different for pre-shattered targets. We derive new parametric relations describing fragment distributions, suitable for Monte-Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to SPH simulations.
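The paper's fitted parametric relations are not reproduced in the abstract. As a generic illustration of how such relations feed a Monte-Carlo collisional model, fragment sizes can be drawn from a power-law size-frequency distribution N(>D) ∝ D^(-q) by inverse-transform sampling (the slope q and minimum size here are placeholders, not the paper's values):

```python
import numpy as np

def sample_fragment_sizes(n, d_min, q, rng=None):
    """Draw n fragment diameters from a Pareto-type SFD N(>D) ∝ D**-q with
    lower cutoff d_min, via inverse-transform sampling of U(0,1) deviates."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    return d_min * u ** (-1.0 / q)
```

For q > 1 the mean fragment size is q/(q-1) · d_min, a quick sanity check on the sampler.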
Amjadi Kashani, Mohammad Reza; Nikkhoo, Mohammad; Khalaf, Kinda; Firoozbakhsh, Keikhosrow; Arjmand, Navid; Razmjoo, Arash; Parnianpour, Mohamad
2014-12-01
Osteoporosis is a progressive bone disease characterized by deterioration in the quantity and quality of bone, leading to inferior mechanical properties and an increased risk of fracture. Current assessment of osteoporosis is typically based on bone densitometry tools such as Quantitative Computed Tomography (QCT) and Dual Energy X-ray Absorptiometry (DEXA). These assessment modalities mainly rely on estimating the bone mineral density (BMD). Hence, present densitometry tools describe only the deterioration of the quantity of bone associated with the disease and not the affected morphology or microstructural changes, resulting in potentially incomplete assessment, many undetected patients, and unexplained fractures. In this study, an in-silico parametric model of vertebral trabecular bone incorporating both material and microstructural parameters was developed towards the accurate assessment of osteoporosis and the consequent risk of bone fracture. The model confirms that the mechanical properties such as strength and stiffness of vertebral trabecular tissue are highly influenced by material properties as well as morphology characteristics such as connectivity, which reflects the quality of connected inter-trabecular parts. The FE cellular solid model presented here provides a holistic approach that incorporates both material and microstructural elements associated with the degenerative process, and hence has the potential to provide clinical practitioners and researchers with a more accurate assessment method for the degenerative changes leading to inferior mechanical properties and increased fracture risk associated with age and/or diseases such as osteoporosis. PMID:25515229
NASA Astrophysics Data System (ADS)
Arsalis, Alexandros
Detailed thermodynamic, kinetic, geometric, and cost models are developed, implemented, and validated for the synthesis/design and operational analysis of hybrid SOFC-gas turbine-steam turbine systems ranging in size from 1.5 to 10 MWe. The fuel cell model used in this research work is based on a tubular Siemens-Westinghouse-type SOFC, which is integrated with a gas turbine and a heat recovery steam generator (HRSG) integrated in turn with a steam turbine cycle. The current work considers the possible benefits of using the exhaust gases in a HRSG in order to produce steam which drives a steam turbine for additional power output. Four different steam turbine cycles are considered in this research work: a single-pressure, a dual-pressure, a triple-pressure, and a triple-pressure with reheat. The models have been developed to function both at design (full load) and off-design (partial load) conditions. In addition, different solid oxide fuel cell sizes are examined to assure a proper selection of SOFC size based on efficiency or cost. The thermoeconomic analysis includes cost functions developed specifically for the different system and component sizes (capacities) analyzed. A parametric study is used to determine the most viable system/component syntheses/designs based on maximizing total system efficiency or minimizing total system life cycle cost.
Parametric initial conditions for core-collapse supernova simulations
NASA Astrophysics Data System (ADS)
Suwa, Yudai; Müller, Ewald
2016-08-01
We investigate a method to construct parametrized progenitor models for core-collapse supernova simulations. Different from all modern core-collapse supernova studies, which rely on progenitor models from stellar evolution calculations, we follow the methodology of Baron & Cooperstein to construct initial models. Choosing parametrized spatial distributions of entropy and electron fraction as a function of mass coordinate and solving the equation of hydrostatic equilibrium, we obtain the initial density structures of our progenitor models. First, we calculate structures with parameters fitting broadly the evolutionary model s11.2 of Woosley et al. (2002). We then demonstrate the reliability of our method by performing general relativistic hydrodynamic simulations in spherical symmetry with the isotropic diffusion source approximation to solve the neutrino transport. Our comprehensive parameter study shows that initial models with a small central entropy (≲0.4 kB nucleon-1) can explode even in spherically symmetric simulations. Models with a large entropy (≳6 kB nucleon-1) in the Si/O layer have a rather large explosion energy (˜4 × 1050 erg) at the end of the simulations, which is still rapidly increasing.
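The construction the abstract describes, i.e. solving hydrostatic equilibrium in the mass coordinate for a chosen thermodynamic profile, can be sketched as follows. Here a simple polytropic EOS P = k·ρ^γ stands in for the paper's parametrized entropy and electron-fraction distributions, and all numbers are illustrative:

```python
import numpy as np

G = 6.674e-8   # gravitational constant, cgs units

def build_structure(rho_c, k_poly, gamma, m_max, n=20000):
    """Integrate dP/dm = -G m / (4 pi r^4) and dr/dm = 1 / (4 pi r^2 rho)
    outward from the center, closing the system with P = k_poly * rho**gamma.
    Returns radius and density as functions of the mass coordinate."""
    dm = m_max / n
    rho = rho_c
    P = k_poly * rho_c ** gamma
    m = dm                                   # start from a tiny central sphere
    r = (3 * m / (4 * np.pi * rho_c)) ** (1 / 3)   # ...to avoid the r=0 singularity
    rs, rhos = [r], [rho]
    for _ in range(n - 1):
        dPdm = -G * m / (4 * np.pi * r**4)
        drdm = 1.0 / (4 * np.pi * r**2 * rho)
        P += dPdm * dm
        r += drdm * dm
        if P <= 0:                           # reached the stellar surface
            break
        rho = (P / k_poly) ** (1 / gamma)
        m += dm
        rs.append(r)
        rhos.append(rho)
    return np.array(rs), np.array(rhos)
```

Replacing the polytrope with parametrized entropy and Y_e profiles (plus a nuclear EOS) gives progenitor-like density structures of the kind used in the paper.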
Optimization of noncollinear optical parametric amplification
NASA Astrophysics Data System (ADS)
Schimpf, D. N.; Rothardt, J.; Limpert, J.; Tünnermann, A.
2007-02-01
Noncollinearly phase-matched optical parametric amplifiers (NOPAs), pumped with the green light of a frequency-doubled Yb-doped fiber-amplifier system [1, 2], permit convenient generation of ultrashort pulses in the visible (VIS) and near infrared (NIR) [3]. The broad bandwidth of the parametric gain via the noncollinear pump configuration allows amplification of few-cycle optical pulses when seeded with a spectrally flat, re-compressible signal. Short pulses tunable over a wide region of the visible spectrum make it possible to push frontiers in physics and the life sciences. For instance, the resulting high temporal resolution is of significance for many spectroscopic techniques. Furthermore, the high peak powers of the produced pulses enable research in high-field physics. To understand the demands of noncollinear optical parametric amplification using a fiber pump source, it is important to investigate this configuration in detail [4]. Such an analysis provides not only insight into the parametric process but also determines an optimal choice of experimental parameters for the objective. Here, the intention is to design a configuration that yields the shortest possible pulse. As a consequence of this analysis, the experimental setup could be optimized. A number of aspects of optical parametric amplifier performance have been treated analytically and computationally [5], but these do not fully cover the situation under consideration here.
Energy sources in gamma-ray burst models
NASA Technical Reports Server (NTRS)
Taam, Ronald E.
1987-01-01
The current status of energy sources in models of gamma-ray bursts is examined. Special emphasis is placed on the thermonuclear flash model which has been the most developed model to date. Although there is no generally accepted model, if the site for the gamma-ray burst is on a strongly magnetized neutron star, the thermonuclear model can qualitatively explain the energetics of some, but probably not all burst events. The critical issues that may differentiate between the possible sources of energy for gamma-ray bursts are listed and briefly discussed.
A simple double-source model for interference of capillaries
NASA Astrophysics Data System (ADS)
Hou, Zhibo; Zhao, Xiaohong; Xiao, Jinghua
2012-01-01
A simple but physically intuitive double-source model is proposed to explain the interferogram of a laser-capillary system, where two effective virtual sources are used to describe the rays reflected by and transmitted through the capillary. The locations of the two virtual sources are functions of the observing positions on the target screen. An inverse proportionality between the fringe spacing and the capillary radius is derived based on the simple double-source model. This can provide an efficient and precise method to measure a small capillary diameter at the micrometre scale. This model could be useful because it presents a fresh perspective on the diffraction of light by a particular geometry (a transparent cylinder), which is not straightforward for undergraduates. It also offers an alternative interferometer to perform a different type of measurement, especially one using virtual sources.
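The paper's exact relation is not given in the abstract, but the inverse proportionality follows from standard two-source interference: two coherent sources separated by d produce fringes of spacing Δy = λL/d on a screen at distance L (small-angle limit). If the effective separation of the two virtual sources scales with the capillary radius (an assumption in this sketch), Δy varies inversely with the radius:

```python
def fringe_spacing(lam, L, d):
    """Two-source fringe spacing dy = lam * L / d (small-angle approximation).
    lam: wavelength (m), L: screen distance (m), d: effective source separation (m)."""
    return lam * L / d

# HeNe laser, screen at 1 m, 50 um effective separation of the virtual sources
dy = fringe_spacing(632.8e-9, 1.0, 50e-6)
```

Measuring Δy on the screen and inverting this relation yields d, and hence the capillary diameter once the geometric mapping between d and the radius is known.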
Mixing and bottom friction: parametrization and application to the surf zone
NASA Astrophysics Data System (ADS)
Bennis, A.-C.; Dumas, F.; Ardhuin, F.; Blanke, B.; Lepesqueur, J.
2012-04-01
Wave breaking has been observed to impact the bottom boundary layer in surf zones, with potential impacts on bottom friction. Observations in the inner surf zone have also shown a tendency to underestimate the wave-induced set-up when using usual model parameterizations. The present study investigates the possible impact of wave breaking on bottom friction and set-up using a recently proposed parameterization of the wave-induced turbulent kinetic energy in the vertical mixing parameterization of the wave-averaged flow. This parametrization, proposed by Mellor (2002), allows us to take into account the oscillations of the bottom boundary layer with the wave phases thanks to additional turbulent source terms. First, the behavior of this parameterization is investigated by comparing phase-resolving and phase-averaged solutions. The hydrodynamical model MARS (Lazure and Dumas, 2008) is used for this, with a k-epsilon model modified to include the Mellor (2002) parametrization. It is shown that the phase-averaged solution strongly overestimates the turbulent kinetic energy, which is similar to the situation of the air flow over waves (Miles, 1996). The waves inhibit the turbulence, and the wave-averaged parametrization is not able to reproduce this phenomenon correctly. Cases with wave breaking at the surface are simulated in order to study the influence of surface wave breaking on the bottom boundary layer. This parametrization is applied in the surf zone in two different cases, one for a planar beach and the other for a barred beach with rip currents. The coupled model MARS-WAVEWATCH III is used for this (Bennis et al., 2011); for a realistic planar beach, the mixing parameterization has only a limited impact on the bottom friction and the wave set-up, unless the bottom roughness is greatly enhanced in very shallow water, or for a spatially varying roughness. The use of the mixing parametrization requires an adjustment of the bottom roughness to fit
Nonpoint source pollution: a distributed water quality modeling approach.
León, L F; Soulis, E D; Kouwen, N; Farquhar, G J
2001-03-01
A distributed water quality model for nonpoint source pollution modeling in agricultural watersheds is described in this paper. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. With data extracted using Geographical Information Systems (GIS) technology for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. The transferability of model parameters to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse modeling. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve nonpoint source modeling at the watershed scale.
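The group-response-unit aggregation step described above (per-class computation, area weighting, then routing) reduces to a flow-weighted sum per grid cell. A minimal sketch, with hypothetical land-cover classes and made-up runoff and concentration values:

```python
def grid_cell_response(classes):
    """Group-response-unit sketch: runoff and nutrient load are computed per
    land-cover class, weighted by areal fraction, then summed for the cell
    before routing downstream.
    classes: name -> (area_fraction, runoff_mm, concentration_mg_per_L)."""
    runoff = sum(f * q for f, q, _ in classes.values())
    load = sum(f * q * c for f, q, c in classes.values())   # mass flux proxy
    conc = load / runoff if runoff else 0.0                 # flow-weighted mean
    return runoff, conc

# Illustrative cell: fractions sum to 1; numbers are placeholders.
q, c = grid_cell_response({
    "cropland": (0.6, 12.0, 5.0),
    "forest":   (0.3, 4.0, 0.5),
    "urban":    (0.1, 20.0, 2.0),
})
```

Because each class is parameterized independently, calibrated class parameters can be transferred to an ungauged watershed with a different land-cover mix, which is the portability argument the paper makes.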
On source models for 192Ir HDR brachytherapy dosimetry using model based algorithms
NASA Astrophysics Data System (ADS)
Pantelis, Evaggelos; Zourari, Kyveli; Zoros, Emmanouil; Lahanas, Vasileios; Karaiskos, Pantelis; Papagiannis, Panagiotis
2016-06-01
A source model is a prerequisite of all model based dose calculation algorithms. Besides direct simulation, the use of pre-calculated phase space files (phsp source models) and parameterized phsp source models has been proposed for Monte Carlo (MC) to promote efficiency and ease of implementation in obtaining photon energy, position and direction. In this work, a phsp file for a generic 192Ir source design (Ballester et al 2015) is obtained from MC simulation. This is used to configure a parameterized phsp source model comprising appropriate probability density functions (PDFs) and a sampling procedure. According to phsp data analysis 15.6% of the generated photons are absorbed within the source, and 90.4% of the emergent photons are primary. The PDFs for sampling photon energy and direction relative to the source long axis, depend on the position of photon emergence. Photons emerge mainly from the cylindrical source surface with a constant probability over ±0.1 cm from the center of the 0.35 cm long source core, and only 1.7% and 0.2% emerge from the source tip and drive wire, respectively. Based on these findings, an analytical parameterized source model is prepared for the calculation of the PDFs from data of source geometry and materials, without the need for a phsp file. The PDFs from the analytical parameterized source model are in close agreement with those employed in the parameterized phsp source model. This agreement prompted the proposal of a purely analytical source model based on isotropic emission of photons generated homogeneously within the source core with energy sampled from the 192Ir spectrum, and the assignment of a weight according to attenuation within the source. Comparison of single source dosimetry data obtained from detailed MC simulation and the proposed analytical source model show agreement better than 2% except for points lying close to the source longitudinal axis.
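The "purely analytical" model proposed at the end, i.e. homogeneous generation in the source core, isotropic emission, and a weight for attenuation within the source, can be sketched as a sampling routine. The core dimensions and the attenuation coefficient below are rough illustrative values, not the paper's source specification:

```python
import numpy as np

def sample_photons(n, R=0.03, H=0.35, mu=4.3, rng=None):
    """Sample n photons generated homogeneously in a cylindrical source core
    (radius R, length H, in cm), emitted isotropically, each weighted by
    exp(-mu * s) where s is the path length (cm) to the core surface and mu
    is a rough linear attenuation coefficient (cm^-1).  Returns positions,
    unit directions, and attenuation weights."""
    rng = np.random.default_rng(rng)
    # Homogeneous positions in the cylinder: r ~ R*sqrt(U), phi and z uniform.
    r = R * np.sqrt(rng.random(n))
    phi = 2 * np.pi * rng.random(n)
    p = np.column_stack([r * np.cos(phi), r * np.sin(phi),
                         H * (rng.random(n) - 0.5)])
    # Isotropic directions: uniform cos(theta) and azimuth.
    cz = 2 * rng.random(n) - 1
    az = 2 * np.pi * rng.random(n)
    sn = np.sqrt(1 - cz**2)
    u = np.column_stack([sn * np.cos(az), sn * np.sin(az), cz])
    with np.errstate(divide="ignore", invalid="ignore"):
        # Distance to the cylindrical side wall (positive quadratic root).
        a = u[:, 0]**2 + u[:, 1]**2
        b = 2 * (p[:, 0] * u[:, 0] + p[:, 1] * u[:, 1])
        c = p[:, 0]**2 + p[:, 1]**2 - R**2
        t_side = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
        t_side = np.where(a > 0, t_side, np.inf)
        # Distance to the end caps.
        t_cap = np.where(cz > 0, (H / 2 - p[:, 2]) / cz,
                 np.where(cz < 0, (-H / 2 - p[:, 2]) / cz, np.inf))
    s = np.minimum(t_side, t_cap)
    return p, u, np.exp(-mu * s)
```

Energies would be drawn independently from the 192Ir spectrum, as the abstract describes; the weights stand in for self-absorption within the source.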
Bayesian mixture models for source separation in MEG
NASA Astrophysics Data System (ADS)
Calvetti, Daniela; Homa, Laura; Somersalo, Erkki
2011-11-01
This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal.
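The effect of a mixture prior can be illustrated in one dimension: with a "focal" wide-variance component and a "clutter" narrow-variance component, the posterior is again a two-component mixture, and the data decide which component dominates. This toy is not the paper's actual hierarchical MEG prior; the variances and mixing weight are placeholders:

```python
import numpy as np

def posterior_mean(b, sigma_n, alpha, sigma_f, sigma_d):
    """1-D toy: prior x ~ alpha*N(0, sigma_f^2) + (1-alpha)*N(0, sigma_d^2),
    observation b = x + N(0, sigma_n^2).  Returns the posterior mean, a
    responsibility-weighted combination of two Gaussian shrinkage estimates."""
    def evidence(s):            # marginal likelihood of b under one component
        v = s**2 + sigma_n**2
        return np.exp(-b**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    def shrink(s):              # posterior mean within one Gaussian component
        return b * s**2 / (s**2 + sigma_n**2)
    wf = alpha * evidence(sigma_f)
    wd = (1 - alpha) * evidence(sigma_d)
    wf, wd = wf / (wf + wd), wd / (wf + wd)
    return wf * shrink(sigma_f) + wd * shrink(sigma_d)
```

A large observation is attributed almost entirely to the wide "focal" component and barely shrunk, while a small observation is absorbed by the "clutter" component, mirroring how the mixture prior separates focal activity from distributed noise.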
Equivalent source modeling of the main field using Magsat data
NASA Technical Reports Server (NTRS)
1980-01-01
Progress is reported on software development for equivalent dipole source modeling of the main magnetic field. This includes a spatial statistical output capability, a subroutine to compute the equivalent spherical harmonic representation from the dipole distribution, and capability to plot the global locations of the dipoles and the values of the source magnetization.
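The forward step of such equivalent-source modeling is the field of a point magnetic dipole, summed over the dipole distribution. A minimal sketch of that building block in SI units (the moment and geometry below are arbitrary test values):

```python
import numpy as np

MU0_4PI = 1e-7   # mu_0 / (4 pi) in SI units

def dipole_field(m, r_dip, r_obs):
    """Magnetic field at r_obs due to a point dipole of moment m at r_dip:
    B = (mu0/4pi) * (3 (m . rhat) rhat - m) / |r|^3.
    Summing over a dipole distribution gives the equivalent-source field."""
    m = np.asarray(m, float)
    r = np.asarray(r_obs, float) - np.asarray(r_dip, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_4PI * (3 * np.dot(m, rhat) * rhat - m) / d**3
```

Fitting the dipole magnetizations so that the summed field matches the Magsat measurements, then converting the distribution to spherical harmonics, is the workflow the report describes.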
NASA Astrophysics Data System (ADS)
Stepanek, P.; Rodriguez-Solano, C.; Filler, V.; Hugentobler, U.
2011-12-01
The focus of these studies is a comparison between two different approaches to LEO satellite orbit estimation employing DORIS measurements. The first is the reduced-dynamic model, based on orbit modeling using empirical and pseudo-stochastic parameters. The second approach includes attitude models and the CNES-developed satellite macromodels, with modeling of non-conservative accelerations, i.e., solar radiation pressure, Earth radiation pressure and atmospheric drag. Both approaches are used at analysis centers providing DORIS solutions. The reduced-dynamic modeling is currently used by the GOP analysis center, which achieves similar accuracy of the free-network solutions as the other centers utilizing precise non-conservative force modeling. The GOP works with a modified version of the Bernese GPS Software that does not include non-conservative force modeling. This limitation is now overcome by a new scientific modification of the software, which opens a unique possibility: comparing both approaches on the same software platform. We compare external and internal precision of the estimated orbits. We also analyze the individual satellite free-network DORIS solutions and time series of derived parameters, i.e., station coordinates, TRF scale, geocenter variations and Earth rotation parameters. The studies highlight the main differences in the results, which should answer the question whether the modeling of non-conservative forces including the CNES box-wing satellite models actually brings a significant improvement to the DORIS solutions.
Source signature and acoustic field of seismic physical modeling
NASA Astrophysics Data System (ADS)
Lin, Q.; Jackson, C.; Tang, G.; Burbach, G.
2004-12-01
As an important tool of seismic research and exploration, seismic physical modeling simulates real-world data acquisition by scaling the model, the acquisition parameters, and some features of the source generated by a transducer. Unlike numerical simulation, where a point source is easily realized, the transducer cannot be made small enough to approximate a point source in physical modeling, and therefore yields a different source signature from the sources applied in field data acquisition. To better und