Paul, Sarbajit; Chang, Junghwan
2017-01-01
This paper presents a design approach for a magnetic sensor module to detect mover position using proper orthogonal decomposition-dynamic mode decomposition (POD-DMD)-based nonlinear parametric model order reduction (PMOR). The parameterization of the sensor module is achieved by using the multipolar moment matching method. Several geometric variables of the sensor module are considered while developing the parametric study. The operation of the sensor module is based on the principle of airgap flux density distribution detection by a Hall effect IC. Therefore, the design objective is to achieve a peak flux density (PFD) greater than 0.1 T and total harmonic distortion (THD) less than 3%. To fulfill these constraint conditions, the specifications for the sensor module are obtained by using the POD-DMD based reduced model, which provides a platform to analyze a large number of design models quickly and with less computational burden. Finally, with the final specifications, an experimental prototype is designed and tested. Two different modes, 90° and 120°, are used to obtain the position information of the linear motor mover. The position information thus obtained is compared with linear scale data used as a reference signal. The position information obtained using the 120° mode has a standard deviation of 0.10 mm from the reference linear scale signal, whereas the 90° mode position signal shows a deviation of 0.23 mm from the reference. The deviation in the output arises from mechanical tolerances introduced into the specification during the manufacturing process. This provides scope for coupling reliability-based design optimization into the design process as a future extension. PMID:28671580
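How two phase-shifted airgap flux signals yield a position reading can be sketched with a standard quadrature (90°) decoding scheme; the pole pitch, signal amplitudes, and the atan2-based decoder below are illustrative assumptions, not the paper's actual signal processing.

```python
import numpy as np

def position_from_quadrature(b_a, b_b, pole_pitch):
    """Recover mover position from two Hall signals 90 deg apart
    (sketch of a typical quadrature decoding scheme; the paper's
    90/120-degree mode processing may differ)."""
    theta = np.arctan2(b_b, b_a)             # electrical angle, wrapped
    theta = np.unwrap(theta)                 # remove 2*pi jumps
    return theta * pole_pitch / (2 * np.pi)  # one pitch per electrical cycle

# Synthetic airgap flux signals for a hypothetical 20 mm pole pitch.
pitch = 20.0
x_true = np.linspace(0.0, 60.0, 600)             # mover position, mm
b_a = 0.12 * np.cos(2 * np.pi * x_true / pitch)  # peak ~0.12 T (> 0.1 T target)
b_b = 0.12 * np.sin(2 * np.pi * x_true / pitch)
x_est = position_from_quadrature(b_a, b_b, pitch)
print(np.abs(x_est - x_true).max())
```

With noise-free quadrature signals the decoder is exact up to floating-point error; mechanical tolerances of the kind the abstract mentions would distort `b_a`/`b_b` and show up directly as position deviation.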
NASA Astrophysics Data System (ADS)
Fujioka, K.; Fujimoto, Y.; Tsubakimoto, K.; Kawanaka, J.; Shoji, I.; Miyanaga, N.
2015-03-01
The refractive index of a potassium dihydrogen phosphate (KDP) crystal strongly depends on the deuteration fraction of the crystal. The wavelength dependence of the phase-matching angle in the near-infrared optical parametric process shows convex and concave characteristics for pure KDP and pure deuterated KDP (DKDP), respectively, when pumped by the second harmonic of Nd- or Yb-doped solid state lasers. Using these characteristics, ultra-broadband phase matching can be realized by optimization of the deuteration fraction. The refractive index of DKDP that was grown with a different deuteration fraction (known as partially deuterated KDP or pDKDP) was measured over a wide wavelength range of 0.4-1.5 μm by the minimum deviation method. The wavelength dispersions of the measured refractive indices were fitted using a modified Sellmeier equation, and the deuteration fraction dependence was analyzed using the Lorentz-Lorenz equation. The wavelength-dependent phase-matching angle for an arbitrary deuteration fraction was then calculated for optical parametric amplification with pumping at a wavelength of 526.5 nm. The results revealed that a refractive index database with precision better than 2 × 10⁻⁵ was necessary for exact evaluation of the phase-matching condition. An ultra-broad gain bandwidth of up to 490 nm will be feasible when using the 68% pDKDP crystal.
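For reference, a conventional multi-pole Sellmeier dispersion has the form sketched below; the paper fits a modified variant with deuteration-dependent coefficients, so the two-pole form and the placeholder symbols A-E here are only the standard textbook shape, not the authors' equation.

```latex
% Conventional two-pole Sellmeier dispersion (standard sketch; the
% paper's modified form and its pDKDP coefficients may differ):
n^2(\lambda) = A
  + \frac{B\,\lambda^2}{\lambda^2 - C}
  + \frac{D\,\lambda^2}{\lambda^2 - E}
```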
Suppression of work fluctuations by optimal control: An approach based on Jarzynski's equality
NASA Astrophysics Data System (ADS)
Xiao, Gaoyang; Gong, Jiangbin
2014-11-01
Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, aspects of work fluctuations will be an important factor in designing nanoscale heat engines. In this work, an optimal control approach directly exploiting Jarzynski's equality is proposed to effectively suppress the fluctuations in the work statistics, for systems (initially at thermal equilibrium) subject to a work protocol but isolated from a bath during the protocol. The control strategy is to minimize the deviations of individual values of exp(-βW) from their ensemble average given by exp(-βΔF), where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. It is further shown that even when the system Hamiltonian is not fully known, it is still possible to suppress work fluctuations through a feedback loop, by refining the control target function on the fly through Jarzynski's equality itself. Numerical experiments are based on linear and nonlinear parametric oscillators. Optimal control results for linear parametric oscillators are also benchmarked with earlier results based on shortcuts to adiabaticity.
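The control target can be illustrated numerically. Below is a minimal sketch assuming Gaussian work statistics, for which Jarzynski's equality gives ΔF = μ - βσ²/2 exactly; the distribution and sample size are illustrative stand-ins, not the paper's parametric-oscillator protocols.

```python
import numpy as np

def jarzynski_free_energy(work, beta):
    """Estimate dF from work samples via exp(-beta*dF) = <exp(-beta*W)>."""
    return -np.log(np.mean(np.exp(-beta * work))) / beta

def work_fluctuation(work, beta, dF):
    """Mean squared deviation of exp(-beta*W) from exp(-beta*dF) --
    the quantity the proposed optimal control seeks to suppress."""
    return np.mean((np.exp(-beta * work) - np.exp(-beta * dF)) ** 2)

# Gaussian work statistics (illustrative): Jarzynski then gives
# dF = mu - beta*sigma**2/2 = 2.0 - 0.125 = 1.875 exactly.
rng = np.random.default_rng(0)
beta, mu, sigma = 1.0, 2.0, 0.5
W = rng.normal(mu, sigma, 200_000)
dF_est = jarzynski_free_energy(W, beta)
print(dF_est, work_fluctuation(W, beta, dF_est))
```

A controller that narrows the work distribution (smaller σ) drives the fluctuation measure toward zero while leaving the Jarzynski estimate of ΔF intact.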
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. 
We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
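The recommended fitting approach can be sketched as interval-censored maximum likelihood, where each recovered propagule contributes the probability mass of its sampling interval, F(upper) - F(lower). The lognormal truth, hourly sampling intervals, and sample size below are hypothetical, chosen only to mirror the kind of discretized retention-time data the study simulates.

```python
import numpy as np
from scipy import stats, optimize

def fit_lognormal_interval_censored(lower, upper, counts):
    """ML fit of a lognormal to interval-censored retention times:
    each observation is only known to lie in [lower_i, upper_i], so
    the log-likelihood sums counts_i * log(F(upper_i) - F(lower_i))."""
    def nll(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        F = stats.lognorm(s=sigma, scale=np.exp(mu)).cdf
        p = np.clip(F(upper) - F(lower), 1e-12, None)
        return -np.sum(counts * np.log(p))
    res = optimize.minimize(nll, x0=[1.0, 0.5], method="Nelder-Mead")
    return res.x  # (mu, sigma) of the underlying normal

# Simulated experiment: lognormal(mu=2, sigma=0.4) retention times,
# observed only as counts in hourly sampling intervals (hypothetical).
rng = np.random.default_rng(1)
times = rng.lognormal(2.0, 0.4, 1000)
edges = np.arange(0.0, 30.0, 1.0)
counts, _ = np.histogram(times, edges)
mu_hat, sigma_hat = fit_lognormal_interval_censored(edges[:-1], edges[1:], counts)
print(mu_hat, sigma_hat)
```

Fitting the interval probabilities directly avoids the bias of pretending every propagule was recovered exactly at a bound or midpoint of its interval.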
NASA Astrophysics Data System (ADS)
Shrivastava, Prashant Kumar; Pandey, Arun Kumar
2018-06-01
Inconel-718 is in high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting this alloy due to its low thermal conductivity, low elasticity and high chemical affinity at elevated temperature. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials are also faced by traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing by appropriate control of the different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second order regression models have been developed for the different quality characteristics by using the experimental data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on the hybrid approach of multiple regression analysis and genetic algorithm. The comparison of optimization results to experimental results shows an improvement of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of different process parameters on the quality characteristics are also discussed.
NASA Astrophysics Data System (ADS)
Shrivastava, Prashant Kumar; Pandey, Arun Kumar
2018-03-01
Inconel-718 is one of the most demanding advanced engineering materials because of its superior properties. Conventional machining techniques face many problems in cutting intricate profiles in these materials due to their low thermal conductivity, low elasticity and high chemical affinity at elevated temperature. Laser beam cutting is one of the advanced cutting methods that may be used to achieve geometrical accuracy with more precision by suitable management of the input process parameters. In this research work, an experimental investigation of the pulsed Nd:YAG laser cutting of Inconel-718 has been carried out. The experiments have been conducted using a well planned L27 orthogonal array. The experimentally measured values of the different quality characteristics have been used for developing second order regression models of bottom kerf deviation (KD), bottom kerf width (KW) and kerf taper (KT). The developed models of the different quality characteristics have been utilized as quality functions for single-objective optimization using the particle swarm optimization (PSO) method. The optimum results obtained by the proposed hybrid methodology have been compared with experimental results. The comparison shows that individual improvements of 75%, 12.67% and 33.70% in bottom kerf deviation, bottom kerf width and kerf taper have been observed. The parametric effects of the most significant input process parameters on the quality characteristics have also been discussed.
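The single-objective optimization step can be sketched with a minimal particle swarm optimizer applied to a stand-in second-order response surface; the kerf model, its coefficients, and the coded variable ranges below are hypothetical, not the paper's fitted regressions.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (sketch of the
    single-objective step; the study's regression models of KD, KW
    and KT play the role of the objective f)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical second-order response surface for kerf taper in two
# coded inputs (e.g. pulse power, cutting speed); minimum at (1, -0.5).
def kerf_model(p):
    x, y = p
    return 0.2 + (x - 1.0)**2 + 0.5 * (y + 0.5)**2 + 0.3 * (x - 1.0) * (y + 0.5)

best_x, best_f = pso(kerf_model, np.array([[-2.0, 2.0], [-2.0, 2.0]]))
print(best_x, best_f)
```

On a smooth convex surface like this, PSO recovers the analytic optimum; the value of the hybrid approach lies in applying the same search to regression surfaces that need not be convex.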
Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations
NASA Astrophysics Data System (ADS)
Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying
2010-09-01
Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
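The risk term can be sketched with an empirical CVaR estimator over Monte Carlo cost samples; the Gaussian cost model, confidence level, and penalty weight below are illustrative placeholders, not the paper's execution-cost dynamics.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: mean loss over the worst
    (1 - alpha) fraction of simulated outcomes."""
    losses = np.sort(losses)
    tail = losses[int(np.ceil(alpha * len(losses))):]
    return tail.mean()

def objective(costs, lam=1.0, alpha=0.95):
    """Mean execution cost plus a CVaR risk penalty -- the kind of
    mean-risk objective evaluated by simulation when optimizing the
    coefficients of a parametric strategy (weights illustrative)."""
    return costs.mean() + lam * cvar(costs, alpha)

# Monte Carlo execution-cost samples from a stand-in Gaussian model.
rng = np.random.default_rng(2)
simulated_costs = rng.normal(100.0, 10.0, 100_000)
risk = cvar(simulated_costs)
obj = objective(simulated_costs)
print(risk, obj)
```

For a Normal(100, 10) cost, the 95% CVaR is analytically about 120.6, so the empirical estimator can be sanity-checked against that value.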
Model Adaptation in Parametric Space for POD-Galerkin Models
NASA Astrophysics Data System (ADS)
Gao, Haotian; Wei, Mingjun
2017-11-01
The development of low-order POD-Galerkin models is largely motivated by the expectation that a model developed with a set of parameters at their native values can predict the dynamic behavior of the same system under different parametric values, in other words, a successful model adaptation in parametric space. However, most of the time, even a small deviation of parameters from their original values may lead to large deviations or unstable results. It has been shown that adding more information (e.g. a steady state, the mean of a different unsteady state, or an entirely different set of POD modes) may improve the prediction of flow at other parametric states. For the simple case of flow past a fixed cylinder, an orthogonal mean mode at a different Reynolds number may stabilize the POD-Galerkin model when the Reynolds number is changed. For the more complicated case of flow past an oscillating cylinder, a global POD-Galerkin model is first applied to handle the moving boundaries, and then more information (e.g. more POD modes) is required to predict the flow under different oscillation frequencies. Supported by ARL.
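The first step of building such a model, extracting POD modes from flow snapshots, can be sketched via the SVD of the mean-subtracted snapshot matrix; the traveling-wave snapshot set below is synthetic, not the cylinder-flow data.

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Extract POD modes from a snapshot matrix (columns = flow states)
    via SVD of the mean-subtracted data -- the first step of building a
    POD-Galerkin model (generic sketch, not the paper's solver)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = S**2 / np.sum(S**2)  # fraction of fluctuation energy per mode
    return mean, U[:, :n_modes], energy[:n_modes]

# Synthetic "flow": a single traveling wave, which POD captures
# exactly with two modes (sin and cos spatial structures).
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 80)
snaps = np.array([np.sin(x - 2.0 * ti) for ti in t]).T  # (space, time)
mean, modes, energy = pod_modes(snaps, 2)
print(energy.sum())  # ~1.0: two modes suffice for one traveling wave
```

Changing a parameter (here, the wave speed) changes the snapshot set, which is exactly why a basis built at one parameter value may need supplementary modes to remain predictive at another.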
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
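The numerical core, conjugate gradient with a diagonal preconditioner, can be sketched on a generic symmetric positive-definite system; the synthetic matrix below stands in for the PET optimization problem, and plain Jacobi scaling stands in for the authors' parameter-to-sensitivity preconditioner.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient for A x = b with a diagonal
    preconditioner (generic sketch of the scheme the authors build on;
    their diagonal entries are parameter-to-sensitivity ratios)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply preconditioner M^{-1}
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Synthetic ill-conditioned SPD system; Jacobi scaling aids convergence.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
A = Q @ np.diag(np.logspace(0, 3, 50)) @ Q.T
b = rng.normal(size=50)
x = pcg(A, b, 1.0 / np.diag(A))
residual = np.linalg.norm(A @ x - b)
print(residual)
```

The preconditioner rescales the search directions so that badly scaled unknowns (in the paper, kinetic parameters with very different sensitivities) converge at comparable rates.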
Nishiura, Hiroshi
2009-01-01
Determination of the most appropriate quarantine period for those exposed to smallpox is crucial to the construction of an effective preparedness program against a potential bioterrorist attack. This study reanalyzed data on the incubation period distribution of smallpox to allow the optimal quarantine period to be objectively calculated. In total, 131 cases of smallpox were examined; incubation periods were extracted from four different sets of historical data and only cases arising from exposure for a single day were considered. The mean (median and standard deviation (SD)) incubation period was 12.5 (12.0, 2.2) days. Assuming lognormal and gamma distributions for the incubation period, maximum likelihood estimates (and corresponding 95% confidence interval (CI)) of the 95th percentile were 16.4 (95% CI: 15.6, 17.9) and 16.2 (95% CI: 15.5, 17.4) days, respectively. Using a non-parametric method, the 95th percentile point was estimated as 16 (95% CI: 15, 17) days. The upper 95% CIs of the incubation periods at the 90th, 95th and 99th percentiles were shorter than 17, 18 and 23 days, respectively, using both parametric and non-parametric methods. These results suggest that quarantine measures can ensure non-infection among those exposed to smallpox with probabilities higher than 95-99%, if the exposed individuals are quarantined for 18-23 days after the date of contact tracing.
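The parametric percentile estimate can be sketched by fitting a lognormal to incubation periods and reading off its upper percentiles; the synthetic sample below merely matches the reported mean (12.5 d) and SD (2.2 d), it is not the historical case data.

```python
import numpy as np
from scipy import stats

def quarantine_length(samples, percentile=0.99):
    """Fit a lognormal to observed incubation periods and return the
    given percentile -- the day by which that fraction of infected,
    exposed individuals would have developed symptoms (sketch; the
    study also used gamma and non-parametric estimates)."""
    sigma, _, scale = stats.lognorm.fit(samples, floc=0)
    return stats.lognorm.ppf(percentile, sigma, scale=scale)

# Synthetic incubation data matching the reported mean ~12.5 d and
# SD ~2.2 d (illustrative; the paper used 131 historical cases).
mu = np.log(12.5**2 / np.sqrt(12.5**2 + 2.2**2))
s = np.sqrt(np.log(1 + (2.2 / 12.5) ** 2))
rng = np.random.default_rng(4)
data = rng.lognormal(mu, s, 131)
q95 = quarantine_length(data, 0.95)
q99 = quarantine_length(data, 0.99)
print(q95, q99)
```

For a lognormal with this mean and SD the 95th percentile is about 16.4 days, consistent with the abstract's parametric estimate; the quarantine recommendation then follows from the upper confidence bound on such percentiles.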
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event and are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance-constrained programming under different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, computational experiments are performed on realistic-scale case problems. The results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
Zhou, Dong; Zhang, Hui; Ye, Peiqing
2016-01-01
Lateral penumbra of multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf end induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With biobjective function of penumbra mean and variance introduced, genetic algorithm is carried out for approximating the Pareto frontier. Results show that for circular arc leaf end objective function is convex and convergence to optimal solution is guaranteed using gradient based iterative method. It is found that optimal leaf end in the shape of Bézier curve achieves minimal standard deviation, while using B-spline minimum of penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of multileaf collimator.
A New Control Paradigm for Stochastic Differential Equations
NASA Astrophysics Data System (ADS)
Schmid, Matthias J. A.
This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. 
The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.
ERIC Educational Resources Information Center
Sobh, Tarek M.; Tibrewal, Abhilasha
2006-01-01
Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…
Ku band low noise parametric amplifier
NASA Technical Reports Server (NTRS)
1976-01-01
A low noise, Ku-band parametric amplifier (paramp) was developed. The unit is a spacecraft-qualifiable, prototype parametric amplifier for eventual application in the shuttle orbiter. The amplifier was required to have a noise temperature of less than 150 K; a noise temperature of less than 120 K at a gain level of 17 dB was achieved. A 3-dB bandwidth in excess of 350 MHz was attained, while deviation from phase linearity of about ±1° over 50 MHz was achieved. The paramp operates within specification over an ambient temperature range of -5 C to +50 C. The performance requirements and the operation of the Ku-band parametric amplifier system are described. The final test results are also given.
A repeatable and scalable fabrication method for sharp, hollow silicon microneedles
NASA Astrophysics Data System (ADS)
Kim, H.; Theogarajan, L. S.; Pennathur, S.
2018-03-01
Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled silicon hollow microneedles. We theoretically verified our parametric results with diffusion reaction equations and created a design guideline for a varied set of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles with any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different processes. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO3 bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is non-labor intensive, well defined by theory, and straightforward for wafer scale mass production, opening doors to a plethora of potential medical and biosensing applications.
Multi Response Optimization of Laser Micro Marking Process: A Grey-Fuzzy Approach
NASA Astrophysics Data System (ADS)
Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.
2017-07-01
The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. The optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. This paper presents a hybrid approach of grey relational analysis and fuzzy logic to obtain the optimal parametric combination for better laser beam micro marking on gallium nitride (GaN) work material. Response surface methodology has been implemented for the design of experiments, considering three parameters with five levels each. The parameters current, frequency and scanning speed have been considered, and mark width, mark depth and mark intensity have been taken as the process responses.
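The grey relational stage of the hybrid approach can be sketched as follows: normalize each response, compute grey relational coefficients against the ideal, and average them into a grade per experiment. The response data and the choice of larger/smaller-the-better directions below are hypothetical, and the fuzzy-logic stage of the paper's hybrid is omitted.

```python
import numpy as np

def grey_relational_grade(responses, larger_better, zeta=0.5):
    """Grey relational analysis (sketch): normalize each response to
    [0, 1], form grey relational coefficients with distinguishing
    coefficient zeta, and average them into one grade per experiment."""
    R = np.asarray(responses, dtype=float)
    norm = np.empty_like(R)
    for j, bigger in enumerate(larger_better):
        col = R[:, j]
        if bigger:   # larger-the-better (e.g. mark depth)
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:        # smaller-the-better (e.g. mark width)
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm  # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Hypothetical responses for four trials: [mark width, mark depth, intensity]
data = [[42.0,  8.0, 0.61],
        [38.0, 11.0, 0.72],
        [45.0,  9.5, 0.55],
        [40.0, 12.0, 0.70]]
grade = grey_relational_grade(data, larger_better=[False, True, True])
print(grade.argmax())  # index of the best parametric combination
```

Ranking the grades converts the multi-response problem into a single-response one; the fuzzy stage then softens the crisp grade computation.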
Parametric Blade Study Test Report, Rotor Configuration Number 2
1988-11-01
[Extraction residue from the report's list of figures (page numbers omitted): Rotor Incidence Angle (100% N); Rotor Relative Inlet Mach Number (100% N); Rotor Loss Coefficient (100% N); Rotor Diffusion Factor (100% N); Rotor Deviation Angle (100% N); Stator Incidence Angle (100% N); Stator Deviation Angle (90% N); Stator Loss Coefficient (90% N); Static Pressure Distribution.]
GOCI image enhancement using an MTF compensation technique for coastal water applications.
Oh, Eunsong; Choi, Jong-Kuk
2014-11-03
The Geostationary Ocean Color Imager (GOCI) is the first optical sensor in geostationary orbit for monitoring the ocean environment around the Korean Peninsula. This paper discusses on-orbit modulation transfer function (MTF) estimation with the pulse-source method and its compensation results for the GOCI. Additionally, by analyzing the relationship between the MTF compensation effect and the accuracy of the secondary ocean product, we confirmed the optimal MTF compensation parameter for enhancing image quality without variation in the accuracy. In this study, MTF assessment was performed using a natural target because the GOCI system has a spatial resolution of 500 m. For MTF compensation with the Wiener filter, we fitted a point spread function with a Gaussian curve controlled by a standard deviation value (σ). After a parametric analysis for finding the optimal degradation model, the σ value of 0.4 was determined to be an optimal indicator. Finally, the MTF value was enhanced from 0.1645 to 0.2152 without degradation of the accuracy of the ocean color product. Enhanced GOCI images by MTF compensation are expected to recognize small-scale ocean products in coastal areas with sharpened geometric performance.
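The compensation step can be sketched as frequency-domain Wiener filtering under a Gaussian PSF model (a Gaussian PSF has a Gaussian MTF); the σ value, noise-to-signal ratio, and test image below are illustrative, not the GOCI-calibrated settings.

```python
import numpy as np

def wiener_restore(image, sigma, nsr=1e-4):
    """Wiener-filter MTF compensation assuming a Gaussian PSF, whose
    MTF is H(f) = exp(-2*pi^2*sigma^2*|f|^2). sigma (pixels) and the
    noise-to-signal ratio nsr are illustrative stand-ins for the
    GOCI-tuned values."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    W = H / (H**2 + nsr)  # Wiener filter: inverse filter damped by noise term
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Demo: blur a step edge with the same Gaussian MTF, then restore it.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
H = np.exp(-2 * np.pi**2 * 1.0**2 * (fx**2 + fy**2))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_restore(blurred, sigma=1.0)
err_blur = np.abs(blurred - img).mean()
err_rest = np.abs(restored - img).mean()
print(err_blur, err_rest)
```

The `nsr` term caps high-frequency amplification, which is the lever the paper tunes (via the parametric analysis of σ) to sharpen imagery without degrading the derived ocean-color products.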
Zhou, Dong; Zhang, Hui; Ye, Peiqing
2016-01-01
Lateral penumbra of the multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf end induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and the ray tracing algorithm, our model is well suited to cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With a biobjective function of penumbra mean and variance introduced, a genetic algorithm is carried out to approximate the Pareto frontier. Results show that for a circular-arc leaf end the objective function is convex and convergence to the optimal solution is guaranteed using a gradient-based iterative method. It is found that the optimal leaf end in the shape of a Bézier curve achieves minimal standard deviation, while using a B-spline the minimum penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of multileaf collimators. PMID:27110274
NASA Astrophysics Data System (ADS)
Foyo-Moreno, I.; Vida, J.; Olmo, F. J.; Alados-Arboledas, L.
2000-11-01
Since the discovery of the ozone depletion in Antarctica and the globally declining trend of stratospheric ozone concentration, public and scientific concern has been raised over the last decades. A very important consequence of this fact is the increased broadband and spectral UV radiation in the environment and the biological effects and health risks that may take place in the near future. The absence of widespread measurements of this radiometric flux has led to the development and use of alternative estimation procedures such as parametric approaches. Parametric models compute the radiant energy using available atmospheric parameters. Some parametric models compute the global solar irradiance at surface level by addition of its direct beam and diffuse components. In the present work, we have developed a comparison between two cloudless-sky parametrization schemes. Both methods provide an estimation of the solar spectral irradiance that can be integrated spectrally within the limits of interest. For this test we have used data recorded in a radiometric station located at Granada (37.180°N, 3.580°W, 660 m a.m.s.l.), an inland location. The database includes hourly values of the relevant variables covering the years 1994-95. The performance of the models has been tested in relation to their predictive capability of global solar irradiance in the UV range (290-385 nm). After our study, it appears that information concerning the aerosol radiative effects is fundamental in order to obtain a good estimation. The original version of SPCTRAL2 provides estimates of the experimental values with negligible mean bias deviation. This suggests not only the appropriateness of the model but also the suitability of the aerosol features fixed in it for Granada conditions. The SMARTS2 model offers increased flexibility concerning the selection of different aerosol models included in the code and provides the best results when the selected models are those considered as urban. 
Although SMARTS2 provides slightly worse results, both models give estimates of solar ultraviolet irradiance with mean bias deviation below 5% and root mean square deviation close to the experimental errors.
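The two error statistics used to rank the models can be written down directly. A minimal sketch follows; expressing both as a percentage of the mean measured irradiance is an assumption about the exact convention used in the study.

```python
import numpy as np

def mean_bias_deviation(pred, obs):
    # Average signed error; near zero means no systematic over/under-estimation
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.mean(pred - obs)

def rmsd(pred, obs):
    # Root mean square deviation: overall scatter of model vs. measurement
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

def as_percent(stat, obs):
    # Express either statistic relative to the mean measured value
    return 100.0 * stat / np.mean(obs)
```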
Reference interval computation: which method (not) to choose?
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2012-07-11
When different methods are applied to reference interval (RI) calculation the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For the other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using untransformed parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of RI calculation, provided it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
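A nonparametric bootstrap of the central 95% reference interval, of the kind compared in the study, can be sketched as follows. Details such as the number of resamples and the percentile estimator are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bootstrap_ri(values, n_boot=2000, lo=2.5, hi=97.5, seed=0):
    # Bootstrap the 2.5th/97.5th percentiles, i.e. the reference interval bounds
    rng = np.random.default_rng(seed)
    vals = np.asarray(values, dtype=float)
    lows, highs = [], []
    for _ in range(n_boot):
        s = rng.choice(vals, size=vals.size, replace=True)
        lows.append(np.percentile(s, lo))
        highs.append(np.percentile(s, hi))
    # Report the mean of the bootstrapped bounds as the RI estimate
    return np.mean(lows), np.mean(highs)
```

For a standard normal reference population the true interval is approximately (-1.96, 1.96), which a moderate sample recovers to within sampling error.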
Empirical scoring functions for advanced protein-ligand docking with PLANTS.
Korb, Oliver; Stützle, Thomas; Exner, Thomas E
2009-01-01
In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean list_nc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.
NASA Astrophysics Data System (ADS)
Smetanin, S. N.; Jelínek, M., Jr.; Kubeček, V.; Jelínková, H.
2015-09-01
Optimal conditions for low-threshold collinear parametric Raman comb generation in calcite (CaCO3) are experimentally investigated under 20 ps laser pulse excitation, in agreement with the theoretical study. Collinear parametric Raman generation of the highest number of Raman components was achieved in short calcite crystals corresponding to the optimal condition of Stokes-anti-Stokes coupling. At the excitation wavelength of 1064 nm, using the optimum-length crystal resulted in effective multi-octave Raman frequency comb generation containing up to five anti-Stokes and more than four Stokes components (from 674 nm to 1978 nm). Pumping at 532 nm resulted in Raman frequency comb generation from the 2nd anti-Stokes component at 477 nm up to the 4th Stokes component at 692 nm. Using a crystal of non-optimal length leads to generation of Stokes components only, with higher thresholds, because of cascade-like stimulated Raman scattering with suppressed parametric coupling.
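The comb line positions follow from adding or subtracting integer multiples of the calcite Raman shift (≈1086 cm⁻¹ for the ν₁ mode of CaCO₃, a standard value assumed here) to the pump frequency. A quick check reproduces the wavelengths quoted above:

```python
C_NM_THZ = 2.99792458e5  # speed of light in nm*THz

def comb_wavelengths(pump_nm, shift_cm, n_stokes, n_antistokes):
    # Comb line m: nu = nu_pump - m*dnu (m > 0 Stokes, m < 0 anti-Stokes)
    dnu = shift_cm * 2.99792458e-2  # convert cm^-1 to THz
    nu0 = C_NM_THZ / pump_nm
    orders = range(-n_antistokes, n_stokes + 1)
    return {m: C_NM_THZ / (nu0 - m * dnu) for m in orders}
```

For a 1064 nm pump this gives the 5th anti-Stokes near 674 nm and the 4th Stokes near 1978 nm; for 532 nm pumping, the 2nd anti-Stokes near 477 nm and the 4th Stokes near 692 nm, matching the reported comb spans.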
Sua, Yong Meng; Chen, Jia-Yang; Huang, Yu-Ping
2018-06-15
We report wideband optical parametric amplification (OPA) over 14 THz covering the telecom S, C, and L bands with an observed maximum parametric gain of 38.3 dB. The OPA is realized through cascaded second-harmonic generation and difference-frequency generation (cSHG-DFG) in a 2 cm periodically poled LiNbO3 (PPLN) waveguide. With a tailored cross-section geometry, the waveguide is optimally mode matched for efficient cascaded nonlinear wave mixing. We also identify and study the effect of competing nonlinear processes in this cSHG-DFG configuration.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors decide on treatment from the perspective of mechanical analysis, this work built a computer-assisted optimization system for treatment of femoral neck fracture oriented to clinical application. The system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and their positions, and input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing operation. The post-processing module included extraction and display of batch processing results, image generation for the batch runs, execution of the optimization program, and display of the optimal result. The system implemented the whole workflow from input of fracture parameters to output of the optimal fixation plan according to the specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system has a friendly interface and simple operation, and its functionality can be extended quickly by modifying a single module.
Creating A Data Base For Design Of An Impeller
NASA Technical Reports Server (NTRS)
Prueger, George H.; Chen, Wei-Chung
1993-01-01
Report describes use of Taguchi method of parametric design to create data base facilitating optimization of design of impeller in centrifugal pump. Data base enables systematic design analysis covering all significant design parameters. Reduces time and cost of parametric optimization of design: for particular impeller considered, one can cover 4,374 designs by computational simulations of performance for only 18 cases.
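The 4,374-case figure is consistent with a Taguchi L18 orthogonal array layout, which accommodates one two-level factor and seven three-level factors: the full factorial would then require 2·3⁷ runs. The exact factor split is an assumption inferred from the numbers, but the count checks out:

```python
from itertools import product

# Hypothetical factor layout: one factor at 2 levels, seven factors at 3 levels
levels = [2] + [3] * 7
full_factorial = len(list(product(*(range(n) for n in levels))))
print(full_factorial)  # 2 * 3**7 = 4374 designs, versus only 18 Taguchi runs
```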
Pleil, Joachim D
2016-01-01
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around the next step, the choice of using standard error of the mean or the calculated standard deviation to compare or predict measurement results.
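The distinction the commentary draws can be made concrete: the sample standard deviation describes the spread of individual measurements, while the standard error of the mean describes the uncertainty of the mean itself and shrinks as 1/√n. A minimal sketch:

```python
import math

def sd_and_sem(xs):
    # Sample standard deviation (ddof = 1) and standard error of the mean
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return sd, sd / math.sqrt(n)
```

Comparing two groups of individuals calls for the SD; predicting how precisely a group mean is known calls for the SEM, which is why the choice matters when interpreting biomarker data.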
NASA Astrophysics Data System (ADS)
Chrismianto, Deddy; Zakki, Ahmad Fauzan; Arswendo, Berlian; Kim, Dong Joon
2015-12-01
Optimization analysis and computational fluid dynamics (CFD) have been applied simultaneously, in which a parametric model plays an important role in finding the optimal solution. However, it is difficult to create a parametric model for a complex shape with irregular curves, such as a submarine hull form. In this study, the cubic Bezier curve and curve-plane intersection method are used to generate a solid model of a parametric submarine hull form taking three input parameters into account: nose radius, tail radius, and length-height hull ratio (L/H). Application program interface (API) scripting is also used to write code in the ANSYS design modeler. The results show that the submarine shape can be generated with some variation of the input parameters. An example is given that shows how the proposed method can be applied successfully to a hull resistance optimization case. The parametric design of the middle submarine type was chosen to be modified. First, the original submarine model was analyzed, in advance, using CFD. Then, using the response surface graph, some candidate optimal designs with a minimum hull resistance coefficient were obtained. Further, the optimization method in goal-driven optimization (GDO) was implemented to find the submarine hull form with the minimum hull resistance coefficient (Ct). The calculated difference in Ct values between the initial submarine and the optimum submarine is around 0.26%, with the Ct of the initial submarine and the optimum submarine being 0.00150826 and 0.00150429, respectively. The results show that the optimum submarine hull form has a higher nose radius (rn) and higher L/H than those of the initial submarine shape, while the tail radius (rt) is smaller than that of the initial shape.
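The cubic Bezier curve underlying the hull generator can be evaluated with de Casteljau's algorithm. A minimal sketch follows; the control points in the test are arbitrary illustrations, not the submarine profile:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    # De Casteljau evaluation of a cubic Bezier curve at parameter t in [0, 1]
    def lerp(a, b, u):
        return tuple(ai + u * (bi - ai) for ai, bi in zip(a, b))
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    return lerp(r0, r1, t)
```

The curve interpolates its end control points and is pulled toward the two interior ones, which is what lets a handful of parameters (nose radius, tail radius, L/H) shape the whole profile.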
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
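The multivariate quality loss idea can be illustrated as a weighted sum of squared deviations of each response from its target, each scaled by its specification range. The normalization and weights shown here are assumptions for illustration; the paper's exact formulation may differ.

```python
def quality_loss(responses, targets, lsl, usl, weights):
    # Weighted squared deviation of each response from target,
    # normalized by its (USL - LSL) specification range
    total = 0.0
    for y, t, lo, hi, w in zip(responses, targets, lsl, usl, weights):
        total += w * ((y - t) / (hi - lo)) ** 2
    return total
```

Minimizing this single scalar over the process-parameter grid is what turns the multi-response problem into an ordinary single-objective search.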
Parametric optimal control of uncertain systems under an optimistic value criterion
NASA Astrophysics Data System (ADS)
Li, Bo; Zhu, Yuanguo
2018-01-01
It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
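For a scalar linear-quadratic problem, the Riccati differential equation can be integrated numerically backward from the terminal condition, which illustrates why the exact feedback is a complex time-oriented function. This is a deterministic sketch of the Riccati mechanics only; the uncertain optimistic-value formulation of the paper is not modeled here.

```python
def riccati_gain(a, b, q, r, horizon=10.0, dt=1e-3):
    # Euler integration in reverse time of the scalar Riccati ODE
    #   -dp/dt = 2*a*p + q - (b**2 / r) * p**2,   p(T) = 0
    p = 0.0
    for _ in range(int(horizon / dt)):
        p += dt * (2 * a * p + q - (b * b / r) * p * p)
    return p  # p at t = 0; the feedback gain is k = (b / r) * p
```

For a long horizon p(0) approaches the algebraic Riccati root, e.g. p = 1 when a = 0 and b = q = r = 1.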
Parametric Modeling as a Technology of Rapid Prototyping in Light Industry
NASA Astrophysics Data System (ADS)
Tomilov, I. N.; Grudinin, S. N.; Frolovsky, V. D.; Alexandrov, A. A.
2016-04-01
The paper deals with a parametric modeling method for virtual mannequins for the purposes of design automation in the clothing industry. The described approach includes the steps of generation of the basic model on the basis of the initial one (obtained by a 3D-scanning process), its parameterization, and deformation. The complex surfaces are represented by a wireframe model. The modeling results are evaluated with a set of similarity factors. Deformed models are compared with their virtual prototypes. The results of modeling are estimated by the standard deviation factor.
NASA Technical Reports Server (NTRS)
Hill, Charles S.; Oliveras, Ovidio M.
2011-01-01
Evolution of the 3D strain field during ASTM-D-7078 v-notch rail shear tests on 8-ply quasi-isotropic carbon fiber/epoxy laminates was determined by optical photogrammetry using an ARAMIS system. Specimens having non-optimal geometry and minor discrepancies in dimensional tolerances were shown to display non-symmetry and/or stress concentration in the vicinity of the notch relative to a specimen meeting the requirements of the standard, but resulting shear strength and modulus values remained within acceptable bounds of standard deviation. Based on these results, and reported difficulty machining specimens to the required tolerances using available methods, it is suggested that a parametric study combining analytical methods and experiment may provide rationale to increase the tolerances on some specimen dimensions, reducing machining costs, increasing the proportion of acceptable results, and enabling a wider adoption of the test method.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
NASA Astrophysics Data System (ADS)
Braun, David J.; Sutas, Andrius; Vijayakumar, Sethu
2017-01-01
Theory predicts that parametrically excited oscillators, tuned to operate under resonant conditions, are capable of large-amplitude oscillation useful in diverse applications, such as signal amplification, communication, and analog computation. However, due to amplitude saturation caused by nonlinearity, lack of robustness to model uncertainty, and limited sensitivity to parameter modulation, these oscillators require fine-tuning and strong modulation to generate robust large-amplitude oscillation. Here we present a principle of self-tuning parametric feedback excitation that alleviates the above-mentioned limitations. This is achieved using a minimalistic control implementation that performs (i) self-tuning (slow parameter adaptation) and (ii) feedback pumping (fast parameter modulation), without sophisticated signal processing of past observations. The proposed approach provides near-optimal amplitude maximization without requiring model-based control computation, previously perceived inevitable to implement optimal control principles in practical application. Experimental implementation of the theory shows that the oscillator tunes itself near the onset of dynamic bifurcation to achieve extreme sensitivity to small resonant parametric perturbations. As a result, it achieves large-amplitude oscillations by capitalizing on the effect of nonlinearity, despite substantial model uncertainties and strong unforeseen external perturbations. We envision the present finding to provide an effective and robust approach to parametric excitation when it comes to real-world application.
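The core phenomenon, amplitude growth when a parameter is modulated at twice the natural frequency, can be reproduced with a few lines of simulation. This is a generic damped Mathieu-type oscillator, not the authors' experimental system; all parameter values are illustrative assumptions.

```python
import math

def peak_amplitude(pump_freq, h=0.4, zeta=0.01, t_end=50.0, dt=1e-3):
    # Semi-implicit Euler for x'' + 2*zeta*x' + (1 + h*cos(pump_freq*t))*x = 0
    # (natural frequency is 1; stiffness modulated with depth h)
    x, v, t, peak = 0.01, 0.0, 0.0, 0.0
    while t < t_end:
        v += dt * (-2 * zeta * v - (1 + h * math.cos(pump_freq * t)) * x)
        x += dt * v
        t += dt
        peak = max(peak, abs(x))
    return peak

resonant = peak_amplitude(2.0)  # pump at twice the natural frequency: growth
detuned = peak_amplitude(3.0)   # off-resonant pump: damping wins, no growth
```

Only the resonant pump overcomes the damping, which is exactly why fine-tuning (or the self-tuning mechanism proposed above) is needed in practice.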
Zhang, Dongsheng; Wang, Shiyu; Xiu, Jie
2017-11-01
Elastic wave quality determines the operating performance of the traveling wave ultrasonic motor (TWUM). The time-variant circumferential force from the shrinkage of the piezoelectric ceramic is one of the factors that distort the elastic wave. The distorted waveshape deviates from the ideal sinusoidal form and affects the contact mechanics and driving performance. An analytical dynamic model of a ring ultrasonic motor is developed. Based on this model, the piezoelectric parametric effects on the wave distortion and contact mechanics are examined. The multi-scale method is employed to obtain the unstable regions and the distorted wave response. The unstable region is verified by Floquet theory. Since the waveshape affects the contact mechanism, a contact model involving the distorted waveshape and the normal stiffness of the contact layer is established. The contact model is solved by numerical calculation. The results verify that the deformation of the contact layer deviates from a sinusoidal waveshape and the pressure distribution is changed, which influences the output characteristics directly. The surface speed within the contact region is averaged such that the rotor speed decreases for lower torque and increases for larger torque. The effects of different parametric strengths, excitation frequencies and pre-pressures on the pressure distribution and torque-speed relation are compared. Copyright © 2017 Elsevier B.V. All rights reserved.
Goudriaan, Marije; Van den Hauwe, Marleen; Simon-Martinez, Cristina; Huenaerts, Catherine; Molenaers, Guy; Goemans, Nathalie; Desloovere, Kaat
2018-04-30
Prolonged ambulation is considered important in children with Duchenne muscular dystrophy (DMD). However, previous studies analyzing DMD gait were sensitive to false positive outcomes, caused by uncorrected multiple comparisons, regional focus bias, and inter-component covariance bias. Also, while muscle weakness is often suggested to be the main cause for the altered gait pattern in DMD, this was never verified. Our research question was twofold: 1) are we able to confirm the sagittal kinematic and kinetic gait alterations described in a previous review with statistical non-parametric mapping (SnPM)? And 2) are these gait deviations related to lower limb weakness? We compared gait kinematics and kinetics of 15 children with DMD and 15 typically developing (TD) children (5-17 years), with a two-sample Hotelling's T² test and post-hoc two-tailed, two-sample t-tests. We used canonical correlation analyses to study the relationship between weakness and altered gait parameters. For all analyses, the α-level was corrected for multiple comparisons, resulting in α = 0.005. We only found one of the previously reported kinematic deviations: the children with DMD had an increased knee flexion angle during swing (p = 0.0006). Observed gait deviations that were not reported in the review were an increased hip flexion angle during stance (p = 0.0009) and swing (p = 0.0001), altered combined knee and ankle torques (p = 0.0002), and decreased power absorption during stance (p = 0.0001). No relationships between weakness and these gait deviations were found. We were not able to replicate the gait deviations in DMD previously reported in the literature, thus DMD gait remains undefined. Further, weakness does not seem to be linearly related to altered gait features. The progressive nature of the disease requires larger study populations and longitudinal analyses to gain more insight into DMD gait and its underlying causes. Copyright © 2018 Elsevier B.V. All rights reserved.
Modelling and multi-parametric control for delivery of anaesthetic agents.
Dua, Pinky; Dua, Vivek; Pistikopoulos, Efstratios N
2010-06-01
This article presents model predictive controllers (MPCs) and multi-parametric model-based controllers for delivery of anaesthetic agents. The MPC can take into account constraints on drug delivery rates and state of the patient but requires solving an optimization problem at regular time intervals. The multi-parametric controller has all the advantages of the MPC and does not require repetitive solution of optimization problem for its implementation. This is achieved by obtaining the optimal drug delivery rates as a set of explicit functions of the state of the patient. The derivation of the controllers relies on using detailed models of the system. A compartmental model for the delivery of three drugs for anaesthesia is developed. The key feature of this model is that mean arterial pressure, cardiac output and unconsciousness of the patient can be simultaneously regulated. This is achieved by using three drugs: dopamine (DP), sodium nitroprusside (SNP) and isoflurane. A number of dynamic simulation experiments are carried out for the validation of the model. The model is then used for the design of model predictive and multi-parametric controllers, and the performance of the controllers is analyzed.
Moses, J; Huang, S-W; Hong, K-H; Mücke, O D; Falcão-Filho, E L; Benedick, A; Ilday, F O; Dergachev, A; Bolger, J A; Eggleton, B J; Kärtner, F X
2009-06-01
We present a 9 GW peak power, three-cycle, 2.2 μm optical parametric chirped-pulse amplification source with 1.5% rms energy fluctuations and 150 mrad carrier-envelope phase fluctuations. These characteristics, in addition to excellent beam, wavefront, and pulse quality, make the source suitable for long-wavelength-driven high-harmonic generation. High stability is achieved by careful optimization of superfluorescence suppression, enabling energy scaling.
Global, Multi-Objective Trajectory Optimization With Parametric Spreading
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Phillips, Sean M.; Hughes, Kyle M.
2017-01-01
Mission design problems are often characterized by multiple, competing trajectory optimization objectives. Recent multi-objective trajectory optimization formulations enable generation of globally-optimal, Pareto solutions via a multi-objective genetic algorithm. A byproduct of these formulations is that clustering in design space can occur in evolving the population towards the Pareto front. This clustering can be a drawback, however, if parametric evaluations of design variables are desired. This effort addresses clustering by incorporating operators that encourage a uniform spread over specified design variables while maintaining Pareto front representation. The algorithm is demonstrated on a Neptune orbiter mission, and enhanced multidimensional visualization strategies are presented.
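One common spreading operator of the kind described is NSGA-II's crowding distance, which rewards points lying in sparsely populated regions of objective (or, as in this formulation, design-variable) space. This is a hedged, generic sketch, not the specific operators used in the mission-design tool:

```python
def crowding_distance(points):
    # Per-point crowding distance over a matrix of rows = points,
    # columns = objectives or design variables to be spread uniformly
    n, m = len(points), len(points[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: points[i][j])
        lo, hi = points[order[0]][j], points[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep extremes
        if hi == lo:
            continue
        for k in range(1, n - 1):
            gap = points[order[k + 1]][j] - points[order[k - 1]][j]
            dist[order[k]] += gap / (hi - lo)
    return dist
```

Selecting survivors with larger crowding distance pushes the population toward a uniform spread while the boundary points of the front are always retained.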
Strong stabilization servo controller with optimization of performance criteria.
Sarjaš, Andrej; Svečko, Rajko; Chowdhury, Amor
2011-07-01
Synthesis of a simple robust controller with a pole placement technique and an H∞ metric is the method used for control of a servo mechanism with BLDC and BDC electric motors. The method includes solving a polynomial equation on the basis of the chosen characteristic polynomial using the Manabe standard polynomial form and parametric solutions. Parametric solutions are introduced directly into the structure of the servo controller. On the basis of the chosen parametric solutions the robustness of the closed-loop system is assessed through uncertainty models and assessment of the norm ‖·‖∞. The design procedure and the optimization are performed with the differential evolution (DE) genetic algorithm. The DE optimization method determines a suboptimal solution throughout the optimization on the basis of a spectrally square polynomial and Šiljak's absolute stability test. The stability of the designed controller during the optimization is checked with Lipatov's stability condition. Both utilized approaches, Šiljak's test and Lipatov's condition, check the robustness and stability characteristics on the basis of the polynomial's coefficients, and are very convenient for automated design of closed-loop control and for application in optimization algorithms such as DE. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves, TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of [Formula: see text], the tracer transport rate (ml · min -1 · ml -1 ), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced [Formula: see text] maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced [Formula: see text] estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies
Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong
2017-01-01
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves (TACs)) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-squares fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL·min⁻¹·mL⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies
NASA Astrophysics Data System (ADS)
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-01
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves (TACs)) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans—each containing 1/8th of the total number of events—were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. 
For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-squares fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min⁻¹·ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. 
Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
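The indirect method described above reduces, per voxel, to fitting a one-tissue compartment model to a reconstructed TAC. A minimal sketch of that fitting step, with a hypothetical arterial input function, frame grid, and parameter values (none taken from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 16)            # 16 frames over ~5 min (illustrative)
dt = t[1] - t[0]
cp = 10.0 * t * np.exp(-2.0 * t)         # hypothetical arterial input function

def one_tissue_tac(t, K1, k2, vb):
    """C(t) = (1 - vb) * K1 exp(-k2 t) convolved with Cp(t), plus vb * Cp(t);
    vb is a simple blood-pool spillover fraction."""
    tissue = K1 * np.convolve(np.exp(-k2 * t), cp)[: len(t)] * dt
    return (1.0 - vb) * tissue + vb * cp

true_params = (0.9, 0.3, 0.1)            # K1 (ml/min/ml), k2 (1/min), vb
tac = one_tissue_tac(t, *true_params)
noisy = tac + np.random.default_rng(0).normal(0.0, 0.01, t.size)

# voxel-wise least-squares fit (the study uses a weighted variant)
popt, _ = curve_fit(one_tissue_tac, t, noisy, p0=(0.5, 0.5, 0.05),
                    bounds=(0.0, [5.0, 5.0, 1.0]))
```

In the study this fit is applied voxel-by-voxel over the reconstructed frames; the single synthetic TAC above only illustrates the model structure.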
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
ERIC Educational Resources Information Center
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
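For context, the Brown-Forsythe procedure is Levene's test computed on absolute deviations from group medians, and its Type I error rate can be estimated by simulation; SciPy exposes it via `center='median'`. The sample sizes and replication count below are arbitrary illustrations, not the study's design:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(1)
alpha, n_sims, rejections = 0.05, 2000, 0
for _ in range(n_sims):
    # Both groups share the same scale, so the null hypothesis is true
    g1, g2 = rng.normal(0, 1, size=30), rng.normal(0, 1, size=30)
    _, p = levene(g1, g2, center="median")   # Brown-Forsythe alignment
    rejections += p < alpha

type1_rate = rejections / n_sims             # should sit near the nominal 0.05
```

Power estimates follow the same recipe with unequal group scales under the alternative.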
Ultra-flat wideband single-pump Raman-enhanced parametric amplification.
Gordienko, V; Stephens, M F C; El-Taher, A E; Doran, N J
2017-03-06
We experimentally optimize a single-pump fiber optical parametric amplifier in terms of gain spectral bandwidth and gain variation (GV). We find that optimal performance is achieved with the pump tuned to the zero-dispersion wavelength of dispersion-stable highly nonlinear fiber (HNLF). We demonstrate further improvement of parametric gain bandwidth and GV by decreasing the HNLF length. We discover that Raman and parametric gain spectra produced by the same pump may be merged together to enhance overall gain bandwidth, while keeping GV low. Consequently, we report an ultra-flat gain of 9.6 ± 0.5 dB over a range of 111 nm (12.8 THz) on one side of the pump. Additionally, we demonstrate amplification of a 60 Gbit/s QPSK signal tuned over a portion of the available bandwidth with OSNR penalty less than 1 dB for Q² below 14 dB.
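Gain variation as quoted here is the half-spread of the gain across the band. A toy computation on a synthetic gain spectrum, with values invented to mimic the reported 9.6 ± 0.5 dB over 111 nm (not measured data):

```python
import numpy as np

wavelength = np.linspace(1500.0, 1611.0, 112)       # nm; hypothetical band
gain_db = 9.6 + 0.4 * np.sin(wavelength / 7.0)      # illustrative gain ripple

gv = (gain_db.max() - gain_db.min()) / 2.0          # gain variation, +/- dB
bandwidth_nm = wavelength[-1] - wavelength[0]
print(f"{gain_db.mean():.1f} +/- {gv:.1f} dB over {bandwidth_nm:.0f} nm")
```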
Analyses of ACPL thermal/fluid conditioning system
NASA Technical Reports Server (NTRS)
Stephen, L. A.; Usher, L. H.
1976-01-01
Results of engineering analyses are reported. Initial computations were made using a modified control transfer function in which the system's performance was characterized parametrically using an analytical model. The analytical model was revised to represent the latest expansion chamber fluid manifold design, and system performance predictions were made. The parameters that were independently varied in these computations are listed. The system predictions used to characterize performance are primarily transient computer plots showing the deviation of the average chamber temperature from the chamber temperature requirement. Additional computer plots were prepared. Results of parametric computations with the latest fluid manifold design are included.
From Neutron Star Observables to the Equation of State. I. An Optimal Parametrization
NASA Astrophysics Data System (ADS)
Raithel, Carolyn A.; Özel, Feryal; Psaltis, Dimitrios
2016-11-01
The increasing number and precision of measurements of neutron star masses, radii, and, in the near future, moments of inertia offer the possibility of precisely determining the neutron star equation of state (EOS). One way to facilitate the mapping of observables to the EOS is through a parametrization of the latter. We present here a generic method for optimizing the parametrization of any physically allowed EOS. We use mock EOS that incorporate physically diverse and extreme behavior to test how well our parametrization reproduces the global properties of the stars, by minimizing the errors in the observables of mass, radius, and the moment of inertia. We find that using piecewise polytropes and sampling the EOS with five fiducial densities between ~1-8 times the nuclear saturation density results in optimal errors for the smallest number of parameters. Specifically, it recreates the radii of the assumed EOS to within less than 0.5 km for the extreme mock EOS and to within less than 0.12 km for 95% of a sample of 42 proposed, physically motivated EOS. Such a parametrization is also able to reproduce the maximum mass to within 0.04 M⊙ and the moment of inertia of a 1.338 M⊙ neutron star to within less than 10% for 95% of the proposed sample of EOS.
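A piecewise polytrope of the kind described samples the EOS at fiducial densities and connects them with segments P = K·ρ^Γ, with K and Γ fixed by continuity. A sketch with invented fiducial pressures (the densities loosely follow the quoted ~1-8 ρ_sat range; none of the numbers are from the paper):

```python
import numpy as np

rho_sat = 2.7e14  # nuclear saturation density, g/cm^3
# fiducial densities spanning ~1-8 rho_sat, with hypothetical pressures
rho_fid = rho_sat * np.array([1.0, 1.4, 2.2, 3.3, 4.9, 7.4])
p_fid = np.array([3e33, 8e33, 5e34, 3e35, 1.4e36, 5e36])  # dyn/cm^2, invented

def pressure(rho):
    """Piecewise-polytrope EOS: on each segment P = K rho^Gamma, with
    K and Gamma set by continuity at the bracketing fiducial points."""
    i = np.clip(np.searchsorted(rho_fid, rho) - 1, 0, len(rho_fid) - 2)
    gamma = np.log(p_fid[i + 1] / p_fid[i]) / np.log(rho_fid[i + 1] / rho_fid[i])
    K = p_fid[i] / rho_fid[i] ** gamma
    return K * rho ** gamma
```

Optimizing the parametrization then amounts to choosing the fiducial densities so that mass, radius, and moment-of-inertia errors are minimized.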
A review of parametric approaches specific to aerodynamic design process
NASA Astrophysics Data System (ADS)
Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li
2018-04-01
Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches offer a large design space with few variables. This paper summarizes the parametric methods in common use today and briefly introduces their principles. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their abilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited, the most popular being the Free-Form Deformation method. Methods extended from two-dimensional parametric approaches have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
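Of the two-dimensional methods listed, the Class/Shape function transformation (CST) is compact enough to sketch: a class function fixes the round-nose, sharp-trailing-edge character, and a Bernstein-polynomial shape function carries the design variables. The weights below are hypothetical:

```python
import numpy as np
from math import comb

def cst_airfoil(x, weights, n1=0.5, n2=1.0):
    """CST surface: y = C(x) * S(x), where C(x) = x^n1 (1-x)^n2 is the class
    function (round nose, sharp trailing edge) and S(x) is a Bernstein
    expansion whose coefficients are the design variables."""
    n = len(weights) - 1
    C = x ** n1 * (1 - x) ** n2
    S = sum(w * comb(n, i) * x ** i * (1 - x) ** (n - i)
            for i, w in enumerate(weights))
    return C * S

x = np.linspace(0.0, 1.0, 101)                      # chordwise stations
y_upper = cst_airfoil(x, [0.17, 0.16, 0.15, 0.14])  # hypothetical weights
```

Four weights already span a useful family of upper surfaces, which is the sense in which such methods give a "large design space with few variables".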
Evaluating forest management policies by parametric linear programing
Daniel I. Navon; Richard J. McConnen
1967-01-01
Parametric linear programming, an analytical and simulation technique, explores alternative conditions and devises an optimal management plan for each condition. Its application to solving policy-decision problems in the management of forest lands is illustrated with an example.
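Parametric linear programming in this sense means re-solving an LP as a condition (here, a budget) sweeps over a range, yielding an optimal plan per condition. A toy forest-management sketch with invented coefficients:

```python
from scipy.optimize import linprog

# Toy LP (hypothetical numbers): allocate acres to two treatments to
# maximize yield subject to a land limit and a parametric budget.
yields = [-3.0, -5.0]            # linprog minimizes, so negate per-acre yield
A_ub = [[1, 1],                  # total acres <= 100
        [2, 4]]                  # per-acre cost; total <= budget (the parameter)
for budget in (150, 250, 350):
    res = linprog(yields, A_ub=A_ub, b_ub=[100, budget],
                  bounds=[(0, None)] * 2)
    print(budget, -res.fun)      # optimal plan value under each condition
```

Tracing `-res.fun` against the budget reproduces the policy curve that parametric LP studies report.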
Minozzi, M; Bonora, S; Sergienko, A V; Vallone, G; Villoresi, P
2013-02-15
We present an efficient method for optimizing the spatial profile of the entangled-photon wave function produced in a spontaneous parametric down-conversion process. A deformable mirror that modifies the wavefront of a 404 nm CW diode laser pump interacting with a nonlinear β-barium borate type-I crystal effectively controls the profile of the joint biphoton function. A feedback signal extracted from the biphoton coincidence rate is used to achieve the optimal wavefront shape. The optimization of the two-photon coupling into two single spatial modes for correlated detection provides a practical demonstration of this physical principle.
OPCPA front end and contrast optimization for the OMEGA EP kilojoule, picosecond laser
Dorrer, C.; Consentino, A.; Irwin, D.; ...
2015-09-01
OMEGA EP is a large-scale laser system that combines optical parametric amplification and solid-state laser amplification on two beamlines to deliver high-intensity, high-energy optical pulses. The temporal contrast of the output pulse is limited by the front-end parametric fluorescence and other features that are specific to parametric amplification. The impact of the two-crystal parametric preamplifier, pump-intensity noise, and pump-signal timing is experimentally studied. The implementation of a parametric amplifier pumped by a short pump pulse before stretching, further amplification, and recompression to enhance the temporal contrast of the high-energy short pulse is described.
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The stochastic models adopted for multi-season streamflow generation in hydrology fall into three classes: i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), together with disaggregation models, which aim to preserve the correlation structure at the periodic level and the aggregated annual level; ii) nonparametric models (examples are bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and the matched block bootstrap (MABB), as well as nonparametric disaggregation models), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and iii) hybrid models, which blend parametric and nonparametric models advantageously to model streamflows effectively. Despite these developments in the stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and critical drought characteristics has posed a persistent challenge to the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated based on the accuracy of prediction of the estimates of the water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables. Even then, accurate prediction of the storage and critical drought characteristics may not be ensured. 
In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, the PAR(1) model, and the matched block bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and nonparametric (MABB) components). This is achieved using an efficient evolutionary-search-based optimization tool, the non-dominated sorting genetic algorithm II (NSGA-II). This approach reduces the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (where both parametric and nonparametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (where the hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/nonlinear hybrid stochastic models at other temporal and spatial scales.
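The two objective functions named above (relative bias and relative RMSE of storage capacity) can be sketched with the standard sequent-peak estimate of storage capacity. The synthetic gamma-distributed flows and the permutation "generator" below are placeholders for a real PAR(1)/MABB hybrid model:

```python
import numpy as np

def storage_capacity(flows, demand):
    """Sequent-peak algorithm: smallest reservoir able to meet a constant
    demand from the given streamflow sequence."""
    k, peak = 0.0, 0.0
    for q in flows:
        k = max(k + demand - q, 0.0)   # running cumulative deficit
        peak = max(peak, k)
    return peak

rng = np.random.default_rng(2)
observed = rng.gamma(4.0, 25.0, 240)   # hypothetical 20 years of monthly flow
s_obs = storage_capacity(observed, demand=90.0)

# evaluate a candidate generator (here: simple permutation) over replicates
reps = np.array([storage_capacity(rng.permutation(observed), 90.0)
                 for _ in range(100)])
rel_bias = (reps.mean() - s_obs) / s_obs
rrmse = np.sqrt(np.mean((reps - s_obs) ** 2)) / s_obs
```

NSGA-II would then search the hybrid model's parameters to push (|rel_bias|, rrmse) toward the Pareto front.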
Applications of the post-Tolman-Oppenheimer-Volkoff formalism
NASA Astrophysics Data System (ADS)
Silva, Hector O.; Glampedakis, Kostas; Pappas, George; Berti, Emanuele
2017-01-01
Besides their astrophysical interest, neutron stars are promising candidates for testing theories of gravity in the strong-field regime. It is known that, generically, modifications to general relativity affect the bulk properties of neutron stars, e.g. their masses and radii, in a way that depends on the specific choice of theory. In this presentation we review a theory-agnostic approach to model relativistic stars, called the post-Tolman-Oppenheimer-Volkoff formalism. Drawing inspiration from the parametrized post-Newtonian formalism, this framework allows us to describe perturbative deviations from general relativity in the structure of neutron stars in a parametrized manner. We show that a variety of astrophysical observables (namely the surface redshift, the apparent radius, the Eddington luminosity and the orbital frequency of particles in geodesic motion around neutron stars) can be parametrized using only two parameters.
Model and parametric uncertainty in source-based kinematic models of earthquake ground motion
Hartzell, Stephen; Frankel, Arthur; Liu, Pengcheng; Zeng, Yuehua; Rahman, Shariftur
2011-01-01
Four independent ground-motion simulation codes are used to model the strong ground motion for three earthquakes: 1994 Mw 6.7 Northridge, 1989 Mw 6.9 Loma Prieta, and 1999 Mw 7.5 Izmit. These 12 sets of synthetics are used to make estimates of the variability in ground-motion predictions. In addition, ground-motion predictions over a grid of sites are used to estimate parametric uncertainty for changes in rupture velocity. We find that the combined model uncertainty and random variability of the simulations is in the same range as the variability of regional empirical ground-motion data sets. The majority of the standard deviations lie between 0.5 and 0.7 natural-log units for response spectra and 0.5 and 0.8 for Fourier spectra. The estimate of model epistemic uncertainty, based on the different model predictions, lies between 0.2 and 0.4, which is about one-half of the estimates for the standard deviation of the combined model uncertainty and random variability. Parametric uncertainty, based on variation of just the average rupture velocity, is shown to be consistent in amplitude with previous estimates, showing percentage changes in ground motion from 50% to 300% when rupture velocity changes from 2.5 to 2.9 km/s. In addition, there is some evidence that mean biases can be reduced by averaging ground-motion estimates from different methods.
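The standard deviations quoted in natural-log units are simply the spread of log residuals between observed and simulated amplitudes. A toy computation with invented spectral accelerations:

```python
import numpy as np

# Hypothetical observed vs. simulated 1-s spectral accelerations at 10 sites
obs = np.array([0.31, 0.22, 0.45, 0.18, 0.27, 0.39, 0.24, 0.33, 0.20, 0.29])
sim = np.array([0.28, 0.25, 0.38, 0.21, 0.30, 0.33, 0.27, 0.36, 0.17, 0.31])

resid = np.log(obs) - np.log(sim)   # natural-log residuals
bias = resid.mean()                  # mean bias (over/under-prediction)
sigma = resid.std(ddof=1)            # variability in natural-log units
```

Averaging the predictions of several simulation methods before computing `resid` is one way the mean bias can be reduced, as the abstract suggests.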
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Topology-dependent rationality and quantal response equilibria in structured populations
NASA Astrophysics Data System (ADS)
Roman, Sabin; Brede, Markus
2017-05-01
Given that the assumption of perfect rationality is rarely met in the real world, we explore a graded notion of rationality in socioecological systems of networked actors. We parametrize an actor's rationality via their place in a social network and quantify system rationality via the average Jensen-Shannon divergence between the game's Nash and logit quantal response equilibria. Previous work has argued that scale-free topologies maximize a system's overall rationality in this setup. Here we show that while, for certain games, it is true that increasing the degree heterogeneity of complex networks enhances rationality, rationality-optimal configurations are not scale-free. For the Prisoner's Dilemma and Stag Hunt games, we provide analytic arguments complemented by numerical optimization experiments to demonstrate that core-periphery networks composed of a few dominant hub nodes surrounded by a periphery of very low degree nodes give strikingly smaller overall deviations from rationality than scale-free networks. Similarly, for the Battle of the Sexes and the Matching Pennies games, we find that the optimal network structure is also a core-periphery graph but with a smaller difference in the average degrees of the core and the periphery. These results provide insight on the interplay between the topological structure of socioecological systems and their collective cognitive behavior, with potential applications to understanding wealth inequality and the structural features of the network of global corporate control.
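The two quantities compared in this abstract can be sketched for a 2x2 game: a logit quantal response equilibrium obtained by damped fixed-point iteration, measured against the Nash equilibrium with the Jensen-Shannon divergence. The Prisoner's Dilemma payoffs and rationality parameters λ below are hypothetical choices, not the paper's:

```python
import numpy as np

def logit_qre(payoff, lam, iters=500):
    """Logit QRE of a symmetric 2x2 game by damped fixed-point iteration:
    p_i proportional to exp(lam * expected payoff of action i)."""
    p = np.array([0.5, 0.5])
    for _ in range(iters):
        u = payoff @ p                     # expected payoff of each action
        e = np.exp(lam * (u - u.max()))    # shift for numerical stability
        p = 0.5 * p + 0.5 * e / e.sum()    # damped logit response update
    return p

def js_divergence(p, q):
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(np.where(a > 0, a / b, 1.0)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

pd = np.array([[3.0, 0.0], [5.0, 1.0]])   # Prisoner's Dilemma: rows = C, D
q_low = logit_qre(pd, lam=0.1)             # low rationality: near-uniform play
q_hi = logit_qre(pd, lam=20.0)             # high rationality: near Nash (D)
nash = np.array([0.0, 1.0])                # defect is the dominant strategy
```

Averaging such divergences over actors whose λ depends on network position gives the system-rationality measure the abstract describes.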
Parametric optimization of optical signal detectors employing the direct photodetection scheme
NASA Astrophysics Data System (ADS)
Kirakosiants, V. E.; Loginov, V. A.
1984-08-01
The problem of optimizing the parameters of an optical signal detection scheme is addressed for a receiver with direct photodetection. An expression is derived which accurately approximates the field of view (FOV) values obtained by direct computer minimization of the probability of missing a signal; optimum values of the receiver FOV were found for different atmospheric conditions characterized by the number of coherence spots and the intensity fluctuations of a plane wave. It is further pointed out that the criterion presented can also be used for parametric optimization of detectors operating in accordance with the Neyman-Pearson criterion.
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao
2017-12-01
Combining high-precision numerical analysis methods with optimization algorithms to systematically explore a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of a high-precision analysis. By these means, the contradiction between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiment (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing underwater glider increases by 9.1%.
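The Gaussian-kernel surrogate discussed here can be sketched as radial-basis-function interpolation of DOE samples of an expensive solver. The one-variable "drag" function and kernel width below are invented stand-ins for the CFD analysis:

```python
import numpy as np

def drag(x):
    """Stand-in for an expensive CFD evaluation (hypothetical response)."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(8.0 * x)

# DOE: sample the design variable, run the "solver" once per sample
X = np.linspace(0.0, 1.0, 9)
y = drag(X)

# Fit Gaussian-kernel (RBF) surrogate: solve (K + lam I) alpha = y
sigma, lam = 0.2, 1e-8
K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2.0 * sigma ** 2))
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def surrogate(x):
    """Cheap prediction: weighted sum of kernels centered on DOE samples."""
    k = np.exp(-((x - X) ** 2) / (2.0 * sigma ** 2))
    return k @ alpha
```

An optimizer such as PSO then queries `surrogate` thousands of times at negligible cost, calling the true solver only to verify candidate optima.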
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance in significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
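The flavor of such a parametric model can be illustrated with a textbook closed Brayton cycle efficiency as a function of cycle temperature ratio and pressure ratio. The component efficiencies and helium-like γ are assumed values, and recuperation (which the actual code models) is ignored here:

```python
def brayton_efficiency(t_ratio, pr, gamma=1.666, eta_c=0.85, eta_t=0.90):
    """Simple closed Brayton cycle efficiency (no recuperator).
    t_ratio = compressor-inlet / turbine-inlet temperature (cycle temp ratio);
    pr = compressor pressure ratio; all quantities normalized by cp*T_turbine_in."""
    phi = pr ** ((gamma - 1.0) / gamma)             # isentropic temperature ratio
    w_turbine = eta_t * (1.0 - 1.0 / phi)           # specific turbine work
    w_compressor = t_ratio * (phi - 1.0) / eta_c    # specific compressor work
    q_in = 1.0 - t_ratio * (1.0 + (phi - 1.0) / eta_c)  # heat added
    return (w_turbine - w_compressor) / q_in

# sweep the pressure ratio at a fixed cycle temperature ratio
for pr in (1.5, 2.0, 2.5):
    print(pr, brayton_efficiency(0.31, pr))
```

A systems code like the one described wraps relations of this kind, plus mass and sizing correlations, behind the same parametric inputs so an optimizer can drive them.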
Nonzero θ13 from the Triangular Ansatz and Leptogenesis
NASA Astrophysics Data System (ADS)
Benaoum, H. B.
2012-08-01
Recent experiments indicate a departure from exact tri-bimaximal mixing by measuring a definitive nonzero value of θ13. Within the framework of the type I seesaw mechanism, we reconstruct the triangular Dirac neutrino mass matrix from the μ-τ symmetric mass matrix. The deviation from μ-τ symmetry is then parametrized by adding dimensionless parameters yi in the triangular mass matrix. In this parametrization of the neutrino mass matrix, the nonzero value of θ13 is controlled by Δy = y4 - y6. We also calculate the resulting leptogenesis and show that the triangular texture can generate the observed baryon asymmetry of the universe via the leptogenesis scenario.
NASA Astrophysics Data System (ADS)
Gao, Dongyang; Zheng, Xiaobing; Li, Jianjun; Hu, Youbo; Xia, Maopeng; Salam, Abdul; Zhang, Peng
2018-03-01
Based on the spontaneous parametric down-conversion process, we propose a novel self-calibration radiometer scheme which can self-calibrate for the degradation of its own response and ultimately monitor the fluctuation of a target radiation. The monitoring results are independent of the radiometer's degradation and are not linked to the primary standard detector scale. The principle and feasibility of the proposed scheme were verified by observing a bromine-tungsten lamp: a relative standard deviation of 0.39% was obtained for the stable lamp. These results confirm the principle of the proposed scheme, which could enable a significant breakthrough in the self-calibration issue on space platforms.
NASA Technical Reports Server (NTRS)
Kiess, Thomas E.; Shih, Yan-Hua; Sergienko, A. V.; Alley, Carroll O.
1994-01-01
We report a new two-photon polarization correlation experiment for realizing the Einstein-Podolsky-Rosen-Bohm (EPRB) state and for testing Bell-type inequalities. We use the pair of orthogonally polarized light quanta generated in Type-II parametric down-conversion. Using 1 nm interference filters in front of our detectors, we observe from the output of a 0.5 mm β-BaB2O4 (BBO) crystal the EPRB correlations in coincidence counts, and measure an associated Bell inequality violation of 22 standard deviations. The quantum state of the photon pair is a polarization analog of the spin-1/2 singlet state.
Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.
2018-03-01
This research study attempts to create an optimized parametric window by employing the Taguchi algorithm for Plasma Arc Welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed, and pilot arc length. The experiments vary these parameters and record the resulting depth of penetration and bead width. The welding current is varied between 60-70 A, the welding speed between 250-300 mm/min, and the pilot arc length between 1-2 mm. Design of experiments is used for the experimental trials. A back-propagation neural network, a genetic algorithm, and Taguchi techniques are used for predicting the bead width and depth of penetration, and the predictions are validated against the experimentally achieved results, with which they are in good agreement. Additionally, micro-structural characterizations are carried out to examine the weld quality. Extrapolation of these optimized parametric values yields enhanced weld strength with reduced cost and time.
Bifurcation analysis of eight coupled degenerate optical parametric oscillators
NASA Astrophysics Data System (ADS)
Ito, Daisuke; Ueta, Tetsushi; Aihara, Kazuyuki
2018-06-01
A degenerate optical parametric oscillator (DOPO) network realized as a coherent Ising machine can be used to solve combinatorial optimization problems. Both theoretical and experimental investigations into the performance of DOPO networks have been presented previously. However, a problem remains: the dynamics of the DOPO network itself can lower the success rate of finding globally optimal solutions of Ising problems. This paper shows that the problem is caused by pitchfork bifurcations due to the symmetry structure of the coupled DOPOs. Two-parameter bifurcation diagrams of equilibrium points express the performance deterioration. It is shown that the emergence of non-ground states corresponding to local minima hampers the system from reaching the ground states corresponding to the global minimum. We then describe a parametric strategy for leading the system to the ground state by actively utilizing the bifurcation phenomena. By adjusting the parameters to break particular symmetries, we find parameter sets that allow the coherent Ising machine to obtain only the globally optimal solution.
Korez, Robert; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž
2014-10-01
Gradual degeneration of intervertebral discs of the lumbar spine is one of the most common causes of low back pain. Although conservative treatment for low back pain may provide relief to most individuals, surgical intervention may be required for individuals with significant continuing symptoms, which is usually performed by replacing the degenerated intervertebral disc with an artificial implant. For designing implants with good bone contact and continuous force distribution, the morphology of the intervertebral disc space and vertebral body endplates is of considerable importance. In this study, we propose a method for parametric modeling of the intervertebral disc space in three dimensions (3D) and show its application to computed tomography (CT) images of the lumbar spine. The initial 3D model of the intervertebral disc space is generated according to the superquadric approach and therefore represented by a truncated elliptical cone, which is initialized by parameters obtained from 3D models of adjacent vertebral bodies. In an optimization procedure, the 3D model of the intervertebral disc space is incrementally deformed by adding parameters that provide a more detailed morphometric description of the observed shape, and aligned to the observed intervertebral disc space in the 3D image. By applying the proposed method to CT images of 20 lumbar spines, the shape and pose of each of the 100 intervertebral disc spaces were represented by a 3D parametric model. The resulting mean (± standard deviation) accuracy of modeling was 1.06 ± 0.98 mm in terms of radial Euclidean distance against manually defined ground truth points, with the corresponding success rate of 93% (i.e. 93 out of 100 intervertebral disc spaces were modeled successfully).
As the resulting 3D models provide a description of the shape of intervertebral disc spaces in a complete parametric form, morphometric analysis was straightforwardly enabled and allowed the computation of the corresponding heights, widths and volumes, as well as of other geometric features that describe the shape of intervertebral disc spaces in detail.
This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around...
2016-04-30
support contractor, Infoscitex, conducted a series of tests to identify the performance capabilities of the Vertical Impact Device (VID). The VID is a… Table 3. AFD Evaluation with Red IMPAC Programmer: Data Summary Showing Means and Standard Deviations (column headings: Test Cell, Drop Ht. (in), Mean Peak…)
Giant current fluctuations in an overheated single-electron transistor
NASA Astrophysics Data System (ADS)
Laakso, M. A.; Heikkilä, T. T.; Nazarov, Yuli V.
2010-11-01
Interplay of cotunneling and single-electron tunneling in a thermally isolated single-electron transistor leads to peculiar overheating effects. In particular, there is an interesting crossover interval where the competition between cotunneling and single-electron tunneling changes to the dominance of the latter. In this interval, the current exhibits anomalous sensitivity to the effective electron temperature of the transistor island and its fluctuations. We present a detailed study of the current and temperature fluctuations at this interesting point. The methods implemented allow for a complete characterization of the distribution of the fluctuating quantities, well beyond the Gaussian approximation. We reveal and explore the parameter range where, for sufficiently small transistor islands, the current fluctuations become gigantic. In this regime, the optimal value of the current, its expectation value, and its standard deviation differ from each other by parametrically large factors. This situation is unique for transport in nanostructures and for electron transport in general. The origin of this spectacular effect is the exponential sensitivity of the current to the fluctuating effective temperature.
Parametric Amplification For Detecting Weak Optical Signals
NASA Technical Reports Server (NTRS)
Hemmati, Hamid; Chen, Chien; Chakravarthi, Prakash
1996-01-01
Optical-communication receivers of the proposed type implement a high-sensitivity scheme of optical parametric amplification followed by direct detection for the reception of extremely weak signals. The design incorporates both optical parametric amplification and direct detection in an optimized receiver, enhancing the effective signal-to-noise ratio during reception in the photon-starved (photon-counting) regime. It eliminates the need for a complex heterodyne detection scheme and partly overcomes the limitations imposed on older direct-detection schemes by receiver-generated noise and by the limited quantum efficiencies of photodetectors.
NASA Astrophysics Data System (ADS)
Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.
2018-01-01
Maraging steel (MDN 300) finds application in many industries because of its high hardness, which also makes it a very difficult material to machine. Electro-discharge machining (EDM) is an extensively popular machining process that can be used to machine such materials, and optimization of its response parameters is essential for effective machining. Past researchers have used the Taguchi method to obtain the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse-on time, pulse-off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) combined with fuzzy logic is applied to this multi-objective optimization problem, and the responses are checked by implementing the derived parametric setting. It was found that the parametric setting derived by the proposed method results in better responses than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted results likewise show a significant improvement over the results of past researchers.
Parametric investigations of plasma characteristics in a remote inductively coupled plasma system
NASA Astrophysics Data System (ADS)
Shukla, Prasoon; Roy, Abhra; Jain, Kunal; Bhoj, Ananth
2016-09-01
Designing a remote plasma system involves source chamber sizing, selection of coils and/or electrodes to power the plasma, design of the downstream tubes, selection of materials used in the source and downstream regions, location of inlets and outlets, and finally optimization of the process parameter space of pressure, gas flow rates, and power delivery. Simulations can aid spatial and temporal plasma characterization at what are often inaccessible locations for experimental probes in the source chamber. In this paper, we report on simulations of a remote inductively coupled argon plasma system using the modeling platform CFD-ACE+. The coupled multiphysics model successfully addresses flow, chemistry, electromagnetics, heat transfer, and plasma transport in the remote plasma system. The SimManager tool enables easy setup of parametric simulations to investigate the effect of varying the pressure, power, frequency, flow rates, and downstream tube lengths. It can also automatically sweep the varied parameters to optimize a user-defined objective function, such as the integrated ion and radical fluxes at the wafer. The fast run time, coupled with the parametric and optimization capabilities, can add significant insight and value in design and optimization.
Acoustic attenuation design requirements established through EPNL parametric trades
NASA Technical Reports Server (NTRS)
Veldman, H. F.
1972-01-01
An optimization procedure was established for providing an acoustic lining configuration that balances engine performance losses against lining attenuation characteristics. The method determines acoustic attenuation design requirements through parametric trade studies using the subjective noise unit of effective perceived noise level (EPNL).
A Parametric k-Means Algorithm
Tarpey, Thaddeus
2007-01-01
Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
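The parametric k-means procedure summarized above can be sketched concretely. A minimal illustration, assuming a univariate normal model with k = 2; the sample sizes and the toy Lloyd's-algorithm implementation are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_1d(data, k, iters=100):
    """Plain Lloyd's algorithm in 1-D; returns the sorted k cluster means."""
    centers = data[rng.choice(data.size, k, replace=False)]
    for _ in range(iters):
        labels = np.abs(data[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([data[labels == j].mean() for j in range(k)])
    return np.sort(centers)

# A small observed sample: the usual k-means gives nonparametric
# estimates of the k = 2 principal points.
sample = rng.normal(loc=10.0, scale=2.0, size=50)
nonparametric = kmeans_1d(sample, k=2)

# Parametric k-means: fit the assumed model by maximum likelihood, then
# run k-means on a very large sample simulated from the fitted distribution.
mu_hat, sigma_hat = sample.mean(), sample.std(ddof=0)   # normal MLEs
simulated = rng.normal(mu_hat, sigma_hat, size=200_000)
parametric = kmeans_1d(simulated, k=2)

# For a normal distribution the two principal points are known to be
# mu -/+ sigma*sqrt(2/pi), so `parametric` should land close to that.
print(nonparametric, parametric)
```

For the normal case the principal points have a closed form, so the parametric estimates can be checked directly; for distributions without such a form, the large simulated sample is what makes the approach practical.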
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-01-30
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits the utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, frequently used in practice, is to blindly apply a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and to assess the effects of therapy on the optimal transformations. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT for which a log transformation was not optimal for producing normal SUV distributions.
Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
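The λ-search the study describes (iterate the Box-Cox parameter, keep the value maximizing the Shapiro-Wilk P-value) is straightforward to sketch. A minimal illustration on simulated, SUV-like (lognormal) data; the grid range and the simulated sample are our assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 reduces to the log transform."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def optimal_lambda(values, grid=np.linspace(-2.0, 2.0, 81)):
    """Pick the lambda whose transform maximizes the Shapiro-Wilk P-value."""
    pvals = [stats.shapiro(boxcox(values, lam)).pvalue for lam in grid]
    return grid[int(np.argmax(pvals))]

# Simulated, skewed data standing in for tumor SUVmax values
rng = np.random.default_rng(1)
suvs = rng.lognormal(mean=1.0, sigma=0.5, size=57)

lam = optimal_lambda(suvs)
p_opt = stats.shapiro(boxcox(suvs, lam)).pvalue
p_log = stats.shapiro(np.log(suvs)).pvalue   # the ad hoc log transform
print(lam, p_opt, p_log)
```

Because λ = 0 lies on the search grid, the optimized transform can never do worse than the blind log transform on the Shapiro-Wilk criterion, which is the study's point.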
NASA Astrophysics Data System (ADS)
Scarpelli, Matthew; Eickhoff, Jens; Cuna, Enrique; Perlman, Scott; Jeraj, Robert
2018-02-01
The statistical analysis of positron emission tomography (PET) standardized uptake value (SUV) measurements is challenging due to the skewed nature of SUV distributions. This limits utilization of powerful parametric statistical models for analyzing SUV measurements. An ad hoc approach, which is frequently used in practice, is to blindly use a log transformation, which may or may not result in normal SUV distributions. This study sought to identify optimal transformations leading to normally distributed PET SUVs extracted from tumors and assess the effects of therapy on the optimal transformations. Methods. The optimal transformation for producing normal distributions of tumor SUVs was identified by iterating the Box-Cox transformation parameter (λ) and selecting the parameter that maximized the Shapiro-Wilk P-value. Optimal transformations were identified for tumor SUVmax distributions at both pre and post treatment. This study included 57 patients who underwent 18F-fluorodeoxyglucose (18F-FDG) PET scans (publicly available dataset). In addition, to test the generality of our transformation methodology, we included analysis of 27 patients who underwent 18F-fluorothymidine (18F-FLT) PET scans at our institution. Results. After applying the optimal Box-Cox transformations, neither the pre nor the post treatment 18F-FDG SUV distributions deviated significantly from normality (P > 0.10). Similar results were found for 18F-FLT PET SUV distributions (P > 0.10). For both 18F-FDG and 18F-FLT SUV distributions, the skewness and kurtosis increased from pre to post treatment, leading to a decrease in the optimal Box-Cox transformation parameter from pre to post treatment. There were types of distributions encountered for both 18F-FDG and 18F-FLT where a log transformation was not optimal for providing normal SUV distributions. Conclusion. 
Optimization of the Box-Cox transformation offers a solution for identifying normalizing SUV transformations when the log transformation is insufficient. The log transformation is not always the appropriate transformation for producing normally distributed PET SUVs.
Parametrically Optimized Carbon Nanotube-Coated Cold Cathode Spindt Arrays
Yuan, Xuesong; Cole, Matthew T.; Zhang, Yu; Wu, Jianqiang; Milne, William I.; Yan, Yang
2017-01-01
Here, we investigate, through parametrically optimized macroscale simulations, the field electron emission from arrays of carbon nanotube (CNT)-coated Spindts towards the development of an emerging class of novel vacuum electron devices. The present study builds on empirical data gleaned from our recent experimental findings on the room temperature electron emission from large area CNT electron sources. We determine the field emission current of the present microstructures directly using particle in cell (PIC) software and present a new CNT cold cathode array variant which has been geometrically optimized to provide maximal emission current density, with current densities of up to 11.5 A/cm2 at low operational electric fields of 5.0 V/μm. PMID:28336845
Parametric optimization of the MVC desalination plant with thermomechanical compressor
NASA Astrophysics Data System (ADS)
Blagin, E. V.; Biryuk, V. V.; Anisimov, M. Y.; Shimanov, A. A.; Gorshkalev, A. A.
2018-03-01
This article deals with parametric optimization of a Mechanical Vapour Compression (MVC) desalination plant with a thermomechanical compressor. In this plant, a thermocompressor is used instead of the commonly used centrifugal compressor. The influence of two main parameters was studied: the inlet pressure and the number of stages. The analysis shows that it is possible to achieve better plant performance than with a traditional MVC plant, but this requires reducing the number of stages and operating at either low or high initial pressure, since power consumption reaches its maximum at approximately 20-30 kPa.
NASA Astrophysics Data System (ADS)
Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank
In part I, we presented the theoretical foundations of the GOAT algorithm for the optimal control of quantum systems. Here in part II, we focus on several applications of GOAT to superconducting qubit architectures. First, we consider a controlled-Z gate on Xmon qubits with an Erf parametrization of the optimal pulse. We show that a fast and accurate gate can be obtained with only 16 parameters, as compared to the hundreds of parameters required by other algorithms. We present numerical evidence that such a parametrization should allow efficient in-situ calibration of the pulse. Next, we consider the flux-tunable coupler by IBM. We show that the optimization can be carried out in a more realistic model of the system than was employed in the original study, which is expected to further simplify the calibration process. Moreover, GOAT reduced the complexity of the optimal pulse to only 6 Fourier components, composed with analytic wrappers.
Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm
Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed
2008-01-01
Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
Determination of calibration parameters of a VRX CT system using an "Amoeba" algorithm.
Jordan, Lawrence M; Dibianca, Frank A; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M Waleed
2004-01-01
Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge "clouds" created by the detected x-ray photons, i.e., the "physics limit." This paper focuses on implementing a technique called "projective compression," which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm "variable-resolution x-ray" (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown.
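The "Amoeba" step (fitting a parametric curve to the pin sinogram by multi-parameter minimization) can be illustrated with SciPy's Nelder-Mead simplex, the modern name for the Amoeba algorithm. The sinusoidal pin-trace model and all numbers below are hypothetical simplifications; the actual VRX geometric model has more parameters:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pin-trace model: a rotating pin draws a sinusoid in the
# sinogram, u(theta) = u0 + r*sin(theta + phi). Only the Amoeba
# (Nelder-Mead) fitting step of the calibration is shown here.
rng = np.random.default_rng(2)
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
u0_true, r_true, phi_true = 128.0, 40.0, 0.3
measured = (u0_true + r_true * np.sin(theta + phi_true)
            + rng.normal(0.0, 0.05, theta.size))  # ~0.05-cell noise

def sse(p):
    """Sum of squared deviations between model and measured sinogram."""
    u0, r, phi = p
    return np.sum((measured - (u0 + r * np.sin(theta + phi))) ** 2)

fit = minimize(sse, x0=[100.0, 30.0, 0.0], method='Nelder-Mead',
               options={'xatol': 1e-6, 'fatol': 1e-6, 'maxiter': 5000})
u0, r, phi = fit.x
print(fit.success, u0, r, phi)
```

As in the abstract, convergence and sensitivity to starting conditions matter: Nelder-Mead is derivative-free but only locally convergent, so the starting simplex must sit reasonably close to the true geometry.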
Sohn, Martin Y; Barnes, Bryan M; Silver, Richard M
2018-03-01
Accurate optics-based dimensional measurements of features sized well below the diffraction limit require a thorough understanding of the illumination within the optical column and of the three-dimensional scattered fields that contain the information required for quantitative metrology. Scatterfield microscopy can pair simulations with angle-resolved tool characterization to improve agreement between experiment and calculated libraries, yielding sub-nanometer parametric uncertainties. Optimized angle-resolved illumination requires bi-telecentric optics, in which the sample plane is telecentric as defined by a Köhler illumination configuration and the conjugate back focal plane (CBFP) of the objective lens is also telecentric; scanning an aperture or an aperture source at the CBFP allows control of the illumination beam angle at the sample plane with minimal distortion. Bi-telecentric illumination optics have been designed that enable angle-resolved illumination for both aperture and source scanning modes while yielding low distortion and chief-ray parallelism. The optimized design features a maximum chief-ray angle at the CBFP of 0.002° and maximum wavefront deviations of less than 0.06 λ for angle-resolved illumination beams at the sample plane, holding promise for high-quality angle-resolved illumination for improved measurements of deep-subwavelength structures using deep-ultraviolet light.
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design-of-experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. 
Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.
Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C
2017-06-01
The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of the Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biomet 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three- and to the general k-class case which involves the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A that has been used to characterize cognitive impairment in patients with Parkinson's disease.
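The three-class cut-off selection described above can be sketched empirically: choose two ordered thresholds that maximize the sum of the three true class fractions, one common form of the generalized Youden index. The three normal class distributions, sample sizes, and search grid below are illustrative assumptions, not the article's data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated marker values for three ordered diagnostic classes
healthy = rng.normal(0.0, 1.0, 300)
mild    = rng.normal(1.5, 1.0, 300)
severe  = rng.normal(3.0, 1.0, 300)

def tcfs(c1, c2):
    """Empirical true class fractions for ordered cut-offs c1 < c2."""
    return (np.mean(healthy < c1),
            np.mean((mild >= c1) & (mild < c2)),
            np.mean(severe >= c2))

# Maximize the sum of the three true class fractions over all ordered
# pairs of candidate cut-offs (generalized Youden criterion).
grid = np.linspace(-2.0, 5.0, 141)
c1, c2 = max(((a, b) for a in grid for b in grid if a < b),
             key=lambda cc: sum(tcfs(*cc)))
print(c1, c2, tcfs(c1, c2))
```

The pair of estimated cut-offs is a function of the same data as the class fractions, which is exactly why the true class fractions at the chosen cut-offs are correlated and joint confidence regions, as the article constructs, are needed.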
Geometric Model for a Parametric Study of the Blended-Wing-Body Airplane
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne; Smith, Robert E.; Sadrehaghighi, Ideen; Wiese, Micharl R.
1996-01-01
A parametric model is presented for the blended-wing-body airplane, one concept being proposed for the next generation of large subsonic transports. The model is defined in terms of a small set of parameters which facilitates analysis and optimization during the conceptual design process. The model is generated from a preliminary CAD geometry. From this geometry, airfoil cross sections are cut at selected locations and fitted with analytic curves. The airfoils are then used as boundaries for surfaces defined as the solution of partial differential equations. Both the airfoil curves and the surfaces are generated with free parameters selected to give a good representation of the original geometry. The original surface is compared with the parametric model, and solutions of the Euler equations for compressible flow are computed for both geometries. The parametric model is a good approximation of the CAD model and the computed solutions are qualitatively similar. An optimal NURBS approximation is constructed and can be used by a CAD model for further refinement or modification of the original geometry.
NASA Astrophysics Data System (ADS)
Sibileau, Alberto; Auricchio, Ferdinando; Morganti, Simone; Díez, Pedro
2018-01-01
Architectured materials (or metamaterials) are constituted by a unit cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit cell are selected to produce the desired bulk characteristics. This is especially pertinent given the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem is a simple post-process. The strategy is applied in the numerical illustrations, first to a unit cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.
NASA Astrophysics Data System (ADS)
Haji Hosseinloo, Ashkan; Turitsyn, Konstantin
2016-04-01
Vibration energy harvesting has been shown to be a promising power source for many small-scale applications, mainly because of the considerable reduction in the energy consumption of electronics and the scalability issues of conventional batteries. However, energy harvesters may not be as robust as conventional batteries, and their performance can drastically deteriorate in the presence of uncertainty in their parameters. Hence, the study of uncertainty propagation and optimization under uncertainty is essential for proper and robust performance of harvesters in practice. While previous studies have focused on optimizing the expected power, we propose a new and more practical optimization perspective: optimization for the worst-case (minimum) power. We formulate the problem in a generic fashion and, as a simple example, apply it to a linear piezoelectric energy harvester. We study the effect of parametric uncertainty in its natural frequency, load resistance, and electromechanical coupling coefficient on its worst-case power, and then optimize for that power under different confidence levels. The results show a significant improvement in the worst-case power of the harvester designed this way compared to that of a naively (deterministically) optimized harvester.
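The worst-case (maximin) idea can be sketched in a few lines. The single-peak power curve below is a hypothetical stand-in, not the paper's linear-piezoelectric power expression, and the uncertainty model is our assumption; the sketch only illustrates how a worst-case-optimal design can differ from an expectation-optimal one:

```python
import numpy as np

def power(d, w):
    """Hypothetical response: power peaks where design d matches excitation w."""
    return 1.0 / (1.0 + 50.0 * (d - w) ** 2)

# Uncertain excitation frequency: asymmetric support with beliefs
# concentrated near the nominal value 1.0 (truncated-normal-like weights).
w_grid = np.linspace(0.9, 1.4, 251)
weights = np.exp(-(w_grid - 1.0) ** 2 / (2 * 0.1 ** 2))
weights /= weights.sum()

designs = np.linspace(0.8, 1.5, 351)
expected = np.array([np.sum(weights * power(d, w_grid)) for d in designs])
worst = np.array([np.min(power(d, w_grid)) for d in designs])

d_exp = designs[int(np.argmax(expected))]  # expectation-optimal design
d_wc = designs[int(np.argmax(worst))]      # worst-case (maximin) design
print(d_exp, d_wc)
```

The expectation-optimal design chases the most likely excitation, while the maximin design hedges against the whole uncertainty interval, which is the contrast the abstract draws.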
Scenario based optimization of a container vessel with respect to its projected operating conditions
NASA Astrophysics Data System (ADS)
Wagner, Jonas; Binkowski, Eva; Bronsart, Robert
2014-06-01
In this paper the scenario-based optimization of the bulbous bow of the KRISO Container Ship (KCS) is presented. The optimization of the parametrically modeled vessel is based on a statistically developed operational profile, generated from noon-to-noon reports of a comparable 3600 TEU container vessel and specific development functions representing the growth of the global economy during the vessel's service time. In order to account for uncertainties, statistical fluctuations are added. An analysis of these data leads to a number of most probable operating conditions (OC) the vessel will encounter during its future service. According to their respective likelihoods, an objective function for the evaluation of the optimal design variant of the vessel is derived and implemented within the parametric optimization workbench FRIENDSHIP Framework. This evaluation is carried out with respect to the vessel's calculated effective power, computed using a potential flow code. The evaluation shows that the use of scenarios within the optimization process has a strong influence on the hull form.
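A scenario-based objective of this kind reduces to a probability-weighted sum of the effective power over the projected operating conditions. The sketch below assumes a placeholder resistance model and invented scenario probabilities; only the weighting structure reflects the approach described:

```python
# Hypothetical operating conditions: (speed in knots, draught in m, probability)
scenarios = [(18.0, 10.5, 0.45), (21.0, 10.5, 0.30), (16.0, 9.0, 0.25)]

def effective_power(design, speed, draught):
    """Placeholder resistance model: effective power grows roughly with
    speed cubed; 'design' is a bulb-shape parameter shifting the curve."""
    return (1.0 + 0.1 * (design - 0.5) ** 2) * speed**3 * draught / 100.0

def scenario_objective(design):
    # Probability-weighted effective power over the projected conditions
    return sum(p * effective_power(design, v, t) for v, t, p in scenarios)

best = min([i / 100 for i in range(101)], key=scenario_objective)
```

In a real workflow the placeholder model would be replaced by the potential-flow evaluation of each parametric hull variant.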
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Component Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective: higher-quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
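The PCA-based re-alignment can be sketched as follows; `pca_search_directions` and its variance-retention rule are illustrative stand-ins for the paper's procedure, not its actual implementation:

```python
import numpy as np

def pca_search_directions(approximation_set, var_keep=0.99):
    """Re-align pattern-search directions with the principal components
    of the current approximation (Pareto) set, and drop directions that
    carry a negligible share of its variance."""
    X = np.asarray(approximation_set, dtype=float)
    cov = np.cov(X - X.mean(axis=0), rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigval)[::-1]           # sort descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = np.cumsum(eigval) / eigval.sum() <= var_keep
    keep[0] = True                             # always keep the first PC
    return eigvec[:, keep]                     # columns = new directions
```

Searching along these columns instead of the original coordinate axes concentrates evaluations where the approximation set actually varies.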
Optimal boundary conditions for ORCA-2 model
NASA Astrophysics Data System (ADS)
Kazantsev, Eugene
2013-08-01
A 4D-Var data assimilation technique is applied to the ORCA-2 configuration of the NEMO model in order to identify the optimal parametrization of boundary conditions on the lateral boundaries as well as on the bottom and the surface of the ocean. The influence of boundary conditions on the solution is analyzed both within and beyond the assimilation window. It is shown that the optimal bottom and surface boundary conditions allow us to better represent jet streams such as the Gulf Stream and the Kuroshio. Analyzing the reasons for the jets' reinforcement, we note that data assimilation has a major impact on the parametrization of the bottom boundary conditions for u and v. Automatic generation of the tangent and adjoint codes is also discussed. The Tapenade software is shown to be able to produce an adjoint code that can be used after a memory-usage optimization.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
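The decoupling idea can be illustrated on a toy linear problem: an expensive image-space gradient step is alternated with a cheap least-squares refit of the linear kinetic parameters, mirroring the nested structure (here the inner solve is closed-form rather than iterated). The system matrix, temporal basis, and problem sizes below are synthetic assumptions, not a PET model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_vox, n_frames, n_basis = 60, 30, 10, 3
P = rng.standard_normal((n_bins, n_vox)) / np.sqrt(n_bins)  # toy system matrix
B = rng.random((n_frames, n_basis))        # known temporal basis functions
theta_true = rng.random((n_vox, n_basis))  # true linear parametric image
Y = P @ theta_true @ B.T                   # noiseless dynamic data

theta = np.zeros((n_vox, n_basis))
step = 1.0 / np.linalg.norm(P, 2) ** 2     # safe step for the quadratic fit
for _ in range(1000):
    X = theta @ B.T                        # current dynamic image estimate
    # "Expensive" step: one gradient update in image space, all frames
    X = X - step * P.T @ (P @ X - Y)
    # "Cheap" step: closed-form least-squares refit of kinetic parameters
    theta = X @ B @ np.linalg.inv(B.T @ B)
```

Because the refit projects the image update back onto the span of the temporal basis, the iteration is a projected gradient method and decreases the data-fit objective monotonically.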
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved using a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The method selects wavelengths that make the measurement signals sensitive to the ASD and effectively reduce the ill-conditioning of the coefficient matrix of the linear system, enhancing the noise robustness of the retrieval results. Two common ASD forms, the log-normal (L-N) and Gamma distributions, are estimated in both monomodal and bimodal cases. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
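A minimal sketch of such a damped-LSQR retrieval follows, with a made-up smooth extinction kernel standing in for the ADA/Lambert-Beer forward model; the bin counts, kernel, and noise level are assumptions for illustration:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Hypothetical discretized forward model: extinction at m wavelengths
# produced by an aerosol size distribution sampled at n size bins.
rng = np.random.default_rng(0)
n_bins, n_wavelengths = 20, 30
A = np.exp(-np.linspace(0, 3, n_wavelengths)[:, None]
           * np.linspace(0.2, 2.0, n_bins)[None, :])  # smooth, ill-conditioned kernel
true_asd = np.exp(-0.5 * ((np.arange(n_bins) - 8) / 3.0) ** 2)  # single mode
signal = A @ true_asd
noisy = signal * (1 + 0.01 * rng.standard_normal(n_wavelengths))  # 1% noise

# Mild damping in LSQR regularizes the ill-posed inversion.
retrieved = lsqr(A, noisy, damp=1e-2)[0]
```

The `damp` argument penalizes the solution norm, which is what keeps the retrieval stable despite the kernel's ill-conditioning.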
Parametric design of tri-axial nested Helmholtz coils
NASA Astrophysics Data System (ADS)
Abbott, Jake J.
2015-05-01
This paper provides an optimal parametric design for tri-axial nested Helmholtz coils, which are used to generate a uniform magnetic field with controllable magnitude and direction. Circular and square coils, both with square cross section, are considered. Practical considerations such as wire selection, wire-wrapping efficiency, wire bending radius, choice of power supply, and inductance and time response are included. Using the equations provided, a designer can quickly create an optimal set of custom coils to generate a specified field magnitude in the uniform-field region while maintaining specified accessibility to the central workspace. An example case study is included.
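The uniform-field condition for a single Helmholtz pair follows from the on-axis field of a circular loop; the short check below verifies the closed-form center field (standard magnetostatics, not this paper's specific design equations), with example turn count, current, and radius chosen arbitrarily:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [T*m/A]

def loop_axial_field(I, R, z):
    """On-axis field of a single circular current loop at distance z
    from its plane (Biot-Savart result)."""
    return MU0 * I * R**2 / (2 * (R**2 + z**2) ** 1.5)

def helmholtz_center_field(n_turns, I, R):
    """Field at the center of a Helmholtz pair: two n-turn coils of
    radius R spaced one radius apart, carrying equal currents."""
    return 2 * n_turns * loop_axial_field(I, R, R / 2)

# The classic closed form (4/5)^(3/2) * mu0 * n * I / R agrees:
B = helmholtz_center_field(n_turns=100, I=1.0, R=0.15)
B_closed = (4 / 5) ** 1.5 * MU0 * 100 * 1.0 / 0.15
```

Each axis of a tri-axial nested set applies this same relation at a different coil radius, which is why the field-per-ampere differs between the nested pairs.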
NASA Astrophysics Data System (ADS)
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
NASA Technical Reports Server (NTRS)
Stanley, Douglas O.; Unal, Resit; Joyner, C. R.
1992-01-01
The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies is presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of the parameters with each other. This parametric optimization and sensitivity study employs a Taguchi design method. The Taguchi method is an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies, as compared to traditional single-variable parametric trade studies, are also discussed.
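The orthogonal-array idea can be illustrated with the smallest Taguchi array, L4, which screens three two-level factors in four runs instead of the eight required by a full factorial; the response values below are invented for illustration:

```python
# L4 orthogonal array: 3 two-level factors in only 4 runs.
# Every pair of columns contains each (-1, +1) combination equally often.
L4 = [(-1, -1, -1),
      (-1, +1, +1),
      (+1, -1, +1),
      (+1, +1, -1)]

def main_effects(responses):
    """Average response change when each factor moves from low to high."""
    effects = []
    for j in range(3):
        hi = sum(r for row, r in zip(L4, responses) if row[j] == +1) / 2
        lo = sum(r for row, r in zip(L4, responses) if row[j] == -1) / 2
        effects.append(hi - lo)
    return effects

# Toy vehicle-weight responses: factor 0 dominates, factor 2 is inert.
responses = [10.0, 12.0, 15.0, 17.0]
effects = main_effects(responses)
```

Because the columns are balanced, each main effect is estimated free of contamination from the other factors, which is the property the study exploits to cut down the number of vehicle configurations analyzed.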
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and mission key environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic-data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform a sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions to within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to an approximately tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.
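The convex sparse-reconstruction step can be illustrated with iterative soft thresholding (ISTA) on a synthetic underdetermined system; this is a generic stand-in for the class of method described, not the authors' specific algorithm or data:

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, b, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min 0.5*||A w - b||^2 + lam*||w||_1,
    the kind of convex sparse-recovery problem used to sparsify a
    wavelet-parametrized emission field (illustrative stand-in)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w - step * A.T @ (A @ w - b), step * lam)
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80)) / np.sqrt(40)  # underdetermined system
w_true = np.zeros(80)
w_true[[3, 17, 42]] = [2.0, -1.5, 1.0]           # sparse "wavelet weights"
w_hat = ista(A, A @ w_true)
```

The l1 penalty drives the unidentifiable weights to exactly zero, which is the data-driven sparsification effect noted in the abstract.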
Some Advances in Downscaling Probabilistic Climate Forecasts for Agricultural Decision Support
NASA Astrophysics Data System (ADS)
Han, E.; Ines, A.
2015-12-01
Seasonal climate forecasts, commonly provided in tercile-probability format (below-, near- and above-normal), need to be translated into more meaningful information for decision support of practitioners in agriculture. In this paper, we will present two novel approaches to temporally downscale probabilistic seasonal climate forecasts: one non-parametric and one parametric method. First, the non-parametric downscaling approach, called FResampler1, uses the concept of 'conditional block sampling' of weather data to create daily weather realizations of a tercile-based seasonal climate forecast. FResampler1 randomly draws time series of daily weather parameters (e.g., rainfall, maximum and minimum temperature, and solar radiation) from historical records, for the season of interest, from years that belong to a certain rainfall tercile category (below-, near- or above-normal). In this way, FResampler1 preserves the covariance between rainfall and the other weather parameters, as if maximum and minimum temperature and solar radiation were sampled conditionally on whether a day is wet or dry. The second approach, called predictWTD, is a parametric method based on a conditional stochastic weather generator. The tercile-based seasonal climate forecast is converted into a theoretical forecast cumulative probability curve. The deviate for each percentile is then converted into rainfall amount, frequency or intensity to downscale the 'full' distribution of probabilistic seasonal climate forecasts. These seasonal deviates are then disaggregated on a monthly basis and used to constrain the downscaling of forecast realizations at different percentile values of the theoretical forecast curve. Alongside the theoretical basis of the approaches, we will discuss a sensitivity analysis with respect to the length of the data record and the size of the samples.
In addition, their potential applications for managing climate-related risks in agriculture will be shown through case studies based on actual seasonal climate forecasts for rice cropping in the Philippines and maize cropping in India and Kenya.
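The conditional block sampling behind FResampler1 can be sketched as follows; the historical record, tercile labels, and forecast probabilities are all invented, and `fresampler1` is a simplified stand-in for the actual tool:

```python
import random

# Hypothetical historical record: year -> (tercile label, list of daily
# (rain, tmax, tmin) tuples for the season of interest).
history = {
    1991: ('below', [(0.0, 31.0, 22.0), (0.2, 30.5, 21.8)]),
    1992: ('above', [(12.1, 28.0, 21.0), (8.4, 27.5, 20.7)]),
    1993: ('near',  [(3.1, 29.5, 21.5), (2.2, 29.8, 21.4)]),
    1994: ('above', [(9.9, 27.8, 20.9), (14.0, 27.2, 20.5)]),
}

def fresampler1(forecast_probs, n_realizations, seed=0):
    """Draw whole seasons from years whose tercile matches a category
    sampled from the forecast; sampling entire blocks keeps the
    rain/temperature covariance intact."""
    rng = random.Random(seed)
    categories = list(forecast_probs)
    weights = [forecast_probs[c] for c in categories]
    out = []
    for _ in range(n_realizations):
        cat = rng.choices(categories, weights=weights)[0]
        year = rng.choice([y for y, (c, _) in history.items() if c == cat])
        out.append(history[year][1])  # the full daily block for that year
    return out

realizations = fresampler1({'below': 0.2, 'near': 0.3, 'above': 0.5}, 100)
```

Because each realization is a complete historical season, no cross-variable consistency has to be modeled explicitly.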
Rapid Parameterization Schemes for Aircraft Shape Optimization
NASA Technical Reports Server (NTRS)
Li, Wu
2012-01-01
A rapid shape parameterization tool called PROTEUS is developed for aircraft shape optimization. This tool can be applied directly to any aircraft geometry that has been defined in PLOT3D format, with the restriction that each aircraft component must be defined by only one data block. PROTEUS has eight types of parameterization schemes: planform, wing surface, twist, body surface, body scaling, body camber line, shifting/scaling, and linear morphing. These parametric schemes can be applied to two types of components: wing-type surfaces (e.g., wing, canard, horizontal tail, vertical tail, and pylon) and body-type surfaces (e.g., fuselage, pod, and nacelle). These schemes permit the easy setup of commonly used shape modification methods, and each customized parametric scheme can be applied to the same type of component for any configuration. This paper explains the mathematics for these parametric schemes and uses two supersonic configurations to demonstrate the application of these schemes.
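A twist scheme of the kind listed above can be sketched as a rotation of each spanwise section about its leading edge; `twist_sections` is a hypothetical simplification for illustration, not PROTEUS code:

```python
import numpy as np

def twist_sections(sections, root_twist, tip_twist):
    """Rotate each spanwise section's (x, z) points about its leading
    edge by a linearly interpolated twist angle in degrees (root -> tip)."""
    n = len(sections)
    out = []
    for i, pts in enumerate(sections):
        frac = i / (n - 1) if n > 1 else 0.0
        ang = np.deg2rad(root_twist + frac * (tip_twist - root_twist))
        c, s = np.cos(ang), np.sin(ang)
        rot = np.array([[c, -s], [s, c]])
        le = pts[0]                      # leading-edge point used as pivot
        out.append(le + (pts - le) @ rot.T)
    return out
```

The other schemes (planform, scaling, shifting, morphing) have the same shape: a smooth, parameter-controlled map applied section by section to the PLOT3D point data.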
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
Imaging non-Gaussian output fields produced by Josephson parametric amplifiers: experiments
NASA Astrophysics Data System (ADS)
Toyli, D. M.; Venkatramani, A. V.; Boutin, S.; Eddins, A.; Didier, N.; Clerk, A. A.; Blais, A.; Siddiqi, I.
2015-03-01
In recent years, squeezed microwave states have become the focus of intense research motivated by applications in continuous-variables quantum computation and precision qubit measurement. Despite numerous demonstrations of vacuum squeezing with superconducting parametric amplifiers such as the Josephson parametric amplifier (JPA), most experiments have also suggested that the squeezed output field becomes non-ideal at the large (>10 dB) signal gains required for low-noise qubit measurement. Here we describe a systematic experimental study of JPA squeezing performance in this regime for varying lumped-element device designs and pumping methods. We reconstruct the JPA output fields through homodyne detection of the field moments and quantify the deviations from an ideal squeezed state using maximal entropy techniques. These methods provide a powerful diagnostic tool to understand how effects such as gain compression impact JPA squeezing. Our results highlight the importance of weak device nonlinearity for generating highly squeezed states. This work is supported by ARO and ONR.
Analysis of computer-aided techniques for virtual planning in nasoalveolar moulding.
Loeffelbein, D J; Ritschl, L M; Rau, A; Wolff, K-D; Barbarino, M; Pfeifer, S; Schönberger, M; Wintermantel, E
2015-05-01
We compared two methods of planning virtual alveolar moulding as the first step in nasoalveolar moulding to provide the basis for an automated process to fabricate nasoalveolar moulding appliances using computer-aided design and computer-aided manufacturing (CAD/CAM). First, the initial intraoral casts taken from seven newborn babies with complete unilateral cleft lip and palate were digitised. This was repeated for the target models after conventional nasoalveolar moulding had been completed. The initial digital model for each patient was then virtually modified by two different modelling techniques to achieve the corresponding target model: parametric and freeform modelling with the software Geomagic(®). The digitally remodelled casts were quantitatively compared with the actual target model for each patient. The comparison showed that freeform modelling of the initial cast was successful (mean (SD) deviation, n=7, +0.723 (0.148) to -0.694 (0.157) mm) but needed continuous orientation and was difficult to automate. The results from the parametric modelling (mean (SD) deviation, n=7, +1.168 (0.185) to -1.067 (0.221) mm) were not as good as those from freeform modelling. During parametric modelling, we found some irregularities on the surface, and transverse growth of the maxilla was not accounted for. However, this method seems to be the right one as far as automation is concerned. In addition, an external algorithm must be implemented because the functionality of the commercial software is limited. Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Parametrization of DFTB3/3OB for Magnesium and Zinc for Chemical and Biological Applications
2015-01-01
We report the parametrization of the approximate density functional theory, DFTB3, for magnesium and zinc for chemical and biological applications. The parametrization strategy follows that established in previous work that parametrized several key main group elements (O, N, C, H, P, and S). This 3OB set of parameters can thus be used to study many chemical and biochemical systems. The parameters are benchmarked using both gas-phase and condensed-phase systems. The gas-phase results are compared to DFT (mostly B3LYP), ab initio (MP2 and G3B3), and PM6, as well as to a previous DFTB parametrization (MIO). The results indicate that DFTB3/3OB is particularly successful at predicting structures, including rather complex dinuclear metalloenzyme active sites, while being semiquantitative (with a typical mean absolute deviation (MAD) of ∼3–5 kcal/mol) for energetics. Single-point calculations with high-level quantum mechanics (QM) methods generally lead to very satisfying (a typical MAD of ∼1 kcal/mol) energetic properties. DFTB3/MM simulations for solution and two enzyme systems also lead to encouraging structural and energetic properties in comparison to available experimental data. The remaining limitations of DFTB3, such as the treatment of interaction between metal ions and highly charged/polarizable ligands, are also discussed. PMID:25178644
Martinez Manzanera, Octavio; Elting, Jan Willem; van der Hoeven, Johannes H.; Maurits, Natasha M.
2016-01-01
In the clinic, tremor is diagnosed during a time-limited process in which patients are observed and the characteristics of tremor are visually assessed. For some tremor disorders, a more detailed analysis of these characteristics is needed. Accelerometry and electromyography can be used to obtain a better insight into tremor. Typically, routine clinical assessment of accelerometry and electromyography data involves visual inspection by clinicians and occasionally computational analysis to obtain objective characteristics of tremor. However, for some tremor disorders these characteristics may be different during daily activity. This variability in presentation between the clinic and daily life makes a differential diagnosis more difficult. A long-term recording of tremor by accelerometry and/or electromyography in the home environment could help to give a better insight into the tremor disorder. However, an evaluation of such recordings using routine clinical standards would take too much time. We evaluated a range of techniques that automatically detect tremor segments in accelerometer data, as accelerometer data is more easily obtained in the home environment than electromyography data. Time can be saved if clinicians only have to evaluate the tremor characteristics of segments that have been automatically detected in longer daily activity recordings. We tested four non-parametric methods and five parametric methods on clinical accelerometer data from 14 patients with different tremor disorders. The consensus between two clinicians regarding the presence or absence of tremor on 3943 segments of accelerometer data was employed as reference. The nine methods were tested against this reference to identify their optimal parameters. Non-parametric methods generally performed better than parametric methods on our dataset when optimal parameters were used. 
However, one parametric method, employing the high frequency content of the tremor bandwidth under consideration (High Freq) performed similarly to non-parametric methods, but had the highest recall values, suggesting that this method could be employed for automatic tremor detection. PMID:27258018
Lambert-Girard, Simon; Allard, Martin; Piché, Michel; Babin, François
2015-04-01
The development of a novel broadband and tunable optical parametric generator (OPG) is presented. The OPG properties are studied numerically and experimentally in order to optimize the generator's use in a broadband spectroscopic LIDAR operating in the short and mid-infrared. This paper discusses trade-offs to be made on the properties of the pump, crystal, and seeding signal in order to optimize the pulse spectral density and divergence while enabling energy scaling. A seed with a large spectral bandwidth is shown to enhance the pulse-to-pulse stability and optimize the pulse spectral density. A numerical model shows excellent agreement with output power measurements; the model predicts that a pump having a large number of longitudinal modes improves conversion efficiency and pulse stability.
Spacelab mission dependent training parametric resource requirements study
NASA Technical Reports Server (NTRS)
Ogden, D. H.; Watters, H.; Steadman, J.; Conrad, L.
1976-01-01
Training flows were developed for typical missions, resource relationships analyzed, and scheduling optimization algorithms defined. Parametric analyses were performed to study the effect of potential changes in mission model, mission complexity and training time required on the resource quantities required to support training of payload or mission specialists. Typical results of these analyses are presented both in graphic and tabular form.
Parametric vs. non-parametric statistics of low resolution electromagnetic tomography (LORETA).
Thatcher, R W; North, D; Biver, C
2005-01-01
This study compared the relative statistical sensitivity of non-parametric and parametric statistics of 3-dimensional current sources as estimated by the EEG inverse solution Low Resolution Electromagnetic Tomography (LORETA). One would expect approximately 5% false positives (classification of a normal as abnormal) at the P < .025 level of probability (two-tailed test) and approximately 1% false positives at the P < .005 level. EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) from 43 normal adult subjects were imported into the Key Institute's LORETA program. We then used the Key Institute's cross-spectrum and the Key Institute's LORETA output files (*.lor) as the 2,394 gray matter pixel representation of 3-dimensional currents at different frequencies. The mean and standard deviation *.lor files were computed for each of the 2,394 gray matter pixels for each of the 43 subjects. Tests of Gaussianity and different transforms were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of parametric vs. non-parametric statistics was compared using a "leave-one-out" cross-validation method, in which individual normal subjects were withdrawn and then statistically classified as either normal or abnormal based on the remaining subjects. Log10 transforms approximated a Gaussian distribution with 95% to 99% accuracy. Under cross-validation, parametric Z-score tests at P < .05 demonstrated an average misclassification rate of approximately 4.25%, with a range over the 2,394 gray matter pixels of 27.66% to 0.11%. At P < .01, parametric Z-score cross-validation false positives averaged 0.26% and ranged from 6.65% to 0%. The non-parametric Key Institute's t-max statistic at P < .05 had an average misclassification rate of 7.64% and ranged from 43.37% to 0.04% false positives. The non-parametric t-max at P < .01 had an average misclassification rate of 6.67% and ranged from 41.34% to 0% false positives over the 2,394 gray matter pixels for any cross-validated normal subject. In conclusion, an adequate approximation to a Gaussian distribution and high cross-validation accuracy can be achieved with the Key Institute's LORETA programs by using a log10 transform and parametric statistics, and parametric normative comparisons had lower false positive rates than the non-parametric tests.
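The parametric leave-one-out procedure can be sketched for a single gray-matter pixel; the simulated log-normal values and the 1.96 critical value (two-tailed P < .05) are illustrative, not the study's data:

```python
import numpy as np

def loo_false_positive_rate(values, z_crit=1.96):
    """Leave-one-out Z test at one pixel: withdraw each normal subject,
    z-score it against the remaining subjects, and count it as a false
    positive if |z| exceeds the two-tailed critical value."""
    x = np.log10(np.asarray(values, dtype=float))  # log10 -> near-Gaussian
    fp = 0
    for i in range(len(x)):
        rest = np.delete(x, i)
        z = (x[i] - rest.mean()) / rest.std(ddof=1)
        fp += abs(z) > z_crit
    return fp / len(x)

rng = np.random.default_rng(2)
pixel_values = 10 ** rng.normal(0.0, 0.3, size=43)  # simulated subjects
rate = loo_false_positive_rate(pixel_values)        # expected near 5%
```

When the log10 transform succeeds in making the values Gaussian, the observed rate should track the nominal alpha level, which is the calibration the study checks pixel by pixel.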
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
Optimal Operation of a Josephson Parametric Amplifier for Vacuum Squeezing
NASA Astrophysics Data System (ADS)
Malnou, M.; Palken, D. A.; Vale, Leila R.; Hilton, Gene C.; Lehnert, K. W.
2018-04-01
A Josephson parametric amplifier (JPA) can create squeezed states of microwave light, lowering the noise associated with certain quantum measurements. We experimentally study how the JPA's pump influences the phase-sensitive amplification and deamplification of a coherent tone's amplitude when that amplitude is commensurate with vacuum fluctuations. We predict and demonstrate that, by operating the JPA with a single current pump whose power is greater than the value that maximizes gain, the amplifier distortion is reduced and, consequently, squeezing is improved. Optimizing the singly pumped JPA's operation in this fashion, we directly observe 3.87 ± 0.03 dB of vacuum squeezing over a bandwidth of 30 MHz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maloney, J. A.; Morozov, V. S.; Derbenev, Ya. S.
Muon colliders have been proposed for the next generation of particle accelerators that study high-energy physics at the energy and intensity frontiers. In this paper we study a possible implementation of muon ionization cooling, Parametric-resonance Ionization Cooling (PIC), in the twin helix channel. The resonant cooling method of PIC offers the potential to reduce emittance beyond that achievable with ionization cooling with ordinary magnetic focusing. We examine optimization of a variety of parameters, study the nonlinear dynamics in the twin helix channel and consider possible methods of aberration correction.
Design of a terahertz parametric oscillator based on a resonant cavity in a terahertz waveguide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, K., E-mail: k-saito@material.tohoku.ac.jp; Oyama, Y.; Tanabe, T.
We demonstrate ns-pulsed pumping of terahertz (THz) parametric oscillations in a quasi-triply resonant cavity in a THz waveguide. The THz waves, down converted through parametric interactions between the pump and signal waves at telecom frequencies, are confined to a GaP single mode ridge waveguide. By combining the THz waveguide with a quasi-triply resonant cavity, the nonlinear interactions can be enhanced. A low threshold pump intensity for parametric oscillations can be achieved in the cavity waveguide. The THz output power can be maximized by optimizing the quality factors of the cavity so that an optical to THz photon conversion efficiency, η_p, of 0.35, which is near the quantum-limit level, can be attained. The proposed THz optical parametric oscillator can be utilized as an efficient and monochromatic THz source.
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography data many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1) initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the direct reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
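The Patlak model underlying these parametric images is a linear graphical analysis; a minimal sketch with an invented input function (not the [18F]DOPA reference-tissue data) shows the quantity that each voxel of a parametric image encodes:

```python
import numpy as np

# Invented time grid (min) and input function (kBq/mL) -- not [18F]DOPA data.
t = np.linspace(1.0, 60.0, 30)
cp = 10.0 * np.exp(-t / 40.0) + 2.0

# Cumulative integral of the input by the trapezoid rule.
cumint = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])

# Simulated irreversibly trapping tissue: Ct = Ki * int(Cp dt) + V0 * Cp.
ki_true, v0_true = 0.05, 1.5
ct = ki_true * cumint + v0_true * cp

# Patlak plot: Ct/Cp versus int(Cp)/Cp becomes linear at late times; the slope
# is the net influx rate Ki that a parametric image maps voxelwise.
xp, yp = cumint / cp, ct / cp
slope, intercept = np.polyfit(xp[10:], yp[10:], 1)
print(f"estimated Ki = {slope:.4f} /min, V0 = {intercept:.3f}")
```

A direct parametric reconstruction estimates this slope inside the EM iterations from raw projection data rather than from reconstructed time-activity curves, which is what the descending subset scheme above accelerates.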
Yadav, Kartikey K; Dasgupta, Kinshuk; Singh, Dhruva K; Varshney, Lalit; Singh, Harvinderpal
2015-03-06
Polyethersulfone-based beads encapsulating di-2-ethylhexyl phosphoric acid have been synthesized and evaluated for the recovery of rare earth values from the aqueous media. Percentage recovery and the sorption behavior of Dy(III) have been investigated under wide range of experimental parameters using these beads. Taguchi method utilizing L-18 orthogonal array has been adopted to identify the most influential process parameters responsible for higher degree of recovery with enhanced sorption of Dy(III) from chloride medium. Analysis of variance indicated that the feed concentration of Dy(III) is the most influential factor for equilibrium sorption capacity, whereas aqueous phase acidity influences the percentage recovery most. The presence of polyvinyl alcohol and multiwalled carbon nanotube modified the internal structure of the composite beads and resulted in uniform distribution of organic extractant inside polymeric matrix. The experiment performed under optimum process conditions as predicted by Taguchi method resulted in enhanced Dy(III) recovery and sorption capacity by polymeric beads with minimum standard deviation. Copyright © 2015 Elsevier B.V. All rights reserved.
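Taguchi-style main-effects analysis reduces to comparing mean responses (or signal-to-noise ratios) at each factor level; the toy two-factor slice below, with invented recovery values, illustrates the bookkeeping (the study itself used a full L-18 array with more factors):

```python
import numpy as np

# Invented 2-factor, 3-level design slice with one % recovery value per run.
levels = np.array([[0, 0], [0, 1], [0, 2],
                   [1, 0], [1, 1], [1, 2],
                   [2, 0], [2, 1], [2, 2]])
recovery = np.array([71.0, 74.0, 70.0, 80.0, 85.0, 78.0, 76.0, 79.0, 75.0])

# "Larger-is-better" signal-to-noise ratio; with a single replicate per run
# it reduces to 20*log10(y). Main effects are the level means of the S/N ratio.
sn = 20.0 * np.log10(recovery)
best_levels = []
for f in range(levels.shape[1]):
    level_means = np.array([sn[levels[:, f] == l].mean() for l in range(3)])
    best_levels.append(int(level_means.argmax()))
    print(f"factor {f}: S/N level means = {np.round(level_means, 2)}, "
          f"best level = {best_levels[-1]}")
```

The "optimum process conditions" reported by such a study are simply the combination of best levels across factors, with ANOVA apportioning how much of the response variation each factor explains.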
Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-01-01
Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly.
Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977
Automated MRI segmentation for individualized modeling of current flow in the human head
NASA Astrophysics Data System (ADS)
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-12-01
Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance.
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
3-D Vector Flow Estimation With Row-Column-Addressed Arrays.
Holbek, Simon; Christiansen, Thomas Lehrmann; Stuart, Matthias Bo; Beers, Christopher; Thomsen, Erik Vilain; Jensen, Jorgen Arendt
2016-11-01
Simulation and experimental results from 3-D vector flow estimations for a 62 + 62 2-D row-column (RC) array with integrated apodization are presented. A method for implementing a 3-D transverse oscillation (TO) velocity estimator on a 3-MHz RC array is developed and validated. First, a parametric simulation study is conducted, where flow direction, ensemble length, number of pulse cycles, steering angles, transmit/receive apodization, and TO apodization profiles and spacing are varied, to find the optimal parameter configuration. The performance of the estimator is evaluated with respect to the relative mean bias and the mean standard deviation. Second, the optimal parameter configuration is implemented on the prototype RC probe connected to the experimental ultrasound scanner SARUS. Results from measurements conducted in a flow-rig system containing a constant laminar flow and a straight-vessel phantom with a pulsating flow are presented. Both an M-mode and a steered transmit sequence are applied. The 3-D vector flow is estimated in the flow rig for four representative flow directions. In the setup with a 90° beam-to-flow angle, the relative mean bias across the entire velocity profile is (-4.7, -0.9, 0.4)% with a relative standard deviation of (8.7, 5.1, 0.8)% for (v_x, v_y, v_z). The estimated peak velocity is 48.5 ± 3 cm/s, giving a -3% bias. The out-of-plane velocity component perpendicular to the cross section is used to estimate volumetric flow rates in the flow rig at a 90° beam-to-flow angle. The estimated mean flow rate in this setup is 91.2 ± 3.1 L/h, corresponding to a bias of -11.1%. In a pulsating flow setup, the flow rate measured during five cycles is 2.3 ± 0.1 mL/stroke, giving a -9.7% bias. It is concluded that accurate 3-D vector flow estimation can be obtained using a 2-D RC-addressed array.
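The two figures of merit quoted above, relative mean bias and mean standard deviation, can be computed as in this sketch against a known parabolic laminar profile; the profile, noise level, and offset are invented, and no ultrasound simulation is involved:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented reference: parabolic laminar profile across 50 depth samples (m/s).
v_true = 0.5 * (1.0 - np.linspace(-1.0, 1.0, 50) ** 2)

# 100 noisy velocity estimates with a small systematic offset (all made up).
est = v_true + rng.normal(0.0, 0.01, size=(100, 50)) + 0.005

# Bias and std are averaged over the profile and expressed in % of the
# peak reference velocity, mirroring the per-component numbers above.
v_peak = v_true.max()
rel_bias = 100.0 * np.mean(est.mean(axis=0) - v_true) / v_peak
rel_std = 100.0 * np.mean(est.std(axis=0, ddof=1)) / v_peak
print(f"relative mean bias = {rel_bias:.2f}%, relative mean std = {rel_std:.2f}%")
```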
Parametric modeling and stagger angle optimization of an axial flow fan
NASA Astrophysics Data System (ADS)
Li, M. X.; Zhang, C. H.; Liu, Y.; Y Zheng, S.
2013-12-01
Axial flow fans are widely used in every field of social production. Improving their efficiency is a sustained and urgent demand of domestic industry. Optimization of the stagger angle is an important method to improve fan performance. Parametric modeling and calculation process automation are realized in this paper to improve optimization efficiency. Geometric modeling and mesh division are parameterized based on GAMBIT. Parameter setting and flow field calculation are completed in the batch mode of FLUENT. A control program is developed in Visual C++ to coordinate the data exchange between the mentioned software. It also extracts calculation results for the optimization algorithm module (provided by MATLAB) to generate directive optimization control parameters, which are fed back to the modeling module. The center line of the blade airfoil, based on the CLARK Y profile, is constructed by the non-constant circulation and triangle discharge method. Stagger angles of six airfoil sections are optimized, to reduce the influence of inlet shock loss as well as gas leakage in the blade tip clearance and hub resistance at the blade root. Finally an optimal solution is obtained, which meets the total pressure requirement under the given conditions and improves total pressure efficiency by about 6%.
Aerodynamic shape optimization of a HSCT type configuration with improved surface definition
NASA Technical Reports Server (NTRS)
Thomas, Almuttil M.; Tiwari, Surendra N.
1994-01-01
Two distinct parametrization procedures for generating free-form surfaces to represent aerospace vehicles are presented. The first is representation using spline functions such as nonuniform rational B-splines (NURBS), and the second is a novel (geometrical) parametrization using solutions to a suitably chosen partial differential equation. The main idea is to develop a surface which is more versatile and can be used in an optimization process. An unstructured volume grid is generated by an advancing front algorithm and solutions are obtained using an Euler solver. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using an automatic differentiation precompiler software tool. Aerodynamic shape optimization of a complete aircraft with twenty-four design variables is performed. High speed civil transport (HSCT) aircraft configurations are targeted to demonstrate the process.
NASA Astrophysics Data System (ADS)
Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.
2018-05-01
Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps of land cover, especially of rural and urban settlements, are essential to planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. Properly fitting a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, the regression method's parameters are usually set globally for a geographical area. In this work we have modified a meta-optimization approach so that the regression method's parameters are set on a per-time-series basis. The standard deviation of the model parameters and the magnitude of the residuals are used as the scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a nonlinear extended Kalman filter (EKF). The approach uses temporal information, which significantly reduces the processing time and storage requirements for processing each time series. It also derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine, and the performance of the method is compared to the original approach on our ground truth data.
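For a stationary series, the triply modulated cosine model (mean, amplitude, phase) is linear in its parameters after a standard trigonometric expansion, so an ordinary least-squares fit, used here in place of the EKF for brevity, recovers all three; the series length, noise level, and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic one-year reflectance series of 45 composites (model form assumed):
# mean + seasonal cosine + observation noise.
n = 45
t = np.arange(n, dtype=float)
omega = 2.0 * np.pi / n                       # one seasonal cycle
mu_true, alpha_true, phi_true = 0.30, 0.10, 0.70
y = mu_true + alpha_true * np.cos(omega * t + phi_true) + rng.normal(0.0, 0.005, n)

# alpha*cos(w t + phi) = a*cos(w t) + b*sin(w t) with a = alpha*cos(phi) and
# b = -alpha*sin(phi), so the model is linear in (mu, a, b).
A = np.column_stack([np.ones(n), np.cos(omega * t), np.sin(omega * t)])
mu, a, b = np.linalg.lstsq(A, y, rcond=None)[0]
alpha, phi = float(np.hypot(a, b)), float(np.arctan2(-b, a))
print(f"mu = {mu:.3f}, alpha = {alpha:.3f}, phi = {phi:.3f}")
```

The EKF in the paper does the same estimation recursively, one observation at a time, which is what lets the parameters drift slowly ("triply modulated") and yields the per-series reliability metrics mentioned above.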
Signal-domain optimization metrics for MPRAGE RF pulse design in parallel transmission at 7 tesla.
Gras, V; Vignaud, A; Mauconduit, F; Luong, M; Amadon, A; Le Bihan, D; Boulant, N
2016-11-01
Standard radiofrequency pulse design strategies focus on minimizing the deviation of the flip angle from a target value, which is sufficient but not necessary for signal homogeneity. An alternative approach, based directly on the signal, is proposed here for the MPRAGE sequence, and is developed in the parallel transmission framework with the use of the k_T-points parametrization. The flip-angle-homogenizing and the proposed methods were investigated numerically under explicit power and specific absorption rate constraints and tested experimentally in vivo on a 7 T parallel transmission system enabling real-time local specific absorption rate monitoring. Radiofrequency pulse performance was assessed by a careful analysis of the signal and contrast between white and gray matter. Despite a slight reduction of the flip angle uniformity, an improved signal and contrast homogeneity with a significant reduction of the specific absorption rate was achieved with the proposed metric in comparison with standard pulse designs. The proposed joint optimization of the inversion and excitation pulses enables a significant reduction of the specific absorption rate in the MPRAGE sequence while preserving image quality. The work reported thus unveils a possible direction to increase the potential of ultra-high field MRI and parallel transmission. Magn Reson Med 76:1431-1442, 2016. © 2015 International Society for Magnetic Resonance in Medicine.
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit
2010-01-01
The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and the expected cost of route deviation due to convective weather. They are considered as alternatives to the set of coded departure routes, which are predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula, which calculates the predicted probability of deviation from a given flight path, is also derived. The predicted probability of deviation is calculated for all path candidates, and routes with the best probability are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy a desired level of reliability.
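A minimal staged dynamic program over a toy route graph shows why the running time is linear in the total number of links; all fuel costs, deviation probabilities, and the deviation penalty below are invented:

```python
# Invented staged route graph: fuel[i][j][k] is the fuel cost of the link from
# node j in stage i to node k in stage i+1; p_dev[i][j][k] is the predicted
# probability of a weather deviation on that link; DEV_PENALTY its assumed cost.
fuel = [[[3.0, 4.0], [2.5, 3.5]],
        [[5.0, 6.0], [4.0, 7.0]]]
p_dev = [[[0.1, 0.4], [0.3, 0.2]],
         [[0.2, 0.5], [0.1, 0.6]]]
DEV_PENALTY = 10.0

# Backward dynamic programming: each link is examined exactly once, so the
# running time is linear in the total number of links between all stages.
J_next = [0.0, 0.0]                 # cost-to-go at the destination stage
policy = []
for i in reversed(range(len(fuel))):
    J_cur, nxt = [], []
    for j in range(len(fuel[i])):
        cand = [fuel[i][j][k] + DEV_PENALTY * p_dev[i][j][k] + J_next[k]
                for k in range(len(fuel[i][j]))]
        best = min(range(len(cand)), key=cand.__getitem__)
        J_cur.append(cand[best])
        nxt.append(best)
    policy.insert(0, nxt)
    J_next = J_cur

print("optimal expected cost from each origin node:", J_next)
print("best successor per node per stage:", policy)
```

The objective mirrors the abstract: fuel cost plus the expected cost of deviation, with the deviation probability entering each link's weight.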
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired from radio-astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
NASA Astrophysics Data System (ADS)
Smetanin, S. N.; Jelínek, M.; Kubeček, V.; Jelínková, H.; Ivleva, L. I.
2016-10-01
A new effect is demonstrated: shortening of the parametrically generated pulse down to hundreds of picoseconds via pump depletion by intracavity Raman conversion in a miniature passively Q-switched Nd:SrMoO4 parametric self-Raman laser, with increasing energy of the shortened pulse under pulsed pumping by a high-power laser diode bar. A theoretical estimate of the duration of the depletion stage of the convertible fundamental laser radiation via intracavity Raman conversion agrees with the experimentally demonstrated duration of the parametrically generated pulse. Mathematical modeling reveals how the quality and extent of the pulse shortening deteriorate, and solutions are found by optimizing the laser parameters.
Fitting the constitution type Ia supernova data with the redshift-binned parametrization method
NASA Astrophysics Data System (ADS)
Huang, Qing-Guo; Li, Miao; Li, Xiao-Dong; Wang, Shuang
2009-10-01
In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ: the equation of state (EOS) w and the energy density ρΛ of DE rapidly decrease as the redshift z increases. Inspired by this clue, we separate the redshifts into different bins, and discuss models of a constant w or a constant ρΛ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρΛ also rapidly decrease with increasing z, consistent with the result of the CPL model. Moreover, a step function model in which ρΛ rapidly decreases at redshift z ≈ 0.331 presents a significant improvement (Δχ² = -4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of the DE density of this model, and find that this model deviates from the cosmological constant Λ at the 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of the SNIa data, or more interestingly from the nature of DE itself. In addition, for models with the same number of redshift bins, a piecewise constant ρΛ model always performs better than a piecewise constant w model; this shows the advantage of using ρΛ, instead of w, to probe the variation of DE.
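To see how a step-function dark-energy density enters such a fit, the sketch below evaluates the SNIa distance modulus for an assumed drop in ρΛ at z ≈ 0.331; the drop amplitude and Ωm are illustrative only, not the paper's best-fit values:

```python
import numpy as np

# Illustrative parameters only (not the paper's best-fit values).
OMEGA_M, H0, C_LIGHT = 0.28, 70.0, 2.998e5      # -, km/s/Mpc, km/s
Z_STEP, F_DROP = 0.331, 0.4      # rho_DE(z)/rho_DE(0) drops to F_DROP past Z_STEP

def E(z):
    """Dimensionless Hubble rate for the step-function dark-energy model."""
    f = np.where(z < Z_STEP, 1.0, F_DROP)
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M) * f)

def distance_modulus(z, n=2000):
    """mu(z) from the comoving-distance integral (flat universe, trapezoid rule)."""
    zz = np.linspace(0.0, z, n)
    g = 1.0 / E(zz)
    dc = C_LIGHT / H0 * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(zz))   # Mpc
    dl = (1.0 + z) * dc                          # luminosity distance
    return 5.0 * np.log10(dl) + 25.0

for z in (0.2, 0.331, 0.8):
    print(f"z = {z:.3f}: mu = {distance_modulus(z):.3f}")
```

The fit itself compares such model mu(z) curves against the 397 observed distance moduli through a chi-square, which is where the quoted Δχ² = -4.361 between the step model and CPL comes from.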
Fitting the constitution type Ia supernova data with the redshift-binned parametrization method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang Qingguo; Kavli Institute for Theoretical Physics China, Chinese Academy of Sciences, Beijing 100190; Li Miao
2009-10-15
In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ, where the equation of state (EOS) w and the energy density ρΛ of DE will rapidly decrease along with the increase of redshift z. Inspired by this clue, we separate the redshifts into different bins, and discuss the models of a constant w or a constant ρΛ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρΛ will also rapidly decrease along with the increase of z, which is consistent with the result of the CPL model. Moreover, a step function model in which ρΛ rapidly decreases at redshift z ≈ 0.331 presents a significant improvement (Δχ² = -4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of DE density of this model, and find that this model deviates from the cosmological constant Λ at 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of SNIa data, or more interestingly from the nature of DE itself. In addition, for models with same number of redshift bins, a piecewise constant ρΛ model always performs better than a piecewise constant w model; this shows the advantage of using ρΛ, instead of w, to probe the variation of DE.
Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...
2014-06-03
A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of the time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. The mean and standard deviation of CO₂ saturation were then calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data, and on inversion constraints such as temporal roughness. Five hundred realizations, requiring 3.5 h on a single 12-core node, were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov chain Monte Carlo (MCMC) stochastic inverse approach may take days for a global search.
This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
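The three-step parametric bootstrap reduces, for a toy one-parameter "inversion", to: fit the observed data, resample noise around the fitted forward model, and re-invert each resample. The exponential forward model, noise level, and sample sizes below are all invented stand-ins for the ERT physics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear forward model standing in for the ERT inversion (assumed):
# observations d = exp(-m * x) + Gaussian noise of level sigma, where sigma
# plays the role of the error estimated from reciprocal measurements (step 1).
x = np.linspace(0.1, 2.0, 20)
m_true, sigma = 1.3, 0.02
d_obs = np.exp(-m_true * x) + rng.normal(0.0, sigma, x.size)

def invert(d):
    """Deterministic 'nonlinear inversion': 1-D least-squares grid search."""
    grid = np.linspace(0.5, 2.5, 2001)
    misfit = np.array([np.sum((d - np.exp(-m * x)) ** 2) for m in grid])
    return float(grid[misfit.argmin()])

m_hat = invert(d_obs)            # step 2: invert the observed (baseline) data

# Step 3: parametric bootstrap -- resample noise around the fitted forward
# model and re-invert each resample; mean and std summarize the uncertainty.
boot = np.array([invert(np.exp(-m_hat * x) + rng.normal(0.0, sigma, x.size))
                 for _ in range(200)])
print(f"m_hat = {m_hat:.3f}, bootstrap mean = {boot.mean():.3f}, "
      f"std = {boot.std():.4f}")
```

Because each bootstrap replicate is a deterministic re-inversion, the replicates parallelize trivially, which is why 500 realizations fit on a single node in the study.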
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
NASA Astrophysics Data System (ADS)
Hannat, Ridha
The aim of this thesis is to apply a new optimization methodology based on the dual kriging method to a hot air anti-icing system for airplane wings. The anti-icing system consists of a piccolo tube placed along the span of the wing, in the leading edge area. Hot air is injected through small nozzles and impinges on the inner wall of the wing. The objective function targeted by the optimization is the effectiveness of the heat transfer of the anti-icing system, defined as the ratio of the heat flux at the wing's inner wall to the sum of the heat flows from all the nozzles of the anti-icing system. The methodology adopted to optimize the anti-icing system consists of three steps. The first step is to build a database according to the Box-Behnken design of experiments. The objective function is then modeled by the dual kriging method, and finally the SQP optimization method is applied. One of the advantages of dual kriging is that the model passes exactly through all measurement points, but it can also take numerical errors into account and deviate from these points. Moreover, the kriged model can be updated at each new numerical simulation. These features of dual kriging make it a good tool for building the response surfaces necessary for the anti-icing system optimization. The first chapter presents a literature review and the optimization problem related to the anti-icing system. Chapters two, three and four present the three articles submitted. Chapter two is devoted to the validation of the CFD codes used to perform the numerical simulations of an anti-icing system and to compute the conjugate heat transfer (CHT). The CHT is calculated by taking into account the external flow around the airfoil, the internal flow in the anti-icing system, and the conduction in the wing. The heat transfer coefficient at the external skin of the airfoil is almost the same whether or not the external flow is taken into account.
Therefore, only the internal flow is considered in the subsequent articles. Chapter three concerns the design of experiments (DoE) matrix and the construction of a second-order parametric model. The objective function model is based on the Box-Behnken DoE. The parametric model that results from the numerical simulations serves as a basis for comparison with the kriged model of the third article. Chapter four applies the dual kriging method to model the heat transfer effectiveness of the anti-icing system and uses the model for optimization. The possibility of including the numerical error in the results is explored. For the test cases studied, introducing the numerical error in the optimization process does not improve the results. The dual kriging method is also used to model the distribution of the local heat flux and to interpolate the local heat flux corresponding to the optimal design of the anti-icing system.
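A minimal 1-D dual kriging sketch, assuming a Gaussian covariance and a linear drift, shows the exact-interpolation property mentioned above; the sample locations and values are invented stand-ins for simulated heat-transfer effectiveness:

```python
import numpy as np

# Invented 1-D samples standing in for simulated heat-transfer effectiveness.
xs = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
ys = np.sin(2.0 * np.pi * xs)

def cov(a, b, theta=5.0):
    """Gaussian covariance between two sets of 1-D locations."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

# Dual kriging system with linear drift F = [1, x]:
#   [K  F] [b]   [y]
#   [F' 0] [a] = [0]
F = np.column_stack([np.ones_like(xs), xs])
K = cov(xs, xs)
lhs = np.vstack([np.hstack([K, F]),
                 np.hstack([F.T, np.zeros((2, 2))])])
rhs = np.concatenate([ys, np.zeros(2)])
sol = np.linalg.solve(lhs, rhs)
b, a = sol[:xs.size], sol[xs.size:]

def predict(xq):
    """Kriging interpolant: exact at the samples by construction."""
    return cov(xq, xs) @ b + np.column_stack([np.ones_like(xq), xq]) @ a

print("at a sample:", float(predict(np.array([0.3]))[0]), "vs", float(ys[1]))
print("between samples:", float(predict(np.array([0.4]))[0]))
```

Adding a "nugget" term to the diagonal of K is the usual way to let the model deviate from the samples by an assumed numerical error, as the thesis explores; with a zero nugget, as here, the interpolation is exact.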
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; ...
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
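The trade-off the controller exploits can be sketched with a toy two-turbine model: yawing the upstream rotor costs it some power but deflects its wake sideways, recovering more power downstream. All constants below (cosine exponent, deflection gain, Jensen-style deficit) are illustrative assumptions, not the calibrated FLORIS parameters.

```python
import numpy as np

# Toy two-turbine farm, turbines aligned with the wind, spacing in rotor
# diameters.  Power is normalized to a single unwaked turbine.
def farm_power(yaw_deg, spacing=7.0):
    yaw = np.radians(yaw_deg)
    p1 = np.cos(yaw) ** 1.88                  # upstream power loss from yawing
    deflection = 0.3 * yaw * spacing          # crude lateral wake offset (in D)
    overlap = max(0.0, 1.0 - abs(deflection)) # fraction of downstream rotor waked
    deficit = 0.3 * overlap                   # Jensen-style velocity deficit
    p2 = (1.0 - deficit) ** 3                 # downstream power scales as u^3
    return p1 + p2

# Greedy (zero-yaw) baseline versus a 1-D search over upstream yaw settings.
baseline = farm_power(0.0)
yaws = np.linspace(0.0, 40.0, 81)
best = max(yaws, key=farm_power)
print(best, farm_power(best) / baseline)  # nonzero yaw raises total farm power
```

Even this crude model reproduces the qualitative FLORIS result: the farm optimum is a substantial upstream yaw angle, not the greedy zero-yaw setting of each turbine individually.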
Thermofluid Analysis of Magnetocaloric Refrigeration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan
While there have been extensive studies on the thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters that can be applied to a generic system has not been reported yet. In this study, a numerical model has been developed for optimization of the active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study on a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve competing performance parameters including COP, cooling capacity and temperature span, a new parameter called the AMR performance index-1 has been introduced in order to perform multi-objective optimization and simultaneously exploit all these parameters. The multi-objective optimization is carried out for a wide range of the non-dimensional parameters. The results of this study will provide general guidelines for designing high-performance AMR systems.
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Preliminary results are presented for two methods to approximate the mission performance of high-specific-impulse, high-specific-power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to the outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well-known trajectory optimization code VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.
Testing the Kerr metric with the iron line and the KRZ parametrization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ni, Yueying; Jiang, Jiachen; Bambi, Cosimo, E-mail: yyni13@fudan.edu.cn, E-mail: jcjiang12@fudan.edu.cn, E-mail: bambi@fudan.edu.cn
The spacetime geometry around astrophysical black holes is supposed to be well approximated by the Kerr metric, but deviations from the Kerr solution are predicted in a number of scenarios involving new physics. Broad iron Kα lines are commonly observed in the X-ray spectrum of black holes and originate from X-ray fluorescence of the inner accretion disk. The profile of the iron line is sensitively affected by the spacetime geometry in the strong gravity region and can be used to test the Kerr black hole hypothesis. In this paper, we extend previous work in the literature. In particular: i) as the test metric, we employ the parametrization recently proposed by Konoplya, Rezzolla, and Zhidenko, which has a number of subtle advantages with respect to the existing approaches; ii) we perform simulations with specific X-ray missions, and we consider NuSTAR as a prototype of current observational facilities and eXTP as an example of the next generation of X-ray observatories. We find a significant difference between the constraining power of NuSTAR and eXTP. With NuSTAR, it is difficult or impossible to constrain deviations from the Kerr metric. With eXTP, in most cases we can obtain quite stringent constraints (provided we have the correct astrophysical model).
Cozzi, Bruno; De Giorgio, Andrea; Peruffo, A; Montelli, S; Panin, M; Bombardi, C; Grandis, A; Pirone, A; Zambenedetti, P; Corain, L; Granato, Alberto
2017-08-01
The architecture of the neocortex classically consists of six layers, based on cytological criteria and on the layout of intra/interlaminar connections. Yet, the comparison of cortical cytoarchitectonic features across different species proves overwhelmingly difficult, due to the lack of a reliable model to analyze the connection patterns of neuronal ensembles forming the different layers. We first defined a set of suitable morphometric cell features, obtained in digitized Nissl-stained sections of the motor cortex of the horse, chimpanzee, and crab-eating macaque. We then modeled them using a quite general non-parametric data representation model, showing that the assessment of neuronal cell complexity (i.e., how a given cell differs from its neighbors) can be performed using a suitable measure of statistical dispersion such as the mean absolute deviation (MAD). Along with the non-parametric combination and permutation methodology, the application of MAD allowed us not only to estimate, but also to compare and rank the motor cortical complexity across different species. For the instances presented in this paper, we show that the pyramidal layers of the motor cortex of the horse are far more irregular than those of primates. This feature could be related to the different organizations of the motor system in monodactylous mammals.
NASA Astrophysics Data System (ADS)
Scholten, O.; Trinh, T. N. G.; de Vries, K. D.; Hare, B. M.
2018-01-01
The radio intensity and polarization footprint of a cosmic-ray induced extensive air shower is determined by the time-dependent structure of the current distribution residing in the plasma cloud at the shower front. In turn, the time dependence of the integrated charge-current distribution in the plasma cloud, the longitudinal shower structure, is determined by interesting physics which one would like to extract, such as the location and multiplicity of the primary cosmic-ray collision or the values of electric fields in the atmosphere during thunderstorms. To extract the structure of a shower from its footprint requires solving a complicated inverse problem. For this purpose we have developed a code that semianalytically calculates the radio footprint of an extensive air shower given an arbitrary longitudinal structure. This code can be used in an optimization procedure to extract the optimal longitudinal shower structure given a radio footprint. On the basis of air-shower universality we propose a simple parametrization of the structure of the plasma cloud. This parametrization is based on the results of Monte Carlo shower simulations. Deriving the parametrization also teaches which aspects of the plasma cloud are important for understanding the features seen in the radio-emission footprint. The calculated radio footprints are compared with microscopic CoREAS simulations.
Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD
NASA Astrophysics Data System (ADS)
Viellieber, Mathias; Class, Andreas G.
2013-11-01
Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed and well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, other effects, e.g. obstruction and flow deviation by spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and on accomplishing the parametrization of the volumetric forces.
NASA Astrophysics Data System (ADS)
Koyuncu, A.; Cigeroglu, E.; Özgüven, H. N.
2017-10-01
In this study, a new approach is proposed for the identification of structural nonlinearities by employing cascaded optimization and neural networks. A linear finite element model of the system and frequency response functions measured at arbitrary locations of the system are used in this approach. Using the finite element model, a training data set is created which appropriately spans the possible nonlinear configuration space of the system. A classification neural network trained on these data sets then localizes and determines the types of all nonlinearities associated with the nonlinear degrees of freedom in the system. A new training data set spanning the parametric space associated with the determined nonlinearities is created to facilitate parametric identification. Utilizing this data set, initially, a feed-forward regression neural network is trained, which parametrically identifies the classified nonlinearities. Then, the results obtained are further improved by carrying out an optimization which uses the network-identified values as starting points. Unlike identification methods available in the literature, the proposed approach does not require data collection from the degrees of freedom where nonlinear elements are attached, and furthermore, it is sufficiently accurate even in the presence of measurement noise. The application of the proposed approach is demonstrated on an example system with nonlinear elements and on a real-life experimental setup with a local nonlinearity.
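The cascade described above (coarse learned estimate, then local optimization started from it) can be illustrated in miniature. The sketch below is not the paper's method: the neural networks are replaced by a nearest-neighbour lookup over a training grid so the example stays self-contained, and the "FRF feature" of a cubic-stiffness oscillator is a made-up closed form, not a real frequency response computation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative feature extracted from an FRF of a single-DOF oscillator with
# cubic stiffness coefficient k3 (hypothetical closed form, for the sketch only).
def frf_feature(k3):
    return 1.0 / (2 * 0.05) / (1.0 + 0.375 * k3)

# "Training set" spanning the parametric space of the identified nonlinearity.
k3_grid = np.linspace(0.0, 2.0, 21)
features = np.array([frf_feature(k) for k in k3_grid])

# Measured feature from the true system, with 2% "measurement noise".
k3_true = 0.7
measured = frf_feature(k3_true) * 1.02

# Stage 1: coarse estimate = nearest training sample (stand-in for the
# regression neural network trained on the data set).
k3_coarse = k3_grid[np.argmin(np.abs(features - measured))]

# Stage 2: local optimization started from the network-identified value.
res = minimize_scalar(lambda k: (frf_feature(k) - measured) ** 2,
                      bracket=(k3_coarse - 0.1, k3_coarse + 0.1))
print(k3_coarse, res.x)  # coarse estimate, then refined estimate
```

The point of the cascade is visible even here: the lookup alone is quantized to the training grid, while the optimization refines it to the noise-limited value without needing a global search.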
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. 
This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Advani, S.H.; Lee, T.S.; Moon, H.
1992-10-01
The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational basis for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) application of the characteristic time concept and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts); (b) incorporation of leak-off effects for the models investigated in (a); (c) simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties; (d) development of guidelines for hydraulic fracture design and optimization.
Parametric estimation for reinforced concrete relief shelter for Aceh cases
NASA Astrophysics Data System (ADS)
Atthaillah; Saputra, Eri; Iqbal, Muhammad
2018-05-01
This paper is a work in progress (WIP) toward a rapid parametric framework for estimating the construction materials of post-disaster permanent shelters. The intended shelters are of reinforced concrete construction with brick walls. Inevitably, in post-disaster cases, design variations are needed to suit the victims' conditions, and it is practically impossible to satisfy every beneficiary with a satisfactory design using conventional methods. This study offers a parametric framework to overcome the slow construction-material estimation associated with design variations. The work integrates the parametric tool Grasshopper to establish algorithms that simultaneously model, visualize, calculate and write the calculated data to a spreadsheet in real time. Some customized Grasshopper components were created using GHPython scripting for a more optimized algorithm. The result of this study is a partial framework that successfully performs modeling, visualization, calculation and writing of the calculated data simultaneously; design alterations therefore do not increase the time needed for modeling, visualization, and material estimation. In future development, the parametric framework will be made open source.
NASA Astrophysics Data System (ADS)
Wang, Dengfeng; Cai, Kefang
2018-04-01
This article presents a hybrid method combining a modified non-dominated sorting genetic algorithm (MNSGA-II) with grey relational analysis (GRA) to improve the static-dynamic performance of a body-in-white (BIW). First, an implicit parametric model of the BIW was built using SFE-CONCEPT software, and then the validity of the implicit parametric model was verified by physical testing. Eight shape design variables were defined for BIW beam structures based on the implicit parametric technology. Subsequently, MNSGA-II was used to determine the optimal combination of the design parameters that can improve the bending stiffness, torsion stiffness and low-order natural frequencies of the BIW without considerable increase in the mass. A set of non-dominated solutions was then obtained in the multi-objective optimization design. Finally, the grey entropy theory and GRA were applied to rank all non-dominated solutions from best to worst to determine the best trade-off solution. The comparison between the GRA and the technique for order of preference by similarity to ideal solution (TOPSIS) illustrated the reliability and rationality of GRA. Moreover, the effectiveness of the hybrid method was verified by the optimal results such that the bending stiffness, torsion stiffness, first order bending and first order torsion natural frequency were improved by 5.46%, 9.30%, 7.32% and 5.73%, respectively, with the mass of the BIW increasing by 1.30%.
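The final ranking step, grey relational analysis, is compact enough to sketch directly. The numbers below are made-up responses for three hypothetical non-dominated designs, not the BIW results; the procedure (normalization, deviation from an ideal reference, grey relational coefficients with distinguishing coefficient ζ = 0.5, row-mean grade) is the standard GRA recipe.

```python
import numpy as np

# Rows: candidate non-dominated designs; columns: larger-is-better responses
# (e.g. bending stiffness, torsion stiffness, a natural frequency), here as
# illustrative ratios to the baseline design.
X = np.array([[1.02, 0.98, 1.05],
              [1.05, 1.01, 0.99],
              [0.97, 1.06, 1.02]])

# 1) Normalize each column to [0, 1] ("larger is better").
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# 2) Deviation from the ideal (all-ones) reference sequence.
delta = np.abs(1.0 - norm)

# 3) Grey relational coefficients, distinguishing coefficient zeta = 0.5.
zeta = 0.5
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 4) Grey relational grade = row mean; the highest grade is the best
#    trade-off design among the non-dominated set.
grade = xi.mean(axis=1)
best = int(np.argmax(grade))
print(grade, best)
```

In the article the grade weights could further be derived from grey entropy rather than taken as the uniform row mean used in this sketch.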
Total focusing method (TFM) robustness to material deviations
NASA Astrophysics Data System (ADS)
Painchaud-April, Guillaume; Badeau, Nicolas; Lepage, Benoit
2018-04-01
The total focusing method (TFM) is becoming an accepted nondestructive evaluation method for industrial inspection. What was a topic of discussion in the applied research community just a few years ago is now being deployed in critical industrial applications, such as inspecting welds in pipelines. However, the method's sensitivity to unexpected parametric changes (material and geometric) has not been rigorously assessed. In this article, we investigate the robustness of TFM in relation to unavoidable deviations from modeled nominal inspection component characteristics, such as sound velocities and uncertainties about the parts' internal and external diameters. We also review TFM's impact on the standard inspection modes often encountered in industrial inspections, and we present a theoretical model supported by empirical observations to illustrate the discussion.
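At its core, TFM is a delay-and-sum over every transmitter-receiver pair of a full-matrix-capture (FMC) acquisition. The sketch below images a single synthetic point scatterer with an idealized 16-element linear array; the geometry, sampling rate, and spike-like waveforms are illustrative assumptions, and real data would include the velocity and geometry uncertainties the article studies.

```python
import numpy as np

c = 5900.0                                   # m/s, longitudinal velocity in steel
fs = 50e6                                    # sampling rate, Hz
elems = np.stack([np.linspace(-0.01, 0.01, 16), np.zeros(16)], axis=1)
scatterer = np.array([0.002, 0.015])         # m, (x, z) below the array

# Synthesize FMC data: a unit spike at each tx->scatterer->rx time of flight.
n = 2000
fmc = np.zeros((16, 16, n))
for i, tx in enumerate(elems):
    for j, rx in enumerate(elems):
        t = (np.linalg.norm(tx - scatterer) + np.linalg.norm(rx - scatterer)) / c
        fmc[i, j, int(round(t * fs))] = 1.0

# TFM image: for each pixel, coherently sum every tx/rx pair at that pixel's
# round-trip time of flight.
xs = np.linspace(-0.01, 0.01, 21)
zs = np.linspace(0.005, 0.025, 21)
img = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(elems - np.array([x, z]), axis=1)
        for i in range(16):
            for j in range(16):
                k = int(round((d[i] + d[j]) / c * fs))
                if k < n:
                    img[iz, ix] += fmc[i, j, k]

iz, ix = np.unravel_index(np.argmax(img), img.shape)
print(xs[ix], zs[iz])  # image peak lands at the scatterer location
```

The robustness question the article raises maps directly onto this sum: an error in `c` or in the assumed part geometry shifts every focusing delay, defocusing and displacing the peak.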
Prediction uncertainty and optimal experimental design for learning dynamical systems.
Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P
2016-06-01
Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
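The core optimization (two models that both fit the observed data yet disagree maximally at an untested condition) can be sketched with scipy. This is a schematic, not the authors' formulation: a one-parameter exponential-decay model and synthetic data are illustrative assumptions, and by symmetry the search keeps the parameters ordered so the absolute value in the objective can be dropped.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic observations of an exponential decay y = exp(-k x).
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 0.55, 0.30])
model = lambda k, xq: np.exp(-k * xq)
sse = lambda k: np.sum((model(k, x) - y) ** 2)

# "Good fit" threshold: best achievable error plus a tolerance.
best = minimize(lambda p: sse(p[0]), x0=[0.5]).fun
tol = best + 0.01
x_new = 5.0          # unobserved condition whose prediction we want to bound

# Maximize the disagreement f(k1, x_new) - f(k2, x_new) (k1 < k2 at the start,
# so the difference is positive) subject to both fits being good.
obj = lambda p: model(p[1], x_new) - model(p[0], x_new)
cons = [{"type": "ineq", "fun": lambda p: tol - sse(p[0])},
        {"type": "ineq", "fun": lambda p: tol - sse(p[1])}]
res = minimize(obj, x0=[0.4, 0.8], method="SLSQP", constraints=cons)
print(-res.fun)  # prediction deviation: large = x_new poorly constrained by data
```

An experiment proposed at `x_new` itself would shrink this deviation the most, which is the intuition behind using the metric for experimental design.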
Coupled parametric design of flow control and duct shape
NASA Technical Reports Server (NTRS)
Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)
2009-01-01
A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a Pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
Parametric study of laser photovoltaic energy converters
NASA Technical Reports Server (NTRS)
Walker, G. H.; Heinbockel, J. H.
1987-01-01
Photovoltaic converters are of interest for converting laser power to electrical power in a space-based laser power system. This paper describes a model for photovoltaic laser converters and the application of this model to a neodymium laser silicon photovoltaic converter system. A parametric study which defines the sensitivity of the photovoltaic parameters is described. An optimized silicon photovoltaic converter has an efficiency greater than 50 percent for 1000 W/sq cm of neodymium laser radiation.
Flowfield characterization and model development in detonation tubes
NASA Astrophysics Data System (ADS)
Owens, Zachary Clark
A series of experiments and numerical simulations are performed to advance the understanding of flowfield phenomena and impulse generation in detonation tubes. Experiments employing laser-based velocimetry, high-speed schlieren imaging and pressure measurements are used to construct a dataset against which numerical models can be validated. The numerical modeling culminates in the development of a two-dimensional, multi-species, finite-rate-chemistry, parallel, Navier-Stokes solver. The resulting model is specifically designed to assess unsteady, compressible, reacting flowfields, and its utility for studying multidimensional detonation structure is demonstrated. A reduced, quasi-one-dimensional model with source terms accounting for wall losses is also developed for rapid parametric assessment. Using these experimental and numerical tools, two primary objectives are pursued. The first objective is to gain an understanding of how nozzles affect unsteady, detonation flowfields and how they can be designed to maximize impulse in a detonation based propulsion system called a pulse detonation engine. It is shown that unlike conventional, steady-flow propulsion systems where converging-diverging nozzles generate optimal performance, unsteady detonation tube performance during a single-cycle is maximized using purely diverging nozzles. The second objective is to identify the primary underlying mechanisms that cause velocity and pressure measurements to deviate from idealized theory. An investigation of the influence of non-ideal losses including wall heat transfer, friction and condensation leads to the development of improved models that reconcile long-standing discrepancies between predicted and measured detonation tube performance. It is demonstrated for the first time that wall condensation of water vapor in the combustion products can cause significant deviations from ideal theory.
Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D
2016-06-01
Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity.
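The non-parametric correction used for validation is typically a max-statistic permutation test, which controls family-wise error across all voxels without the smoothness assumptions of random field theory. The sketch below applies it to synthetic data (22 "images" of 100 voxels with one truly active voxel); the data, voxel count, and activation size are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_img, n_vox = 22, 100
data = rng.normal(size=(n_img, n_vox))
data[:, 0] += 1.5                                   # one truly active voxel

# Observed one-sample t-statistic per voxel.
t_obs = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n_img))

# Max-statistic permutation distribution: randomly flip each image's sign
# (valid under the null of zero mean) and record the largest t over all
# voxels; its 95th percentile is the family-wise corrected threshold.
max_t = []
for _ in range(2000):
    signs = rng.choice([-1.0, 1.0], size=(n_img, 1))
    d = data * signs
    t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_img))
    max_t.append(t.max())
thresh = np.quantile(max_t, 0.95)

sig = np.where(t_obs > thresh)[0]
print(thresh, sig)  # the truly active voxel should survive correction
```

Random field theory arrives at an analogous corrected threshold analytically from the field's smoothness, which is why agreement between the two procedures serves as a validation.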
On the estimation algorithm used in adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn B.
1993-01-01
The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor necessarily impacts the overall performance seeking control scheme, an effect exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An observability study reveals an equivalence relation between the biases and the engine deviation parameters; therefore, it cannot be decided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes based on local engine dynamic models, the new algorithm is easier to design and tune, and it reduces the computational burden of the onboard computer.
NASA Astrophysics Data System (ADS)
Hatano, Hideki; Slater, Richard; Takekawa, Shunji; Kusano, Masahiro; Watanabe, Makoto
2017-07-01
We demonstrate 43% slope efficiency for generation of ∼3200 nm light, a wavelength considered ideal for laser-induced ultrasound generation in carbon fiber reinforced plastic. High slope efficiency was obtained by optimizing crystal lengths, cavity length and mirror reflectivity using a two-crystal optical parametric oscillator + difference frequency mixing (OPO+DFM) nonlinear wavelength conversion scheme. Mid-IR output >12 mJ was obtained from a 1064 nm Nd:YAG pump laser with 12 ns pulse width (FWHM) and 43 mJ pulse energy. A compact, single-temperature crystal oven is described, along with some suggestions for improving the slope efficiency.
Revisiting dark energy models using differential ages of galaxies
NASA Astrophysics Data System (ADS)
Rani, Nisha; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha; Biesiada, Marek
2017-03-01
In this work, we use a test based on the differential ages of galaxies for distinguishing dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H0dt/dz and its derivative (H0d2t/dz2) using a model-independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift, which is directly linked with the Hubble parameter; hence, for the reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H0dt/dz and its derivative for several models: Phantom, Einstein-de Sitter (EdS), ΛCDM, the Chevallier-Polarski-Linder (CPL) parametrization, the Jassal-Bagla-Padmanabhan (JBP) parametrization and the Feng-Shen-Li-Li (FSLL) parametrization. We check the consistency of these models with the results of the reconstruction obtained in a model-independent way from the data. It is observed that H0dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, the EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H0dt/dz for the various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results; however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z ≥ 0.3.
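The diagnostic itself follows from the age-redshift relation: dt/dz = -1/((1+z)H(z)), so H0 dt/dz = -1/((1+z)E(z)) with E(z) = H(z)/H0. The comparison between two of the models named above can be sketched directly; Ωm = 0.3 for ΛCDM is the usual illustrative value, not a fit from the paper.

```python
import numpy as np

# Dimensionless Hubble rates E(z) = H(z)/H0 for two of the models.
def E_lcdm(z, om=0.3):
    return np.sqrt(om * (1 + z) ** 3 + (1 - om))

def E_eds(z):                      # Einstein-de Sitter: Omega_m = 1, flat
    return (1 + z) ** 1.5

# The diagnostic: H0*dt/dz = -1 / ((1+z) E(z)).
h0_dtdz = lambda z, E: -1.0 / ((1 + z) * E(z))

z = np.linspace(0.0, 2.0, 201)
lcdm = h0_dtdz(z, E_lcdm)
eds = h0_dtdz(z, E_eds)
# Both curves equal -1 at z = 0 by construction; EdS expands faster at any
# z > 0 for fixed H0, so its |dt/dz| falls below the LCDM curve, and this
# deviation is what the reconstructed data can detect.
print(lcdm[0], eds[0], np.max(np.abs(lcdm - eds)))
```

A model-independent reconstruction of this same quantity from H(z) data is then overlaid on such curves; models whose curve leaves the reconstruction's confidence band (EdS, Phantom) are disfavored.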
Fixed-Order Mixed Norm Designs for Building Vibration Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
2000-01-01
This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of H2 design to unmodeled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full order compensators that are robust to both unmodeled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verba, Roman, E-mail: verrv@ukr.net; Tiberkevich, Vasil; Slavin, Andrei
2015-09-14
The influence of the interfacial Dzyaloshinskii-Moriya interaction (IDMI) on the parametric amplification of spin waves propagating in an ultrathin ferromagnetic film is considered theoretically. It is shown that the IDMI changes the relation between the group velocities of the signal and idler spin waves in a parametric amplifier, which may result in the complete vanishing of the reversed idler wave. In the optimized case, the idler spin wave does not propagate from the pumping region at all, which increases the efficiency of the amplification of the signal wave and suppresses the spurious impact of the idler waves on neighboring spin-wave processing devices.
Red, Straight, no bends: primordial power spectrum reconstruction from CMB and large-scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravenni, Andrea; Verde, Licia; Cuesta, Antonio J., E-mail: andrea.ravenni@pd.infn.it, E-mail: liciaverde@icc.ub.edu, E-mail: ajcuesta@icc.ub.edu
2016-08-01
We present a minimally parametric, model independent reconstruction of the shape of the primordial power spectrum. Our smoothing spline technique is well-suited to search for smooth features such as deviations from scale invariance, and deviations from a power law such as running of the spectral index or small-scale power suppression. We use a comprehensive set of state-of-the-art cosmological data: Planck observations of the temperature and polarisation anisotropies of the cosmic microwave background, WiggleZ and Sloan Digital Sky Survey Data Release 7 galaxy power spectra and the Canada-France-Hawaii Lensing Survey correlation function. This reconstruction strongly supports the evidence for a power law primordial power spectrum with a red tilt and disfavours deviations from a power law power spectrum including small-scale power suppression such as that induced by significantly massive neutrinos. This offers a powerful confirmation of the inflationary paradigm, justifying the adoption of the inflationary prior in cosmological analyses.
Red, Straight, no bends: primordial power spectrum reconstruction from CMB and large-scale structure
NASA Astrophysics Data System (ADS)
Ravenni, Andrea; Verde, Licia; Cuesta, Antonio J.
2016-08-01
We present a minimally parametric, model independent reconstruction of the shape of the primordial power spectrum. Our smoothing spline technique is well-suited to search for smooth features such as deviations from scale invariance, and deviations from a power law such as running of the spectral index or small-scale power suppression. We use a comprehensive set of state-of-the-art cosmological data: Planck observations of the temperature and polarisation anisotropies of the cosmic microwave background, WiggleZ and Sloan Digital Sky Survey Data Release 7 galaxy power spectra and the Canada-France-Hawaii Lensing Survey correlation function. This reconstruction strongly supports the evidence for a power law primordial power spectrum with a red tilt and disfavours deviations from a power law power spectrum including small-scale power suppression such as that induced by significantly massive neutrinos. This offers a powerful confirmation of the inflationary paradigm, justifying the adoption of the inflationary prior in cosmological analyses.
Efficient model reduction of parametrized systems by matrix discrete empirical interpolation
NASA Astrophysics Data System (ADS)
Negri, Federico; Manzoni, Andrea; Amsallem, David
2015-12-01
In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.
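As a concrete illustration of the point-selection step underlying (M)DEIM, the following sketch implements the classical greedy DEIM index selection on a POD basis. It is a simplified stand-in for the matrix-valued variant discussed in the paper; the random snapshot matrix stands in for evaluations of a nonaffine parametrized operator.

```python
import numpy as np

def deim_indices(U):
    # Greedy DEIM point selection on an orthonormal basis U (n x m):
    # at each step, interpolate the next basis vector at the points
    # chosen so far and pick the row of largest interpolation residual.
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Snapshot matrix standing in for operator evaluations at training parameters
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((50, 8))
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
U = U[:, :4]                # POD basis of dimension 4
idx = deim_indices(U)
print(idx)                  # 4 distinct interpolation rows
```

The selected rows are the only entries of the nonlinear/nonaffine term that need to be evaluated online, which is the source of the speed-up the paper exploits.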
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly in quest of prediction accuracy, encounter difficulties in conducting experiments with existing experimental procedures, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that performs optimization of experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial.
Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly during an experiment, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection is realized by a Bayesian spike-and-slab prior, reverse prediction is realized by grid search, and design optimization is realized by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Siddiqui, Aleem M; Moses, Jeffrey; Hong, Kyung-Han; Lai, Chien-Jen; Kärtner, Franz X
2010-06-15
We show that an enhancement cavity seeded at the full repetition rate of the pump laser can automatically reshape small-signal gain across the interacting pulses in an optical parametric chirped-pulse amplifier for close-to-optimal operation, significantly increasing both the gain bandwidth and the conversion efficiency, in addition to boosting gain for high-repetition-rate amplification. Applied to a degenerate amplifier, the technique can provide an octave-spanning gain bandwidth.
Zhu, Xiang; Zhang, Dianwen
2013-01-01
We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on a graphics processing unit for high performance scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
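The per-pixel fitting kernel that GPU-LMFit parallelizes is, at its core, a Levenberg-Marquardt loop. The following single-threaded NumPy sketch is not the GPU-LMFit API; the exponential-decay model and all constants are illustrative stand-ins for a lifetime-imaging fit.

```python
import numpy as np

def levenberg_marquardt(resid, jac, x0, lam=1e-3, n_iter=100):
    # Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps,
    # adapting the damping factor lam after each trial step.
    x = np.asarray(x0, float)
    cost = 0.5 * np.sum(resid(x)**2)
    for _ in range(n_iter):
        r, J = resid(x), jac(x)
        A = J.T @ J + lam * np.eye(x.size)
        step = np.linalg.solve(A, -J.T @ r)
        x_new = x + step
        cost_new = 0.5 * np.sum(resid(x_new)**2)
        if cost_new < cost:            # accept step, trust the model more
            x, cost, lam = x_new, cost_new, lam * 0.3
        else:                          # reject step, damp harder
            lam *= 3.0
    return x

# Fit a single-exponential decay y = a*exp(-t/tau) to noiseless data
t = np.linspace(0.0, 5.0, 40)
y = 2.0 * np.exp(-t / 1.5)
resid = lambda p: p[0] * np.exp(-t / p[1]) - y
jac = lambda p: np.column_stack([np.exp(-t / p[1]),
                                 p[0] * t / p[1]**2 * np.exp(-t / p[1])])
p_fit = levenberg_marquardt(resid, jac, np.array([1.0, 1.0]))
print(p_fit)    # close to the true parameters (2.0, 1.5)
```

Running one such loop independently per pixel is what makes the problem embarrassingly parallel on a GPU.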
Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations
Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad
2013-01-01
Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput and network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
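The parametric form mentioned above is typically solved with a Dinkelbach-style iteration, which replaces the fractional objective f(x)/g(x) by a sequence of subtractive problems max_x f(x) − λ g(x). A toy sketch under assumed throughput and power models (the functions and constants below are illustrative, not the paper's system model):

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    # Maximize f(x)/g(x) over a finite candidate set via Dinkelbach's
    # parametric method: repeatedly solve max_x f(x) - lam*g(x) and
    # update lam to the ratio at the maximizer.
    lam = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        x = candidates[np.argmax(vals)]
        if abs(f(x) - lam * g(x)) < tol:   # F(lam) = 0 => lam is optimal
            return x, lam
        lam = f(x) / g(x)
    return x, lam

# Toy energy-efficiency example: spectral efficiency per unit total power
p = np.linspace(0.01, 10.0, 1000)       # candidate transmit powers
f = lambda p: np.log2(1.0 + p)          # throughput (bits/s/Hz), assumed
g = lambda p: p + 1.0                   # transmit + circuit power, assumed
p_star, ee = dinkelbach(f, g, p)
print(p_star, ee)
```

For this toy model the optimum is at p = e − 1, where the energy efficiency equals 1/(e·ln 2); the iteration typically converges in a handful of steps.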
Layout design-based research on optimization and assessment method for shipbuilding workshop
NASA Astrophysics Data System (ADS)
Liu, Yang; Meng, Mei; Liu, Shuang
2013-06-01
This study examines a three-dimensional visualization program, with emphasis on improving genetic algorithms through the layout optimization of a standard, discrete shipbuilding workshop. Utilizing a steel processing workshop as an example, the principle of minimum logistics cost is implemented to obtain an idealized equipment layout and a mathematical model. The objective is to minimize the total distance traveled between machines. An improved control operator is implemented to improve the iterative efficiency of the genetic algorithm and yield the relevant parameters. The Computer Aided Tri-Dimensional Interface Application (CATIA) software is applied to establish the manufacturing resource base and a parametric model of the steel processing workshop. Based on the results of the optimized planar logistics, a visual parametric model of the steel processing workshop is constructed, and qualitative and quantitative adjustments are then applied to the model. A method for evaluating the layout results is subsequently established using AHP. The optimized discrete production workshop has practical significance as a reference for the optimization and layout of digitalized production workshops.
High Grazing Angle Sea-Clutter Literature Review
2013-03-01
Section 7.1 of the report (DSTO-GD-0736) covers parametric modelling of sea clutter, where relationships were found to be intrinsically related to Gaussian detection counterparts; Section 7.2 covers optimal and sub-optimal detection; and Section 7.3 covers polarimetry, including early studies by Stacy et al. [45, 46] on polarimetry for target detection from high grazing angles.
Shuttle cryogenic supply system optimization study. Volume 6: Appendixes
NASA Technical Reports Server (NTRS)
1973-01-01
The optimization of the cryogenic supply system for space shuttles is discussed. The subjects considered are: (1) auxiliary power unit parametric data, (2) propellant acquisition, (3) thermal protection and thermodynamic properties, (4) instrumentation and controls, and (5) initial component redundancy evaluations. Diagrams of the systems are provided. Graphs of the performance capabilities are included.
NASA Astrophysics Data System (ADS)
Lu, Zheng; Chen, Xiaoyi; Zhou, Ying
2018-04-01
A particle tuned mass damper (PTMD) is a creative combination of a widely used tuned mass damper (TMD) and an efficient particle damper (PD) in the vibration control area. The performance of a one-storey steel frame with an attached PTMD is investigated through free vibration and shaking table tests. The influence of some key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effects is investigated, and it is shown that the attenuation level significantly depends on the filling ratio of particles. According to the experimental parametric study, some guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between the simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
Shape optimization for aerodynamic efficiency and low observability
NASA Technical Reports Server (NTRS)
Vinh, Hoang; Van Dam, C. P.; Dwyer, Harry A.
1993-01-01
Field methods based on the finite-difference approximations of the time-domain Maxwell's equations and the potential-flow equation have been developed to solve the multidisciplinary problem of airfoil shaping for aerodynamic efficiency and low radar cross section (RCS). A parametric study and an optimization study employing the two analysis methods are presented to illustrate their combined capabilities. The parametric study shows that for frontal radar illumination, the RCS of an airfoil is independent of the chordwise location of maximum thickness but depends strongly on the maximum thickness, leading-edge radius, and leading-edge shape. In addition, this study shows that the RCS of an airfoil can be reduced without significant effects on its transonic aerodynamic efficiency by reducing the leading-edge radius and/or modifying the shape of the leading edge. The optimization study involves the minimization of wave drag for a non-lifting, symmetrical airfoil with constraints on the airfoil maximum thickness and monostatic RCS. This optimization study shows that the two analysis methods can be used effectively to design aerodynamically efficient airfoils with certain desired RCS characteristics.
Method for Household Refrigerators Efficiency Increasing
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of optimizing working-process parameters in air conditioning systems is demonstrated in this work. The research uses the simulation modeling method. The parameter optimization criteria are considered, an analysis of the target functions is given, and the key factors of technical and economic optimization are discussed. The search for the optimal solution in the multi-objective optimization of the system is made by finding the minimum of the dual-target vector, created by the Pareto method of linear and weighted compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The results show that the technical and economic parameters of air conditioning systems outside the optimal-solution regions manifest considerable deviations from the minimum values. At the same time, these deviations grow significantly as the technical parameters move away from the values that optimize both capital investment and operating costs. Producing and operating conditioners with parameters that deviate considerably from the optimal values leads to increased material and power costs. The research allows one to establish the boundaries of the region of optimal values for the technical and economic parameters in air conditioning system design.
Active control of combustion instabilities
NASA Astrophysics Data System (ADS)
Al-Masoud, Nidal A.
A theoretical analysis of active control of combustion thermo-acoustic instabilities is developed in this dissertation. The theoretical combustion model is based on the dynamics of a two-phase flow in a liquid-fueled propulsion system. The formulation is based on a generalized wave equation with pressure as the dependent variable, and accommodates all influences of combustion, mean flow, unsteady motions and control inputs. The governing partial differential equations are converted to an equivalent set of ordinary differential equations using Galerkin's method by expressing the unsteady pressure and velocity fields as functions of the normal mode shapes of the chamber. This procedure yields a representation of the unsteady flow field as a system of coupled nonlinear oscillators that is used as a basis for controller design. Major research attention is focused on the control of longitudinal oscillations, with both linear and nonlinear processes being considered. Starting with a linear model using point actuators, the optimal locations of actuators and sensors are developed. The approach relies on quantitative measures of the degree of controllability and component cost. These criteria are arrived at by considering the energies of the system's inputs and outputs. The optimality criteria for sensor and actuator locations provide a balance between the importance of the lower order (controlled) and higher order (residual) modes. To address uncertainties in the system's parameters, a minimax-principle-based controller is used. The minimax approach corresponds to finding the best controller for the worst parameter deviation; in other words, the controller parameters are chosen to minimize, and the parameter deviation to maximize, some quadratic performance metric.
Using the minimax-based controller, a remarkable improvement in the control system's ability to handle parameter uncertainties is achieved when compared to the robustness of the regular control schemes such as LQR and LQG. Since the observed instabilities are harmonic, the concept of "harmonic input" is successfully implemented using a parametric controller to eliminate the thermo-acoustic instability. This control scheme relies on the determination of a phase-shift to maximize the energy dissipation and a controller gain to assure stability and minimize a pre-specified performance index. The closed loop control law design is based on finding an optimal phase angle such that the heat release produced by secondary oscillatory fuel injection is out of phase with the mode's pressure oscillations, thus maximizing energy dissipation, and on finding the limits on the controller gain that ensures system stability. The optimal gains are determined using ITA, ISE, ITAE performance indices. Simulations show successful implementation of the proposed technique.
Parametric Study and Optimization of a Piezoelectric Energy Harvester from Flow Induced Vibration
NASA Astrophysics Data System (ADS)
Ashok, P.; Jawahar Chandra, C.; Neeraj, P.; Santhosh, B.
2018-02-01
Self-powered systems have become the need of the hour, and several devices and techniques have been proposed to address this need. Among the various sources, vibration, being the most practical scenario, is chosen in the present study to investigate the possibility of harvesting energy. Various methods have been devised to trap the energy generated by vibrating bodies, which would otherwise be wasted. One such concept is termed flow-induced vibration, which involves the flow of a fluid across a bluff body that oscillates due to a phenomenon known as vortex shedding. These oscillations can be converted into electrical energy by the use of piezoelectric patches. A two-degree-of-freedom system, containing a cylinder as the primary mass and a cantilever beam with an attached piezoelectric circuit as the secondary mass, was considered to model the problem. Three wake oscillator models were studied in order to determine the one which can generate results with high accuracy. It was found that the Facchinetti model produced better results than the other two, and hence a parametric study was performed to determine the favourable range of the controllable variables of the system. A fitness function was formulated and optimization of the selected parameters was done using a genetic algorithm. The parametric optimization led to a considerable improvement in the harvested voltage from the system, owing to the high displacement of the secondary mass.
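Wake oscillator models of the Facchinetti type describe the fluctuating lift on the bluff body with a van der Pol equation. The following minimal sketch integrates only the uncoupled wake equation, with the structural coupling omitted and all constants assumed for illustration; it shows the self-excited limit-cycle behavior that drives the harvester.

```python
import numpy as np

# Van der Pol wake oscillator at the core of Facchinetti-type models:
#   q'' + eps*omega*(q^2 - 1)*q' + omega^2*q = 0   (forcing term omitted)
def wake_oscillator(eps=0.3, omega=2.0 * np.pi, dt=1e-3, T=20.0):
    q, qd = 0.1, 0.0                  # small initial perturbation
    qs = []
    for _ in range(int(T / dt)):
        qdd = -eps * omega * (q * q - 1.0) * qd - omega * omega * q
        qd += dt * qdd                # semi-implicit Euler: fine for a sketch
        q += dt * qd
        qs.append(q)
    return np.array(qs)

q = wake_oscillator()
# The oscillation grows from the perturbation and settles near the
# van der Pol limit cycle, whose amplitude is close to 2.
print(q[-2000:].max())
```

In the full coupled model the right-hand side carries a term proportional to the cylinder acceleration, which is what locks the wake to the structural motion.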
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects in a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure the deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD is compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal component projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify the anomalies of highest interest.
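For context, the statistical baseline TAD is compared against, the RX detector, scores each pixel by its Mahalanobis distance from a background mean and covariance estimated from the image. A minimal global-RX sketch on synthetic data (the band count, scene size and implanted target are illustrative):

```python
import numpy as np

def rx_scores(cube):
    # Global RX anomaly detector: Mahalanobis distance of each pixel
    # spectrum from the image-wide background mean and covariance.
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)

# Synthetic 3-band scene with one anomalous pixel
rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, (32, 32, 3))
cube[10, 10] += 8.0                    # implanted "man-made" spectrum
scores = rx_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (10, 10)
```

Local and subspace RX variants mentioned in the abstract differ only in how the background statistics are estimated (a sliding window, or a subspace projection), not in this core score.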
Mapping the Chevallier-Polarski-Linder parametrization onto physical dark energy Models
NASA Astrophysics Data System (ADS)
Scherrer, Robert J.
2015-08-01
We examine the Chevallier-Polarski-Linder (CPL) parametrization, in the context of quintessence and barotropic dark energy models, to determine the subset of such models to which it can provide a good fit. The CPL parametrization gives the equation of state parameter w for the dark energy as a linear function of the scale factor a , namely w =w0+wa(1 -a ). In the case of quintessence models, we find that over most of the w0, wa parameter space the CPL parametrization maps onto a fairly narrow form of behavior for the potential V (ϕ ), while a one-dimensional subset of parameter space, for which wa=κ (1 +w0) , with κ constant, corresponds to a wide range of functional forms for V (ϕ ). For barotropic models, we show that the functional dependence of the pressure on the density, up to a multiplicative constant, depends only on wi=wa+w0 and not on w0 and wa separately. Our results suggest that the CPL parametrization may not be optimal for testing either type of model.
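For reference, the CPL equation of state integrates in closed form: d(ln ρ)/d(ln a) = -3(1 + w(a)) gives ρ(a)/ρ0 = a^{-3(1+w0+wa)} exp[-3 wa (1-a)], and the early-time limit w(a→0) = w0 + wa depends only on wi = w0 + wa, in line with the barotropic result quoted above. A short sketch (parameter values are illustrative):

```python
import numpy as np

def w_cpl(a, w0=-1.0, wa=0.0):
    # CPL equation of state: w(a) = w0 + wa*(1 - a)
    return w0 + wa * (1.0 - a)

def rho_de(a, w0=-1.0, wa=0.0):
    # Dark-energy density relative to today, from integrating
    # d(ln rho)/d(ln a) = -3*(1 + w(a)) for the CPL form:
    #   rho/rho0 = a^{-3(1+w0+wa)} * exp(-3*wa*(1-a))
    return a**(-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

print(w_cpl(0.5, -1.0, 0.5))   # -0.75
print(rho_de(0.5, -0.9, 0.3))  # density at a = 1/2 for (w0, wa) = (-0.9, 0.3)
```

Note that for (w0, wa) = (-1, 0) the density is constant in a, recovering a cosmological constant.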
Comparison of computational methods to model DNA minor groove binders.
Srivastava, Hemant Kumar; Chourasia, Mukesh; Kumar, Devesh; Sastry, G Narahari
2011-03-28
There has been a profound interest in designing small molecules that interact in sequence-selective fashion with DNA minor grooves. However, most in silico approaches have not been parametrized for DNA ligand interaction. In this regard, a systematic computational analysis of 57 available PDB structures of noncovalent DNA minor groove binders has been undertaken. The study starts with a rigorous benchmarking of GOLD, GLIDE, CDOCKER, and AUTODOCK docking protocols followed by developing QSSR models and finally molecular dynamics simulations. In GOLD and GLIDE, the orientation of the best score pose is closer to the lowest rmsd pose, and the deviation in the conformation of various poses is also smaller compared to other docking protocols. Efficient QSSR models were developed with constitutional, topological, and quantum chemical descriptors on the basis of B3LYP/6-31G* optimized geometries, and with this ΔT(m) values of 46 ligands were predicted. Molecular dynamics simulations of the 14 DNA-ligand complexes with Amber 8.0 show that the complexes are stable in aqueous conditions and do not undergo noticeable fluctuations during the 5 ns production run, with respect to their initial placement in the minor groove region.
On l(1): Optimal decentralized performance
NASA Technical Reports Server (NTRS)
Sourlas, Dennis; Manousiouthakis, Vasilios
1993-01-01
In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l(1) optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computation. It is shown that finite dimensional optimization problems whose values are arbitrarily close to that of the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l(1) decentralized performance problem is presented. A global optimization approach to the solution of the infinite dimensional approximating problems is also discussed.
Moderate deviations-based importance sampling for stochastic recursive equations
Dupuis, Paul; Johnson, Dane
2017-11-17
Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviations scaling might be appropriate.
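The idea of a change of measure guided by an exponential tilt can be illustrated in the simplest possible setting: estimating a Gaussian tail probability by sampling from a shifted distribution and reweighting by the likelihood ratio. This one-dimensional sketch illustrates the general mechanism only; it is not the paper's subsolution-based scheme for recursive equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_prob_is(a=3.0, n=100_000):
    # Estimate P(X > a) for X ~ N(0, 1) by sampling from the
    # exponentially tilted measure N(a, 1) and reweighting each
    # sample by the likelihood ratio dP/dQ(y) = exp(-a*y + a*a/2).
    y = rng.normal(a, 1.0, n)            # draws under the tilted measure Q
    lr = np.exp(-a * y + 0.5 * a * a)    # Radon-Nikodym derivative dP/dQ
    return float(np.mean((y > a) * lr))

print(tail_prob_is())   # close to the exact value P(X > 3) ~ 1.35e-3
```

Naive Monte Carlo with the same sample budget would see only ~135 hits of the event; the tilted scheme makes the event typical and recovers a far lower-variance estimate, which is the payoff the subsolution construction generalizes.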
Moderate deviations-based importance sampling for stochastic recursive equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dupuis, Paul; Johnson, Dane
Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviations scaling might be appropriate.
Parametric Shape Optimization of Lens-Focused Piezoelectric Ultrasound Transducers.
Thomas, Gilles P L; Chapelon, Jean-Yves; Bera, Jean-Christophe; Lafon, Cyril
2018-05-01
Focused transducers composed of a flat piezoelectric ceramic coupled with an acoustic lens present an economical alternative to curved piezoelectric ceramics and are already in use in a variety of fields. Using a displacement/pressure (u/p) mixed finite element formulation combined with parametric level-set functions to implicitly define the boundaries between the materials and the fluid-structure interface, a method to optimize the shape of acoustic lenses made of either one or multiple materials is presented. With this method, two 400 kHz focused transducers using acoustic lenses were designed and built with different rapid prototyping methods, one of them made with a combination of two materials, and experimental measurements of the pressure field around the focal point are in good agreement with the presented model.
Shape-driven 3D segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2006-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure, so that the prior is naturally included in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of the brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows that our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.
NASA Astrophysics Data System (ADS)
Singh, Thingujam Jackson; Samanta, Sutanu
2016-09-01
In the present work, an attempt was made at the parametric optimization of drilling a bamboo/Kevlar K29 fiber reinforced sandwich composite, to minimize the delamination that occurs during the drilling process and to maximize the tensile strength of the drilled composite. The spindle speed and the feed rate of the drilling operation are taken as the input parameters. The influence of these parameters on the delamination and tensile strength of the drilled composite is studied and analysed using Taguchi grey relational analysis (GRA) and ANOVA. The results show that both response parameters, i.e. delamination and tensile strength, are more influenced by feed rate than by spindle speed, with percentage contributions of 13.88% for spindle speed and 81.74% for feed rate.
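As a sketch of the first step behind a Taguchi analysis like the one above, each parameter setting can be scored with signal-to-noise (S/N) ratios: "smaller-the-better" for delamination and "larger-the-better" for tensile strength. The data below are hypothetical, not taken from the study:

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi S/N ratio for a 'smaller-the-better' response
    (e.g. delamination factor): higher S/N means less delamination."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_better(y):
    """Taguchi S/N ratio for a 'larger-the-better' response
    (e.g. tensile strength of the drilled laminate)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical delamination factors for two feed-rate settings
low_feed = [1.05, 1.07, 1.06]
high_feed = [1.21, 1.25, 1.23]
```

The setting with the higher S/N ratio is preferred; grey relational analysis then combines the normalized responses into a single grade per experimental run.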
Variable selection for distribution-free models for longitudinal zero-inflated count responses.
Chen, Tian; Wu, Pan; Tang, Wan; Zhang, Hui; Feng, Changyong; Kowalski, Jeanne; Tu, Xin M
2016-07-20
Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcome research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing these important and timely issues in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.
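The SCAD penalty the abstract builds on has a closed form (Fan and Li, 2001); a minimal sketch, with the conventional shape parameter a = 3.7:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty of Fan & Li (2001).
    Linear near zero (like the lasso), then tapering, then constant, so large
    coefficients are not over-shrunk. a = 3.7 is the conventional default."""
    t = np.abs(np.asarray(beta, dtype=float))
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)),
            lam ** 2 * (a + 1) / 2,
        ),
    )
```

The three branches join continuously at |t| = λ and |t| = aλ, which is what makes the resulting estimates continuous in the data, unlike hard thresholding.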
Interfacing a quantum dot with a spontaneous parametric down-conversion source
NASA Astrophysics Data System (ADS)
Huber, Tobias; Prilmüller, Maximilian; Sehner, Michael; Solomon, Glenn S.; Predojević, Ana; Weihs, Gregor
2017-09-01
Quantum networks require interfacing stationary and flying qubits. These flying qubits are usually nonclassical states of light. Here we consider two of the leading source technologies for nonclassical light, spontaneous parametric down-conversion and single semiconductor quantum dots. Down-conversion delivers high-grade entangled photon pairs, whereas quantum dots excel at producing single photons. We report on an experiment that joins these two technologies and investigates the conditions under which optimal interference between these dissimilar light sources may be achieved.
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating and for multidisciplinary design optimization.
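A response surface model of the kind described reduces to fitting a low-order polynomial to a handful of simulator runs and then querying the polynomial instead of the simulator. A minimal sketch with a hypothetical two-variable design (the real RMAT inputs and outputs are not reproduced here):

```python
import numpy as np

# Fit a second-order response surface
#   y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to a set of (hypothetical) simulator runs, then use it as a fast surrogate.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))          # design points (e.g. size, tech level)
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 0] * X[:, 1]  # stand-in output

A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x1, x2):
    """Cheap polynomial stand-in for a full simulator run."""
    return coef @ np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
```

Each surrogate evaluation is a dot product, which is what makes response surfaces attractive inside life-cycle cost loops and design optimization.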
Parametric amplification in quasi-PT symmetric coupled waveguide structures
NASA Astrophysics Data System (ADS)
Zhong, Q.; Ahmed, A.; Dadap, J. I.; Osgood, R. M., Jr.; El-Ganainy, R.
2016-12-01
The concept of non-Hermitian parametric amplification was recently proposed as a means to achieve an efficient energy conversion throughout the process of nonlinear three wave mixing in the absence of phase matching. Here we investigate this effect in a waveguide coupler arrangement whose characteristics are tailored to introduce passive PT symmetry only for the idler component. By means of analytical solutions and numerical analysis, we demonstrate the utility of these novel schemes and obtain the optimal design conditions for these devices.
A Lunar Surface Operations Simulator
NASA Technical Reports Server (NTRS)
Nayar, H.; Balaram, J.; Cameron, J.; Jain, A.; Lim, C.; Mukherjee, R.; Peters, S.; Pomerantz, M.; Reder, L.; Shakkottai, P.;
2008-01-01
The Lunar Surface Operations Simulator (LSOS) is being developed to support planning and design of space missions to return astronauts to the moon. Vehicles, habitats, dynamic and physical processes and related environment systems are modeled and simulated in LSOS to assist in the visualization and design optimization of systems for lunar surface operations. A parametric analysis tool and a data browser were also implemented to provide an intuitive interface to run multiple simulations and review their results. The simulator and parametric analysis capability are described in this paper.
Dynamic single sideband modulation for realizing parametric loudspeaker
NASA Astrophysics Data System (ADS)
Sakai, Shinichi; Kamakura, Tomoo
2008-06-01
A parametric loudspeaker, which exhibits remarkably narrow directivity compared with a conventional loudspeaker, is newly produced and examined. To drive the loudspeaker optimally, we digitally prototyped a single-sideband modulator based on the Weaver method with appropriate signal processing. The processing techniques are to change the carrier amplitude dynamically depending on the envelope of the audio signal, and then to apply a square-root or fourth-root operation to the carrier amplitude to improve the input-output acoustic linearity. The usefulness of the present modulation scheme has been verified experimentally.
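A minimal sketch of the signal chain described above: single-sideband modulation of an audio signal onto an ultrasonic carrier, with the carrier level following the audio envelope through a square root. The phasing (analytic-signal) method is used here in place of the paper's Weaver modulator, which produces an equivalent SSB signal; all signal parameters are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

fs = 192_000                                   # sample rate, high enough for the carrier
fc = 40_000.0                                  # ultrasonic carrier frequency, Hz
t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * 1000 * t)           # 1 kHz test tone

# Single-sideband modulation via the analytic signal (phasing method).
analytic = hilbert(audio)
ssb = np.real(analytic * np.exp(2j * np.pi * fc * t))

# Dynamic carrier level: track the audio envelope and take its square root,
# aiming to linearize the parametric array's quadratic self-demodulation.
envelope = np.abs(analytic)
carrier = np.sqrt(1.0 + envelope) * np.cos(2 * np.pi * fc * t)
drive = carrier + ssb                          # signal sent to the ultrasonic transducer
```

The dynamic carrier saves power when the audio is quiet, which is one of the practical motivations for the scheme.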
Parametric geometric model and shape optimization of an underwater glider with blended-wing-body
NASA Astrophysics Data System (ADS)
Sun, Chunya; Song, Baowei; Wang, Peng
2015-11-01
The underwater glider, a new kind of autonomous underwater vehicle, has many merits such as long range, extended duration and low cost. The shape of an underwater glider is an important factor in determining its hydrodynamic efficiency. In this paper, a high lift-to-drag ratio configuration, the Blended-Wing-Body (BWB), is used to design a small civilian underwater glider. In the parametric geometric model of the BWB underwater glider, the planform is defined with a Bezier curve and a linear line, and the section is defined with the symmetrical airfoil NACA 0012. Computational investigations are carried out to study the hydrodynamic performance of the glider using the commercial Computational Fluid Dynamics (CFD) code Fluent. The Kriging-based genetic algorithm called Efficient Global Optimization (EGO) is applied to the hydrodynamic design optimization. The result demonstrates that the BWB underwater glider has excellent hydrodynamic performance, and that the lift-to-drag ratio of the initial design is increased by 7% in the EGO process.
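The planform parametrization mentioned above rests on Bezier curves; a minimal de Casteljau evaluator, with a hypothetical control polygon standing in for the glider's actual leading edge:

```python
import numpy as np

def bezier(control_points, n=100):
    """Evaluate a Bezier curve by de Casteljau's algorithm at n parameter values.
    control_points: (k, 2) array of x-y control points; returns (n, 2) points."""
    pts = np.asarray(control_points, dtype=float)
    out = np.empty((n, 2))
    for i, t in enumerate(np.linspace(0.0, 1.0, n)):
        p = pts.copy()
        while len(p) > 1:                      # repeated linear interpolation
            p = (1 - t) * p[:-1] + t * p[1:]
        out[i] = p[0]
    return out

# Hypothetical control polygon for a BWB-like leading edge (metres)
leading_edge = bezier([[0.0, 0.0], [0.3, 0.4], [1.2, 0.5]])
```

Moving a single control point reshapes the whole curve smoothly, which is why a few Bezier control coordinates make convenient design variables for shape optimizers such as EGO.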
Nadal-Serrano, Jose M; Nadal-Serrano, Adolfo; Lopez-Vallejo, Marisa
2017-01-01
This paper focuses on the application of rapid prototyping techniques using additive manufacturing in combination with parametric design to create low-cost, yet accurate and reliable instruments. The methodology followed makes it possible to make instruments with a degree of customization until now available only to a narrow audience, helping democratize science. The proposal discusses a holistic design-for-manufacturing approach that comprises advanced modeling techniques, open-source design strategies, and an optimization algorithm using free parametric software for both professional and educational purposes. The design and fabrication of an instrument for scattering measurement is used as a case study to present the previous concepts.
Parametric Model of an Aerospike Rocket Engine
NASA Technical Reports Server (NTRS)
Korte, J. J.
2000-01-01
A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First, an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.
Parametric study of a canard-configured transport using conceptual design optimization
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.
1985-01-01
Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.
Revisiting dark energy models using differential ages of galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rani, Nisha; Mahajan, Shobhit; Mukherjee, Amitabha
In this work, we use a test based on the differential ages of galaxies for distinguishing dark energy models. As proposed by Jimenez and Loeb in [1], relative ages of galaxies can be used to put constraints on various cosmological parameters. In the same vein, we reconstruct H0 dt/dz and its derivative (H0 d²t/dz²) using a model-independent technique called non-parametric smoothing. Basically, dt/dz is the change in the age of the object as a function of redshift, which is directly linked with the Hubble parameter. Hence, for the reconstruction of this quantity, we use the most recent H(z) data. Further, we calculate H0 dt/dz and its derivative for several models: Phantom, Einstein-de Sitter (EdS), ΛCDM, and the Chevallier-Polarski-Linder (CPL), Jassal-Bagla-Padmanabhan (JBP) and Feng-Shen-Li-Li (FSLL) parametrizations. We check the consistency of these models with the results of the reconstruction obtained in a model-independent way from the data. It is observed that H0 dt/dz as a tool is not able to distinguish between the ΛCDM, CPL, JBP and FSLL parametrizations but, as expected, the EdS and Phantom models show noticeable deviation from the reconstructed results. Further, the derivative of H0 dt/dz for the various dark energy models is more sensitive at low redshift. It is found that the FSLL model is not consistent with the reconstructed results; however, the ΛCDM model is in concordance with the 3σ region of the reconstruction at redshift z ≥ 0.3.
Schörgendorfer, Angela; Branscum, Adam J; Hanson, Timothy E
2013-06-01
Logistic regression is a popular tool for risk analysis in medical and population health science. With continuous response data, it is common to create a dichotomous outcome for logistic regression analysis by specifying a threshold for positivity. Fitting a linear regression to the nondichotomized response variable assuming a logistic sampling model for the data has been empirically shown to yield more efficient estimates of odds ratios than ordinary logistic regression of the dichotomized endpoint. We illustrate that risk inference is not robust to departures from the parametric logistic distribution. Moreover, the model assumption of proportional odds is generally not satisfied when the condition of a logistic distribution for the data is violated, leading to biased inference from a parametric logistic analysis. We develop novel Bayesian semiparametric methodology for testing goodness of fit of parametric logistic regression with continuous measurement data. The testing procedures hold for any cutoff threshold and our approach simultaneously provides the ability to perform semiparametric risk estimation. Bayes factors are calculated using the Savage-Dickey ratio for testing the null hypothesis of logistic regression versus a semiparametric generalization. We propose a fully Bayesian and a computationally efficient empirical Bayesian approach to testing, and we present methods for semiparametric estimation of risks, relative risks, and odds ratios when parametric logistic regression fails. Theoretical results establish the consistency of the empirical Bayes test. Results from simulated data show that the proposed approach provides accurate inference irrespective of whether parametric assumptions hold or not. Evaluation of risk factors for obesity shows that different inferences are derived from an analysis of a real data set when deviations from a logistic distribution are permissible in a flexible semiparametric framework. 
© 2013, The International Biometric Society.
Nieuwenhuys, Angela; Papageorgiou, Eirini; Desloovere, Kaat; Molenaers, Guy; De Laet, Tinne
2017-01-01
Experts recently identified 49 joint motion patterns in children with cerebral palsy during a Delphi consensus study. Pattern definitions were therefore the result of subjective expert opinion. The present study aims to provide objective, quantitative data supporting the identification of these consensus-based patterns. To do so, statistical parametric mapping was used to compare the mean kinematic waveforms of 154 trials of typically developing children (n = 56) to the mean kinematic waveforms of 1719 trials of children with cerebral palsy (n = 356), which were classified following the classification rules of the Delphi study. Three hypotheses stated that: (a) joint motion patterns with 'no or minor gait deviations' (n = 11 patterns) do not differ significantly from the gait pattern of typically developing children; (b) all other pathological joint motion patterns (n = 38 patterns) differ from typically developing gait and the locations of difference within the gait cycle, highlighted by statistical parametric mapping, concur with the consensus-based classification rules. (c) all joint motion patterns at the level of each joint (n = 49 patterns) differ from each other during at least one phase of the gait cycle. Results showed that: (a) ten patterns with 'no or minor gait deviations' differed somewhat unexpectedly from typically developing gait, but these differences were generally small (≤3°); (b) all other joint motion patterns (n = 38) differed from typically developing gait and the significant locations within the gait cycle that were indicated by the statistical analyses, coincided well with the classification rules; (c) joint motion patterns at the level of each joint significantly differed from each other, apart from two sagittal plane pelvic patterns. In addition to these results, for several joints, statistical analyses indicated other significant areas during the gait cycle that were not included in the pattern definitions of the consensus study. 
Based on these findings, suggestions to improve pattern definitions were made.
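Statistical parametric mapping for one-dimensional gait waveforms is built on a pointwise test statistic whose significance threshold is then corrected for waveform smoothness via random field theory. The sketch below shows only the pointwise stage, on synthetic stand-in data (group sizes mirror the study, the waveform values do not):

```python
import numpy as np
from scipy import stats

def pointwise_t(group_a, group_b):
    """Two-sample t-statistic at each point of the gait cycle.
    group_a: (n_a, 101) kinematic waveforms sampled at 0-100% of the gait
    cycle; group_b likewise. Full SPM additionally corrects the significance
    threshold for waveform smoothness (random field theory)."""
    t, p = stats.ttest_ind(group_a, group_b, axis=0)
    return t, p

rng = np.random.default_rng(1)
td = rng.normal(10, 2, size=(56, 101))   # typically developing (synthetic)
cp = rng.normal(13, 2, size=(356, 101))  # a cerebral palsy pattern (synthetic)
t_curve, p_curve = pointwise_t(td, cp)
```

The regions of the gait cycle where the corrected statistic crosses its threshold are the "locations of difference" the abstract compares against the consensus classification rules.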
QCD Axion Dark Matter with a Small Decay Constant
NASA Astrophysics Data System (ADS)
Co, Raymond T.; Hall, Lawrence J.; Harigaya, Keisuke
2018-05-01
The QCD axion is a good dark matter candidate. The observed dark matter abundance can arise from misalignment or defect mechanisms, which generically require an axion decay constant f_a ~ O(10^11) GeV (or higher). We introduce a new cosmological origin for axion dark matter, parametric resonance from oscillations of the Peccei-Quinn symmetry breaking field, which requires f_a ~ 10^8-10^11 GeV. The axions may be warm enough to give deviations from cold dark matter in large scale structure.
Quantification of soil water retention parameters using multi-section TDR-waveform analysis
NASA Astrophysics Data System (ADS)
Baviskar, S. M.; Heimovaara, T. J.
2017-06-01
Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods for determining the water content of soil samples. In this study, we present an approach to estimate the water retention parameters of a sample which is initially saturated and then subjected to incremental decreases in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample, at daily intervals, under assumed hydrostatic conditions. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained by volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function with the apparent dielectric permittivity is fitted to a TDR wave propagation forward model, and the unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function, and the weight of water inside the sample at the first and final boundary heads is fitted with the corresponding weights calculated from the function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for tall samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
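The abstract does not name its unsaturated parametric function; the van Genuchten (1980) retention model is the common choice for such multi-step outflow analyses and serves here as a hedged illustration (parameter values are hypothetical, sand-like numbers):

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of capillary pressure head h (cm).
    van Genuchten (1980) retention model: theta_r/theta_s are residual and
    saturated water contents, alpha (1/cm) and n are shape parameters."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Hypothetical uniform-sand parameters
heads = np.linspace(0, 200, 5)                      # capillary heads, cm
theta = van_genuchten(heads, 0.05, 0.38, 0.035, 3.2)
```

In the described workflow, the hydrostatic head profile along the sample fixes h at each TDR probe depth, so this function interpolates the water content along the probe and ties the waveform, outflow, and weight objectives to one parameter set.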
Borai, Anwar; Ichihara, Kiyoshi; Al Masaud, Abdulaziz; Tamimi, Waleed; Bahijri, Suhad; Armbuster, David; Bawazeer, Ali; Nawajha, Mustafa; Otaibi, Nawaf; Khalil, Haitham; Kawano, Reo; Kaddam, Ibrahim; Abdelaal, Mohamed
2016-05-01
This study is part of the IFCC global study to derive reference intervals (RIs) for 28 chemistry analytes in Saudis. Healthy individuals (n=826) aged ≥18 years were recruited using the global study protocol. All specimens were measured using an Architect analyzer. RIs were derived by both parametric and non-parametric methods for comparative purposes. The need for secondary exclusion of reference values based on the latent abnormal values exclusion (LAVE) method was examined. The magnitude of variation attributable to gender, age and region was calculated as the standard deviation ratio (SDR). Sources of variation (age, BMI, physical exercise and smoking levels) were investigated using multiple regression analysis. SDRs for gender, age and regional differences were significant for 14, 8 and 2 analytes, respectively. BMI-related changes in test results were conspicuous for CRP. For some metabolism-related parameters the RIs derived by the non-parametric method were wider than those from the parametric method, and RIs derived using the LAVE method differed significantly from those derived without it. RIs were derived with and without gender partition (BMI, drugs and supplements were considered). RIs applicable to Saudis were established for the majority of chemistry analytes, whereas gender, regional and age partitioning was required for some analytes. The elevated upper limits of metabolic analytes reflect the high prevalence of metabolic syndrome in the Saudi population.
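The parametric versus non-parametric derivation compared above can be sketched in a few lines: direct percentiles of the observed reference values versus mean ± 1.96 SD under a Gaussian assumption (the data below are synthetic, not the study's):

```python
import numpy as np

def reference_interval(values, parametric=False):
    """Central 95% reference interval from a set of reference values.
    Non-parametric: 2.5th and 97.5th percentiles of the data.
    Parametric: mean +/- 1.96 SD, which assumes a Gaussian distribution
    (real protocols transform skewed data toward normality first)."""
    x = np.asarray(values, dtype=float)
    if parametric:
        mu, sd = x.mean(), x.std(ddof=1)
        return mu - 1.96 * sd, mu + 1.96 * sd
    return tuple(np.percentile(x, [2.5, 97.5]))

rng = np.random.default_rng(42)
glucose = rng.normal(5.0, 0.5, size=826)   # hypothetical analyte values, mmol/L
lo, hi = reference_interval(glucose)
```

For right-skewed metabolic analytes the two methods diverge, the non-parametric upper limit drifting higher, which matches the abstract's observation that non-parametric RIs were wider for some metabolic parameters.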
Parameterization models for pesticide exposure via crop consumption.
Fantke, Peter; Wieland, Peter; Juraske, Ronnie; Shaddick, Gavin; Itoiz, Eva Sevigné; Friedrich, Rainer; Jolliet, Olivier
2012-12-04
An approach for estimating human exposure to pesticides via consumption of six important food crops is presented that can be used to extend multimedia models applied in health risk and life cycle impact assessment. We first assessed the variation of model output (pesticide residues per kg applied) as a function of model input variables (substance, crop, and environmental properties) including their possible correlations using matrix algebra. We identified five key parameters responsible for between 80% and 93% of the variation in pesticide residues, namely time between substance application and crop harvest, degradation half-lives in crops and on crop surfaces, overall residence times in soil, and substance molecular weight. Partition coefficients also play an important role for fruit trees and tomato (Kow), potato (Koc), and lettuce (Kaw, Kow). Focusing on these parameters, we develop crop-specific models by parametrizing a complex fate and exposure assessment framework. The parametric models thereby reflect the framework's physical and chemical mechanisms and predict pesticide residues in harvest using linear combinations of crop, crop surface, and soil compartments. Parametric model results correspond well with results from the complex framework for 1540 substance-crop combinations with total deviations between a factor 4 (potato) and a factor 66 (lettuce). Predicted residues also correspond well with experimental data previously used to evaluate the complex framework. Pesticide mass in harvest can finally be combined with reduction factors accounting for food processing to estimate human exposure from crop consumption. All parametric models can be easily implemented into existing assessment frameworks.
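Among the five key parameters identified, time to harvest and degradation half-life act through first-order dissipation; a one-compartment sketch (the published models combine crop, crop-surface, and soil compartments, and all numbers here are hypothetical):

```python
import math

def residue_at_harvest(applied_mg_per_kg, half_life_days, days_to_harvest):
    """First-order dissipation of a pesticide between application and harvest.
    A one-compartment stand-in for the multi-compartment parametric models,
    showing the dominant driver they identify: time to harvest versus
    degradation half-life."""
    k = math.log(2) / half_life_days          # first-order rate constant, 1/day
    return applied_mg_per_kg * math.exp(-k * days_to_harvest)

# Hypothetical: 1 mg/kg applied, 10-day half-life, harvest after 30 days
residue = residue_at_harvest(1.0, 10.0, 30.0)  # three half-lives
```

Because residues fall exponentially in the time between application and harvest, that interval dominates the output variation, consistent with the sensitivity ranking reported above.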
A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.
1998-01-01
This study investigates the use of H2, μ-synthesis, and mixed H2/μ methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, μ-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and parametric uncertainty. Finally, a set of mixed H2/μ compensators are designed which are optimized for a fixed compensator dimension. These mixed-norm designs recover the H2 design performance levels while providing the same levels of robust stability as the μ designs. It is shown that designing with the mixed-norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.
NASA Astrophysics Data System (ADS)
Enescu (Balaş), M. L.; Alexandru, C.
2016-08-01
The paper deals with the optimal design of the control system for a 6-DOF robot used in thin-layer deposition. The optimization is based on a parametric technique: the design objective is modelled as a numerical function, and the optimal values of the design variables are then established so as to minimize the objective function. The robotic system is a mechatronic product, which integrates the mechanical device and the controlled operating device. The mechanical device of the robot was designed in the CAD (Computer Aided Design) software CATIA, the 3D model then being transferred to the MBS (Multi-Body Systems) environment ADAMS/View. The control system was developed in the concurrent engineering concept, through integration with the MBS mechanical model, using the DFC (Design for Control) software solution EASY5. The necessary angular motions in the six joints of the robot, in order to obtain the imposed trajectory of the end-effector, were established by performing an inverse kinematic analysis. The positioning error in each joint of the robot is used as the design objective, the optimization goal being to minimize its root mean square during simulation, which is a measure of the magnitude of the positioning error.
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
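The core of any projection-based pMOR scheme like the one described is a Galerkin projection of the full operator onto a low-dimensional basis; a toy sketch with a snapshot basis standing in for the paper's adaptively built one:

```python
import numpy as np

# Projection-based model order reduction: a full system K(p) x = f is projected
# onto a low-dimensional basis V, so each parameter/frequency evaluation solves
# an r x r instead of an N x N system. Here V comes from a few full solves
# ("snapshots"); the paper instead grows the basis adaptively during
# optimization, monitored by an error indicator.
rng = np.random.default_rng(0)
N, r = 500, 3
K0 = np.diag(np.arange(1.0, N + 1))            # toy stiffness-like operator
f = rng.normal(size=N)

snapshots = np.column_stack([np.linalg.solve(K0 + p * np.eye(N), f)
                             for p in (0.0, 5.0, 10.0)])
V, _ = np.linalg.qr(snapshots)                 # orthonormal reduced basis (N x r)

def solve_reduced(p):
    """Galerkin-projected solve: r x r system, lifted back to full space."""
    Kr = V.T @ (K0 + p * np.eye(N)) @ V
    return V @ np.linalg.solve(Kr, V.T @ f)

x_full = np.linalg.solve(K0 + 2.5 * np.eye(N), f)
x_red = solve_reduced(2.5)
```

If the true solution lies in the span of V, the Galerkin solve reproduces it exactly; between snapshots the reduced model only approximates, which is why an error indicator is needed to decide when to enrich the basis.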
Belief Propagation Algorithm for Portfolio Optimization Problems.
Shinzato, Takashi; Yasuda, Muneki
2015-01-01
The typical behavior of optimal solutions to portfolio optimization problems with absolute deviation and expected shortfall models using replica analysis was pioneeringly estimated by S. Ciliberti et al. [Eur. Phys. B. 57, 175 (2007)]; however, they have not yet developed an approximate derivation method for finding the optimal portfolio with respect to a given return set. In this study, an approximation algorithm based on belief propagation for the portfolio optimization problem is presented using the Bethe free energy formalism, and the consistency of the numerical experimental results of the proposed algorithm with those of replica analysis is confirmed. Furthermore, the conjecture of H. Konno and H. Yamazaki, that the optimal solutions with the absolute deviation model and with the mean-variance model have the same typical behavior, is verified using replica analysis and the belief propagation algorithm.
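The Konno-Yamazaki absolute deviation model discussed above can be solved as a linear program; a minimal sketch using scipy (the return scenarios are synthetic):

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(returns):
    """Minimum mean-absolute-deviation portfolio (Konno-Yamazaki model) with
    full investment and no short selling, posed as a linear program.
    returns: (T, n) matrix of historical return scenarios for n assets."""
    T, n = returns.shape
    dev = returns - returns.mean(axis=0)       # deviations from mean returns
    # Variables: [w_1..w_n, u_1..u_T]; minimize (1/T) * sum(u_t)
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    # u_t >= |dev_t . w|  <=>  +-(dev_t . w) - u_t <= 0
    A_ub = np.vstack([np.hstack([dev, -np.eye(T)]),
                      np.hstack([-dev, -np.eye(T)])])
    b_ub = np.zeros(2 * T)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, T))])   # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + T))
    return res.x[:n]

rng = np.random.default_rng(0)
w = mad_portfolio(rng.normal(0.01, 0.05, size=(60, 4)))
```

The auxiliary variables u_t absorb the absolute values, which is exactly the linearization that makes the absolute deviation model an LP rather than a quadratic program like mean-variance, consistent with the Konno-Yamazaki equivalence conjecture the paper verifies.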
New observational constraints on f(R) gravity from cosmic chronometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nunes, Rafael C.; Pan, Supriya; Saridakis, Emmanuel N.
We use the recently released cosmic chronometer data and the latest measured value of the local Hubble parameter, combined with the latest joint light curves of Supernovae Type Ia, and Baryon Acoustic Oscillation distance measurements, in order to impose constraints on the viable and most-used f(R) gravity models. We consider four f(R) models, namely the Hu-Sawicki, the Starobinsky, the Tsujikawa, and the exponential one, and we parametrize them by introducing a distortion parameter b that quantifies the deviation from ΛCDM cosmology. Our analysis reveals that a small but non-zero deviation from ΛCDM cosmology is slightly favored, with the corresponding fittings exhibiting very efficient AIC and BIC Information Criteria values. Clearly, f(R) gravity is consistent with observations, and it can serve as a candidate for modified gravity.
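The model ranking above rests on the AIC and BIC information criteria; a minimal sketch of those criteria for chi-square fits follows. The chi-square values and parameter counts are invented for illustration, not taken from the paper.

```python
import math

def aic(chi2_min, k):
    # Akaike Information Criterion for a best-fit chi^2 with k free parameters
    return chi2_min + 2.0 * k

def bic(chi2_min, k, n):
    # Bayesian Information Criterion; n = number of data points
    return chi2_min + k * math.log(n)

# Hypothetical comparison: LCDM with k = 6 parameters vs. an f(R) model
# with one extra distortion parameter b; chi^2 values are invented.
aic_lcdm = aic(700.2, 6)
aic_fr = aic(698.9, 7)
delta_aic = aic_fr - aic_lcdm  # small |delta| -> models statistically comparable
```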
Josephson parametric converter saturation and higher order effects
NASA Astrophysics Data System (ADS)
Liu, G.; Chien, T.-C.; Cao, X.; Lanes, O.; Alpern, E.; Pekker, D.; Hatridge, M.
2017-11-01
Microwave parametric amplifiers based on Josephson junctions have become indispensable components of many quantum information experiments. One key limitation which has not been well predicted by theory is the gain saturation behavior which limits the amplifier's ability to process large amplitude signals. The typical explanation for this behavior in phase-preserving amplifiers based on three-wave mixing, such as the Josephson Parametric Converter, is pump depletion, in which the consumption of pump photons to produce amplification results in a reduction in gain. However, in this work, we present experimental data and theoretical calculations showing that the fourth-order Kerr nonlinearities inherent in Josephson junctions are the dominant factor. The Kerr-based theory has the unusual property of causing saturation to both lower and higher gains, depending on bias conditions. This work presents an efficient methodology for optimizing device performance in the presence of Kerr nonlinearities while retaining device tunability and points to the necessity of controlling higher-order Hamiltonian terms to make further improvements in parametric devices.
Parametrization and Benchmark of Long-Range Corrected DFTB2 for Organic Molecules
Vuong, Van Quan; Akkarapattiakal Kuriappan, Jissy; Kubillus, Maximilian; ...
2017-12-12
In this paper, we present the parametrization and benchmark of long-range corrected second-order density functional tight binding (DFTB), LC-DFTB2, for organic and biological molecules. The LC-DFTB2 model not only improves fundamental orbital energy gaps but also ameliorates the DFT self-interaction error and overpolarization problem, and further improves charge-transfer excited states significantly. Electronic parameters for the construction of the DFTB2 Hamiltonian as well as repulsive potentials were optimized for molecules containing C, H, N, and O chemical elements. We use a semiautomatic parametrization scheme based on a genetic algorithm. With the new parameters, LC-DFTB2 describes geometries and vibrational frequencies of organic molecules about as well as third-order DFTB3/3OB, the de facto standard parametrization based on a GGA functional. Finally, LC-DFTB2 also performs well for atomization and reaction energies, although slightly less satisfactorily than DFTB3/3OB.
Changing space and sound: Parametric design and variable acoustics
NASA Astrophysics Data System (ADS)
Norton, Christopher William
This thesis examines the potential for parametric design software to create performance-based designs using acoustic metrics as the design criteria. A former soundstage at the University of Southern California used by the Thornton School of Music serves as a case study for a multiuse space for orchestral, percussion, master class and recital use. The criteria for each programmatic use include reverberation time, bass ratio, and the early energy ratios of the clarity index and objective support. Using a panelized ceiling as a design element to vary the parameters of volume, panel orientation and type of absorptive material, the relationships between these parameters and the design criteria are explored. These relationships and the subsequently derived equations are applied in the Grasshopper parametric modeling software for Rhino 3D (a NURBS modeling software). Using the target reverberation time and bass ratio for each programmatic use as input to the parametric model, Grasshopper's evolutionary optimization solver, Galapagos, is run to identify the optimum ceiling geometry and material distribution.
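The reverberation-time criterion used above is commonly estimated with Sabine's formula; a minimal sketch follows. The hall volume, surface areas, and absorption coefficients are invented, and this is the textbook formula, not the thesis's derived equations.

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's formula RT60 = 0.161 * V / A, where A is the total
    absorption in metric sabins: sum of surface area times its
    absorption coefficient."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical hall: 4000 m^3; panel areas (m^2) and coefficients invented
rt = sabine_rt60(4000.0, [(300.0, 0.7), (900.0, 0.1)])
```

A bass ratio in this spirit would then be the ratio of low-frequency to mid-frequency RT60 values computed with frequency-dependent coefficients.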
Multidisciplinary optimization in aircraft design using analytic technology models
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1991-01-01
An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.
Generation of Parametric Equivalent-Area Targets for Design of Low-Boom Supersonic Concepts
NASA Technical Reports Server (NTRS)
Li, Wu; Shields, Elwood
2011-01-01
A tool with an Excel visual interface is developed to generate equivalent-area (Ae) targets that satisfy the volume constraints for a low-boom supersonic configuration. The new parametric Ae target explorer allows users to interactively study the tradeoffs between the aircraft volume constraints and the low-boom characteristics (e.g., loudness) of the ground signature. Moreover, numerical optimization can be used to generate the optimal Ae target for given Ae volume constraints. A case study demonstrates how a generated low-boom Ae target can be matched by a supersonic configuration that includes a fuselage, wing, nacelle, pylon, aft pod, horizontal tail, and vertical tail. The low-boom configuration is verified by sonic-boom analysis with an off-body pressure distribution at three body lengths below the configuration.
Shape-Driven 3D Segmentation Using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875
Investigation of Parametric Influence on the Properties of Al6061-SiCp Composite
NASA Astrophysics Data System (ADS)
Adebisi, A. A.; Maleque, M. A.; Bello, K. A.
2017-03-01
The influence of process parameters in stir casting plays a major role in the development of aluminium reinforced silicon carbide particle (Al-SiCp) composites. This study investigates the influence of process parameters on the wear and density properties of Al-SiCp composite produced by the stir casting technique. Experimental data are generated based on a four-factor-five-level central composite design of response surface methodology. Analysis of variance is utilized to confirm the adequacy and validity of the developed models considering the significant model terms. Optimization of the process parameters adequately predicts the Al-SiCp composite properties, with stirring speed as the most influential factor. The aim of the optimization is to minimize wear and maximize density. The multiple objective optimization (MOO) achieved an optimal value of 14 wt% reinforcement fraction (RF), 460 rpm stirring speed (SS), 820 °C processing temperature (PTemp) and 150 s processing time (PT). At the optimum parametric combination, wear mass loss reached a minimum of 1 × 10^-3 g and density a maximum of 2.780 g/cm3, with a confidence and desirability level of 95.5%.
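The four-factor five-level central composite design mentioned above has a standard coded structure that can be generated in a few lines; this is the generic rotatable CCD layout, a sketch rather than the study's actual run table.

```python
import itertools

def central_composite_design(k, alpha=None, n_center=1):
    """Coded points of a rotatable central composite design for k factors:
    2^k factorial corners, 2k axial (star) points at +/-alpha, and center
    runs. The rotatability criterion alpha = (2^k)^(1/4) gives five coded
    levels per factor; for k = 4 that is alpha = 2."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = list(itertools.product((-1.0, 1.0), repeat=k))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(tuple(pt))
    centers = [tuple([0.0] * k)] * n_center
    return corners + axial + centers

# Four factors (e.g., RF, SS, PTemp, PT) with five invented center runs
design = central_composite_design(4, n_center=5)
```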
System and method of vehicle operating condition management
Sujan, Vivek A.; Vajapeyazula, Phani; Follen, Kenneth; Wu, An; Moffett, Barty L.
2015-10-20
A vehicle operating condition profile can be determined over a given route while also considering imposed constraints such as deviation from time targets, deviation from maximum governed speed limits, etc. Given current vehicle speed, engine state and transmission state, the present disclosure optimally manages the engine map and transmission to provide a recommended vehicle operating condition that optimizes fuel consumption in transitioning from one vehicle state to a target state. Exemplary embodiments provide for offline and online optimizations relative to fuel consumption. The benefit is increased freight efficiency in transporting cargo from source to destination by minimizing fuel consumption and maintaining drivability.
Parametric amplification of a superconducting plasma wave
Rajasekaran, S.; Casandruc, E.; Laplace, Y.; ...
2016-07-11
Many applications in photonics require all-optical manipulation of plasma waves, which can concentrate electromagnetic energy on sub-wavelength length scales. This is difficult in metallic plasmas because of their small optical nonlinearities. Some layered superconductors support Josephson plasma waves, involving oscillatory tunnelling of the superfluid between capacitively coupled planes. Josephson plasma waves are also highly nonlinear, and exhibit striking phenomena such as cooperative emission of coherent terahertz radiation, superconductor-metal oscillations and soliton formation. In this paper, we show that terahertz Josephson plasma waves can be parametrically amplified through the cubic tunnelling nonlinearity in a cuprate superconductor. Finally, parametric amplification is sensitive to the relative phase between pump and seed waves, and may be optimized to achieve squeezing of the order-parameter phase fluctuations or terahertz single-photon devices.
NASA Technical Reports Server (NTRS)
Dash, S.; Delguidice, P. D.
1975-01-01
A parametric numerical procedure permitting the rapid determination of the performance of a class of scramjet nozzle configurations is presented. The geometric complexity of these configurations ruled out attempts to employ conventional nozzle design procedures. The numerical program developed permitted the parametric variation of cowl length, turning angles on the cowl and vehicle undersurface and lateral expansion, and was subject to fixed constraints such as the vehicle length and nozzle exit height. The program required uniform initial conditions at the burner exit station and yielded the location of all predominant wave zones, accounting for lateral expansion effects. In addition, the program yielded the detailed pressure distribution on the cowl, vehicle undersurface and fences, if any, and calculated the nozzle thrust, lift and pitching moments.
Parametric, nonparametric and parametric modelling of a chaotic circuit time series
NASA Astrophysics Data System (ADS)
Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.
2000-09-01
The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In validating a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model, by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, to suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit, where we obtain an extremely accurate reconstruction of the observed attractor.
Post-Kerr black hole spectroscopy
NASA Astrophysics Data System (ADS)
Glampedakis, Kostas; Pappas, George; Silva, Hector O.; Berti, Emanuele
2017-09-01
One of the central goals of the newborn field of gravitational wave astronomy is to test gravity in the highly nonlinear, strong field regime characterizing the spacetime of black holes. In particular, "black hole spectroscopy" (the observation and identification of black hole quasinormal mode frequencies in the gravitational wave signal) is expected to become one of the main tools for probing the structure and dynamics of Kerr black holes. In this paper we take a significant step toward that goal by constructing a "post-Kerr" quasinormal mode formalism. The formalism incorporates a parametrized but general perturbative deviation from the Kerr metric and exploits the well-established connection between the properties of the spacetime's circular null geodesics and the fundamental quasinormal mode to provide approximate, eikonal limit formulas for the modes' complex frequencies. The resulting algebraic toolkit can be used in waveform templates for ringing black holes with the purpose of measuring deviations from the Kerr metric. As a first illustrative application of our framework, we consider the Johannsen-Psaltis deformed Kerr metric and compute the resulting deviation in the quasinormal mode frequency relative to the known Kerr result.
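The light-ring/quasinormal-mode connection exploited above has a well-known eikonal form, omega ≈ l·Omega_c − i(n + 1/2)·lambda, where Omega_c is the circular null geodesic's orbital frequency and lambda its Lyapunov exponent. The sketch below evaluates it only for the Schwarzschild case (light ring at r = 3M, Omega_c = lambda = 1/(sqrt(27) M) in G = c = 1 units); it is a textbook special case, not the paper's post-Kerr formalism.

```python
import math

def eikonal_qnm(M=1.0, n=0):
    """Eikonal-limit quasinormal mode estimate for a Schwarzschild black
    hole of mass M (geometric units): omega(l) = l*Omega_c - i*(n+1/2)*lambda,
    with Omega_c = lambda = 1/(sqrt(27)*M) at the r = 3M light ring."""
    omega_c = 1.0 / (math.sqrt(27.0) * M)
    lam = 1.0 / (math.sqrt(27.0) * M)
    def omega(l):
        return complex(l * omega_c, -(n + 0.5) * lam)
    return omega

omega = eikonal_qnm()
w22 = omega(2)  # rough estimate of the l = 2 fundamental mode frequency
```

For l = 2, n = 0 this gives roughly 0.385 − 0.096i in units of 1/M, close to the known numerical Schwarzschild value 0.374 − 0.089i, which is why the eikonal limit is a useful backbone for parametrized deviations.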
DFTB Parameters for the Periodic Table: Part 1, Electronic Structure.
Wahiduzzaman, Mohammad; Oliveira, Augusto F; Philipsen, Pier; Zhechkov, Lyuben; van Lenthe, Erik; Witek, Henryk A; Heine, Thomas
2013-09-10
A parametrization scheme for the electronic part of the density-functional based tight-binding (DFTB) method that covers the periodic table is presented. A semiautomatic parametrization scheme has been developed that uses Kohn-Sham energies and band structure curvatures of real and fictitious homoatomic crystal structures as reference data. A confinement potential is used to tighten the Kohn-Sham orbitals, which includes two free parameters that are used to optimize the performance of the method. The method is tested on more than 100 systems and shows excellent overall performance.
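The confinement potential described above can be sketched as an additive power-law term; the −1/r example potential, the r0 value, and the quadratic exponent below are illustrative assumptions standing in for the paper's two free parameters, not its actual values.

```python
def confined_potential(v_atomic, r0, n=2):
    """Atomic potential plus a power-law confinement term (r/r0)^n, of the
    kind used in DFTB parametrization to tighten the Kohn-Sham orbitals.
    r0 and the exponent n play the role of the free parameters that are
    tuned to optimize the method's performance."""
    def v(r):
        return v_atomic(r) + (r / r0) ** n
    return v

# Toy example: hydrogen-like -1/r potential (atomic units), invented r0
v = confined_potential(lambda r: -1.0 / r, r0=3.0)
```

The added term is negligible for r much smaller than r0 and grows steeply beyond it, which compresses the orbital tails.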
NASA Astrophysics Data System (ADS)
Erhard, M.; Junghans, A. R.; Nair, C.; Schwengner, R.; Beyer, R.; Klug, J.; Kosev, K.; Wagner, A.; Grosse, E.
2010-03-01
Two methods based on bremsstrahlung were applied to the stable even Mo isotopes for the experimental determination of the photon strength function covering the high excitation energy range above 4 MeV with its increasing level density. Photon scattering was used up to the neutron separation energies Sn and data up to the maximum of the isovector giant resonance (GDR) were obtained by photoactivation. After a proper correction for multistep processes the observed quasicontinuous spectra of scattered photons show a remarkably good match to the photon strengths derived from nuclear photoeffect data obtained previously by neutron detection and corrected in absolute scale by using the new activation results. The combined data form an excellent basis to derive a shape dependence of the E1 strength in the even Mo isotopes with increasing deviation from the N=50 neutron shell (i.e., with the impact of quadrupole deformation and triaxiality). The wide energy coverage of the data allows for a stringent assessment of the dipole sum rule and a test of a novel parametrization developed previously which is based on it. This parametrization for the electric dipole strength function in nuclei with A>80 deviates significantly from prescriptions generally used previously. In astrophysical network calculations it may help to quantify the role the p-process plays in cosmic nucleosynthesis. It also has impact on the accurate analysis of neutron capture data of importance for future nuclear energy systems and waste transmutation.
Optimization of a heat-pipe-cooled space radiator for use with a reactor-powered Stirling engine
NASA Technical Reports Server (NTRS)
Moriarty, Michael P.; French, Edward P.
1987-01-01
The design optimization of a reactor-Stirling heat-pipe-cooled radiator is presented. The radiator is a self-deploying concept that uses individual finned heat pipe 'petals' to reject waste heat from a Stirling engine. Radiator optimization methodology is presented, and the results of a parametric analysis of the radiator design variables for a 100-kW(e) system are given. The additional steps of optimizing the radiator resulted in a net system mass savings of 3 percent.
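A first-order sizing relation behind any such radiator trade study is the Stefan-Boltzmann law; the sketch below inverts it for radiating area. The heat load, emissivity, and radiator temperature are invented round numbers, not values from this study.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area(q_watts, emissivity, t_rad_k, t_sink_k=0.0):
    """Idealized single-sided radiating area needed to reject q_watts:
    Q = eps * sigma * A * (T_rad^4 - T_sink^4). Fin efficiency and view
    factors, central to the real optimization, are ignored here."""
    return q_watts / (emissivity * SIGMA * (t_rad_k ** 4 - t_sink_k ** 4))

# Hypothetical case: reject 500 kW of waste heat at 600 K, emissivity 0.85
area = radiator_area(500e3, 0.85, 600.0)
```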
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using the receiver operating characteristic curve. Geometric accuracy evaluation showed agreement between the constructed mesh and raw MRA data sets, with an area under the curve value of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% orthogonal and equiangular skew quality improvements over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
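The area-under-the-ROC-curve figure quoted above can be computed directly from scores via the Mann-Whitney statistic; a minimal sketch follows, with invented voxel scores standing in for the paper's mesh/MRA alignment data.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive example outscores
    a randomly chosen negative one (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores: inside-vessel voxels vs. background voxels
auc = roc_auc([0.9, 0.8, 0.7, 0.6], [0.65, 0.4, 0.3, 0.2])
```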
Precision Geolocation of Active Electromagnetic Sensors Using Stationary Magnetic Sensors
2009-09-01
Excerpt of tabulated results: per-run standard deviations of the optimized geolocation solution on the order of 0.0003-0.005 m per axis (e.g., [0.0028, 0.0014, 0.0012] m), together with tilt-meter mean pitch/roll and least-squares (LSQ) moment estimates for each run.
Stenneken, Prisca; Egetemeir, Johanna; Schulte-Körne, Gerd; Müller, Hermann J; Schneider, Werner X; Finke, Kathrin
2011-10-01
The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these experimental results, however, points to the need for measures that are sufficiently sensitive to differentiate between impaired and preserved attentional components within a unified framework. This first parameter-based group study of attentional components in developmental dyslexia addresses potentially altered attentional components that have recently been associated with parietal dysfunctions in dyslexia. We aimed to isolate the general attentional resources that might underlie reduced span performance, i.e., either a deficient working memory storage capacity, or a slowing in visual perceptual processing speed, or both. Furthermore, by analysing attentional selectivity in dyslexia, we addressed a potential lateralized abnormality of visual attention, i.e., a previously suggested rightward spatial deviation compared to normal readers. We investigated a group of high-achieving young adults with persisting dyslexia and matched normal readers in an experimental whole report and a partial report of briefly presented letter arrays. Possible deviations in the parametric values of the dyslexic compared to the control group were taken as markers for the underlying deficit. The dyslexic group showed a striking reduction in perceptual processing speed (by 26% compared to controls) while their working memory storage capacity was in the normal range. In addition, a spatial deviation of attentional weighting compared to the control group was confirmed in dyslexic readers, which was larger in participants with a more severe dyslexic disorder. 
In general, the present study supports the relevance of perceptual processing speed in disorders of written language acquisition and demonstrates that the parametric assessment provides a suitable tool for specifying the underlying deficit within a unitary framework. Copyright © 2011 Elsevier Ltd. All rights reserved.
Technical errors in planar bone scanning.
Naddaf, Sleiman Y; Collier, B David; Elgazzar, Abdelhamid H; Khalil, Magdy M
2004-09-01
Optimal technique for planar bone scanning improves image quality, which in turn improves diagnostic efficacy. Because planar bone scanning is one of the most frequently performed nuclear medicine examinations, maintaining high standards for this examination is a daily concern for most nuclear medicine departments. Although some problems such as patient motion are frequently encountered, the degraded images produced by many other deviations from optimal technique are rarely seen in clinical practice and therefore may be difficult to recognize. The objectives of this article are to list optimal techniques for 3-phase and whole-body bone scanning, to describe and illustrate a selection of deviations from these optimal techniques for planar bone scanning, and to explain how to minimize or avoid such technical errors.
Giżyńska, Marta K.; Kukołowicz, Paweł F.; Kordowski, Paweł
2014-01-01
Aim The aim of this work is to present a method of beam weight and wedge angle optimization for patients with prostate cancer. Background 3D-CRT is usually realized with forward planning based on a trial-and-error method. Several authors have published methods of beam weight optimization applicable to 3D-CRT; still, none of these methods is in common use. Materials and methods Optimization is based on the assumption that the best plan is achieved if the dose gradient at the ICRU point is equal to zero. Our optimization algorithm requires the beam quality index, depth of maximum dose, profiles of wedged fields and maximum dose to the femoral heads. The method was tested for 10 patients with prostate cancer, treated with the 3-field technique. Optimized plans were compared with plans prepared by 12 experienced planners. Dose standard deviation in the target volume, and minimum and maximum doses were analyzed. Results The quality of plans obtained with the proposed optimization algorithm was comparable to that of plans prepared by experienced planners. The mean difference in target dose standard deviation was 0.1% in favor of the plans prepared by planners for optimization of beam weights and wedge angles. Introducing a correction factor for the patient body outline in the dose gradient at the ICRU point improved dose distribution homogeneity: on average, a 0.1% lower standard deviation was achieved with the optimization algorithm. No significant difference in the mean dose-volume histogram for the rectum was observed. Conclusions Optimization greatly shortens planning time: the average planning time was 5 min for forward planning and less than a minute for computer optimization. PMID:25337411
Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar
2015-01-01
In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject-specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As an alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased the 12-lead detection rate by 7% for a reasonable threshold. PMID:26587538
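One plausible reading of the K point feature described above is sketched below: take the pointwise minimum across leads as the envelope, locate its minimum, and read each lead's baseline deviation there. This interpretation, the sample data, and the units are assumptions for illustration, not the paper's definition or data.

```python
def k_point_deviation(st_segments):
    """Hypothetical K point computation: st_segments is a list of leads,
    each a list of ST-segment baseline deviations over time. The envelope
    is the pointwise minimum across leads; the K point is the sample where
    the envelope is smallest."""
    n = len(st_segments[0])
    envelope = [min(lead[t] for lead in st_segments) for t in range(n)]
    k_idx = min(range(n), key=lambda t: envelope[t])
    return k_idx, [lead[k_idx] for lead in st_segments]

# Three invented leads, four ST-segment samples (mV deviations)
leads = [[0.05, 0.02, 0.04, 0.06],
         [0.03, -0.01, 0.02, 0.05],
         [0.04, 0.01, 0.03, 0.04]]
k_idx, deviations = k_point_deviation(leads)
```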
Performance evaluation of coherent Ising machines against classical neural networks
NASA Astrophysics Data System (ADS)
Haribara, Yoshitaka; Ishikawa, Hitoshi; Utsunomiya, Shoko; Aihara, Kazuyuki; Yamamoto, Yoshihisa
2017-12-01
The coherent Ising machine is expected to find near-optimal solutions to various combinatorial optimization problems, which has been experimentally confirmed with optical parametric oscillators and a field-programmable gate array circuit. Similar mathematical models were proposed three decades ago by Hopfield et al. in the context of classical neural networks. In this article, we compare the computational performance of both models.
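Both machines target the same Ising objective, which can be written down in a few lines. The couplings below are invented, and the zero-temperature Hopfield-style descent is only a toy stand-in for either machine's dynamics.

```python
def ising_energy(J, spins):
    """Ising Hamiltonian H = -sum_{i<j} J_ij s_i s_j (no external field)."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def hopfield_descent(J, spins):
    """Asynchronous zero-temperature updates: align each spin with its
    local field, sweeping until no single flip lowers the energy."""
    n = len(spins)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            field = sum(J[i][j] * spins[j] for j in range(n) if j != i)
            s_new = 1 if field >= 0 else -1
            if s_new != spins[i]:
                spins[i] = s_new
                improved = True
    return spins

# Tiny invented problem: antiferromagnetic triangle plus one extra spin
J = [[0, -1, -1, 1],
     [-1, 0, -1, 0],
     [-1, -1, 0, 0],
     [1, 0, 0, 0]]
state = hopfield_descent(J, [1, 1, 1, 1])
```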
Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...
2015-12-04
Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
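One of the metrics named above, the Nash-Sutcliffe coefficient, has a simple standard definition; the sketch below uses invented runoff numbers, not the study's watershed data.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations of
    the observations from their mean. 1.0 is a perfect match; 0.0 means
    the model is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

# Hypothetical daily runoff (mm): observations vs. model output
obs = [1.0, 2.0, 4.0, 3.0, 2.0]
sim = [1.2, 1.8, 3.5, 3.1, 2.4]
nse = nash_sutcliffe(obs, sim)
```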
NASA Astrophysics Data System (ADS)
Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan
2018-06-01
This paper explores a highly accurate identification modeling approach for ship maneuvering motion with full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it avoids the unmodeled dynamics and multicollinearity inherent in conventional parametric models; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter values and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
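The two best-fitting forms named above (Prelec-2 and Linear in Log Odds) have standard closed-form expressions that can be sketched directly; the parameter values below are illustrative choices producing the typical inverse-S shape, not estimates from the paper:

```python
import math

def prelec2(p, gamma, delta):
    """Prelec two-parameter weighting function: w(p) = exp(-delta * (-ln p)^gamma)."""
    return math.exp(-delta * (-math.log(p)) ** gamma)

def linear_in_log_odds(p, gamma, delta):
    """Linear in Log Odds: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma)."""
    num = delta * p ** gamma
    return num / (num + (1 - p) ** gamma)

# Inverse-S shape: small probabilities are overweighted, large ones underweighted.
for p in (0.05, 0.5, 0.95):
    print(p, round(prelec2(p, 0.65, 1.0), 3), round(linear_in_log_odds(p, 0.6, 0.8), 3))
```

With gamma = delta = 1 both forms reduce to the identity w(p) = p, which is why the qualitative similarity between them makes empirical discrimination hard.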
Chen, Xiaozhong; He, Kunjin; Chen, Zhengming
2017-01-01
The present study proposes an integrated computer-aided approach combining femur surface modeling, fracture-evidence-based recovery plate creation, and plate modification in order to conduct a parametric investigation of the design of a custom plate for a specific patient. The approach improves the design efficiency of patient-specific plates based on the patient's femur parameters and the fracture information. Furthermore, the present approach leads to exploration of plate modification and optimization. The three-dimensional (3D) surface model of a detailed femur and the corresponding fixation plate were represented with high-level feature parameters, and the shape of the specific plate was recursively modified in order to obtain the optimal plate for a specific patient. The proposed approach was tested and verified on a case study, and it could help orthopedic surgeons design and modify the plate to fit the specific femur anatomy and the fracture information.
NASA Astrophysics Data System (ADS)
Dai, Quanqi; Harne, Ryan L.
2018-01-01
The vibrations of mechanical systems and structures are often a combination of periodic and random motions. Emerging interest in exploiting nonlinearities in vibration energy harvesting systems for charging microelectronics may be challenged by such reality due to the potential to transition between favorable and unfavorable dynamic regimes for DC power delivery. Therefore, a need exists to devise an optimization method whereby charging power from nonlinear energy harvesters remains maximized when excitation conditions are neither purely harmonic nor purely random, which have been the focus of past research. This study meets the need by building from an analytical approach that characterizes the dynamic response of nonlinear energy harvesting platforms subjected to combined harmonic and stochastic base accelerations. Here, analytical expressions are formulated and validated to optimize charging power while the influences of the relative proportions of excitation types are concurrently assessed. It is found that a roughly twofold deviation in optimal resistive load can reduce the charging power by 20% when the system is more prominently driven by harmonic base accelerations, whereas a greater proportion of stochastic excitation results in an 11% reduction in power for the same resistance deviation. In addition, the results reveal that when the frequency of a predominantly harmonic excitation deviates by 50% from optimal conditions, the charging power reduces by 70%, whereas the same frequency deviation for a more stochastically dominated excitation reduces total DC power by only 20%. These results underscore the need for maximizing direct current power delivery for nonlinear energy harvesting systems in practical operating environments.
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
NASA Technical Reports Server (NTRS)
Rhoads, James E.; Rigby, Jane Rebecca; Malhotra, Sangeeta; Allam, Sahar; Carilli, Chris; Combes, Francoise; Finkelstein, Keely; Finkelstein, Steven; Frye, Brenda; Gerin, Maryvonne;
2014-01-01
We report on two regularly rotating galaxies at redshift z ≈ 2, using high-resolution spectra of the bright [C II] 158 μm emission line from the HIFI instrument on the Herschel Space Observatory. Both SDSS090122.37+181432.3 ("S0901") and SDSSJ120602.09+514229.5 ("the Clone") are strongly lensed and show the double-horned line profile that is typical of rotating gas disks. Using a parametric disk model to fit the emission line profiles, we find that S0901 has a rotation speed of v sin(i) ≈ 120 ± 7 km s^-1 and a gas velocity dispersion of σg < 23 km s^-1 (1σ). The best-fitting model for the Clone is a rotationally supported disk having v sin(i) ≈ 79 ± 11 km s^-1 and σg < 4 km s^-1 (1σ). However, the Clone is also consistent with a family of dispersion-dominated models having σg = 92 ± 20 km s^-1. Our results showcase the potential of the [C II] line as a kinematic probe of high-redshift galaxy dynamics: [C II] is bright, accessible to heterodyne receivers with exquisite velocity resolution, and traces dense star-forming interstellar gas. Future [C II] line observations with ALMA would offer the further advantage of spatial resolution, allowing a clearer separation between rotation and velocity dispersion.
Unity-Efficiency Parametric Down-Conversion via Amplitude Amplification.
Niu, Murphy Yuezhen; Sanders, Barry C; Wong, Franco N C; Shapiro, Jeffrey H
2017-03-24
We propose an optical scheme, employing optical parametric down-converters interlaced with nonlinear sign gates (NSGs), that completely converts an n-photon Fock-state pump to n signal-idler photon pairs when the down-converters' crystal lengths are chosen appropriately. The proof of this assertion relies on amplitude amplification, analogous to that employed in Grover search, applied to the full quantum dynamics of single-mode parametric down-conversion. When we require that all Grover iterations use the same crystal, and account for potential experimental limitations on crystal-length precision, our optimized conversion efficiencies reach unity for 1≤n≤5, after which they decrease monotonically for n values up to 50, which is the upper limit of our numerical dynamics evaluations. Nevertheless, our conversion efficiencies remain higher than those for a conventional (no NSGs) down-converter.
Testing General Relativity with the Shadow Size of Sgr A*.
Johannsen, Tim; Broderick, Avery E; Plewa, Philipp M; Chatzopoulos, Sotiris; Doeleman, Sheperd S; Eisenhauer, Frank; Fish, Vincent L; Genzel, Reinhard; Gerhard, Ortwin; Johnson, Michael D
2016-01-22
In general relativity, the angular radius of the shadow of a black hole is primarily determined by its mass-to-distance ratio and depends only weakly on its spin and inclination. If general relativity is violated, however, the shadow size may also depend strongly on parametric deviations from the Kerr metric. Based on a reconstructed image of Sagittarius A* (Sgr A*) from a simulated one-day observing run of a seven-station Event Horizon Telescope (EHT) array, we employ a Markov chain Monte Carlo algorithm to demonstrate that such an observation can measure the angular radius of the shadow of Sgr A* with an uncertainty of ∼1.5 μas (6%). We show that existing mass and distance measurements can be improved significantly when combined with upcoming EHT measurements of the shadow size and that tight constraints on potential deviations from the Kerr metric can be obtained.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
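Tukey's fence, used in the method above to eliminate outliers before the parametric fit, can be sketched in a few lines. The quartile convention and the k = 1.5 multiplier below are the common defaults, not necessarily the exact choices of the study:

```python
def quartiles(xs):
    # Quartile estimate via linear interpolation (one of several common conventions).
    s = sorted(xs)
    def q(f):
        i = f * (len(s) - 1)
        lo, hi = int(i), min(int(i) + 1, len(s) - 1)
        return s[lo] + (i - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.75)

def tukey_fence(xs, k=1.5):
    """Keep values inside [Q1 - k*IQR, Q3 + k*IQR]; the rest are flagged as outliers."""
    q1, q3 = quartiles(xs)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in xs if lo <= x <= hi]

data = [4.1, 4.3, 4.0, 4.2, 4.4, 9.8, 4.1]  # 9.8 is an obvious outlier
print(tukey_fence(data))
```

In the article's procedure this filtering is additionally made covariate-dependent by reiteration, i.e. the fence is applied to residuals around the fitted age-dependent centiles rather than to raw values.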
Machining fixture layout optimization using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Dou, Jianping; Wang, Xingsong; Wang, Lei
2011-05-01
Optimization of fixture layout (locator and clamp locations) is critical to reduce geometric error of the workpiece during the machining process. In this paper, the application of the particle swarm optimization (PSO) algorithm is presented to minimize the workpiece deformation in the machining region. A PSO-based approach is developed to optimize fixture layout by integrating the ANSYS parametric design language (APDL) of finite element analysis to compute the objective function for a given fixture layout. A particle library approach is used to decrease the total computation time. The computational experiment of a 2D case shows that the number of function evaluations is decreased by about 96%. A case study illustrates the effectiveness and efficiency of the PSO-based optimization approach.
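The PSO loop itself is compact. A minimal sketch follows, with a simple sphere function standing in for the APDL/finite-element deformation evaluation described in the paper; all parameter values (swarm size, inertia w, coefficients c1/c2) are illustrative defaults:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer minimizing f over a box."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = P[pbest.index(min(pbest))][:]          # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:                  # update personal best
                pbest[i], P[i] = fx, X[i][:]
                if fx < f(g):                  # update global best
                    g = X[i][:]
    return g

best = pso(lambda x: sum(v * v for v in x), dim=2)
print(best)  # near the minimum at the origin
```

In the paper's setting each call to f would be an expensive FEA run, which is why the particle library (caching previously evaluated layouts) cuts function evaluations so sharply.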
Optimization of space manufacturing systems
NASA Technical Reports Server (NTRS)
Akin, D. L.
1979-01-01
Four separate analyses are detailed: transportation to low earth orbit, orbit-to-orbit optimization, parametric analysis of SPS logistics based on earth and lunar source locations, and an overall program option optimization implemented with linear programming. It is found that smaller vehicles are favored for earth launch, with the current Space Shuttle being right at the optimum payload size. Fully reusable launch vehicles represent a savings of 50% over the Space Shuttle; increased reliability with less maintenance could further double the savings. An optimization of orbit-to-orbit propulsion systems using lunar oxygen for propellants shows that ion propulsion is preferable by a 3:1 cost margin over a mass driver reaction engine at optimum values; however, ion engines cannot yet operate in the lower exhaust velocity range where the optimum lies, and total program costs between the two systems are ambiguous. Heavier payloads favor the use of a MDRE. A parametric model of a space manufacturing facility is proposed, and used to analyze recurring costs, total costs, and net present value discounted cash flows. Parameters studied include productivity, effects of discounting, materials source tradeoffs, economic viability of closed-cycle habitats, and effects of varying degrees of nonterrestrial SPS materials needed from earth. Finally, candidate optimal scenarios are chosen and implemented in a linear program with external constraints, in order to arrive at an optimum blend of SPS production strategies that maximizes returns.
NASA Astrophysics Data System (ADS)
Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier
Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. To account for the non-stationarity of the data during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVDs), and that the data are non-stationary (changing Power Spectral Density (PSD)). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated by a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and by a stochastic optimization technique called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramer-Rao Lower Bounds (CRLBs), one for the estimated TVD model parameters and one for the TVD waveforms. Monte-Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
Non-linear auto-regressive models for cross-frequency coupling in neural time series
Tallot, Lucille; Grabot, Laetitia; Doyère, Valérie; Grenier, Yves; Gramfort, Alexandre
2017-01-01
We address the issue of reliably detecting and quantifying cross-frequency coupling (CFC) in neural time series. Based on non-linear auto-regressive models, the proposed method provides a generative and parametric model of the time-varying spectral content of the signals. As this method models the entire spectrum simultaneously, it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals. As the model is probabilistic, it also provides a score of the model “goodness of fit” via the likelihood, enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach. Using three datasets obtained with invasive neurophysiological recordings in humans and rodents, we demonstrate that these models are able to replicate previous results obtained with other metrics, but also reveal new insights such as the influence of the amplitude of the slow oscillation. Using simulations, we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods. We also show how the likelihood can be used to find optimal filtering parameters, suggesting new properties on the spectrum of the driving signal, but also to estimate the optimal delay between the coupled signals, enabling a directionality estimation in the coupling. PMID:29227989
Feature selection and classification of multiparametric medical images using bagging and SVM
NASA Astrophysics Data System (ADS)
Fan, Yong; Resnick, Susan M.; Davatzikos, Christos
2008-03-01
This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
NASA Astrophysics Data System (ADS)
Speck, Thomas; Engel, Andreas; Seifert, Udo
2012-12-01
We study the large deviation function for the entropy production rate in two driven one-dimensional systems: the asymmetric random walk on a discrete lattice and Brownian motion in a continuous periodic potential. We compare two approaches: using the Donsker-Varadhan theory and using the Freidlin-Wentzell theory. We show that the wings of the large deviation function are dominated by a single optimal trajectory: either in the forward direction (positive rate) or in the backward direction (negative rate). The joining of the two branches at zero entropy production implies a non-differentiability and thus the appearance of a ‘kink’. However, around zero entropy production, many trajectories contribute and thus the ‘kink’ is smeared out.
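The structure behind the 'kink' can be illustrated on the asymmetric random walk: the scaled cumulant generating function (SCGF) of the entropy production obeys the Gallavotti-Cohen fluctuation symmetry λ(k) = λ(−1 − k), which ties the positive- and negative-rate branches together. A minimal numerical check (the bias p = 0.7 is an arbitrary illustrative choice, not a value from the paper):

```python
import math

def scgf(k, p=0.7):
    """SCGF of the entropy production per step of a biased random walk:
    step +1 with probability p, -1 with probability 1-p, so the entropy
    produced per step is +/- ln(p/(1-p))."""
    q = 1.0 - p
    s = math.log(p / q)  # entropy production of one forward step
    return math.log(p * math.exp(k * s) + q * math.exp(-k * s))

# Gallavotti-Cohen symmetry: lambda(k) = lambda(-1 - k).
for k in (0.1, 0.4, 0.9):
    print(k, scgf(k) - scgf(-1.0 - k))  # each difference vanishes
```

The symmetry is exact here because e^{-s} = q/p swaps the two terms of the sum; the large deviation rate function of the paper is the Legendre transform of this SCGF.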
QCD Axion Dark Matter with a Small Decay Constant.
Co, Raymond T; Hall, Lawrence J; Harigaya, Keisuke
2018-05-25
The QCD axion is a good dark matter candidate. The observed dark matter abundance can arise from misalignment or defect mechanisms, which generically require an axion decay constant f_a ~ O(10^11) GeV (or higher). We introduce a new cosmological origin for axion dark matter, parametric resonance from oscillations of the Peccei-Quinn symmetry breaking field, that requires f_a ~ 10^8-10^11 GeV. The axions may be warm enough to give deviations from cold dark matter in large scale structure.
Astrophysical applications of the post-Tolman-Oppenheimer-Volkoff formalism
NASA Astrophysics Data System (ADS)
Glampedakis, Kostas; Pappas, George; Silva, Hector O.; Berti, Emanuele
2016-08-01
The bulk properties of spherically symmetric stars in general relativity can be obtained by integrating the Tolman-Oppenheimer-Volkoff (TOV) equations. In previous work [K. Glampedakis, G. Pappas, H. O. Silva, and E. Berti, Phys. Rev. D 92, 024056 (2015)], we developed a "post-TOV" formalism, inspired by parametrized post-Newtonian theory, which allows us to classify in a parametrized, phenomenological form all possible perturbative deviations from the structure of compact stars in general relativity that may be induced by modified gravity at second post-Newtonian order. In this paper we extend the formalism to deal with the stellar exterior, and we compute several potential astrophysical observables within the post-TOV formalism: the surface redshift z_s, the apparent radius R_app, the Eddington luminosity at infinity L_E^∞ and the orbital frequencies. We show that, at leading order, all of these quantities depend on just two post-TOV parameters μ_1 and χ, and we discuss the possibility of measuring (or setting upper bounds on) these parameters.
Parametric Loop Division for 3D Localization in Wireless Sensor Networks
Ahmad, Tanveer
2017-01-01
Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades, and a variety of algorithms have been proposed to improve localization accuracy. However, they are either limited to two-dimensional (2D) space or require specific sensor deployments for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known Parametric Loop Division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy than existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and find its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. PMID:28737714
Beyond-proximity-force-approximation Casimir force between two spheres at finite temperature
NASA Astrophysics Data System (ADS)
Bimonte, Giuseppe
2018-04-01
A recent experiment [J. L. Garrett, D. A. T. Somers, and J. N. Munday, Phys. Rev. Lett. 120, 040401 (2018), 10.1103/PhysRevLett.120.040401] measured for the first time the gradient of the Casimir force between two gold spheres at room temperature. The theoretical analysis of the data was carried out using the standard proximity force approximation (PFA). A fit of the data, using a parametrization of the force valid for the sphere-plate geometry, was used by the authors to place a bound on deviations from PFA. Motivated by this work, we compute the Casimir force between two gold spheres at finite temperature. The semianalytic formula for the Casimir force that we construct is valid for all separations, and can be easily used to interpret future experiments in both the sphere-plate and sphere-sphere configurations. We describe the correct parametrization of the corrections to PFA for two spheres that should be used in data analysis.
Consistency among distance measurements: transparency, BAO scale and accelerated expansion
NASA Astrophysics Data System (ADS)
Avgoustidis, Anastasios; Verde, Licia; Jimenez, Raul
2009-06-01
We explore consistency among different distance measures, including Supernovae Type Ia data, measurements of the Hubble parameter, and determination of the Baryon acoustic oscillation scale. We present new constraints on the cosmic transparency combining H(z) data together with the latest Supernovae Type Ia data compilation. This combination, in the context of a flat ΛCDM model, improves current constraints by nearly an order of magnitude, although the constraints presented here are parametric rather than non-parametric. We re-examine the recently reported tension between the Baryon acoustic oscillation scale and Supernovae data in light of possible deviations from transparency, concluding that the source of the discrepancy may most likely be found among systematic effects of the modelling of the low redshift data or a simple ~2σ statistical fluke, rather than in exotic physics. Finally, we attempt to draw model-independent conclusions about the recent accelerated expansion, determining the acceleration redshift to be z_acc = 0.35 (+0.20/-0.13) (1σ).
A parametric study of single-wall carbon nanotube growth by laser ablation
NASA Technical Reports Server (NTRS)
Arepalli, Sivaram; Holmes, William A.; Nikolaev, Pavel; Hadjiev, Victor G.; Scott, Carl D.
2004-01-01
Results of a parametric study of carbon nanotube production by the double-pulse laser oven process are presented. The effect of various operating parameters on the production of single-wall carbon nanotubes (SWCNTs) is estimated by characterizing the nanotube material using analytical techniques, including scanning electron microscopy, transmission electron microscopy, thermogravimetric analysis and Raman spectroscopy. The study included changing the sequence of the laser pulses, laser energy, pulse separation, type of buffer gas used, operating pressure, flow rate, inner tube diameter and material, and oven temperature. It was found that the material quality and quantity improve with deviation from normal operation parameters, such as laser energy density higher than 1.5 J/cm2, pressure lower than 67 kPa, and flow rates higher than 100 sccm. Use of helium produced mainly small-diameter tubes and a lower yield. The diameter of SWCNTs decreases with decreasing oven temperature and lower flow rates.
Quartz-enhanced photo-acoustic spectroscopy for breath analyses
NASA Astrophysics Data System (ADS)
Petersen, Jan C.; Lamard, Laurent; Feng, Yuyang; Focant, Jeff-F.; Peremans, Andre; Lassen, Mikael
2017-03-01
An innovative quartz-enhanced photoacoustic spectroscopy (QEPAS) sensor for highly sensitive and selective breath gas analysis is introduced. The QEPAS sensor consists of two acoustically coupled micro-resonators (mR) with an off-axis 20 kHz quartz tuning fork (QTF). The complete acoustically coupled mR system is optimized based on finite element simulations and experimentally verified. Due to the very low fabrication costs, the QEPAS sensor presents a clear breakthrough in the field of photoacoustic spectroscopy by introducing novel disposable gas chambers that avoid cleaning after each test. The QEPAS sensor is pumped resonantly by a nanosecond pulsed single-mode mid-infrared optical parametric oscillator (MIR OPO). Spectroscopic measurements of methane and methanol in the 3.1 μm to 3.7 μm wavelength region are conducted, demonstrating a resolution bandwidth of 1 cm^-1. An Allan deviation analysis shows that the detection limit at the optimum integration time for the QEPAS sensor is 32 ppbv at 190 s for methane, and that the background noise is due solely to the thermal noise of the QTF. Spectra of individual molecules as well as mixtures of molecules were measured and analyzed. The molecules are representative of exhaled breath gases that are biomarkers for medical diagnostics.
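The Allan deviation used to read off the optimum integration time (the 190 s figure above) can be computed with the standard non-overlapping estimator. The sketch below is generic, not the authors' analysis code:

```python
import math
import random

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of equally spaced samples y,
    at averaging factor m (i.e. averaging time m times the sample period)."""
    n = len(y) // m                      # number of averaged bins
    bins = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    avar = sum((bins[i + 1] - bins[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))
    return math.sqrt(avar)

# White noise averages down roughly as 1/sqrt(m); the optimum integration
# time is where the Allan plot stops decreasing and drift takes over.
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(10000)]
print(allan_deviation(noise, 1), allan_deviation(noise, 100))
```

For a real sensor trace, y would be the concentration readings at fixed sample spacing, and the deviation is plotted against the averaging time for a range of m.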
CuBe: parametric modeling of 3D foveal shape using cubic Bézier
Yadav, Sunil Kumar; Motamedi, Seyedamirhosein; Oberwahrenbrock, Timm; Oertel, Frederike Cosima; Polthier, Konrad; Paul, Friedemann; Kadas, Ella Maria; Brandt, Alexander U.
2017-01-01
Optical coherence tomography (OCT) allows three-dimensional (3D) imaging of the retina, and is commonly used for assessing pathological changes of fovea and macula in many diseases. Many neuroinflammatory conditions are known to cause modifications to the fovea shape. In this paper, we propose a method for parametric modeling of the foveal shape. Our method exploits invariant features of the macula from OCT data and applies a cubic Bézier polynomial along with a least square optimization to produce a best fit parametric model of the fovea. Additionally, we provide several parameters of the foveal shape based on the proposed 3D parametric modeling. Our quantitative and visual results show that the proposed model is not only able to reconstruct important features from the foveal shape, but also produces less error compared to the state-of-the-art methods. Finally, we apply the model in a comparison of healthy control eyes and eyes from patients with neuroinflammatory central nervous system disorders and optic neuritis, and show that several derived model parameters show significant differences between the two groups. PMID:28966857
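The cubic Bézier least-squares step described above can be illustrated with a small sketch. This is not the authors' implementation: it fits the four control ordinates of a 1-D cubic Bézier to sampled profile values at fixed parameters t_j via the normal equations, and the sample data are hypothetical.

```python
def bernstein3(t):
    # Cubic Bernstein basis values at parameter t in [0, 1].
    u = 1.0 - t
    return (u ** 3, 3 * t * u ** 2, 3 * t ** 2 * u, t ** 3)

def solve4(a, b):
    # Gaussian elimination with partial pivoting for a 4x4 system.
    n = 4
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_bezier(ts, ys):
    # Least-squares control ordinates: minimize sum over j of (B(t_j) - y_j)^2.
    basis = [bernstein3(t) for t in ts]
    ata = [[sum(row[i] * row[j] for row in basis) for j in range(4)] for i in range(4)]
    aty = [sum(row[i] * y for row, y in zip(basis, ys)) for i in range(4)]
    return solve4(ata, aty)

# Recover known control ordinates from noise-free samples of the curve.
ctrl = [0.0, 2.0, -1.0, 1.5]
ts = [j / 20.0 for j in range(21)]
ys = [sum(c * b for c, b in zip(ctrl, bernstein3(t))) for t in ts]
fitted = fit_bezier(ts, ys)
print(fitted)
```

The paper's full 3D model additionally exploits invariant macular features to place the parameters; here the t_j are simply taken as given.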
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giammichele, N.; Fontaine, G.; Brassard, P.
We present a prescription for parametrizing the chemical profile in the core of white dwarfs in light of the recent discovery that pulsation modes may sometimes be deeply confined in some cool pulsating white dwarfs. Such modes may be used as unique probes of the complicated chemical stratification that results from several processes that occurred in previous evolutionary phases of intermediate-mass stars. This effort is part of our ongoing quest for more credible and realistic seismic models of white dwarfs using static, parametrized equilibrium structures. Inspired by successful techniques developed in design optimization fields (such as aerodynamics), we exploit Akima splines for the tracing of the chemical profile of oxygen (carbon) in the core of a white dwarf model. A series of tests are then presented to better seize the precision and significance of the results that can be obtained in an asteroseismological context. We also show that the new parametrization passes an essential basic test, as it successfully reproduces the chemical stratification of a full evolutionary model.
NASA Astrophysics Data System (ADS)
Giammichele, N.; Charpinet, S.; Fontaine, G.; Brassard, P.
2017-01-01
We present a prescription for parametrizing the chemical profile in the core of white dwarfs in light of the recent discovery that pulsation modes may sometimes be deeply confined in some cool pulsating white dwarfs. Such modes may be used as unique probes of the complicated chemical stratification that results from several processes that occurred in previous evolutionary phases of intermediate-mass stars. This effort is part of our ongoing quest for more credible and realistic seismic models of white dwarfs using static, parametrized equilibrium structures. Inspired by successful techniques developed in design optimization fields (such as aerodynamics), we exploit Akima splines for the tracing of the chemical profile of oxygen (carbon) in the core of a white dwarf model. A series of tests are then presented to better seize the precision and significance of the results that can be obtained in an asteroseismological context. We also show that the new parametrization passes an essential basic test, as it successfully reproduces the chemical stratification of a full evolutionary model.
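A minimal pure-Python sketch of Akima interpolation (Akima 1970), the spline family exploited above for tracing the core chemical profile. The profile data below are hypothetical; real seismic modeling would of course work from the full stellar structure.

```python
from bisect import bisect_right

def akima_coeffs(xs, ys):
    """Akima (1970) slope estimates at each knot (requires >= 4 points)."""
    n = len(xs)
    m = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    # Quadratic end extrapolation of segment slopes (Akima's rule).
    ext = [0.0, 0.0] + m + [0.0, 0.0]
    ext[1] = 2 * m[0] - m[1]
    ext[0] = 2 * ext[1] - m[0]
    ext[n + 1] = 2 * m[n - 2] - m[n - 3]
    ext[n + 2] = 2 * ext[n + 1] - m[n - 2]
    t = []
    for i in range(n):
        w1 = abs(ext[i + 3] - ext[i + 2])  # |m[i+1] - m[i]|
        w2 = abs(ext[i + 1] - ext[i])      # |m[i-1] - m[i-2]|
        if w1 + w2 == 0.0:
            t.append(0.5 * (ext[i + 1] + ext[i + 2]))  # locally linear data
        else:
            t.append((w1 * ext[i + 1] + w2 * ext[i + 2]) / (w1 + w2))
    return m, t

def akima_eval(xs, ys, xq):
    # Cubic Hermite evaluation on the interval containing xq.
    m, t = akima_coeffs(xs, ys)
    i = min(max(bisect_right(xs, xq) - 1, 0), len(xs) - 2)
    h = xs[i + 1] - xs[i]
    s = xq - xs[i]
    return (ys[i] + t[i] * s
            + (3 * m[i] - 2 * t[i] - t[i + 1]) * s ** 2 / h
            + (t[i] + t[i + 1] - 2 * m[i]) * s ** 3 / h ** 2)

# Hypothetical core oxygen mass fraction vs. fractional mass coordinate.
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [0.75, 0.74, 0.70, 0.40, 0.05, 0.0]
print(akima_eval(xs, ys, 0.5))
```

Akima's weighting suppresses the overshoot that natural cubic splines produce at sharp composition transitions, which is presumably why it suits steep chemical profiles.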
NASA Astrophysics Data System (ADS)
Ni, Yong; Song, Zhaoqiang; Jiang, Hongyuan; Yu, Shu-Hong; He, Linghui
2015-08-01
How nacreous nanocomposites with optimal combinations of stiffness, strength and toughness depend on constituent property and microstructure parameters is studied using a nonlinear shear-lag model. We show that the interfacial elasto-plasticity and the overlapping length between bricks, dependent on the brick size and brick staggering mode, significantly affect the nonuniformity of the shear stress, the stress-transfer efficiency and thus the failure path. There are two characteristic lengths at which the strength and toughness are optimized, respectively. Simultaneous optimization of the strength and toughness is achieved by matching these lengths as closely as possible in the nacreous nanocomposite with a regularly staggered brick-and-mortar (BM) structure, where simultaneous uniform failures of the brick and interface occur. In the randomly staggered BM structure, as the overlapping length is distributed, the nacreous nanocomposite turns the simultaneous uniform failure into progressive interface or brick failure with a moderate decrease of the strength and toughness. Specifically, there is a parametric range in which the strength and toughness are insensitive to the brick staggering randomness. The obtained results suggest a parametric selection guideline, based on length matching, for the rational design of nacreous nanocomposites. This guideline explains why nacre is strong and tough while most artificial nacreous nanocomposites are not.
NASA Astrophysics Data System (ADS)
Vasilkin, Andrey
2018-03-01
The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely it is that the finally adopted variant will be the most efficient and economical one. However, in modern market conditions, and given the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze and compare any significant number of options. To solve this problem, it is expedient to exploit the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the labor intensity of designing. Methods of structural and parametric optimization have been adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes are constructed that illustrate and explain the introduction of structural optimization into the traditional design of steel frames.
Integrated modeling for parametric evaluation of smart x-ray optics
NASA Astrophysics Data System (ADS)
Dell'Agostino, S.; Riva, M.; Spiga, D.; Basso, S.; Civitani, Marta
2014-08-01
This work is developed in the framework of the AXYOM project, which studies the application of a system of piezoelectric actuators to grazing-incidence X-ray telescope optic prototypes (thin glass or plastic foils) in order to increase their angular resolution. An integrated optomechanical model has been set up to evaluate the performance of X-ray optics under deformations induced by piezo actuators. A parametric evaluation has been carried out over different numbers and positions of actuators to optimize the outcome. Different actuator types have also been evaluated, considering flexible piezoceramic actuators, multi-fiber composite piezo actuators, and PVDF.
Dispersion management for a sub-10-fs, 10 TW optical parametric chirped-pulse amplifier.
Tavella, Franz; Nomura, Yutaka; Veisz, Laszlo; Pervak, Vladimir; Marcinkevicius, Andrius; Krausz, Ferenc
2007-08-01
We report the amplification of three-cycle, 8.5 fs optical pulses in a near-infrared noncollinear optical parametric chirped-pulse amplifier (OPCPA) up to energies of 80 mJ. Improved dispersion management in the amplifier by means of a combination of reflection grisms and a chirped-mirror stretcher allowed us to recompress the amplified pulses to within 6% of their Fourier limit. The novel ultrabroad, ultraprecise dispersion control technology presented in this work opens the way to scaling multiterawatt technology to even shorter pulses by optimizing the OPCPA bandwidth.
Transfer pricing in hospitals and efficiency of physicians: the case of anesthesia services.
Kuntz, Ludwig; Vera, Antonio
2005-01-01
The objective is to investigate theoretically and empirically how the efficiency of the physicians involved in anesthesia and surgery can be optimized by the introduction of transfer pricing for anesthesia services. The anesthesiology data of approximately 57,000 operations carried out at the University Hospital Hamburg-Eppendorf (UKE) in Germany in the period from 2000 to 2002 are analyzed using parametric and non-parametric methods. The principal finding of the empirical analysis is that the efficiency of the physicians involved in anesthesia and surgery at the UKE improved after the introduction of transfer pricing.
Multicutter machining of compound parametric surfaces
NASA Astrophysics Data System (ADS)
Hatna, Abdelmadjid; Grieve, R. J.; Broomhead, P.
2000-10-01
Parametric free forms are used in industries as disparate as footwear, toys, sporting goods, ceramics, digital content creation, and conceptual design. Optimizing tool path patterns and minimizing the total machining time is a central issue in numerically controlled (NC) machining of free-form surfaces. We demonstrate in the present work that multi-cutter machining can achieve as much as a 60% reduction in total machining time for compound sculptured surfaces. The given approach is based upon the pre-processing, as opposed to the usual post-processing, of surfaces for the detection and removal of interference, followed by precise tracking of unmachined areas.
Remote auditing of radiotherapy facilities using optically stimulated luminescence dosimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lye, Jessica, E-mail: jessica.lye@arpansa.gov.au; Dunn, Leon; Kenny, John
Purpose: On 1 July 2012, the Australian Clinical Dosimetry Service (ACDS) released its Optically Stimulated Luminescent Dosimeter (OSLD) Level I audit, replacing the previous TLD-based audit. The aim of this work is to present the results from this new service and the complete uncertainty analysis on which the audit tolerances are based. Methods: The audit release was preceded by a rigorous evaluation of the InLight® nanoDot OSLD system from Landauer (Landauer, Inc., Glenwood, IL). Energy dependence, signal fading from multiple irradiations, batch variation, reader variation, and dose response factors were identified and quantified for each individual OSLD. The detectors are mailed to the facility in small PMMA blocks, based on the design of the existing Radiological Physics Centre audit. Modeling and measurement were used to determine a factor that could convert the dose measured in the PMMA block to dose in water for the facility's reference conditions. This factor is dependent on the beam spectrum. The TPR20,10 was used as the beam quality index to determine the specific block factor for a beam being audited. The audit tolerance was defined using a rigorous uncertainty calculation. The audit outcome is then determined using a scientifically based, two-tiered action-level approach. Audit outcomes within two standard deviations were defined as Pass (Optimal Level), within three standard deviations as Pass (Action Level), and outside of three standard deviations the outcome is Fail (Out of Tolerance). Results: To date the ACDS has audited 108 photon beams with TLD and 162 photon beams with OSLD. The TLD audit results had an average deviation from ACDS of 0.0% and a standard deviation of 1.8%. The OSLD audit results had an average deviation of −0.2% and a standard deviation of 1.4%. The relative combined standard uncertainty was calculated to be 1.3% (1σ).
Pass (Optimal Level) was reduced to ≤2.6% (2σ), and Fail (Out of Tolerance) was reduced to >3.9% (3σ) for the new OSLD audit. Previously, with the TLD audit, the Pass (Optimal Level) and Fail (Out of Tolerance) levels were set at ≤4.0% (2σ) and >6.0% (3σ). Conclusions: The calculated standard uncertainty of 1.3% at one standard deviation is consistent with the measured standard deviation of 1.4% from the audits, confirming the suitability of the uncertainty-budget-derived audit tolerances. The OSLD audit shows greater accuracy than the previous TLD audit, justifying the reduction in audit tolerances. In the TLD audit, all outcomes were Pass (Optimal Level), suggesting that the tolerances were too conservative. In the OSLD audit, 94% of the audits have resulted in Pass (Optimal Level) and 6% in Pass (Action Level). All Pass (Action Level) results have been resolved with a repeat OSLD audit or an on-site ion chamber measurement.
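The two-tiered action-level logic can be written down directly. Below is a minimal sketch using the OSLD tolerances quoted above (2σ = 2.6%, 3σ = 3.9%); the function name and the example deviations are illustrative, not audit data.

```python
def audit_outcome(deviation_pct, optimal=2.6, out_of_tol=3.9):
    """Two-tier audit outcome from the dose deviation (in percent).

    Thresholds follow the OSLD tolerances quoted above:
    2 sigma = 2.6% for Pass (Optimal Level),
    3 sigma = 3.9% beyond which the outcome is Fail (Out of Tolerance).
    """
    d = abs(deviation_pct)
    if d <= optimal:
        return "Pass (Optimal Level)"
    if d <= out_of_tol:
        return "Pass (Action Level)"
    return "Fail (Out of Tolerance)"

print(audit_outcome(-0.2))  # a typical OSLD-scale deviation
print(audit_outcome(3.0))
print(audit_outcome(4.5))
```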
Dickie, David Alexander; Job, Dominic E.; Gonzalez, David Rodriguez; Shenkin, Susan D.; Wardlaw, Joanna M.
2015-01-01
Introduction Neurodegenerative disease diagnoses may be supported by the comparison of an individual patient’s brain magnetic resonance image (MRI) with a voxel-based atlas of normal brain MRI. Most current brain MRI atlases are of young to middle-aged adults and parametric, e.g., mean ±standard deviation (SD); these atlases require data to be Gaussian. Brain MRI data, e.g., grey matter (GM) proportion images, from normal older subjects are apparently not Gaussian. We created a nonparametric and a parametric atlas of the normal limits of GM proportions in older subjects and compared their classifications of GM proportions in Alzheimer’s disease (AD) patients. Methods Using publicly available brain MRI from 138 normal subjects and 138 subjects diagnosed with AD (all 55–90 years), we created: a mean ±SD atlas to estimate parametrically the percentile ranks and limits of normal ageing GM; and, separately, a nonparametric, rank order-based GM atlas from the same normal ageing subjects. GM images from AD patients were then classified with respect to each atlas to determine the effect statistical distributions had on classifications of proportions of GM in AD patients. Results The parametric atlas often defined the lower normal limit of the proportion of GM to be negative (which does not make sense physiologically as the lowest possible proportion is zero). Because of this, for approximately half of the AD subjects, 25–45% of voxels were classified as normal when compared to the parametric atlas; but were classified as abnormal when compared to the nonparametric atlas. These voxels were mainly concentrated in the frontal and occipital lobes. Discussion To our knowledge, we have presented the first nonparametric brain MRI atlas. In conditions where there is increasing variability in brain structure, such as in old age, nonparametric brain MRI atlases may represent the limits of normal brain structure more accurately than parametric approaches. 
Therefore, we conclude that the statistical method used for construction of brain MRI atlases should be selected taking into account the population and aim under study. Parametric methods are generally robust for defining central tendencies, e.g., means, of brain structure. Nonparametric methods are advisable when studying the limits of brain structure in ageing and neurodegenerative disease. PMID:26023913
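The core contrast between the two atlas constructions is easy to demonstrate: on skewed, near-zero data, a Gaussian mean − 1.645 SD lower limit can go negative (physiologically impossible for a proportion), while a rank-based limit cannot. The "grey matter proportion" sample below is synthetic (exponential, truncated to [0, 1]), not the study's MRI data.

```python
import random

def lower_limit_parametric(vals, z=1.645):
    # Gaussian estimate of the 5th-percentile lower limit: mean - 1.645 * SD.
    n = len(vals)
    mean = sum(vals) / n
    sd = (sum((v - mean) ** 2 for v in vals) / (n - 1)) ** 0.5
    return mean - z * sd

def lower_limit_rank(vals, p=0.05):
    # Rank-based (nonparametric) estimate: empirical p-quantile.
    s = sorted(vals)
    k = max(int(p * len(s)) - 1, 0)
    return s[k]

# Hypothetical, heavily skewed grey-matter-proportion voxel values in [0, 1].
random.seed(7)
gm = [min(random.expovariate(12.0), 1.0) for _ in range(2000)]
print(lower_limit_parametric(gm), lower_limit_rank(gm))
```

The rank-based limit is bounded by the data themselves, which is the property the nonparametric atlas relies on.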
NASA Astrophysics Data System (ADS)
Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim
2016-12-01
A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to find change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g., the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers which assume parametric models, e.g., a Gaussian function, for the class-conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e., its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically, instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with the Support Vector Machine (SVM) and NB for detection of defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that, due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions.
Its performance was comparable to SVM, with added advantages of being probabilistic and capable of handling multi-class problems naturally with its original formulation.
Parametric tests of a 40-Ah bipolar nickel-hydrogen battery
NASA Technical Reports Server (NTRS)
Cataldo, R. L.
1986-01-01
A series of tests were performed to characterize battery performance relating to certain operating parameters which include charge current, discharge current, temperature, and pressure. The parameters were varied to confirm battery design concepts and to determine optimal operating conditions.
Cheng, Xianfu; Lin, Yuqun
2014-01-01
The performance of the suspension system is one of the most important factors in vehicle design. For the double wishbone suspension system, conventional deterministic optimization does not consider any deviations of the design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established in the software ADAMS. Sensitivity analysis is utilized to determine the main design variables. Then, the simulation experiment is arranged and a Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization (PSO) method is applied, and a trade-off between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
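A minimal sketch of the robust-optimization step, assuming (hypothetically) that Kriging surrogates for the mean and scatter of a single quality characteristic are already available. The surrogate formulas, bounds, and PSO settings below are illustrative, not the paper's; the point is only that PSO minimizes a mean-plus-deviation trade-off.

```python
import random

def kriging_mean(x):
    # Hypothetical stand-in for the Kriging mean-response surrogate.
    return (x - 2.0) ** 2 + 0.01

def kriging_std(x):
    # Hypothetical stand-in for the response scatter due to manufacturing.
    return 0.05 * abs(x)

def robust_objective(x):
    # Trade-off between mean performance and its deviation (weight 3 chosen here).
    return kriging_mean(x) + 3.0 * kriging_std(x)

def pso(f, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Basic particle swarm optimization over a 1-D box [lo, hi]."""
    random.seed(seed)
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

best_x, best_f = pso(robust_objective, 0.0, 4.0)
print(best_x, best_f)
```

For this toy objective the robust optimum sits at x = 1.925, slightly shifted from the nominal optimum x = 2.0 by the deviation penalty, which is exactly the kind of shift the robust design seeks.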
Digital multi-channel stabilization of four-mode phase-sensitive parametric multicasting.
Liu, Lan; Tong, Zhi; Wiberg, Andreas O J; Kuo, Bill P P; Myslivets, Evgeny; Alic, Nikola; Radic, Stojan
2014-07-28
A stable four-mode phase-sensitive (4MPS) process was investigated as a means to enhance the conversion efficiency (CE) and signal-to-noise ratio (SNR) of two-pump driven parametric multicasting. The instability of a multi-beam phase-sensitive (PS) device, which inherently behaves as an interferometer with an output subject to ambient-induced fluctuations, was addressed theoretically and experimentally. A new stabilization technique that controls the phases of the three input waves of the 4MPS multicaster and maximizes the CE was developed and described. Stabilization relies on a digital phase-locked loop (DPLL), specifically developed to control the pump phases and guarantee stable 4MPS operation independent of environmental fluctuations. The technique also controls a single (signal) input phase to optimize the PS-induced improvement of the CE and SNR. The new, continuous-operation DPLL has allowed for fully stabilized PS parametric broadband multicasting, demonstrating a CE improvement over 20 signal copies in excess of 10 dB.
Parametric Characterization of TES Detectors Under DC Bias
NASA Technical Reports Server (NTRS)
Chiao, Meng P.; Smith, Stephen James; Kilbourne, Caroline A.; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Ewin, Audrey J.;
2016-01-01
The X-ray Integral Field Unit (X-IFU) on the European Space Agency's (ESA's) Athena mission will be the first high-resolution X-ray spectrometer in space using a large-format transition-edge sensor microcalorimeter array. Motivated by optimization of detector performance for X-IFU, we have conducted an extensive campaign of parametric characterization on transition-edge sensor (TES) detectors with nominal geometries and physical properties, in order to establish sensitivity trends relative to magnetic field, dc bias on the detectors, and operating temperature, and to improve our understanding of detector behavior relative to its fundamental properties such as thermal conductivity, heat capacity, and transition temperature. These results were used for validation of a simple linear detector model in which a small perturbation can be introduced to one or multiple parameters to estimate the error budget for X-IFU. We show here the results of our parametric characterization of TES detectors and briefly discuss the comparison with the TES model.
Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation
Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman
2013-01-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. 
The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.
Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman
2013-10-21
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. 
The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical (18)F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.
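The Patlak OLS step common to both versions of this study can be sketched as follows. For an irreversible tracer, C_T(t)/C_p(t) = Ki·[∫C_p dτ / C_p(t)] + V becomes linear at late times, so ordinary least squares on the transformed coordinates recovers Ki (slope) and V (intercept). The input function and kinetic constants below are hypothetical, not clinical data.

```python
import math

def ols(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical plasma input function sampled each minute.
times = [float(t) for t in range(1, 31)]
cp = [math.exp(-0.1 * t) + 0.2 for t in times]
# Running integral of Cp by the trapezoid rule.
icp, acc = [], 0.0
for k in range(len(times)):
    if k > 0:
        acc += 0.5 * (cp[k] + cp[k - 1]) * (times[k] - times[k - 1])
    icp.append(acc)
ki_true, v_true = 0.05, 0.3  # assumed uptake rate and blood distribution volume
ct = [ki_true * icp[k] + v_true * cp[k] for k in range(len(times))]  # ideal tissue curve
# Patlak coordinates: x = integral(Cp)/Cp, y = Ct/Cp; slope = Ki, intercept = V.
xs = [icp[k] / cp[k] for k in range(len(times))]
ys = [ct[k] / cp[k] for k in range(len(times))]
ki_est, v_est = ols(xs, ys)
print(ki_est, v_est)
```

The hybrid regression proposed in the study replaces plain OLS per voxel with a correlation-driven blend, but the per-voxel transformation above is the common starting point.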
Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael
2016-01-01
Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), standard error of TDI estimates (compared with their true simulated standard errors), and test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily under all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that of 0.9. The performances of nonparametric and parametric TDI methods are compared with a real data example. Nonparametric TDI can be very useful when the underlying distribution on the difference is not normal, especially when it has a heavy tail.
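The two TDI estimators being compared can be sketched on synthetic paired differences. The parametric form below uses Lin's (2000) normal approximation, TDI_p ≈ z_(1+p)/2·√(μ² + σ²), stated here as an assumption about the reference method; the nonparametric form is simply the empirical p-quantile of the absolute differences (the quantile-regression machinery of the paper reduces to this in the unconditional case).

```python
import random

def tdi_parametric(diffs, z=1.6449):
    """Normal approximation at p = 0.9 (assumed Lin-2000 form):
    TDI_p ~ z_{(1+p)/2} * sqrt(mu^2 + sigma^2); z = 1.6449 is the 0.95 quantile."""
    n = len(diffs)
    mu = sum(diffs) / n
    var = sum((d - mu) ** 2 for d in diffs) / (n - 1)
    return z * (mu ** 2 + var) ** 0.5

def tdi_nonparametric(diffs, p=0.9):
    # Empirical p-quantile of the absolute paired differences.
    s = sorted(abs(d) for d in diffs)
    k = min(int(p * len(s)), len(s) - 1)
    return s[k]

random.seed(11)
# Hypothetical paired-measurement differences: normal with a small bias.
diffs = [random.gauss(0.1, 1.0) for _ in range(5000)]
print(tdi_parametric(diffs), tdi_nonparametric(diffs))
```

On normal data the two estimates agree closely; on heavy-tailed data the nonparametric version is the safer choice, matching the paper's conclusion.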
NASA Astrophysics Data System (ADS)
Beller, S.; Monteiller, V.; Combe, L.; Operto, S.; Nolet, G.
2018-02-01
Full-waveform inversion (FWI) is not yet a mature imaging technology for lithospheric imaging from teleseismic data. Therefore, its promise and pitfalls need to be assessed more accurately according to the specifications of teleseismic experiments. Three important issues are related to (1) the choice of the lithospheric parametrization for optimization and visualization, (2) the initial model and (3) the acquisition design, in particular in terms of receiver spread and sampling. These three issues are investigated with a realistic synthetic example inspired by the CIFALPS experiment in the Western Alps. Isotropic elastic FWI is implemented with an adjoint-state formalism and aims to update three parameter classes by minimization of a classical least-squares difference-based misfit function. Three different subsurface parametrizations, combining density (ρ) with P and S wave speeds (Vp and Vs), P and S impedances (Ip and Is), or elastic moduli (λ and μ), are first discussed based on their radiation patterns before their assessment by FWI. We conclude that the (ρ, λ, μ) parametrization provides the FWI models that best correlate with the true ones after recombining a posteriori the (ρ, λ, μ) optimization parameters into Ip and Is. Owing to the low-frequency content of teleseismic data, 1-D reference global models such as PREM provide sufficiently accurate initial models for FWI after the smoothing that is necessary to remove the imprint of the layering. Two kinds of station deployments are assessed: a coarse areal geometry versus a dense linear one. We unambiguously conclude that a coarse areal geometry should be favoured, as it dramatically increases the penetration in depth of the imaging as well as the horizontal resolution. This is because the areal geometry significantly increases local wavenumber coverage, through a broader sampling of the scattering and dip angles, compared to a linear deployment.
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach, for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method used for obtaining a linear combination for maximizing the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus, apply the penalization regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray and synthetic data. Through numerical studies, AucPR is shown to perform better than the penalized logistic regression and the nonparametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily-implementable linear classifier AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Beside gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
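The quantity being maximized by AucPR is the empirical AUC of a linear score, i.e. the Mann-Whitney statistic over case/control pairs. A minimal sketch of that objective (the weights beta and marker values are toy data; the penalized fitting of beta itself is omitted):

```python
def empirical_auc(case_scores, control_scores):
    """Mann-Whitney estimate of AUC: the probability that a random case
    scores higher than a random control (ties count one half)."""
    wins = 0.0
    for s in case_scores:
        for t in control_scores:
            if s > t:
                wins += 1.0
            elif s == t:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Two hypothetical markers combined with fixed weights beta (toy data).
beta = [0.8, 0.4]
cases = [[1.2, 0.9], [2.0, 1.1], [0.7, 1.5], [1.8, 0.2]]
controls = [[0.3, 0.5], [0.9, 0.1], [0.2, 1.0], [1.5, 0.6]]
score = lambda x: sum(b * v for b, v in zip(beta, x))
auc = empirical_auc([score(x) for x in cases], [score(x) for x in controls])
print(auc)  # 0.875: 14 of 16 case/control pairs are concordant
```

Because this objective is a step function of beta, AucPR's move to a regression formulation with lasso or elastic-net penalties is what makes high-dimensional optimization tractable.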
Revisiting the Distance Duality Relation using a non-parametric regression method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, Akshay; Mahajan, Shobhit; Mukherjee, Amitabha
2016-07-01
The interdependence of luminosity distance, D_L, and angular diameter distance, D_A, given by the distance duality relation (DDR) is very significant in observational cosmology. It is closely tied to the temperature-redshift relation of the Cosmic Microwave Background (CMB) radiation. Any deviation from η(z) ≡ D_L / [D_A (1+z)^2] = 1 indicates a possible emergence of new physics. Our aim in this work is to check the consistency of these relations using a non-parametric regression method, namely LOESS with SIMEX. This technique avoids dependency on the cosmological model and works with a minimal set of assumptions. Further, to analyze the efficiency of the methodology, we simulate a dataset of 20 points of η(z) data based on a phenomenological model η(z) = (1+z)^ε. The error on the simulated data points is obtained by using the temperature of the CMB radiation at various redshifts. For testing the distance duality relation, we use the JLA SNe Ia data for luminosity distances, while the angular diameter distances are obtained from radio galaxy datasets. Since the DDR is linked with the CMB temperature-redshift relation, we also use the CMB temperature data to reconstruct η(z). It is important to note that with CMB data, we are able to study the evolution of the DDR up to a very high redshift, z = 2.418. In this analysis, we find no evidence of deviation from η = 1 within the 1σ region over the entire redshift range used (0 < z ≤ 2.418).
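The consistency check can be mimicked with a toy reconstruction. The sketch below is a minimal local-linear smoother with a tricube kernel (the building block of LOESS; the SIMEX measurement-error step is omitted), applied to 20 simulated η(z) points with ε = 0, i.e. with the duality relation holding exactly; the noise level and span are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated eta(z) data under the phenomenological model eta(z) = (1+z)^eps
# with eps = 0, plus invented Gaussian noise.
z = np.linspace(0.05, 2.4, 20)
eta_obs = (1 + z) ** 0.0 + rng.normal(0.0, 0.05, z.size)

def loess(x, y, x0, frac=0.6):
    """Local linear fit at x0 using tricube weights (a minimal LOESS step)."""
    k = max(2, int(frac * len(x)))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]          # k nearest neighbours of x0
    h = d[idx].max()
    w = (1 - (d[idx] / h) ** 3) ** 3  # tricube kernel weights
    X = np.column_stack([np.ones(k), x[idx] - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
    return beta[0]                    # fitted value at x0

eta_smooth = np.array([loess(z, eta_obs, zi) for zi in z])
```

With ε = 0 the smoothed curve should hug η = 1, mirroring the paper's null result.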
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, like a failure mode, has become a function of reliability. The primitive variables like thermomechanical loads, material properties, and failure theories, as well as variables like depth of beam or thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
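The downside and upside deviations that motivate the asymmetric uncertainty set can be computed directly. A minimal sketch, with invented return data:

```python
# Downside and upside semi-deviations of a return series: deviations below
# and above the mean are penalized separately, capturing asymmetry.
returns = [0.02, -0.05, 0.01, 0.03, -0.08, 0.04, 0.00, -0.01]  # invented

mean = sum(returns) / len(returns)
downside = (sum(min(r - mean, 0.0) ** 2 for r in returns) / len(returns)) ** 0.5
upside = (sum(max(r - mean, 0.0) ** 2 for r in returns) / len(returns)) ** 0.5
```

For this negatively skewed sample the downside semi-deviation exceeds the upside one, exactly the asymmetry a symmetric variance would hide.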
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|.
The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN
NASA Astrophysics Data System (ADS)
Peter, Josephine; Doloi, B.; Bhattacharyya, B.
2011-01-01
The present research paper deals with artificial neural network (ANN)- and response surface methodology (RSM)-based mathematical modeling and optimization analysis of laser marking characteristics on alumina ceramic. The experiments have been planned and carried out based on design of experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The output of the RSM optimal data is validated through experimentation and the ANN predictive model. Good agreement is observed between the results based on the ANN predictive model and actual experimental observations.
Large deviations and portfolio optimization
NASA Astrophysics Data System (ADS)
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
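The distinction between the average and the typical return in a multiplicative process can be made concrete. A minimal sketch with invented return factors: the expected wealth grows (arithmetic mean factor above 1), while a typical trajectory, governed by E[log factor] < 0, decays, which is the large-deviations point the abstract makes.

```python
import math

# A multiplicative wealth process: each period the return factor is
# 1.5 or 0.6 with equal probability (hypothetical numbers).
up, down, p = 1.5, 0.6, 0.5
n = 50  # number of periods

# Average (expected) wealth grows at the arithmetic mean factor...
mean_factor = p * up + (1 - p) * down       # = 1.05, i.e. growth
expected_wealth = mean_factor ** n

# ...but typical (median) wealth is governed by E[log factor],
# which is negative here, so a typical trajectory decays.
log_growth = p * math.log(up) + (1 - p) * math.log(down)
typical_wealth = math.exp(n * log_growth)
```

The gap between the two widens exponentially with the horizon, which is why the time-horizon of the investment matters so much.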
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane as well as the density and heat of vaporization of the liquid at atmospheric pressure for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
NASA Technical Reports Server (NTRS)
Prakash, OM, II
1991-01-01
Three linear controllers are designed to regulate the end effector of the Space Shuttle Remote Manipulator System (SRMS) operating in Position Hold Mode. In this mode of operation, jet firings of the Orbiter can be treated as disturbances while the controller tries to keep the end effector stationary in an Orbiter-fixed reference frame. The three design techniques used are: the Linear Quadratic Regulator (LQR), H2 optimization, and H-infinity optimization. The nonlinear SRMS is linearized by modelling the effects of the significant nonlinearities as uncertain parameters. Each regulator design is evaluated for robust stability in light of the parametric uncertainties using both the small gain theorem with an H-infinity norm and the less conservative μ-analysis test. All three regulator designs offer significant improvement over the current system on the nominal plant. Unfortunately, even after dropping performance requirements and designing exclusively for robust stability, robust stability cannot be achieved. The SRMS suffers from lightly damped poles with real parametric uncertainties. Such a system renders the μ-analysis test, which allows for complex perturbations, too conservative.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
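The reduction of the quadratic problem to a linear complementarity problem can be sketched for the simplest case, min ½xᵀQx + cᵀx subject to x ≥ 0, whose KKT conditions are exactly the LCP: w = Qx + c, x ≥ 0, w ≥ 0, xᵀw = 0. The brute-force active-set solver and the data below are illustrative only, not the paper's parametric algorithm.

```python
import itertools
import numpy as np

def solve_lcp(M, q):
    """Solve w = M x + q, x >= 0, w >= 0, x.w = 0 by enumerating active
    sets. Exponential in dimension; for tiny illustrative problems only."""
    n = len(q)
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    for active in subsets:
        x = np.zeros(n)
        a = list(active)
        if a:  # components allowed to be positive: w_a = 0 there
            x[a] = np.linalg.solve(M[np.ix_(a, a)], -q[a])
        w = M @ x + q
        if np.all(x >= -1e-9) and np.all(w >= -1e-9):
            return x, w
    return None

# QP: minimize 0.5 x'Qx + c'x subject to x >= 0 (hypothetical data).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
x, w = solve_lcp(Q, c)   # the KKT system of this QP is exactly LCP(Q, c)
```

For this data the unconstrained minimizer is already nonnegative, so the LCP returns x = (1, 2) with w = 0.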
Optimal Design of a Traveling-Wave Kinetic Inductance Amplifier Operated in Three-Wave Mixing Mode
NASA Astrophysics Data System (ADS)
Erickson, Robert; Bal, Mustafa; Ku, Ksiang-Sheng; Wu, Xian; Pappas, David
In the presence of a DC bias, an injected pump of frequency f_P and a signal of frequency f_S undergo parametric three-wave mixing (3WM) within a traveling-wave kinetic inductance (KIT) amplifier, producing an idler product of frequency f_I = f_P − f_S. Periodic frequency stops are engineered into the coplanar waveguide of the device to enhance signal amplification. With f_P placed just above the first frequency stop gap, 3WM broadband signal gain is achieved with maximum gain at f_S = f_P/2. Within a theory of the dispersion of traveling waves in the presence of these engineered loadings, which accounts for this broadband signal gain, we show how an optimal frequency-stop design may be constructed to achieve maximum signal amplification. The optimization approach we describe can be applied to the design of other nonlinear traveling-wave parametric amplifiers. This work was supported by the Army Research Office and the Laboratory for Physical Sciences under EAO221146, EAO241777, and the NIST Quantum Initiative. RPE acknowledges Grant 60NANB14D024 from the US Department of Commerce, NIST.
Geometry Modeling and Grid Generation for Design and Optimization
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
Geometry modeling and grid generation (GMGG) have played and will continue to play an important role in computational aerosciences. During the past two decades, tremendous progress has occurred in GMGG; however, GMGG is still the biggest bottleneck to routine applications for complicated Computational Fluid Dynamics (CFD) and Computational Structures Mechanics (CSM) models for analysis, design, and optimization. We are still far from incorporating GMGG tools in a design and optimization environment for complicated configurations. It is still a challenging task to parameterize an existing model in today's Computer-Aided Design (CAD) systems, and the models created are not always good enough for automatic grid generation tools. Designers may believe their models are complete and accurate, but unseen imperfections (e.g., gaps, unwanted wiggles, free edges, slivers, and transition cracks) often cause problems in gridding for CSM and CFD. Despite many advances in grid generation, the process is still the most labor-intensive and time-consuming part of the computational aerosciences for analysis, design, and optimization. In an ideal design environment, a design engineer would use a parametric model to evaluate alternative designs effortlessly and optimize an existing design for a new set of design objectives and constraints. For this ideal environment to be realized, the GMGG tools must have the following characteristics: (1) be automated, (2) provide consistent geometry across all disciplines, (3) be parametric, and (4) provide sensitivity derivatives. This paper will review the status of GMGG for analysis, design, and optimization processes, and it will focus on some emerging ideas that will advance the GMGG toward the ideal design environment.
Li, Kewei; Sun, Wei
2017-03-01
In this study, we developed a computational framework to investigate the impact of leaflet geometry of a transcatheter aortic valve (TAV) on the leaflet stress distribution, aiming at optimizing TAV leaflet design to reduce its peak stress. Utilizing a generic TAV model developed previously [Li and Sun, Annals of Biomedical Engineering, 2010. 38(8): 2690-2701], we first parameterized the 2D leaflet geometry by mathematical equations, then by perturbing the parameters of the equations, we could automatically generate a new leaflet design, remesh the 2D leaflet model and build a 3D leaflet model from the 2D design via a Python script. Approximately 500 different leaflet designs were investigated by simulating TAV closure under the nominal circular deployment and physiological loading conditions. From the simulation results, we identified a new leaflet design that could reduce the previously reported valve peak stress by about 5%. The parametric analysis also revealed that increasing the free edge width had the highest overall impact on decreasing the peak stress. A similar computational analysis was further performed for a TAV deployed in an abnormal, asymmetric elliptical configuration. We found that a minimal free edge height of 0.46 mm should be adopted to prevent central backflow leakage. This increase of the free edge height resulted in an increase of the leaflet peak stress. Furthermore, the parametric study revealed a complex response surface for the impact of the leaflet geometric parameters on the peak stress, underscoring the importance of performing a numerical optimization to obtain the optimal TAV leaflet design. Copyright © 2016 John Wiley & Sons, Ltd.
A parallel-architecture parametric equalizer for air-coupled capacitive ultrasonic transducers.
McSweeney, Sean G; Wright, William M D
2012-01-01
Parametric equalization is rarely applied to ultrasonic transducer systems, for which it could be used on either the transmitter or the receiver to achieve a desired response. An optimized equalizer with both bump and cut capabilities would be advantageous for ultrasonic systems in applications in which variations in the transducer performance or the properties of the propagating medium produce a less-than-desirable signal. Compensation for non-ideal transducer response could be achieved using equalization on a device-by-device basis. Additionally, calibration of ultrasonic systems in the field could be obtained by offline optimization of equalization coefficients. In this work, a parametric equalizer for ultrasonic applications has been developed using multiple bi-quadratic filter elements arranged in a novel parallel arrangement to increase the flexibility of the equalization. The equalizer was implemented on a programmable system-on-chip (PSOC) using a small number of parallel 4th-order infinite impulse response switched-capacitor band-pass filters. Because of the interdependency of the required coefficients for the switched capacitors, particle swarm optimization (PSO) was used to determine the optimum values. The response of a through-transmission system using air-coupled capacitive ultrasonic transducers was then equalized to idealized Hamming function or brick-wall frequency-domain responses. In each case, there was excellent agreement between the equalized signals and the theoretical model, and the fidelity of the time-domain response was maintained. The bandwidth and center frequency response of the system were significantly improved. It was also shown that the equalizer could be used on either the transmitter or the receiver, and the system could compensate for the effects of transmitter-receiver misalignment. © 2012 IEEE
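The bi-quadratic band-pass elements and their parallel combination can be sketched analytically. The sketch below uses the standard RBJ audio-EQ-cookbook band-pass coefficients (an assumption; the paper's switched-capacitor realization will differ) and evaluates the frequency response of two sections whose outputs simply add in parallel; all frequencies and Q values are invented.

```python
import cmath
import math

def bandpass_biquad(f0, fs, Q):
    """RBJ-cookbook constant-peak-gain band-pass biquad coefficients."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    b = [alpha, 0.0, -alpha]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return b, a

def gain(b, a, f, fs):
    """Magnitude of the biquad's frequency response at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)   # z^-1 on the unit circle
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# Two band-pass sections in parallel: their responses add, which is the
# flexibility the parallel architecture exploits (invented parameters).
fs = 1_000_000                       # 1 MHz sample rate, hypothetical
b1, a1 = bandpass_biquad(40_000, fs, 5.0)
b2, a2 = bandpass_biquad(60_000, fs, 5.0)
combined = lambda f: gain(b1, a1, f, fs) + gain(b2, a2, f, fs)
```

Each section has unit gain at its own center frequency and rolls off away from it, so the parallel sum shapes a two-humped passband.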
Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging
NASA Astrophysics Data System (ADS)
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-04-01
The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurements angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. The following angular measurements only spread the total dose to the measurements without improving or worsening CRLB, but the added measurements may improve parametric images by reducing estimation bias. 
Next, using CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-01-01
The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurements angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. The following angular measurements only spread the total dose to the measurements without improving or worsening CRLB, but the added measurements may improve parametric images by reducing estimation bias. 
Next, using CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402
Parametric Study of Sealant Nozzle
NASA Astrophysics Data System (ADS)
Yamamoto, Yoshimi
Recent years have seen rapid advancement of manufacturing processes in the aerospace industry. Sealant nozzles are a critical device in fuel tank applications for optimal bonds and for ground service support and repair. Sealants have always been a challenging area for optimizing and understanding flow patterns. A parametric study was conducted to better understand the geometric effects on sealant flow and to determine whether the sealant rheology can be numerically modeled. The Star-CCM+ software was used to successfully develop the parametric model, material model and physics continua, and to simulate the fluid flow for the sealant nozzle. The simulation results for Semco sealant nozzles showed the geometric effects on fluid flow patterns and the influences of the conical area reduction, tip length, inlet diameter, and tip angle parameters. A smaller outlet diameter induced maximum outlet velocity at the exit and contributed to a high pressure drop. The conical area reduction, tip angle and inlet diameter contributed most to the viscosity variation phenomenon. Developing and simulating two different flow models (Segregated Flow and Viscous Flow) proved that both can be used to obtain comparable velocity and pressure drop results; however, differences are seen visually in the non-uniformity of the velocity and viscosity fields for the Viscous Flow Model (VFM). A comprehensive simulation setup for sealant nozzles was developed so other analysts can utilize the data.
PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems
NASA Astrophysics Data System (ADS)
Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai
2017-09-01
In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.
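The PRESS statistic at the heart of the algorithm has a cheap leave-one-out form for linear-in-the-parameters models (such as the polynomial NARX terms), which makes cross-validated forward term selection inexpensive. A minimal sketch on invented data, using plain forward selection rather than the full orthogonal-regression EFOR machinery:

```python
import numpy as np

rng = np.random.default_rng(2)

def press(X, y):
    """Predicted residual sum of squares via the leave-one-out identity
    e_i / (1 - h_ii), where h_ii is the hat-matrix diagonal."""
    H = X @ np.linalg.solve(X.T @ X, X.T)
    e = y - H @ y
    return np.sum((e / (1 - np.diag(H))) ** 2)

def forward_select(terms, y, n_select):
    """Greedily add the candidate column that minimizes PRESS."""
    chosen = []
    for _ in range(n_select):
        rest = [j for j in range(terms.shape[1]) if j not in chosen]
        best = min(rest, key=lambda j: press(terms[:, chosen + [j]], y))
        chosen.append(best)
    return chosen

# Candidate model terms: columns 0 and 2 generate y; columns 1, 3 are noise.
N = 60
T = rng.normal(size=(N, 4))
y = 2.0 * T[:, 0] - 1.5 * T[:, 2] + 0.05 * rng.normal(size=N)

selected = forward_select(T, y, 2)
```

Selecting by PRESS rather than in-sample error reduction (as in the AERR criterion) is exactly what guards against over-fitting with poor generalization.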
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
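The indirect route rests on the Patlak graphical analysis: at late times, C_t(t)/C_p(t) plotted against ∫C_p dτ / C_p(t) is linear with slope Ki. A minimal sketch with an invented plasma input and a noise-free tissue curve (hypothetical Ki and blood-volume fraction Vb):

```python
import numpy as np

# Synthetic plasma input and tissue time-activity curve following the
# Patlak model C_t(t) = Ki * integral(Cp) + Vb * Cp(t); invented values.
t = np.linspace(0.1, 60.0, 40)                 # minutes
Cp = 10.0 * np.exp(-0.1 * t) + 1.0             # plasma concentration
intCp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))])  # trapezoid
Ki_true, Vb = 0.05, 0.3
Ct = Ki_true * intCp + Vb * Cp

# Patlak plot: Ct/Cp versus intCp/Cp, linear with slope Ki.
x = intCp / Cp
yv = Ct / Cp
Ki_est, intercept = np.polyfit(x[20:], yv[20:], 1)  # late-time linear fit
```

In the direct method this same linear model is pushed inside the reconstruction, so Ki is estimated from the sinogram rather than from per-pixel TAC fits.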
Lumped parametric model of the human ear for sound transmission.
Feng, Bin; Gan, Rong Z
2004-09-01
A lumped parametric model of the human auditory periphery, consisting of six masses suspended with six springs and ten dashpots, was proposed. This model will provide the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data, and then determined through a parameter optimization process. The transfer function of the middle ear obtained from human temporal bone experiments with laser Doppler interferometers was used to create the target function during the optimization process. It was found that, among 14 spring and dashpot parameters, five had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of those parameters is provided, with appropriate applications for sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of a physical model of the ear.
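Each element of such a model is a mass-spring-dashpot unit whose frequency response is elementary. A minimal single-degree-of-freedom sketch with invented parameter values (not the paper's identified middle-ear parameters):

```python
import math

# One mass-spring-dashpot element, the building block of the six-mass
# lumped model (hypothetical parameter values).
m = 0.01e-3   # mass, kg
k = 1.0e3     # spring stiffness, N/m
c = 0.05      # dashpot damping, N*s/m

def magnitude(f_hz):
    """Displacement magnitude |X/F| of m x'' + c x' + k x = F e^{jwt}."""
    w = 2 * math.pi * f_hz
    return 1.0 / math.hypot(k - m * w * w, c * w)

# Undamped natural frequency, where the response peaks for light damping.
f_n = math.sqrt(k / m) / (2 * math.pi)
```

The sensitivity of the model's transfer function to a given spring or dashpot value follows directly from how that parameter shifts or damps such resonances.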
Parametric study for the optimization of ionic liquid pretreatment of corn stover
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papa, Gabriella; Feldman, Taya; Sale, Kenneth L.
A parametric study of the efficacy of ionic liquid (IL) pretreatment (PT) of corn stover (CS) using 1-ethyl-3-methylimidazolium acetate ([C2C1Im][OAc]) and cholinium lysinate ([Ch][Lys]) was conducted. The impact of 50% and 15% biomass loading for milled and non-milled CS on IL-PT was evaluated, as well as the impact of 20 and 5 mg enzyme/g glucan on saccharification efficiency. The glucose and xylose released were measured for 32 conditions: 2 ionic liquids (ILs), 2 temperatures, 2 particle sizes (S), 2 solid loadings, and 2 enzyme loadings. Statistical analysis indicates that sugar yields were correlated with lignin and xylan removal and depended on these factors, although S did not explain variation in sugar yields. Both ILs were effective in pretreating large-particle-size CS without compromising sugar yields. The knowledge from material and energy balances is an essential step in directing optimization of sugar recovery at desirable process conditions.
Azunre, P.
2016-09-21
In this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring the solution of auxiliary systems two and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
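The overestimation that makes interval bounds loose is easy to demonstrate with minimal interval arithmetic. This generic sketch is not the paper's parabolic-PDE bounding scheme; it only illustrates why relaxation-based bounds can be much tighter than naive interval bounds:

```python
class Interval:
    """Closed interval [lo, hi] with basic arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# f(x) = x * (1 - x) on x in [0, 1]: the true range is [0, 0.25],
# but interval arithmetic treats the two occurrences of x independently
x = Interval(0.0, 1.0)
bound = x * (Interval(1.0, 1.0) - x)
print(bound)  # [0.0, 1.0] -- four times wider than the true range
```

The dependency problem shown here compounds along a dynamical-system trajectory, which is the wrapping effect the abstract refers to.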
Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils
Boyes, Richard G.; Gunter, Jeff L.; Frost, Chris; Janke, Andrew L.; Yeatman, Thomas; Hill, Derek L.G.; Bernstein, Matt A.; Thompson, Paul M.; Weiner, Michael W.; Schuff, Norbert; Alexander, Gene E.; Killiany, Ronald J.; DeCarli, Charles; Jack, Clifford R.; Fox, Nick C.
2008-01-01
Measures of structural brain change based on longitudinal MR imaging are increasingly important but can be degraded by intensity non-uniformity. This non-uniformity can be more pronounced at higher field strengths, or when using multichannel receiver coils. We assessed the ability of the non-parametric non-uniform intensity normalization (N3) technique to correct non-uniformity in 72 volumetric brain MR scans from the preparatory phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Normal elderly subjects (n = 18) were scanned on different 3-T scanners with a multichannel phased array receiver coil at baseline, using magnetization prepared rapid gradient echo (MP-RAGE) and spoiled gradient echo (SPGR) pulse sequences, and again 2 weeks later. When applying N3, we used five brain masks of varying accuracy and four spline smoothing distances (d = 50, 100, 150 and 200 mm) to ascertain which combination of parameters optimally reduces the non-uniformity. We used the normalized white matter intensity variance (standard deviation/mean) to ascertain quantitatively the correction for a single scan; we used the variance of the normalized difference image to assess quantitatively the consistency of the correction over time from registered scan pairs. Our results showed statistically significant (p < 0.01) improvement in uniformity for individual scans and reduction in the normalized difference image variance when using masks that identified distinct brain tissue classes, and when using smaller spline smoothing distances (e.g., 50-100 mm) for both MP-RAGE and SPGR pulse sequences. These optimized settings may assist future large-scale studies where 3-T scanners and phased array receiver coils are used, such as ADNI, so that intensity non-uniformity does not influence the power of MR imaging to detect disease progression and the factors that influence it. PMID:18063391
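The single-scan quality metric used above, the normalized white-matter intensity variance, is simply the coefficient of variation of intensities inside a white-matter mask. A minimal sketch with synthetic intensities (not ADNI data), showing that removing a smooth bias field lowers the metric:

```python
import numpy as np

def normalized_variance(intensities):
    """Coefficient of variation (standard deviation / mean) of
    white-matter voxel intensities; lower means more uniform."""
    x = np.asarray(intensities, dtype=float)
    return x.std() / x.mean()

rng = np.random.default_rng(0)
true_wm = 1000.0
bias = np.linspace(0.8, 1.2, 500)              # simulated smooth non-uniformity
uncorrected = true_wm * bias + rng.normal(0, 5, 500)
corrected = uncorrected / bias                 # ideal correction field applied

print(normalized_variance(uncorrected) > normalized_variance(corrected))  # True
```

In the study, the same idea is applied within tissue masks of varying accuracy, and the analogous variance of a normalized difference image quantifies longitudinal consistency.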
An instrumented spatial linkage for measuring knee joint kinematics.
Rosvold, Joshua M; Atarod, Mohammad; Frank, Cyril B; Shrive, Nigel G
2016-01-01
In this study, the design and development of a highly accurate instrumented spatial linkage (ISL) for kinematic analysis of the ovine stifle joint is described. The ovine knee is a promising biomechanical model of the human knee joint. The ISL consists of six digital rotational encoders providing six degrees of freedom (6-DOF) to its motion. The ISL makes use of the complete and parametrically continuous (CPC) kinematic modeling method to describe the kinematic relationship between encoder readings and the relative positions and orientation of its two ends. The CPC method is useful when calibrating the ISL because a small change in parameters corresponds to a small change in calculated positions and orientations, and thus a smaller optimization error, compared to other kinematic models. The ISL is attached rigidly to the femur and the tibia for motion capture, and the CPC kinematic model is then employed to transform the angle sensor readings into relative motion of the two ends of the linkage and, thereby, the stifle joint motion. The positional accuracy of the ISL after calibration and optimization was 0.3 ± 0.2 mm (mean ± standard deviation). The ISL was also evaluated dynamically to ensure that accurate results were maintained, achieving an accuracy of 0.1 mm. Compared to traditional motion capture methods, this system provides increased accuracy, reduced processing time, and ease of use. Future work will apply the ISL to ovine gait and the determination of in vivo joint motions and tissue loads. Accurate measurement of knee joint kinematics is essential to understanding injury mechanisms and developing potential preventive or treatment strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
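The core of such a linkage is a forward-kinematics map from the six encoder angles to the pose of the distal end. The sketch below uses standard Denavit-Hartenberg transforms rather than the CPC parametrization the authors adopt (CPC was chosen precisely because its parameters behave better under calibration); the link geometry is invented for illustration.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one revolute joint (Denavit-Hartenberg)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(angles, links):
    """Chain the six joint transforms; `links` holds (d, a, alpha) per joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(angles, links):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# hypothetical 6-DOF linkage geometry (metres / radians), not the real ISL
links = [(0.05, 0.10, np.pi / 2)] * 3 + [(0.03, 0.08, -np.pi / 2)] * 3
pose = forward_kinematics(np.zeros(6), links)
print(pose[:3, 3])  # distal-end position at zero encoder angles
```

Calibration then amounts to adjusting the link parameters so that poses computed this way match poses measured by an external reference.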
Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G
2018-01-17
Metal-organic frameworks (MOFs) are materials with applications in catalysis, gas separations, and storage. Quantum mechanical (QM) calculations can provide valuable guidance to understand and predict their properties. In order to make the calculations faster, rather than modeling these materials as periodic (infinite) systems, it is useful to construct finite models (called cluster models) and use subsystem methods such as fragment methods or combined quantum mechanical and molecular mechanical (QM/MM) methods. Here we employ a QM/MM methodology to study one particular MOF that has been of widespread interest because of its wide pores and good solvent and thermal stability, namely NU-1000, which contains hexanuclear zirconium nodes and 1,3,6,8-tetrakis(p-benzoic acid)pyrene (TBAPy⁴⁻) linkers. A modified version of the Bristow-Tiana-Walsh transferable force field has been developed to allow QM/MM calculations on NU-1000; we call the new parametrization the NU1T force field. We consider isomeric structures corresponding to various proton topologies of the [Zr₆(μ₃-O)₈O₈H₁₆]⁸⁺ node of NU-1000, and we compute their relative energies using a QM/MM scheme designed for the present kind of problem. We compared the results to full quantum mechanical (QM) energy calculations and found that the QM/MM models can reproduce the full QM relative energetics (which span a range of 334 kJ mol⁻¹) with a mean unsigned deviation (MUD) of only 2 kJ mol⁻¹. Furthermore, we found that the structures optimized by QM/MM are nearly identical to their full QM optimized counterparts.
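The mean unsigned deviation used to compare QM/MM against full-QM relative energies is straightforward to compute. A sketch with made-up numbers (the paper's isomer energies are not reproduced here):

```python
def relative_energies(energies):
    """Shift so the most stable structure defines the zero of energy."""
    e0 = min(energies)
    return [e - e0 for e in energies]

def mean_unsigned_deviation(a, b):
    """Average absolute difference between two matched energy lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# hypothetical isomer energies in kJ/mol, for illustration only
full_qm = relative_energies([0.0, 41.0, 118.0, 334.0])
qm_mm = relative_energies([0.0, 43.5, 116.0, 331.0])

print(mean_unsigned_deviation(full_qm, qm_mm))  # 1.875
```

Anchoring both sets to their own minimum ensures the comparison measures relative energetics, as in the paper, rather than absolute offsets between the two methods.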
Clinical knowledge-based inverse treatment planning
NASA Astrophysics Data System (ADS)
Yang, Yong; Xing, Lei
2004-11-01
Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure-specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent these two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system, so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also on the dose-volume status of the involved organs. The conventional importance factor of an organ was written as a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; and (ii) a dose-dependent factor that quantifies the level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori and, in most circumstances, does not need adjustment, whereas the second component, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-and-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases.
The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.
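The two-component importance weighting described above can be sketched as a voxel-level objective in which each organ's effective weight is its fixed generic importance multiplied by a dose-dependent satisfaction factor. This is a schematic reconstruction, not the authors' exact formalism; the organ tolerances, weights, and dose values are invented.

```python
import numpy as np

def plan_cost(dose, target_mask, d_presc, organs):
    """Weighted plan objective: target dose deviation plus organ penalties.

    organs: list of (mask, generic_importance, tolerance_dose).
    The effective organ weight is the generic importance times a
    dose-dependent factor that grows once the tolerance is exceeded.
    """
    cost = np.mean((dose[target_mask] - d_presc) ** 2)
    for mask, importance, tol in organs:
        excess = np.clip(dose[mask] - tol, 0.0, None)
        dose_factor = 1.0 + np.mean(excess) / tol   # rises with violation
        cost += importance * dose_factor * np.mean(excess ** 2)
    return cost

# toy five-voxel "patient": three target voxels, two organ-at-risk voxels
dose = np.array([70.0, 71.0, 69.0, 30.0, 45.0])     # Gy
target = np.array([True, True, True, False, False])
oar = np.array([False, False, False, True, True])
print(plan_cost(dose, target, 70.0, [(oar, 5.0, 40.0)]))
```

Because the organ factor depends on the current dose, the trade-off weight adapts automatically during optimization, which is the mechanism the paper uses to avoid hand-tuning importance factors.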
Parametric weight evaluation of joined wings by structural optimization
NASA Technical Reports Server (NTRS)
Miura, Hirokazu; Shyu, Albert T.; Wolkovitch, Julian
1988-01-01
Joined-wing aircraft employ tandem wings having positive and negative sweep and dihedral, arranged to form diamond shapes in both plan and front views. An optimization method was applied to study the effects of joined-wing geometry parameters on structural weight. The lightest wings were obtained by increasing dihedral and taper ratio, decreasing sweep and span, increasing fraction of airfoil chord occupied by structural box, and locating the joint inboard of the front wing tip.
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are making such an ambitious project, of including a state-of-the-art flow analysis code in an optimization loop, feasible. Among those technologies, there are three important issues that this paper addresses: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
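Automated differentiation, one of the three enabling technologies named above, can be illustrated in a few lines with forward-mode dual numbers. The paper's tools work by source transformation of the solver code; this operator-overloading sketch only shows the underlying idea that derivatives propagate exactly through arithmetic, with no finite-difference error:

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0; b carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._coerce(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._coerce(other)
        # product rule happens automatically on the derivative slot
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    # any composition of + and * is differentiated exactly
    return x * x * x + 2 * x + 1

x = Dual(2.0, 1.0)          # seed derivative dx/dx = 1
y = f(x)
print(y.val, y.der)         # 13.0 14.0  (f(2) = 13, f'(2) = 3*4 + 2 = 14)
```

Source-transformation tools achieve the same propagation by emitting derivative statements alongside each original statement, which scales to full CFD solvers.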
SU-E-T-436: Fluence-Based Trajectory Optimization for Non-Coplanar VMAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smyth, G; Bamber, JC; Bedford, JL
2015-06-15
Purpose: To investigate a fluence-based trajectory optimization technique for non-coplanar VMAT for brain cancer. Methods: Single-arc non-coplanar VMAT trajectories were determined using a heuristic technique for five patients. Organ at risk (OAR) volume intersected during raytracing was minimized for two cases: absolute volume and the sum of relative volumes weighted by OAR importance. These trajectories and coplanar VMAT formed starting points for the fluence-based optimization method. Iterative least squares optimization was performed on control points 24° apart in gantry rotation. Optimization minimized the root-mean-square (RMS) deviation of PTV dose from the prescription (relative importance 100), maximum dose to the brainstem (10), optic chiasm (5), globes (5) and optic nerves (5), plus mean dose to the lenses (5), hippocampi (3), temporal lobes (2), cochleae (1) and brain excluding other regions of interest (1). Control point couch rotations were varied in steps of up to 10° and accepted if the cost function improved. Final treatment plans were optimized with the same objectives in an in-house planning system and evaluated using a composite metric - the sum of optimization metrics weighted by importance. Results: The composite metric decreased with fluence-based optimization in 14 of the 15 plans. In the remaining case its overall value, and the PTV and OAR components, were unchanged but the balance of OAR sparing differed. PTV RMS deviation was improved in 13 cases and unchanged in two. The OAR component was reduced in 13 plans. In one case the OAR component increased but the composite metric decreased - a 4 Gy increase in OAR metrics was balanced by a reduction in PTV RMS deviation from 2.8% to 2.6%. Conclusion: Fluence-based trajectory optimization improved plan quality as defined by the composite metric. While dose differences were case specific, fluence-based optimization improved both PTV and OAR dosimetry in 80% of cases.
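The composite plan-quality metric, the sum of optimization metrics weighted by the importances listed above, is a plain weighted sum. A sketch with hypothetical per-structure metric values (the weights follow the abstract; the metric values are invented):

```python
IMPORTANCE = {
    "ptv_rms": 100, "brainstem": 10, "chiasm": 5, "globes": 5,
    "optic_nerves": 5, "lenses": 5, "hippocampi": 3,
    "temporal_lobes": 2, "cochleae": 1, "brain": 1,
}

def composite_metric(metrics):
    """Sum of each optimization metric weighted by its relative importance."""
    return sum(IMPORTANCE[name] * value for name, value in metrics.items())

# hypothetical per-structure metric values for two candidate plans
plan_a = {"ptv_rms": 0.028, "brainstem": 1.2, "chiasm": 0.4, "globes": 0.3,
          "optic_nerves": 0.5, "lenses": 0.1, "hippocampi": 0.6,
          "temporal_lobes": 0.9, "cochleae": 0.2, "brain": 0.8}
plan_b = dict(plan_a, ptv_rms=0.026)   # small PTV improvement, OARs unchanged
print(composite_metric(plan_b) < composite_metric(plan_a))  # True
```

Because the PTV term carries weight 100, a small PTV improvement can outweigh a moderate increase in the OAR terms, which is exactly the trade observed in the one plan where the OAR component rose.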
1980-08-01
If the mean of the dependent variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σᵢ (Yᵢ − Ȳ)², i = 1, …, n (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2ᵖ possible models is labeled "best" … discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of …
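The selection quantities in the passage can be computed directly. A minimal sketch with toy data (unrelated to the report's data set):

```python
import numpy as np

def regression_sums(y, y_hat):
    """SSTO, SSE, SSR and R^2 as in equations (2.6)-(2.8)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ssto = np.sum((y - y.mean()) ** 2)    # total sum of squares     (2.6)
    sse = np.sum((y - y_hat) ** 2)        # error sum of squares
    ssr = ssto - sse                      # regression sum of squares (2.7)
    r2 = 1.0 - sse / ssto                 # coefficient of determination (2.8)
    return ssto, sse, ssr, r2

y = [2.0, 4.0, 5.0, 7.0]
_, _, _, r2_perfect = regression_sums(y, y)         # perfect fit
_, _, _, r2_mean = regression_sums(y, [4.5] * 4)    # mean-only model
print(r2_perfect, r2_mean)  # 1.0 0.0
```

The two extremes bracket the criterion: R² = 1 when the fitted values reproduce the data exactly, and R² = 0 when the model does no better than predicting the mean.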
Lorentz Symmetry Violations from Matter-Gravity Couplings with Lunar Laser Ranging
NASA Astrophysics Data System (ADS)
Bourgoin, A.; Le Poncin-Lafitte, C.; Hees, A.; Bouquillon, S.; Francou, G.; Angonin, M.-C.
2017-11-01
The standard-model extension (SME) is an effective field theory framework aiming at parametrizing any violation to the Lorentz symmetry (LS) in all sectors of physics. In this Letter, we report the first direct experimental measurement of SME coefficients performed simultaneously within two sectors of the SME framework using lunar laser ranging observations. We consider the pure gravitational sector and the classical point-mass limit in the matter sector of the minimal SME. We report no deviation from general relativity and put new realistic stringent constraints on LS violations improving up to 3 orders of magnitude previous estimations.
Cao, Zheng; Nampalliwar, Sourabh; Bambi, Cosimo; Dauser, Thomas; García, Javier A
2018-02-02
Recently, we have extended the x-ray reflection model relxill to test the spacetime metric in the strong gravitational field of astrophysical black holes. In the present Letter, we employ this extended model to analyze XMM-Newton, NuSTAR, and Swift data of the supermassive black hole in 1H0707-495 and test deviations from a Kerr metric parametrized by the Johannsen deformation parameter α_{13}. Our results are consistent with the hypothesis that the spacetime metric around the black hole in 1H0707-495 is described by the Kerr solution.
NASA Technical Reports Server (NTRS)
Stokes, R. L.
1979-01-01
Electrical characterization tests were performed on two different manufactured types of integrated circuits. The devices were subjected to functional and AC and DC parametric tests at ambient temperatures of -55 C, -20 C, 25 C, 85 C, and 125 C. The data were analyzed and tabulated to show the effect of operating conditions on performance and to indicate parameter deviations among devices in each group. Accuracy was given precedence over test time efficiency where practical, and tests were designed to measure worst case performance.
The neural basis of financial risk taking.
Kuhnen, Camelia M; Knutson, Brian
2005-09-01
Investors systematically deviate from rationality when making financial decisions, yet the mechanisms responsible for these deviations have not been identified. Using event-related fMRI, we examined whether anticipatory neural activity would predict optimal and suboptimal choices in a financial decision-making task. We characterized two types of deviations from the optimal investment strategy of a rational risk-neutral agent as risk-seeking mistakes and risk-aversion mistakes. Nucleus accumbens activation preceded risky choices as well as risk-seeking mistakes, while anterior insula activation preceded riskless choices as well as risk-aversion mistakes. These findings suggest that distinct neural circuits linked to anticipatory affect promote different types of financial choices and indicate that excessive activation of these circuits may lead to investing mistakes. Thus, consideration of anticipatory neural mechanisms may add predictive power to the rational actor model of economic decision making.
The optimal input optical pulse shape for the self-phase modulation based chirp generator
NASA Astrophysics Data System (ADS)
Zachinyaev, Yuriy; Rumyantsev, Konstantin
2018-04-01
This work aims to obtain the optimal shape of the input optical pulse for proper functioning of the self-phase-modulation-based chirp generator, allowing high values of chirp frequency deviation to be achieved. During the research, the structure of the device, which exploits the self-phase modulation effect, was analyzed. The influence of the shape of the input optical pulse from the transmitting optical module on the chirp frequency deviation was studied. The relationship between the frequency deviation of the generated chirp and the frequency linearity was also estimated for three implementations of the pulse shape. The results of this research contribute to the theory of radio processors based on fiber-optic structures and can be used in radar, secure communications, geolocation and tomography.
NASA Astrophysics Data System (ADS)
Gautam, Girish Dutt; Pandey, Arun Kumar
2018-03-01
Kevlar is the most popular aramid fiber and is commonly used in technologically advanced industries for various applications. However, precise cutting of Kevlar composite laminates is a difficult task. Conventional cutting methods suffer from defects such as delamination, burr formation and fiber pullout, with poor surface quality, and the mechanical performance of the cut parts is greatly affected by these defects. Laser beam machining may be an alternative to conventional cutting processes due to its non-contact nature, low specific energy requirement and higher production rate. However, this process also faces some problems, which may be minimized by operating the machine at optimum parameter levels. This research paper examines the effective utilization of an Nd:YAG laser cutting system on difficult-to-cut Kevlar-29 composite laminates. The objective of the proposed work is to find the optimum process parameter settings for obtaining minimum kerf deviations on both sides. The experiments have been conducted on Kevlar-29 composite laminates with a thickness of 1.25 mm using a Box-Behnken design with two center points. The experimental data have been used for optimization by the proposed methodology. For the optimization, a teaching-learning-based optimization algorithm has been employed to obtain the minimum kerf deviation at the bottom and top sides. A self-coded MATLAB program has been developed using the proposed methodology, and this program has been used for the optimization. Finally, confirmation tests have been performed to compare the experimental and optimum results obtained by the proposed methodology. The comparison shows that the machining performance in the laser beam cutting process has been remarkably improved through the proposed approach.
Finally, the influence of different laser cutting parameters such as lamp current, pulse frequency, pulse width, compressed air pressure and cutting speed on the top and bottom kerf deviations during Nd:YAG laser cutting of Kevlar-29 laminates is discussed.
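The teaching-learning-based optimization algorithm referenced above has two phases per iteration: a teacher phase pulling the population toward the current best solution, and a learner phase in which members learn pairwise. A generic minimization sketch (not the authors' MATLAB code; the kerf-deviation objective is replaced by a toy sphere function):

```python
import random

def tlbo(f, bounds, pop_size=20, iters=60, seed=1):
    """Minimize f over box `bounds` with teaching-learning-based optimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    for _ in range(iters):
        # teacher phase: move each learner toward the best, away from the mean
        best = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        tf = rng.choice([1, 2])                     # teaching factor
        for i, x in enumerate(pop):
            cand = clip([x[d] + rng.random() * (best[d] - tf * mean[d])
                         for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
        # learner phase: move toward a better classmate, away from a worse one
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if f(pop[j]) < f(x) else -1.0
            cand = clip([x[d] + sign * rng.random() * (pop[j][d] - x[d])
                         for d in range(dim)])
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)      # stand-in objective
best = tlbo(sphere, [(-5.0, 5.0)] * 2)
print(sphere(best))
```

In the paper's setting, the objective would instead evaluate the regression model of kerf deviation fitted from the Box-Behnken experiments.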
On the importance of image formation optics in the design of infrared spectroscopic imaging systems
Mayerich, David; van Dijk, Thomas; Walsh, Michael; Schulmerich, Matthew; Carney, P. Scott
2014-01-01
Infrared spectroscopic imaging provides micron-scale spatial resolution with molecular contrast. While recent work demonstrates that sample morphology affects the recorded spectrum, considerably less attention has been focused on the effects of the optics, including the condenser and objective. This analysis is extremely important, since it makes it possible to understand effects on recorded data and provides insight for reducing optical effects through rigorous microscope design. Here, we present a theoretical description and experimental results that demonstrate the effects of commonly employed Cassegrain optics on recorded spectra. We first combine an explicit model of image formation with a method for quantifying and visualizing the deviations in recorded spectra as a function of microscope optics. We then verify these simulations with measurements obtained from spatially heterogeneous samples. The deviation of the computed spectrum from the ideal case is quantified via a map which we call a deviation map. The deviation map is obtained as a function of optical elements by systematic simulations. Examination of deviation maps demonstrates that the optimal optical configuration for minimal deviation is contrary to prevailing practice, in which throughput is maximized for an instrument without a sample. This report should be helpful for understanding recorded spectra as a function of the optics, the analytical limits of recorded data determined by the optical design, and potential routes for optimization of imaging systems. PMID:24936526
On the importance of image formation optics in the design of infrared spectroscopic imaging systems.
Mayerich, David; van Dijk, Thomas; Walsh, Michael J; Schulmerich, Matthew V; Carney, P Scott; Bhargava, Rohit
2014-08-21
Infrared spectroscopic imaging provides micron-scale spatial resolution with molecular contrast. While recent work demonstrates that sample morphology affects the recorded spectrum, considerably less attention has been focused on the effects of the optics, including the condenser and objective. This analysis is extremely important, since it makes it possible to understand effects on recorded data and provides insight for reducing optical effects through rigorous microscope design. Here, we present a theoretical description and experimental results that demonstrate the effects of commonly employed Cassegrain optics on recorded spectra. We first combine an explicit model of image formation with a method for quantifying and visualizing the deviations in recorded spectra as a function of microscope optics. We then verify these simulations with measurements obtained from spatially heterogeneous samples. The deviation of the computed spectrum from the ideal case is quantified via a map which we call a deviation map. The deviation map is obtained as a function of optical elements by systematic simulations. Examination of deviation maps demonstrates that the optimal optical configuration for minimal deviation is contrary to prevailing practice, in which throughput is maximized for an instrument without a sample. This report should be helpful for understanding recorded spectra as a function of the optics, the analytical limits of recorded data determined by the optical design, and potential routes for optimization of imaging systems.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates dictionary sparse coding (DSC) into a total-variation minimization algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data and real patient data have been conducted, and the results are very promising.
A physiology-based parametric imaging method for FDG-PET data
NASA Astrophysics Data System (ADS)
Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele
2017-12-01
Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing the renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
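The pixel-wise fitting step, a regularized Gauss-Newton iteration recovering kinetic parameters from a concentration-time curve, can be sketched generically. The model below is a single exponential rather than the two- or three-compartment models used in the paper, the data are synthetic, and a simple backtracking step is added to keep the iteration stable far from the optimum:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, reg=1e-8, iters=50):
    """Regularized Gauss-Newton: x <- x - (J'J + reg*I)^(-1) J'r."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + reg * np.eye(len(x)), J.T @ r)
        scale = 1.0
        # backtracking: shrink the step until the residual norm decreases
        while scale > 1e-6 and np.linalg.norm(
                residual(x - scale * step)) >= np.linalg.norm(r):
            scale *= 0.5
        x = x - scale * step
    return x

t = np.linspace(0.0, 10.0, 40)
k_true = np.array([3.0, 0.5])                 # amplitude, rate (assumed)
data = k_true[0] * np.exp(-k_true[1] * t)     # noiseless synthetic curve

model = lambda k: k[0] * np.exp(-k[1] * t)
residual = lambda k: model(k) - data
def jacobian(k):
    e = np.exp(-k[1] * t)
    return np.column_stack([e, -k[0] * t * e])

k_fit = gauss_newton(residual, jacobian, x0=[1.0, 1.0])
print(np.round(k_fit, 4))
```

In the paper, this fit is repeated for every pixel of the segmented image, producing one parametric map per kinetic coefficient.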
Tutsoy, Onder; Barkana, Duygun Erol; Tugal, Harun
2018-05-01
In this paper, an adaptive controller is developed for discrete-time linear systems that takes into account parametric uncertainty, internal and external non-parametric random uncertainties, and time-varying control signal delay. Additionally, the proposed adaptive control is designed in such a way that it is entirely model-free. Even though these properties are studied separately in the literature, they have not been considered together in the adaptive control literature. The Q-function is used to estimate the long-term performance of the proposed adaptive controller. The control policy is generated based on the long-term predicted value, and this policy searches for an optimal stabilizing control signal for uncertain and unstable systems. The derived control law does not require an initial stabilizing control assumption, as the ones in the recent literature do. Learning error, control signal convergence, the minimized Q-function, and instantaneous reward are analyzed to demonstrate the stability and effectiveness of the proposed adaptive controller in a simulation environment. Finally, key insights on the convergence of the learning and control signals are provided. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance
NASA Technical Reports Server (NTRS)
Woolley, Ryan C.
2014-01-01
The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
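The kind of closed-form staging estimate the abstract describes can be sketched with the rocket equation inverted for stage mass. This is a generic sketch, not the paper's model: losses (gravity, steering, drag) are folded into the delta-v budget, and all numbers are illustrative assumptions rather than mission values.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def stage_gross_mass(payload, dv, isp, eps):
    """Gross mass (kg) of one stage delivering `dv` (m/s) with `payload` on top.

    eps = structural mass / (structural + propellant mass).
    Raises if the stage cannot physically achieve the requested dv.
    """
    r = math.exp(dv / (isp * G0))          # required mass ratio m0/mf
    k = eps * (r - 1.0) / (1.0 - eps)
    if k >= 1.0:
        raise ValueError("stage infeasible: dv too high for this eps/isp")
    m_struct = k * payload / (1.0 - k)
    m_prop = (r - 1.0) * (payload + m_struct)
    return payload + m_struct + m_prop

# two-stage vehicle: size the upper stage first, then use it as the
# lower stage's payload (illustrative numbers, not MAV design values)
m2 = stage_gross_mass(payload=14.0, dv=2000.0, isp=290.0, eps=0.15)
m0 = stage_gross_mass(payload=m2, dv=2200.0, isp=290.0, eps=0.12)
print(round(m0, 1))  # estimated lift-off mass in kg
```

Minimizing lift-off mass then amounts to searching over the delta-v split between the two stages, the balancing act between staging and path-to-orbit noted in the abstract.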
Optical realization of optimal symmetric real state quantum cloning machine
NASA Astrophysics Data System (ADS)
Hu, Gui-Yu; Zhang, Wen-Hai; Ye, Liu
2010-01-01
We present an experimentally uniform linear optical scheme to implement the optimal 1→2 symmetric and optimal 1→3 symmetric economical real state quantum cloning machine of the polarization state of the single photon. This scheme requires single-photon sources and two-photon polarization entangled state as input states. It also involves linear optical elements and three-photon coincidence. Then we consider the realistic realization of the scheme by using the parametric down-conversion as photon resources. It is shown that under certain condition, the scheme is feasible by current experimental technology.
Lörincz, András; Póczos, Barnabás
2003-06-01
In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem into density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems, and (ii) separating (partitioning) the original optimization problem into subproblems may aid interpretation. Most importantly, (iii) CCA may give rise to large gains in optimization time. Numerical simulations illustrate the working of the algorithm.
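The resampling step at the heart of CCA, redrawing points according to the Boltzmann distribution of the energy (cost) landscape, can be sketched directly. The energy function and temperature below are toy assumptions, and the subsequent ICA stage is omitted:

```python
import math
import random

def boltzmann_resample(samples, energy, temperature, n, seed=0):
    """Draw n points from `samples` with probability proportional
    to exp(-E/T), concentrating the sample near low-energy regions."""
    rng = random.Random(seed)
    weights = [math.exp(-energy(x) / temperature) for x in samples]
    return rng.choices(samples, weights=weights, k=n)

energy = lambda x: (x - 2.0) ** 2            # toy cost landscape, minimum at 2
grid = [i * 0.1 for i in range(-50, 51)]     # uniform initial search space
resampled = boltzmann_resample(grid, energy, temperature=0.5, n=2000)

mean = sum(resampled) / len(resampled)
print(round(mean, 1))  # concentrates near the minimizer x = 2
```

Density estimation on `resampled` (in CCA, via ICA on a multidimensional version) then reveals where the optimizer should focus.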
Optimal second order sliding mode control for nonlinear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-07-01
In this paper, a chattering-free optimal second-order sliding mode control (OSOSMC) method is proposed to stabilize nonlinear systems affected by uncertainties. The nonlinear optimal control strategy is based on the control Lyapunov function (CLF). To ensure robustness of the optimal controller in the presence of parametric uncertainty and external disturbances, a sliding mode control scheme is realized by combining an integral and a terminal sliding surface. The resulting second-order sliding mode can effectively reduce chattering in the control input. Simulation results confirm the superiority of the proposed optimal second-order sliding mode control over some existing sliding mode controllers in controlling nonlinear systems affected by uncertainty. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
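The approximate first-order statistical moment method used above propagates input uncertainty through the code via sensitivity derivatives: the output mean is taken as f(mu) and the output variance as the sum of squared first derivatives times the input variances. A minimal sketch, with a toy function standing in for the CFD output (the function and numbers are invented for illustration):

```python
import math

def first_order_moments(f, mu, sigma, h=1e-6):
    """First-order moment method for independent, normally distributed
    inputs x_i ~ N(mu_i, sigma_i^2): mean(f) ~= f(mu) and
    var(f) ~= sum_i (df/dx_i)^2 * sigma_i^2.
    Derivatives are taken by forward finite differences here; in the
    paper they come from the CFD code's sensitivity derivatives."""
    f0 = f(mu)
    var = 0.0
    for i in range(len(mu)):
        xp = list(mu)
        xp[i] += h
        dfdx = (f(xp) - f0) / h
        var += (dfdx * sigma[i]) ** 2
    return f0, var

# Toy "CFD output" depending on two flow parameters
f = lambda x: x[0] ** 2 + 3.0 * x[1]
mean, var = first_order_moments(f, mu=[0.8, 2.0], sigma=[0.01, 0.1])
print(mean, math.sqrt(var))
```

The resulting expected value and variance are exactly the quantities used to build the probabilistic objective and constraints in the robust optimization.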
Phase Transition in Protocols Minimizing Work Fluctuations
NASA Astrophysics Data System (ADS)
Solon, Alexandre P.; Horowitz, Jordan M.
2018-05-01
For two canonical examples of driven mesoscopic systems—a harmonically trapped Brownian particle and a quantum dot—we numerically determine the finite-time protocols that optimize the compromise between the standard deviation and the mean of the dissipated work. In the case of the oscillator, we observe a collection of protocols that smoothly trade off between average work and its fluctuations. However, for the quantum dot, we find that as we shift the weight of our optimization objective from average work to work standard deviation, there is an analog of a first-order phase transition in protocol space: two distinct protocols exchange global optimality with mixed protocols akin to phase coexistence. As a result, the two types of protocols possess qualitatively different properties and remain distinct even in the infinite duration limit: optimal-work-fluctuation protocols never coalesce with the minimal-work protocols, which therefore never become quasistatic.
Optimization of a middle atmosphere diagnostic scheme
NASA Astrophysics Data System (ADS)
Akmaev, Rashid A.
1997-06-01
A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above mentioned rms temperature deviation provides an objective measure of the `distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.
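The rms temperature deviation used as the `distance' between the two climatologies is a weighted root-mean-square difference over the model domain. A minimal sketch (the field values and weights are invented; the real scheme averages over a global grid in the 15-110 km layer):

```python
import math

def global_rms_deviation(model_T, ref_T, weights=None):
    """Weighted rms temperature difference between a simulated field and
    a reference climatology: the scalar objective minimized by the
    optimization of the diagnostic scheme (values like 3.2 K or 2.7 K
    in the abstract are of this type)."""
    n = len(model_T)
    if weights is None:
        weights = [1.0] * n
    wsum = sum(weights)
    sq = sum(w * (m - r) ** 2 for w, m, r in zip(weights, model_T, ref_T))
    return math.sqrt(sq / wsum)

# Toy temperature profiles (K) at four grid points
model = [210.0, 220.0, 250.0, 270.0]
ref   = [212.0, 218.0, 253.0, 266.0]
rms = global_rms_deviation(model, ref)
print(round(rms, 2))
```

A standard optimizer can then adjust scheme parameters (such as the nudging strength) to drive this single scalar down.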
Algorithm for parametric community detection in networks.
Bettinelli, Andrea; Hansen, Pierre; Liberti, Leo
2012-07-01
Modularity maximization is extensively used to detect communities in complex networks. It has been shown, however, that this method suffers from a resolution limit: Small communities may be undetectable in the presence of larger ones even if they are very dense. To alleviate this defect, various modifications of the modularity function have been proposed as well as multiresolution methods. In this paper we systematically study a simple model (proposed by Pons and Latapy [Theor. Comput. Sci. 412, 892 (2011)] and similar to the parametric model of Reichardt and Bornholdt [Phys. Rev. E 74, 016110 (2006)]) with a single parameter α that balances the fraction of within community edges and the expected fraction of edges according to the configuration model. An exact algorithm is proposed to find optimal solutions for all values of α as well as the corresponding successive intervals of α values for which they are optimal. This algorithm relies upon a routine for exact modularity maximization and is limited to moderate size instances. An agglomerative hierarchical heuristic is therefore proposed to address parametric modularity detection in large networks. At each iteration the smallest value of α for which it is worthwhile to merge two communities of the current partition is found. Then merging is performed and the data are updated accordingly. An implementation is proposed with the same time and space complexity as the well-known Clauset-Newman-Moore (CNM) heuristic [Phys. Rev. E 70, 066111 (2004)]. 
Experimental results on artificial and real world problems show that (i) communities are detected by both exact and heuristic methods for all values of the parameter α; (ii) the dendrogram summarizing the results of the heuristic method provides a useful tool for substantive analysis, as illustrated particularly on a Les Misérables data set; (iii) the difference between the parametric modularity values given by the exact method and those given by the heuristic is moderate; (iv) the heuristic version of the proposed parametric method, viewed as a modularity maximization tool, gives better results than the CNM heuristic for large instances.
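A single-parameter modularity of the kind studied here balances the fraction of within-community edges against the configuration-model expectation. The sketch below uses one plausible weighting convention, Q_alpha = sum_c [alpha * e_c - (1 - alpha) * a_c^2]; the exact normalizations in Pons-Latapy and Reichardt-Bornholdt differ slightly, so treat this as illustrative:

```python
def parametric_modularity(edges, communities, alpha):
    """Resolution-parametric modularity: e_c is the fraction of edges
    inside community c, a_c the fraction of edge endpoints in c (the
    configuration-model term). alpha = 0.5 recovers half the standard
    Newman modularity under this convention."""
    label = {}
    for c, nodes in enumerate(communities):
        for v in nodes:
            label[v] = c
    m = len(edges)
    e = [0.0] * len(communities)   # within-community edge fractions
    a = [0.0] * len(communities)   # endpoint fractions
    for u, v in edges:
        if label[u] == label[v]:
            e[label[u]] += 1.0 / m
        a[label[u]] += 0.5 / m
        a[label[v]] += 0.5 / m
    return sum(alpha * ec - (1.0 - alpha) * ac ** 2
               for ec, ac in zip(e, a))

# Two triangles joined by a single bridge edge
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
comms = [{0, 1, 2}, {3, 4, 5}]
q = parametric_modularity(edges, comms, 0.5)
print(round(q, 4))  # prints 0.1786
```

Sweeping alpha and recording where the optimal partition changes yields exactly the successive intervals of alpha values the exact algorithm computes.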
Optimization of composite tiltrotor wings with extensions and winglets
NASA Astrophysics Data System (ADS)
Kambampati, Sandilya
Tiltrotors suffer from an aeroelastic instability during forward flight called whirl flutter. Whirl flutter is caused by the whirling motion of the rotor, characterized by highly coupled wing-rotor-pylon modes of vibration. Whirl flutter is a major obstacle for tiltrotors in achieving high-speed flight. The conventional approach to assuring adequate whirl flutter stability margins for tiltrotors is to design the wings with high torsional stiffness, typically using 23% thickness-to-chord ratio wings. However, the large aerodynamic drag associated with these high thickness-to-chord ratio wings decreases aerodynamic efficiency and increases fuel consumption. Wingtip devices such as wing extensions and winglets have the potential to improve both the whirl flutter characteristics and the aerodynamic efficiency of a tiltrotor. However, wingtip devices can add more weight to the aircraft. In this study, multi-objective parametric and optimization methodologies for tiltrotor aircraft with wing extensions and winglets are investigated. The objectives are to maximize aircraft aerodynamic efficiency while minimizing the weight penalty due to extensions and winglets, subject to whirl flutter constraints. An aeroelastic model that predicts the whirl flutter speed and a wing structural model that computes the strength and weight of a composite wing are developed. An existing aerodynamic model (that predicts the aerodynamic efficiency) is merged with the developed structural and aeroelastic models for the purpose of conducting parametric and optimization studies. The variables of interest are the wing thickness and structural properties, and the extension and winglet planform variables. The Bell XV-15 tiltrotor aircraft is chosen as the parent aircraft for this study. Parametric studies reveal that a wing extension of span 25% of the inboard wing increases the whirl flutter speed by 10% and also increases the aircraft aerodynamic efficiency by 8%.
Structurally tapering the wing of a tiltrotor equipped with an extension and a winglet can increase the whirl flutter speed by 15% while reducing the wing weight by 7.5%. The baseline design for the optimization is the optimized wing with no extension or winglet. The optimization studies reveal that the optimum design for a cruise speed of 250 knots has an increased aerodynamic efficiency of 7% over the baseline design for only a weight penalty of 3%, and thus a transport range 5.5% greater than that of the baseline. The optimal design for a cruise speed of 300 knots has an increased aerodynamic efficiency of 5%, a weight penalty of 2.5%, and a transport range 3.5% greater than that of the baseline.
Trajectory Optimization of Electric Aircraft Subject to Subsystem Thermal Constraints
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Chin, Jeffrey C.; Schnulo, Sydney L.; Burt, Jonathan M.; Gray, Justin S.
2017-01-01
Electric aircraft pose a unique design challenge in that they lack a simple way to reject waste heat from the power train. While conventional aircraft reject most of their excess heat in the exhaust stream, for electric aircraft this is not an option. To examine the implications of this challenge on electric aircraft design and performance, we developed a model of the electric subsystems for the NASA X-57 electric testbed aircraft. We then coupled this model with a model of simple 2D aircraft dynamics and used a Legendre-Gauss-Lobatto collocation optimal control approach to find optimal trajectories for the aircraft with and without thermal constraints. The results show that the X-57 heat rejection systems are well designed for maximum-range and maximum-efficiency flight, without the need to deviate from an optimal trajectory. Stressing the thermal constraints by reducing the cooling capacity or requiring faster flight has a minimal impact on performance, as the trajectory optimization technique is able to find flight paths which honor the thermal constraints with relatively minor deviations from the nominal optimal trajectory.
Parameter assessment for virtual Stackelberg game in aerodynamic shape optimization
NASA Astrophysics Data System (ADS)
Wang, Jing; Xie, Fangfang; Zheng, Yao; Zhang, Jifa
2018-05-01
In this paper, parametric studies of virtual Stackelberg game (VSG) are conducted to assess the impact of critical parameters on aerodynamic shape optimization, including the design cycle, the split of design variables, and role assignment. Typical numerical cases, including inverse design and drag-reduction design of an airfoil, have been carried out. The numerical results confirm the effectiveness and efficiency of VSG. Furthermore, the most significant parameters are identified; e.g., increasing the number of design cycles can improve the optimization results but also adds computational burden. These studies will maximize the productivity of the effort in aerodynamic optimization for more complicated engineering problems, such as multi-element airfoils and wing-body configurations.
Rocket ascent G-limited moment-balanced optimization program (RAGMOP)
NASA Technical Reports Server (NTRS)
Lyons, J. T.; Woltosz, W. S.; Abercrombie, G. E.; Gottlieb, R. G.
1972-01-01
This document describes the RAGMOP (Rocket Ascent G-limited Moment-Balanced Optimization Program) computer program for parametric ascent trajectory optimization. RAGMOP computes optimum polynomial-form attitude control histories, launch azimuth, engine burn time, and gross liftoff weight for space-shuttle-type vehicles using a search-accelerated, gradient projection parameter optimization technique. The trajectory model available in RAGMOP includes a rotating oblate Earth model, the option of input wind tables, discrete and/or continuous throttling for the purposes of limiting the thrust acceleration and/or the maximum dynamic pressure, limitation of the structural load indicators (the product of dynamic pressure with angle of attack and sideslip angle), and a wide selection of intermediate and terminal equality constraints.
Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peter, Josephine; Doloi, B.; Bhattacharyya, B.
The present research paper deals with artificial neural network (ANN)- and response surface methodology (RSM)-based mathematical modeling, as well as an optimization analysis of marking characteristics on alumina ceramic. The experiments were planned and carried out based on design of experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The output of the RSM optimal data is validated through experimentation and an ANN predictive model. A good agreement is observed between the results based on the ANN predictive model and actual experimental observations.
Lucchesi, David M; Peron, Roberto
2010-12-03
The pericenter shift of a binary system represents a suitable observable to test for possible deviations from the Newtonian inverse-square law in favor of new weak interactions between macroscopic objects. We analyzed 13 years of tracking data of the LAGEOS satellites with the GEODYN II software, but with no models for general relativity. From the fit of the LAGEOS II pericenter residuals we have been able to obtain a 99.8% agreement with the predictions of Einstein's theory. This result may be considered as a 99.8% measurement in the field of the Earth of the combination of the γ and β parameters of general relativity, and it may be used to constrain possible deviations from the inverse-square law in favor of new weak interactions parametrized by a Yukawa-like potential with strength α and range λ. We obtained |α| ≲ 1 × 10⁻¹¹, a huge improvement at a range of about 1 Earth radius.
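The Yukawa-like modification considered here multiplies the Newtonian potential by (1 + α exp(-r/λ)), so the fractional deviation from the inverse-square law at range r is α exp(-r/λ). A small sketch evaluating that deviation at the quoted bound (α and λ values are taken from the abstract; the formula is the standard Yukawa parametrization):

```python
import math

def yukawa_fractional_deviation(r, alpha, lam):
    """Fractional deviation of the Yukawa-modified potential
    V(r) = -(G*M*m/r) * (1 + alpha * exp(-r/lam)) from the pure
    Newtonian (inverse-square) potential."""
    return alpha * math.exp(-r / lam)

R_EARTH = 6.371e6   # m; the abstract quotes a range of about 1 Earth radius
alpha = 1e-11       # the |alpha| bound obtained from the LAGEOS II fit

dev = yukawa_fractional_deviation(R_EARTH, alpha, R_EARTH)
print(dev)  # tiny: any new interaction at this range must be this weak
```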
NASA Technical Reports Server (NTRS)
Bollman, W. E.; Chadwick, C.
1982-01-01
A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviation using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations, and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
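The Monte Carlo technique described, generating Delta v magnitude statistics for a deterministic bias vector plus a random component, can be sketched as follows. The zero-bias case is checked against the Maxwell distribution; the per-axis sigma and sample count are arbitrary illustrative choices:

```python
import math
import random

random.seed(1)

def dv_magnitude_stats(bias, sigma, n=200000):
    """Monte Carlo mean and standard deviation of |dv| for a TCM
    delta-v equal to a deterministic bias vector plus an isotropic
    Gaussian random component with per-axis standard deviation sigma."""
    mags = []
    for _ in range(n):
        dv = [b + random.gauss(0.0, sigma) for b in bias]
        mags.append(math.sqrt(sum(c * c for c in dv)))
    mean = sum(mags) / n
    var = sum((x - mean) ** 2 for x in mags) / n
    return mean, math.sqrt(var)

# Zero-bias check: |dv| is then Maxwell-distributed with
# mean = 2 * sigma * sqrt(2 / pi) ~= 1.596 * sigma
mean, std = dv_magnitude_stats(bias=[0.0, 0.0, 0.0], sigma=1.0)
print(round(mean, 2))
```

Repeating this over a grid of bias magnitudes and sigmas produces exactly the kind of parametric plots the abstract describes.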
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Astrophysics Data System (ADS)
Hendricks, R. C.; McDonald, G.
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry, and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
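One of the candidate models mentioned, the log-normal, can be moment-matched to the reported sample mean (1330 cycles) and standard deviation (520 cycles). A small sketch of that calculation (moment matching is a standard technique, not necessarily the fitting method used in the paper):

```python
import math

# Sample statistics for the 22 thermal-cycle-life specimens
mean, std = 1330.0, 520.0

# Moment-matched log-normal parameters: if X ~ LogNormal(mu, sigma^2),
# then E[X] = exp(mu + sigma^2/2) and Var[X] = (exp(sigma^2) - 1) E[X]^2
sigma2 = math.log(1.0 + (std / mean) ** 2)
mu = math.log(mean) - 0.5 * sigma2
print(round(mu, 3), round(math.sqrt(sigma2), 3))

# Sanity check: the implied log-normal reproduces the sample moments
m_check = math.exp(mu + 0.5 * sigma2)
s_check = math.sqrt((math.exp(sigma2) - 1.0) * m_check ** 2)
```

With a larger sample, a goodness-of-fit test on these fitted parameters could help delineate the failure model, as the abstract suggests.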
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Mcdonald, G.
1981-01-01
Comparative study on the performance of textural image features for active contour segmentation.
Moraru, Luminita; Moldovanu, Simona
2012-07-01
We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function for parametric active contour models. This new function is a combination of the gray-level information and a first-order statistical feature, the standard deviation. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested on synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
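The standard deviation textural feature with a 5×5 kernel amounts to a per-pixel local standard deviation map, which is near zero inside homogeneous regions and large at boundaries, exactly the behavior an active contour can lock onto. An illustrative pure-Python sketch on a synthetic image (the border handling and synthetic values are arbitrary choices):

```python
import math

def local_std(img, k=5):
    """Per-pixel standard deviation over a k x k neighborhood
    (k = 5 matches the kernel reported in the abstract). Windows are
    clipped at the image border."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))]
            m = sum(vals) / len(vals)
            out[i][j] = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return out

# Synthetic image: flat background with a bright square; the std map
# responds only near the square's boundary
img = [[100.0 if 3 <= i <= 6 and 3 <= j <= 6 else 10.0 for j in range(10)]
       for i in range(10)]
std_map = local_std(img)
print(std_map[0][0], round(std_map[4][2], 1))  # flat region vs. edge
```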
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
NASA Astrophysics Data System (ADS)
Qin, Wei; Miranowicz, Adam; Li, Peng-Bo; Lü, Xin-You; You, J. Q.; Nori, Franco
2018-03-01
We propose an experimentally feasible method for enhancing the atom-field coupling as well as the ratio between this coupling and dissipation (i.e., cooperativity) in an optical cavity. It exploits optical parametric amplification to exponentially enhance the atom-cavity interaction and, hence, the cooperativity of the system, with the squeezing-induced noise being completely eliminated. Consequently, the atom-cavity system can be driven from the weak-coupling regime to the strong-coupling regime for modest squeezing parameters, and can even achieve an effective cooperativity much larger than 100. Based on this, we further demonstrate the generation of steady-state nearly maximal quantum entanglement. The resulting entanglement infidelity (which quantifies the deviation of the actual state from a maximally entangled state) is exponentially smaller than the lower bound on the infidelities obtained in other dissipative entanglement preparations without applying squeezing. In principle, we can make the infidelity arbitrarily small. Our generic method for enhancing atom-cavity interaction and cooperativities can be implemented in a wide range of physical systems, and it can provide diverse applications for quantum information processing.
Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws
NASA Astrophysics Data System (ADS)
Barré, J.; Bernardin, C.; Chetrite, R.
2018-02-01
We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e., an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a nontrivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.
Role of a Standardized Prism Under Cover Test in the Assessment of Dissociated Vertical Deviation.
Klaehn, Lindsay D; Hatt, Sarah R; Leske, David A; Holmes, Jonathan M
2018-03-01
Dissociated vertical deviation (DVD) is commonly measured using a prism and alternate cover test (PACT), but some providers use a prism under cover test (PUCT). The aim of this study was to compare a standardized PUCT measurement with a PACT measurement, for assessing the magnitude of DVD. Thirty-six patients with a clinical diagnosis of DVD underwent measurement of the angle of deviation with the PACT, fixing with the habitually fixing eye, and with PUCT, fixing both right and left eyes. The PUCT was standardized, using a 10-second cover for each prism magnitude, until the deviation was neutralized. The magnitude of hyperdeviation by PACT and PUCT was compared for the non-fixing eye, using paired non-parametric tests. The frequency of discrepancies more than 4 prism diopters (PD) between PACT and PUCT was calculated. The magnitude of hyperdeviation was greater when measured with PUCT (range 8PD hypodeviation to 20PD hyperdeviation) vs. PACT (18PD hypodeviation to 25PD hyperdeviation) with a median difference of 4.5PD (range -5PD to 21PD); P < 0.0001. Eighteen (50%) of 36 measurements elicited >4PD hyperdeviation (or >4PD less hypodeviation) by PUCT than by PACT. A standardized 10-second PUCT yields greater values than a prism and alternate cover test in the majority of patients with DVD, providing better quantification of the severity of DVD, which may be important for management decisions.
Schroeder, A A; Ford, N L; Coil, J M
2017-03-01
To determine whether post space preparation deviated from the root canal preparation in canals filled with Thermafil, GuttaCore or warm vertically compacted gutta-percha. Forty-two extracted human permanent maxillary lateral incisors were decoronated, and their root canals instrumented using a standardized protocol. Samples were divided into three groups and filled with Thermafil (Dentsply Tulsa Dental Specialties, Johnson City, TN, USA), GuttaCore (Dentsply Tulsa Dental Specialties) or warm vertically compacted gutta-percha, before post space preparation was performed with a GT Post drill (Dentsply Tulsa Dental Specialties). Teeth were scanned using micro-computed tomography after root filling and again after post space preparation. Scans were examined for the number of samples with post space deviation, linear deviation of post space preparation and minimum root thickness before and after post space preparation. Parametric data were analysed with one-way analysis of variance (ANOVA) or one-tailed paired Student's t-tests, whilst nonparametric data were analysed with Fisher's exact test. Deviation occurred in eight of forty-two teeth (19%), seven of fourteen from the Thermafil group (50%), one of fourteen from the GuttaCore group (7%), and none from the gutta-percha group. Deviation occurred significantly more often in the Thermafil group than in each of the other two groups (P < 0.05). Linear deviation of post space preparation was greater in the Thermafil group than in both of the other groups and was significantly greater than that of the gutta-percha group (P < 0.05). Minimum root thickness before post space preparation was significantly greater than it was after post space preparation for all groups (P < 0.01). 
The differences between the Thermafil, GuttaCore and gutta-percha groups in the number of samples with post space deviation and in linear deviation of post space preparation were associated with the presence or absence of a carrier as well as the different carrier materials. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Parametric Study of Biconic Re-Entry Vehicles
NASA Technical Reports Server (NTRS)
Steele, Bryan; Banks, Daniel W.; Whitmore, Stephen A.
2007-01-01
An optimization based on hypersonic aerodynamic performance and volumetric efficiency was carried out for a range of biconic configurations. Both axisymmetric and quasi-axisymmetric (bent and flattened) geometries were analyzed. The aerodynamic optimization was based on hypersonic simple incidence-angle analysis tools. The range of configurations included those suitable for a lunar return trajectory with a lifting aerocapture at Earth and an overall volume that could support a nominal crew. The results yielded five configurations that had acceptable aerodynamic performance and met the overall geometry and size limitations.
NASA Astrophysics Data System (ADS)
Salmani, Majid; Büskens, Christof
2011-11-01
In this article, after describing a procedure to construct trajectories for a spacecraft in the four-body model, a method to correct trajectory violations is presented. To construct the trajectories, periodic orbits obtained as solutions of the three-body problem are used, while the bicircular model based on the Sun-Earth rotating frame governs the dynamics of the spacecraft and the other bodies. The destination of the mission is a periodic orbit around the first libration point L1, one of the equilibrium points of the Sun-Earth/Moon three-body problem. On the way to such a distant destination there are many disturbances, such as solar radiation and solar wind, that make pre-computed plans unreliable; the solar radiation pressure is therefore included in the system dynamics. To overcome these difficulties, treating the whole transfer problem as an optimal control problem enables the designer to correct the unavoidable deviations from the pre-designed trajectory and strategies. The optimal control problem is solved by a direct method, transcribing it into a nonlinear programming problem. This transcription gives an unperturbed optimal trajectory and its sensitivities with respect to perturbations. Modeling these perturbations as parameters embedded in a parametric optimal control problem, one can take advantage of the parametric sensitivity analysis of the nonlinear programming problem to recalculate the optimal trajectory at a much smaller computational cost. This is obtained by evaluating a first-order Taylor expansion of the perturbed solution in an iterative process aimed at achieving an admissible solution. Finally, numerical results show the applicability of the presented method.
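The first-order Taylor correction based on parametric sensitivities can be illustrated on a tiny analytic problem standing in for the transcribed NLP (the objective and numbers are invented; in the actual method the sensitivity dx*/dp comes from the NLP sensitivity analysis rather than a finite difference):

```python
def solve(p):
    """Exact minimizer of f(x; p) = (x - p)**2 + x**2, a stand-in for
    an expensive nonlinear programming solve of the transcribed
    optimal control problem. The optimum is x*(p) = p / 2."""
    return p / 2.0

p0 = 1.0            # nominal parameter value (e.g. nominal perturbation level)
x0 = solve(p0)      # expensive nominal solve, done once offline

# Parametric sensitivity dx*/dp at the nominal solution
h = 1e-6
dxdp = (solve(p0 + h) - x0) / h

# First-order Taylor correction for a perturbed parameter, the cheap
# online step that replaces a full re-optimization
p_perturbed = 1.3
x_corrected = x0 + dxdp * (p_perturbed - p0)
print(abs(x_corrected - solve(p_perturbed)))  # small for mild perturbations
```

In the paper this update is applied iteratively until an admissible (feasible) corrected trajectory is obtained.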
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project; in particular, tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirements bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. 
The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly links to science requirements.
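For contrast with the integrated simulation approach, the conventional 'root-sum-of-squares' error tree the abstract calls insufficient simply combines independent subsystem contributions in quadrature. A minimal sketch (the subsystem names and numbers are invented for illustration, not NGST budget values):

```python
import math

# Hypothetical subsystem contributions to RMS wavefront error (nm)
terms = {
    "thermal drift":   20.0,
    "reaction wheels": 15.0,
    "figure control":  25.0,
    "pointing jitter": 10.0,
}

# Root-sum-of-squares roll-up: valid only if the contributors are
# statistically independent and combine linearly at small amplitude,
# which is exactly the assumption that breaks down for a tightly
# coupled controls/optics/structures design
rss = math.sqrt(sum(v ** 2 for v in terms.values()))
print(round(rss, 1))
```

The integrated environment described above replaces this single quadrature sum with coupled time- and frequency-domain simulation of the interacting subsystems.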
NASA Astrophysics Data System (ADS)
Hinze, J. F.; Klein, S. A.; Nellis, G. F.
2015-12-01
Mixed refrigerant (MR) working fluids can significantly increase the cooling capacity of a Joule-Thomson (JT) cycle. The optimization of MRJT systems has been the subject of substantial research. However, most optimization techniques do not model the recuperator in sufficient detail. For example, the recuperator is usually assumed to have a heat transfer coefficient that does not vary with the mixture. Ongoing work at the University of Wisconsin-Madison has shown that the heat transfer coefficients for two-phase flow are approximately three times greater than for a single phase mixture when the mixture quality is between 15% and 85%. As a result, a system that optimizes a MR without also requiring that the flow be in this quality range may require an extremely large recuperator or not achieve the performance predicted by the model. To ensure optimal performance of the JT cycle, the MR should be selected such that it is entirely two-phase within the recuperator. To determine the optimal MR composition, a parametric study was conducted assuming a thermodynamically ideal cycle. The results of the parametric study are graphically presented on a contour plot in the parameter space consisting of the extremes of the qualities that exist within the recuperator. The contours show constant values of the normalized refrigeration power. This ‘map’ shows the effect of MR composition on the cycle performance and it can be used to select the MR that provides a high cooling load while also constraining the recuperator to be two phase. The predicted best MR composition can be used as a starting point for experimentally determining the best MR.
Experiment in multiple-criteria energy policy analysis
NASA Astrophysics Data System (ADS)
Ho, J. K.
1980-07-01
An international panel of energy analysts participated in an experiment to use HOPE (holistic preference evaluation): an interactive parametric linear programming method for multiple criteria optimization. The criteria of cost, environmental effect, crude oil, and nuclear fuel were considered, according to BESOM: an energy model for the US in the year 2000.
Fundamental Studies in Blow-Down and Cryogenic Cooling
1993-09-01
Mudawar, I. and Anderson, T.M., "High Flux Electronic Cooling by Means of Pool Boiling - Part I: Parametric Investigation of the Effects of Coolant..." Electronics, pp. 25-34, 1989. 30. Mudawar, I. and Anderson, T.M., "High Flux Electronic Cooling by Means of Pool Boiling - Part II: Optimization of...
NASA Astrophysics Data System (ADS)
Perera, Dimuthu
Diffusion-weighted (DW) imaging is a non-invasive MR technique that provides information about tissue microstructure using the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide the optimal parameter combination, in terms of percentage accuracy and precision, for prostate peripheral-region cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, ADC was calculated with a mono-exponential model for each set of b-values, SNR, and true ADC. 40,000 ADC data were collected for each parameter setting to determine the mean and the standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC. The precision was calculated from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral-region tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision decreased (improved) with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation on total scan time.
The optimal NEX combination for tumor and normal tissue in the prostate peripheral region was 1:9. Also, the minimum percentage accuracy and percentage precision were obtained when the low b-value was 0 and the high b-value was 800 s/mm2 for normal tissue and 1400 s/mm2 for tumor tissue. Results also showed that for tissues with 1 x 10-3 < ADC < 2.1 x 10-3 mm2/s, the parameter combination SNR = 20, b-value pair (0, 800 s/mm2), and NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%. Likewise, for tissues with 0.6 x 10-3 < ADC < 1.25 x 10-3 mm2/s, the parameter combination SNR = 20, b-value pair (0, 1400 s/mm2), and NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
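The two-point mono-exponential ADC estimate described above can be sketched in a few lines. The following is a minimal Monte Carlo version of that experiment, assuming SNR is defined at b = 0; the 40,000-repetition count follows the abstract, while the specific accuracy/precision definitions are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def rician(signal, sigma, rng):
    """Add Rician noise: magnitude of a complex Gaussian-perturbed signal."""
    re = signal + rng.normal(0.0, sigma, signal.shape)
    im = rng.normal(0.0, sigma, signal.shape)
    return np.hypot(re, im)

def simulate_adc(true_adc, snr, b_low=0.0, b_high=800.0, s0=1.0, n=40000):
    """Monte Carlo estimate of ADC accuracy/precision for one parameter set.

    b in s/mm2, ADC in mm2/s; SNR is defined at b = 0."""
    sigma = s0 / snr
    s_low = rician(np.full(n, s0 * np.exp(-b_low * true_adc)), sigma, rng)
    s_high = rician(np.full(n, s0 * np.exp(-b_high * true_adc)), sigma, rng)
    adc = np.log(s_low / s_high) / (b_high - b_low)   # two-point fit
    acc = 100.0 * abs(adc.mean() - true_adc) / true_adc   # % accuracy (bias)
    prec = 100.0 * adc.std() / true_adc                   # % precision (spread)
    return adc.mean(), acc, prec

# Normal peripheral-zone tissue case from the abstract, single acquisition (NEX = 1).
mean_adc, acc, prec = simulate_adc(true_adc=1.80e-3, snr=20)
```

Averaging repeated acquisitions (NEX > 1) before the log-ratio reduces the effective noise level, which is how the 1:9 NEX combination in the abstract tightens the precision further.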
NASA Astrophysics Data System (ADS)
Wang, Zian; Li, Shiguang; Yu, Ting
2015-12-01
This paper proposes an online identification method for the regional frequency deviation coefficient, based on an analysis of the AGC adjustment response mechanism of the interconnected grid and on the generators' real-time operating state as measured by PMUs. It analyzes how to optimize the regional frequency deviation coefficient for the actual operating state of the power system, achieving more accurate and efficient automatic generation control. The validity of the online identification method is verified by building a long-term frequency-control simulation model of a two-area interconnected power system.
Automated, Parametric Geometry Modeling and Grid Generation for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Harrand, Vincent J.; Uchitel, Vadim G.; Whitmire, John B.
2000-01-01
The objective of this Phase I project is to develop a highly automated software system for rapid geometry modeling and grid generation for turbomachinery applications. The proposed system features a graphical user interface for interactive control, a direct interface to commercial CAD/PDM systems, support for IGES geometry output, and a scripting capability for obtaining a high level of automation and end-user customization of the tool. The developed system is fully parametric and highly automated and, therefore, significantly reduces the turnaround time for 3D geometry modeling, grid generation and model setup. This facilitates design environments in which a large number of cases need to be generated, such as for parametric analysis and design optimization of turbomachinery equipment. In Phase I we have successfully demonstrated the feasibility of the approach. The system has been tested on a wide variety of turbomachinery geometries, including several impellers and a multi-stage rotor-stator combination. In Phase II, we plan to integrate the developed system with turbomachinery design software and with commercial CAD/PDM software.
Kramer, Gerbrand Maria; Frings, Virginie; Heijtel, Dennis; Smit, E F; Hoekstra, Otto S; Boellaard, Ronald
2017-06-01
The objective of this study was to validate several parametric methods for quantification of 3'-deoxy-3'-18F-fluorothymidine (18F-FLT) PET in advanced-stage non-small cell lung carcinoma (NSCLC) patients with an activating epidermal growth factor receptor mutation who were treated with gefitinib or erlotinib. Furthermore, we evaluated the impact of noise on the accuracy and precision of the parametric analyses of dynamic 18F-FLT PET/CT to assess the robustness of these methods. Methods: Ten NSCLC patients underwent dynamic 18F-FLT PET/CT at baseline and 7 and 28 d after the start of treatment. Parametric images were generated using plasma-input Logan graphic analysis and 2 basis-function-based methods: a 2-tissue-compartment basis function model (BFM) and spectral analysis (SA). Whole-tumor-averaged parametric pharmacokinetic parameters were compared with those obtained by nonlinear regression of the tumor time-activity curve using a reversible 2-tissue-compartment model with blood volume fraction. In addition, 2 statistically equivalent datasets were generated by countwise splitting the original list-mode data, each containing 50% of the total counts. Both new datasets were reconstructed, and parametric pharmacokinetic parameters were compared between the 2 replicates and the original data. Results: After the settings of each parametric method were optimized, distribution volumes (VT) obtained with Logan graphic analysis, BFM, and SA all correlated well with those derived using nonlinear regression at baseline and during therapy (R2 ≥ 0.94; intraclass correlation coefficient > 0.97). SA-based VT images were most robust to increased noise on a voxel level (repeatability coefficient, 16% vs. >26%). Yet BFM generated the most accurate K1 values (R2 = 0.94; intraclass correlation coefficient, 0.96).
Parametric K1 data showed larger variability in general; however, no differences in robustness were found between methods (repeatability coefficient, 80%-84%). Conclusion: Both BFM and SA can generate quantitatively accurate parametric 18F-FLT VT images in NSCLC patients before and during therapy. SA was more robust to noise, yet BFM provided more accurate parametric K1 data. We therefore recommend BFM as the preferred parametric method for analysis of dynamic 18F-FLT PET/CT studies; however, SA can also be used. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
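Plasma-input Logan graphic analysis, one of the parametric methods validated above, estimates the distribution volume VT as the late-time slope of ∫CT/CT versus ∫Cp/CT. The following is a minimal synthetic sketch using a one-tissue model and a made-up input function (not the study's pipeline, where a reversible two-tissue model with blood volume fraction was used); for a one-tissue model the Logan relation is exact, with slope K1/k2:

```python
import numpy as np

def one_tissue_model(t, cp, K1, k2):
    """Euler integration of dCT/dt = K1*Cp - k2*CT (reversible 1-tissue model)."""
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
    return ct

def logan_vt(t, cp, ct, t_star):
    """VT = late-time slope of int(CT)/CT versus int(Cp)/CT (t >= t_star)."""
    int_cp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    int_ct = np.concatenate([[0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))])
    m = (t >= t_star) & (ct > 0)
    slope, _ = np.polyfit(int_cp[m] / ct[m], int_ct[m] / ct[m], 1)
    return slope

t = np.linspace(0.0, 90.0, 5401)                # minutes
cp = np.exp(-0.1 * t) - np.exp(-1.0 * t)        # synthetic plasma input curve
K1, k2 = 0.1, 0.05                              # true VT = K1 / k2 = 2.0
ct = one_tissue_model(t, cp, K1, k2)
vt = logan_vt(t, cp, ct, t_star=30.0)
```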
NASA Astrophysics Data System (ADS)
Demourant, F.; Ferreres, G.
2013-12-01
This article presents a methodology for linear parameter-varying (LPV) multiobjective flight control law design for a blended wing body (BWB) aircraft, together with results. The method is a direct design of a parametrized control law (with respect to some measured flight parameters) through a multimodel convex design that optimizes a set of specifications over the full flight domain and different mass cases. The methodology is based on the Youla parameterization, which is very useful since closed-loop specifications are affine with respect to the Youla parameter. The LPV multiobjective design method is detailed and applied to the flexible BWB aircraft example.
NASA Technical Reports Server (NTRS)
1973-01-01
A computer program for rapid parametric evaluation of various types of cryogenic spacecraft systems is presented. The mathematical techniques of the program provide the capability for in-depth analysis combined with rapid problem solution for the production of a large quantity of soundly based trade-study data. The program requires a large data bank capable of providing characteristic performance data for a wide variety of component assemblies used in cryogenic systems. The program data requirements are divided into: (1) the semipermanent data tables and source data for performance characteristics and (2) the variable input data, which contains input parameters that may be perturbed for parametric system studies.
Non-parametric diffeomorphic image registration with the demons algorithm.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2007-01-01
We propose a non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. The demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. The main idea of our algorithm is to adapt this procedure to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since, in practice, it only replaces an addition of free-form deformations by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the true ones in terms of Jacobians.
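The key change described above, replacing the additive update of displacement fields with composition, can be illustrated in one dimension. This is a toy sketch, not the authors' implementation (the real algorithm operates on images and includes Gaussian smoothing of the fields); it only contrasts the two update rules for a current transformation id+v and an update field u:

```python
import numpy as np

def compose(u, v, x):
    """Displacement of (id + u) o (id + v) on grid x.

    (id + u)((id + v)(x)) = x + v(x) + u(x + v(x)), so the composed
    displacement is w(x) = v(x) + u(x + v(x)); u is evaluated off-grid by
    linear interpolation (values outside the grid are clamped)."""
    return v + np.interp(x + v, x, u)

x = np.linspace(0.0, 1.0, 101)
u = 0.05 * np.sin(2 * np.pi * x)   # small update field (one demons step)
v = 0.05 * np.cos(2 * np.pi * x)   # displacement of the current transformation

w_add = u + v                      # classic additive demons update
w_comp = compose(u, v, x)          # compositive (diffeomorphism-preserving) update
```

For small fields the two updates nearly coincide, which is why the compositive variant behaves like the demons algorithm while staying in the diffeomorphic setting.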
Parametric design and gridding through relational geometry
NASA Technical Reports Server (NTRS)
Letcher, John S., Jr.; Shook, D. Michael
1995-01-01
Relational Geometric Synthesis (RGS) is a new logical framework for building up precise definitions of complex geometric models from points, curves, surfaces and solids. RGS achieves unprecedented design flexibility by supporting a rich variety of useful curve and surface entities. During the design process, many qualitative and quantitative relationships between elementary objects may be captured and retained in a data structure equivalent to a directed graph, such that they can be utilized for automatically updating the complete model geometry following changes in the shape or location of an underlying object. Capture of relationships enables many new possibilities for parametric variations and optimization. Examples are given of panelization applications for submarines, sailing yachts, offshore structures, and propellers.
Aircraft conceptual design - an adaptable parametric sizing methodology
NASA Astrophysics Data System (ADS)
Coleman, Gary John, Jr.
Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental), or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This begs the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual design parametric sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions, concepts and configurations that meet combinations of mission and technology. This research undertaking contributes to the state of the art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composites, natural laminar flow, thrust-vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems.
This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on the characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed best in most scenarios. The hierarchy of the performances of the three methods was affected only by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
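Two of the three approaches compared above can be sketched directly; this is a minimal illustration of a 95% central reference interval, not the authors' code (the robust method, which requires iterative down-weighting, is omitted):

```python
import numpy as np

def reference_interval(values, method="parametric"):
    """95% reference interval (central 2.5th to 97.5th percentile range).

    parametric:     assumes Gaussian data, mean +/- 1.96 * SD
    non-parametric: empirical 2.5th and 97.5th percentiles
    """
    x = np.asarray(values, dtype=float)
    if method == "parametric":
        m, s = x.mean(), x.std(ddof=1)
        return m - 1.96 * s, m + 1.96 * s
    return tuple(np.percentile(x, [2.5, 97.5]))

# On Gaussian data the two estimates should agree closely (illustrative values).
rng = np.random.default_rng(1)
gaussian = rng.normal(100.0, 10.0, 5000)
lo_p, hi_p = reference_interval(gaussian, "parametric")
lo_np, hi_np = reference_interval(gaussian, "non-parametric")
```

On skewed data the two limits diverge, which is the situation where the paper's guidance on transformation, sample size, and skewness applies.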
Bláha, M; Hoch, J; Ferko, A; Ryška, A; Hovorková, E
Improvement in any human activity is preconditioned by inspection of results and by feedback used to modify the processes applied. Comparison of experts' experience in the given field is another indispensable part, leading to optimisation and improvement of processes and, optimally, to implementation of standards. For the purpose of objective comparison and assessment of the processes, it is always necessary to describe the processes in a parametric way, to obtain representative data, to assess the achieved results, and to provide unquestionable, data-driven feedback based on such analysis. This may lead to a consensus on the definition of standards in the given area of health care. Total mesorectal excision (TME) is a standard procedure in rectal cancer (C20) surgical treatment. However, the quality of the performed procedures varies across health care facilities, which is determined, among other factors, by internal processes and surgeons' experience. Assessment of surgical treatment results is therefore of key importance. A pathologist who assesses the resected tissue can provide valuable feedback in this respect. An information system for the parametric assessment of TME performance is described in our article, including technical background in the form of a multicentre clinical registry and the structure of observed parameters. We consider the proposed system of TME parametric assessment significant for improving TME performance, aimed at reducing local recurrences and improving the overall prognosis of patients. Keywords: rectal cancer, total mesorectal excision, parametric data, clinical registries, TME registry.
NASA Astrophysics Data System (ADS)
Li, Yongming; Li, Fan; Wang, Pin; Zhu, Xueru; Liu, Shujun; Qiu, Mingguo; Zhang, Jingna; Zeng, Xiaoping
2016-10-01
Traditional age estimation methods are based on the same idea: using the real age as the training label. However, these methods ignore that there is a deviation between the real age and the brain age due to accelerated brain aging. This paper considers this deviation and searches for it by maximizing a separability distance value rather than by minimizing the difference between the estimated brain age and the real age. First, the search range of the deviation is set as the deviation candidates according to prior knowledge. Second, support vector regression (SVR) is used as the age estimation model to minimize the difference between the estimated age and the real age plus the deviation, rather than the real age itself. Third, the fitness function is designed based on the separability distance criterion. Fourth, age estimation is conducted on the validation dataset using the trained age estimation model, the estimated age is put into the fitness function, and the fitness value of the deviation candidate is obtained. Fifth, the iteration is repeated until all the deviation candidates have been evaluated, and the optimal deviation with the maximum fitness value is selected. The real age plus the optimal deviation is taken as the brain pathological age. The experimental results showed that the separability was clearly improved. For normal control vs. Alzheimer's disease (NC-AD), normal control vs. mild cognitive impairment (NC-MCI), and MCI-AD, the average improvements were 0.178 (35.11%), 0.033 (14.47%), and 0.017 (39.53%), respectively. For NC-MCI-AD, the average improvement was 0.2287 (64.22%). The estimated brain pathological age is not only more helpful for the classification of AD but also more precisely reflects accelerated brain aging. In conclusion, this paper offers a new method for brain age estimation that can distinguish different states of AD and better reflect the extent of accelerated aging.
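The five-step search described above can be sketched as follows. This is a hedged toy version: ordinary least squares stands in for the SVR, the fitness is a simplified Fisher-like separability ratio (the paper's exact criterion may differ), and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_predict(X_tr, y_tr, X_va):
    """Stand-in for the SVR age-estimation model: ordinary least squares."""
    A = np.c_[X_tr, np.ones(len(X_tr))]
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.c_[X_va, np.ones(len(X_va))] @ coef

def separability(est_ages, labels):
    """Simplified Fisher-like separability distance between the two groups."""
    a, b = est_ages[labels == 0], est_ages[labels == 1]
    return abs(a.mean() - b.mean()) / (a.std() + b.std() + 1e-9)

# Synthetic data: the patient group's imaging feature looks ~5 years "older".
n = 200
age = rng.uniform(55.0, 85.0, n)
labels = (rng.random(n) < 0.5).astype(int)       # 0 = control, 1 = patient
X = rng.normal(size=(n, 4))
X[:, 0] = age + 5.0 * labels + rng.normal(0.0, 2.0, n)

half = n // 2                                    # train / validation split
candidates = np.arange(0.0, 10.5, 0.5)           # deviation search range (prior)
scores = [separability(fit_predict(X[:half],
                                   age[:half] + d * labels[:half],  # real age + deviation
                                   X[half:]),
                       labels[half:])
          for d in candidates]
best_deviation = candidates[int(np.argmax(scores))]
```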
Dawson, Ree; Lavori, Philip W
2012-01-01
Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned the development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because there is often no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy, and of its standard error, are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.
Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch
2016-07-01
We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density.
The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
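The Empirical Interpolation Method used above to collocate the nonaffine, nonlinear parameter dependence can be illustrated with a minimal greedy implementation on a toy parametric function family (a generic EIM sketch, not the paper's code; the function family and grid are invented for illustration):

```python
import numpy as np

def eim(snapshots, tol=1e-8, max_basis=20):
    """Greedy EIM: select basis functions and interpolation points from the
    columns of `snapshots` until the worst interpolation residual < tol."""
    U = snapshots.copy()
    basis, pts = [], []
    for _ in range(max_basis):
        if basis:
            B = np.column_stack(basis)
            coeffs = np.linalg.solve(B[pts], U[pts])  # interpolate at chosen points
            R = U - B @ coeffs                         # residuals of all snapshots
        else:
            R = U
        j = int(np.argmax(np.max(np.abs(R), axis=0)))  # worst-approximated snapshot
        r = R[:, j]
        i = int(np.argmax(np.abs(r)))                  # next interpolation point
        if abs(r[i]) < tol:
            break
        basis.append(r / r[i])                         # normalize to 1 at its point
        pts.append(i)
    return np.column_stack(basis), pts

# Toy nonaffine family g(x; mu) = 1 / (1 + mu * x^2), sampled on a grid.
x = np.linspace(0.0, 1.0, 100)
snaps = np.column_stack([1.0 / (1.0 + mu * x**2) for mu in np.linspace(0.1, 5.0, 40)])
Q, pts = eim(snaps)

# Approximate a new parameter value using only the selected points.
g_new = 1.0 / (1.0 + 2.34 * x**2)
approx = Q @ np.linalg.solve(Q[pts], g_new[pts])
err = float(np.max(np.abs(approx - g_new)))
```

The payoff is that evaluating the nonlinear term at a handful of points `pts` suffices to reconstruct it everywhere, which is what makes the reduced-basis forward map cheap.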
NASA Astrophysics Data System (ADS)
Ayad, G.; Song, J.; Barriere, T.; Liu, B.; Gelin, J. C.
2007-05-01
The paper is concerned with optimization and parametric identification of the Powder Injection Molding (PIM) process, which consists first of injection of a powder mixture with a polymer binder and then of sintering of the resulting powder parts by solid-state diffusion. In the first part, an original methodology is described to optimize the injection stage based on the combination of Design of Experiments and adaptive Response Surface Modeling. The second part of the paper describes the identification strategy proposed for the sintering stage, using the identification of sintering parameters from dilatometer curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of the manufacturing of a ceramic femoral implant. It is demonstrated that the proposed approach gives satisfactory results.
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
A General Multidisciplinary Turbomachinery Design Optimization system Applied to a Transonic Fan
NASA Astrophysics Data System (ADS)
Nemnem, Ahmed Mohamed Farid
The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators or aeroengines. A new airfoil section design capability has been added to an open-source parametric 3D blade design tool. Curvature of the meanline is controlled using B-splines to create the airfoils. The curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles. A smooth thickness distribution is then added to the airfoil to guarantee a smooth shape while maintaining a prescribed thickness distribution. A leading-edge B-spline definition has also been implemented to achieve customized airfoil leading edges, which guarantees smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as a test and reference case; this design is more general than a conventional design, providing access to the alternative design methodology. The whole mechanical and aerodynamic design loops are automated for the optimization process. The flow path and the geometrical properties of the rotor are initially created using the axisymmetric design and analysis code (T-AXI). The main and splitter blades are parametrically designed with the geometry builder (3DBGB) using the newly added curvature technique. The solid model of the rotor sector with periodic boundaries, combining the main blade and splitter and including the hub, fillets and tip clearance, is created using MATLAB code directly connected to SolidWorks. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping maximum stress as a constraint with a safety factor. A genetic algorithm followed by numerical gradient optimization strategies is used in the mechanical optimization.
The splittered transonic fan blade mass is reduced by 2.6% while constraining the maximum stress below 50% of material yield strength, using 2D section thickness and chord multipliers. Once the initial design was mechanically optimized, a CFD optimization was performed to maximize efficiency and/or stall margin. The CFD grid generator (AUTOGRID) reads the 3DBGB output and accounts for hub fillets and tip gaps. Single- and multi-objective genetic algorithm (SOGA, MOGA) optimizations have been used with the CFD analysis system. In the SOGA optimization, efficiency was increased by 3.525%, from 78.364% to 81.889%, while changing only 4 design parameters. For the MOGA optimization, with efficiency weighted higher than stall margin, the efficiency was increased by 2.651%, from 78.364% to 81.015%, while the static pressure recovery factor was increased from 0.37407 to 0.4812286, which consequently increases the stall margin. The design process starts with a hot-shape design; once the optimization process ends, a hot-to-cold transformation smoothly subtracts the mechanical deflections from the hot shape. This transformation ensures an accurate tip clearance. The optimization modules can be customized by the user as one full optimization or multiple smaller ones. This keeps the designer in the design loop, which helps in choosing the right parameters for the optimization and the final feasible design.
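The curvature-based airfoil construction described above, integrate the curvature to obtain the blade angle, then integrate the angle's slope to obtain the meanline, can be sketched numerically. The curvature distribution and inlet angle below are illustrative stand-ins for the B-spline definitions in 3DBGB:

```python
import numpy as np

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y over x, zero at the first point."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

x = np.linspace(0.0, 1.0, 201)           # normalized chordwise coordinate
curv = 2.0 * (1.0 - x)                   # hypothetical curvature: front-loaded camber
theta0 = np.deg2rad(30.0)                # assumed inlet metal angle

theta = theta0 - cumtrapz(curv, x)       # blade angle = integral of curvature
y = cumtrapz(np.tan(theta), x)           # meanline: dy/dx = tan(theta)
```

Controlling curvature directly (rather than the meanline itself) is what guarantees the smooth surface the abstract emphasizes: the meanline is two integrations smoother than the designed quantity.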
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Feix, M.; Codis, S.; Pichon, C.; Bernardeau, F.; L'Huillier, B.; Kim, J.; Hong, S. E.; Laigle, C.; Park, C.; Shin, J.; Pogosyan, D.
2018-02-01
Starting from a very accurate model for density-in-cells statistics of dark matter based on large deviation theory, a bias model for the tracer density in spheres is formulated. It adopts a mean bias relation based on a quadratic bias model to relate the log-densities of dark matter to those of mass-weighted dark haloes in real and redshift space. The validity of the parametrized bias model is established using a parametrization-independent extraction of the bias function. This average bias model is then combined with the dark matter PDF, neglecting any scatter around it: it nevertheless yields an excellent model for densities-in-cells statistics of mass tracers that is parametrized in terms of the underlying dark matter variance and three bias parameters. The procedure is validated on measurements of both the one- and two-point statistics of subhalo densities in the state-of-the-art Horizon Run 4 simulation, showing excellent agreement for measured dark matter variance and bias parameters. Finally, it is demonstrated that this formalism allows for a joint estimation of the non-linear dark matter variance and the bias parameters using solely the statistics of subhaloes. Having verified that galaxy counts in hydrodynamical simulations sampled on a scale of 10 Mpc h⁻¹ closely resemble those of subhaloes, this work provides important steps towards making theoretical predictions for density-in-cells statistics applicable to upcoming galaxy surveys like Euclid or WFIRST.
Genetic algorithms for multicriteria shape optimization of induction furnace
NASA Astrophysics Data System (ADS)
Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo
2012-09-01
In this contribution we deal with multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using the nonlinear conjugate gradient method and the second using a variation of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. Solution of the direct problem (a coupled problem consisting of the magnetic and heat fields) is done using our own code Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and simply incorporate various types of optimization algorithms.
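Extracting the Pareto front from a set of evaluated designs reduces to a non-domination test; a minimal sketch, assuming both criteria are to be minimized and using illustrative objective values (not the furnace data):

```python
import numpy as np

def pareto_front(objs):
    """Indices of non-dominated points, minimizing both objectives.

    A design i is dominated if some other design j is <= in every objective
    and strictly < in at least one."""
    objs = np.asarray(objs, dtype=float)
    keep = []
    for i, p in enumerate(objs):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(objs) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (criterion 1, criterion 2) values for five candidate shapes.
designs = np.array([[1.0, 5.0],
                    [2.0, 3.0],
                    [3.0, 4.0],    # dominated by [2.0, 3.0]
                    [4.0, 1.0],
                    [2.5, 2.5]])
front = pareto_front(designs)      # indices of the Pareto-optimal designs
```

A genetic algorithm such as the one in the paper maintains a population and applies exactly this kind of non-domination ranking each generation, which is why it recovers the whole front rather than a single optimum.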
Moderate temperature control technology for a lunar base
NASA Technical Reports Server (NTRS)
Swanson, Theodore D.; Sridhar, K. R.; Gottmann, Matthias
1993-01-01
A parametric analysis is performed to compare different heat pump based thermal control systems for a lunar base. Rankine cycle and absorption cycle heat pumps are compared and optimized for a 100 kW cooling load. Variables include the use or lack of an interface heat exchanger and different operating fluids. Optimization of system mass with respect to radiator rejection temperature is performed. The results indicate a relatively small sensitivity of Rankine cycle system mass to these variables, with optimized system masses of about 6000 kg for the 100 kW thermal load. It is quantitatively demonstrated that absorption based systems are not mass competitive with Rankine systems.
NASA Astrophysics Data System (ADS)
Khellat, M. R.; Mirjalili, A.
2017-03-01
We first consider the idea of renormalization-group-induced estimates, in the context of optimization procedures, for the Brodsky-Lepage-Mackenzie approach to generating higher-order contributions to QCD perturbative series. Secondly, we develop the deviation pattern approach (DPA), in which, through a series of comparisons between lower-order RG-induced estimates and the corresponding analytical calculations, one can modify higher-order RG-induced estimates. Finally, using the normal estimation procedure and the DPA, we obtain estimates of the α_s^4 corrections to the Bjorken sum rule of polarized deep-inelastic scattering and to the non-singlet contribution to the Adler function.
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance are used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that has already been tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
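The metric described above can be sketched in numpy for a single candidate term combination; the PRESS (leave-one-out) residuals of a linear least-squares fit follow from the hat matrix. The toy data and perturbations below are invented for illustration:

```python
import numpy as np

def search_metric(X_fit, y_fit, X_conf, y_conf):
    """Larger of the std. dev. of the PRESS residuals on the fit points
    and the std. dev. of the response residuals at the confirmation points."""
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    H = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T   # hat matrix
    press = (y_fit - X_fit @ beta) / (1.0 - np.diag(H))     # leave-one-out residuals
    conf_resid = y_conf - X_conf @ beta
    return max(np.std(press, ddof=1), np.std(conf_resid, ddof=1))

# toy calibration data: y = 1 + 2x with a small fixed perturbation
x_fit = np.array([0.0, 1.0, 2.0, 3.0])
X_fit = np.column_stack([np.ones_like(x_fit), x_fit])
y_fit = 1.0 + 2.0 * x_fit + np.array([0.05, -0.03, 0.04, -0.06])
x_conf = np.array([0.5, 2.5])
X_conf = np.column_stack([np.ones_like(x_conf), x_conf])
y_conf = 1.0 + 2.0 * x_conf
metric = search_metric(X_fit, y_fit, X_conf, y_conf)
```

During term-combination search, the candidate with the smallest such metric would be preferred, since both fit and confirmation scatter are bounded by it.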
NASA Astrophysics Data System (ADS)
Perry, Dan; Nakamoto, Mark; Verghese, Nishath; Hurat, Philippe; Rouse, Rich
2007-03-01
Model-based hotspot detection and silicon-aware parametric analysis help designers optimize their chips for yield, area and performance without the high cost of applying foundries' recommended design rules. This set of DFM/recommended rules is primarily litho-driven, but cannot guarantee a manufacturable design without imposing overly restrictive design requirements. This rule-based methodology of making design decisions based on idealized polygons that no longer represent what is on silicon needs to be replaced. Using model-based simulation of the lithography, OPC, RET and etch effects, followed by electrical evaluation of the resulting shapes, leads to a more realistic and accurate analysis. This analysis can be used to evaluate intelligent design trade-offs and identify potential failures due to systematic manufacturing defects during the design phase. The successful DFM design methodology consists of three parts: (1) Achieve a more aggressive layout through limited usage of litho-related recommended design rules; a 10% to 15% area reduction is achieved by using more aggressive design rules, and DFM/recommended design rules are used only if there is no impact on cell size. (2) Identify and fix hotspots using a model-based layout printability checker; model-based litho and etch simulation are done at the cell level to identify hotspots, violations of recommended rules may cause additional hotspots, which are then fixed, and the resulting design is ready for step 3. (3) Improve timing accuracy with a process-aware parametric analysis tool for transistors and interconnect; contours of diffusion, poly and metal layers are used for parametric analysis. In this paper, we show the results of this physical and electrical DFM methodology at Qualcomm. We describe how Qualcomm was able to develop more aggressive cell designs that yielded a 10% to 15% area reduction using this methodology.
Model-based shape simulation was employed during library development to validate architecture choices and to optimize cell layout. At the physical verification stage, the shape simulator was run at full-chip level to identify and fix residual hotspots on interconnect layers, on poly or metal 1 due to interaction between adjacent cells, or on metal 1 due to interaction between routing (via and via cover) and cell geometry. To determine an appropriate electrical DFM solution, Qualcomm developed an experiment to examine various electrical effects. After reporting the silicon results of this experiment, which showed sizeable delay variations due to lithography-related systematic effects, we also explain how contours of diffusion, poly and metal can be used for silicon-aware parametric analysis of transistors and interconnect at the cell-, block- and chip-level.
NASA Astrophysics Data System (ADS)
Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.
2018-01-01
This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for the machine tooling used to manufacture parts on multiaxial CNC machining centers in a real manufacturing process. The developed method makes it possible to significantly reduce tooling design time when a part's geometric parameters change. The method can also reduce the time needed to design and engineer preproduction, in particular the development of control programs for CNC equipment and for control and measuring machines, and it can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.
NASA Technical Reports Server (NTRS)
Bair, E. K.
1986-01-01
The System Trades Study and Design Methodology Plan is used to conduct trade studies to define the combination of Space Shuttle Main Engine features that will optimize candidate engine configurations. This is accomplished by using vehicle sensitivities and engine parametric data to establish engine chamber pressure and area ratio design points for candidate engine configurations. Engineering analyses are to be conducted to refine and optimize the candidate configurations at their design points. The optimized engine data and characteristics are then evaluated and compared against other candidates being considered. The Evaluation Criteria Plan is then used to compare and rank the optimized engine configurations on the basis of cost.
Pulse shape optimization for electron-positron production in rotating fields
NASA Astrophysics Data System (ADS)
Fillion-Gourdeau, François; Hebenstreit, Florian; Gagnon, Denis; MacLean, Steve
2017-07-01
We optimize the pulse shape and polarization of time-dependent electric fields to maximize the production of electron-positron pairs via strong-field quantum electrodynamics processes. The pulse is parametrized in Fourier space by a B-spline polynomial basis, which results in a relatively low-dimensional parameter space while still allowing for a large number of electric field modes. The optimization is performed using a parallel implementation of differential evolution, one of the most efficient metaheuristic algorithms. The computational performance of the numerical method and the results on pair production are compared with a local multistart optimization algorithm. These techniques allow us to determine the pulse shape and field polarization that maximize the number of produced pairs in computationally accessible regimes.
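The optimization loop above can be sketched with SciPy's serial implementation of differential evolution. The objective below is an invented smooth surrogate over two "field mode" amplitudes, not the paper's QED pair-yield computation:

```python
import numpy as np
from scipy.optimize import differential_evolution

def negative_yield(c):
    # invented surrogate "yield": favors a particular mode mix and
    # penalizes total field energy; negated because DE minimizes
    return -(np.sin(c[0]) + 0.5 * np.cos(2.0 * c[1]) - 0.1 * np.sum(c**2))

bounds = [(-np.pi, np.pi)] * 2          # one interval per Fourier coefficient
result = differential_evolution(negative_yield, bounds, seed=0)
best_coeffs, best_yield = result.x, -result.fun
```

In the paper's setting each bound would correspond to a B-spline coefficient of the field, and the population members could be evaluated in parallel since each objective call is an independent simulation.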
Robust Control of Uncertain Systems via Dissipative LQG-Type Controllers
NASA Technical Reports Server (NTRS)
Joshi, Suresh M.
2000-01-01
Optimal controller design is addressed for a class of linear, time-invariant systems which are dissipative with respect to a quadratic power function. The system matrices are assumed to be affine functions of uncertain parameters confined to a convex polytopic region in the parameter space. For such systems, a method is developed for designing a controller which is dissipative with respect to a given power function, and is simultaneously optimal in the linear-quadratic-Gaussian (LQG) sense. The resulting controller provides robust stability as well as optimal performance. Three important special cases, namely, passive, norm-bounded, and sector-bounded controllers, which are also LQG-optimal, are presented. The results give new methods for robust controller design in the presence of parametric uncertainties.
Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.
Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue
2015-08-20
Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.
Optimization of an electromagnetic linear actuator using a network and a finite element model
NASA Astrophysics Data System (ADS)
Neubert, Holger; Kamusella, Alfred; Lienig, Jens
2011-03-01
Model-based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from their nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps. Thus, Monte-Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.
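The response-surface plus Monte-Carlo step can be illustrated in a few lines: once a cheap surrogate of a system response is fitted, tolerance scatter is propagated by sampling instead of re-running the FE/network model. The quadratic surrogate, tolerance width and requirement threshold below are all invented:

```python
import numpy as np

def stroke_surface(d):
    # invented quadratic response surface of "stroke" over one design variable
    return 1.2 - 4.0 * (d - 0.35) ** 2

rng = np.random.default_rng(1)
# manufacturing scatter of the design variable around its nominal value
d_samples = rng.normal(loc=0.35, scale=0.02, size=100_000)
strokes = stroke_surface(d_samples)
# fraction of manufactured devices meeting an assumed requirement
yield_rate = np.mean(strokes >= 1.19)
```

With the analytic surrogate each sample is essentially free, which is what makes 10^5-sample Monte-Carlo runs practical inside an optimization loop.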
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang
This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameter optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of the SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search area from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
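The two-step idea (coarse grid traverse, then local refinement) can be sketched with scikit-learn's SVR. For brevity this sketch replaces the paper's PSO step with a Nelder-Mead local search, and the "load" series is synthetic:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 200)
X = t.reshape(-1, 1)
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # toy "load" series

def cv_error(log_params):
    # cross-validated MSE of an SVR with parameters given in log space
    C, gamma = np.exp(log_params)
    model = SVR(C=C, gamma=gamma)
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

# Step 1: coarse "grid traverse" over log-spaced (C, gamma) pairs
grid = [(lc, lg) for lc in np.log([0.1, 1.0, 10.0, 100.0])
                 for lg in np.log([0.01, 0.1, 1.0, 10.0])]
best = min(grid, key=cv_error)

# Step 2: local refinement around the best grid cell (PSO in the paper)
res = minimize(cv_error, x0=np.array(best), method="Nelder-Mead",
               options={"maxiter": 60})
C_opt, gamma_opt = np.exp(res.x)
```

Searching in log space keeps both SVR parameters positive and makes the coarse grid cover several orders of magnitude with few evaluations.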
Cluster-void degeneracy breaking: Modified gravity in the balance
NASA Astrophysics Data System (ADS)
Sahlén, Martin; Silk, Joseph
2018-05-01
Combining galaxy cluster and void abundances is a novel, powerful way to constrain deviations from general relativity and the ΛCDM model. For a flat wCDM model with growth of large-scale structure parametrized by the redshift-dependent growth index γ(z) = γ0 + γ1 z/(1 + z) of linear matter perturbations, combining void and cluster abundances in future surveys with Euclid and the four-meter multi-object spectroscopic telescope could improve the figure of merit for (w, γ0, γ1) by a factor of 20 compared to the individual abundances. In an ideal case, the improvement over current cosmological data is a figure-of-merit factor of 600 or more.
An Integrated Method for Airfoil Optimization
NASA Astrophysics Data System (ADS)
Okrent, Joshua B.
Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. 
This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal operational conditions from a broad design space with the use of minimal computational resources, on both an absolute and a relative scale compared to traditional analysis techniques. Aerodynamicists, program managers, aircraft configuration specialists, and anyone else in charge of aircraft configuration, design studies, and program-level decisions might find the evaluation and optimization method proposed of interest.
ERIC Educational Resources Information Center
Evans, Steven T.; Huang, Xinqun; Cramer, Steven M.
2010-01-01
The commercial simulator Aspen Chromatography was employed to study and optimize an important new industrial separation process, weak partitioning chromatography. This case study on antibody purification was implemented in a chromatographic separations course. Parametric simulations were performed to investigate the effect of operating parameters…
Low NOx combustion and SCR flow field optimization in a low volatile coal fired boiler.
Liu, Xing; Tan, Houzhang; Wang, Yibin; Yang, Fuxin; Mikulčić, Hrvoje; Vujanović, Milan; Duić, Neven
2018-08-15
Low NOx burner redesign and deep air staging have been carried out to improve the poor ignition and reduce the NOx emissions in a low-volatile coal fired 330 MWe boiler. Residual swirling flow in the tangentially-fired furnace caused flue gas velocity deviations at the furnace exit, leading to flow field unevenness in the SCR (selective catalytic reduction) system and poor denitrification efficiency. Numerical simulations of the velocity field in the SCR system were carried out to determine the optimal flow deflector arrangement to improve the flow field uniformity of the SCR system. A full-scale experiment was performed to investigate the effect of the low NOx combustion and SCR flow field optimization. Compared with the results before the optimization, the NOx emissions at the furnace exit decreased from 550-650 mg/Nm³ to 330-430 mg/Nm³. The sample standard deviation of the NOx emissions at the outlet section of the SCR decreased from 34.8 mg/Nm³ to 7.8 mg/Nm³. The consumption of liquid ammonia was reduced from 150-200 kg/h to 100-150 kg/h after the optimization. Copyright © 2018. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model, and significant savings in electricity cost could be achieved with the network operational constraints satisfied.
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu; ...
2017-10-10
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model, and significant savings in electricity cost could be achieved with the network operational constraints satisfied.
Diffeomorphic demons: efficient non-parametric image registration.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2009-03-01
We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.
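The demons update with symmetric forces, the variant the analysis above favors, can be sketched in one dimension on a pair of shifted profiles. The field smoothing and diffeomorphic composition steps of the full algorithm are omitted here for brevity, and the test signals are invented:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201)
fixed = np.tanh(x)
moving = np.tanh(x - 0.5)                 # same profile shifted by 0.5

u = np.zeros_like(x)                      # displacement field to estimate
for _ in range(100):
    warped = np.interp(x + u, x, moving)  # moving image resampled at x + u
    diff = warped - fixed
    # symmetric forces: average of fixed and warped image gradients
    grad = 0.5 * (np.gradient(fixed, x) + np.gradient(warped, x))
    u -= diff * grad / (grad**2 + diff**2 + 1e-12)

ssd_before = np.sum((moving - fixed) ** 2)
ssd_after = np.sum((np.interp(x + u, x, moving) - fixed) ** 2)
```

The recovered field should approach the true shift of 0.5 where the images carry gradient information; the diffeomorphic variant would compose small such updates instead of adding them.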
Genetic Networks and Anticipation of Gene Expression Patterns
NASA Astrophysics Data System (ADS)
Gebert, J.; Lätsch, M.; Pickl, S. W.; Radde, N.; Weber, G.-W.; Wünschiers, R.
2004-08-01
An interesting problem for computational biology is the analysis of time-series expression data. Here, the application of modern methods from dynamical systems, optimization theory and numerical algorithms, together with the utilization of implicit discrete information, leads to a deeper understanding. In [1], we suggested representing the behavior of time-series gene expression patterns by a system of ordinary differential equations, which we analytically and algorithmically investigated under the parametrical aspect of stability or instability. Our algorithm strongly exploited combinatorial information. In this paper, we deepen, extend and exemplify this study from the viewpoint of the underlying mathematical modelling. This modelling consists in evaluating DNA-microarray measurements as the basis of anticipatory prediction, in the choice of a smooth model given by differential equations, in approximating the right-hand side with parametric matrices, and in a discrete approximation that is a least-squares optimization problem. We give a mathematical and biological discussion, and pay attention to the special case of a linear system, where the matrices do not depend on the state of expression. Here, we present first numerical examples.
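For the linear special case mentioned above, the least-squares step reduces to a matrix regression of finite-difference slopes on the sampled states. A minimal numpy sketch with an invented two-gene matrix A:

```python
import numpy as np

# invented 2-gene linear network dx/dt = A x, for illustration only
A_true = np.array([[-0.5, 0.3],
                   [0.2, -0.4]])
dt = 0.01
steps = 500
x = np.empty((steps + 1, 2))
x[0] = [1.0, 0.5]
for k in range(steps):                    # forward-Euler "measurement" series
    x[k + 1] = x[k] + dt * (A_true @ x[k])

slopes = (x[1:] - x[:-1]) / dt            # finite-difference d/dt estimates
# least squares: find B minimizing ||x[:-1] @ B - slopes||; then A = B.T
A_est, *_ = np.linalg.lstsq(x[:-1], slopes, rcond=None)
A_est = A_est.T
```

With real microarray data the slopes would be noisy and sparsely sampled, so the same regression would only approximate A; the nonlinear case replaces the constant matrices with state-dependent ones.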
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of the EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance in terms of its ability to detect a seizure with a high optimality index value and high sensitivity and specificity.
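The simplest of the five measures, a variance-energy style statistic, can be sketched as a sliding-window threshold detector. The synthetic trace, window length and threshold below are invented and far cruder than the paper's evaluation:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 200                                   # toy sampling rate, Hz
quiet = rng.standard_normal(5 * fs)
seizure = 5.0 * rng.standard_normal(2 * fs)   # high-amplitude segment
eeg = np.concatenate([quiet, seizure, quiet])

win = fs // 2                              # half-second non-overlapping windows
var_energy = np.array([eeg[i:i + win].var()
                       for i in range(0, eeg.size - win, win)])
detected = np.flatnonzero(var_energy > 5.0)   # invented threshold
onset_window = int(detected[0]) if detected.size else None
```

The detection lag of such a scheme is bounded by the window length, which is exactly the kind of trade-off (robustness versus lag) the optimality index is meant to capture.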
Georgia Institute of Technology research on the Gas Core Actinide Transmutation Reactor (GCATR)
NASA Technical Reports Server (NTRS)
Clement, J. D.; Rust, J. H.; Schneider, A.; Hohl, F.
1976-01-01
The program reviewed is a study of the feasibility, design, and optimization of the GCATR. The program is designed to take advantage of initial results and to continue work carried out on the Gas Core Breeder Reactor. The program complements NASA's program of developing UF6 fueled cavity reactors for power, nuclear pumped lasers, and other advanced technology applications. The program comprises: (1) General Studies--Parametric survey calculations are performed to examine the effects of reactor spectrum and flux level on actinide transmutation under GCATR conditions. The sensitivity of the results to neutron cross sections is to be assessed. Specifically, the parametric calculations of the actinide transmutation are to include the mass, isotope composition, fission and capture rates, reactivity effects, and neutron activity of recycled actinides. (2) GCATR Design Studies--This task is a major thrust of the proposed research program. Several subtasks are considered: optimization criteria studies of the blanket and fuel reprocessing, the actinide insertion and recirculation system, and system integration. A brief review of the background of the GCATR and ongoing research is presented.
Parametric Cost Analysis: A Design Function
NASA Technical Reports Server (NTRS)
Dean, Edwin B.
1989-01-01
Parametric cost analysis uses equations to map measurable system attributes into cost. The measures of the system attributes are called metrics. The equations are called cost estimating relationships (CERs), and are obtained by the analysis of cost and technical metric data of products analogous to those to be estimated. Examples of system metrics include mass, power, failure_rate, mean_time_to_repair, energy_consumed, payload_to_orbit, pointing_accuracy, manufacturing_complexity, number_of_fasteners, and percent_of_electronics_weight. The basic assumption is that a measurable relationship exists between system attributes and the cost of the system. If such a function exists, the attributes are cost drivers. Candidates for metrics include system requirement metrics and engineering process metrics. Requirements are constraints on the engineering process. From optimization theory we know that any active constraint generates cost by not permitting full optimization of the objective. Thus, requirements are cost drivers. Engineering processes reflect a projection of the requirements onto the corporate culture, engineering technology, and system technology. Engineering processes are an indirect measure of the requirements and, hence, are cost drivers.
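A common CER form is a power law, cost = a · metric^b, fitted in log-log space to data from analogous products. The sketch below uses invented mass/cost pairs purely to illustrate the fitting step:

```python
import numpy as np

# invented analog-product data: mass in kg, cost in $M (illustrative only)
mass = np.array([100.0, 250.0, 600.0, 1200.0])
cost = np.array([12.0, 24.0, 46.0, 80.0])

# fit cost = a * mass**b by linear least squares on the logs
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

def cer(m):
    """Estimated cost for a new system of mass m, using the fitted CER."""
    return a * m ** b
```

An exponent b below 1, as here, encodes the economy of scale often seen in such data: doubling the mass driver less than doubles the estimated cost.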
Anger and health in dementia caregivers: exploring the mediation effect of optimism.
López, J; Romero-Moreno, R; Márquez-González, M; Losada, A
2015-04-01
Although previous studies indicate a negative association between caregivers' anger and health, the potential mechanisms linking this relationship are not yet fully understood. The aim of this study was to explore the potential mediating role of optimism in the relationship between anger and caregivers' physical health. Dementia caregivers (n = 108) were interviewed and filled out instruments assessing their anger (reaction), optimism and health (vitality). A mediational model was tested to determine whether optimism partially mediated the relationship between anger and vitality. Angry reaction was negatively associated with optimism and vitality; optimism was positively associated with vitality. Finally, the relationship between angry reaction and vitality decreased when optimism was entered simultaneously. A non-parametric bootstrap approach confirmed that optimism significantly mediated some of the relationship between angry reaction and vitality. These findings suggest that low optimism may help explain the association between caregivers' anger and reduced sense of vitality. The results provide a specific target for intervention with caregivers. Copyright © 2013 John Wiley & Sons, Ltd.
Design of a device for sky light polarization measurements.
Wang, Yujie; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Xian, Zhiwen; Ma, Tao
2014-08-14
Sky polarization patterns can be used both as indicators of atmospheric turbidity and as a sun compass for navigation. The objective of this study is to improve the precision of sky light polarization measurements by optimal design of the device used. The central part of the system is composed of a Charge Coupled Device (CCD) camera, a fish-eye lens and a linear polarizer. Algorithms for estimating parameters of the polarized light based on three images are derived and the optimal alignments of the polarizer are analyzed. The least-squares estimation is introduced for sky light polarization pattern measurement. The polarization patterns of sky light are obtained using the designed system and they follow almost the same patterns as the single-scattering Rayleigh model. Deviations of polarization angles between observation and theory are analyzed. The largest deviations occur near the sun and anti-sun directions. Ninety percent of the deviations are less than 5° and 40 percent of them are less than 1°. The deviations decrease evidently as the degree of polarization increases. It also shows that the polarization pattern of the cloudy sky is almost identical to that of the blue sky.
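The three-image estimation step can be sketched per pixel: a linear polarizer at angle θ passes I(θ) = (S0 + S1 cos 2θ + S2 sin 2θ)/2, so three alignments give a linear system for the Stokes parameters. The 0°/60°/120° alignments below are a common choice assumed for illustration; the paper's optimal alignments may differ:

```python
import numpy as np

thetas = np.deg2rad([0.0, 60.0, 120.0])
M = 0.5 * np.column_stack([np.ones(3), np.cos(2 * thetas), np.sin(2 * thetas)])

def polarization(i0, i60, i120):
    """Stokes parameters from three polarizer images at one pixel."""
    s0, s1, s2 = np.linalg.solve(M, np.array([i0, i60, i120]))
    dolp = np.hypot(s1, s2) / s0          # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)        # angle of polarization, radians
    return dolp, aop

# synthetic pixel: light that is 60% polarized at 30 degrees
s_true = np.array([1.0,
                   0.6 * np.cos(np.deg2rad(60.0)),
                   0.6 * np.sin(np.deg2rad(60.0))])
i0, i60, i120 = M @ s_true
dolp, aop = polarization(i0, i60, i120)
```

With more than three polarizer angles the same matrix becomes tall and `np.linalg.lstsq` gives the least-squares estimate the paper introduces.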
Design of a Device for Sky Light Polarization Measurements
Wang, Yujie; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Xian, Zhiwen; Ma, Tao
2014-01-01
Sky polarization patterns can be used both as indicators of atmospheric turbidity and as a sun compass for navigation. The objective of this study is to improve the precision of sky light polarization measurements by optimal design of the device used. The central part of the system is composed of a Charge Coupled Device (CCD) camera, a fish-eye lens and a linear polarizer. Algorithms for estimating parameters of the polarized light based on three images are derived and the optimal alignments of the polarizer are analyzed. The least-squares estimation is introduced for sky light polarization pattern measurement. The polarization patterns of sky light are obtained using the designed system and they follow almost the same patterns as the single-scattering Rayleigh model. Deviations of polarization angles between observation and theory are analyzed. The largest deviations occur near the sun and anti-sun directions. Ninety percent of the deviations are less than 5° and 40 percent of them are less than 1°. The deviations decrease evidently as the degree of polarization increases. It also shows that the polarization pattern of the cloudy sky is almost identical to that of the blue sky. PMID:25196003
Yarazavi, Mina; Noroozian, Ebrahim
2018-02-13
A novel sol-gel coating on a stainless-steel fiber was developed for the first time for the headspace solid-phase microextraction and determination of α-bisabolol with gas chromatography and flame ionization detection. The parameters influencing the efficiency of the solid-phase microextraction process, such as extraction time and temperature, pH, and ionic strength, were optimized by the experimental design method. Under optimized conditions, the linear range was between 0.0027 and 100 μg/mL. The relative standard deviations determined at 0.01 and 1.0 μg/mL concentration levels (n = 3), respectively, were as follows: intraday relative standard deviations 3.4 and 3.3%; interday relative standard deviations 5.0 and 4.3%; and fiber-to-fiber relative standard deviations 6.0 and 3.5%. The relative recovery values were 90.3 and 101.4% at 0.01 and 1.0 μg/mL spiking levels, respectively. The proposed method was successfully applied to various real samples containing α-bisabolol. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Computational wing optimization and comparisons with experiment for a semi-span wing model
NASA Technical Reports Server (NTRS)
Waggoner, E. G.; Haney, H. P.; Ballhaus, W. F.
1978-01-01
A computational wing optimization procedure was developed and verified by an experimental investigation of a semi-span variable camber wing model in the NASA Ames Research Center 14-foot transonic wind tunnel. The Bailey-Ballhaus transonic potential flow analysis and Woodward-Carmichael linear theory codes were linked to Vanderplaats' constrained minimization routine to optimize model configurations at several subsonic and transonic design points. The 35 deg swept wing is characterized by multi-segmented leading and trailing edge flaps whose hinge lines are swept relative to the leading and trailing edges of the wing. By varying deflection angles of the flap segments, camber and twist distribution can be optimized for different design conditions. Results indicate that numerical optimization can be both an effective and efficient design tool. The optimized configurations had lift-to-drag ratios at the design points as good as or better than the best designs previously tested during an extensive parametric study.
Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft
NASA Astrophysics Data System (ADS)
Rasotto, M.; Armellin, R.; Di Lizia, P.
2016-03-01
An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using the calculus of variations and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.
Improving the FLORIS wind plant model for compatibility with gradient-based optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Jared J.; Gebraad, Pieter MO; Ning, Andrew
The FLORIS (FLOw Redirection and Induction in Steady-state) model, a parametric wind turbine wake model that predicts steady-state wake characteristics based on wind turbine position and yaw angle, was developed for optimization of control settings and turbine locations. This article provides details on changes made to the FLORIS model to make it more suitable for gradient-based optimization. Changes were made to remove discontinuities and add curvature to regions of non-physical zero gradient. Exact gradients for the FLORIS model were obtained using algorithmic differentiation. A set of three case studies demonstrates that using exact gradients with gradient-based optimization reduces the number of function calls by several orders of magnitude. The case studies also show that adding curvature improves convergence behavior, allowing gradient-based optimization algorithms used with the FLORIS model to more reliably find better solutions to wind farm optimization problems.
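The zero-gradient problem described in this abstract can be illustrated with a toy wake profile (this is not the FLORIS model itself; the 0.3 deficit, unit radius, and logistic sharpness are made-up numbers). A top-hat deficit has exactly zero gradient outside the wake, so a gradient-based optimizer placed there receives no signal; a smooth roll-off restores a usable slope:

```python
import math

def deficit_tophat(r, R=1.0):
    """Toy top-hat wake deficit: identically zero (zero gradient) outside |r| < R."""
    return 0.3 if abs(r) < R else 0.0

def deficit_smooth(r, R=1.0, s=10.0):
    """Logistic roll-off: nearly the same plateau, but a nonzero gradient
    outside the nominal boundary gives the optimizer a direction to move."""
    return 0.3 / (1.0 + math.exp(s * (abs(r) - R)))

# Finite-difference slope at r = 2, well outside the wake:
h = 1e-3
flat_slope = (deficit_tophat(2 + h) - deficit_tophat(2 - h)) / (2 * h)    # exactly zero
smooth_slope = (deficit_smooth(2 + h) - deficit_smooth(2 - h)) / (2 * h)  # small but negative
```

The smoothed form trades a tiny modeling error at the wake edge for a well-defined search direction everywhere, which is the essence of the curvature changes the article describes.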
NASA Astrophysics Data System (ADS)
Ermida, S. L.; Trigo, I. F.; DaCamara, C.; Ghent, D.
2017-12-01
Land surface temperature (LST) values retrieved from satellite measurements in the thermal infrared (TIR) may be strongly affected by spatial anisotropy. This effect introduces significant discrepancies among LST estimations from different sensors, overlapping in space and time, that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent the ensemble of directional radiometric temperatures of all surface elements within the FOV. Angular effects on LST are here conveniently estimated by means of a parametric model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. Two models are consistently analyzed to evaluate their performance and to assess their respective potential to correct directional effects on LST for a wide range of surface conditions in terms of tree coverage, vegetation density, and surface emissivity. We also propose an optimization of the correction of directional effects through a synergistic use of both models. The models are calibrated using LST data as provided by two sensors: MODIS on board NASA's TERRA and AQUA, and SEVIRI on board EUMETSAT's MSG. As shown in our previous feasibility studies, the sampling of illumination and view angles has a high impact on the model parameters. This impact may be mitigated when the sampling size is increased by aggregating pixels with similar surface conditions. Here we propose a methodology where the land surface is stratified by means of a cluster analysis using information on land cover type, fraction of vegetation cover and topography. The models are then adjusted to LST data corresponding to each cluster. It is shown that the quality of the cluster-based models is very close to that of the pixel-based ones. Furthermore, the reduced number of parameters allows improving the model through the incorporation of a seasonal component. The application of the procedure discussed here towards the harmonization of LST products from multiple sensors has been tested within the framework of the ESA DUE GlobTemperature project. It is also expected to help the characterization of directional effects in LST products generated within the EUMETSAT LSA SAF.
2006-04-21
C. M., and Prendergast, J. P., 2002, "Thermal Analysis of Hypersonic Inlet Flow with Exergy-Based Design Methods," International Journal of Applied... parametric study of the PS and its components is first presented in order to show the type of detailed information on internal system losses which an exergy... Thermoeconomic Isolation Applied to the Optimal Synthesis/Design of an Advanced Fighter Aircraft System," International Journal of Thermodynamics, ICAT
NASA Technical Reports Server (NTRS)
Tarras, A.
1987-01-01
The problem of stabilization/pole placement under structural constraints of large scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is to provide a bibliographic survey of the available results concerning the fixed modes (characterization, elimination, control structure selection to avoid them, control design in their absence) and to present the author's contribution to this problem which can be summarized by the use of the mode sensitivity concept to detect or to avoid them, the use of vibrational control to stabilize them, and the addition of parametric robustness considerations to design an optimal decentralized robust control.
Parametric study of rock pile thermal storage for solar heating and cooling phase 1
NASA Technical Reports Server (NTRS)
Saha, H.
1977-01-01
Test data and an analysis of the heat transfer characteristics of a solar thermal energy storage bed utilizing water-filled cans as the energy storage medium are presented. An attempt was made to optimize can size, can arrangement, and bed flow rates by experimental and analytical means. Liquid-filled cans, as storage media, combine the benefits of solids such as rocks with those of liquids such as water. It was found that this combination of solid and liquid media shows unique heat transfer and heat content characteristics and is well suited for use with solar air systems for space and hot-water heating. An extensive parametric study was made of the heat transfer characteristics of rocks, of other solids, and of solid containers filled with liquids.
Dissipative particle dynamics: Systematic parametrization using water-octanol partition coefficients
NASA Astrophysics Data System (ADS)
Anderson, Richard L.; Bray, David J.; Ferrante, Andrea S.; Noro, Massimo G.; Stott, Ian P.; Warren, Patrick B.
2017-09-01
We present a systematic, top-down, thermodynamic parametrization scheme for dissipative particle dynamics (DPD) using water-octanol partition coefficients, supplemented by water-octanol phase equilibria and pure liquid phase density data. We demonstrate the feasibility of computing the required partition coefficients in DPD using brute-force simulation, within an adaptive semi-automatic staged optimization scheme. We test the methodology by fitting to experimental partition coefficient data for twenty-one small molecules in five classes comprising alcohols and poly-alcohols, amines, ethers and simple aromatics, and alkanes (i.e., hexane). Finally, we illustrate the transferability of a subset of the determined parameters by calculating the critical micelle concentrations and mean aggregation numbers of selected alkyl ethoxylate surfactants, in good agreement with reported experimental values.
Optimal Design of Material and Process Parameters in Powder Injection Molding
NASA Astrophysics Data System (ADS)
Ayad, G.; Barriere, T.; Gelin, J. C.; Song, J.; Liu, B.
2007-04-01
The paper is concerned with optimization and parametric identification for the different stages of the Powder Injection Molding process, which consists first of injection of a powder mixture with polymer binder and then of sintering of the resulting powder part by solid-state diffusion. The first part describes an original methodology to optimize the process and geometry parameters in the injection stage based on the combination of design of experiments and an adaptive Response Surface Modeling. The second part of the paper describes the identification strategy proposed for the sintering stage, using the identification of sintering parameters from dilatometric curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of material and process parameters for manufacturing a ceramic femoral implant. It is demonstrated that the proposed approach gives satisfactory results.
NASA Astrophysics Data System (ADS)
Lovell, T. Alan; Schmidt, D. K.
1994-03-01
The class of hypersonic vehicle configurations with single stage-to-orbit (SSTO) capability reflect highly integrated airframe and propulsion systems. These designs are also known to exhibit a large degree of interaction between the airframe and engine dynamics. Consequently, even simplified hypersonic models are characterized by tightly coupled nonlinear equations of motion. In addition, hypersonic SSTO vehicles present a major system design challenge; the vehicle's overall mission performance is a function of its subsystem efficiencies including structural, aerodynamic, propulsive, and operational. Further, all subsystem efficiencies are interrelated, hence, independent optimization of the subsystems is not likely to lead to an optimum design. Thus, it is desired to know the effect of various subsystem efficiencies on overall mission performance. For the purposes of this analysis, mission performance will be measured in terms of the payload weight inserted into orbit. In this report, a trajectory optimization problem is formulated for a generic hypersonic lifting body for a specified orbit-injection mission. A solution method is outlined, and results are detailed for the generic vehicle, referred to as the baseline model. After evaluating the performance of the baseline model, a sensitivity study is presented to determine the effect of various subsystem efficiencies on mission performance. This consists of performing a parametric analysis of the basic design parameters, generating a matrix of configurations, and determining the mission performance of each configuration. Also, the performance loss due to constraining the total head load experienced by the vehicle is evaluated. The key results from this analysis include the formulation of the sizing problem for this vehicle class using trajectory optimization, characteristics of the optimal trajectories, and the subsystem design sensitivities.
On the use of PGD for optimal control applied to automated fibre placement
NASA Astrophysics Data System (ADS)
Bur, N.; Joyot, P.
2017-10-01
Automated Fibre Placement (AFP) is an emerging manufacturing process for composite structures. Despite its conceptual simplicity it involves many complexities related to the necessity of melting the thermoplastic at the tape-substrate interface, ensuring the consolidation that requires the diffusion of molecules, and controlling the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since the many parameters involved in the characterisation of the material and the process would require a plethora of trial-and-error experiments or numerical simulations. Using reduced order modelling such as the so-called Proper Generalised Decomposition method allows the construction of a multi-parametric solution taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power knowing the temperature field by particularizing an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem from the heat equation and optimality criteria. To circumvent numerical issues due to the ill-conditioned system, we propose an algorithm based on Uzawa's method. That way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we get both the temperature field and the required heat flux to reach a parametric optimal temperature on a given zone.
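The Uzawa step mentioned in this abstract can be sketched on a generic equality-constrained quadratic saddle-point system, a stand-in for the discretized dual problem (the matrices, step size, and iteration count below are illustrative, not from the paper):

```python
import numpy as np

def uzawa(A, b, B, c, rho=1.0, iters=500):
    """Uzawa iteration for the saddle-point system
        [A  B^T][x  ]   [b]
        [B  0  ][lam] = [c]
    Each sweep solves the primal block exactly, then performs one gradient
    ascent step on the multiplier enforcing the constraint B x = c."""
    lam = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, b - B.T @ lam)   # primal update (A assumed SPD)
        lam = lam + rho * (B @ x - c)           # dual ascent on the residual
    return x, lam
```

For convergence rho must stay below 2 divided by the largest eigenvalue of the Schur complement B·A⁻¹·Bᵀ; in ill-conditioned settings like the one the authors mention, that bound is what makes the plain iteration slow and motivates their tailored algorithm.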
Braithwaite, Susan S; Umpierrez, Guillermo E; Chase, J Geoffrey
2013-09-01
Group metrics are described to quantify blood glucose (BG) variability of hospitalized patients. The "multiplicative surrogate standard deviation" (MSSD) is the reverse-transformed group mean of the standard deviations (SDs) of the logarithmically transformed BG data set of each patient. The "geometric group mean" (GGM) is the reverse-transformed group mean of the means of the logarithmically transformed BG data set of each patient. Before reverse transformation is performed, the mean of means and mean of SDs each has its own SD, which becomes a multiplicative standard deviation (MSD) after reverse transformation. Statistical predictions and comparisons of parametric or nonparametric tests remain valid after reverse transformation. A subset of a previously published BG data set of 20 critically ill patients from the first 72 h of treatment under the SPRINT protocol was transformed logarithmically. After rank ordering according to the SD of the logarithmically transformed BG data of each patient, the cohort was divided into two equal groups, those having lower or higher variability. For the entire cohort, the GGM was 106 (÷/× 1.07) mg/dl, and MSSD was 1.24 (÷/× 1.07). For the subgroups having lower and higher variability, respectively, the GGM did not differ, 104 (÷/× 1.07) versus 109 (÷/× 1.07) mg/dl, but the MSSD differed, 1.17 (÷/× 1.03) versus 1.31 (÷/× 1.05), p = .00004. By using the MSSD with its MSD, groups can be characterized and compared according to glycemic variability of individual patient members. © 2013 Diabetes Technology Society.
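The two group metrics just defined reduce to a few lines of arithmetic. A minimal sketch (function name ours; the sample-SD convention is an assumption the abstract does not specify):

```python
import math

def geometric_group_stats(patients_bg):
    """Geometric group mean (GGM) and multiplicative surrogate SD (MSSD).
    Each patient's BG series is log-transformed; the per-patient means and
    SDs are averaged across the group, then reverse-transformed with exp()."""
    means, sds = [], []
    for bg in patients_bg:
        logs = [math.log(v) for v in bg]
        m = sum(logs) / len(logs)
        var = sum((x - m) ** 2 for x in logs) / (len(logs) - 1)  # sample variance
        means.append(m)
        sds.append(math.sqrt(var))
    ggm = math.exp(sum(means) / len(means))   # geometric group mean
    mssd = math.exp(sum(sds) / len(sds))      # multiplicative surrogate SD
    return ggm, mssd
```

After reverse transformation the spread multiplies and divides rather than adds and subtracts, which is why the abstract reports values with the ÷/× notation.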
Sun, Chao; Feng, Wenquan; Du, Songlin
2018-01-01
As multipath is one of the dominating error sources for high accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier (BOC) modulation, as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability across different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are the high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we take the operations of segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals.
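The segmentation-and-averaging step described here is the same variance-reduction trick used in Welch-style spectrum estimation. A toy sketch (plain unwindowed DFT, names ours, not the authors' estimator):

```python
import cmath
import math

def averaged_fft_mag(x, seg_len):
    """Split the record into non-overlapping segments, take each segment's
    DFT, and average the magnitudes. Averaging n_seg segment spectra lowers
    the estimator variance at the cost of frequency resolution."""
    n_seg = len(x) // seg_len
    acc = [0.0] * seg_len
    for s in range(n_seg):
        seg = x[s * seg_len:(s + 1) * seg_len]
        for k in range(seg_len):
            # Direct O(N^2) DFT for clarity; a real implementation uses an FFT.
            X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / seg_len)
                    for n in range(seg_len))
            acc[k] += abs(X) / n_seg
    return acc
```

Averaging over n_seg segments cuts the spectral-estimate variance by roughly n_seg while shortening each transform, which mirrors the noise/load trade the abstract exploits.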
Optimizing 4DCBCT projection allocation to respiratory bins.
O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J
2014-10-07
4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling, a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate whether sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal-density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking, and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than with conventional phase-based binning and 59%-76% smaller than with conventional displacement binning, indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%-90% smaller when using optimized projection allocation than when using conventional phase-based binning, suggesting more uniform marker segmentation and less motion blur.
Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement binned 4DCBCT images in clinical applications.
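The streak metric reported in this abstract is simple to state: the standard deviation of the angular gaps between consecutive projections in a bin, with perfectly equispaced projections scoring zero. A sketch of that quantity (our own implementation, not the authors' code):

```python
import math

def angular_separation_sd(angles_deg):
    """Standard deviation of the angular gaps between consecutive projection
    angles on the full circle; 0 means perfectly equispaced projections
    (fewest streaks), larger values mean clustering and under-sampled arcs."""
    a = sorted(x % 360.0 for x in angles_deg)
    gaps = [j - i for i, j in zip(a, a[1:])] + [360.0 - a[-1] + a[0]]
    mean_gap = 360.0 / len(a)                  # gaps always average to 360/n
    return math.sqrt(sum((g - mean_gap) ** 2 for g in gaps) / len(gaps))
```

Minimizing this quantity per respiratory bin is what drives the optimized projection allocation toward uniformly spaced, streak-free reconstructions.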
The variance of length of stay and the optimal DRG outlier payments.
Felder, Stefan
2009-09-01
Prospective payment schemes in health care often include supply-side insurance for cost outliers. In hospital reimbursement, prospective payments for patient discharges, based on their classification into diagnosis related group (DRGs), are complemented by outlier payments for long stay patients. The outlier scheme fixes the length of stay (LOS) threshold, constraining the profit risk of the hospitals. In most DRG systems, this threshold increases with the standard deviation of the LOS distribution. The present paper addresses the adequacy of this DRG outlier threshold rule for risk-averse hospitals with preferences depending on the expected value and the variance of profits. It first shows that the optimal threshold solves the hospital's tradeoff between higher profit risk and lower premium loading payments. It then demonstrates for normally distributed truncated LOS that the optimal outlier threshold indeed decreases with an increase in the standard deviation.
Using string invariants for prediction: searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.
Long-distance practical quantum key distribution by entanglement swapping.
Scherer, Artur; Sanders, Barry C; Tittel, Wolfgang
2011-02-14
We develop a model for practical, entanglement-based long-distance quantum key distribution employing entanglement swapping as a key building block. Relying only on existing off-the-shelf technology, we show how to optimize resources so as to maximize secret key distribution rates. The tools comprise lossy transmission links, such as telecom optical fibers or free space, parametric down-conversion sources of entangled photon pairs, and threshold detectors that are inefficient and have dark counts. Our analysis provides the optimal trade-off between detector efficiency and dark counts, which are usually competing, as well as the optimal source brightness that maximizes the secret key rate for specified distances (i.e. loss) between sender and receiver.
NASA Astrophysics Data System (ADS)
Machnes, Shai; AsséMat, Elie; Tannor, David; Wilhelm, Frank
Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific ansatzes and constraints. Superconducting qubits present the additional requirement that pulses have simple parametrizations, so they can be further calibrated in the experiment, to compensate for uncertainties in system characterization. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control algorithm, GOAT, which satisfies all the above requirements. In part II we shall demonstrate the algorithm's capabilities, by using GOAT to optimize fast high-accuracy pulses for two leading superconducting qubits architectures - Xmons and IBM's flux-tunable couplers.
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis is performed by the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) coefficient of crop yield response to water, ii) average daily gain in weight of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. Those results reveal high uncertainties in these parameters that can dramatically skew the results of the model, indicating the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
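A One-Factor-At-A-Time screening pass like the one described reduces to perturbing each input alone and normalizing the response. A generic sketch (the elasticity-style index below is one common choice, not necessarily the paper's exact definition; model and names are hypothetical):

```python
def oat_sensitivity(model, base, deltas):
    """One-Factor-At-A-Time screening: perturb each parameter alone and
    record a normalized (elasticity-style) index |dy/y| / |dp/p|.
    `base` maps parameter names to nominal values; `deltas` to perturbations."""
    y0 = model(base)
    index = {}
    for name, d in deltas.items():
        p = dict(base)         # copy so only one factor moves at a time
        p[name] += d
        index[name] = abs((model(p) - y0) / y0) / abs(d / base[name])
    return index
```

On a toy gross-margin model y = a²·b, a 1% perturbation yields an index near 2 for a and exactly 1 for b, matching the analytic elasticities, which is the kind of ranking the abstract reports for its 10 parameters.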
Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O
2015-01-01
To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
Optimal Control of the Parametric Oscillator
ERIC Educational Resources Information Center
Andresen, B.; Hoffmann, K. H.; Nulton, J.; Tsirlin, A.; Salamon, P.
2011-01-01
We present a solution to the minimum time control problem for a classical harmonic oscillator to reach a target energy E_T from a given initial state (q_i, p_i) by controlling its frequency ω, with ω_min ≤ ω ≤ ω_max. A brief synopsis…
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: Handle complex geometry problems; Control discretization errors via solution-adaptive mesh refinement; Focus on aerodynamic databases of parametric and optimization studies: 1. Accuracy: satisfy prescribed error bounds 2. Robustness and speed: may require over 10^5 mesh generations 3. Automation: avoid user supervision Obtain "expert meshes" independent of user skill; and Run every case adaptively in production settings.
Program For Optimization Of Nuclear Rocket Engines
NASA Technical Reports Server (NTRS)
Plebuch, R. K.; Mcdougall, J. K.; Ridolphi, F.; Walton, James T.
1994-01-01
NOP is a versatile digital-computer program developed for parametric analysis of beryllium-reflected, graphite-moderated nuclear rocket engines. Facilitates analysis of performance of engine with respect to such considerations as specific impulse, engine power, type of engine cycle, and engine-design constraints arising from complications of fuel loading and internal gradients of temperature. Predicts minimum weight for specified performance.
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
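The thresholding method mentioned above, applying a cutoff to a maximal weighted spanning forest, can be sketched as follows. This is a generic Kruskal-based illustration, not the authors' formulation; the example vertex set and weights are hypothetical.

```python
def max_spanning_forest(n, edges):
    """Kruskal on weights sorted descending -> maximal weighted spanning forest.
    edges: list of (weight, u, v) tuples over vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    forest = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                         # keep edge only if it joins two trees
            parent[ru] = rv
            forest.append((w, u, v))
    return forest

def threshold_partition(n, forest, tau):
    """Drop forest edges with weight < tau; connected components are the clusters."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for w, u, v in forest:
        if w >= tau:
            parent[find(u)] = find(v)
    roots = {find(i) for i in range(n)}
    return [[i for i in range(n) if find(i) == r] for r in sorted(roots)]

# weighted relation on V = {0..4}: two tight groups joined by one weak edge
edges = [(0.9, 0, 1), (0.8, 1, 2), (0.2, 2, 3), (0.85, 3, 4)]
forest = max_spanning_forest(5, edges)
clusters = threshold_partition(5, forest, tau=0.5)
```

Varying the threshold tau sweeps out the hierarchy of partitions that the paper's unified treatment studies.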
Internal aerodynamics of a generic three-dimensional scramjet inlet at Mach 10
NASA Technical Reports Server (NTRS)
Holland, Scott D.
1995-01-01
A combined computational and experimental parametric study of the internal aerodynamics of a generic three-dimensional sidewall compression scramjet inlet configuration at Mach 10 has been performed. The study was designed to demonstrate the utility of computational fluid dynamics as a design tool in hypersonic inlet flow fields, to provide a detailed account of the nature and structure of the internal flow interactions, and to provide a comprehensive surface property and flow field database to determine the effects of contraction ratio, cowl position, and Reynolds number on the performance of a hypersonic scramjet inlet configuration. The work proceeded in several phases: the initial inviscid assessment of the internal shock structure, the preliminary computational parametric study, the coupling of the optimized configuration with the physical limitations of the facility, the wind tunnel blockage assessment, and the computational and experimental parametric study of the final configuration. Good agreement between computation and experimentation was observed in the magnitude and location of the interactions, particularly for weakly interacting flow fields. Large-scale forward separations resulted when the interaction strength was increased by increasing the contraction ratio or decreasing the Reynolds number.
Accelerating atomistic simulations through self-learning bond-boost hyperdynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, Danny; Voter, Arthur F
2008-01-01
By altering the potential energy landscape on which molecular dynamics are carried out, the hyperdynamics method of Voter enables one to significantly accelerate the simulation of state-to-state dynamics of physical systems. While very powerful, successful application of the method entails solving the subtle problem of the parametrization of the so-called bias potential. In this study, we first clarify the constraints that must be obeyed by the bias potential and demonstrate that fast sampling of the biased landscape is key to obtaining proper kinetics. We then propose an approach by which the bond boost potential of Miron and Fichthorn can be safely parametrized based on data acquired in the course of a molecular dynamics simulation. Finally, we introduce a procedure, the Self-Learning Bond Boost method, in which the parametrization is efficiently carried out on-the-fly for each new state that is visited during the simulation by safely ramping up the strength of the bias potential to its optimal value. The stability and accuracy of the method are demonstrated.
Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics
Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven
2011-01-01
Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as to characterize the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects whereby gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
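A minimal sketch of the univariate signed-rank testing and false-discovery-rate control discussed above, assuming post-minus-pre differences for a single biomarker. The normal approximation (without tie or continuity corrections) and the example values are illustrative only, not the paper's multivariate procedure.

```python
import math

def wilcoxon_signed_rank(diffs):
    """W+ statistic and two-sided normal-approximation p-value
    (zero differences dropped; no tie-variance correction)."""
    d = [x for x in diffs if x != 0]
    n = len(d)
    ranked = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                              # average ranks over ties in |d|
        j = i
        while j + 1 < n and abs(d[ranked[j + 1]]) == abs(d[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

def benjamini_hochberg(pvals, q=0.05):
    """Indices of hypotheses rejected at false discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = -1
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            cutoff = rank
    return sorted(order[:cutoff]) if cutoff > 0 else []

diffs = [1.2, 0.8, 2.1, 1.5, 0.9, 1.1, 0.4, 1.8]   # post-minus-pre, one biomarker
w, p = wilcoxon_signed_rank(diffs)
```

With many biomarkers, the per-biomarker p-values would be fed to `benjamini_hochberg` (or a family-wise procedure) to control the overall error rate, as the abstract describes.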
Scalar-tensor theories and modified gravity in the wake of GW170817
NASA Astrophysics Data System (ADS)
Langlois, David; Saito, Ryo; Yamauchi, Daisuke; Noui, Karim
2018-03-01
Theories of dark energy and modified gravity can be strongly constrained by astrophysical or cosmological observations, as illustrated by the recent observation of the gravitational wave event GW170817 and of its electromagnetic counterpart GRB 170817A, which showed that the speed of gravitational waves, c_g, is the same as the speed of light, within deviations of order 10^-15. This observation implies severe restrictions on scalar-tensor theories, in particular theories whose action depends on second derivatives of a scalar field. Working in the very general framework of degenerate higher-order scalar-tensor (DHOST) theories, which encompass Horndeski and beyond Horndeski theories, we present the DHOST theories that satisfy c_g = c. We then examine, for these theories, the screening mechanism that suppresses scalar interactions on small scales, namely the Vainshtein mechanism, and compute the corresponding gravitational laws for a nonrelativistic spherical body. We show that it can lead to a deviation from standard gravity inside matter, parametrized by three coefficients which satisfy a consistency relation and can be constrained by present and future astrophysical observations.
Cosmological constraints and comparison of viable f(R) models
NASA Astrophysics Data System (ADS)
Pérez-Romero, Judit; Nesseris, Savvas
2018-01-01
In this paper we present cosmological constraints on several well-known f(R) models, but also on a new class of models that are variants of the Hu-Sawicki one, of the form f(R) = R - 2Λ/(1 + b y(R,Λ)), that interpolate between the cosmological constant model and a matter dominated universe for different values of the parameter b, which is usually expected to be small for viable models and which in practice measures the deviation from general relativity. We use the latest growth rate, cosmic microwave background, baryon acoustic oscillations, supernovae type Ia and Hubble parameter data to place stringent constraints on the models and to compare them to the cosmological constant model, but also to other viable f(R) models such as the Starobinsky or the degenerate hypergeometric models. We find that these kinds of Hu-Sawicki variant parametrizations are in general compatible with the currently available data and can provide useful toy models to explore the available functional space of f(R) models, something very useful with the current and upcoming surveys that will test deviations from general relativity.
Automated Training of ReaxFF Reactive Force Fields for Energetics of Enzymatic Reactions.
Trnka, Tomáš; Tvaroška, Igor; Koča, Jaroslav
2018-01-09
Computational studies of the reaction mechanisms of various enzymes are nowadays based almost exclusively on hybrid QM/MM models. Unfortunately, the success of this approach strongly depends on the selection of the QM region, and computational cost is a crucial limiting factor. An interesting alternative is offered by empirical reactive molecular force fields, especially the ReaxFF potential developed by van Duin and co-workers. However, even though an initial parametrization of ReaxFF for biomolecules already exists, it does not provide the desired level of accuracy. We have conducted a thorough refitting of the ReaxFF force field to improve the description of reaction energetics. To minimize the human effort required, we propose a fully automated approach to generate an extensive training set comprised of thousands of different geometries and molecular fragments starting from a few model molecules. Electrostatic parameters were optimized with QM electrostatic potentials as the main target quantity, avoiding excessive dependence on the choice of reference atomic charges and improving robustness and transferability. The remaining force field parameters were optimized using the VD-CMA-ES variant of the CMA-ES optimization algorithm. This method is able to optimize hundreds of parameters simultaneously with unprecedented speed and reliability. The resulting force field was validated on a real enzymatic system, ppGalNAcT2 glycosyltransferase. The new force field offers excellent qualitative agreement with the reference QM/MM reaction energy profile, matches the relative energies of intermediate and product minima almost exactly, and reduces the overestimation of transition state energies by 27-48% compared with the previous parametrization.
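The evolutionary parameter fitting described above can be illustrated with a plain (mu, lambda) evolution strategy. This toy sketch omits the covariance adaptation that distinguishes (VD-)CMA-ES, and the quadratic "force-field fitting" objective and all names are invented for illustration.

```python
import random

def evolution_strategy(loss, x0, sigma=0.5, lam=20, mu=5, iters=60, seed=1):
    """Plain (mu, lambda) evolution strategy: sample offspring around the
    current mean, rank by loss, recombine the elite. A simplified stand-in
    for CMA-ES-style optimisers (no covariance or step-size adaptation
    beyond a fixed geometric decay)."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        offspring = [[xi + sigma * rng.gauss(0, 1) for xi in x] for _ in range(lam)]
        offspring.sort(key=loss)                      # best candidates first
        elite = offspring[:mu]
        x = [sum(col) / mu for col in zip(*elite)]    # recombine elite mean
        sigma *= 0.95                                 # simple step-size decay
    return x

# toy "force-field fitting": recover parameters minimising squared error
target = [1.0, -2.0, 0.5]
fitted = evolution_strategy(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)),
                            x0=[0.0, 0.0, 0.0])
```

In the paper's setting, `loss` would compare force-field predictions against thousands of QM reference energies and electrostatic potentials rather than a synthetic quadratic.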
Atlas of optimal coil orientation and position for TMS: A computational study.
Gomez-Tames, Jose; Hamasaka, Atsushi; Laakso, Ilkka; Hirata, Akimasa; Ugawa, Yoshikazu
2018-04-17
Transcranial magnetic stimulation (TMS) activates target brain structures in a non-invasive manner. The optimal orientation of the TMS coil for the motor cortex is well known and can be estimated using motor evoked potentials. However, there are no easily measurable responses for activation of other cortical areas and the optimal orientation for these areas is currently unknown. This study investigated the electric field strength, optimal coil orientation, and relative locations to optimally stimulate the target cortex based on computed electric field distributions. A total of 518,616 stimulation scenarios were studied using realistic head models (2401 coil locations × 12 coil angles × 18 head models). Inter-subject registration methods were used to generate an atlas of optimized TMS coil orientations on locations on the standard brain. We found that the maximum electric field strength is greater in primary somatosensory cortex and primary motor cortex than in other cortical areas. Additionally, a universal optimal coil orientation applicable to most subjects is more feasible at the primary somatosensory cortex and primary motor cortex. We confirmed that optimal coil angle follows the anatomical shape of the hand motor area to realize personalized optimization of TMS. Finally, on average, the optimal coil positions for TMS on the scalp deviated 5.5 mm from the scalp points with minimum cortex-scalp distance. This deviation was minimal at the premotor cortex and primary motor cortex. Personalized optimal coil orientation is preferable for obtaining the most effective stimulation. Copyright © 2018. Published by Elsevier Inc.
Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle
Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...
2017-05-18
We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.
Mourocq, Emeline; Bize, Pierre; Bouwhuis, Sandra; Bradley, Russell; Charmantier, Anne; de la Cruz, Carlos; Drobniak, Szymon M; Espie, Richard H M; Herényi, Márton; Hötker, Hermann; Krüger, Oliver; Marzluff, John; Møller, Anders P; Nakagawa, Shinichi; Phillips, Richard A; Radford, Andrew N; Roulin, Alexandre; Török, János; Valencia, Juliana; van de Pol, Martijn; Warkentin, Ian G; Winney, Isabel S; Wood, Andrew G; Griesser, Michael
2016-02-01
Fitness can be profoundly influenced by the age at first reproduction (AFR), but to date the AFR-fitness relationship has only been investigated intraspecifically. Here, we investigated the relationship between AFR and average lifetime reproductive success (LRS) across 34 bird species. We assessed differences in the deviation of the Optimal AFR (i.e., the species-specific AFR associated with the highest LRS) from the age at sexual maturity, considering potential effects of life history as well as social and ecological factors. Most individuals adopted the species-specific Optimal AFR, and both the mean and Optimal AFR of species correlated positively with life span. Interspecific deviations of the Optimal AFR were associated with indices reflecting a change in LRS or survival as a function of AFR: a delayed AFR was beneficial in species where early AFR was associated with a decrease in subsequent survival or reproductive output. Overall, our results suggest that a delayed onset of reproduction beyond maturity is an optimal strategy explained by a long life span and costs of early reproduction. By providing the first empirical confirmations of key predictions of life-history theory across species, this study contributes to a better understanding of life-history evolution. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
NASA Technical Reports Server (NTRS)
Smalley, Larry L.
1998-01-01
Project Satellite Energy Exchange (SEE) is a free-flying, high altitude satellite that utilizes space to construct a passive, low-temperature, nano-g environment in order to accurately measure the poorly known gravitational constant G plus other gravitational parameters that are difficult to measure in an earth-based laboratory. Eventually data received from SEE must be analyzed using a model of the gravitational interaction including parameters that describe deviations from general relativity. One model that can be used to fit the data is the parametrized post-Newtonian (PPN) approximation of general relativity (GR), which introduces ten parameters that have specified values in GR. It is the lowest-order consistent approximation that contains nonlinear terms. General relativity predicts that the Robertson parameters γ (light deflection) and β (advance of the perihelion) are both 1. Another eight parameters, α_k (k = 1, 2, 3), ζ_k (k = 1, 2, 3, 4) and ξ, are all zero in GR. Nonzero values of the α_k parameters predict preferred-frame effects; of the ζ_k, violations of globally conserved quantities such as mass, momentum and angular momentum; and of ξ, a contribution from the Whitehead theory of gravitation, once thought to be equivalent to GR. In addition, there is the possibility that there may be a preferred frame for the universe. If such a frame exists, then all observers must measure the velocity ω of their motion with respect to this universal rest frame. Such a frame is somewhat reminiscent of the concept of the ether, which was supposedly the frame in which the velocity of light took the value c predicted by special relativity. The SEE mission can also look for deviations from the r^-2 law of Newtonian gravity, adding parameters α and λ for non-Newtonian behavior that describe the magnitude and range of the r^-2 deviations, respectively.
The foundations of GR supposedly agree with Newtonian gravity to first order, so that the parameters α and λ are zero in GR. More important, however, GR subsequently depends on this Newtonian approximation to build up the nonlinear higher-order terms which form the basis of the PPN framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Fingersh, Lee J; Dykes, Katherine L
As wind turbine blade diameters and tower heights increase to capture more energy in the wind, higher structural loads result in more structural support material, increasing the cost of scaling. Weight reductions in the generator translate to overall cost savings for the system. Additive manufacturing facilitates a design-for-functionality approach, thereby removing traditional manufacturing constraints and labor costs. The most feasible additive manufacturing technology identified for large, direct-drive generators in this study is powder-binder jetting of a sand cast mold. A parametric finite element analysis optimization study is performed, optimizing for mass and deformation. Also, topology optimization is employed for each parameter-optimized design. The optimized U-beam spoked web design results in a 24 percent reduction in structural mass of the rotor and a 60 percent reduction in radial deflection.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.
NASA Astrophysics Data System (ADS)
Kazmi, K. R.; Khan, F. A.
2008-01-01
In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-η-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasivariational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].
NASA Astrophysics Data System (ADS)
Lee, J.; Bong, H. J.; Ha, J.; Choi, J.; Barlat, F.; Lee, M.-G.
2018-05-01
In this study, a numerical sensitivity analysis of the springback prediction was performed using advanced strain hardening models. In particular, the springback in U-draw bending for dual-phase 780 steel sheets was investigated while focusing on the effect of the initial yield stress determined from the cyclic loading tests. The anisotropic hardening models could reproduce the flow stress behavior under the non-proportional loading condition for the considered parametric cases. However, various identification schemes for determining the yield stress of the anisotropic hardening models significantly influenced the springback prediction. The deviations from the measured springback varied from 4% to 13.5% depending on the identification method.
Theory and Applications of Weakly Interacting Markov Processes
2018-02-03
Moderate deviation principles for stochastic dynamical systems. Boston University, Math Colloquium, March 27, 2015. • Moderate Deviation Principles for… Markov chain approximation method. Submitted. [8] E. Bayraktar and M. Ludkovski. Optimal trade execution in illiquid markets. Math. Finance, 21(4):681-701, 2011. [9] E. Bayraktar and M. Ludkovski. Liquidation in limit order books with controlled intensity. Math. Finance, 24(4):627-650, 2014. [10] P.D
He, Lei; Cheng, Lulu; Hu, Liangliang; Tang, Jianjun; Chen, Xin
2016-01-01
There is increasing recognition of the importance of niche optima in the shift of plant–plant interactions along environmental stress gradients. Here, we investigate whether deviation from niche optima would affect the outcome of plant–plant interactions along a soil acidity gradient (pH = 3.1, 4.1, 5.5 and 6.1) in a pot experiment. We used the acid-tolerant species Lespedeza formosa Koehne as the neighbouring plant and the acid-tolerant species Indigofera pseudotinctoria Mats. or acid-sensitive species Medicago sativa L. as the target plants. Biomass was used to determine the optimal pH and to calculate the relative interaction index (RII). We found that the relationships between RII and the deviation of soil pH from the target's optimal pH were linear for both target species. Both targets were increasingly promoted by the neighbour as pH values deviated from their optima; neighbours benefitted target plants by promoting soil symbiotic arbuscular mycorrhizal fungi, increasing soil organic matter or reducing soil exchangeable aluminium. Our results suggest that the shape of the curve describing the relationship between soil pH and facilitation/competition depends on the soil pH optima of the particular species. PMID:26740568
NASA Astrophysics Data System (ADS)
Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao
2018-03-01
Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preforming billet, fluctuation of the stroke length, and the friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and the folding defect are two key factors that influence the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate and avoiding the folding defect were defined as the objective function and constraint condition in the robust optimization. Then the cross array design was constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate by considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on robust optimization was conducted. Good results were attained in improving die filling and avoiding the folding defect, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.
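The dual-RSM idea above can be sketched as two quadratic response surfaces, one for the mean and one for the standard deviation of the response, combined into a single robust objective. This is a one-variable illustration with hypothetical design points and weighting, not the paper's multi-parameter formulation.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    basis = [[1.0, x, x * x] for x in xs]
    A = [[sum(b[i] * b[j] for b in basis) for j in range(3)] for i in range(3)]
    rhs = [sum(b[i] * y for b, y in zip(basis, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef

def robust_optimum(mean_coef, sd_coef, grid, weight=1.0):
    """Minimise mean + weight*sd over a grid of candidate designs."""
    def ev(c, x):
        return c[0] + c[1] * x + c[2] * x * x
    return min(grid, key=lambda x: ev(mean_coef, x) + weight * ev(sd_coef, x))

# five design points: surrogate mean and standard deviation of die underfilling
xs = [0, 1, 2, 3, 4]
mean_y = [5.0, 2.0, 1.0, 2.0, 5.0]    # mean minimised near x = 2
sd_y = [4.7, 2.2, 0.7, 0.2, 0.7]      # scatter minimised near x = 3
x_star = robust_optimum(fit_quadratic(xs, mean_y), fit_quadratic(xs, sd_y),
                        grid=[i / 10 for i in range(41)])
```

The robust optimum lands between the two individual minima, which is exactly the trade-off a dual-RSM formulation is meant to expose.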
Compact objects in relativistic theories of gravity
NASA Astrophysics Data System (ADS)
Okada da Silva, Hector
2017-05-01
In this dissertation we discuss several aspects of compact objects, i.e. neutron stars and black holes, in relativistic theories of gravity. We start by studying the role of nuclear physics (encoded in the so-called equation of state) in determining the properties of neutron stars in general relativity. We show that low-mass neutron stars are potentially useful astrophysical laboratories that can be used to constrain the properties of the equation of state. More specifically, we show that various bulk properties of these objects, such as their quadrupole moment and tidal deformability, are tightly correlated. Next, we develop a formalism that aims to capture how generic modifications from general relativity affect the structure of neutron stars, as predicted by a broad class of gravity theories, in the spirit of the parametrized post-Newtonian formalism (PPN). Our "post-Tolman-Oppenheimer-Volkoff" formalism provides a toolbox to study both stellar structure and the interior/exterior geometries of static, spherically symmetric relativistic stars. We also apply the formalism to parametrize deviations from general relativity in various astrophysical observables related to neutron stars, including surface redshift, apparent radius, and Eddington luminosity. We then turn our attention to what is arguably the most well-motivated and well-investigated generalization of general relativity: scalar-tensor theory. We start by considering theories where gravity is mediated by a single extra scalar degree of freedom (in addition to the metric tensor). An interesting class of scalar-tensor theories passes all experimental tests in the weak-field regime of gravity, yet considerably deviates from general relativity in the strong-field regime in the presence of matter. A common assumption in modeling neutron stars is that the pressure within these objects is spatially isotropic. 
We relax this assumption and examine how pressure anisotropy affects the mass, radius and moment of inertia of slowly rotating neutron stars, both in general relativity and in scalar-tensor gravity. We show that a sufficient amount of pressure anisotropy results in neutron star models whose properties in scalar-tensor theory deviate significantly from their general relativistic counterparts. Moreover, the presence of anisotropy allows these deviations to be considerable even for values of the theory's coupling parameter for which neutron stars in scalar-tensor theory would be otherwise indistinguishable from those in general relativity. Within scalar-tensor theory we also investigate the effects of the scalar field on the crustal torsional oscillations of neutron stars, which have been associated to quasi-periodic oscillations in the X-ray spectra in the aftermath of giant flares. We show that the presence of the scalar field has an influence on the thickness of the stellar crust, and investigate how it affects the oscillation frequencies. Deviations from the predictions of general relativity can be large for certain values of the theory's coupling parameter. However, the influence of the scalar field is degenerate with uncertainties in the equation of state of the star's crust and microphysics effects (electron screening) for values of the coupling allowed by binary pulsar observations. We also derive the stellar structure equations for slowly-rotating neutron stars in a broader class of scalar-tensor theories in which matter and scalar field are coupled through the so-called disformal coupling. We study in great detail how the disformal coupling affects the structure of neutron stars, and we investigate the existence of universal (equation of state-independent) relations connecting the stellar compactness and moment of inertia. In particular, we find that these universal relations can deviate considerably from the predictions of general relativity. 
(Abstract shortened by ProQuest.).
Shan, Tzu-Ray; van Duin, Adri C T; Thompson, Aidan P
2014-02-27
We have developed a new ReaxFF reactive force field parametrization for ammonium nitrate. Starting with an existing nitramine/TATB ReaxFF parametrization, we optimized it to reproduce electronic structure calculations for dissociation barriers, heats of formation, and crystal structure properties of ammonium nitrate phases. We have used it to predict the isothermal pressure-volume curve and the unreacted principal Hugoniot states. The predicted isothermal pressure-volume curve for phase IV solid ammonium nitrate agreed with electronic structure calculations and experimental data within 10% error for the considered range of compression. The predicted unreacted principal Hugoniot states were approximately 17% stiffer than experimental measurements. We then simulated thermal decomposition during heating to 2500 K. Thermal decomposition pathways agreed with experimental findings.
Economic policy optimization based on both one stochastic model and the parametric control theory
NASA Astrophysics Data System (ADS)
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated from its log-linearization by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimation of the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators computed with the estimated nonlinear model and with its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and of the volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
NASA Technical Reports Server (NTRS)
Wright, J. P.; Wilson, D. E.
1976-01-01
Many payloads currently proposed to be flown by the space shuttle system require long-duration cooling in the 3 to 200 K temperature range. Common requirements also exist for certain DOD payloads. Parametric design and optimization studies are reported for multistage and diode heat pipe radiator systems designed to operate in this temperature range. Also optimized are ground test systems for two long-life passive thermal control concepts operating under specified space environmental conditions. The ground test systems evaluated are ultimately intended to evolve into flight test qualification prototypes for early shuttle flights.
Composite panel development at JPL
NASA Technical Reports Server (NTRS)
Mcelroy, Paul; Helms, Rich
1988-01-01
Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high precision reflector panels for LDR. The materials properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.
Optimal control of parametric oscillations of compressed flexible bars
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
In this paper the problem of damping the oscillations of linear systems with piecewise-constant control is solved. The motion of the bar construction is reduced to the form described by Hill's differential equation using the Bubnov-Galerkin method. To calculate the switching moments of the one-sided control, the method of sequential linear programming is used. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are represented.
NASA's Human Mission to a Near-Earth Asteroid: Landing on a Moving Target
NASA Technical Reports Server (NTRS)
Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.
2011-01-01
This paper describes a Bayesian approach for comparing the productivity and cost-risk tradeoffs of sending versus not sending one or more robotic surveyor missions prior to a human mission to land on an asteroid. The expected value of sample information based on productivity, combined with parametric variations in the prior probability that an asteroid might be found suitable for landing, was used to assess the optimal number of spacecraft and asteroids to survey. The analysis supports the value of surveyor missions to asteroids and indicates that one launch with two spacecraft going simultaneously to two independent asteroids appears optimal.
NASA Astrophysics Data System (ADS)
Harshan, Suraj
The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall, the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux.
The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization and sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in the model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows an improvement in the performance of outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
Mean-deviation analysis in the theory of choice.
Grechuk, Bogdan; Molyboha, Anton; Zabarankin, Michael
2012-08-01
Mean-deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, continuity, etc. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of the dual utility theory, and a further relaxation of the axioms is shown to lead to the mean-deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories and their possible resolutions are discussed, and application of the mean-deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered. © 2012 Society for Risk Analysis.
Fidelity deviation in quantum teleportation
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir
2018-04-01
We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states and the fidelity deviation is their standard deviation, which is referred to as a concept of fluctuation or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel—we here consider the so-called Werner channel. To characterize our results, we introduce a 2D space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point with the channel noise parameter. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and the Bell inequality violation.
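Assuming the standard convention of a uniform (Haar) average over all pure input states, the two figures of merit described above can be written as:

```latex
\bar{F} = \int d\psi \, F(\psi), \qquad
\Delta F = \sqrt{\int d\psi \, \big(F(\psi) - \bar{F}\big)^2}
         = \sqrt{\overline{F^2} - \bar{F}^{\,2}}
```

The 2D space mentioned in the abstract is then the set of points (F̄, ΔF) traced out as the channel noise parameter varies.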
Ronald E. McRoberts; Grant M. Domke; Qi Chen; Erik Næsset; Terje Gobakken
2016-01-01
The relatively small sampling intensities used by national forest inventories are often insufficient to produce the desired precision for estimates of population parameters unless the estimation process is augmented with auxiliary information, usually in the form of remotely sensed data. The k-Nearest Neighbors (k-NN) technique is a non-parametric, multivariate approach...
NASA Astrophysics Data System (ADS)
Dragan, Laurentiu; Watt, Stephen M.
Computer algebra in scientific computation squarely faces the dilemma of natural mathematical expression versus efficiency. While higher-order programming constructs and parametric polymorphism provide a natural and expressive language for mathematical abstractions, they can come at a considerable cost. We investigate how deeply nested type constructions may be optimized to achieve performance similar to that of hand-tuned code written in lower-level languages.
Handling qualities of large flexible control-configured aircraft
NASA Technical Reports Server (NTRS)
Swaim, R. L.
1979-01-01
The approach to an analytical study of flexible airplane longitudinal handling qualities was to parametrically vary the natural frequencies of two symmetric elastic modes to induce mode interactions with the rigid body dynamics. Since the structure of the pilot model was unknown for such dynamic interactions, the optimal control pilot modeling method is applied in conjunction with a pilot rating method.
Non-parametric analysis of LANDSAT maps using neural nets and parallel computers
NASA Technical Reports Server (NTRS)
Salu, Yehuda; Tilton, James
1991-01-01
Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of images of ground pixels obtained by LANDSAT satellite. The performances are evaluated by comparing classifications of a scene in the vicinity of Washington DC. The problem of optimal selection of categories is addressed as a step in the classification process.
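As a point of reference for the nearest-neighbor baseline, a minimal 1-nearest-neighbor pixel classifier can be sketched as follows (an illustrative sketch only; the Binary Diamond network and the actual LANDSAT band data are not reproduced here, and the band values below are made up):

```python
import numpy as np

def one_nn_classify(train_x, train_y, query):
    """Assign `query` the category of its nearest training pixel.

    train_x: (n, d) array of feature vectors (e.g. spectral bands),
    train_y: list of n category labels, query: (d,) feature vector.
    """
    dists = np.linalg.norm(train_x - query, axis=1)
    return train_y[int(np.argmin(dists))]

# Hypothetical two-category example
train_x = np.array([[0.0, 0.0], [10.0, 10.0]])
train_y = ["water", "urban"]
```

Extending this to k > 1 neighbors amounts to taking a majority vote over the labels of the k smallest distances.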
Shuttle cryogenic supply system optimization study. Volume 1: Management supply, sections 1 - 3
NASA Technical Reports Server (NTRS)
1973-01-01
An analysis of the cryogenic supply system for use on space shuttle vehicles was conducted. The major outputs of the analysis are: (1) evaluations of subsystem and integrated system concepts, (2) selection of representative designs, (3) parametric data and sensitivity studies, (4) evaluation of cryogenic cooling in environmental control subsystems, and (5) development of a mathematical model.
Geometry of Quantum Computation with Qudits
Luo, Ming-Xing; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-01-01
The circuit complexity of quantum qubit system evolution as a primitive problem in quantum computation has been discussed widely. We investigate this problem in terms of qudit systems. Using Riemannian geometry, the optimal quantum circuits are equivalent to geodesic evolutions in a specially curved parametrization of SU(d^n), and the quantum circuit complexity depends explicitly on the controllable approximation error bound. PMID:24509710
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though simple and easy to use, structural and parametric issues can influence the outputs of such models, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time in a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
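The "daily average" method singled out above admits a one-line implementation; the sketch below is illustrative only, with the base (threshold) temperature and units as assumptions:

```python
def degree_days_avg(t_min, t_max, t_base):
    """Degree-days accrued in one day under the daily average method:
    the mean of the daily temperature extremes minus the lower
    developmental threshold, floored at zero."""
    return max(0.0, (t_min + t_max) / 2.0 - t_base)

def cumulative_degree_days(daily_extremes, t_base):
    """Season total over (t_min, t_max) pairs; emergence is often
    modeled as occurring once this sum crosses a species-specific value."""
    return sum(degree_days_avg(lo, hi, t_base) for lo, hi in daily_extremes)
```

As the abstract notes, the calculation methods differ chiefly in how they treat days on which a threshold falls between the daily minimum and maximum, where this simple average can misstate the accrued degree-days.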
Cooling an Optically Trapped Ultracold Fermi Gas by Periodical Driving.
Li, Jiaming; de Melo, Leonardo F; Luo, Le
2017-03-30
We present a cooling method for a cold Fermi gas by parametrically driving atomic motions in a crossed-beam optical dipole trap (ODT). Our method employs the anharmonicity of the ODT, in which the hotter atoms at the edge of the trap feel the anharmonic components of the trapping potential, while the colder atoms in the center of the trap feel the harmonic one. By modulating the trap depth with frequencies that are resonant with the anharmonic components, we selectively excite the hotter atoms out of the trap while keeping the colder atoms in the trap, generating parametric cooling. This experimental protocol starts with a magneto-optical trap (MOT) that is loaded by a Zeeman slower. The precooled atoms in the MOT are then transferred to an ODT, and a bias magnetic field is applied to create an interacting Fermi gas. We then lower the trapping potential to prepare a cold Fermi gas near the degenerate temperature. After that, we sweep the magnetic field to the noninteracting regime of the Fermi gas, in which the parametric cooling can be manifested by modulating the intensity of the optical trapping beams. We find that the parametric cooling effect strongly depends on the modulation frequencies and amplitudes. With the optimized frequency and amplitude, we measure the dependence of the cloud energy on the modulation time. We observe that the cloud energy is changed in an anisotropic way, where the energy of the axial direction is significantly reduced by parametric driving. The cooling effect is limited to the axial direction because the dominant anharmonicity of the crossed-beam ODT is along the axial direction. Finally, we propose to extend this protocol for the trapping potentials of large anharmonicity in all directions, which provides a promising scheme for cooling quantum gases using external driving.
A case study in programming a quantum annealer for hard operational planning problems
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor G.; Venturelli, Davide; O'Gorman, Bryan; Do, Minh B.; Prystay, Elicia M.; Smelyanskiy, Vadim N.
2015-01-01
We report on a case study in programming an early quantum annealer to attack optimization problems related to operational planning. While a number of studies have looked at the performance of quantum annealers on problems native to their architecture, and others have examined performance of select problems stemming from an application area, ours is one of the first studies of a quantum annealer's performance on parametrized families of hard problems from a practical domain. We explore two different general mappings of planning problems to quadratic unconstrained binary optimization (QUBO) problems, and apply them to two parametrized families of planning problems, navigation-type and scheduling-type. We also examine two more compact, but problem-type specific, mappings to QUBO, one for the navigation-type planning problems and one for the scheduling-type planning problems. We study embedding properties and parameter setting and examine their effect on the efficiency with which the quantum annealer solves these problems. From these results, we derive insights useful for the programming and design of future quantum annealers: problem choice, the mapping used, the properties of the embedding, and the annealing profile all matter, each significantly affecting the performance.
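For scale, a QUBO instance is just a quadratic energy over binary variables; a brute-force minimizer (practical only for tiny instances, unlike the annealer) can be sketched as:

```python
from itertools import product

def solve_qubo(Q):
    """Minimize E(x) = sum over (i, j) of Q[i, j] * x_i * x_j
    for x in {0, 1}^n.

    Q is a dict mapping index pairs (i, j) to weights; diagonal entries
    (i, i) act as linear terms since x_i**2 == x_i for binary x_i.
    """
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(w * x[i] * x[j] for (i, j), w in Q.items())
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: two variables that each want to be 1 but penalize each other
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
```

The planning-to-QUBO mappings studied in the paper produce exactly this form, with planning constraints encoded as penalty terms in Q.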
Joint confidence region estimation for area under ROC curve and Youden index.
Yin, Jingjing; Tian, Lili
2014-03-15
In the field of diagnostic studies, the area under the ROC curve (AUC) serves as an overall measure of a biomarker/diagnostic test's accuracy. Youden index, defined as the overall correct classification rate minus one at the optimal cut-off point, is another popular index. For continuous biomarkers of binary disease status, although researchers mainly evaluate the diagnostic accuracy using AUC, for the purpose of making diagnosis, Youden index provides an important and direct measure of the diagnostic accuracy at the optimal threshold and hence should be taken into consideration in addition to AUC. Furthermore, AUC and Youden index are generally correlated. In this paper, we initiate the idea of evaluating diagnostic accuracy based on AUC and Youden index simultaneously. As the first step toward this direction, this paper only focuses on the confidence region estimation of AUC and Youden index for a single marker. We present both parametric and non-parametric approaches for estimating joint confidence region of AUC and Youden index. We carry out extensive simulation study to evaluate the performance of the proposed methods. In the end, we apply the proposed methods to a real data set. Copyright © 2013 John Wiley & Sons, Ltd.
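Empirical (non-parametric) estimators of the two indices are straightforward; the sketch below uses the Mann-Whitney form of the AUC and a grid search over observed cut-offs (illustrative only; the paper's joint confidence region construction is not reproduced here):

```python
import numpy as np

def empirical_auc(x_healthy, x_diseased):
    """Mann-Whitney estimate of P(X_diseased > X_healthy),
    with ties counted as one half."""
    wins = sum(float(d > h) + 0.5 * float(d == h)
               for d in x_diseased for h in x_healthy)
    return wins / (len(x_diseased) * len(x_healthy))

def youden_index(x_healthy, x_diseased):
    """J = max over cut-offs c of sensitivity(c) + specificity(c) - 1."""
    cuts = np.unique(np.concatenate([x_healthy, x_diseased]))
    return max(np.mean(x_diseased > c) + np.mean(x_healthy <= c) - 1.0
               for c in cuts)
```

For a perfectly separating marker both estimators equal 1, which is one reason the two indices are correlated and are naturally studied jointly.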
Scaling high-order harmonic generation from laser-solid interactions to ultrahigh intensity.
Dollar, F; Cummings, P; Chvykov, V; Willingale, L; Vargas, M; Yanovsky, V; Zulick, C; Maksimchuk, A; Thomas, A G R; Krushelnick, K
2013-04-26
Coherent x-ray beams with a subfemtosecond (<10^-15 s) pulse duration will enable measurements of fundamental atomic processes in a completely new regime. High-order harmonic generation (HOHG) using short pulse (<100 fs) infrared lasers focused to intensities surpassing 10^18 W cm^-2 onto a solid density plasma is a promising means of generating such short pulses. Critical to the relativistic oscillating mirror mechanism is the steepness of the plasma density gradient at the reflection point, characterized by a scale length, which can strongly influence the harmonic generation mechanism. It is shown that for intensities in excess of 10^21 W cm^-2 an optimum density ramp scale length exists that balances an increase in efficiency with a growth of parametric plasma wave instabilities. We show that for these higher intensities the optimal scale length is c/ω0, for which a variety of HOHG properties are optimized, including total conversion efficiency, HOHG divergence, and their power law scaling. Particle-in-cell simulations show striking evidence of the HOHG loss mechanism through parametric instabilities and relativistic self-phase modulation, which affect the produced spectra and conversion efficiency.
Combining large number of weak biomarkers based on AUC.
Yan, Li; Tian, Lili; Liu, Song
2015-12-20
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
Combining large number of weak biomarkers based on AUC
Yan, Li; Tian, Lili; Liu, Song
2018-01-01
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901
Solar tower power plant using a particle-heated steam generator: Modeling and parametric study
NASA Astrophysics Data System (ADS)
Krüger, Michael; Bartsch, Philipp; Pointner, Harald; Zunft, Stefan
2016-05-01
Within the framework of the project HiTExStor II, a system model for the entire power plant, consisting of a volumetric air receiver, air-sand heat exchanger, sand storage system, steam generator and water-steam cycle, was implemented in the software Ebsilon Professional. For the steam generator, two technologies were considered: the fluidized bed cooler and the moving bed heat exchanger. Physical models for the non-conventional power plant components, such as the air-sand heat exchanger, fluidized bed cooler and moving bed heat exchanger, had to be created and implemented in the simulation environment. Using the simulation model for the power plant, the individual components and subassemblies were designed and the operating parameters were optimized in extensive parametric studies in terms of the essential degrees of freedom. The annual net electricity output for different systems was determined in annual performance calculations at a selected location (Huelva, Spain) using the optimized values for the studied parameters. The solution with moderate regenerative feed water heating was found to be the most advantageous. Furthermore, the system with the moving bed heat exchanger prevails over the system with the fluidized bed cooler due to a 6% higher net electricity yield.
NASA Astrophysics Data System (ADS)
Ghosh, Nabendu; Kumar, Pradip; Nandi, Goutam
2016-10-01
Welding input process parameters play a very significant role in determining the quality of the welded joint. Only by properly controlling every element of the process can product quality be controlled. For better quality in MIG welding of ferritic stainless steel AISI 409, precise control of the process parameters, parametric optimization of the process parameters, and prediction and control of the desired responses (quality indices) require continued and elaborate experiments, analysis and modeling. A knowledge base may thus be generated which may be utilized by practicing engineers and technicians to produce good quality welds more precisely, reliably and predictively. In the present work, an X-ray radiographic test has been conducted in order to detect surface and sub-surface defects of weld specimens made of ferritic stainless steel. The quality of the weld has been evaluated in terms of yield strength, ultimate tensile strength and percentage elongation of the welded specimens. The observed data have been interpreted, discussed and analyzed by considering ultimate tensile strength, yield strength and percentage elongation, combined with use of the Grey-Taguchi methodology.
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
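The core claim above can be illustrated numerically with a toy example (hypothetical Gaussian parameter uncertainty and an asymmetric quadratic cost are assumptions; the paper's actual model is not reproduced): when overestimation errors cost more than underestimation errors, the cost-minimizing estimate shifts away from the maximum-likelihood value.

```python
import numpy as np

rng = np.random.default_rng(0)
# Samples representing uncertainty about a task parameter (MLE at 0)
samples = rng.normal(0.0, 1.0, 20000)

def expected_cost(est, over=4.0, under=1.0):
    """Quadratic cost, penalizing overestimation 4x more than
    underestimation (asymmetry is the key assumption)."""
    err = est - samples
    return float(np.mean(np.where(err > 0, over * err**2, under * err**2)))

# Grid search for the estimate minimizing expected cost
grid = np.linspace(-1.0, 1.0, 401)
best_est = grid[np.argmin([expected_cost(g) for g in grid])]
```

With a symmetric cost (over == under) the minimizer coincides with the mean; the asymmetry pushes `best_est` below zero, mirroring the systematic deviation from the maximum-likelihood estimate described in the abstract.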
An Expert System-Driven Method for Parametric Trajectory Optimization During Conceptual Design
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Steffens, Michael; Edwards, Stephen; Diaz, Manuel J.; Holt, James B.
2015-01-01
During the early phases of engineering design, the costs committed are high, costs incurred are low, and the design freedom is high. It is well documented that decisions made in these early design phases drive the entire design's life cycle cost. In a traditional paradigm, key design decisions are made when little is known about the design. As the design matures, design changes become more difficult in both cost and schedule to enact. The current capability-based paradigm, which has emerged because of the constrained economic environment, calls for the infusion of knowledge usually acquired during later design phases into earlier design phases, i.e. bringing knowledge acquired during preliminary and detailed design into pre-conceptual and conceptual design. An area of critical importance to launch vehicle design is the optimization of its ascent trajectory, as the optimal trajectory will be able to take full advantage of the launch vehicle's capability to deliver a maximum amount of payload into orbit. Hence, the optimal ascent trajectory plays an important role in the vehicle's affordability posture yet little of the information required to successfully optimize a trajectory is known early in the design phase. Thus, the current paradigm of optimizing ascent trajectories involves generating point solutions for every change in a vehicle's design parameters. This is often a very tedious, manual, and time-consuming task for the analysts. Moreover, the trajectory design space is highly non-linear and multi-modal due to the interaction of various constraints. When these obstacles are coupled with the Program to Optimize Simulated Trajectories (POST), an industry standard program to optimize ascent trajectories that is difficult to use, expert trajectory analysts are required to effectively optimize a vehicle's ascent trajectory. Over the course of this paper, the authors discuss a methodology developed at NASA Marshall's Advanced Concepts Office to address these issues. 
The methodology is two-fold: first, capture the heuristics developed by human analysts over their many years of experience; and secondly, leverage the power of modern computing to evaluate multiple trajectories simultaneously and therefore enable the exploration of the trajectory's design space early during the pre- conceptual and conceptual phases of design. This methodology is coupled with design of experiments in order to train surrogate models, which enables trajectory design space visualization and parametric optimal ascent trajectory information to be available when early design decisions are being made.
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Optimal second order sliding mode control for linear uncertain systems.
Das, Madhulika; Mahanta, Chitralekha
2014-11-01
In this paper an optimal second order sliding mode controller (OSOSMC) is proposed to track a linear uncertain system. The optimal controller based on the linear quadratic regulator method is designed for the nominal system. An integral sliding mode controller is combined with the optimal controller to ensure robustness of the linear system, which is affected by parametric uncertainties and external disturbances. To achieve finite time convergence of the sliding mode, a nonsingular terminal sliding surface is combined with the integral sliding surface, giving rise to a second order sliding mode controller. The main advantage of the proposed OSOSMC is that the control input is substantially reduced and becomes chattering free. Simulation results confirm the superiority of the proposed OSOSMC over some existing sliding mode controllers. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
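For intuition, here is a minimal conventional (first-order) sliding mode controller on a double integrator; it is not the paper's optimal second-order design, and the plant, gains, surface and disturbance are illustrative. It shows the mechanism the paper builds on: a switching gain larger than the disturbance bound forces the state onto the sliding surface and keeps it there.

```python
import numpy as np

# First-order sliding mode control of a double integrator x'' = u + d(t)
# with a bounded matched disturbance d. On the surface s = c*x + xdot = 0
# the state decays as exp(-c*t); the switching gain K > |d| keeps the
# state on the surface despite the disturbance.
c, K, dt = 2.0, 3.0, 1e-3
x, xd = 1.0, 0.0                      # initial condition
for k in range(20000):                # simulate 20 s with Euler steps
    t = k * dt
    d = 0.5 * np.sin(2.0 * t)         # unknown disturbance, |d| <= 0.5 < K
    s = c * x + xd                    # sliding variable
    u = -c * xd - K * np.sign(s)      # equivalent + switching control
    xd += (u + d) * dt
    x += xd * dt
```

The discontinuous sign term is exactly what causes chattering; hiding the discontinuity in a higher derivative of the control, as in second-order sliding mode designs like the OSOSMC, is what removes it.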
Transonic airfoil analysis and design in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. E.
1986-01-01
A nonuniform transonic airfoil code is developed for applications in analysis, inverse design and direct optimization involving an airfoil immersed in propfan slipstream. Problems concerning the numerical stability, convergence, divergence and solution oscillations are discussed. The code is validated by comparing with some known results in incompressible flow. A parametric investigation indicates that the airfoil lift-drag ratio can be increased by decreasing the thickness ratio. A better performance can be achieved if the airfoil is located below the slipstream center. Airfoil characteristics designed by the inverse method and a direct optimization are compared. The airfoil designed with the method of direct optimization exhibits better characteristics and achieves a gain of 22 percent in lift-drag ratio with a reduction of 4 percent in thickness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mouton, S.; Ledoux, Y.; Teissandier, D.
A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass. This reduction leads to the optimization of every part of the plane. The task is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, engineers must be assisted with decision support systems to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM and Samcef software). The use of a genetic algorithm allows the impact of the design choices and their consequences on the failure risk of the component to be estimated. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. It is proposed to apply this method to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.
Hattori, Yusuke; Ishibashi, Kohei; Noda, Takashi; Okamura, Hideo; Kanzaki, Hideaki; Anzai, Toshihisa; Yasuda, Satoshi; Kusano, Kengo
2017-09-01
We describe the case of a 37-year-old woman who presented with complete right bundle branch block and right axis deviation. She was admitted to our hospital due to severe heart failure and was dependent on inotropic agents. Cardiac resynchronization therapy was initiated but did not improve her condition. After the optimization of the pacing timing, we performed earlier right ventricular pacing, which led to an improvement of her heart failure. Earlier right ventricular pacing should be considered in patients with complete right bundle branch block and right axis deviation when cardiac resynchronization therapy is not effective.
Köddermann, Thorsten; Reith, Dirk; Ludwig, Ralf
2013-10-07
In this contribution, we present two new united-atom force fields (UA-FFs) for 1-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [C(n)MIM][NTf(2)] (n=1, 2, 4, 6, 8) ionic liquids (ILs). One is parametrized manually, and the other is developed with the gradient-based optimization workflow (GROW). By doing so, we wanted to perform a hard test to determine how researchers could benefit from semiautomated optimization procedures. As with our already published all-atom force field (AA-FF) for [C(n)MIM][NTf(2)] (T. Köddermann, D. Paschek, R. Ludwig, ChemPhysChem 2007, 8, 2464), the new force fields were derived to fit experimental densities, self-diffusion coefficients, and NMR rotational correlation times for the IL cation and for water molecules dissolved in [C(2)MIM][NTf(2)]. In the manual force field, the alkyl chains of the cation and the CF3 groups of the anion were treated as united atoms. In the GROW force field, only the alkyl chains of the cation were united. All other parts of the structures of the ions remained unchanged to prevent any loss of physical information. Structural, dynamic, and thermodynamic properties such as viscosity, cation rotational correlation times, and heats of vaporization calculated with the new force fields were compared with values simulated with the previous AA-FF and the experimental data. All simulated properties were in excellent agreement with the experimental values. Altogether, the UA-FFs are slightly superior for speed-up reasons. The UA-FF speeds up the simulation by about 100 % and reduces the required disk space by about 78 %. More importantly, real time and efforts to generate force fields could be significantly reduced by utilizing GROW. The real time for the GROW parametrization in this work was 2 months. Manual parametrization, in contrast, may take up to 12 months, and this is, therefore, a significant increase in speed, though it is difficult to estimate the duration of manual parametrization.
Copyright © 2013 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris
2017-09-01
Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion', with parametric mass models, where the radial velocity dispersion profile, σ_rr², is modelled as a B-spline, and the optimization is a three-step process that consists of (I) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_rr²; (II) an optimization of the smoothing parameters and (III) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced into mass model inference, irrespective of kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_tot = 1.613 (+0.050/−0.075) × 10^8 M_⊙ and a mass-to-light ratio of Υ_V = 8.93 (+0.32/−0.47) M_⊙/L_⊙. The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.
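The B-spline representation of a dispersion profile can be sketched with an off-the-shelf smoothing-spline fit. The toy profile, noise level and smoothing factor below are assumptions; the paper's evolutionary selection of the basis and its MCMC step are not reproduced.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Fit a cubic smoothing B-spline to a noisy synthetic dispersion profile.
rng = np.random.default_rng(0)
r = np.linspace(0.1, 2.0, 40)                 # radii (arbitrary units)
sigma2_true = 120.0 / (1.0 + r)               # hypothetical sigma_rr^2 profile
sigma2_obs = sigma2_true + rng.normal(0.0, 2.0, r.size)

# Smoothing factor s ~ (number of points) * (noise variance), so the fit
# absorbs the noise rather than chasing it.
tck = splrep(r, sigma2_obs, s=len(r) * 4.0)
sigma2_fit = splev(r, tck)
rms = float(np.sqrt(np.mean((sigma2_fit - sigma2_true) ** 2)))
```

The knot vector in `tck` plays the role of the adaptive basis in the paper: more knots buy flexibility at the price of fitting noise, which is exactly what the smoothing-parameter optimization step has to balance.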
Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.
2015-01-01
Purpose To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. Results The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
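The partition-and-validate workflow can be sketched with k-means and silhouette scoring on synthetic two-feature "voxels". The feature values and the true number of sub-regions are illustrative, and the paper's particular validation indices may differ; the point is only that an internal validity score can pick k without prior knowledge.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic voxel feature vectors (e.g., a perfusion and a diffusion
# value per voxel) drawn from three hypothetical tumour sub-regions.
rng = np.random.default_rng(1)
voxels = np.vstack([
    rng.normal([1.0, 0.2], 0.1, (200, 2)),   # sub-region A
    rng.normal([0.3, 0.8], 0.1, (200, 2)),   # sub-region B
    rng.normal([0.6, 0.5], 0.1, (200, 2)),   # sub-region C
])

# Partition for several k and keep the k with the best silhouette score.
scores = {}
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    scores[k] = silhouette_score(voxels, labels)
best_k = max(scores, key=scores.get)
```

On real multi-parametric data the features would first be normalized (and possibly reduced with PCA, as in the paper) so that no single imaging parameter dominates the distance metric.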
Comparative study of navigated versus freehand osteochondral graft transplantation of the knee.
Koulalis, Dimitrios; Di Benedetto, Paolo; Citak, Mustafa; O'Loughlin, Padhraig; Pearle, Andrew D; Kendoff, Daniel O
2009-04-01
Osteochondral lesions are a common sports-related injury for which osteochondral grafting, including mosaicplasty, is an established treatment. Computer navigation has been gaining popularity in orthopaedic surgery to improve accuracy and precision. Navigation improves angle and depth matching during harvest and placement of osteochondral grafts compared with conventional freehand open technique. Controlled laboratory study. Three cadaveric knees were used. Reference markers were attached to the femur, tibia, and donor/recipient site guides. Fifteen osteochondral grafts were harvested and inserted into recipient sites with computer navigation, and 15 similar grafts were inserted freehand. The angles of graft removal and placement as well as surface congruity (graft depth) were calculated for each surgical group. The mean harvesting angle at the donor site using navigation was 4° (standard deviation, 2.3°; range, 1°-9°) versus 12° (standard deviation, 5.5°; range, 5°-24°) using the freehand technique (P < .0001). The recipient plug removal angle using the navigated technique was 3.3° (standard deviation, 2.1°; range, 0°-9°) versus 10.7° (standard deviation, 4.9°; range, 2°-17°) freehand (P < .0001). The mean navigated recipient plug placement angle was 3.6° (standard deviation, 2.0°; range, 1°-9°) versus 10.6° (standard deviation, 4.4°; range, 3°-17°) with the freehand technique (P = .0001). The mean height of plug protrusion under navigation was 0.3 mm (standard deviation, 0.2 mm; range, 0-0.6 mm) versus 0.5 mm (standard deviation, 0.3 mm; range, 0.2-1.1 mm) using the freehand technique (P = .0034). Significantly greater accuracy and precision were observed in harvesting and placement of the osteochondral grafts in the navigated procedures.
Clinical studies are needed to establish a benefit in vivo. Improvement in the osteochondral harvest and placement is desirable to optimize clinical outcomes. Navigation shows great potential to improve both harvest and placement precision and accuracy, thus optimizing ultimate surface congruity.
de Castro, Bianca C R; Guida, Heraldo L; Roque, Adriano L; de Abreu, Luiz Carlos; Ferreira, Celso; Marcomini, Renata S; Monteiro, Carlos B M; Adami, Fernando; Ribeiro, Viviane F; Fonseca, Fernando L A; Santos, Vilma N S; Valenti, Vitor E
2014-01-01
The literature on the behavior of the geometric indices of heart rate variability (HRV) during musical auditory stimulation is scarce. The objective was to investigate the acute effects of classical musical auditory stimulation on the geometric indices of HRV in women in response to the postural change maneuver (PCM). We evaluated 11 healthy women between 18 and 25 years old. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of the instantaneous beat-to-beat variability [SD1], standard deviation of the long-term continuous RR interval variability [SD2] and the ratio between the short- and long-term variations of RR intervals [SD1/SD2]). HRV was recorded at seated rest for 10 min. The women quickly stood up from a seated position in up to 3 s and remained standing still for 15 min. HRV was recorded at the following periods: rest, 0-5 min, 5-10 min and 10-15 min during standing. In the second protocol, the subjects were exposed to musical auditory stimulation (Pachelbel, Canon in D) for 10 min in the seated position before standing. The Shapiro-Wilk test was used to verify the normality of the data; ANOVA for repeated measures followed by the Bonferroni test was applied to parametric variables, and Friedman's test followed by Dunn's posttest to non-parametric distributions. In the first protocol, all indices were reduced at 10-15 min after the volunteers stood up. In the musical auditory stimulation protocol, the SD1 index was reduced at 5-10 min after the volunteers stood up compared with the music period. The SD1/SD2 ratio was decreased at the control and music periods compared with 5-10 min after the volunteers stood up. Musical auditory stimulation attenuates the cardiac autonomic responses to the PCM.
Behavioral Modeling of Adversaries with Multiple Objectives in Counterterrorism.
Mazicioglu, Dogucan; Merrick, Jason R W
2018-05-01
Attacker/defender models have primarily assumed that each decision-maker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaptation of the terrorist with multiple objectives to defender actions. The other stream builds off experimental studies that show that decision-makers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs that a terrorist might consider and their adaptation to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model in determining such adaptation. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue to avoid misrepresenting terrorist behavior in determining optimal defensive actions. © 2017 Society for Risk Analysis.
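The prospect-theory building blocks referred to above can be sketched in a few lines. This is the standard single-attribute Tversky-Kahneman form with their commonly cited parameter values; the article's multiattribute extension and its fitted parameters are not reproduced.

```python
# Prospect-theory value and probability-weighting functions
# (Tversky-Kahneman forms) with commonly cited parameter estimates.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper
    (loss-averse, factor lam) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S weighting: overweights small probabilities and
    underweights moderate-to-large ones."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Prospect value of a gamble: win 100 with probability 0.1, else lose 10.
pv = weight(0.1) * value(100.0) + weight(0.9) * value(-10.0)
```

Replacing expected-utility evaluation with such value- and weighting-transformed payoffs is what lets the attacker model reproduce the experimentally observed deviations from rationality.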
NASA Astrophysics Data System (ADS)
Gilani, Seyed-Omid; Sattarvand, Javad
2016-02-01
Meeting production targets in terms of ore quantity and quality is critical for a successful mining operation. In-situ grade uncertainty causes both deviations from production targets and general financial deficits. A new stochastic optimization algorithm based on the ant colony optimization (ACO) approach is developed herein to integrate geological uncertainty, described through a series of simulated ore bodies. Two different strategies were developed, based on a single predefined probability value (Prob) and on multiple probability values (Prob_nt), respectively, in order to improve the initial solutions created by the deterministic ACO procedure. Application at the Sungun copper mine in the northwest of Iran demonstrates the ability of the stochastic approach to create a single schedule, control the risk of deviating from production targets over time and increase the project value. A comparison between the two strategies and the traditional approach illustrates that the multiple probability strategy is able to produce better schedules; however, the single predefined probability is more practical in projects requiring a high degree of flexibility.
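The basic ACO loop (sample solutions from a pheromone distribution, evaluate, evaporate, deposit on the best) can be sketched on a toy bit-string problem. The paper's mine-scheduling formulation, constraints and probability strategies are far richer; every constant below is illustrative.

```python
import numpy as np

# Toy ACO sketch: maximize the number of ones in a bit string.
rng = np.random.default_rng(0)
n_bits, n_ants, rho = 12, 20, 0.2
tau = np.full(n_bits, 0.5)                  # pheromone = P(bit set to 1)
best, best_val = np.zeros(n_bits, bool), -1
for _ in range(300):
    ants = rng.random((n_ants, n_bits)) < tau   # each ant samples a solution
    vals = ants.sum(axis=1)                     # evaluate all ants
    k = vals.argmax()
    if vals[k] > best_val:                      # track best-so-far solution
        best, best_val = ants[k].copy(), int(vals[k])
    tau = (1 - rho) * tau + rho * best          # evaporation + deposit
    tau = np.clip(tau, 0.1, 0.9)                # keep exploration alive
```

In the stochastic variant described in the abstract, the evaluation step would score each candidate schedule across all simulated ore bodies, so the pheromone update rewards schedules that are robust to grade uncertainty rather than optimal for a single model.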
Ultimate explanations and suboptimal choice.
Vasconcelos, Marco; Machado, Armando; Pandeirada, Josefa N S
2018-07-01
Researchers have unraveled multiple cases in which behavior deviates from rationality principles. We propose that such deviations are valuable tools to understand the adaptive significance of the underpinning mechanisms. To illustrate, we discuss in detail an experimental protocol in which animals systematically incur substantial foraging losses by preferring a lean but informative option over a rich but non-informative one. To understand how adaptive mechanisms may fail to maximize food intake, we review a model inspired by optimal foraging principles that reconciles sub-optimal choice with the view that current behavioral mechanisms were pruned by the optimizing action of natural selection. To move beyond retrospective speculation, we then review critical tests of the model, regarding both its assumptions and its (sometimes counterintuitive) predictions, all of which have been upheld. The overall contention is that (a) known mechanisms can be used to develop better ultimate accounts and that (b) to understand why mechanisms that generate suboptimal behavior evolved, we need to consider their adaptive value in the animal's characteristic ecology. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Jacob, H. G.
1972-01-01
An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as a series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and its application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
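The core idea, reducing open-loop optimal control to static parameter optimization over an input series while observing only the system output, can be sketched as follows. The cosine basis, double-integrator plant and cost weights are illustrative stand-ins, not the report's STOL landing model.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(coeffs, T=2.0, n=400):
    """Black-box plant: only the output is observed by the optimizer."""
    t = np.linspace(0.0, T, n)
    dt = t[1] - t[0]
    # Open-loop input expressed as a short cosine series.
    u = sum(a * np.cos(np.pi * i * t / T) for i, a in enumerate(coeffs))
    x = v = 0.0
    for uk in u:                        # Euler integration of x'' = u
        v += uk * dt
        x += v * dt
    effort = float(np.sum(u ** 2) * dt)
    return x, effort

def cost(coeffs):                       # reach x(T) = 1 with little effort
    x_final, effort = simulate(coeffs)
    return (x_final - 1.0) ** 2 + 1e-3 * effort

# Static, derivative-free optimization over the series coefficients.
res = minimize(cost, x0=np.zeros(3), method='Nelder-Mead')
x_final, _ = simulate(res.x)
```

Because the optimizer only calls `simulate`, swapping in a nonlinear aircraft model (or adding path-deviation and constraint penalty terms to `cost`) changes nothing in the optimization machinery, which is the method's main appeal.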
NASA Astrophysics Data System (ADS)
Oh, Jungsu S.; Kim, Jae Seung; Chae, Sun Young; Oh, Minyoung; Oh, Seung Jun; Cha, Seung Nam; Chang, Ho-Jong; Lee, Chong Sik; Lee, Jae Hong
2017-03-01
We present an optimized voxelwise statistical parametric mapping (SPM) of partial-volume (PV)-corrected positron emission tomography (PET) of 11C Pittsburgh Compound B (PiB), incorporating the anatomical precision of magnetic resonance imaging (MRI) and the amyloid β (Aβ) burden-specificity of PiB PET. First, we applied region-based partial-volume correction (PVC), termed the geometric transfer matrix (GTM) method, to PiB PET, creating MRI-based lobar parcels filled with mean PiB uptakes. Then, we conducted a voxelwise PVC by multiplying the original PET by the ratio of a GTM-based PV-corrected PET to a 6-mm-smoothed PV-corrected PET. Finally, we conducted spatial normalizations of the PV-corrected PETs onto the study-specific template. As such, we increased the accuracy of the SPM normalization and the tissue specificity of the SPM results. Moreover, lobar smoothing (instead of whole-brain smoothing) was applied to increase the signal-to-noise ratio in the image without degrading the tissue specificity. Thereby, we could optimize a voxelwise group comparison between subjects with high and normal Aβ burdens (from 10 patients with Alzheimer's disease, 30 patients with Lewy body dementia, and 9 normal controls). Our SPM framework outperformed the conventional one in terms of spatial normalization accuracy (85% of maximum-likelihood tissue classification volume) and tissue specificity (larger gray matter and smaller cerebrospinal fluid volume fractions in the SPM results). Our SPM framework optimized the SPM of a PV-corrected Aβ PET in terms of anatomical precision, normalization accuracy, and tissue specificity, resulting in better detection and localization of Aβ burdens in patients with Alzheimer's disease and Lewy body dementia.
Svensson, Elin M; Yngman, Gunnar; Denti, Paolo; McIlleron, Helen; Kjellsson, Maria C; Karlsson, Mats O
2018-05-01
Fixed-dose combination formulations where several drugs are included in one tablet are important for the implementation of many long-term multidrug therapies. The selection of optimal dose ratios and tablet content of a fixed-dose combination and the design of individualized dosing regimens is a complex task, requiring multiple simultaneous considerations. In this work, a methodology for the rational design of a fixed-dose combination was developed and applied to the case of a three-drug pediatric anti-tuberculosis formulation individualized on body weight. The optimization methodology synthesizes information about the intended use population, the pharmacokinetic properties of the drugs, therapeutic targets, and practical constraints. A utility function is included to penalize deviations from the targets; a sequential estimation procedure was developed for stable estimation of break-points for individualized dosing. The suggested optimized pediatric anti-tuberculosis fixed-dose combination was compared with the recently launched World Health Organization-endorsed formulation. The optimized fixed-dose combination included 15, 36, and 16% higher amounts of rifampicin, isoniazid, and pyrazinamide, respectively. The optimized fixed-dose combination is expected to result in overall less deviation from the therapeutic targets based on adult exposure and substantially fewer children with underexposure (below half the target). The development of this design tool can aid the implementation of evidence-based formulations, integrating available knowledge and practical considerations, to optimize drug exposures and thereby treatment outcomes.
Gao, Chen-chen; Li, Feng-min; Lu, Lun; Sun, Yue
2015-10-01
For the determination of trace amounts of phthalic acid esters (PAEs) in a complex seawater matrix, a stir bar sorptive extraction gas chromatography mass spectrometry (SBSE-GC-MS) method was established. Dimethyl phthalate (DMP), diethyl phthalate (DEP), dibutyl phthalate (DBP), butyl benzyl phthalate (BBP), di(2-ethylhexyl) phthalate (DEHP) and dioctyl phthalate (DOP) were selected as the target analytes. The effects of extraction time, amount of methanol, amount of sodium chloride, desorption time and desorption solvent were optimized. The SBSE-GC-MS method was validated through recoveries and relative standard deviations. The optimal extraction time was 2 h, the optimal methanol content 10%, the optimal sodium chloride content 5%, the optimal desorption time 50 min, and the optimal desorption solvent a 4:1 (v/v) mixture of methanol and acetonitrile. The calibration curves of peak area versus PAE concentration showed good linearity, with correlation coefficients greater than 0.997. The detection limits were between 0.25 and 174.42 ng·L(-1). The recoveries at different concentrations were between 56.97% and 124.22%, and the relative standard deviations were between 0.41% and 14.39%. Using this method, water samples from several estuaries of Jiaozhou Bay were analyzed. DEP was detected in all samples, and the concentrations of BBP, DEHP and DOP were much higher than those of the other PAEs.
The Evolution of Generosity in the Ultimatum Game.
Hintze, Arend; Hertwig, Ralph
2016-09-28
When humans fail to make optimal decisions in strategic games and economic gambles, researchers typically try to explain why that behaviour is biased. To this end, they search for mechanisms that cause human behaviour to deviate from what seems to be the rational optimum. But perhaps human behaviour is not biased; perhaps research assumptions about the optimality of strategies are incomplete. In the one-shot anonymous symmetric ultimatum game (UG), humans fail to play optimally as defined by the Nash equilibrium. However, the distinction between kin and non-kin-with kin detection being a key evolutionary adaptation-is often neglected when deriving the "optimal" strategy. We computationally evolved strategies in the UG that were equipped with an evolvable probability to discern kin from non-kin. When an opponent was not kin, agents evolved strategies that were similar to those used by humans. We therefore conclude that the strategy humans play is not irrational. The deviation between behaviour and the Nash equilibrium may rather be attributable to key evolutionary adaptations, such as kin detection. Our findings further suggest that social preference models are likely to capture mechanisms that permit people to play optimally in an evolutionary context. Once this context is taken into account, human behaviour no longer appears irrational.
Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...
2016-01-01
This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
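The two-step search can be sketched as a coarse grid traverse followed by a local stochastic refinement around the grid winner (a simple stand-in for the paper's PSO step). The toy load series, lag features and parameter ranges are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR

# Toy load series: sinusoidal demand plus noise, turned into an
# autoregressive regression problem with lagged features.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
load = np.sin(t) + 0.1 * rng.normal(size=t.size)

lags = 3
X = np.column_stack([load[i:i - lags] for i in range(lags)])
y = load[lags:]
Xtr, ytr, Xv, yv = X[:300], y[:300], X[300:], y[300:]

def val_mse(C, gamma):
    m = SVR(C=C, gamma=gamma).fit(Xtr, ytr)
    return float(np.mean((m.predict(Xv) - yv) ** 2))

# Step 1: coarse grid traverse over log-spaced (C, gamma) candidates.
grid = [(C, g) for C in (0.1, 1.0, 10.0, 100.0) for g in (0.01, 0.1, 1.0)]
C0, g0 = min(grid, key=lambda p: val_mse(*p))
# Step 2: local stochastic refinement around the grid winner.
cands = [(C0, g0)] + [(C0 * 10 ** rng.uniform(-0.5, 0.5),
                       g0 * 10 ** rng.uniform(-0.5, 0.5)) for _ in range(20)]
C_best, g_best = min(cands, key=lambda p: val_mse(*p))
err = val_mse(C_best, g_best)
```

The grid step keeps the expensive fine search confined to a promising region, which is the same division of labor the GTA/PSO combination exploits.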
Motion-robust intensity-modulated proton therapy for distal esophageal cancer.
Yu, Jen; Zhang, Xiaodong; Liao, Li; Li, Heng; Zhu, Ronald; Park, Peter C; Sahoo, Narayan; Gillin, Michael; Li, Yupeng; Chang, Joe Y; Komaki, Ritsuko; Lin, Steven H
2016-03-01
To develop methods for evaluation and mitigation of the dosimetric impact of respiratory and diaphragmatic motion during free breathing in treatment of distal esophageal cancers using intensity-modulated proton therapy (IMPT). This was a retrospective study of 11 patients with distal esophageal cancer. For each patient, four-dimensional computed tomography (4D CT) data were acquired, and a nominal dose was calculated on the average phase of the 4D CT. The changes in water equivalent thickness (ΔWET) to cover the treatment volume from the peak of inspiration to the valley of expiration were calculated for a full range of beam angle rotation. Two IMPT plans were calculated: one at beam angles corresponding to small ΔWET and one at beam angles corresponding to large ΔWET. Four patients were selected for the calculation of 4D-robustness-optimized IMPT plans due to large motion-induced dose errors generated in conventional IMPT. To quantitatively evaluate motion-induced dose deviation, the authors calculated the lowest dose received by 95% (D95) of the internal clinical target volume for the nominal dose, the D95 calculated on the maximum inhale and exhale phases of the 4D CT (DCT0 and DCT50), the 4D composite dose, and the 4D dynamic dose for a single fraction. The dose deviation increased with the average ΔWET of the implemented beams, ΔWETave. When ΔWETave was less than 5 mm, the dose error was less than 1 cobalt gray equivalent based on DCT0 and DCT50. The dose deviation determined on the basis of DCT0 and DCT50 was proportionally larger than that determined on the basis of the 4D composite dose. The 4D-robustness-optimized IMPT plans notably reduced the overall dose deviation of multiple fractions and the dose deviation caused by the interplay effect in a single fraction. In IMPT for distal esophageal cancer, ΔWET analysis can be used to select the beam angles that are least affected by respiratory and diaphragmatic motion.
To further reduce dose deviation, the 4D-robustness optimization can be implemented for IMPT planning. Calculation of DCT0 and DCT50 is a conservative method to estimate the motion-induced dose errors.
Non-intrusive reduced order modeling of nonlinear problems using neural networks
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Ubbiali, S.
2018-06-01
We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
The minimal number of parameters in triclinic crystal-field potentials
NASA Astrophysics Data System (ADS)
Mulak, J.
2003-09-01
The optimal parametrization schemes of the crystal-field (CF) potential in fitting procedures are those based on the smallest number of parameters; surplus parametrizations usually lead to artificial, non-physical solutions. Symmetry-adapted reference systems are therefore commonly used. Alternatively, coordinate systems with the z-axis directed along the principal axes of the CF multipoles (2^k-poles) can be applied successfully, particularly for triclinic CF potentials. Owing to the irreducibility of the D(k) representations, such a choice can reduce the number of k-order parameters by 2k: from 2k+1 in the most general case to only 1 (the axial one). Unfortunately, the numbers of parameters of the other orders then remain, in general, unrestricted. In this way, the number of parameters for the k-even triclinic CF potentials can be reduced by 4, 8 or 12, for k = 2, 4 or 6, respectively. Hence, parametrization schemes based on at most 14 parameters suffice. For higher point symmetries this number is usually greater than that for the symmetry-adapted systems. Nonetheless, many instructive correlations between the multipole contributions to the CF interaction are attainable in this way.
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment, and constraint update is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
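The nonintrusive PCE step can be illustrated in one dimension: evaluate the model at sampled values of a standard normal input, regress onto Hermite polynomials, and read statistics directly off the coefficients. A minimal sketch with a stand-in model (the exponential of the input), not the paper's entry dynamics:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Probabilists' Hermite polynomials He_k, orthogonal w.r.t. the standard normal:
# He_0 = 1, He_1 = xi, He_k = xi*He_{k-1} - (k-1)*He_{k-2}
def hermite_design(xi, deg):
    H = np.zeros((len(xi), deg + 1))
    H[:, 0] = 1.0
    if deg >= 1:
        H[:, 1] = xi
    for k in range(2, deg + 1):
        H[:, k] = xi * H[:, k - 1] - (k - 1) * H[:, k - 2]
    return H

def model(xi):
    # hypothetical stand-in for an expensive trajectory simulation
    # with one uncertain (standard normal) input parameter
    return np.exp(xi)

deg, n = 4, 4000
xi = rng.standard_normal(n)
coef, *_ = np.linalg.lstsq(hermite_design(xi, deg), model(xi), rcond=None)

# statistics follow from the coefficients, since E[He_k^2] = k!
mean = coef[0]
var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, deg + 1))
print(f"PCE mean = {mean:.4f}  (exact E[exp(xi)] = e^0.5 = {np.exp(0.5):.4f})")
print(f"PCE variance = {var:.4f}")
```

The "nonintrusive" point is that the simulator is only sampled, never modified; the same regression applies to the entry dynamics with the trajectory constraints as model outputs.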
Obtaining orthotropic elasticity tensor using entries zeroing method.
NASA Astrophysics Data System (ADS)
Gierlach, Bartosz; Danek, Tomasz
2017-04-01
A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except the isotropic one, this problem is nonlinear. A common method of obtaining an effective tensor is to choose a non-trivial symmetry class and minimize the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor under the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In the orthotropic form of the tensor, 24 out of 36 entries are zeros. The idea is to minimize the sum of squared entries that are supposed to be equal to zero, through a rotation calculated with an optimization algorithm, in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. To avoid settling in local minima, we apply PSO several times and accept a value as correct, and finish the computations, only after obtaining similar results three times. The Monte Carlo method was used to analyze the obtained results. After thousands of single runs of the PSO optimization, we obtained values of the quaternion parts and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of a generally anisotropic tensor were generated: each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured tensor entry and standard deviation equal to that of the measurement.
Each of these tensors was the subject of a PSO-based optimization delivering the quaternion for the optimal rotation. Computations were parallelized with OpenMP to decrease the computational time, which enables different tensors to be processed by different threads. As a result, the distributions of the rotated tensor entries were obtained. For the entries that were to be zeroed, we observe almost normal distributions with mean equal to zero, or sums of two normal distributions with opposite means. The non-zero entries exhibit different distributions with two or three maxima. Analysis of the obtained results shows that the described method produces consistent values of the quaternions used to rotate the tensors. Despite a less complex target function in the optimization process compared with the common approach, the entries zeroing method provides results that can be applied to obtain an orthotropic tensor with good reliability. A modification of the method could also yield a tool for obtaining effective tensors belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
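The entries zeroing idea, a quaternion-parametrized rotation optimized by PSO so that selected entries of the rotated tensor vanish, can be sketched on a simplified rank-2 analogue: a 3x3 symmetric tensor whose off-diagonal entries play the role of the 24 zero entries of the orthotropic form. The authors' method operates on the 6x6 elasticity tensor instead; this is only an illustration of the optimization machinery:

```python
import numpy as np

rng = np.random.default_rng(1)

def quat_to_rot(q):
    # unit quaternion -> 3x3 rotation matrix
    w, x, y, z = q / (np.linalg.norm(q) + 1e-12)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# symmetric rank-2 "tensor" standing in for the measured elasticity tensor
A = rng.normal(size=(3, 3)); A = A + A.T

def off_diag_energy(q):
    # sum of squared entries that should be zero in the target (diagonal) form
    R = quat_to_rot(q)
    B = R.T @ A @ R
    return B[0, 1]**2 + B[0, 2]**2 + B[1, 2]**2

# plain global-best PSO over the four quaternion components
n, iters = 40, 300
pos = rng.normal(size=(n, 4)); vel = np.zeros((n, 4))
pbest = pos.copy(); pval = np.array([off_diag_energy(p) for p in pos])
g = pbest[pval.argmin()].copy()
w_in, c1, c2 = 0.7, 1.5, 1.5          # standard inertia/attraction weights
for _ in range(iters):
    r1, r2 = rng.random((n, 4)), rng.random((n, 4))
    vel = w_in*vel + c1*r1*(pbest - pos) + c2*r2*(g - pos)
    pos = pos + vel
    vals = np.array([off_diag_energy(p) for p in pos])
    imp = vals < pval
    pbest[imp] = pos[imp]; pval[imp] = vals[imp]
    g = pbest[pval.argmin()].copy()

print(f"best off-diagonal energy: {pval.min():.3e}")
```

For the symmetric 3x3 case the minimum is exactly zero (eigen-decomposition), which makes the sketch easy to check; for the elasticity tensor the residual at the optimum measures how far the measured tensor is from true orthotropy.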
Dual throat engine design for a SSTO launch vehicle
NASA Technical Reports Server (NTRS)
Obrien, C. J.; Salmon, J. W.
1980-01-01
A propulsion system analysis of a dual fuel, dual throat engine for launch vehicle application was conducted. Basic dual throat engine characterization data are presented to allow vehicle optimization studies to be conducted. A preliminary baseline engine system was defined. Dual throat engine performance, envelope, and weight parametric data were generated over the parametric range of thrust from 890 to 8896 kN (200K to 2M lb-force), chamber pressure from 6.89 million to 34.5 million N/sq m (1000 to 5000 psia), thrust ratio from 1.2 to 5, and a range of mixture ratios for the two tripropellant combinations: LO2/RP-1 + LH2 and LO2/LCH4 + LH2. The results of the study indicate that the dual fuel dual throat engine is a viable single stage to orbit candidate.
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Van, Luong
1992-01-01
The objectives of this paper are to develop a multidisciplinary computational methodology to predict the hot-gas-side and coolant-side heat transfer and to use it in parametric studies to recommend an optimized design of the coolant channels for a regeneratively cooled liquid rocket engine combustor. An integrated numerical model, which incorporates CFD for the hot-gas thermal environment and thermal analysis for the liner and coolant channels, was developed. This integrated CFD/thermal model was validated by comparing predicted heat fluxes with those of hot-firing tests and industrial design methods for a 40K calorimeter thrust chamber and the Space Shuttle Main Engine Main Combustion Chamber. Parametric studies were performed for the Advanced Main Combustion Chamber to find a strategy for a proposed combustion chamber coolant channel design.
Conceptual design of reduced energy transports
NASA Technical Reports Server (NTRS)
Ardema, M. D.; Harper, M.; Smith, C. L.; Waters, M. H.; Williams, L. J.
1975-01-01
This paper reports the results of a conceptual design study of new, near-term fuel-conservative aircraft. A parametric study was made to determine the effects of cruise Mach number and fuel cost on the 'optimum' configuration characteristics and on economic performance. Supercritical wing technology and advanced engine cycles were assumed. For each design, the wing geometry was optimized to give maximum return on investment at a particular fuel cost. Based on the results of the parametric study, a reduced energy configuration was selected. Compared with existing transport designs, the reduced energy design has a higher aspect ratio wing with lower sweep, and cruises at a lower Mach number. It yields about 30% more seat-miles/gal than current wide-body aircraft. At the higher fuel costs anticipated in the future, the reduced energy design has about the same economic performance as existing designs.
Study of noise transmission through double wall aircraft windows
NASA Technical Reports Server (NTRS)
Vaicaitis, R.
1983-01-01
Analytical and experimental procedures were used to predict the noise transmitted through double wall windows into the cabin of a twin-engine general aviation (G/A) aircraft. The analytical model was applied to minimize cabin noise through parametric variation of the structural and acoustic parameters. The parametric study includes mass addition, increase in plexiglass thickness, decrease in window size, increase in window cavity depth, depressurization of the space between the two window plates, replacement of the air cavity with a transparent viscoelastic material, change in stiffness of the plexiglass material, and different absorptive materials for the interior walls of the cabin. It was found that increasing the exterior plexiglass thickness and/or decreasing the total window size could achieve the proper amount of noise reduction for this aircraft. The total added weight to the aircraft is then about 25 lbs.
Artificial Intelligence Methods in Pursuit Evasion Differential Games
1990-07-30
Pursuit-evasion differential games involve multiple objectives, sometimes fuzzy ones; classical optimization, control, or game-theoretic methods are insufficient for their resolution. The approach employs the Analytic Hierarchy Process (AHP), originated by T.L. Saaty of the Wharton School, a general theory of measurement based on pairwise comparisons, including an example AHP hierarchy for choosing the most appropriate differential game and parametrization.
Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO)
Yan, Lixin; Zhang, Yishi; He, Yi; Gao, Song; Zhu, Dunyao; Ran, Bin; Wu, Qing
2016-01-01
The ability to identify hazardous traffic events is already considered as one of the most effective solutions for reducing the occurrence of crashes. Only certain particular hazardous traffic events have been studied in previous studies, which were mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the accuracy of prediction was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm was ranked best in terms of the prediction accuracy. The conclusions can provide reference evidence for the development of dangerous situation warning products and the design of intelligent vehicles. PMID:27420073
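The SMO algorithm used for the identification model is a standard way to train a support vector machine by optimizing two Lagrange multipliers at a time. A minimal "simplified SMO" sketch (after Platt's algorithm, with a linear kernel) on toy two-class data, not the study's driving dataset:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy linearly separable data standing in for the event/non-event classes
X = np.vstack([rng.normal([2, 2], 0.5, (20, 2)),
               rng.normal([-2, -2], 0.5, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=20, max_sweeps=500):
    n = len(y)
    alpha = np.zeros(n); b = 0.0
    K = X @ X.T                                  # linear kernel matrix
    passes = sweeps = 0
    while passes < max_passes and sweeps < max_sweeps:
        sweeps += 1; changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]
            if (y[i]*Ei < -tol and alpha[i] < C) or (y[i]*Ei > tol and alpha[i] > 0):
                j = rng.integers(n - 1)
                if j >= i: j += 1                # random j != i
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:                 # box constraints for the pair
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H: continue
                eta = 2*K[i, j] - K[i, i] - K[j, j]
                if eta >= 0: continue
                alpha[j] = np.clip(aj_old - y[j]*(Ei - Ej)/eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5: continue
                alpha[i] = ai_old + y[i]*y[j]*(aj_old - alpha[j])
                b1 = b - Ei - y[i]*(alpha[i]-ai_old)*K[i, i] - y[j]*(alpha[j]-aj_old)*K[i, j]
                b2 = b - Ej - y[i]*(alpha[i]-ai_old)*K[i, j] - y[j]*(alpha[j]-aj_old)*K[j, j]
                if 0 < alpha[i] < C:   b = b1
                elif 0 < alpha[j] < C: b = b2
                else:                  b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X                          # recover weights (linear kernel only)
    return w, b

w, b = smo_train(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.2f}")
```

The full algorithm adds heuristics for choosing the second multiplier; in the study this trainer is applied to the MB-selected features rather than raw 2D points.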