Advanced Optimal Extraction for the Spitzer/IRS
NASA Astrophysics Data System (ADS)
Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.
2010-02-01
We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
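The AdOpt plug-in itself is distributed with SMART and is not reproduced here; purely as a hedged illustration of the underlying idea, the sketch below performs a generic Horne-style optimal extraction of a point source from a synthetic cross-dispersion cut, where a Gaussian profile stands in for the supersampled point-spread function and all names and numbers are hypothetical.

```python
import numpy as np

def optimal_extract(data, var, psf):
    """Horne-style optimal extraction of a point source from one detector column.

    data : observed counts per pixel (1D cross-dispersion cut)
    var  : per-pixel variance
    psf  : normalized spatial profile (sums to 1) sampled at the same pixels
    Returns the flux estimate and its variance.
    """
    w = psf / var                          # inverse-variance weights shaped by the PSF
    flux = np.sum(w * data) / np.sum(w * psf)
    flux_var = 1.0 / np.sum(psf**2 / var)
    return flux, flux_var

# Synthetic example: a Gaussian profile standing in for the supersampled PSF
pix = np.arange(-10, 11)
psf = np.exp(-0.5 * (pix / 2.0)**2)
psf /= psf.sum()
true_flux = 500.0
noise_var = 25.0 + true_flux * psf        # background + source shot noise (toy model)
rng = np.random.default_rng(0)
data = true_flux * psf + rng.normal(0.0, np.sqrt(noise_var))

flux, flux_var = optimal_extract(data, noise_var, psf)
print(f"extracted flux = {flux:.1f} +/- {np.sqrt(flux_var):.1f}")
```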
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in the optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid-point discretization of the parameter space.
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Khajeh, Masoud; Safigholi, Habib
2015-01-01
A miniature X-ray source has been optimized for electronic brachytherapy. The cooling fluid for this device is water. Unlike radionuclide brachytherapy sources, this source is able to operate at variable voltages and currents to match the dose with the tumor depth. First, Monte Carlo (MC) optimization was performed on the tungsten target-buffer thickness layers versus energy such that the minimum X-ray attenuation occurred. A second optimization was performed on the selection of the anode shape, based on the Monte Carlo in-water TG-43U1 anisotropy function. This optimization was carried out to get the dose anisotropy functions closer to unity at any angle from 0° to 170°. Three anode shapes, including cylindrical, spherical, and conical, were considered. Moreover, the optimal target-buffer shape and different nozzle shapes for electronic brachytherapy were evaluated with a Computational Fluid Dynamics (CFD) code. The characterization criteria of the CFD were the minimum temperature on the anode shape, the cooling water, and the pressure loss from inlet to outlet. The optimal anode was conical in shape with a conical nozzle. Finally, the TG-43U1 parameters of the optimal source were compared with the literature. PMID:26966563
Adjoint Sensitivity Method to Determine Optimal Set of Stations for Tsunami Source Inversion
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Hossen, M. J.; Cummins, P. R.; Satake, K.
2017-12-01
We applied the adjoint sensitivity technique in tsunami science for the first time to determine an optimal set of stations for a tsunami source inversion. The adjoint sensitivity (AS) method has been used in numerical weather prediction to find optimal locations for adaptive observations. We applied this technique to Green's Function based Time Reverse Imaging (GFTRI), which has recently been used in tsunami source inversion to reconstruct the initial sea surface displacement, known as the tsunami source model. This method has the same source representation as the traditional least squares (LSQ) source inversion method, where a tsunami source is represented by dividing the source region into a regular grid of "point" sources. For each of these, the Green's function (GF) is computed using a basis function for initial sea surface displacement whose amplitude is concentrated near the grid point. We applied the AS method to the 2009 Samoa earthquake tsunami that occurred on 29 September 2009 in the southwest Pacific, near the Tonga trench. Many studies show that this earthquake is a doublet associated with both normal faulting in the outer-rise region and thrust faulting on the subduction interface. To estimate the tsunami source model for this complex event, we initially considered 11 observations consisting of 5 tide gauges and 6 DART buoys. After applying the AS method, we found an optimal set of 8 stations. Inversion with this optimal set provides better results in terms of waveform fitting and a source model that shows both sub-events associated with normal and thrust faulting.
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Weihong; Sun, Kai; Qi, Junjian
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a search space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the search space and converge to the global optimal solution.
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for finding the coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
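The authors' implementation and the AgX-100 data are not included in the abstract; the following is a minimal sketch, assuming synthetic radial dose values, of a global-best PSO fit of a bi-exponential radial dose function g(r) = a1*exp(-k1*r) + a2*exp(-k2*r), with the maximum relative deviation used as the fitness, in the spirit of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for published radial dose function values g(r)
r = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
g_obs = 1.2 * np.exp(-0.25 * r) - 0.2 * np.exp(-0.9 * r)

def bi_exp(p, r):
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * r) + a2 * np.exp(-k2 * r)

def fitness(p):
    return np.max(np.abs(bi_exp(p, r) - g_obs) / g_obs)   # max relative deviation

# Plain global-best PSO over the four coefficients
n_particles, n_gen = 40, 1500
lo, hi = np.array([-2, 0, -2, 0.0]), np.array([2, 2, 2, 2.0])
pos = rng.uniform(lo, hi, size=(n_particles, 4))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for gen in range(n_gen):
    r1, r2 = rng.random((2, n_particles, 4))
    vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
    if pbest_f.min() < 1e-4:               # convergence criterion
        break

print("best coefficients:", gbest, "max deviation:", pbest_f.min())
```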
NASA Astrophysics Data System (ADS)
Lin, Juan; Liu, Chenglian; Guo, Yongning
2014-10-01
The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness to find the global optimum at different depths of the brain when using a single equivalent current dipole (sECD) model and single time-sliced data. The results show that PSO is an effective global optimization method for MEG source localization when given one dipole at different depths.
Optimal observation network design for conceptual model discrimination and uncertainty reduction
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2016-02-01
This study expands the Box-Hill discrimination function to design an optimal observation network to discriminate conceptual models and, in turn, identify a most favored model. The Box-Hill discrimination function measures the expected decrease in Shannon entropy (for model identification) before and after the optimal design for one additional observation. This study modifies the discrimination function to account for multiple future observations that are assumed spatiotemporally independent and Gaussian-distributed. Bayesian model averaging (BMA) is used to incorporate existing observation data and quantify future observation uncertainty arising from conceptual and parametric uncertainties in the discrimination function. In addition, the BMA method is adopted to predict future observation data in a statistical sense. The design goal is to find optimal locations and least data via maximizing the Box-Hill discrimination function value subject to a posterior model probability threshold. The optimal observation network design is illustrated using a groundwater study in Baton Rouge, Louisiana, to collect additional groundwater heads from USGS wells. The sources of uncertainty creating multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. Impacts of considering homoscedastic and heteroscedastic future observation data and the sources of uncertainties on potential observation areas are analyzed. Results show that heteroscedasticity should be considered in the design procedure to account for various sources of future observation uncertainty. After the optimal design is obtained and the corresponding data are collected for model updating, total variances of head predictions can be significantly reduced by identifying a model with a superior posterior model probability.
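As a hedged toy illustration of the expected-entropy-decrease idea only (not the authors' Box-Hill/BMA formulation), the sketch below assumes three candidate models with Gaussian predictive distributions for one hypothetical new head observation and estimates, by Monte Carlo over the BMA predictive, the expected decrease in Shannon entropy of the model probabilities; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior model probabilities and each model's Gaussian prediction of a new head observation
p_prior = np.array([0.5, 0.3, 0.2])
mu = np.array([10.0, 11.5, 9.0])      # predicted means per model (hypothetical)
sd = np.array([0.5, 0.5, 0.5])        # predictive std (conceptual + parametric + noise)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy_after(n_draws=20000):
    """Monte Carlo estimate of the posterior entropy expected under the BMA predictive."""
    k = rng.choice(len(p_prior), size=n_draws, p=p_prior)
    y = rng.normal(mu[k], sd[k])                       # draws from the BMA predictive
    lik = np.exp(-0.5 * ((y[:, None] - mu) / sd)**2) / sd
    post = p_prior * lik
    post /= post.sum(axis=1, keepdims=True)
    return np.mean([entropy(p) for p in post])

gain = entropy(p_prior) - expected_entropy_after()
print(f"expected Shannon-entropy decrease from one new observation: {gain:.3f} nats")
```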
Joint optimization of source, mask, and pupil in optical lithography
NASA Astrophysics Data System (ADS)
Li, Jia; Lam, Edmund Y.
2014-03-01
Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, a rigorous 3D mask model achieves high accuracy at a large computational cost. This work develops a combined source, mask, and pupil optimization (SMPO) approach by taking advantage of the fact that pupil phase manipulation is capable of partially compensating for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve an optimal source-mask pair under the condition of an aberrated pupil. Evaluations against conventional source mask optimization (SMO) without incorporating pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process window sizes.
NASA Astrophysics Data System (ADS)
Maringanti, Chetan; Chaubey, Indrajeet; Popp, Jennie
2009-06-01
Best management practices (BMPs) are effective in reducing the transport of agricultural nonpoint source pollutants to receiving water bodies. However, selection of BMPs for placement in a watershed requires optimization of the available resources to obtain the maximum possible pollution reduction. In this study, an optimization methodology is developed to select and place BMPs in a watershed to provide solutions that are both economically and ecologically effective. This novel approach develops and utilizes a BMP tool, a database that stores the pollution reduction and cost information of different BMPs under consideration. The BMP tool replaces the dynamic linkage of the distributed-parameter watershed model during optimization and therefore reduces the computation time considerably. Total pollutant load from the watershed and net cost increase from the baseline were the two objective functions minimized during the optimization process. The optimization model, consisting of a multiobjective genetic algorithm (NSGA-II) in combination with a watershed simulation tool (Soil and Water Assessment Tool, SWAT), was developed and tested for nonpoint source pollution control in the L'Anguille River watershed located in eastern Arkansas. The optimized solutions provided a trade-off between the two objective functions for sediment, phosphorus, and nitrogen reduction. The results indicated that buffer strips were very effective in controlling the nonpoint source pollutants from leaving the croplands. The optimized BMP plans resulted in potential reductions of 33%, 32%, and 13% in sediment, phosphorus, and nitrogen loads, respectively, from the watershed.
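Neither SWAT nor the BMP tool is reproduced here; the sketch below only illustrates the Pareto (non-dominated) selection idea underlying NSGA-II, applied to hypothetical BMP plans scored by the two objectives named in the abstract, net cost increase and total pollutant load.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical candidate BMP plans scored by two objectives to minimize:
# column 0: net cost increase from baseline, column 1: total pollutant load
scores = rng.uniform(0.0, 1.0, size=(200, 2))

def pareto_front(points):
    """Return indices of non-dominated points (both objectives minimized)."""
    idx = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

front = pareto_front(scores)
print(f"{front.size} non-dominated plans out of {len(scores)}")
print(scores[front[np.argsort(scores[front, 0])]][:5])   # cheapest end of the trade-off
```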
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm optimization, and a genetic algorithm, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address the difficulties associated with the non-smooth response, and they are shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
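As a minimal sketch of the hybrid global-then-local pattern, assuming SciPy's dual_annealing as the early-stopped global stage, Nelder-Mead as a derivative-free stand-in for implicit filtering, and a made-up non-smooth test function in place of the Poisson negative log-likelihood:

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

def neg_log_like(x):
    """Non-smooth, multimodal stand-in for the Poisson negative log-likelihood:
    piecewise (abs) terms plus an oscillatory part create kinks and local minima."""
    return np.abs(x[0] - 1.3) + np.abs(x[1] + 0.7) + 0.3 * np.sin(8 * x[0]) * np.cos(8 * x[1])

bounds = [(-5, 5), (-5, 5)]

# Stage 1: global search with an early-stopping budget (pseudo-optimum)
coarse = dual_annealing(neg_log_like, bounds, maxiter=50, seed=4)

# Stage 2: derivative-free local refinement from the pseudo-optimum
fine = minimize(neg_log_like, coarse.x, method="Nelder-Mead",
                options={"xatol": 1e-6, "fatol": 1e-8})

print("global stage :", coarse.x, coarse.fun)
print("hybrid result:", fine.x, fine.fun)
```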
Ma, Yunfeng; Xiang, Fu; Xiang, Jun; Yu, Longjiang
2012-01-01
Selenium is an essential nutrient with diverse physiological functions, and soluble organic selenium (SOS) sources have a higher bioavailability than inorganic selenium sources. Based on the response surface methodology and central composite design, this study presents the optimal medium components for SOS accumulation in batch cultures of Flammulina velutipes, i.e. 30 g/L glucose, 11.2 mg/L sodium selenite, and 1.85 g/L NH4NO3. Furthermore, logistic function model feeding was found to be the optimal feeding strategy for SOS accumulation during Flammulina velutipes mycelia fermentation, where the maximum SOS accumulation reached (4.63 +/- 0.24) mg/L, which is consistent with the predicted value.
Optimized Reduction of Unsteady Radial Forces in a Single-Channel Pump for Wastewater Treatment
NASA Astrophysics Data System (ADS)
Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang
2016-11-01
A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of the radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying a weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
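The CFD runs cannot be reproduced here; the following hedged sketch only illustrates the workflow described above, combining two objectives with a weighting factor, sampling two hypothetical design variables by Latin hypercube sampling, and fitting a quadratic response surface as a simple surrogate.

```python
import numpy as np
from scipy.stats import qmc

# Two design variables describing the volute cross-sectional area (hypothetical toy model)
lhs = qmc.LatinHypercube(d=2, seed=5)
X = qmc.scale(lhs.random(12), [0.8, 0.8], [1.2, 1.2])   # 12 design points

def objectives(x):
    """Toy stand-ins for the two radial-force metrics obtained from CFD runs."""
    sweep_area = (x[0] - 1.05)**2 + 0.5 * (x[1] - 0.95)**2
    center_dist = 0.3 * abs(x[0] - x[1])
    return sweep_area, center_dist

w = 0.7                                                  # weighting factor
F = np.array([w * a + (1 - w) * b for a, b in map(objectives, X)])

# Quadratic response surface as the surrogate, then pick its minimizer on a grid
A = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)
g = np.linspace(0.8, 1.2, 101)
GX, GY = np.meshgrid(g, g)
P = np.column_stack([np.ones(GX.size), GX.ravel(), GY.ravel(),
                     GX.ravel()**2, GY.ravel()**2, GX.ravel() * GY.ravel()])
best = np.argmin(P @ coef)
print("surrogate optimum near:", GX.ravel()[best], GY.ravel()[best])
```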
NASA Astrophysics Data System (ADS)
Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid
2011-10-01
In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model comprise the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables by one and hence reduces the complexity of the optimization model. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that the mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
Cohen, Michael X
2017-09-27
The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
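A compact sketch of the two-stage idea on synthetic data follows; the spatial and temporal filters are both obtained from a generalized eigendecomposition (scipy.linalg.eigh with two covariance matrices), with the "signal" covariance taken from an assumed time window rather than the paper's task design, and all channel counts and frequencies made up.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)

# Synthetic multichannel data: a 6 Hz source mixed into 8 channels plus noise
fs, T, n_ch = 250, 4.0, 8
t = np.arange(0, T, 1 / fs)
source = np.sin(2 * np.pi * 6 * t) * (t > 1) * (t < 3)
mix = rng.normal(size=n_ch)
data = np.outer(mix, source) + 0.8 * rng.normal(size=(n_ch, t.size))

def ged_filter(S_cov, R_cov):
    """Generalized eigendecomposition: weights maximizing w'Sw / w'Rw."""
    evals, evecs = eigh(S_cov, R_cov)
    return evecs[:, -1]                     # eigenvector with the largest eigenvalue

# Stage 1: spatial filter (signal-window covariance vs whole-recording covariance)
win = (t > 1) & (t < 3)
S = np.cov(data[:, win])
R = np.cov(data)
w_spatial = ged_filter(S, R)
comp = w_spatial @ data                     # one component time series

# Stage 2: temporal filter from a time-delay-embedding matrix of the component
n_delays = 20
emb = np.array([np.roll(comp, d) for d in range(n_delays)])[:, n_delays:]
S_t = np.cov(emb[:, win[n_delays:]])
R_t = np.cov(emb)
w_temporal = ged_filter(S_t, R_t)
filtered = w_temporal @ emb
print("spatiotemporal component length:", filtered.size)
```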
Tool Support for Software Lookup Table Optimization
Wilcox, Chris; Strout, Michelle Mills; Bieman, James M.
2011-01-01
A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.
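Mesa itself performs source-to-source transformation of C/C++; purely as a hedged Python illustration of the underlying LUT idea, the sketch below tabulates a stand-in for a costly elementary-function expression over a profiled domain and reuses it with linear interpolation, reporting the speed/accuracy tradeoff.

```python
import numpy as np
import time

def expensive_fn(x):
    """Stand-in for a costly elementary-function expression."""
    return np.exp(-x) * np.sin(3 * x) + np.log1p(x)

class LookupTable:
    """Uniform-grid lookup table with linear interpolation over a profiled domain."""
    def __init__(self, fn, lo, hi, n):
        self.x = np.linspace(lo, hi, n)
        self.y = fn(self.x)

    def __call__(self, x):
        return np.interp(x, self.x, self.y)

lut = LookupTable(expensive_fn, 0.0, 10.0, 4096)
xs = np.random.default_rng(7).uniform(0.0, 10.0, 1_000_000)

t0 = time.perf_counter(); exact = expensive_fn(xs); t1 = time.perf_counter()
approx = lut(xs); t2 = time.perf_counter()

print(f"direct: {t1 - t0:.3f}s  LUT: {t2 - t1:.3f}s")
print("max abs error:", np.max(np.abs(exact - approx)))
```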
NASA Astrophysics Data System (ADS)
Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl
2014-09-01
Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surface of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.
Chande, Ruchi D; Wayne, Jennifer S
2017-09-01
Computational models of diarthrodial joints serve to inform the biomechanical function of these structures, and as such, must be supplied appropriate inputs for performance that is representative of actual joint function. Inputs for these models are sourced from both imaging modalities as well as literature. The latter is often the source of mechanical properties for soft tissues, like ligament stiffnesses; however, such data are not always available for all the soft tissues nor is it known for patient-specific work. In the current research, a method to improve the ligament stiffness definition for a computational foot/ankle model was sought with the greater goal of improving the predictive ability of the computational model. Specifically, the stiffness values were optimized using artificial neural networks (ANNs); both feedforward and radial basis function networks (RBFNs) were considered. Optimal networks of each type were determined and subsequently used to predict stiffnesses for the foot/ankle model. Ultimately, the predicted stiffnesses were considered reasonable and resulted in enhanced performance of the computational model, suggesting that artificial neural networks can be used to optimize stiffness inputs.
Nonparametric variational optimization of reaction coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk
State-of-the-art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories onto optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function, also known as the folding probability in protein folding studies, is often considered the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form, thereby avoiding this source of error.
NASA Astrophysics Data System (ADS)
Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi
2017-02-01
An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle here is studied as a waste heat recovery system which uses the engine exhaust gases as the heat source. The engine exhaust gas parameters (temperature, mass flow and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. Previously, the engine simulation model was validated and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were numerically estimated by means of a simulation code in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working fluid mass flow and evaporation pressure for the given heat source. Thus, the optimal Rankine cycle performance was obtained over the engine operating map.
Theoretical Investigation Leading to Energy Storage in Atomic and Molecular Systems
1990-12-01
can be calculated in a single run. j) Non-gradient optimization of basis function exponents is possible. The source code can be modified to carry ... basis. The 10s3p/5s3p basis consists of the 9s/4s contraction of Siegbahn and Liu (Reference 91) augmented by a diffuse s-type function (exponent ...) ... vibrational modes. Introduction of diffuse basis functions and optimization of the d-orbital exponents have a small but important effect on the ...
Automated optimization of an aspheric light-emitting diode lens for uniform illumination.
Luo, Xiaoxia; Liu, Hua; Lu, Zhenwu; Wang, Yao
2011-07-10
In this paper, an automated optimization method in the sequential mode of ZEMAX is proposed for the design of an aspheric lens with uniform illuminance for an LED source. A feedback modification is introduced in the design for the LED extended source. The user-defined merit function is written using ZEMAX Programming Language (ZPL) macros and, as an example, optimum parameters of an aspheric lens are obtained by running an optimization. The optical simulation results show that the illumination efficiency and uniformity can reach 83% and 90%, respectively, on a target surface 40 mm in diameter located 60 mm away, for a 1×1 mm LED source. © 2011 Optical Society of America
Nonlinear optimal control policies for buoyancy-driven flows in the built environment
NASA Astrophysics Data System (ADS)
Nabi, Saleh; Grover, Piyush; Caulfield, Colm
2017-11-01
We consider optimal control of turbulent buoyancy-driven flows in the built environment, focusing on a model test case of displacement ventilation with a time-varying heat source. The flow is modeled using the unsteady Reynolds-averaged equations (URANS). To understand the stratification dynamics better, we derive a low-order partial-mixing ODE model extending the buoyancy-driven emptying filling box problem to the case where both the heat source and the (controlled) inlet flow are time-varying. In the limit of a single step change in the heat source strength, our model is consistent with that of Bower et al. Our model considers the dynamics of both `filling' and `intruding' added layers due to a time-varying source and inlet flow. A nonlinear direct-adjoint-looping optimal control formulation yields time-varying values of temperature and velocity of the inlet flow that lead to an `optimal' time-averaged temperature relative to appropriate objective functionals in a region of interest.
Guo, Gang; Wu, Di; Ekama, George A; Hao, Tianwei; Mackey, Hamish Robert; Chen, Guanghao
2018-04-16
The recently developed Denitrifying Sulfur conversion-associated Enhanced Biological Phosphorus Removal (DS-EBPR) process has demonstrated simultaneous removal of organics, nitrogen and phosphorus with minimal sludge production in the treatment of saline/brackish wastewater. Its performance, however, is sensitive to operating and environmental conditions. In this study, the effects of temperature (20, 25, 30 and 35 °C) and the ratio of influent acetate to propionate (100-0, 75-25, 50-50, 25-75 and 0-100%) on anaerobic metabolism were investigated, and their optimal values/controls for performance optimization were identified. A mature DS-EBPR sludge enriched with approximately 30% sulfate-reducing bacteria (SRB) and 33% sulfide-oxidizing bacteria (SOB) was used in this study. The anaerobic stoichiometry of this process was insensitive to temperature or changes in the carbon source. However, an increase in temperature from 20 to 35 °C accelerated the kinetic reactions of the functional bacteria (i.e. SRB and SOB) and raised the energy requirement for their anaerobic maintenance, while a moderate temperature (25-30 °C) resulted in better P removal (≥93%, 18.6 mg P/L removal from total 20 mg P/L in the influent) with a maximum sulfur conversion of approximately 16 mg S/L. These results indicate that the functional bacteria are likely to be mesophilic. When a mixed carbon source (75-25 and 50-50% acetate to propionate ratios) was supplied, DS-EBPR achieved a stable P removal (≥89%, 17.8 mg P/L for 400 mg COD/L in the influent) with sulfur conversions at around 23 mg S/L, suggesting the functional bacteria could effectively adapt to changes in acetate or propionate as the carbon source. The optimal temperatures or carbon source conditions maximized the functional bacteria competition against glycogen-accumulating organisms by favoring their activity and synergy. Therefore, the DS-EBPR process can be optimized by setting the temperature in the appropriate range (25-30 °C) and/or manipulating influent carbon sources. Copyright © 2018 Elsevier Ltd. All rights reserved.
Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.
2008-05-01
The estimation of area source pollutant strength is a relevant issue for the atmospheric environment. This characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of such an area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed by a delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model predictions, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol. 1, p. 354-359.
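The maximum entropy regularization and the PSO/quasi-Newton minimization used above are not reproduced; as a hedged sketch of the general regularized source-receptor inversion, the following uses a synthetic transition matrix, second-order Tikhonov smoothing in place of the entropy operator, and a scan of the regularization parameter of the kind used to trace an L-curve.

```python
import numpy as np

rng = np.random.default_rng(8)

# Source-receptor transition matrix: 6 receptors, 30 unknown area-source strengths
n_rec, n_src = 6, 30
K = rng.lognormal(mean=-2.0, sigma=1.0, size=(n_rec, n_src))
s_true = np.exp(-0.5 * ((np.arange(n_src) - 12) / 4.0)**2)   # smooth true emission field
y = K @ s_true + 0.02 * rng.normal(size=n_rec)               # synthetic observations

# Second-order difference operator (smoothness regularization)
L = np.diff(np.eye(n_src), n=2, axis=0)

def solve(lmbda):
    A = K.T @ K + lmbda * (L.T @ L)
    return np.linalg.solve(A, K.T @ y)

# Trace the L-curve: residual norm vs roughness (seminorm) of the solution
for lmbda in [1e-6, 1e-4, 1e-2, 1e0]:
    s = solve(lmbda)
    print(f"lambda={lmbda:.0e}  residual={np.linalg.norm(K @ s - y):.3e}  "
          f"roughness={np.linalg.norm(L @ s):.3e}")
```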
Formal optimization of hovering performance using free wake lifting surface theory
NASA Technical Reports Server (NTRS)
Chung, S. Y.
1986-01-01
Free wake techniques for performance prediction and optimization of hovering rotors are discussed. The influence functions due to vortex rings, vortex cylinders, and source or vortex sheets are presented. The vortex core sizes of rotor wake vortices are calculated and their importance is discussed. Lifting-body theory for a finite-thickness body is developed for pressure calculation, and hence for performance prediction of hovering rotors. A numerical optimization technique based on free-wake lifting-line theory is presented and discussed. It is demonstrated that formal optimization can be used with an implicit and nonlinear objective or cost function, such as the performance of hovering rotors, as used in this report.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali
2015-04-01
The fast growing urbanization, industrialization and military developments increase the risk towards the human environment and ecology. This is realized in several past mortality incidents, for instance, Chernobyl nuclear explosion (Ukraine), Bhopal gas leak (India), Fukushima-Daichi radionuclide release (Japan), etc. To reduce the threat and exposure to the hazardous contaminants, a fast and preliminary identification of unknown releases is required by the responsible authorities for the emergency preparedness and air quality analysis. Often, an early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases following the sensor reported concentrations is a challenging task. This requires an optimal strategy to integrate the measured concentrations with the predictions given by the atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus, affect the resolution, stability and uniqueness of the retrieved source. An additional well known issue is the numerical artifact arisen at the measurement locations due to the strong concentration gradient and dissipative nature of the concentration. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within the Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements. This is based on adjoint representation of the source-receptor relationship and utilization of a weight function which exhibits a priori information about the unknown releases apparent to the monitoring network. The properties of the weight function provide an optimal data resolution and model resolution to the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against the random measurement errors and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended for the identification of the point type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with the several diffusion experiments, like, Idaho low wind diffusion experiment (1974), IIT Delhi tracer experiment (1991), European Tracer Experiment (1994), Fusion Field Trials (2007), etc. In case of point release experiments, the source parameters are mostly retrieved close to the true source parameters with least error. Primarily, the proposed technique overcomes two major difficulties incurred in the source reconstruction: (i) The initialization of the source parameters as required by the optimization based techniques. The converged solution depends on their initialization. (ii) The statistical knowledge about the measurement and background errors as required by the Bayesian inference based techniques. These are hypothetically assumed in case of no prior knowledge.
Heuristic Approach for Configuration of a Grid-Tied Microgrid in Puerto Rico
NASA Astrophysics Data System (ADS)
Rodriguez, Miguel A.
The high electricity rates that consumers are charged by the utility grid in Puerto Rico have created an energy crisis around the island. This situation is due to the island's dependence on imported fossil fuels. In order to aid in the transition from fossil-fuel based electricity to electricity from renewable and alternative sources, this research work focuses on reducing the cost of electricity for Puerto Rico by finding the optimal microgrid configuration for a set number of consumers from the residential sector. The Hybrid Optimization Model for Electric Renewables (HOMER) software, developed by NREL, is used as an aid in determining the optimal microgrid configuration. The problem is also approached via convex optimization; specifically, an objective function C(t) is formulated in order to be minimized. The cost function depends on the energy supplied by the grid, the energy supplied by renewable sources, the energy not supplied due to outages, as well as any excess energy sold to the utility, on a yearly basis. A term for the social cost of carbon is also included in the cost function. Once the microgrid settings from HOMER are obtained, they are evaluated via the optimized function C(t), which in turn assesses the true optimality of the microgrid configuration. A microgrid to supply 10 consumers is considered; each consumer can possess a different microgrid configuration. The cost function C(t) is minimized, and the Net Present Value and Cost of Electricity are computed for each configuration, in order to assess the true feasibility. Results show that the greater the penetration of components into the microgrid, the greater the energy produced by the renewable sources, and also the greater the energy not supplied due to outages. The proposed method demonstrates that adding large amounts of renewable components in a microgrid does not necessarily translate into economic benefits for the consumer; in fact, there is a trade-off between cost and the addition of elements that must be considered. Any configurations which consider further increases in microgrid components will result in increased NPV and increased costs of electricity, which renders those configurations infeasible.
D-Optimal Experimental Design for Contaminant Source Identification
NASA Astrophysics Data System (ADS)
Sai Baba, A. K.; Alexanderian, A.
2016-12-01
Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or a parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters - in our case, the sparsity of the sensors - to maximize the information gain subject to some physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain, and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient involving the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
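The randomized large-scale estimators described above are beyond a short example; the sketch below is a small dense stand-in for the D-optimality idea, greedily selecting sensors that maximize the log-determinant of a linearized Gaussian posterior information matrix, with all matrices synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)

n_param, n_cand, k, noise = 12, 40, 6, 0.1
F = rng.normal(size=(n_cand, n_param))          # rows: sensitivity of each candidate sensor
prior_prec = np.eye(n_param)                    # prior precision (identity for simplicity)

def log_det_posterior(rows):
    """Log-determinant of the posterior information matrix for a sensor subset."""
    H = prior_prec + F[rows].T @ F[rows] / noise**2
    return np.linalg.slogdet(H)[1]

chosen = []
for _ in range(k):
    gains = [(log_det_posterior(chosen + [j]), j)
             for j in range(n_cand) if j not in chosen]
    best_gain, best_j = max(gains)
    chosen.append(best_j)
    print(f"pick sensor {best_j:2d}  log-det = {best_gain:.3f}")
```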
Quadratic Optimization in the Problems of Active Control of Sound
NASA Technical Reports Server (NTRS)
Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.
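As a hedged discrete illustration (not the paper's formulation or geometry): with more candidate control sources than constraint points, the L2-optimal anti-noise control that cancels the field on the protected region is the minimum-norm least-squares solution, which numpy.linalg.lstsq returns directly for an underdetermined system; the propagation matrix and noise field below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(10)

# G maps 40 candidate control-source strengths to the field at 24 points of the
# protected region; u is the unwanted (noise) field to be cancelled there.
n_pts, n_ctrl = 24, 40                     # fewer constraints than controls: underdetermined
G = rng.normal(size=(n_pts, n_ctrl)) + 1j * rng.normal(size=(n_pts, n_ctrl))
u = rng.normal(size=n_pts) + 1j * rng.normal(size=n_pts)

# Minimum-L2-norm control g achieving G g = -u (exact cancellation on the region)
g, *_ = np.linalg.lstsq(G, -u, rcond=None)

residual = np.linalg.norm(G @ g + u) / np.linalg.norm(u)
print(f"relative residual field on the protected region: {residual:.2e}")
print(f"L2 norm of the control sources: {np.linalg.norm(g):.3f}")
```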
How Conjunctive Use of Surface and Ground Water Could Increase Resiliency in the US?
NASA Astrophysics Data System (ADS)
Josset, L.; Rising, J. A.; Russo, T. A.; Troy, T. J.; Lall, U.; Allaire, M.
2016-12-01
Optimized management practices are crucial to ensuring water availability in the future. However, this presents a tremendous challenge due to the many functions of water: water is not only central to our survival as drinking water or for irrigation, but it is also valued for industrial and recreational use. Sources of water meeting these needs range from rainwater harvesting to reservoirs, water reuse, groundwater abstraction and desalination. A global conjunctive management approach is thus necessary to develop sustainable practices, as all sectors are strongly coupled. Policy-makers and researchers have identified pluralism in water sources as a key solution to reach water security. We propose a novel approach to sustainable water management that accounts for multiple sources of water in an integrated manner. We formulate this challenge as an optimization problem where the choice of water sources is driven both by the availability of the sources and by their relative cost. The results determine the optimal operational decisions for each source (e.g., reservoir releases, surface water withdrawals, groundwater abstraction and/or desalination use) at each time step for a given time horizon. The physical surface and ground water systems are simulated inside the optimization by setting state equations as constraints. Additional constraints may be added to the model to represent the influence of policy decisions. To account for uncertainty in weather conditions and its impact on availability, the optimization is performed for an ensemble of climate scenarios. While many sectors and their interactions are represented, the computational cost is limited as the problem remains linear, which enables large-scale applications and the propagation of uncertainty. The formulation is implemented within the model "America's Water Analysis, Synthesis and Heuristic", an integrated model for the conterminous US discretized at the county scale. This enables a systematic evaluation of stresses on water resources. We explore in particular geographic and temporal trends as a function of user type to develop a better understanding of the dynamics at play. We conclude with a comparison between the optimization results and current water use to identify potential solutions to increase resiliency.
A review of biomedical multiphoton microscopy and its laser sources
NASA Astrophysics Data System (ADS)
Lefort, Claire
2017-10-01
Multiphoton microscopy (MPM) has been the subject of major development efforts for about 25 years for imaging biological specimens at the micron scale, and has been presented as an elegant alternative to classical fluorescence methods such as confocal microscopy. In this topical review, the main interests and technical requirements of MPM are addressed, with a focus on the crucial role of the excitation source in the optimization of multiphoton processes. Then, an overview of the different sources successfully demonstrated in the literature for MPM is presented, and their physical parameters are inventoried. A classification of these sources according to their ability to optimize multiphoton processes is proposed, following a protocol found in the literature. Starting from these considerations, a suggestion of a possible identikit of the ideal laser source for MPM concludes this topical review. Dedicated to Martin.
A study of the optimization method used in the NAVY/NASA gas turbine engine computer code
NASA Technical Reports Server (NTRS)
Horsewood, J. L.; Pines, S.
1977-01-01
Sources of numerical noise affecting the convergence properties of Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.
Discretized energy minimization in a wave guide with point sources
NASA Technical Reports Server (NTRS)
Propst, G.
1994-01-01
An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
The General Mission Analysis Tool (GMAT): Current Features And Adding Custom Functionality
NASA Technical Reports Server (NTRS)
Conway, Darrel J.; Hughes, Steven P.
2010-01-01
The General Mission Analysis Tool (GMAT) is a software system for trajectory optimization, mission analysis, trajectory estimation, and prediction developed by NASA, the Air Force Research Lab, and private industry. GMAT's design and implementation are based on four basic principles: open source visibility for both the source code and design documentation; platform independence; modular design; and user extensibility. The system, released under the NASA Open Source Agreement, runs on Windows, Mac and Linux. User extensions, loaded at run time, have been built for optimization, trajectory visualization, force model extension, and estimation, by parties outside of GMAT's development group. The system has been used to optimize maneuvers for the Lunar Crater Observation and Sensing Satellite (LCROSS) and ARTEMIS missions and is being used for formation design and analysis for the Magnetospheric Multiscale Mission (MMS).
Galaxy Redshifts from Discrete Optimization of Correlation Functions
NASA Astrophysics Data System (ADS)
Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi
2016-12-01
We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
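The actual correlation-function cost is not reproduced here; assuming gurobipy and a Gurobi license are available, the sketch below only illustrates the mechanics mentioned in the abstract, building an integer linear objective dynamically in Python and solving it with Gurobi, for a toy assignment of sources to redshift bins with hypothetical costs and bin-count constraints.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(11)

n_src, n_bin = 30, 5
cost = rng.random((n_src, n_bin))            # toy per-source cost of each redshift bin
target = np.array([4, 8, 9, 6, 3])           # expected number of sources per bin

m = gp.Model("toy_redshift_assignment")
x = m.addVars(n_src, n_bin, vtype=GRB.BINARY, name="x")

# Build the cost function dynamically, as a linear expression over the binaries
m.setObjective(gp.quicksum(cost[i, k] * x[i, k]
                           for i in range(n_src) for k in range(n_bin)), GRB.MINIMIZE)

m.addConstrs((x.sum(i, "*") == 1 for i in range(n_src)), name="one_bin_per_source")
m.addConstrs((x.sum("*", k) == int(target[k]) for k in range(n_bin)), name="bin_counts")

m.optimize()
assignment = [max(range(n_bin), key=lambda k: x[i, k].X) for i in range(n_src)]
print("bin populations:", np.bincount(assignment, minlength=n_bin))
```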
Li, Jia; Lam, Edmund Y
2014-04-21
Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.
Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System
NASA Astrophysics Data System (ADS)
Agarwal, Ruchi; Singh, Sanjeev
2017-12-01
The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization due to their impact on power quality. The proposed controller algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed and its simulation is carried out in the MATLAB-Simulink environment. The obtained results are presented to demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load condition, voltage sag condition, and tapped load fault under one-phase-open condition at both points of common coupling.
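A minimal sketch of the golden section search component is given below; the objective shown is a stand-in quadratic cost over a single controller gain, not the paper's combination of harmonic distortion, power factor, and ripple.

```python
# Minimal golden section search on a unimodal 1-D objective, e.g. a controller
# gain versus a simulated power-quality cost. The cost function is a stand-in.
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Return an approximate minimizer of f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# toy cost: quadratic with optimum at gain = 2.3
print(golden_section_search(lambda k: (k - 2.3) ** 2 + 0.1, 0.0, 10.0))
```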
NASA Astrophysics Data System (ADS)
Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung
2018-02-01
Many problems in science and engineering can be formulated as optimization problems. One way to solve these problems is to develop tailored problem-specific approaches. As such development is challenging, an alternative is to develop good generally applicable algorithms. Such algorithms are easy to apply, typically function robustly, and reduce development time. Here we provide a description of one such algorithm, Conformational Space Annealing (CSA), along with its Python version, PyCSA. We previously applied it to many optimization problems including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely the Ackley and Eggholder functions. In addition, to show that PyCSA generalizes to any type of objective function, we demonstrate how PyCSA can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results for the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For continuous objective functions, we found that L-BFGS-B was the best-performing local optimization method, while for a discrete objective function Nelder-Mead was the best. The current version of PyCSA can be run in parallel at the coarse-grained level by calculating multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
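The sketch below reproduces only the generic ingredients mentioned above, minimizing the Ackley test function with SciPy's L-BFGS-B local optimizer; it uses plain SciPy rather than the PyCSA package itself.

```python
# Ackley test function minimized with SciPy's L-BFGS-B, the local optimizer the
# authors found to work best for continuous objectives (plain SciPy, not PyCSA).
import numpy as np
from scipy.optimize import minimize

def ackley(x):
    x = np.asarray(x)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

x0 = np.random.uniform(-5, 5, size=2)
res = minimize(ackley, x0, method="L-BFGS-B", bounds=[(-32.768, 32.768)] * 2)
print(res.x, res.fun)   # L-BFGS-B converges to a nearby local minimum;
                        # global strategies such as CSA are needed to reach x = 0
```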
NASA Astrophysics Data System (ADS)
Molaei Imen Abadi, Rouzbeh; Sedigh Ziabari, Seyed Ali
2016-11-01
In this paper, a first qualitative study of the performance characteristics of the dual-work-function gate junctionless TFET (DWG-JLTFET), based on energy band profile modulation, is presented. A dual-work-function gate technique is used in a JLTFET in order to create a downward band bending on the source side similar to a PNPN structure. The numerical simulation results demonstrate that, compared with the single-work-function gate junctionless TFET (SWG-JLTFET), the DWG-JLTFET simultaneously optimizes the ON-state current, the OFF-state leakage current, and the threshold voltage, and also improves the average subthreshold slope. It is illustrated that if appropriate work functions are selected for the gate materials on the source side and the drain side, the JLTFET exhibits considerably improved performance. Furthermore, the optimized design of the tunnel gate length (L_Tun) for the proposed DWG-JLTFET is studied. All the simulations are done in Silvaco TCAD for a channel length of 20 nm using the nonlocal band-to-band tunneling (BTBT) model.
The Optimal Location of GEODSS Sensors in Canada
1991-02-01
Interactive procedures for solving multiobjective transportation problems. A transportation problem is a classical linear programming problem where a ... product must be transported from each of m sources to any of n destinations such that one or more objectives are optimized (36:96). The first algorithm ... ≥ 0, k = 1, ..., L, where z_k is the kth element of z^k. The function z'(x) can now be optimized using any efficient, single-objective transportation ...
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE), when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the aforementioned metaheuristic methods are well suited. Hence, we evaluate the localization's performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate how little effect the selection of a particular metaheuristic and variations in its operational parameters have on this optimization problem.
Optimal secondary source position in exterior spherical acoustical holophony
NASA Astrophysics Data System (ADS)
Pasqual, A. M.; Martin, V.
2012-02-01
Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Moreover, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position significantly improves the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
Lower Emittance Lattice for the Advanced Photon Source Upgrade Using Reverse Bending Magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borland, M.; Berenc, T.; Sun, Y.
The Advanced Photon Source (APS) is pursuing an upgrade to the storage ring to a hybrid seven-bend-achromat design [1]. The nominal design provides a natural emittance of 67 pm [2]. By adding reverse dipole fields to several quadrupoles [3, 4] we can reduce the natural emittance to 41 pm while simultaneously providing more optimal beta functions in the insertion devices and increasing the dispersion function at the chromaticity sextupole magnets. The improved emittance results from a combination of increased energy loss per turn and a change in the damping partition. At the same time, the nonlinear dynamics performance is very similar, thanks in part to increased dispersion in the sextupoles. This paper describes the properties, optimization, and performance of the new lattice.
Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.
Firtha, Gergely; Fiala, Péter
2017-08-01
The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized using a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions, 2.5D Wave Field Synthesis driving functions are derived for arbitrarily shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
NASA Astrophysics Data System (ADS)
Zhang, Jingjing; Guo, Weihong; Xie, Bin; Yu, Xingjian; Luo, Xiaobing; Zhang, Tao; Yu, Zhihua; Wang, Hong; Jin, Xing
2017-09-01
The blue light hazard of white light-emitting diodes (LEDs) is a hidden risk to human photobiological safety. Recent spectral optimization methods focus on maximizing the luminous efficacy and improving the color performance of LEDs, but few of them take the blue light hazard into account. Therefore, for healthy lighting, it is urgent to propose a spectral optimization method for white LED sources that exhibits low blue light hazard, high luminous efficacy of radiation (LER) and high color performance. In this study, a genetic algorithm with penalty functions is proposed for realizing white spectra with low blue hazard, maximal LER and high color rendering index (CRI) values. In simulations, white LED spectra with low blue hazard, high LER (≥297 lm/W) and high CRI (≥90) were achieved at different correlated color temperatures (CCTs) from 2013 K to 7845 K. Thus, the spectral optimization method can be used to guide the fabrication of LED sources in line with photobiological safety. It is also found that the maximum permissible exposure duration of the optimized spectra is 14.9% longer than that of bichromatic phosphor-converted LEDs with equal CCT.
Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.
1997-01-01
Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
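A toy sketch of the subset-selection idea is given below: a tabu search swaps actuators in and out of a candidate set, scoring each set by the residual field after a least-squares control solve on a random, stand-in transfer matrix. The force constraints and multi-frequency data discussed above are omitted, and all sizes are hypothetical.

```python
# Toy tabu search for picking k of N actuators: the objective is the residual
# interior pressure after a least-squares control solve using only the chosen
# columns of a (random, stand-in) transfer matrix.
import numpy as np

rng = np.random.default_rng(1)
N, M, k = 64, 40, 8                       # actuators, microphones, subset size
T = rng.normal(size=(M, N))               # stand-in transfer functions
p = rng.normal(size=M)                    # primary field at the microphones

def residual(subset):
    a, *_ = np.linalg.lstsq(T[:, subset], -p, rcond=None)
    return np.linalg.norm(p + T[:, subset] @ a)

current = list(rng.choice(N, size=k, replace=False))
best, best_val = current[:], residual(current)
tabu, tenure = {}, 10
for it in range(50):
    moves = []
    for out in current:
        for inn in set(range(N)) - set(current):
            if tabu.get((out, inn), -1) >= it:   # skip recently reversed swaps
                continue
            cand = [inn if c == out else c for c in current]
            moves.append((residual(cand), out, inn, cand))
    val, out, inn, cand = min(moves)             # best non-tabu neighbour
    current = cand
    tabu[(inn, out)] = it + tenure               # forbid reversing this swap for a while
    if val < best_val:
        best, best_val = cand[:], val
print(sorted(best), best_val)
```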
Fisher, Michael B; Shields, Katherine F; Chan, Terence U; Christenson, Elizabeth; Cronk, Ryan D; Leker, Hannah; Samani, Destina; Apoya, Patrick; Lutz, Alexandra; Bartram, Jamie
2015-10-01
Safe drinking water is critical to human health and development. In rural sub-Saharan Africa, most improved water sources are boreholes with handpumps; studies suggest that up to one third of these handpumps are nonfunctional at any given time. This work presents findings from a secondary analysis of cross-sectional data from 1509 water sources in 570 communities in the rural Greater Afram Plains (GAP) region of Ghana; one of the largest studies of its kind. 79.4% of enumerated water sources were functional when visited; in multivariable regressions, functionality depended on source age, management, tariff collection, the number of other sources in the community, and the district. A Bayesian network (BN) model developed using the same data set found strong dependencies of functionality on implementer, pump type, management, and the availability of tools, with synergistic effects from management determinants on functionality, increasing the likelihood of a source being functional from a baseline of 72% to more than 97% with optimal management and available tools. We suggest that functionality may be a dynamic equilibrium between regular breakdowns and repairs, with management a key determinant of repair rate. Management variables may interact synergistically in ways better captured by BN analysis than by logistic regressions. These qualitative findings may prove generalizable beyond the study area, and may offer new approaches to understanding and increasing handpump functionality and safe water access.
Vale, S S; Fuller, I C; Procter, J N; Basher, L R; Smith, I E
2016-02-01
Knowledge of sediment movement throughout a catchment environment is essential because of its influence on the character and form of the landscape, with implications for agricultural productivity and ecological health. Sediment fingerprinting is a well-used tool for evaluating sediment sources within a fluvial catchment but still faces areas of uncertainty for applications to large catchments that have a complex arrangement of sources. Sediment fingerprinting was applied to the Manawatu River Catchment to differentiate 8 geological and geomorphological sources. The source categories were Mudstone, Hill Subsurface, Hill Surface, Channel Bank, Mountain Range, Gravel Terrace, Loess and Limestone. Geochemical analysis was conducted using XRF and LA-ICP-MS. Geochemical concentrations were analysed using Discriminant Function Analysis and sediment un-mixing models. Two mixing models were used in conjunction with GRG non-linear and Evolutionary optimization methods for comparison. Discriminant Function Analysis required 16 variables to correctly classify 92.6% of sediment sources. Geological explanations were achieved for some of the variables selected, although there is a need for mineralogical information to confirm causes for the geochemical signatures. Consistent source estimates were achieved between models, with optimization techniques providing globally optimal solutions for sediment quantification. Sediment was attributed primarily to Mudstone, ≈38-46%; followed by the Mountain Range, ≈15-18%; Hill Surface, ≈12-16%; Hill Subsurface, ≈9-11%; Loess, ≈9-15%; Gravel Terrace, ≈0-4%; Channel Bank, ≈0-5%; and Limestone, ≈0%. Sediment source apportionment fits with the conceptual understanding of the catchment, which has recognized soft sedimentary mudstone to be highly susceptible to erosion. Inference of the processes responsible for sediment generation can be made for processes where there is a clear relationship with the geomorphology, but is problematic for processes which occur within multiple terrains.
NASA Astrophysics Data System (ADS)
Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji
2017-12-01
Numerous tsunami observation networks have recently been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on the extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
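The EOF-based seeding step might look roughly like the following sketch, which extracts spatial modes from a stand-in scenario ensemble by SVD and proposes the grid points at each mode's extrema; the mesh adaptive direct search pruning step is not shown.

```python
# Sketch of picking candidate observation points from EOF spatial modes: the
# leading modes of an ensemble of simulated tsunami fields are obtained by SVD,
# and grid points at the extrema of each mode become candidate sensor locations.
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, n_grid = 50, 500
ensemble = rng.normal(size=(n_scenarios, n_grid))     # stand-in scenario fields

anomaly = ensemble - ensemble.mean(axis=0)
_, _, vt = np.linalg.svd(anomaly, full_matrices=False)
n_modes = 5
candidates = set()
for mode in vt[:n_modes]:                 # one EOF per row
    candidates.add(int(np.argmax(mode)))  # positive extremum
    candidates.add(int(np.argmin(mode)))  # negative extremum
print(sorted(candidates))                 # initial observation points
```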
Hybrid Optimization Parallel Search PACKage
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
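As a rough illustration of the derivative-free search family that GSS belongs to, the sketch below implements a plain serial compass search on a toy quadratic; it omits the constraints, caching, and parallel evaluation that HOPSPACK provides, and is not the HOPSPACK code itself.

```python
# Minimal serial compass/pattern search: poll +/- coordinate directions, accept
# the first improvement, otherwise contract the step until convergence.
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # +/- coordinate directions
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                                # accept first improvement
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5                                # contract the pattern
            if step < tol:
                break
    return x, fx

print(compass_search(lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2, [5.0, 5.0]))
```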
α7 nicotinic ACh receptors as a ligand-gated source of Ca(2+) ions: the search for a Ca(2+) optimum.
Uteshev, Victor V
2012-01-01
The spatiotemporal distribution of cytosolic Ca(2+) ions is a key determinant of neuronal behavior and survival. Distinct sources of Ca(2+) ions, including ligand- and voltage-gated Ca(2+) channels, contribute to intracellular Ca(2+) homeostasis. Many normal physiological and therapeutic neuronal functions are Ca(2+)-dependent; however, an excess of cytosolic Ca(2+) or a lack of the appropriate balance between Ca(2+) entry and clearance may destroy cellular integrity and cause cellular death. Therefore, the existence of optimal spatiotemporal patterns of cytosolic Ca(2+) elevations, and thus optimal activation of ligand- and voltage-gated Ca(2+) ion channels, is postulated to benefit neuronal function and survival. Alpha7 nicotinic acetylcholine receptors (nAChRs) are highly permeable to Ca(2+) ions and play an important role in modulation of neurotransmitter release, gene expression and neuroprotection in a variety of neuronal and non-neuronal cells. In this review, the focus is placed on α7 nAChR-mediated currents and Ca(2+) influx and how this source of Ca(2+) entry compares to NMDA receptors in supporting cytosolic Ca(2+) homeostasis, neuronal function and survival.
Coupling control and optimization at the Canadian Light Source
NASA Astrophysics Data System (ADS)
Wurtz, W. A.
2018-06-01
We present a detailed study using the skew quadrupoles in the Canadian Light Source storage ring lattice to control the parameters of a coupled lattice. We calculate the six-dimensional beam envelope matrix and use it to produce a variety of objective functions for optimization using the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm. MOPSO produces a number of skew quadrupole configurations that we apply to the storage ring. We use the X-ray synchrotron radiation diagnostic beamline to image the beam and we make measurements of the vertical dispersion and beam lifetime. We observe satisfactory agreement between the measurements and simulations. These methods can be used to adjust phase space coupling in a rational way and have applications to fine-tuning the vertical emittance and Touschek lifetime and measuring the gas scattering lifetime.
Bagul, Mayuri B; Sonawane, Sachin K; Arya, Shalini S
2018-04-01
Tamarind seed is a source of valuable nutrients such as protein (containing high amounts of many essential amino acids), essential fatty acids, and minerals, which are recognized as additives for developing well-balanced functional foods. The objective of the present work was to optimize the process parameters for the extraction and hydrolysis of protein from tamarind seeds. Papain-derived hydrolysates showed a maximum degree of hydrolysis (39.49%) and radical scavenging activity (42.92 ± 2.83%) at optimized conditions, namely an enzyme-to-substrate ratio of 1:5, hydrolysis time of 3 h, hydrolysis temperature of 65 °C, and pH 6. From this study, papain hydrolysate can be considered a good source of natural antioxidants for developing food formulations.
Smart grid technologies in local electric grids
NASA Astrophysics Data System (ADS)
Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.
2017-08-01
The research is devoted to the creation of favorable conditions for the integration of renewable sources of energy into electric grids, which were designed to be supplied from centralized generation at large electric power stations. The development of distributed generation in electric grids influences the conditions of their operation, and a conflict of interests arises. The possibility of optimal joint operation of electric grids and renewable energy sources is shown, where the complex optimality criterion combines the balance reliability of electric energy in the local electric system with minimum losses of electric energy in it. A multilevel automated system for power flow control in electric grids by means of changing the distributed generation of power is developed. Optimization of power flows is performed by local systems of automatic control of small hydropower stations and, if possible, solar power plants.
NASA Astrophysics Data System (ADS)
Dahm, T.; Heimann, S.; Isken, M.; Vasyura-Bathke, H.; Kühn, D.; Sudhaus, H.; Kriegerowski, M.; Daout, S.; Steinberg, A.; Cesca, S.
2017-12-01
Seismic source and moment tensor waveform inversion is often ill-posed or non-unique if station coverage is poor or signals are weak. Therefore, the interpretation of moment tensors can become difficult if the full model space is not explored, including all its trade-offs and uncertainties. This is especially true for non-double couple components of weak or shallow earthquakes, as for instance found in volcanic, geothermal or mining environments. We developed a bootstrap-based probabilistic optimization scheme (Grond), which is based on pre-calculated Green's function full waveform databases (e.g. fomosto tool, doi.org/10.5880/GFZ.2.1.2017.001). Grond is able to efficiently explore the full model space, the trade-offs and the uncertainties of source parameters. The program is highly flexible with respect to adaptation to specific problems, the design of objective functions, and the diversity of empirical datasets. It uses an integrated, robust waveform data processing based on a newly developed Python toolbox for seismology (Pyrocko, see Heimann et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.001), and allows for visual inspection of many aspects of the optimization problem. Grond has been applied to the CMT moment tensor inversion using W-phases, to nuclear explosions in Korea, to meteorite atmospheric explosions, to volcano-tectonic events during caldera collapse and to intra-plate volcanic and tectonic crustal events. Grond can be used to simultaneously optimize seismological waveforms, amplitude spectra and static displacements of geodetic data such as InSAR and GPS (e.g. KITE, Isken et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.002). We present examples of Grond optimizations to demonstrate the advantage of a full exploration of source parameter uncertainties for interpretation.
SU-F-T-336: A Quick Auto-Planning (QAP) Method for Patient Intensity Modulated Radiotherapy (IMRT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, J; Zhang, Z; Wang, J
2016-06-15
Purpose: The aim of this study is to develop a quick auto-planning system that permits fast patient IMRT planning with conformal dose to the target, without manual field alignment and time-consuming dose distribution optimization. Methods: The planning target volumes (PTVs) of the source and the target patient were projected to the iso-center plane in certain beam's-eye-view directions to derive the 2D projected shapes. Assuming the target interior was isotropic, for each beam direction a boundary analysis in polar coordinates was performed to map the source shape boundary to the target shape boundary and derive the source-to-target shape mapping function. The derived shape mapping function was used to morph the source beam aperture to the target beam aperture over all segments in each beam direction. The target beam weights were re-calculated to deliver the same dose to the reference point (iso-center) as the source beam did in the source plan. The approach was tested on two rectum patients (one source patient and one target patient). Results: The IMRT planning time with QAP was 5 seconds on a laptop computer. The dose volume histograms and the dose distribution showed that the target patient had similar PTV dose coverage and OAR dose sparing to the source patient. Conclusion: The QAP system can instantly and automatically finish the IMRT planning without dose optimization.
Point to point multispectral light projection applied to cultural heritage
NASA Astrophysics Data System (ADS)
Vázquez, D.; Alvarez, A.; Canabal, H.; Garcia, A.; Mayorga, S.; Muro, C.; Galan, T.
2017-09-01
The use of new light sources based on LED technology should allow the development of systems that combine conservation and exhibition requirements and make these art goods available to the next generations according to sustainability principles. The goal of this work is to develop light systems and sources with an optimized spectral distribution for each specific point of the art piece. This optimization process implies maximizing the color fidelity of the reproduction and at the same time minimizing the photochemical damage. Perceived color under these sources will be similar (metameric) to the technical requirements given by the restoration team in charge of the conservation and exhibition of the goods of art. Depending on the fragility of the exposed art objects (i.e. the spectral responsivity of the material), the irradiance must be kept under a critical level. Therefore, it is necessary to develop a mathematical model that simulates with enough accuracy both the visual effect of the illumination and the photochemical impact of the radiation. The mathematical model is based on a merit function that optimizes the individual intensities of the LED light sources, taking into account the damage function of the material and the color space coordinates. Moreover, the algorithm uses weights for damage and color fidelity in order to adapt the model to a specific museum application. In this work we show a sample of this technology applied to a picture by Sorolla (1863-1923), an important Spanish painter, titled "Woman Walking at the Beach".
DUV light source availability improvement via further enhancement of gas management technologies
NASA Astrophysics Data System (ADS)
Riggs, Daniel J.; O'Brien, Kevin; Brown, Daniel J. W.
2011-04-01
The continuous evolution of the semiconductor market necessitates ever-increasing improvements in DUV light source uptime as defined in the SEMI E10 standard. Cymer is developing technologies to exceed current and projected light source availability requirements via significant reduction in light source downtime. As an example, consider discharge chamber gas management functions, which comprise a sizable portion of DUV light source downtime. Cymer's recent introduction of Gas Lifetime Extension (GLX™) as a productivity improvement technology for its DUV lithography light sources has demonstrated noteworthy reduction in downtime. This has been achieved by reducing the frequency of full gas replenishment events from once per 100 million pulses to as low as once per 2 billion pulses. Cymer has continued to develop relevant technologies that target further reduction in downtime associated with light source gas management functions. Cymer's current subject is the development of technologies to reduce downtime associated with gas state optimization (e.g. total chamber gas pressure) and gas life duration. Current gas state optimization involves execution of a manual procedure at regular intervals throughout the lifetime of light source core components. Cymer aims to introduce a product enhancement - iGLX™ - that eliminates the need for the manual procedure and, further, achieves 4 billion pulse gas lives. Projections of uptime on DUV light sources indicate that downtime associated with gas management will be reduced by 70% when compared with GLX2. In addition to reducing downtime, iGLX reduces DUV light source cost of operation by constraining gas usage. Usage of the fluorine-rich halogen gas mix has been reduced by 20% relative to GLX2.
Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.
Baranwal, Vipul K; Pandey, Ram K; Singh, Om P
2014-01-01
We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2, … and auxiliary functions H0(x), H1(x), H2(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, as well as the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that the literature contains mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
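The "single description" idea can be illustrated in Python with SymPy, as in the hedged sketch below: one symbolic expression yields both the value and the gradient at a point. This is not the authors' C++ library, and its interval-estimate functionality is not reproduced here.

```python
# One symbolic description of a benchmark yields both the function value and
# its gradient at a test point (interval estimates over a box are omitted).
import sympy as sp

x, y = sp.symbols("x y")
expr = 100 * (y - x**2) ** 2 + (1 - x) ** 2          # Rosenbrock benchmark
grad = [sp.diff(expr, v) for v in (x, y)]

f = sp.lambdify((x, y), expr, "math")
g = sp.lambdify((x, y), grad, "math")
print(f(1.2, 1.0), g(1.2, 1.0))   # value and gradient at a test point
```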
SPOTting model parameters using a ready-made Python package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected, and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice for a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparing different methods. We developed the SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules, to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable to analyze a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like the MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated with measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions and a priori parameter distributions within one platform-independent package.
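A hedged sketch of the sampler/objective pairing described above is given below. It is not the SPOT API, just a hand-rolled Latin hypercube sampler scoring a toy model against observations with the Nash-Sutcliffe efficiency; the toy model and all parameter ranges are assumptions.

```python
# Latin hypercube sampling of two model parameters, each sample scored with the
# Nash-Sutcliffe efficiency against synthetic observations.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def latin_hypercube(n, bounds, rng):
    dims = len(bounds)
    strata = rng.permuted(np.tile(np.arange(n), (dims, 1)), axis=1).T
    u = (strata + rng.random((n, dims))) / n          # one sample per stratum
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

def toy_model(params, t):
    a, b = params
    return a * np.exp(-b * t)                         # stand-in "hydrological" model

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 50)
obs = toy_model((2.0, 0.3), t) + rng.normal(0, 0.05, t.size)

samples = latin_hypercube(200, [(0.1, 5.0), (0.01, 1.0)], rng)
scores = [nash_sutcliffe(obs, toy_model(p, t)) for p in samples]
print(samples[int(np.argmax(scores))])                # best parameter set found
```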
Siauve, N; Nicolas, L; Vollaire, C; Marchal, C
2004-12-01
This article describes an optimization process specially designed for local and regional hyperthermia in order to achieve the desired specific absorption rate in the patient. It is based on a genetic algorithm coupled to a finite element formulation. The optimization method is applied to real human organ meshes assembled from computerized tomography scans. A 3D finite element formulation is used to calculate the electromagnetic field in the patient, generated by radiofrequency or microwave sources. Space discretization is performed using incomplete first-order edge elements. The sparse complex symmetric matrix equation is solved using a conjugate gradient solver with potential projection pre-conditioning. The formulation is validated by comparison of calculated specific absorption rate distributions in a phantom to temperature measurements. A genetic algorithm is used to optimize the specific absorption rate distribution and to predict the phases and amplitudes of the sources leading to the best focalization. The objective function is defined as the ratio of the specific absorption rate in the tumour to that in healthy tissues. Several constraints, regarding the specific absorption rate in the tumour and the total power in the patient, may be prescribed. Results obtained with two types of applicators (waveguides and annular phased array) are presented and show the capabilities of the developed optimization process.
Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA
2003-12-23
A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with time-varying flux distribution as a function of time. Preferably, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source with the source operated at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
Evaluation of DICOM viewer software for workflow integration in clinical trials
NASA Astrophysics Data System (ADS)
Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.
2015-03-01
The digital imaging and communications in medicine (DICOM) protocol is nowadays the leading standard for capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, in contrast to patient care in hospitals, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to missing integration, even just the visualization of patients' image data in electronic case report forms (eCRFs) is impossible. Four increasing levels for integration of DICOM components into EDCS are conceivable, raising functionality but also demands on interfaces with each level. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Concerning general, integration, and viewing requirements, the survey involves the criteria (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis. Covering most use cases, an EDCS and PACS interconnection with Weasis is suggested.
Parametric investigations of plasma characteristics in a remote inductively coupled plasma system
NASA Astrophysics Data System (ADS)
Shukla, Prasoon; Roy, Abhra; Jain, Kunal; Bhoj, Ananth
2016-09-01
Designing a remote plasma system involves source chamber sizing, selection of coils and/or electrodes to power the plasma, designing the downstream tubes, selection of materials used in the source and downstream regions, locations of inlets and outlets, and finally optimizing the process parameter space of pressure, gas flow rates and power delivery. Simulations can aid in spatial and temporal plasma characterization in what are often inaccessible locations for experimental probes in the source chamber. In this paper, we report on simulations of a remote inductively coupled argon plasma system using the modeling platform CFD-ACE+. The coupled multiphysics model description successfully addresses flow, chemistry, electromagnetics, heat transfer and plasma transport in the remote plasma system. The SimManager tool enables easy setup of parametric simulations to investigate the effect of varying the pressure, power, frequency, flow rates and downstream tube lengths. It can also enable the automatic solution of the varied parameters to optimize a user-defined objective function, which may be the integral ion and radical fluxes at the wafer. The fast run time coupled with the parametric and optimization capabilities can add significant insight and value in design and optimization.
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
Adaptation, Growth, and Resilience in Biological Distribution Networks
NASA Astrophysics Data System (ADS)
Ronellenfitsch, Henrik; Katifori, Eleni
Highly optimized complex transport networks serve crucial functions in many man-made and natural systems such as power grids and plant or animal vasculature. Often, the relevant optimization functional is nonconvex and characterized by many local extrema. In general, finding the global, or nearly global optimum is difficult. In biological systems, it is believed that such an optimal state is slowly achieved through natural selection. However, general coarse grained models for flow networks with local positive feedback rules for the vessel conductivity typically get trapped in low efficiency, local minima. We show how the growth of the underlying tissue, coupled to the dynamical equations for network development, can drive the system to a dramatically improved optimal state. This general model provides a surprisingly simple explanation for the appearance of highly optimized transport networks in biology such as plant and animal vasculature. In addition, we show how the incorporation of spatially collective fluctuating sources yields a minimal model of realistic reticulation in distribution networks and thus resilience against damage.
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and the search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the Neyman–Pearson lemma, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
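The flavor of the Neyman–Pearson approach can be sketched as follows: for a known background rate B and template counts F·P per pixel, the data-dependent part of the Poisson log-likelihood ratio reduces to a cross-correlation of the counts with ln(1 + F·P/B). The sketch below uses synthetic data and this generic form; consult the paper for the exact statistic, normalization, and thresholds.

```python
# Poisson log-likelihood-ratio detection of a known PSF-shaped template:
# cross-correlate the count image with ln(1 + F*P/B) and look for peaks.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
B, F = 0.5, 30.0                                 # background rate, source counts
xx, yy = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()                                 # normalized PSF template

rate = np.full((128, 128), B)
rate[64 - 7:64 + 8, 64 - 7:64 + 8] += F * psf    # inject one faint source
image = rng.poisson(rate)

kernel = np.log1p(F * psf / B)                   # ln(1 + F*P/B)
score = fftconvolve(image, kernel[::-1, ::-1], mode="same")
print(np.unravel_index(np.argmax(score), score.shape))   # peaks near (64, 64)
```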
Research on illumination uniformity of high-power LED array light source
NASA Astrophysics Data System (ADS)
Yu, Xiaolong; Wei, Xueye; Zhang, Ou; Zhang, Xinwei
2018-06-01
Uniform illumination is one of the most important problems that must be solved in the application of high-power LED arrays. A numerical optimization algorithm is applied to obtain the best LED array arrangement so that the light intensity on the target surface is evenly distributed. An evaluation function is set up based on the standard deviation of the illuminance distribution, and the particle swarm optimization algorithm is then utilized to optimize different arrays. Furthermore, the light intensity distribution is obtained by an optical ray tracing method. Finally, a hybrid array is designed and the optical ray tracing method is applied to simulate the array. The simulation results, which are consistent with the traditional theoretical calculation, show that the algorithm introduced in this paper is reasonable and effective.
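A toy version of the evaluation-function-plus-PSO loop might look like the sketch below, which optimizes a single spacing parameter of a 1-D LED row so that the relative standard deviation of illuminance on a target line is minimized. The Lambertian exponent, geometry, and PSO constants are assumptions, not values from the paper.

```python
# Toy PSO over one variable (LED spacing); fitness = std(E)/mean(E) on a target line.
import numpy as np

rng = np.random.default_rng(7)
n_led, m, h = 6, 1.0, 1.0                         # LEDs, Lambertian order, height
targets = np.linspace(-1.0, 1.0, 81)              # points on the target line

def illuminance_cv(spacing):
    xs = (np.arange(n_led) - (n_led - 1) / 2) * spacing   # symmetric LED row
    dx = targets[:, None] - xs[None, :]
    r2 = dx**2 + h**2
    cos_t = h / np.sqrt(r2)
    E = np.sum(cos_t**m * cos_t / r2, axis=1)      # cos^m emission, cos incidence, 1/r^2
    return np.std(E) / np.mean(E)                  # relative non-uniformity

n_part, iters = 20, 60                             # plain global-best PSO
pos = rng.uniform(0.05, 1.0, n_part)
vel = np.zeros(n_part)
pbest, pbest_val = pos.copy(), np.array([illuminance_cv(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]
for _ in range(iters):
    r1, r2_ = rng.random(n_part), rng.random(n_part)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2_ * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 1.0)
    vals = np.array([illuminance_cv(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]
print(gbest, illuminance_cv(gbest))
```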
A Requirements-Driven Optimization Method for Acoustic Liners Using Analytic Derivatives
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Lopes, Leonard V.
2017-01-01
More than ever, there is flexibility and freedom in acoustic liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. In a previous paper on this subject, a method deriving the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground was described. A simple code-wrapping approach was used to evaluate a community noise objective function for an external optimizer. Gradients were evaluated using a finite difference formula. The subject of this paper is an application of analytic derivatives that supply precise gradients to an optimization process. Analytic derivatives improve the efficiency and accuracy of gradient-based optimization methods and allow consideration of more design variables. In addition, the benefit of variable impedance liners is explored using a multi-objective optimization.
Optimal Magnetic Sensor Vests for Cardiac Source Imaging
Lau, Stephan; Petković, Bojana; Haueisen, Jens
2016-01-01
Magnetocardiography (MCG) non-invasively provides functional information about the heart. New room-temperature magnetic field sensors, specifically magnetoresistive and optically pumped magnetometers, have reached sensitivities in the ultra-low range of cardiac fields while allowing for free placement around the human torso. Our aim is to optimize positions and orientations of such magnetic sensors in a vest-like arrangement for robust reconstruction of the electric current distributions in the heart. We optimized a set of 32 sensors on the surface of a torso model with respect to a 13-dipole cardiac source model under noise-free conditions. The reconstruction robustness was estimated by the condition of the lead field matrix. Optimization improved the condition of the lead field matrix by approximately two orders of magnitude compared to a regular array at the front of the torso. Optimized setups exhibited distributions of sensors over the whole torso with denser sampling above the heart at the front and back of the torso. Sensors close to the heart were arranged predominantly tangential to the body surface. The optimized sensor setup could facilitate the definition of a standard for sensor placement in MCG and the development of a wearable MCG vest for clinical diagnostics. PMID:27231910
Constructing graph models for software system development and analysis
NASA Astrophysics Data System (ADS)
Pogrebnoy, Andrey V.
2017-01-01
We propose a concept for creating instrumentation to capture the rationale of functional and structural decisions during software system (SS) development. We propose to develop the SS simultaneously on two models - functional (FM) and structural (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for the adequate representation of the SS source code. We propose three levels of GM detailing: GM1 - for visual analysis of the source code and for SS version control, GM2 - for resource optimization and analysis of connections between SS components, GM3 - for analysis of SS functioning in dynamics. The paper includes examples of constructing all levels of the GM.
Neutron spectroscopy with scintillation detectors using wavelets
NASA Astrophysics Data System (ADS)
Hartman, Jessica
The purpose of this research was to study neutron spectroscopy using the EJ-299-33A plastic scintillator. This scintillator material provided a novel means of detection for fast neutrons, without the disadvantages of traditional liquid scintillation materials. EJ-299-33A provided a more durable alternative to these materials, making it less likely to be damaged during handling. Unlike liquid scintillators, this plastic scintillator was manufactured from a non-toxic material, making it safer to use as well as easier to design detectors around. The material was also manufactured with inherent pulse shape discrimination abilities, making it suitable for use in neutron detection. The neutron spectral unfolding technique was developed in two stages. Initial detector response function modeling was carried out through the use of the MCNPX Monte Carlo code. The response functions were developed for a monoenergetic neutron flux. Wavelets were then applied to smooth the response function. The spectral unfolding technique was applied through polynomial fitting and optimization techniques in MATLAB. Verification of the unfolding technique was carried out through the use of experimentally determined response functions. These were measured on the neutron source based on the Van de Graaff accelerator at the University of Kentucky. This machine provided a range of monoenergetic neutron beams between 0.1 MeV and 24 MeV, making it possible to measure the set of response functions of the EJ-299-33A plastic scintillator detector to neutrons of specific energies. The response of a plutonium-beryllium (PuBe) source was measured using the source available at the University of Nevada, Las Vegas. The neutron spectrum reconstruction was carried out using the experimentally measured response functions. Experimental data was collected in the list mode of the waveform digitizer. Post-processing of this data focused on the pulse shape discrimination analysis of the recorded response functions to remove the effects of photons and allow for source characterization based solely on the neutron response. The unfolding technique was performed through polynomial fitting and optimization techniques in MATLAB, and provided an energy spectrum for the PuBe source.
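The unfolding step can be illustrated with a hedged sketch: given a stand-in detector response matrix and a measured pulse-height distribution, a non-negative least-squares fit recovers the incident spectrum. This is a generic response-matrix unfolding, not the polynomial-fitting approach used in the thesis, and the real EJ-299-33A response functions, wavelet smoothing, and PSD steps are not reproduced.

```python
# Toy spectral unfolding: measured = R @ spectrum, solved with non-negative
# least squares. R's columns play the role of monoenergetic response functions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_channels, n_energies = 64, 16
ch = np.arange(n_channels)[:, None]
edges = np.linspace(5, 60, n_energies)[None, :]
R = np.exp(-0.5 * ((ch - edges) / 6.0) ** 2)           # stand-in response matrix

true_spec = np.exp(-np.linspace(0, 3, n_energies))     # falling incident spectrum
measured = R @ true_spec + rng.normal(0, 0.01, n_channels)

unfolded, _ = nnls(R, measured)
print(np.round(unfolded, 3))
```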
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
To address the issue that existing fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes it with GSDA to obtain a higher-resolution RS image. The main points of the text are summarized as follows. • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion. • The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules. • The text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Optimizing Irrigation Water Allocation under Multiple Sources of Uncertainty in an Arid River Basin
NASA Astrophysics Data System (ADS)
Wei, Y.; Tang, D.; Gao, H.; Ding, Y.
2015-12-01
Population growth and climate change add pressures affecting water resources management strategies for meeting demands from different economic sectors. This is especially challenging in arid regions where fresh water is limited. For instance, in the Tailan River Basin (Xinjiang, China), a compromise must be made between water suppliers and users during drought years. This study presents a multi-objective irrigation water allocation model to cope with water scarcity in arid river basins. To deal with uncertainties from multiple sources in the water allocation system (e.g., variations in the available water amount, crop yield, crop prices, and water price), the model employs an interval linear programming approach. The multi-objective optimization model developed in this study is characterized by integrating ecosystem service theory into water-saving measures. For evaluation purposes, the model is used to construct an optimal allocation system for irrigation areas fed by the Tailan River (Xinjiang Province, China). The objective functions to be optimized are formulated based on these irrigation areas' economic, social, and ecological benefits. The optimal irrigation water allocation plans are made under different hydroclimate conditions (wet year, normal year, and dry year), with multiple sources of uncertainty represented. The modeling tool and results are valuable for advising decision making by the local water authority and the agricultural community, especially on measures for coping with water scarcity by incorporating uncertain factors associated with crop production planning.
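As a minimal illustration of the interval linear programming idea (not the paper's full multi-objective model), one can bound the optimal benefit by solving the pessimistic and optimistic sub-problems with SciPy's linprog; the benefit coefficients, water budget and demand caps below are hypothetical.

```python
# Illustrative sketch: handling interval uncertainty in a small water-allocation LP by
# solving the optimistic and pessimistic bound problems (parameter values are assumed).
import numpy as np
from scipy.optimize import linprog

benefit_lo = np.array([0.8, 1.1, 0.9])     # lower bound of per-unit benefit for 3 areas
benefit_hi = np.array([1.2, 1.5, 1.3])     # upper bound of per-unit benefit
water_lo, water_hi = 60.0, 100.0           # interval of available water (dry vs. wet year)
demand_max = np.array([40.0, 50.0, 30.0])  # per-area demand caps

def solve(benefit, water_available):
    """Maximize total benefit subject to the water budget (linprog minimizes, so negate)."""
    res = linprog(c=-benefit,
                  A_ub=np.ones((1, 3)), b_ub=[water_available],
                  bounds=[(0.0, d) for d in demand_max])
    return res.x, -res.fun

alloc_pess, benefit_pess = solve(benefit_lo, water_lo)   # pessimistic sub-problem
alloc_opt,  benefit_opt  = solve(benefit_hi, water_hi)   # optimistic sub-problem
print("benefit interval:", (round(benefit_pess, 1), round(benefit_opt, 1)))
```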
Automatic control of a negative ion source
NASA Astrophysics Data System (ADS)
Saadatmand, K.; Sredniawski, J.; Solensten, L.
1989-04-01
A CAMAC-based control architecture has been devised for a Berkeley-type H⁻ volume ion source [1]. The architecture employs three 80386 PCs. One PC is dedicated to control and monitoring of source operation. Another PC works with digitizers to provide data acquisition of waveforms. The third PC is used for off-line analysis. Initially, operation of the source was put under remote (supervisory) computer control. This was followed by development of an automated startup procedure. Finally, a study of the physics of operation is now underway to establish a database from which automatic beam optimization can be derived.
Fitting and Modeling in the ASC Data Analysis Environment
NASA Astrophysics Data System (ADS)
Doe, S.; Siemiginowska, A.; Joye, W.; McDowell, J.
As part of the AXAF Science Center (ASC) Data Analysis Environment, we will provide a Fitting Application to the astronomical community. We present a design of the application in this paper. Our design goal is to give the user the flexibility to use a variety of optimization techniques (Levenberg-Marquardt, maximum entropy, Monte Carlo, Powell, downhill simplex, CERN-Minuit, and simulated annealing) and fit statistics (chi-squared, Cash, variance, and maximum likelihood); our modular design allows the user to easily add their own optimization techniques and/or fit statistics. We also present a comparison of the optimization techniques to be provided by the Application. The high spatial and spectral resolutions that will be obtained with AXAF instruments require a sophisticated data modeling capability. We will provide not only a suite of astronomical spatial and spectral source models, but also the capability of combining these models into source models of up to four data dimensions (i.e., into source functions f(E,x,y,t)). We will also provide tools to create instrument response models appropriate for each observation.
Illumination system development using design and analysis of computer experiments
NASA Astrophysics Data System (ADS)
Keresztes, Janos C.; De Ketelaere, Bart; Audenaert, Jan; Koshel, R. J.; Saeys, Wouter
2015-09-01
Computer-assisted optimal illumination design is crucial when developing cost-effective machine vision systems. Standard local optimization methods, such as downhill simplex optimization (DHSO), often converge to a local minimum and yield a solution that depends on the starting point, especially when dealing with high-dimensional illumination designs or nonlinear merit spaces. This work presents a novel nonlinear optimization approach based on design and analysis of computer experiments (DACE). The methodology is first illustrated with a 2D case study of four light sources symmetrically positioned along a fixed arc in order to obtain optimal irradiance uniformity on a flat Lambertian reflecting target at the arc center. The first step consists of choosing angular positions with no overlap between sources using a fast, flexible space-filling design. Ray-tracing simulations are then performed at the design points, and a merit function is used for each configuration to quantify the homogeneity of the irradiance at the target. The homogeneities obtained at the design points are used as input to a Gaussian process (GP), which provides a preliminary model of the expected merit space. Global optimization is then performed on the GP, which is more likely to yield optimal parameters. Next, the light positioning case study is further investigated by varying the radius of the arc and by adding two sources symmetrically positioned along an arc diametrically opposed to the first one. In terms of convergence, DACE was six times faster than the standard simplex method at an equal uniformity of 97%. The results were successfully validated experimentally, with 10% relative error, using a short-wavelength infrared (SWIR) hyperspectral imager monitoring a Spectralon panel illuminated by tungsten halogen sources.
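A minimal sketch of the DACE workflow described above, assuming a toy one-dimensional merit function in place of the ray-tracing simulations and scikit-learn's Gaussian process regressor as the surrogate:

```python
# DACE-style sketch under stated assumptions: a toy 1-D merit stands in for the
# ray-tracing merit; the real study used a space-filling design over source positions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def merit(theta):
    """Placeholder non-uniformity merit of one design variable (lower is better)."""
    return (np.sin(3 * theta) + 0.5 * (theta - 1.0) ** 2).ravel()

X = np.linspace(0.0, 3.0, 8).reshape(-1, 1)   # design points ("space-filling" design)
y = merit(X)                                  # "expensive" evaluations (ray tracing in the paper)

# Fit a Gaussian-process surrogate to the design points.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X, y)

# Global search on the cheap surrogate instead of the expensive simulator.
grid = np.linspace(0.0, 3.0, 2001).reshape(-1, 1)
mean = gp.predict(grid)
best = grid[np.argmin(mean)]
print("surrogate optimum near theta =", float(best))
```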
Functional near infrared spectroscopy for awake monkey to accelerate neurorehabilitation study
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Higo, Noriyuki; Kato, Junpei; Matsuda, Keiji; Yamada, Toru
2017-02-01
Functional near-infrared spectroscopy (fNIRS) is suitable for measuring brain functions during neurorehabilitation because of its portability and limited motion restriction. However, it is not known whether neural reconstruction can be observed through changes in cerebral hemodynamics. In this study, we modified an fNIRS system for measuring the motor function of awake monkeys to study cerebral hemodynamics during neurorehabilitation. A computer simulation was performed to determine the optimal fNIRS source-detector interval for the monkey motor cortex. Accurate digital phantoms were constructed based on anatomical magnetic resonance images. Light propagation based on the diffusion equation was numerically calculated using the finite element method. The source-detector pair was placed on the scalp above the primary motor cortex. Four different interval values (10, 15, 20, 25 mm) were examined. The results showed that the detected intensity decreased and the partial optical path length in gray matter increased with an increase in the source-detector interval. We found that 15 mm is the optimal interval for fNIRS measurement of the monkey motor cortex. A preliminary measurement was performed on a healthy female macaque monkey using the fNIRS equipment with custom-made optodes and an optode holder. The optodes were attached above the bilateral primary motor cortices. Under the awake condition, 10 to 20 trials of alternating single-sided hand movements, each lasting several seconds with intervals of 10 to 30 s, were performed. Increases and decreases in oxy- and deoxyhemoglobin concentrations were observed in a localized area in the hemisphere contralateral to the moved forelimb.
NASA Astrophysics Data System (ADS)
Zhang, Bao-Ji; Zhang, Zhu-Xin
2015-09-01
To obtain a low-resistance, high-efficiency, energy-saving ship, a minimum-total-resistance hull form design method is studied based on the potential flow theory of wave-making resistance, while also considering the effects of stern viscous separation. With the sum of wave resistance and viscous resistance as the objective function and the parameters of a B-spline function as design variables, mathematical models are built using the Nonlinear Programming Method (NLP), ensuring the basic displacement limit and accounting for stern viscous separation. We developed ship lines optimization procedures with intellectual property rights. Series60 is used as the parent ship in the optimization design to obtain a theoretically improved ship (Series60-1). Drag tests of the improved ship (Series60-1) are then performed to obtain the actual minimum-total-resistance hull form.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as the optimization search tool and an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multi-point sources and multiple variables, there are some errors in the computed results because many possible combinations of pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to guide emergency responses.
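A hedged sketch of the identification idea: an analytic 1-D advection-dispersion-decay solution builds the objective function, and SciPy's differential evolution stands in for the paper's basic genetic algorithm; the hydraulic parameters, sensor location and "observed" data are all synthetic.

```python
# Sketch only: fit (mass, location) of an instantaneous point source to observed
# concentrations; all parameter values below are assumptions for illustration.
import numpy as np
from scipy.optimize import differential_evolution

u, D, k, A = 0.5, 10.0, 1e-5, 50.0      # velocity (m/s), dispersion (m^2/s), decay (1/s), area (m^2)

def conc(M, x0, x, t):
    """Analytic 1-D solution for an instantaneous mass M released at x0."""
    return (M / (A * np.sqrt(4 * np.pi * D * t))
            * np.exp(-((x - x0) - u * t) ** 2 / (4 * D * t)) * np.exp(-k * t))

# Synthetic "observations" from a true source (M = 2000 kg at x0 = 300 m), sensed at x = 1000 m.
t_obs = np.linspace(600.0, 3600.0, 20)
c_obs = conc(2000.0, 300.0, 1000.0, t_obs)

def misfit(params):
    M, x0 = params
    return np.sum((conc(M, x0, 1000.0, t_obs) - c_obs) ** 2)

res = differential_evolution(misfit, bounds=[(100.0, 5000.0), (0.0, 800.0)], seed=1)
print("recovered mass and location:", res.x)
```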
Chen, Yanxi; Niu, Zhiguang; Zhang, Hongwei
2013-06-01
Landscape lakes in cities suffer a high eutrophication risk because of their special characteristics and functions in the water circulation system. Using HMLA, a landscape lake located in Tianjin City, North China, with a mixture of point source (PS) and non-point source (NPS) pollution, we explored a methodology combining Fluent and AQUATOX to simulate and predict the state of HMLA, and a trophic index was used to assess the eutrophication state. We then used water compensation optimization and three scenarios to determine the optimal management methodology. The three scenarios are an ecological restoration scenario, a best management practices (BMPs) scenario, and a scenario combining both. Our results suggest that maintaining a healthy ecosystem with ecoremediation is necessary and that BMPs have a far-reaching effect on water reuse and NPS pollution control. This study has implications for eutrophication control and management under ongoing urbanization in China.
Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process
NASA Astrophysics Data System (ADS)
Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.
2015-08-01
An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. This methodology utilizes (1) a casting process model developed within the commercial finite element package ABAQUS™ (ABAQUS is a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to improvements in process productivity and product quality.
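The coupling of a process model with SciPy's optimizer can be sketched as follows; the cooling-rate "model" below is a toy placeholder for the ABAQUS simulation plus Python results extraction, and the constraint merely mimics the directional-solidification requirement.

```python
# Schematic sketch only, not the paper's implementation: a toy surrogate maps cooling
# timings to node cooling rates, and SciPy targets a specific cooling rate.
import numpy as np
from scipy.optimize import minimize

target_rate = 2.0   # target solidification cooling rate (assumed units: K/s)

def cooling_rates(timings):
    """Placeholder for 'run casting model, extract cooling rates at 3 locations'."""
    t = np.asarray(timings)
    return np.array([3.0 - 0.02 * t[0], 2.5 - 0.015 * t[1], 2.2 - 0.01 * t[2]])

def objective(timings):
    # Penalize deviation from the target cooling rate (the paper also supports maximization).
    return np.sum((cooling_rates(timings) - target_rate) ** 2)

# Directional-solidification proxy: rim cools no slower than hub, hub no slower than spoke.
cons = [{"type": "ineq", "fun": lambda t: cooling_rates(t)[0] - cooling_rates(t)[1]},
        {"type": "ineq", "fun": lambda t: cooling_rates(t)[1] - cooling_rates(t)[2]}]

res = minimize(objective, x0=[30.0, 30.0, 30.0], bounds=[(0.0, 120.0)] * 3,
               constraints=cons, method="SLSQP")
print("optimized cooling timings:", res.x)
```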
Nowak, Krzysztof M; Kurosawa, Yoshiaki; Suganuma, Takashi; Kawasuji, Yasufumi; Nakarai, Hiroaki; Saito, Takashi; Fujimoto, Junichi; Mizoguchi, Hakaru
2016-07-01
One of the unique features of the quantum-cascade-laser-seeded, nanosecond-pulse CO2 laser, invented for the generation of extreme UV by laser-produced plasma, is the robust synthesis of arbitrary pulse waveforms. In the present Letter we report experimental results that are, to the best of our knowledge, the first demonstration of such functionality obtainable from nanosecond-pulse CO2 laser technology. Online pulse duration adjustment within 10-40 ns was demonstrated, and a few exemplary pulse waveforms were synthesized, such as "tophat," "tailspike," and "leadspike" shapes. Such output characteristics may be useful for optimizing the performance of an LPP EUV source.
Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Askan, A.; /Carnegie Mellon U.; Akcelik, V.
2009-04-30
We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill posedness and multiple minima. To overcome ill posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.
NASA Astrophysics Data System (ADS)
Rantz, Robert; Roundy, Shad
2016-04-01
A tremendous amount of research has been performed on the design and analysis of vibration energy harvester architectures with the goal of optimizing power output; most studies assume idealized input vibrations without paying much attention to whether such idealizations are broadly representative of real sources. These "idealized input signals" are typically derived from the expected nature of the vibrations produced from a given source. Little work has been done on corroborating these expectations by virtue of compiling a comprehensive list of vibration signals organized by detailed classifications. Vibration data representing 333 signals were collected from the NiPS Laboratory "Real Vibration" database, processed, and categorized according to the source of the signal (e.g. animal, machine, etc.), the number of dominant frequencies, the nature of the dominant frequencies (e.g. stationary, band-limited noise, etc.), and other metrics. By categorizing signals in this way, the set of idealized vibration inputs commonly assumed for harvester input can be corroborated and refined, and heretofore overlooked vibration input types have motivation for investigation. An initial qualitative analysis of vibration signals has been undertaken with the goal of determining how often a standard linear oscillator based harvester is likely the optimal architecture, and how often a nonlinear harvester with a cubic stiffness function might provide improvement. Although preliminary, the analysis indicates that in at least 23% of cases, a linear harvester is likely optimal and in no more than 53% of cases would a nonlinear cubic stiffness based harvester provide improvement.
SPOTting Model Parameters Using a Ready-Made Python Package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz
2017-04-01
The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterization of the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine (where we searched for parameters of the van Genuchten-Mualem function), and calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with only a minimal amount of code, yielding maximal power of parameter optimization. They further show the benefit of having at hand one package that includes a number of well-performing parameter search methods, since not every case study can be solved satisfactorily with every algorithm or every objective function.
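The kind of calibration loop that such a package automates can be sketched as follows (this is deliberately not the SPOTPY interface): draw parameter sets from prior ranges, run the model, in this case the Rosenbrock function, score each run with an objective function, and keep the best set.

```python
# Generic sketch of a sampling-based calibration loop; SPOTPY wraps this pattern behind
# its own setup classes and algorithms, which are not reproduced here.
import numpy as np
from scipy.optimize import rosen

rng = np.random.default_rng(42)
lower, upper = np.array([-5.0, -5.0]), np.array([5.0, 5.0])   # uniform prior ranges

def objective(theta):
    # The Rosenbrock value itself plays the role of the objective (0 is the ideal "fit").
    return rosen(theta)

samples = rng.uniform(lower, upper, size=(5000, 2))   # plain Monte Carlo sampling
scores = np.array([objective(s) for s in samples])
best = samples[np.argmin(scores)]
print("best parameters:", best, "objective:", scores.min())
```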
Numerical convergence and validation of the DIMP inverse particle transport model
Nelson, Noel; Azmy, Yousry
2017-09-01
The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (using the adjoint transport solution) with measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor and sufficiently reducing the extent of the source search volume to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.
NASA Astrophysics Data System (ADS)
Hirsch, Piotr; Duzinkiewicz, Kazimierz; Grochowski, Michał
2017-11-01
District Heating (DH) systems are commonly supplied by local heat sources. Nowadays, modern insulation materials allow for effective and economically viable heat transportation over long distances (over 20 km). In this paper a method for the optimized selection of design and operating parameters of a long-distance Heat Transportation System (HTS) is proposed. The method allows evaluation of the feasibility and effectiveness of heat transportation from the considered heat sources. The optimized selection is formulated as a multicriteria decision-making problem. The constraints for this problem include a static HTS model, allowing consideration of the system life cycle, time variability and spatial topology. Thereby, variation of heat demand and ground temperature within the DH area, insulation and pipe aging, and/or the terrain elevation profile are taken into account in the decision-making process. The HTS construction costs, pumping power, and heat losses are considered as objective functions. The inner pipe diameter, insulation thickness, temperatures and pumping station locations are optimized during the decision-making process. Moreover, variants of pipe-laying, e.g. one pipeline with a larger diameter or two with smaller diameters, may be considered during the optimization. The analyzed optimization problem is multicriteria, hybrid and nonlinear. Because of these problem properties, a genetic solver was applied.
Multidimensional optimal droop control for wind resources in DC microgrids
NASA Astrophysics Data System (ADS)
Bunker, Kaitlyn J.
Two important and upcoming technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option without requiring communication between microgrid components. Eliminating the single source of potential failure around the communication system is especially important in remote, islanded microgrids, which are considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy, which implements a droop surface in higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage, and the wind speed at the current time. An approach for optimizing this droop control surface in order to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high dimension droop control method, and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases for changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example, containing an energy storage device as well as multiple sources and loads. Finally, the optimal high dimension droop control method is applied with a solar resource, and using a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
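A minimal sketch of the core idea, assuming a small tabulated droop surface and bilinear interpolation; the voltage and wind-speed grids and the power set-points below are illustrative, not the dissertation's optimized surface.

```python
# Illustrative sketch: a droop "surface" maps dc bus voltage and wind speed to a power
# set-point; all numbers are assumptions, and no optimization is performed here.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

v_bus = np.array([370.0, 380.0, 390.0, 400.0])   # dc bus voltage grid (V, assumed)
wind = np.array([4.0, 8.0, 12.0])                # wind speed grid (m/s, assumed)
# Power set-points (kW): more injection at low bus voltage and high wind (values illustrative).
p_set = np.array([[3.0, 8.0, 12.0],
                  [2.5, 7.0, 11.0],
                  [1.5, 5.0,  9.0],
                  [0.5, 2.0,  5.0]])

droop_surface = RegularGridInterpolator((v_bus, wind), p_set)

def power_command(v, w):
    """Evaluate the droop surface at the measured operating point."""
    return float(droop_surface([[v, w]])[0])

print("P* at 385 V, 10 m/s:", power_command(385.0, 10.0), "kW")
```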
Structured illumination diffuse optical tomography for noninvasive functional neuroimaging in mice.
Reisman, Matthew D; Markow, Zachary E; Bauer, Adam Q; Culver, Joseph P
2017-04-01
Optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to superficial cortical tissues. Diffuse optical tomography (DOT) techniques provide noninvasive imaging, but previous DOT systems for rodent neuroimaging have been limited either by sparse spatial sampling or by slow speed. Here, we develop a DOT system with asymmetric source-detector sampling that combines the high-density spatial sampling (0.4 mm) detection of a scientific complementary metal-oxide-semiconductor camera with the rapid (2 Hz) imaging of a few ([Formula: see text]) structured illumination (SI) patterns. Analysis techniques are developed to take advantage of the system's flexibility and optimize trade-offs among spatial sampling, imaging speed, and signal-to-noise ratio. An effective source-detector separation for the SI patterns was developed and compared with light intensity for a quantitative assessment of data quality. The light fall-off versus effective distance was also used for in situ empirical optimization of our light model. We demonstrated the feasibility of this technique by noninvasively mapping the functional response in the somatosensory cortex of the mouse following electrical stimulation of the forepaw.
Sizing a rainwater harvesting cistern by minimizing costs
NASA Astrophysics Data System (ADS)
Pelak, Norman; Porporato, Amilcare
2016-10-01
Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.
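A toy numerical version of the cost trade-off (the paper itself derives an analytically tractable optimum from a marked Poisson rainfall model); all parameter values and the external-supply fraction below are illustrative assumptions.

```python
# Sketch only: total cost = cistern cost + expected cost of externally supplied water,
# minimized numerically over the cistern volume. Not the paper's analytical formulation.
import numpy as np
from scipy.optimize import minimize_scalar

fixed_cost = 500.0        # fixed cost of installing any cistern (assumed, $)
cost_per_m3 = 300.0       # distributed cistern cost per unit volume (assumed, $/m^3)
water_price = 4.0         # price of externally supplied water (assumed, $/m^3)
demand_per_yr = 80.0      # annual non-potable demand (assumed, m^3/yr)
horizon = 20.0            # years of operation considered

def external_fraction(volume):
    """Fraction of demand still met externally; decays with cistern size (illustrative)."""
    return np.exp(-volume / 3.0)

def total_cost(volume):
    cistern = fixed_cost + cost_per_m3 * volume
    external = water_price * demand_per_yr * horizon * external_fraction(volume)
    return cistern + external

res = minimize_scalar(total_cost, bounds=(0.0, 20.0), method="bounded")
print("cost-minimizing cistern volume (m^3):", round(res.x, 2))
```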
Ma, Y T; Wubs, A M; Mathieu, A; Heuvelink, E; Zhu, J Y; Hu, B G; Cournède, P H; de Reffye, P
2011-04-01
Many indeterminate plants can have wide fluctuations in the pattern of fruit-set and harvest. Fruit-set in these types of plants depends largely on the balance between source (assimilate supply) and sink strength (assimilate demand) within the plant. This study aims to evaluate the ability of functional-structural plant models to simulate different fruit-set patterns among Capsicum cultivars through source-sink relationships. A greenhouse experiment with six Capsicum cultivars characterized by different fruit weight and fruit-set was conducted. Fruit-set patterns and potential fruit sink strength were determined through measurement. Source and sink strength of other organs were determined via the GREENLAB model, with a description of plant organ weight and dimensions according to plant topological structure, established from the measured data, as inputs. Parameter optimization was performed using a generalized least squares method for the entire growth cycle. Fruit sink strength differed among cultivars. Vegetative sink strength was generally lower for large-fruited cultivars than for small-fruited ones. The larger the fruit, the larger the variation in fruit-set and fruit yield. Large-fruited cultivars need a higher source-sink ratio for fruit-set, which means a higher demand for assimilates. Temporal heterogeneity of fruit-set affected both the number and yield of fruit. The simulation study showed that reduced heterogeneity of fruit-set could be obtained by different approaches: for example, increasing source strength; decreasing vegetative sink strength, the source-sink ratio required for fruit-set, or the flower appearance rate; and harvesting individual fruits earlier, before full ripeness. Simulation results showed that when source strength was increased or vegetative sink strength decreased, fruit-set and fruit weight increased. However, no significant differences were found between large-fruited and small-fruited groups of cultivars regarding the effects of source and vegetative sink strength on fruit-set and fruit weight. When the source-sink ratio required for fruit-set decreased, the larger number of fruit retained on the plant increased competition for assimilates with vegetative organs. Therefore, total plant and vegetative dry weights decreased, especially for large-fruited cultivars. The optimization study showed that the temporal heterogeneity of fruit-set and ripening was predicted to be reduced when fruits were harvested earlier. Furthermore, the number of extra fruit set increased by 20%.
NASA Astrophysics Data System (ADS)
Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.
2018-05-01
In this work, we present an inverse computational method for identifying the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of concentration measurements available within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method that requires integrating only as many backward adjoint equations as there are available measurement stations, which results in high numerical efficiency. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and has been tested successfully by performing a series of source inversion runs using data from 200 individual realizations of puff releases previously generated in a wind tunnel experiment.
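The matching step can be sketched as a grid search that maximizes the correlation between simulated and observed sensor signals; the puff model below is a 1-D placeholder, whereas the paper evaluates source-receptor functions with backward adjoint runs of the ADREA-HF CFD code.

```python
# Sketch of the correlation-maximization step only; the dispersion model and all
# parameter values are illustrative assumptions.
import numpy as np

def simulate(x_src, t0, t, x_sensor=100.0, u=2.0, D=5.0):
    """Placeholder 1-D puff response at a fixed sensor (not the CFD model)."""
    tau = np.clip(t - t0, 1e-3, None)                    # time since release
    sigma2 = 4.0 * D * tau
    return np.exp(-((x_sensor - x_src) - u * tau) ** 2 / sigma2) / np.sqrt(np.pi * sigma2)

t = np.linspace(0.0, 3600.0, 240)
observed = simulate(40.0, 600.0, t) + 1e-4 * np.random.default_rng(0).normal(size=t.size)

best, best_corr = None, -np.inf
for x_src in np.linspace(0.0, 90.0, 31):                 # candidate source locations (m)
    for t0 in np.linspace(0.0, 1800.0, 37):              # candidate release start times (s)
        r = np.corrcoef(simulate(x_src, t0, t), observed)[0, 1]
        if r > best_corr:
            best, best_corr = (x_src, t0), r
print("best (location, start time):", best, "correlation:", round(best_corr, 3))
```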
Data processing and optimization system to study prospective interstate power interconnections
NASA Astrophysics Data System (ADS)
Podkovalnikov, Sergei; Trofimov, Ivan; Trofimov, Leonid
2018-01-01
The paper presents a data processing and optimization system for studying and making rational decisions on the formation of interstate electric power interconnections, with the aim of increasing the effectiveness of their operation and expansion. The technologies for building and integrating the data processing and optimization system, including an object-oriented database and the predictive mathematical model ORIRES for optimizing the expansion of electric power systems, are described. The technology for collecting and pre-processing unstructured data gathered from various sources and loading it into the object-oriented database, as well as for processing and presenting the information in a GIS system, is also described. One approach to graphical visualization of the optimization model results is illustrated by the example of calculating an expansion option for the South Korean electric power grid.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed over discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of hybrid scalable/multiple-description coding for wireless transmission.
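Since the allocation is over discrete rate sets, it can be illustrated by a brute-force search; the rate-distortion curve, packet-loss model and bit budget below are placeholders, not the paper's codec or channel models.

```python
# Toy sketch of allocation over discrete (source rate, channel code rate) pairs under a
# total bit budget; all numbers are illustrative assumptions.
import itertools

source_rates = [64, 128, 256, 512]          # kbps available to the video encoder
channel_rates = [1/3, 1/2, 2/3, 4/5]        # RCPC code rates (lower rate = stronger code)
budget = 512                                # total channel bit budget (kbps)

def source_distortion(rs):
    return 1000.0 / rs                      # toy rate-distortion curve

def loss_probability(rc):
    return {1/3: 0.001, 1/2: 0.01, 2/3: 0.05, 4/5: 0.15}[rc]   # toy channel model

best, best_d = None, float("inf")
for rs, rc in itertools.product(source_rates, channel_rates):
    if rs / rc > budget:                    # channel bits = source bits / code rate
        continue
    p = loss_probability(rc)
    # Expected distortion: decode at rate rs if the packet survives, else fall back.
    d = (1 - p) * source_distortion(rs) + p * source_distortion(source_rates[0])
    if d < best_d:
        best, best_d = (rs, rc), d
print("best (source rate, code rate):", best, "expected distortion:", round(best_d, 2))
```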
Kim, Jeongnim; Baczewski, Andrew T.; Beaudet, Todd D.; ...
2018-04-19
QMCPACK is an open source quantum Monte Carlo package for ab-initio electronic structure calculations. It supports calculations of metallic and insulating solids, molecules, atoms, and some model Hamiltonians. Implemented real space quantum Monte Carlo algorithms include variational, diffusion, and reptation Monte Carlo. QMCPACK uses Slater-Jastrow type trial wave functions in conjunction with a sophisticated optimizer capable of optimizing tens of thousands of parameters. The orbital space auxiliary field quantum Monte Carlo method is also implemented, enabling cross validation between different highly accurate methods. The code is specifically optimized for calculations with large numbers of electrons on the latest high performance computing architectures, including multicore central processing unit (CPU) and graphical processing unit (GPU) systems. We detail the program's capabilities, outline its structure, and give examples of its use in current research calculations. The package is available at http://www.qmcpack.org.
A newly isolated and identified vitamin B12 producing strain: Sinorhizobium meliloti 320.
Dong, Huina; Li, Sha; Fang, Huan; Xia, Miaomiao; Zheng, Ping; Zhang, Dawei; Sun, Jibin
2016-10-01
Vitamin B12 (cobalamin, VB12) has several physiological functions and is widely used in the pharmaceutical and food industries. A new unicellular strain was isolated from Chinese farmland, and the VB12 it produces was identified by HPLC and HPLC-MS/MS. 16S rDNA analysis reveals that this strain belongs to the species Sinorhizobium meliloti, and we named it S. meliloti 320. Its whole-genome information indicates that this strain has a complete VB12 synthetic pathway, which paves the way for further metabolic engineering studies. The optimal carbon and nitrogen sources are sucrose and corn steep liquor (CSL) plus peptone, respectively. The optimal combination of sucrose and CSL was obtained by response surface methodology, as they are the most suitable carbon and nitrogen sources. This strain produced 140 ± 4.2 mg L(-1) of vitamin B12 after incubation for 7 days in the optimal medium.
NASA Astrophysics Data System (ADS)
Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.
2018-03-01
According to the radial operation characteristics of distribution systems, this paper proposes a new method for optimal capacitor switching based on minimum spanning trees. First, taking the minimal active power loss as the objective function and ignoring the capacity constraints of the capacitors and the source, the paper uses the Prim minimum spanning tree algorithm to determine the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. In order of this ranking, from high to low, the compensation capacity of each capacitor is calculated based on its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
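The first step, determining supply ranges with Prim's algorithm, can be sketched on a toy feeder graph as follows; the breadth-first ranking and capacity assignment described in the abstract are omitted, and the edge weights are illustrative.

```python
# Sketch of Prim's minimum spanning tree on a small, made-up distribution feeder graph.
import heapq

graph = {                       # adjacency list: node -> list of (neighbor, line weight)
    "source": [("n1", 1.0), ("n2", 2.5)],
    "n1": [("source", 1.0), ("n2", 1.2), ("cap1", 0.8)],
    "n2": [("source", 2.5), ("n1", 1.2), ("cap2", 1.1)],
    "cap1": [("n1", 0.8)],
    "cap2": [("n2", 1.1)],
}

def prim(graph, root):
    """Return the minimum spanning tree edges reachable from `root`."""
    visited, tree = {root}, []
    heap = [(w, root, v) for v, w in graph[root]]
    heapq.heapify(heap)
    while heap:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        for nxt, w2 in graph[v]:
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return tree

print(prim(graph, "source"))
```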
Two Functionally Distinct Sources of Actin Monomers Supply the Leading Edge of Lamellipodia
Vitriol, Eric A.; McMillen, Laura M.; Kapustina, Maryna; Gomez, Shawn M.; Vavylonis, Dimitrios; Zheng, James Q.
2015-01-01
Lamellipodia, the sheet-like protrusions of motile cells, consist of networks of actin filaments (F-actin) regulated by the ordered assembly from and disassembly into actin monomers (G-actin). Traditionally, G-actin is thought to exist as a homogeneous pool. Here, we show that there are two functionally and molecularly distinct sources of G-actin that supply lamellipodial actin networks. G-actin originating from the cytosolic pool requires the monomer-binding protein thymosin β4 (Tβ4) for optimal leading edge localization, is targeted to formins, and is responsible for creating an elevated G/F-actin ratio that promotes membrane protrusion. The second source of G-actin comes from recycled lamellipodia F-actin. Recycling occurs independently of Tβ4 and appears to regulate lamellipodia homeostasis. Tβ4-bound G-actin specifically localizes to the leading edge because it does not interact with Arp2/3-mediated polymerization sites found throughout the lamellipodia. These findings demonstrate that actin networks can be constructed from multiple sources of monomers with discrete spatiotemporal functions. PMID:25865895
Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology
Palaparthi, Anil; Riede, Tobias
2017-01-01
Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563
Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Dȩbski, Wojciech
2008-07-01
Many aspects of earthquake source dynamics, such as dynamic stress drop, rupture velocity and directivity, are currently inferred from source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function, so the statistical estimator of a posteriori errors can easily be obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of a mining-induced seismic event of magnitude M_L ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green's function technique to approximate Green's functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
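A minimal Metropolis sampler illustrates how posterior samples translate into uncertainty estimates for a source time function parameter; the triangular pulse and single "duration" parameter below are stand-ins for the paper's pseudo-spectral parameterization and empirical Green's functions.

```python
# Toy Metropolis MCMC sketch: sample the posterior of a pulse duration given noisy data.
import numpy as np

rng = np.random.default_rng(7)

def stf(duration, t):
    """Toy triangular source time function of unit moment."""
    return np.clip(1.0 - np.abs(2 * t / duration - 1.0), 0.0, None) * (2.0 / duration)

t = np.linspace(0.0, 2.0, 200)
data = stf(0.8, t) + 0.05 * rng.normal(size=t.size)     # synthetic "deconvolved" STF
sigma = 0.05

def log_post(duration):
    if not 0.1 < duration < 2.0:                        # physical prior: positive, bounded
        return -np.inf
    return -0.5 * np.sum((stf(duration, t) - data) ** 2) / sigma ** 2

samples, current, lp = [], 0.5, log_post(0.5)
for _ in range(20000):
    prop = current + 0.05 * rng.normal()                # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:            # Metropolis acceptance rule
        current, lp = prop, lp_prop
    samples.append(current)
samples = np.array(samples[5000:])                      # discard burn-in
print("duration estimate:", samples.mean(), "+/-", samples.std())
```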
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-02-01
Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition result in a loss of sensitivity. We argue that our method provides an increase of between a few and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method that is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
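The recipe, filter each frame with its own PSF and then sum with weights, can be sketched as follows; the inverse-variance weights used here are a simplification of the full Zackay & Ofek weighting (which also involves transparency and background terms), and the data are synthetic.

```python
# Simplified matched-filter coaddition sketch; weighting is assumed, not the paper's exact formula.
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_coadd(images, psfs, variances):
    """images, psfs: lists of 2-D arrays; variances: per-image background variance."""
    coadd = np.zeros_like(images[0], dtype=float)
    for img, psf, var in zip(images, psfs, variances):
        filtered = fftconvolve(img, psf[::-1, ::-1], mode="same")  # cross-correlate with PSF
        coadd += filtered / var                                    # weight by inverse variance
    return coadd

# Tiny synthetic demo: two noisy frames of one point source with different seeing.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[:64, :64]

def gaussian_psf(sigma):
    g = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

psfs = [gaussian_psf(1.5), gaussian_psf(3.0)]
images = [100 * p + rng.normal(0.0, 1.0, p.shape) for p in psfs]
stack = matched_filter_coadd(images, psfs, variances=[1.0, 1.0])
print("peak S/N pixel:", np.unravel_index(np.argmax(stack), stack.shape))
```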
The role of veterinary research laboratories in the provision of veterinary services.
Verwoerd, D W
1998-08-01
Veterinary research laboratories play an essential role in the provision of veterinary services in most countries. These laboratories are the source of new knowledge, innovative ideas and improved technology for the surveillance, prevention and control of animal diseases. In addition, many laboratories provide diagnostic and other services. To ensure the optimal integration of various veterinary activities, administrators must understand the functions and constraints of research laboratories. Therefore, a brief discussion is presented of the following: organisational structures; methods for developing research programmes; the outputs of research scientists and how these are measured; the management of quality assurance; and the funding of research. Optimal collaboration can only be attained by understanding the environment in which a research scientist functions and the motivational issues at stake.
Rigorous ILT optimization for advanced patterning and design-process co-optimization
NASA Astrophysics Data System (ADS)
Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming
2018-03-01
Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development for new technology node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications, each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in the demand for predictive technology tools which can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies which have each individually been widely used for process tuning, mask correction and process-design co-optimization. These technologies are rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been extensively used for process development/tuning, lithography tool user setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions/sensitivities, source optimization and basic lithography design rule exploration. ILT has been extensively used in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for several of these advanced applications, including correction for photoresist top loss and resist scumming hotspots.
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A
2013-01-01
A procedure to improve the convergence rate of affine registration methods for medical brain images when the images differ greatly from the template is presented. The methodology is based on histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. The sum of squared differences between the source images and the template is used as the objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate of the affine registration algorithm for brain images, as we show in this work using SPECT and PET brain images.
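The preprocessing step alone can be sketched by quantile-based histogram matching with NumPy; the images below are synthetic, and the subsequent 12-parameter affine registration and Gauss-Newton steps are not reproduced.

```python
# Sketch of histogram matching by quantile mapping; synthetic data, not brain images.
import numpy as np

def histogram_match(source, template):
    """Map source intensities so their distribution follows the template's."""
    s_shape = source.shape
    s_vals, s_idx, s_counts = np.unique(source.ravel(), return_inverse=True,
                                        return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_quantiles = np.cumsum(s_counts) / source.size
    t_quantiles = np.cumsum(t_counts) / template.size
    matched = np.interp(s_quantiles, t_quantiles, t_vals)   # quantile-to-quantile mapping
    return matched[s_idx].reshape(s_shape)

rng = np.random.default_rng(0)
src = rng.gamma(2.0, 2.0, size=(64, 64))       # synthetic "source image"
tpl = rng.normal(10.0, 3.0, size=(64, 64))     # synthetic "template"
out = histogram_match(src, tpl)
print("matched mean/std:", round(out.mean(), 2), round(out.std(), 2))
```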
Power Generation from a Radiative Thermal Source Using a Large-Area Infrared Rectenna
NASA Astrophysics Data System (ADS)
Shank, Joshua; Kadlec, Emil A.; Jarecki, Robert L.; Starbuck, Andrew; Howell, Stephen; Peters, David W.; Davids, Paul S.
2018-05-01
Electrical power generation from a moderate-temperature thermal source by means of direct conversion of infrared radiation is important and highly desirable for energy harvesting from waste heat and micropower applications. Here, we demonstrate direct rectified power generation from an unbiased large-area nanoantenna-coupled tunnel diode rectifier called a rectenna. Using a vacuum radiometric measurement technique with irradiation from a temperature-stabilized thermal source, a generated power density of 8 nW/cm2 is observed at a source temperature of 450 °C for the unbiased rectenna across an optimized load resistance. The optimized load resistance for peak power generation at each temperature coincides with the tunnel diode resistance at zero bias and corresponds to the impedance matching condition for a rectifying antenna. Current-voltage measurements of a thermally illuminated large-area rectenna show current zero-crossing shifts into the second quadrant, indicating rectification. Photon-assisted tunneling in the unbiased rectenna is modeled as the mechanism for the large short-circuit photocurrents observed, where the photon energy serves as an effective bias across the tunnel junction. The measured current and voltage across the load resistor as a function of the thermal source temperature represent direct current electrical power generation.
Docosahexaenoic Acid and Cognition throughout the Lifespan
Weiser, Michael J.; Butt, Christopher M.; Mohajeri, M. Hasan
2016-01-01
Docosahexaenoic acid (DHA) is the predominant omega-3 (n-3) polyunsaturated fatty acid (PUFA) found in the brain and can affect neurological function by modulating signal transduction pathways, neurotransmission, neurogenesis, myelination, membrane receptor function, synaptic plasticity, neuroinflammation, membrane integrity and membrane organization. DHA is rapidly accumulated in the brain during gestation and early infancy, and the availability of DHA via transfer from maternal stores impacts the degree of DHA incorporation into neural tissues. The consumption of DHA leads to many positive physiological and behavioral effects, including those on cognition. Advanced cognitive function is uniquely human, and the optimal development and aging of cognitive abilities has profound impacts on quality of life, productivity, and advancement of society in general. However, the modern diet typically lacks appreciable amounts of DHA. Therefore, in modern populations, maintaining optimal levels of DHA in the brain throughout the lifespan likely requires obtaining preformed DHA via dietary or supplemental sources. In this review, we examine the role of DHA in optimal cognition during development, adulthood, and aging with a focus on human evidence and putative mechanisms of action. PMID:26901223
BASKET on-board software library
NASA Astrophysics Data System (ADS)
Luntzer, Armin; Ottensamer, Roland; Kerschbaum, Franz
2014-07-01
The University of Vienna is a provider of on-board data processing software with a focus on data compression, as used on board the highly successful Herschel/PACS instrument, as well as in the small BRITE-Constellation fleet of cube-sats. Current contributions are made to CHEOPS, SAFARI and PLATO. An effort was made to review the various functions developed for Herschel and provide a consolidated software library to facilitate the work for future missions. This library is a shopping basket of algorithms. Its contents are separated into four classes: auxiliary functions (e.g. circular buffers), preprocessing functions (e.g. for calibration), lossless data compression (arithmetic or Rice coding) and lossy reduction steps (ramp fitting etc.). The "BASKET" has all the functionality needed to create an on-board data processing chain. All sources are written in C, supplemented by optimized versions in assembly, targeting popular CPU architectures for space applications. BASKET is open source and constantly growing.
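As a language-agnostic illustration of one of the lossless steps the library bundles, here is a minimal Rice coding sketch in Python (the parameter k, the bit-string representation and the function names are illustrative, not BASKET's actual C interface):

```python
def rice_encode(values, k):
    """Rice-encode non-negative integers: unary quotient plus k-bit remainder."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                 # quotient in unary, 0-terminated
        bits.append(format(r, "0{}b".format(k)))   # remainder as k bits
    return "".join(bits)

def rice_decode(bitstream, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstream[i] == "1":                 # read the unary quotient
            q += 1; i += 1
        i += 1                                     # skip the terminating 0
        r = int(bitstream[i:i + k], 2); i += k     # read the k-bit remainder
        out.append((q << k) | r)
    return out

samples = [3, 0, 7, 12, 1]
encoded = rice_encode(samples, k=2)
assert rice_decode(encoded, k=2, count=len(samples)) == samples
```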
NASA Astrophysics Data System (ADS)
Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.
2016-06-01
A new implementation of the indirect boundary element method allows simulating elastic wave propagation in complex configurations made of embedded regions that are homogeneous with irregular boundaries or flat layers. In an older implementation, each layer of a flat layered region would have been treated as a separate homogeneous region without taking into account the flat boundary information. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full-space and the wave field is given by analytical Green's functions. For flat layered regions, fictitious sources emit as in an unbounded flat layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation thus allows reducing the length of the discretized boundaries, but the DWN Green's functions require much more computation time than the full-space Green's functions. Several optimization steps are therefore implemented and discussed. Validations are presented for 2-D and 3-D problems. Higher efficiency is achieved in 3-D.
Optical design of system for a lightship
NASA Astrophysics Data System (ADS)
Chirkov, M. A.; Tsyganok, E. A.
2017-06-01
This article presents the optical design of an illuminating optical system for a lightship using a freeform surface. It describes an algorithm for the optical design of a side-emitting lens for a point source using the Freeform Z function in Zemax non-sequential mode, the optimization of the calculated results, and the testing of the optical system with a real diode.
Connecting source aggregating areas with distributive regions via Optimal Transportation theory.
NASA Astrophysics Data System (ADS)
Lanzoni, S.; Putti, M.
2016-12-01
We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of water and sediment transportation through the network. It can be shown that the minimization of this functional is equivalent to the infinite-time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations, which closely resemble the classical conservation laws of water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.
Scaling of plane-wave functions in statistically optimized near-field acoustic holography.
Hald, Jørgen
2014-11-01
Statistically Optimized Near-field Acoustic Holography (SONAH) is a Patch Holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors--one to get the sound pressure and the other to get the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.
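A generic sketch of the regularized projection underlying this family of methods (not Hald's exact formulation): plane and evanescent wave functions are sampled at the microphone positions, their coefficients are found by Tikhonov regularization, and the field is then reconstructed on a retracted plane. The `scale` array is a placeholder where the proposed plane-wave scaling would enter; the wavenumbers, geometry and regularization parameter are illustrative.

```python
import numpy as np

k0 = 2 * np.pi * 1000 / 343.0                          # acoustic wavenumber at 1 kHz
mic_x = np.linspace(0.0, 0.3, 32)                      # microphone line on the plane z = 0 (m)
kx = np.linspace(-3 * k0, 3 * k0, 25)                  # trace wavenumbers (1-D for brevity)
kz = np.sqrt(np.array(k0**2 - kx**2, dtype=complex))   # real: propagating, imaginary: evanescent

def wave_matrix(x, z, scale):
    """Column j: the j-th (possibly scaled) plane/evanescent wave sampled at (x, z)."""
    return scale * np.exp(1j * np.outer(x, kx)) * np.exp(1j * kz * z)

scale = np.ones_like(kx)                               # placeholder for the proposed scaling
A = wave_matrix(mic_x, z=0.0, scale=scale)
rng = np.random.default_rng(0)
p_meas = A @ rng.standard_normal(kx.size) + 0.01 * rng.standard_normal(mic_x.size)

lam = 1e-2 * np.linalg.norm(A, 2) ** 2                 # illustrative Tikhonov parameter
coeff = np.linalg.solve(A.conj().T @ A + lam * np.eye(kx.size), A.conj().T @ p_meas)
p_rec = wave_matrix(mic_x, z=-0.02, scale=scale) @ coeff   # pressure on a plane retracted toward the source
```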
Performance analysis of optimal power allocation in wireless cooperative communication systems
NASA Astrophysics Data System (ADS)
Babikir Adam, Edriss E.; Samb, Doudou; Yu, Li
2013-03-01
Cooperative communication has been recently proposed in wireless communication systems for exploring the inherent spatial diversity of relay channels. The Amplify-and-Forward (AF) cooperation protocols with multiple relays have not been sufficiently investigated, even though they have a low implementation complexity. In this work we consider a cooperative diversity system in which a source transmits information to a destination with the help of multiple relay nodes using AF protocols, and we investigate the optimal allocation of power at the source and the relays by optimizing the symbol error rate (SER) performance in an efficient way. First, we derive a closed-form SER formulation for MPSK signals using the concept of the moment generating function and some statistical approximations at high signal-to-noise ratio (SNR) for the system under study. We then find a tight corresponding lower bound which converges to the same limit as the theoretical upper bound, and develop an optimal power allocation (OPA) technique based on mean channel gains to minimize the SER. Simulation results show that our scheme outperforms the equal power allocation (EPA) scheme and is tight to the theoretical approximation based on the SER upper bound at high SNR for different numbers of relays.
SHARPEN-systematic hierarchical algorithms for rotamers and proteins on an extended network.
Loksha, Ilya V; Maiolo, James R; Hong, Cheng W; Ng, Albert; Snow, Christopher D
2009-04-30
Algorithms for discrete optimization of proteins play a central role in recent advances in protein structure prediction and design. We wish to improve the resources available for computational biologists to rapidly prototype such algorithms and to easily scale these algorithms to many processors. To that end, we describe the implementation and use of two new open source resources, citing potential benefits over existing software. We discuss CHOMP, a new object-oriented library for macromolecular optimization, and SHARPEN, a framework for scaling CHOMP scripts to many computers. These tools allow users to develop new algorithms for a variety of applications including protein repacking, protein-protein docking, loop rebuilding, or homology model remediation. Particular care was taken to allow modular energy function design; protein conformations may currently be scored using either the OPLSaa molecular mechanical energy function or an all-atom semiempirical energy function employed by Rosetta. (c) 2009 Wiley Periodicals, Inc.
Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles
NASA Astrophysics Data System (ADS)
Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi
2012-09-01
In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index, which should be minimized, was defined as the cost function. The resulting problem was a two-point boundary value problem (TPBVP). Genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying an Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving some energy sources, was discussed. Some near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.
2016-12-01
New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations. The optimization does not require that the simulations be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from serial versions to eliminate fine-grained parallelism. The optimization is computed with the open-source software pySOT, a Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce the cost of decontamination of groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speedup is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration. The time for a single objective function evaluation varies unpredictably, so efficiency is improved with asynchronous parallel calculations to improve load balancing. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies them to a large watershed calibration problem.
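A minimal serial sketch of the surrogate strategy this toolbox implements (an RBF surrogate plus a weighted candidate-point search), written generically rather than against pySOT's actual API; the expensive groundwater simulation is replaced by a cheap stand-in function and all settings are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_simulation(x):
    # Stand-in for a PDE-based groundwater model (assumption: 2-D parameter space).
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

rng = np.random.default_rng(1)
dim, n_init, n_iter = 2, 8, 40
X = rng.random((n_init, dim))                        # initial experimental design
y = np.array([expensive_simulation(x) for x in X])

for _ in range(n_iter):
    surrogate = RBFInterpolator(X, y)                # fit the surrogate to all evaluations so far
    cand = rng.random((500, dim))                    # random candidate points
    pred = surrogate(cand)
    dist = np.min(np.linalg.norm(cand[:, None] - X[None], axis=2), axis=1)
    # Weighted merit: favour low predicted value but keep some exploration (distance to known points).
    merit = 0.8 * (pred - pred.min()) / (np.ptp(pred) + 1e-12) \
          - 0.2 * (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    x_new = cand[np.argmin(merit)]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_simulation(x_new))    # one expensive evaluation per iteration

print("best parameters:", X[np.argmin(y)], "objective:", y.min())
```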
NASA Technical Reports Server (NTRS)
Lester, H. C.; Posey, J. W.
1976-01-01
A discrete frequency study is made of the influence of source characteristics on the optimal properties of acoustically lined uniform and two section ducts. Two simplified sources, a plane wave and a monopole, are considered in some detail and over a greater frequency range than has been previously studied. Source and termination impedance effects are given limited examination. An example of a turbomachinery source and three associated source variants is also presented. Optimal liner designs based on modal theory approach the Cremer criterion at low frequencies and the geometric acoustics limit at high frequencies. Over an intermediate frequency range, optimal two section liners produced higher transmission losses than did the uniform configurations. Source distribution effects were found to have a significant effect on optimal liner design, but source and termination impedance effects appear to be relatively unimportant.
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-09-01
The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest for many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in the HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive evaluation of a cost function. To adjust the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses voltage vectors, as finite control resources, excluding the zero voltage vectors, which produce CMVs in the VSI of ±Vdc/2. In model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly calculating the cost function. The two non-zero voltage vectors are optimally allocated to make the output current approach the reference current as closely as possible. Thus, low CMV, rapid current-following capability and sufficient output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
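A rough sketch of the selection idea (not the authors' exact controller): a deadbeat reference voltage vector is computed from a simple RL-load model, the zero vectors are excluded to bound the common-mode voltage, and the two active vectors bracketing the reference are allocated within one sampling period. The DC-link voltage, load parameters and timing below are illustrative assumptions.

```python
import numpy as np

Vdc, R, L, Ts = 400.0, 1.0, 10e-3, 100e-6       # illustrative DC link, RL load, sampling period
# Six active voltage vectors of a two-level VSI in the alpha-beta plane (zero vectors excluded).
active = np.array([2 / 3 * Vdc * np.exp(1j * np.pi / 3 * n) for n in range(6)])

def select_vectors(i_meas, i_ref, e_back=0.0):
    """Pick the two active vectors bracketing the deadbeat reference voltage and split Ts."""
    # Deadbeat reference voltage for the next sample: v = L*(i_ref - i)/Ts + R*i + e.
    v_ref = L * (i_ref - i_meas) / Ts + R * i_meas + e_back
    sector = int((np.angle(v_ref) % (2 * np.pi)) // (np.pi / 3))   # sector index 0..5
    v1, v2 = active[sector], active[(sector + 1) % 6]
    # Durations from v_ref ~ (t1*v1 + t2*v2)/Ts, solved as a 2x2 real system.
    M = np.array([[v1.real, v2.real], [v1.imag, v2.imag]])
    t1, t2 = np.linalg.solve(M, [v_ref.real, v_ref.imag]) * Ts
    t1 = float(np.clip(t1, 0.0, Ts))
    t2 = float(np.clip(t2, 0.0, Ts - t1))
    return (v1, t1), (v2, t2)

(v1, t1), (v2, t2) = select_vectors(i_meas=5.0 + 1.0j, i_ref=6.0 + 0.5j)
```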
Duct Liner Optimization for Turbomachinery Noise Sources
1975-11-01
NASA Technical Memorandum TM X-72789: Duct Liner Optimization for Turbomachinery Noise Sources, by Harold C. ..., November 1975. ... profiles is combined with a numerical minimization algorithm to predict optimal liner configurations having one, two, and three sections. Source models ...
Tracking historical increases in nitrogen-driven crop production possibilities
NASA Astrophysics Data System (ADS)
Mueller, N. D.; Lassaletta, L.; Billen, G.; Garnier, J.; Gerber, J. S.
2015-12-01
The environmental costs of nitrogen use have prompted a focus on improving the efficiency of nitrogen use in the global food system, the primary source of nitrogen pollution. Typical approaches to improving agricultural nitrogen use efficiency include more targeted field-level use (timing, placement, and rate) and modification of the crop mix. However, global efficiency gains can also be achieved by improving the spatial allocation of nitrogen between regions or countries, due to consistent diminishing returns at high nitrogen use. This concept is examined by constructing a tradeoff frontier (or production possibilities frontier) describing global crop protein yield as a function of applied nitrogen from all sources, given optimal spatial allocation. Yearly variations in country-level input-output nitrogen budgets are used to parameterize country-specific hyperbolic yield-response models. Response functions are further characterized for three ~15-year eras beginning in 1961, and a series of calculations uses these curves to simulate optimal spatial allocation in each era and determine the frontier. The analyses reveal that excess nitrogen (in recent years) could be reduced by ~40% given optimal spatial allocation. Over time, we find that gains in yield potential and in-country nitrogen use efficiency have led to increases in the global nitrogen production possibilities frontier. However, this promising shift has been accompanied by an actual spatial distribution of nitrogen use that has become less optimal, in an absolute sense, relative to the frontier. We conclude that examination of global production possibilities is a promising approach to understanding production constraints and efficiency opportunities in the global food system.
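A small numerical sketch of the allocation idea (with made-up country parameters, not the paper's data): each country gets a hyperbolic yield response Y_i(N) = Ymax_i * N / (N + K_i), and a fixed global nitrogen budget is distributed by equalizing marginal yields, a Lagrange-multiplier condition solved here by bisection.

```python
import numpy as np

Ymax = np.array([8.0, 5.0, 12.0, 6.0])     # illustrative asymptotic protein yields per country
K    = np.array([60.0, 40.0, 90.0, 50.0])  # illustrative half-saturation nitrogen rates
N_total = 200.0                            # fixed global nitrogen budget to allocate

def allocation(lam):
    # Equal-marginal-yield condition: Ymax*K/(N+K)^2 = lam  =>  N = sqrt(Ymax*K/lam) - K (clipped at 0)
    return np.maximum(np.sqrt(Ymax * K / lam) - K, 0.0)

lo, hi = 1e-6, 1.0                         # bracket for the Lagrange multiplier
for _ in range(100):                       # bisection on the total allocated nitrogen
    lam = 0.5 * (lo + hi)
    if allocation(lam).sum() > N_total:
        lo = lam                           # too much allocated: raise the multiplier
    else:
        hi = lam

N_opt = allocation(lam)
Y_opt = (Ymax * N_opt / (N_opt + K)).sum()
print("optimal allocation:", N_opt, "total protein yield:", Y_opt)
```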
NASA Astrophysics Data System (ADS)
Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan
2016-11-01
The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5,000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB's high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
NASA Technical Reports Server (NTRS)
Cunefare, K. A.; Koopmann, G. H.
1991-01-01
This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total radiated power from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz Integral Equation, and thus the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
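At a single frequency the optimization step reduces to minimizing a quadratic form in the complex control-source strengths; a generic sketch (with random Hermitian matrices standing in for the boundary-element quantities) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ctrl = 3                                    # number of secondary control sources
# Radiated power model W(q) = q^H A q + 2 Re(b^H q) + c, with A Hermitian positive definite.
M = rng.standard_normal((n_ctrl, n_ctrl)) + 1j * rng.standard_normal((n_ctrl, n_ctrl))
A = M.conj().T @ M + n_ctrl * np.eye(n_ctrl)  # stand-in for the control-source power matrix
b = rng.standard_normal(n_ctrl) + 1j * rng.standard_normal(n_ctrl)  # coupling to the primary source
c = 10.0                                      # power radiated by the primary source alone (illustrative)

q_opt = -np.linalg.solve(A, b)                # optimum complex strengths (magnitude and phase)
W_min = np.real(q_opt.conj() @ A @ q_opt + 2 * np.real(b.conj() @ q_opt) + c)
reduction_dB = 10 * np.log10(c / W_min)       # achievable reduction in total radiated power
print(q_opt, reduction_dB)
```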
Huang, Ning; Wang, Hong Ying; Lin, Tao; Liu, Qi Ming; Huang, Yun Feng; Li, Jian Xiong
2016-10-01
Watershed landscape pattern regulation and optimization based on 'source-sink' theory for non-point source pollution control is a cost-effective measure and is still in the exploratory stage. Taking the whole watershed as the research object, and on the basis of landscape ecology, related theories and existing research results, a regulation framework of watershed landscape pattern for non-point source pollution control was developed at two levels based on 'source-sink' theory in this study: 1) at the watershed level, the reasonable basic combination and spatial pattern of 'source-sink' landscape were analyzed, and a holistic regulation and optimization method of landscape pattern was then constructed; 2) at the landscape patch level, key 'source' landscape was taken as the focus of regulation and optimization. First, four identification criteria for key 'source' landscape were developed: landscape pollutant loading per unit area, landscape slope, long and narrow transfer 'source' landscape, and pollutant loading per unit length of 'source' landscape along the riverbank. Second, nine types of regulation and optimization methods for different key 'source' landscapes in rural and urban areas were established, according to three regulation and optimization rules: 'sink' landscape inlay, banding 'sink' landscape supplement, and enhancement of the pollutant capacity of the original 'sink' landscape. Finally, the regulation framework was applied to the watershed of Maluan Bay in Xiamen City. A holistic regulation and optimization mode of the watershed landscape pattern of Maluan Bay and key 'source' landscape regulation and optimization measures for the three zones were developed, based on GIS technology, remote sensing images and a DEM model.
Intercorrelation of P and Pn Recordings for the North Korean Nuclear Tests
NASA Astrophysics Data System (ADS)
Lay, T.; Voytan, D.; Ohman, J.
2017-12-01
The relative waveform analysis procedure called Intercorrelation is applied to Pn and P waveforms at regional and teleseismic distances, respectively, for the 5 underground nuclear tests at the North Korean nuclear test site. Intercorrelation is a waveform equalization procedure that parameterizes the effective source function for a given explosion, including the reduced velocity potential convolved with a simplified Green's function that accounts for the free surface reflections (pPn and pP), and possibly additional arrivals such as spall. The source function for one event is convolved with the signal at a given station for a second event, and the recording at the same station for the first event is convolved with the source function for the second event. This procedure eliminates the need to predict the complex receiver function effects at the station, which are typically not well-known for short-period response. The parameters of the source function representation are yield and burial depth, and an explosion source model is required. Here we use the Mueller-Murphy representation of the explosion reduced velocity potential, which explicitly depends on yield and burial depth. We then search over yield and burial depth ranges for both events, constrained by a priori information about reasonable ranges of parameters, to optimize the simultaneous match of multiple station signals for the two events. This procedure, applied to the apparently overburied North Korean nuclear tests (no indications of spall complexity), assuming simple free surface interactions (elastic reflection from a flat surface), provides excellent waveform equalization for all combinations of 5 nuclear tests.
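The waveform-equalization test at the heart of the procedure can be sketched in a few lines (with synthetic traces and toy source time functions, not the Mueller-Murphy source model): the source function of event 1 is convolved with the recording of event 2 and compared against the source function of event 2 convolved with the recording of event 1; for the correct source pair the two composites are equal because the unknown path and receiver response cancels.

```python
import numpy as np

def misfit(rec1, rec2, src1, src2):
    """Intercorrelation misfit: compare src1 * rec2 with src2 * rec1 (* denotes convolution)."""
    a = np.convolve(src1, rec2)
    b = np.convolve(src2, rec1)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)                                  # amplitude-normalize both composites
    return 1.0 - np.dot(a, b)                               # ~0 when the two composites match

# Synthetic example: a shared path/receiver response g and two trial source time functions.
rng = np.random.default_rng(0)
g = rng.standard_normal(200)                                # unknown Green's/receiver function
t = np.arange(50) * 0.01
src_true1 = np.exp(-t / 0.05) - 0.6 * np.exp(-t / 0.08)     # toy effective source functions
src_true2 = np.exp(-t / 0.07) - 0.6 * np.exp(-t / 0.10)
rec1, rec2 = np.convolve(g, src_true1), np.convolve(g, src_true2)

print(misfit(rec1, rec2, src_true1, src_true2))             # ~0: correct source pair
print(misfit(rec1, rec2, src_true2, src_true1))             # larger: swapped sources
```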
Development and Characterization of a 16.3 keV X-Ray Source at the National Ignition Facility
NASA Astrophysics Data System (ADS)
Fournier, K. B.; Barrios, M. A.; Schneider, M. B.; Khan, S.; Chen, H.; Coppari, F.; Rygg, R.; Hohenberger, M.; Albert, F.; Moody, J.; Ralph, J.; Kemp, G. E.; Regan, S. P.
2014-10-01
X-ray sources at the National Ignition Facility are needed for radiography of in-flight capsules in inertial confinement fusion experiments and for diffraction studies of materials at high pressures. In the former case, we want to optimize signal-to-noise and signal-over-background ratios for the radiograph; in the latter case, we want to minimize high-energy emission from the backlighter that creates background on the diffraction data. Four interleaved shots at NIF were taken in one day, with laser irradiances on a Zr backlighter target ranging from 5 to 14 × 10^15 W/cm^2. Two shots were for source optimization as a function of laser irradiance. X-ray fluxes were measured with the time-resolved NIF X-ray Spectrometer (NXS) and the DANTE array of calibrated, filtered diodes. Two shots were optimized to make backscatter measurements with the FABS and NBI optical power systems. The backscatter levels are investigated to look for correlation with hot electron populations inferred from high-energy x rays measured with the FFLEX broadband spectrometer. Results from all shots are presented and compared with models. Work performed under the auspices of the U.S. DOE by LLNL under Contract No. DE-AC52-07NA27344.
Lee, Geon-Ho; Bae, Jae-Han; Suh, Min-Jung; Kim, In-Hwan; Hou, Ching T; Kim, Hak-Ryul
2007-06-01
Lipases are industrially useful, versatile enzymes that catalyze numerous different reactions, including hydrolysis of triglycerides, transesterification, and chiral synthesis of esters under natural conditions. Although lipases from various sources have been widely used in industrial applications, such as in the food, chemical, pharmaceutical, and detergent industries, there is still substantial current interest in developing new microbial lipases, specifically those functioning under abnormal conditions. We screened 17 lipase-producing yeast strains, which were prescreened for substrate specificity of lipase from more than 500 yeast strains from the Agricultural Research Service Culture Collection (Peoria, IL, U.S.A.), and selected Yarrowia lipolytica NRRL Y-2178 as the best lipase producer. This report presents the new finding and optimal production of a novel extracellular alkaline lipase from Y. lipolytica NRRL Y-2178. Optimal culture conditions for lipase production by Y. lipolytica NRRL Y-2178 were a 72 h incubation time, 27.5 degrees C, and pH 9.0. Glycerol and glucose were the most efficient carbon sources, and a combination of yeast extract and peptone was a good nitrogen source for lipase production by Y. lipolytica NRRL Y-2178. These results suggest that Y. lipolytica NRRL Y-2178 shows good industrial potential as a new alkaline lipase producer.
How to Decide? Multi-Objective Early-Warning Monitoring Networks for Water Suppliers
NASA Astrophysics Data System (ADS)
Bode, Felix; Loschko, Matthias; Nowak, Wolfgang
2015-04-01
Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources, which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts could help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. Our considered objectives are to maximize the probability of detecting all contaminations, to enhance the early warning time before detected contaminations reach the drinking water well, and to minimize the installation and operating costs of the monitoring network. Using multi-objective optimization, we avoid the problem of having to weight these objectives into a single objective function. These objectives are clearly competing, and it is impossible to know their mutual trade-offs beforehand - each catchment differs in many respects and it is hardly possible to transfer knowledge between geological formations and risk inventories. To make our optimization results more specific to the type of risk inventory in different catchments, we perform a risk prioritization of all known risk sources. Due to the lack of the required data, quantitative risk ranking is impossible. Instead, we use a qualitative risk ranking to prioritize the known risk sources for monitoring. Additionally, we allow for the existence of unknown risk sources that are totally uncertain in location and in their inherent risk. Therefore, they can neither be located nor ranked. Instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four different categories: severe, medium and tolerable for known risk sources, and an extra category for the unknown ones. With that, early warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks valid for controlling the top risk sources, and evaluate the capabilities (or search for least-cost upgrades) to also cover moderate, tolerable and unknown risk sources. Monitoring networks that are valid for the remaining risk also cover all other risk sources, but only with a relatively poor early-warning time. The data provided for the optimization algorithm are calculated in a preprocessing step by a flow and transport model. It simulates which potential contaminant plumes from the risk sources would be detectable where and when by all possible candidate positions for monitoring wells. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations. These include uncertainty in the ambient flow direction of the groundwater, uncertainty of the conductivity field, and different scenarios for the pumping rates of the production wells. To avoid numerical dispersion during the transport simulations, we use particle-tracking random-walk methods.
Ntozini, Robert; Marks, Sara J; Mangwadu, Goldberg; Mbuya, Mduduzi N N; Gerema, Grace; Mutasa, Batsirai; Julian, Timothy R; Schwab, Kellogg J; Humphrey, Jean H; Zungu, Lindiwe I
2015-12-15
Access to water and sanitation are important determinants of behavioral responses to hygiene and sanitation interventions. We estimated cluster-specific water access and sanitation coverage to inform a constrained randomization technique in the SHINE trial. Technicians and engineers inspected all public access water sources to ascertain seasonality, function, and geospatial coordinates. Households and water sources were mapped using open-source geospatial software. The distance from each household to the nearest perennial, functional, protected water source was calculated, and for each cluster, the median distance and the proportion of households within <500 m and >1500 m of such a water source. Cluster-specific sanitation coverage was ascertained using a random sample of 13 households per cluster. These parameters were included as covariates in randomization to optimize balance in water and sanitation access across treatment arms at the start of the trial. The observed high variability between clusters in both parameters suggests that constraining on these factors was needed to reduce risk of bias. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America.
DOE Office of Scientific and Technical Information (OSTI.GOV)
KNUPP,PATRICK
2000-12-13
We investigate a well-motivated mesh untangling objective function whose optimization automatically produces non-inverted elements when possible. Examples show the procedure is highly effective on simplicial meshes and on non-simplicial (e.g., hexahedral) meshes constructed via mapping or sweeping algorithms. The current whisker-weaving (WW) algorithm in CUBIT usually produces hexahedral meshes that are unsuitable for analyses due to inverted elements. The majority of these meshes cannot be untangled using the new objective function. The most likely source of the difficulty is poor mesh topology.
Ghatnur, Shashidhar M.; Parvatam, Giridhar; Balaraman, Manohar
2015-01-01
Background: Cordyceps sinensis (CS) is a traditional Chinese medicine that contains potent active metabolites such as nucleosides and polysaccharides. The submerged cultivation technique is studied for the large-scale production of CS biomass and metabolites. Objective: To optimize culture conditions for large-scale production of CS1197 biomass and metabolites. Materials and Methods: The CS1197 strain of CS was isolated from dead larvae of natural CS, and its authenticity was confirmed by the presence of the two major markers adenosine and cordycepin using high performance liquid chromatography and mass spectrometry. A three-level Box-Behnken design was employed to optimize the process parameters culturing temperature, pH, and inoculum volume for biomass yield, adenosine, and cordycepin. The experimental results were regressed to a second-order polynomial equation by multiple regression analysis for the prediction of biomass yield, adenosine, and cordycepin production. Multiple responses were optimized based on the desirability function method. Results: The desirability function suggested the process conditions temperature 28°C, pH 7, and inoculum volume 10% for optimal production of nutraceuticals in the biomass. The water extracts from dried CS1197 mycelia showed good inhibition of 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radicals. Conclusion: The results suggest that a response surface methodology-desirability function coupled approach can successfully optimize the culture conditions for CS1197. SUMMARY: Authentication of the CS1197 strain was based on the presence of adenosine and cordycepin, and the culturing period was determined to be 14 days. The content of nucleosides in natural CS was found to be higher than in cultured CS1197 mycelium. A Box-Behnken design was used to optimize the critical culture conditions: temperature, pH and inoculum volume. The water extract showed better antioxidant activity, proving a credible source of natural antioxidants. PMID:26929580
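A generic sketch of the Box-Behnken / desirability workflow used here (made-up design points, responses and ranges, and a plain quadratic least-squares fit, not the published model):

```python
import numpy as np
from itertools import product

# Coded factors: x1 = temperature, x2 = pH, x3 = inoculum volume (illustrative design and responses;
# a full three-level factorial stands in for the actual Box-Behnken design).
rng = np.random.default_rng(0)
X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)

def quad_terms(X):
    """Second-order polynomial model terms: intercept, linear, interaction and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3, x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

y_biomass   = 10 - 2*(X[:, 0] - 0.2)**2 - (X[:, 1] - 0.5)**2 + 0.1*rng.standard_normal(len(X))
y_adenosine = 5 - (X[:, 0] + 0.1)**2 - 0.5*(X[:, 2] - 0.3)**2 + 0.05*rng.standard_normal(len(X))

b_bio, *_ = np.linalg.lstsq(quad_terms(X), y_biomass, rcond=None)    # fitted response surfaces
b_ade, *_ = np.linalg.lstsq(quad_terms(X), y_adenosine, rcond=None)

grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
pred_bio, pred_ade = quad_terms(grid) @ b_bio, quad_terms(grid) @ b_ade
d_bio = (pred_bio - pred_bio.min()) / np.ptp(pred_bio)               # larger-is-better desirability
d_ade = (pred_ade - pred_ade.min()) / np.ptp(pred_ade)
D = np.sqrt(d_bio * d_ade)                                           # overall desirability (geometric mean)
print("optimal coded settings:", grid[np.argmax(D)])
```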
Open-source Software for Exoplanet Atmospheric Modeling
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph
2018-01-01
I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.
Gómez-Favela, Mario Armando; Gutiérrez-Dorado, Roberto; Cuevas-Rodríguez, Edith Oliva; Canizalez-Román, Vicente Adrián; Del Rosario León-Sicairos, Claudia; Milán-Carrillo, Jorge; Reyes-Moreno, Cuauhtémoc
2017-12-01
The chia (Salvia hispanica L.) plant is native to southern Mexico and northern Guatemala. Its seeds are a rich source of bioactive compounds which protect consumers against chronic diseases. Germination improves the functionality of the seeds due to the increase in bioactive compounds and associated antioxidant activity. The purpose of this study was to obtain a functional flour from germinated chia seeds under optimized conditions, with increased antioxidant activity, phenolic compounds, GABA, essential amino acids, and dietary fiber with respect to un-germinated chia seeds. The effect of germination temperature and time (GT = 20-35 °C, Gt = 10-300 h) on protein, lipid, and total phenolic contents (PC, LC, TPC, respectively) and antioxidant activity (AoxA) was analyzed by response surface methodology as the optimization tool. Chia seeds were germinated inside plastic trays with absorbent paper moistened with 50 mL of a 100 ppm sodium hypochlorite solution. The sprouts were dried (50 °C/8 h) and ground to obtain germinated chia flours (GCF). The prediction models developed for PC, LC, TPC, and AoxA showed high coefficients of determination, demonstrating their adequacy to explain the variations in the experimental data. The highest values of PC, LC, TPC, and AoxA were obtained at two different optimal conditions (GT = 21 °C/Gt = 157 h; GT = 33 °C/Gt = 126 h). Optimized germinated chia flours (OGCF) had higher PC, TPC, AoxA, GABA, essential amino acids, calculated protein efficiency ratio (C-PER), and total dietary fiber (TDF) than un-germinated chia seed flour. The OGCF could be utilized as a natural source of proteins, dietary fiber, GABA, and antioxidants in the development of new functional beverages and foods.
He, Jianfang; Fang, Xiaohui; Lin, Yuanhai; Zhang, Xinping
2015-05-04
Half-wave plates were introduced into an interference-lithography scheme consisting of three fibers arranged in a right triangle. Such a flexible and compact geometry allows convenient tuning of the polarizations of both the UV laser source and each branch arm. This not only enables optimization of the contrast of the produced photonic structures with the expected square lattices, but also multiplies the nano-patterning functions of a fixed fiber-based interference-lithography design. The patterns of the photonic structures can thus be tuned simply by rotating a half-wave plate.
System and method for optimal load and source scheduling in context aware homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shetty, Pradeep; Foslien Graber, Wendy; Mangsuli, Purnaprajna R.
A controller for controlling energy consumption in a home includes a constraints engine to define variables for multiple appliances in the home corresponding to various home modes and persona of an occupant of the home. A modeling engine models multiple paths of energy utilization of the multiple appliances to place the home into a desired state from a current context. An optimal scheduler receives the multiple paths of energy utilization and generates a schedule as a function of the multiple paths and a selected persona to place the home in a desired state.
NASA Astrophysics Data System (ADS)
Kingston, Andrew M.; Myers, Glenn R.; Latham, Shane J.; Li, Heyang; Veldkamp, Jan P.; Sheppard, Adrian P.
2016-10-01
With GPU computing becoming mainstream, iterative tomographic reconstruction (IR) is becoming a computationally viable alternative to traditional single-shot analytical methods such as filtered back-projection. IR liberates one from the continuous X-ray source trajectories required for analytical reconstruction. We present a family of novel X-ray source trajectories for large-angle CBCT. These discrete (sparsely sampled) trajectories optimally fill the space of possible source locations by maximising the degree of mutually independent information. They satisfy a discrete equivalent of Tuy's sufficiency condition and allow high cone-angle (high-flux) tomography. The highly isotropic nature of the trajectory has several advantages: (1) the average source distance is approximately constant throughout the reconstruction volume, thus avoiding the differential-magnification artefacts that plague high cone-angle helical computed tomography; (2) reduced streaking artefacts due to e.g. X-ray beam-hardening; (3) misalignment and component motion manifest as blur in the tomogram rather than double edges, which is easier to correct automatically; (4) an approximately shift-invariant point-spread function which enables filtering as a pre-conditioner to speed IR convergence. We describe these space-filling trajectories and demonstrate their above-mentioned properties compared with traditional helical trajectories.
Folta, James A.; Montcalm, Claude; Walton, Christopher
2003-01-01
A method and system for producing a thin film with highly uniform (or highly accurate custom graded) thickness on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source with controlled (and generally, time-varying) velocity. In preferred embodiments, the method includes the steps of measuring the source flux distribution (using a test piece that is held stationary while exposed to the source), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of sweep velocity modulation recipes, and determining from the predicted film thickness profiles a sweep velocity modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a practical method of accurately measuring source flux distribution, and a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal sweep velocity modulation recipe to achieve a desired thickness profile on a substrate. Preferably, the computer implements an algorithm in which many sweep velocity function parameters (for example, the speed at which each substrate spins about its center as it sweeps across the source) can be varied or set to zero.
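The core calculation can be sketched in one dimension (with an assumed Gaussian flux profile and a piecewise-constant velocity recipe, not the patented system's actual measurement or interface): the predicted film thickness at each substrate point is the time-integral of the source flux seen by that point as the substrate sweeps past, so slowing the sweep over a region deposits more material there.

```python
import numpy as np

def flux(x):
    """Measured source flux distribution (assumption: Gaussian, nm/s versus position in cm)."""
    return 2.0 * np.exp(-x**2 / (2 * 3.0**2))

def thickness_profile(points, sweep_range, velocity_recipe, dt=0.01):
    """Integrate the flux over a sweep with a position-dependent velocity recipe."""
    pos_knots, vel_knots = velocity_recipe
    pos = sweep_range[0]
    thickness = np.zeros_like(points)
    while pos < sweep_range[1]:
        v = np.interp(pos, pos_knots, vel_knots)      # sweep speed at this substrate position (cm/s)
        thickness += flux(pos + points) * dt          # each substrate point sits at pos + its coordinate
        pos += v * dt
    return thickness

points = np.linspace(-5, 5, 101)                      # local coordinates on the substrate (cm)
uniform = thickness_profile(points, (-20, 20), ([-20, 20], [2.0, 2.0]))
graded = thickness_profile(points, (-20, 20), ([-20, 0, 20], [2.0, 1.0, 2.0]))   # slower near centre
```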
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties, assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green's functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
A universal Model-R Coupler to facilitate the use of R functions for model calibration and analysis
Wu, Yiping; Liu, Shuguang; Yan, Wende
2014-01-01
Mathematical models are useful in various fields of science and engineering. However, it is a challenge to make a model utilize the open and growing functions (e.g., model inversion) on the R platform due to the requirement of accessing and revising the model's source code. To overcome this barrier, we developed a universal tool that aims to convert a model developed in any computer language to an R function using the template and instruction concept of the Parameter ESTimation program (PEST) and the operational structure of the R-Soil and Water Assessment Tool (R-SWAT). The developed tool (Model-R Coupler) is promising because users of any model can connect an external algorithm (written in R) with their model to implement various model behavior analyses (e.g., parameter optimization, sensitivity and uncertainty analysis, performance evaluation, and visualization) without accessing or modifying the model's source code.
RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.
2016-02-01
We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.
Strategies for the Optimization of Natural Leads to Anticancer Drugs or Drug Candidates
Xiao, Zhiyan; Morris-Natschke, Susan L.; Lee, Kuo-Hsiung
2015-01-01
Natural products have made significant contribution to cancer chemotherapy over the past decades and remain an indispensable source of molecular and mechanistic diversity for anticancer drug discovery. More often than not, natural products may serve as leads for further drug development rather than as effective anticancer drugs by themselves. Generally, optimization of natural leads into anticancer drugs or drug candidates should not only address drug efficacy, but also improve ADMET profiles and chemical accessibility associated with the natural leads. Optimization strategies involve direct chemical manipulation of functional groups, structure-activity relationship-directed optimization and pharmacophore-oriented molecular design based on the natural templates. Both fundamental medicinal chemistry principles (e.g., bio-isosterism) and state-of-the-art computer-aided drug design techniques (e.g., structure-based design) can be applied to facilitate optimization efforts. In this review, the strategies to optimize natural leads to anticancer drugs or drug candidates are illustrated with examples and described according to their purposes. Furthermore, successful case studies on lead optimization of bioactive compounds performed in the Natural Products Research Laboratories at UNC are highlighted. PMID:26359649
Salvat, Regina S; Verma, Deeptak; Parker, Andrew S; Kirsch, Jack R; Brooks, Seth A; Bailey-Kellogg, Chris; Griswold, Karl E
2017-06-27
Therapeutic proteins of wide-ranging function hold great promise for treating disease, but immune surveillance of these macromolecules can drive an antidrug immune response that compromises efficacy and even undermines safety. To eliminate widespread T-cell epitopes in any biotherapeutic and thereby mitigate this key source of detrimental immune recognition, we developed a Pareto optimal deimmunization library design algorithm that optimizes protein libraries to account for the simultaneous effects of combinations of mutations on both molecular function and epitope content. Active variants identified by high-throughput screening are thus inherently likely to be deimmunized. Functional screening of an optimized 10-site library (1,536 variants) of P99 β-lactamase (P99βL), a component of ADEPT cancer therapies, revealed that the population possessed high overall fitness, and comprehensive analysis of peptide-MHC II immunoreactivity showed the population possessed lower average immunogenic potential than the wild-type enzyme. Although similar functional screening of an optimized 30-site library (2.15 × 10^9 variants) revealed reduced population-wide fitness, numerous individual variants were found to have activity and stability better than the wild type despite bearing 13 or more deimmunizing mutations per enzyme. The immunogenic potential of one highly active and stable 14-mutation variant was assessed further using ex vivo cellular immunoassays, and the variant was found to silence T-cell activation in seven of the eight blood donors who responded strongly to wild-type P99βL. In summary, our multiobjective library-design process readily identified large and mutually compatible sets of epitope-deleting mutations and produced highly active but aggressively deimmunized constructs in only one round of library screening.
Optimizing an experimental design for an electromagnetic experiment
NASA Astrophysics Data System (ADS)
Roux, Estelle; Garcia, Xavier
2013-04-01
Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
ERIC Educational Resources Information Center
Iran-Nejad, Asghar; Ortony, Andrew
Optimal-level theories maintain that the quality of affect is a function of a quantitative arousal potential dimension. An alternative view is that the quantitative dimension merely modulates preexisting qualitative properties and is therefore only responsible for changes in the degree of affect. Thus, the quality of affect, whether it is positive…
USDA-ARS?s Scientific Manuscript database
The molecular biological techniques for plasmid-based assembly and cloning of synthetic assembled gene open reading frames are essential for elucidating the function of the proteins encoded by the genes. These techniques involve the production of full-length cDNA libraries as a source of plasmid-bas...
Designing a freeform optic for oblique illumination
NASA Astrophysics Data System (ADS)
Uthoff, Ross D.; Ulanch, Rachel N.; Williams, Kaitlyn E.; Ruiz Diaz, Liliana; King, Page; Koshel, R. John
2017-11-01
The Functional Freeform Fitting (F4) method is utilized to design a freeform optic for oblique illumination of Mark Rothko's Green on Blue (1956). Shown are preliminary results from an iterative freeform design process, from problem definition and specification development to surface fit, ray tracing results, and optimization. This method is applicable to both point and extended sources of various geometries.
Spinal motor control system incorporates an internal model of limb dynamics.
Shimansky, Y P
2000-10-01
The existence and utilization of an internal representation of the controlled object is one of the most important features of the functioning of neural motor control systems. This study demonstrates that this property already exists at the level of the spinal motor control system (SMCS), which is capable of generating motor patterns for reflex rhythmic movements, such as locomotion and scratching, without the aid of peripheral afferent feedback, but substantially modifies the generated activity in response to peripheral afferent stimuli. The SMCS is presented as an optimal control system whose optimality requires that it incorporate an internal model (IM) of the controlled object's dynamics. A novel functional mechanism for the integration of peripheral sensory signals with the corresponding predictive output from the IM, the summation of information precision (SIP), is proposed. In contrast to other models in which the correction of the internal representation of the controlled object's state is based on the calculation of a mismatch between the internal and external information sources, the SIP mechanism merges the information from these sources in order to optimize the precision of the controlled object's state estimate. It is demonstrated, based on scratching in decerebrate cats as an example of the spinal control of goal-directed movements, that the results of computer modeling agree with the experimental observations related to the SMCS's reactions to phasic and tonic peripheral afferent stimuli. It is also shown that the functional requirements imposed by the mathematical model of the SMCS comply with the current knowledge about the related properties of spinal neuronal circuitry. The crucial role of the spinal presynaptic inhibition mechanism in the neuronal implementation of SIP is elucidated. Important differences between the IM and a state predictor employed for compensating for a neural reflex time delay are discussed.
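The summation-of-information-precision idea can be illustrated in its simplest scalar form: two estimates of the limb state are merged with weights proportional to their precisions (inverse variances), so the fused estimate is more precise than either source. The numbers below are made up for illustration.

```python
def fuse(x_internal, var_internal, x_sensory, var_sensory):
    """Precision-weighted merge of the internal-model prediction and the afferent estimate."""
    p1, p2 = 1.0 / var_internal, 1.0 / var_sensory       # precisions of the two information sources
    x = (p1 * x_internal + p2 * x_sensory) / (p1 + p2)   # fused state estimate
    var = 1.0 / (p1 + p2)                                 # precisions add, so the variance shrinks
    return x, var

# Example: internal model predicts limb angle 30 deg (variance 4), afferents report 34 deg (variance 1).
print(fuse(30.0, 4.0, 34.0, 1.0))   # -> (33.2, 0.8)
```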
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, an inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
Akeroyd, Michael A; Chambers, John; Bullock, David; Palmer, Alan R; Summerfield, A Quentin; Nelson, Philip A; Gatehouse, Stuart
2007-02-01
Cross-talk cancellation is a method for synthesizing virtual auditory space using loudspeakers. One implementation is the "Optimal Source Distribution" technique [T. Takeuchi and P. Nelson, J. Acoust. Soc. Am. 112, 2786-2797 (2002)], in which the audio bandwidth is split across three pairs of loudspeakers, placed at azimuths of +/-90 degrees, +/-15 degrees, and +/-3 degrees, conveying low, mid, and high frequencies, respectively. A computational simulation of this system was developed and verified against measurements made on an acoustic system using a manikin. Both the acoustic system and the simulation gave a wideband average cancellation of almost 25 dB. The simulation showed that when there was a mismatch between the head-related transfer functions used to set up the system and those of the final listener, the cancellation was reduced to an average of 13 dB. Moreover, in this case the binaural interaural time differences and interaural level differences delivered by the simulation of the optimal source distribution (OSD) system often differed from the target values. It is concluded that only when the OSD system is set up with "matched" head-related transfer functions can it deliver accurate binaural cues.
NASA Astrophysics Data System (ADS)
Priya, Anjali; Mishra, Ram Awadh
2016-04-01
In this paper, analytical modeling of the surface potential is proposed for a new Triple Metal Gate (TMG) fully depleted Recessed-Source/Drain Silicon On Insulator (SOI) Metal Oxide Semiconductor Field Effect Transistor (MOSFET). The metal with the highest work function is arranged near the source region and the one with the lowest work function near the drain. The Recessed-Source/Drain SOI MOSFET has a higher drain current than the conventional SOI MOSFET due to its larger source and drain regions. The surface potential model, developed from the 2D Poisson's equation, is verified by comparison with simulation results from the two-dimensional ATLAS simulator. The model is compared with DMG and SMG devices and analysed for different device parameters. The ratio of the metal gate lengths is varied to optimize the result.
Intelligent and robust optimization frameworks for smart grids
NASA Astrophysics Data System (ADS)
Dhansri, Naren Reddy
A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many of the contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Under the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met by giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes the smart grid power generation for maximum economic and ecological benefits under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on the coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
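A generic illustration of the global-plus-local coupling mentioned above (particle swarm search followed by Levenberg-Marquardt refinement) is sketched below for a toy two-parameter curve fit. The model, bounds and tuning constants are invented for illustration; this is not the MADS code.

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(theta, t, obs):
        # hypothetical 2-parameter model: exponential breakthrough curve
        a, k = theta
        return a * np.exp(-k * t) - obs

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 50)
    obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)

    # global stage: bare-bones particle swarm over the 2-D parameter space
    n, iters = 30, 40
    lo, hi = np.array([0.1, 0.01]), np.array([10.0, 2.0])
    x = rng.uniform(lo, hi, (n, 2)); v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([np.sum(residuals(p, t, obs) ** 2) for p in x])
    for _ in range(iters):
        g = pbest[np.argmin(pcost)]
        v = 0.7 * v + 1.5 * rng.random((n, 2)) * (pbest - x) + 1.5 * rng.random((n, 2)) * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([np.sum(residuals(p, t, obs) ** 2) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]

    # local stage: polish the swarm's best with Levenberg-Marquardt
    fit = least_squares(residuals, pbest[np.argmin(pcost)], args=(t, obs), method="lm")
    print(fit.x)  # should land near (2.0, 0.3)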
Technical Note: spektr 3.0-A computational tool for x-ray spectrum modeling and analysis.
Punnoose, J; Xu, J; Sisniega, A; Zbijewski, W; Siewerdsen, J H
2016-08-01
A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP). The toolkit includes a matlab (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. The spektr code generates x-ray spectra (photons/mm²/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20-150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30-140 kV, with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available.
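The filtration-matching step described above can be illustrated with a toy calculation: attenuate a placeholder spectrum by a candidate Al thickness and fit the thickness so that a toy output metric matches a "measured" value. The spectrum, attenuation coefficients and target are all invented; this is not the spektr toolkit's function library.

    import numpy as np
    from scipy.optimize import minimize_scalar

    E = np.arange(20.0, 121.0)                         # keV energy bins
    spectrum = np.maximum(0.0, (E - 20) * (120 - E))   # toy unfiltered 120 kV spectrum
    mu_al = 27.0 * (30.0 / E) ** 3 + 0.2               # crude Al attenuation (1/cm), placeholder

    def tube_output(t_al):
        # toy tube output (arbitrary units) after t_al cm of added Al
        return np.sum(spectrum * np.exp(-mu_al * t_al) * E)

    measured_output = 0.35 * tube_output(0.0)          # hypothetical measured value

    res = minimize_scalar(lambda t: (tube_output(t) - measured_output) ** 2,
                          bounds=(0.0, 2.0), method="bounded")
    print("fitted added filtration (cm Al):", round(res.x, 3))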
Intelligent control for PMSM based on online PSO considering parameters change
NASA Astrophysics Data System (ADS)
Song, Zhengqiang; Yang, Huiling
2018-03-01
A novel online particle swarm optimization method is proposed to design the speed and current controllers of vector-controlled interior permanent magnet synchronous motor drives considering stator resistance variation. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter due to the dead-time, threshold and voltage drop of the switching devices is also taken into account in order to simulate the system under practical conditions. The speed and current PI controller gains are optimized online with PSO, and the fitness function is changed according to the system's dynamic and steady states. The proposed optimization algorithm is compared with the conventional PI control method under step speed changes and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics than the conventional PI controller design.
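The core of such a scheme is a fitness function that scores candidate PI gains by simulating the closed-loop response. The sketch below uses a toy first-order plant and an ITAE-style cost, and hands it to SciPy's differential evolution as a stand-in global optimizer (a PSO loop such as the one sketched earlier would be used analogously). The plant, limits and bounds are invented, not the paper's drive model.

    import numpy as np
    from scipy.optimize import differential_evolution

    def fitness(gains, dt=1e-3, t_end=1.0, setpoint=1.0):
        kp, ki = gains
        w, integ, cost = 0.0, 0.0, 0.0                  # speed, integrator, ITAE cost
        for k in range(int(t_end / dt)):
            err = setpoint - w
            integ += err * dt
            u = np.clip(kp * err + ki * integ, -5.0, 5.0)  # saturated PI output
            w += dt * (-2.0 * w + 3.0 * u)              # toy first-order plant: dw/dt = -a*w + b*u
            cost += (k * dt) * abs(err) * dt            # ITAE penalizes slow settling
        return cost

    # any population-based optimizer (PSO, GA, ...) could minimize this fitness
    best = differential_evolution(fitness, bounds=[(0.0, 20.0), (0.0, 200.0)], seed=1, maxiter=30)
    print("tuned (Kp, Ki):", best.x)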
Cross-domain latent space projection for person re-identification
NASA Astrophysics Data System (ADS)
Pu, Nan; Wu, Song; Qian, Li; Xiao, Guoqiang
2018-04-01
In this paper, we study the problem of person re-identification and propose a cross-domain latent space projection (CDLSP) method to address the absence or insufficiency of labeled data in the target domain. Under the assumption that the visual features in the source domain and target domain share a similar geometric structure, we transform the visual features from the source domain and target domain into a common latent space by optimizing the objective function defined in the manifold alignment method. Moreover, the proposed objective function takes into account re-id-specific knowledge with the aim of improving the performance of re-id in complex situations. Extensive experiments conducted on four benchmark datasets show that the proposed CDLSP outperforms or is competitive with state-of-the-art methods for person re-identification.
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
Design optimization of a smooth headlamp reflector to SAE/DOT beam-shape requirements
NASA Astrophysics Data System (ADS)
Shatz, Narkis E.; Bortz, John C.; Dassanayake, Mahendra S.
1999-10-01
The optical design of Ford Motor Company's 1992 Mercury Grand Marquis headlamp utilized a Sylvania 9007 filament source, a paraboloidal reflector and an array of cylindrical lenses (flutes). It has been of interest to Ford to determine the practicality of closely reproducing the on-road beam pattern performance of this headlamp, with an alternate optical arrangement whereby the control of the beam would be achieved solely by means of the geometry of the surface of the reflector, subject to a requirement of smooth-surface continuity; replacing the outer lens with a clear plastic cover having no beam-forming function. To this end the far-field intensity distribution produced by the 9007 bulb was measured at the low-beam setting. These measurements were then used to develop a light-source model for use in ray tracing simulations of candidate reflector geometries. An objective function was developed to compare candidate beam patterns with the desired beam pattern. Functional forms for the 3D reflector geometry were developed with free parameters to be subsequently optimized. A solution was sought meeting the detailed US SAE/DOT constraints for minimum and maximum permissible levels of illumination in the different portions of the beam pattern. Simulated road scenes were generated by Ford Motor Company to compare the illumination properties of the new design with those of the original Grand Marquis headlamp.
Functional connectivity analysis in EEG source space: The choice of method
Knyazeva, Maria G.
2017-01-01
Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conductance effects. An alternative—source-space analysis of FC—is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of the two source FC methods, the inverse-based source FC (ISFC) and the cortical partial coherence (CPC). To examine the effects of localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations by the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with the increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and therefore should be the preferred method. In the studies based on ldEEG, the CPC is a method of choice. PMID:28727750
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that reproduced the mean of the training data for the conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
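The combination of simulated-annealing moves with Pareto-dominance acceptance can be sketched generically as below (Python is used for consistency with the other sketches here; the toy objectives and cooling schedule are invented, and this is not the JuPOETs implementation or its Julia API).

    import numpy as np

    rng = np.random.default_rng(2)

    def objectives(p):
        # two deliberately conflicting toy training objectives
        return np.array([np.sum((p - 1.0) ** 2), np.sum((p + 1.0) ** 2)])

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    p = rng.normal(size=3)
    archive = [(p.copy(), objectives(p))]       # ensemble of non-dominated parameter sets
    T = 1.0
    for step in range(2000):
        q = p + rng.normal(scale=0.2, size=p.size)
        fq, fp = objectives(q), objectives(p)
        # accept if q dominates p, otherwise accept with a temperature-dependent probability
        if dominates(fq, fp) or rng.random() < np.exp(-max(0.0, np.sum(fq - fp)) / T):
            p = q
            if not any(dominates(fa, fq) for _, fa in archive):
                archive = [(pa, fa) for pa, fa in archive if not dominates(fq, fa)]
                archive.append((p.copy(), fq))
        T *= 0.999                              # cooling schedule

    print(len(archive), "ensemble members on or near the Pareto front")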
NASA Astrophysics Data System (ADS)
Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith
2005-11-01
Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components for a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited by the current imaging tools while using the standard illumination sources. Illumination optimization is a process where the source shape is varied, in both profile and intensity distribution, to achieve enhancement in the final image contrast as compared to using the non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form"-like and can be too complex to be directly applicable for production, lacking the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet the production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all the necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in the sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by the NILS-based metrics, we show that the optimal and production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other latest imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.
Optimized data fusion for K-means Laplacian clustering
Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A. K.; De Moor, Bart; Moreau, Yves
2011-01-01
Motivation: We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Results: Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the rank of performance of individual data source. Though in our evaluation the K values are predefined, in practical studies, the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. Availability: The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html. Contact: shiyu@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20980271
Romdhane, Molka Ben; Haddar, Anissa; Ghazala, Imen; Jeddou, Khawla Ben; Helbert, Claire Boisset; Ellouz-Chaabouni, Semia
2017-02-01
In the present work, optimization of hot water extraction, structural characteristics, functional properties, and biological activities of polysaccharides extracted from watermelon rinds (WMRP) were investigated. The physicochemical characteristics and the monosaccharide composition of these polysaccharides were then determined using chemical composition analysis, Fourier transform infrared (FT-IR) spectroscopy, scanning electron microscopy (SEM) and gas chromatography-flame ionization detection (GC-FID). SEM images showed that the extracted polysaccharides had a rough surface with many cavities. GC-FID results proved that galactose was the dominant sugar in the extracted polysaccharides, followed by arabinose, glucose, galacturonic acid, rhamnose, mannose, xylose and traces of glucuronic acid. The findings revealed that WMRP displayed excellent antihypertensive and antioxidant activities. These polysaccharides also had a protective effect against hydroxyl radical-induced DNA damage. Functional properties of the extracted polysaccharides were also evaluated. WMRP showed good interfacial dose-dependent properties. Overall, the results suggest that WMRP presents a promising natural source of antioxidants and antihypertensive agents. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo; Savarese, Salvatore; Schipani, Pietro
2016-07-01
The communication presents an innovative method for the diagnosis of reflector antennas in radio astronomical applications. The approach is based on the optimization of the number and the distribution of the far-field sampling points exploited to retrieve the antenna status in terms of feed misalignments, in order to drastically reduce the duration of the measurement process, minimize the effects of variable environmental conditions and simplify the tracking of the source. The feed misplacement is modeled in terms of an aberration function of the aperture field. The relationship between the unknowns and the far-field pattern samples is linearized thanks to a Principal Component Analysis. The number and the position of the field samples are then determined by optimizing the singular-value behaviour of the relevant operator.
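One simple way to realize "optimizing the singular-value behaviour" is a greedy selection that grows the set of sampling points while keeping the smallest singular value of the linearized operator as large as possible. The sketch below does this on a random stand-in matrix; the operator, candidate points and sizes are invented and this is not the authors' antenna model.

    import numpy as np

    rng = np.random.default_rng(4)
    n_candidates, n_unknowns, n_pick = 200, 5, 12
    A = rng.normal(size=(n_candidates, n_unknowns))  # row i: sensitivity of candidate sample i
                                                     # to the (PCA-reduced) aberration unknowns
    chosen = []
    for _ in range(n_pick):
        best_i, best_sv = None, -1.0
        for i in range(n_candidates):
            if i in chosen:
                continue
            sv = np.linalg.svd(A[chosen + [i], :], compute_uv=False).min()
            if sv > best_sv:
                best_i, best_sv = i, sv
        chosen.append(best_i)

    print("selected sampling points:", sorted(chosen), "min singular value:", round(best_sv, 3))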
Identifying functionally informative evolutionary sequence profiles.
Gil, Nelson; Fiser, Andras
2018-04-15
Multiple sequence alignments (MSAs) can provide essential input to many bioinformatics applications, including protein structure prediction and functional annotation. However, the optimal selection of sequences to obtain biologically informative MSAs for such purposes is poorly explored, and has traditionally been performed manually. We present Selection of Alignment by Maximal Mutual Information (SAMMI), an automated, sequence-based approach to objectively select an optimal MSA from a large set of alternatives sampled from a general sequence database search. The hypothesis of this approach is that the mutual information among MSA columns will be maximal for those MSAs that contain the most diverse set possible of the most structurally and functionally homogeneous protein sequences. SAMMI was tested to select MSAs for functional site residue prediction by analysis of conservation patterns on a set of 435 proteins obtained from protein-ligand (peptides, nucleic acids and small substrates) and protein-protein interaction databases. Availability and implementation: A freely accessible program, including source code, implementing SAMMI is available at https://github.com/nelsongil92/SAMMI.git. Contact: andras.fiser@einstein.yu.edu. Supplementary data are available at Bioinformatics online.
Preliminary design of a mobile lunar power supply
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.; Kenny, Barbara H.; Fulmer, Christopher R.
1991-01-01
A preliminary design for a Stirling isotope power system for use as a mobile lunar power supply is presented. Performance and mass of the components required for the system are estimated. These estimates are based on power requirements and the operating environment. Optimization routines are used to determine minimum-mass operational points. Shielding requirements for the isotope system are given as a function of the allowed dose, distance from the source, and the time spent near the source. The technologies used in the power conversion and radiator systems are taken from ongoing research in the Civil Space Technology Initiative (CSTI) program.
Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers
NASA Astrophysics Data System (ADS)
Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi
2018-03-01
Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.
The Influences of Lamination Angles on the Interior Noise Levels of an Aircraft
NASA Technical Reports Server (NTRS)
Fernholz, Christian M.; Robinson, Jay H.
1996-01-01
The feasibility of reducing the interior noise levels of an aircraft passenger cabin through optimization of the composite lay up of the fuselage is investigated. MSC/NASTRAN, a commercially available finite element code, is used to perform the dynamic analysis and subsequent optimization of the fuselage. The numerical calculation of sensitivity of acoustic pressure to lamination angle is verified using a simple thin, cylindrical shell with point force excitations as noise sources. The thin shell used represents a geometry similar to the fuselage and analytic solutions are available for the cylindrical thin shell equations of motion. Optimization of lamination angle for the reduction of interior noise is performed using a finite element model of an actual aircraft fuselage. The aircraft modeled for this study is the Beech Starship. Point forces simulate the structure borne noise produced by the engines and are applied to the fuselage at the wing mounting locations. These forces are the noise source for the optimization problem. The acoustic pressure response is reduced at a number of points in the fuselage and over a number of frequencies. The objective function is minimized with the constraint that it be larger than the maximum sound pressure level at the response points in the passenger cabin for all excitation frequencies in the range of interest. Results from the study of the fuselage model indicate that a reduction in interior noise levels is possible over a finite frequency range through optimal configuration of the lamination angles in the fuselage. Noise reductions of roughly 4 dB were attained. For frequencies outside the optimization range, the acoustic pressure response may increase after optimization. The effects of changing lamination angle on the overall structural integrity of the airframe are not considered in this study.
NASA Astrophysics Data System (ADS)
Cachera, M.; Ernande, B.; Villanueva, M. C.; Lefebvre, S.
2017-02-01
Individual diet variation (i.e. diet variation among individuals) impacts intra- and inter-specific interactions. Investigating its sources and relationship with species trophic niche organization is important for understanding community structure and dynamics. Individual diet variation may increase with intra-specific phenotypic (or "individual state") variation and habitat variability, according to Optimal Foraging Theory (OFT), and with species trophic niche width, according to the Niche Variation Hypothesis (NVH). OFT proposes "proximate sources" of individual diet variation such as variations in habitat or size whereas NVH relies on "ultimate sources" related to the competitive balance between intra- and inter-specific competitions. The latter implies as a corollary that species trophic niche overlap, taken as inter-specific competition measure, decreases as species niche width and individual niche variation increase. We tested the complementary predictions of OFT and NVH in a marine fish assemblage using stomach content data and associated trophic niche metrics. The NVH predictions were tested between species of the assemblage and decomposed into a between- and a within-functional group component to assess the potential influence of species' ecological function. For most species, individual diet variation and niche overlap were consistently larger than expected. Individual diet variation increased with intra-specific variability in individual state and habitat, as expected from OFT. It also increased with species niche width but in compliance with the null expectation, thus not supporting the NVH. In contrast, species niche overlap increased significantly less than null expectation with both species niche width and individual diet variation, supporting NVH corollary. The between- and within-functional group components of the NVH relationships were consistent with those between species at the assemblage level. Changing the number of prey categories used to describe diet (from 16 to 41) did not change the results qualitatively. These results suggest that, besides proximate sources, intra-specific competition favors higher individual diet variation than expected while inter-specific competition limits the increase of individual diet variation and of species niche overlap with species niche expansion. This reveals partial trophic resource partitioning between species. Various niche metrics used in combination allow inferring competition effects on trophic niches' organization within communities.
Moment tensor solutions estimated using optimal filter theory for 51 selected earthquakes, 1980-1984
Sipkin, S.A.
1987-01-01
The 51 global events that occurred from January 1980 to March 1984, which were chosen by the convenors of the Symposium on Seismological Theory and Practice, have been analyzed using a moment tensor inversion algorithm (Sipkin). Many of the events were routinely analyzed as part of the National Earthquake Information Center's (NEIC) efforts to publish moment tensor and first-motion fault-plane solutions for all moderate- to large-sized (mb>5.7) earthquakes. In routine use only long-period P-waves are used and the source-time function is constrained to be a step-function at the source (δ-function in the far-field). Four of the events were of special interest, and long-period P, SH-wave solutions were obtained. For three of these events, an unconstrained inversion was performed. The resulting time-dependent solutions indicated that, for many cases, departures of the solutions from pure double-couples are caused by source complexity that has not been adequately modeled. These solutions also indicate that source complexity of moderate-sized events can be determined from long-period data. Finally, for one of the events of special interest, an inversion of the broadband P-waveforms was also performed, demonstrating the potential for using broadband waveform data in inversion procedures. © 1987.
NASA Astrophysics Data System (ADS)
Ganguli, R.
2002-11-01
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion and axial deformations. The objective of the improved design is to reduce vibratory loads at the rotor hub that are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second order polynomial response surfaces constructed using the central composite design of the theory of design of experiments adequately represents the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis problem and the optimization problems using response surface methods, which should encourage the use of optimization methods by the helicopter industry.
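To make the response-surface step concrete, the sketch below builds a small central composite design in two coded variables, samples a stand-in "aeroelastic analysis" at those points, and fits the full second-order polynomial by least squares. The response function and factor count are invented for illustration, not the rotor model.

    import numpy as np
    from itertools import product

    alpha = np.sqrt(2.0)
    factorial = np.array(list(product([-1.0, 1.0], repeat=2)))                     # 2^2 corner points
    axial = np.array([[a, 0.0] for a in (-alpha, alpha)] + [[0.0, a] for a in (-alpha, alpha)])
    center = np.zeros((1, 2))
    X = np.vstack([factorial, axial, center])                                       # CCD points

    def hub_load(x):
        # hypothetical "aeroelastic analysis" response evaluated at the design points
        return 1.0 + 0.5 * x[0] - 0.3 * x[1] + 0.2 * x[0] * x[1] + 0.4 * x[0] ** 2 + 0.1 * x[1] ** 2

    y = np.array([hub_load(x) for x in X])
    # design matrix for the full quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted quadratic coefficients:", np.round(coef, 3))   # surrogate used by the optimizer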
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marleau, Peter; Reyna, David
In this work we investigate a method that confirms the operability of neutron detectors requiring neither radiological sources nor radiation-generating devices. This is desirable when radiological sources are not available, but confidence in the functionality of the instrument is required. The “source”, based on the production of neutrons in high-Z materials by muons, provides a tagged, low-background and consistent rate of neutrons that can be used to check the functionality of or calibrate a detector. Using a Monte Carlo guided optimization, an experimental apparatus was designed and built to evaluate the feasibility of this technique. Through a series of trial measurements in a variety of locations we show that gated muon-induced neutrons appear to provide a consistent source of neutrons (35.9 ± 2.3 measured neutrons/10,000 muons in the instrument) under normal environmental variability (less than one statistical standard deviation for 10,000 muons) with a combined environmental + statistical uncertainty of ~18% for 10,000 muons. This is achieved in a single 21-22 minute measurement at sea level.
Coherent transport and energy flow patterns in photosynthesis under incoherent excitation.
Pelzer, Kenley M; Can, Tankut; Gray, Stephen K; Morr, Dirk K; Engel, Gregory S
2014-03-13
Long-lived coherences have been observed in photosynthetic complexes after laser excitation, inspiring new theories regarding the extreme quantum efficiency of photosynthetic energy transfer. Whether coherent (ballistic) transport occurs in nature and whether it improves photosynthetic efficiency remain topics of debate. Here, we use a nonequilibrium Green's function analysis to model exciton transport after excitation from an incoherent source (as opposed to coherent laser excitation). We find that even with an incoherent source, the rate of environmental dephasing strongly affects exciton transport efficiency, suggesting that the relationship between dephasing and efficiency is not an artifact of coherent excitation. The Green's function analysis provides a clear view of both the pattern of excitonic fluxes among chromophores and the multidirectionality of energy transfer that is a feature of coherent transport. We see that even in the presence of an incoherent source, transport occurs by qualitatively different mechanisms as dephasing increases. Our approach can be generalized to complex synthetic systems and may provide a new tool for optimizing synthetic light harvesting materials.
Propeller performance analysis and multidisciplinary optimization using a genetic algorithm
NASA Astrophysics Data System (ADS)
Burger, Christoph
A propeller performance analysis program has been developed and integrated into a Genetic Algorithm for design optimization. The design tool will produce optimal propeller geometries for a given goal, which includes performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis and a subsonic compact source model is used for the acoustic signature determination. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds number based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep and the local airfoil sections, produced blades with favorable tradeoffs between single and multiple point optimizations of propeller performance and acoustic noise signatures. Optimizations using a binary encoded IMPROVE(c) Genetic Algorithm (GA) and a real encoded GA were obtained after optimization runs with some premature convergence. The newly developed real encoded GA was used to obtain the majority of the results which produced generally better convergence characteristics when compared to the binary encoded GA. The optimization trade-offs show that single point optimized propellers have favorable performance, but circulation distributions were less smooth when compared to dual point or multiobjective optimizations. Some of the single point optimizations generated propellers with proplets which show a loading shift to the blade tip region. When noise is included into the objective functions some propellers indicate a circulation shift to the inboard sections of the propeller as well as a reduction in propeller diameter. In addition the propeller number was increased in some optimizations to reduce the acoustic blade signature.
Optimal configuration of power grid sources based on optimal particle swarm algorithm
NASA Astrophysics Data System (ADS)
Wen, Yuanhua
2018-04-01
In order to solve the optimal configuration problem of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for solving the subsequent micro-grid power optimization configuration problem.
Flores-Girón, Emmanuel; Salazar-Montoya, Juan Alfredo; Ramos-Ramírez, Emma Gloria
2016-08-01
Agave (Agave tequilana Weber var. Azul) is an industrially important crop in México since it is the only raw material appropriate to produce tequila, an alcoholic beverage. Nowadays, however, these plants also have nutritional interest as a source of functional food ingredients, owing to the prebiotic potential of agave fructans. In this study, a Box-Behnken design was employed to determine the influence of temperature, liquid:solid ratio and time in a maceration process for agave fructan extraction and optimization. The developed regression model indicates that the selected study variables were statistical determinants of the extraction yield, and the optimal conditions for maximum extraction were a temperature of 60 °C, a liquid:solid ratio of 10:1 (v/w) and a time of 26.7 min, corresponding to a predicted extraction yield of 37.84%. Through selective separation via precipitation with ethanol, fructans with a degree of polymerization of 29.1 were obtained. Box-Behnken designs are useful statistical methods for optimizing the extraction process of agave fructans. A mixture of carbohydrates was obtained from agave powder. This optimized method can be used to obtain fructans for use as prebiotics or as raw material for obtaining functional oligosaccharides. © 2015 Society of Chemical Industry.
Mason, Tyler B; Lewis, Robin J
2017-12-01
Binge eating is a significant concern among college-age women, both Caucasian and African-American. Research has shown that social support, coping, and optimism are associated with engaging in fewer negative health behaviors including binge eating among college students. However, the impact of sources of social support (i.e., support from family, friends, and a special person), rumination, and optimism on binge eating as a function of race/ethnicity has received less attention. The purpose of this study was to examine the association between social support, rumination, and optimism and binge eating among Caucasian and African-American women, separately. Caucasian (n = 100) and African-American (n = 84) women from a university in the Mid-Atlantic US completed an online survey about eating behaviors and psychosocial health. Social support from friends was associated with less likelihood of binge eating among Caucasian women. Social support from family was associated with less likelihood of binge eating among African-American women, but greater likelihood of binge eating among Caucasian women. Rumination was associated with greater likelihood of binge eating among Caucasian and African-American women. Optimism was associated with less likelihood of binge eating among African-American women. These results demonstrate similarities and differences in correlates of binge eating as a function of race/ethnicity.
scarlet: Source separation in multi-band images by Constrained Matrix Factorization
NASA Astrophysics Data System (ADS)
Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert
2018-03-01
SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
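As a minimal illustration of the factorization idea (each source modeled as a per-band SED times a non-negative per-pixel morphology), the sketch below deblends a toy two-band, two-source scene with plain NMF multiplicative updates. SCARLET itself uses constrained/proximal optimization and additional priors; this is not its API.

    import numpy as np

    rng = np.random.default_rng(3)
    bands, npix, k = 2, 400, 2

    # toy 2-source, 2-band scene on a 20x20 grid
    yy, xx = np.mgrid[0:20, 0:20]
    morph_true = np.array([np.exp(-((xx - 7) ** 2 + (yy - 10) ** 2) / 8.0).ravel(),
                           np.exp(-((xx - 13) ** 2 + (yy - 10) ** 2) / 8.0).ravel()])
    sed_true = np.array([[1.0, 0.3], [0.4, 1.0]])            # bands x sources
    V = sed_true @ morph_true + 0.01 * rng.random((bands, npix))

    W = rng.random((bands, k)) + 0.1                          # SEDs to be estimated
    H = rng.random((k, npix)) + 0.1                           # morphologies to be estimated
    for _ in range(500):                                      # Lee-Seung multiplicative updates
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

    print("recovered SEDs (up to scale/permutation):\n", np.round(W / W.max(axis=0), 2))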
Yang, Guoxiang; Best, Elly P H
2015-09-15
Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Qualitative modeling of silica plasma etching using neural network
NASA Astrophysics Data System (ADS)
Kim, Byungwhan; Kwon, Kwang Ho
2003-01-01
The etching of silica thin films is qualitatively modeled using a neural network. The process was characterized by a 2³ full factorial experiment plus one center point, in which the experimental factors and ranges include 100-800 W radio-frequency source power, 100-400 W bias power and the gas flow rate ratio CHF₃/CF₄. The gas flow rate ratio varied from 0.2 to 5.0. The backpropagation neural network (BPNN) was trained on nine experiments and tested on six experiments not pertaining to the original training data. The prediction ability of the BPNN was optimized as a function of the training parameters. Prediction errors are 180 Å/min and 1.33 for the etch rate and anisotropy models, respectively. Physical etch mechanisms were estimated from the three-dimensional plots generated from the optimized models. Predicted response surfaces were consistent with experimentally measured etch data. The dc bias was correlated to the etch responses to evaluate its contribution. Both the source power (plasma density) and bias power (ion directionality) strongly affected the etch rate. The source power was the most influential factor for the etch rate. A conflicting effect between the source and bias powers was noticed with respect to the anisotropy. The dc bias played an important role in understanding or separating physical etch mechanisms.
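A minimal stand-in for such a backpropagation model is shown below: an MLP regressor (requires scikit-learn) trained on the nine coded design points (the 2³ corners plus the center) with a synthetic etch-rate response. The response function and network size are invented, and this is not the authors' model.

    import numpy as np
    from itertools import product
    from sklearn.neural_network import MLPRegressor

    # coded factors: source power, bias power, CHF3/CF4 flow ratio
    X = np.array(list(product([-1.0, 1.0], repeat=3)) + [(0.0, 0.0, 0.0)])

    def toy_etch_rate(x):
        src, bias, ratio = x
        return 3000 + 900 * src + 500 * bias - 300 * ratio + 150 * src * bias  # Å/min, invented

    y = np.array([toy_etch_rate(x) for x in X])
    model = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                         solver="lbfgs", max_iter=5000, random_state=0).fit(X, y)
    print("predicted etch rate at the center point:", round(model.predict([[0, 0, 0]])[0], 1), "Å/min")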
Studies of EGRET sources with a novel image restoration technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajima, Hiroyasu; Cohen-Tanugi, Johann; Kamae, Tuneyoshi
2007-07-12
We have developed an image restoration technique based on the Richardson-Lucy algorithm optimized for GLAST-LAT image analysis. Our algorithm is original since it utilizes the PSF (point spread function) that is calculated for each event. This is critical for EGRET and GLAST-LAT image analysis since the PSF depends on the energy and angle of incident gamma-rays and varies by more than one order of magnitude. EGRET and GLAST-LAT image analysis also faces Poisson noise due to low photon statistics. Our technique incorporates wavelet filtering to minimize noise effects. We present studies of EGRET sources using this novel image restoration technique for possible identification of extended gamma-ray sources.
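For reference, the core Richardson-Lucy update on which the technique builds looks like the one-dimensional sketch below with a single fixed Gaussian PSF. The per-event PSF handling and wavelet filtering described above are not reproduced here; the data are synthetic.

    import numpy as np

    x = np.linspace(-5, 5, 201)
    psf = np.exp(-x ** 2 / 0.5); psf /= psf.sum()
    truth = np.zeros_like(x); truth[80] = 1.0; truth[130] = 0.6        # two point sources
    observed = np.convolve(truth, psf, mode="same")

    estimate = np.full_like(observed, observed.mean())                 # flat, positive initial guess
    for _ in range(200):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)
        estimate *= np.convolve(ratio, psf[::-1], mode="same")         # RL multiplicative update

    print("brightest recovered pixel:", int(np.argmax(estimate)))      # the true sources sit at 80 and 130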
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
Zhang, J L; Li, Y P; Huang, G H
2014-04-01
In this study, a robust simulation-optimization modeling system (RSOMS) is developed for supporting agricultural nonpoint source (NPS) effluent trading planning. The RSOMS can enhance effluent trading through incorporation of a distributed simulation model and an optimization model within its framework. The modeling system can not only handle uncertainties expressed as probability density functions and interval values but also deal with the variability of the second-stage costs that are above the expected level, as well as capture the notion of risk under high-variability situations. A case study is conducted for mitigating agricultural NPS pollution with an effluent trading program in the Xiangxi watershed. Compared with the non-trading policy, the trading scheme can successfully mitigate agricultural NPS pollution with an increased system benefit. Through the trading scheme, [213.7, 288.8] × 10³ kg of TN and [11.8, 30.2] × 10³ kg of TP emissions from the cropped area can be cut down during the planning horizon. The results can help identify desired effluent trading schemes for water quality management, with the tradeoff between system benefit and reliability being balanced and risk aversion being considered.
NASA Astrophysics Data System (ADS)
Hardy, Neil; Dvir, Hila; Fenton, Flavio
Existing pacemakers treat the rectangular pulse as the optimal form of stimulation current. However, other waveforms could save energy while still stimulating the heart. We aim to find the optimal waveform for pacemaker use and to offer a theoretical explanation for its advantage. Since the pacemaker battery is a charge source, here we evaluate stimulation current waveforms with respect to total charge delivery. In this talk we present theoretical analysis and numerical simulations of myocyte ion-channel currents acting as an additional source of charge that adds to the external stimulating charge. We find that as the action potential emerges, the external stimulating current can be reduced exponentially. We then performed experimental studies in rabbit and cat hearts and showed that exponentially truncated pulses with less total charge can indeed still induce activation in the heart. From the experiments, we present curves showing the savings in charge as a function of the exponential waveform, and we calculate that the longevity of the pacemaker battery would be ten times higher for the exponential current compared to the rectangular waveform. Thanks to the Petit Undergraduate Research Scholars Program and NSF# 1413037.
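As a rough illustration of why a decaying pulse saves charge, the short calculation below compares the charge delivered by a rectangular pulse and by an exponentially truncated pulse of the same peak current and duration; all numerical values are assumptions, not the experimental settings.

```python
# Illustrative comparison (assumed numbers, not the experimental values): total charge
# delivered by a rectangular pulse versus an exponentially truncated pulse of the same
# peak current and duration.
import numpy as np

i_peak = 2.0e-3          # peak stimulation current [A] (assumed)
width = 1.0e-3           # pulse width [s] (assumed)
tau = 0.3e-3             # decay constant of the exponential pulse [s] (assumed)

q_rect = i_peak * width                                   # integral of a constant pulse
q_exp = i_peak * tau * (1.0 - np.exp(-width / tau))       # integral of i_peak * exp(-t/tau)

print(f"rectangular pulse charge: {q_rect * 1e6:.2f} uC")
print(f"exponential pulse charge: {q_exp * 1e6:.2f} uC")
print(f"charge saving: {100.0 * (1.0 - q_exp / q_rect):.0f} %")
```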
Visualization tool for human-machine interface designers
NASA Astrophysics Data System (ADS)
Prevost, Michael P.; Banda, Carolyn P.
1991-06-01
As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
Identification of the fitness determinants of budding yeast on a natural substrate.
Filteau, Marie; Charron, Guillaume; Landry, Christian R
2017-04-01
The budding yeasts are prime models in genomics and cell biology, but the ecological factors that determine their success in non-human-associated habitats are poorly understood. In North America, Saccharomyces yeasts are present on the bark of deciduous trees, where they feed on bark and sap exudates. In the Northeast, Saccharomyces paradoxus is found on maples, which makes maple sap a natural substrate for this species. We measured growth rates of S. paradoxus natural isolates on maple sap and found variation along a geographical gradient not explained by the inherent variation observed under optimal laboratory conditions. We used a functional genomic screen to reveal the ecologically relevant genes and conditions required for optimal growth in this substrate. We found that the allantoin degradation pathway is required for optimal growth in maple sap, in particular the genes necessary for allantoate utilization, which we demonstrate is the major nitrogen source available to yeast in this environment. Growth with allantoin or allantoate as the sole nitrogen source recapitulated the variation in growth rates in maple sap among strains. We also show that two lineages of S. paradoxus display different life-history traits on allantoin and allantoate media, highlighting the ecological relevance of this pathway.
Topology-optimized silicon photonic wire mode (de)multiplexer
NASA Astrophysics Data System (ADS)
Frellsen, Louise F.; Frandsen, Lars H.; Ding, Yunhong; Elesin, Yuriy; Sigmund, Ole; Yvind, Kresten
2015-02-01
We have designed and for the first time experimentally verified a topology optimized mode (de)multiplexer, which demultiplexes the fundamental and the first order mode of a double mode photonic wire to two separate single mode waveguides (and multiplexes vice versa). The device has a footprint of ~4.4 μm x ~2.8 μm and was fabricated for different design resolutions and design threshold values to verify the robustness of the structure to fabrication tolerances. The multiplexing functionality was confirmed by recording mode profiles using an infrared camera and vertical grating couplers. All structures were experimentally found to maintain functionality throughout a 100 nm wavelength range limited by available laser sources and insertion losses were generally lower than 1.3 dB. The cross talk was around -12 dB and the extinction ratio was measured to be better than 8 dB.
INTEGRAL/SPI data segmentation to retrieve source intensity variations
NASA Astrophysics Data System (ADS)
Bouchet, L.; Amestoy, P. R.; Buttari, A.; Rouet, F.-H.; Chauvin, M.
2013-07-01
Context. The INTEGRAL/SPI X/γ-ray spectrometer (20 keV-8 MeV) is an instrument for which recovering source intensity variations is not straightforward and can constitute a difficulty for data analysis. In most cases, determining the source intensity changes between exposures is largely based on a priori information. Aims: We propose techniques that help to overcome the difficulty related to source intensity variations, making this step more rational. In addition, the constructed "synthetic" light curves should permit us to obtain a sky model that describes the data better and optimizes the source signal-to-noise ratios. Methods: For this purpose, the time intensity variation of each source was modeled as a combination of piecewise segments of time during which a given source exhibits a constant intensity. To optimize the signal-to-noise ratios, the number of segments was minimized. We present a first method that takes advantage of previous time series that can be obtained from another instrument on board the INTEGRAL observatory. A data segmentation algorithm was then used to synthesize the time series into segments. The second method no longer needs external light curves, but relies solely on SPI raw data. For this, we developed a specific algorithm that involves the SPI transfer function. Results: The time segmentation algorithms developed here solve a difficulty inherent to the SPI instrument, namely the intensity variations of sources between exposures, and they allow us to obtain more information about the sources' behavior. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain, and Switzerland), Czech Republic and Poland with participation of Russia and the USA.
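The following is a generic sketch of the segmentation idea (not the SPI pipeline itself): a penalized least-squares dynamic program partitions a noisy count-rate series into the fewest piecewise-constant intervals consistent with the data, with the penalty playing the role of the minimum-segment criterion described above.

```python
# Generic sketch (not the SPI pipeline): segment a count-rate time series into
# piecewise-constant intervals by penalized least-squares dynamic programming.
# The penalty discourages extra segments, mimicking the "minimum number of
# segments" criterion described above.
import numpy as np

def segment(y, penalty):
    n = len(y)
    csum, csum2 = np.cumsum(np.r_[0.0, y]), np.cumsum(np.r_[0.0, y ** 2])
    def sse(i, j):
        # sum of squared residuals of y[i:j+1] around its mean
        s, s2, m = csum[j + 1] - csum[i], csum2[j + 1] - csum2[i], j - i + 1
        return s2 - s * s / m
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    cut = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + sse(i, j - 1) + penalty
            if c < best[j]:
                best[j], cut[j] = c, i
    bounds, j = [], n                      # backtrack the change points
    while j > 0:
        bounds.append((cut[j], j))
        j = cut[j]
    return bounds[::-1]

rng = np.random.default_rng(1)
truth = np.repeat([5.0, 12.0, 7.0], [40, 30, 50])            # three intensity plateaus
y = rng.poisson(truth).astype(float)                         # Poisson-noise light curve
print(segment(y, penalty=2.0 * np.var(y) * np.log(len(y))))  # BIC-like penalty
```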
Parrish, Robert M; Burns, Lori A; Smith, Daniel G A; Simmonett, Andrew C; DePrince, A Eugene; Hohenstein, Edward G; Bozkaya, Uğur; Sokolov, Alexander Yu; Di Remigio, Roberto; Richard, Ryan M; Gonthier, Jérôme F; James, Andrew M; McAlexander, Harley R; Kumar, Ashutosh; Saitow, Masaaki; Wang, Xiao; Pritchard, Benjamin P; Verma, Prakash; Schaefer, Henry F; Patkowski, Konrad; King, Rollin A; Valeev, Edward F; Evangelista, Francesco A; Turney, Justin M; Crawford, T Daniel; Sherrill, C David
2017-07-11
Psi4 is an ab initio electronic structure program providing methods such as Hartree-Fock, density functional theory, configuration interaction, and coupled-cluster theory. The 1.1 release represents a major update meant to automate complex tasks, such as geometry optimization using complete-basis-set extrapolation or focal-point methods. Conversion of the top-level code to a Python module means that Psi4 can now be used in complex workflows alongside other Python tools. Several new features have been added with the aid of libraries providing easy access to techniques such as density fitting, Cholesky decomposition, and Laplace denominators. The build system has been completely rewritten to simplify interoperability with independent, reusable software components for quantum chemistry. Finally, a wide range of new theoretical methods and analyses have been added to the code base, including functional-group and open-shell symmetry adapted perturbation theory, density-fitted coupled cluster with frozen natural orbitals, orbital-optimized perturbation and coupled-cluster methods (e.g., OO-MP2 and OO-LCCD), density-fitted multiconfigurational self-consistent field, density cumulant functional theory, algebraic-diagrammatic construction excited states, improvements to the geometry optimizer, and the "X2C" approach to relativistic corrections, among many other improvements.
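As a hedged illustration of the Python-module usage mentioned above, the sketch below drives a small Hartree-Fock calculation and geometry optimization through Psi4's Python interface; the molecule, options, and method string are placeholders, and keyword names should be checked against the manual of the installed release.

```python
# Minimal sketch of using Psi4 as a Python module (molecule and options are
# placeholders; consult the Psi4 manual for the exact keywords of a given release).
import psi4

psi4.set_memory("2 GB")

water = psi4.geometry("""
0 1
O
H 1 0.96
H 1 0.96 2 104.5
""")

psi4.set_options({"basis": "cc-pvdz", "scf_type": "df"})   # density-fitted SCF

e_scf = psi4.energy("scf")                # single-point Hartree-Fock energy
e_opt = psi4.optimize("scf")              # geometry optimization with the same method
print(e_scf, e_opt)
```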
Cheirsilp, Benjamas; Suksawang, Suwannee; Yeesang, Jarucha; Boonsawang, Piyarat
2018-01-01
Kefiran is a functional exopolysaccharide produced by Lactobacillus kefiranofaciens, which originates from kefir, a traditional fermented milk from the Caucasus Mountains of Russia. Kefiran is attractive as a thickener, stabilizer, emulsifier and gelling agent, and it also has antimicrobial and antitumor activity. However, the production costs of kefiran are still high, mainly due to the high cost of carbon and nitrogen sources. This study aimed to produce kefiran and its co-product, lactic acid, from low-cost industrial byproducts. Among the sources tested, whey lactose (at 2% sugar concentration) and spent yeast cell hydrolysate (at 6 g-nitrogen/L) gave the highest kefiran yield of 480 ± 21 mg/L along with lactic acid of 20.1 ± 0.2 g/L. The combination of these two sources and the initial pH were optimized through response surface methodology. With the optimized medium, L. kefiranofaciens produced more kefiran and lactic acid, up to 635 ± 7 mg/L and 32.9 ± 0.7 g/L, respectively. When the pH was controlled to alleviate the inhibition from acidic pH, L. kefiranofaciens could consume all sugars and produced kefiran and lactic acid up to 1693 ± 29 mg/L and 87.49 ± 0.23 g/L, respectively. Moreover, fed-batch fermentation with intermittent addition of whey lactose improved kefiran and lactic acid production up to 2514 ± 93 mg/L and 135 ± 1.75 g/L, respectively. These results indicate a promising approach to economically produce kefiran and lactic acid from low-cost nutrient sources.
Eskinazi, Ilan; Fregly, Benjamin J
2018-04-01
Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Feidt, Michel; Costea, Monica
2018-04-01
Many works have been devoted to finite time thermodynamics since the Curzon and Ahlborn [1] contribution, which is generally considered as its origin. Nevertheless, earlier works in this domain have since been brought to light [2], [3], and recently, results of an attempt to correlate finite time thermodynamics with linear irreversible thermodynamics according to Onsager's theory were reported [4]. The aim of the present paper is to extend and improve the approach to thermodynamic optimization of generic objective functions of a Carnot engine in the linear response regime presented in [4]. The case study of the Carnot engine is revisited under the steady-state hypothesis, when non-adiabaticity of the system is considered and heat loss is accounted for by an overall heat leak between the engine heat reservoirs. The optimization is focused on the main objective functions connected to engineering conditions, namely maximum efficiency or power output, apart from the one relative to entropy, which is more fundamental. Results given in reference [4] relative to maximum power output and minimum entropy production as objective functions are reconsidered and clarified, and the change from finite time to finite physical dimensions was shown to be effected through the heat flow rate at the source. Our modeling has led to new results for the Carnot engine optimization and showed that the primary interest for an engineer is mainly connected to what we call Finite Physical Dimensions Optimal Thermodynamics.
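For reference, the Curzon-Ahlborn result cited as [1], which the paper takes as its starting point, gives the efficiency at maximum power of an endoreversible Carnot engine; this formula is background, not a result of the present work.

```latex
% Reference point (from the Curzon-Ahlborn paper cited as [1]): efficiency at maximum
% power of an endoreversible Carnot engine between reservoirs at temperatures T_h and T_c.
\[
  \eta_{\mathrm{CA}} \;=\; 1 - \sqrt{\frac{T_c}{T_h}},
  \qquad\text{compared with the reversible limit}\qquad
  \eta_{\mathrm{Carnot}} \;=\; 1 - \frac{T_c}{T_h}.
\]
```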
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine uses a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for catchment behavior that could not reasonably have been obtained through manual calibration.
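A hedged sketch of such a calibration loop is shown below; run_swmm() is a toy stand-in for a call to the SWMM engine, the pymoo library is assumed to provide the NSGA-II implementation, and the parameters and objectives are illustrative rather than those of the routine described above.

```python
# Sketch of a multi-objective SWMM calibration loop (not the routine described above).
# run_swmm() is a toy stand-in for the SWMM engine; pymoo supplies NSGA-II.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

def run_swmm(params):
    """Toy stand-in for SWMM: a real routine would run the engine with
    subcatchment-averaged parameters and return the simulated hydrograph."""
    imperv, n_mann, storage = params
    t = np.arange(100.0)
    return (imperv / n_mann) * 1e-3 * np.exp(-t / (10.0 * storage))

# "observed" hydrograph: synthetic truth plus noise, in place of gauge data
observed = run_swmm([45.0, 0.02, 2.0]) + 0.05 * np.random.default_rng(0).normal(size=100)

class SWMMCalibration(ElementwiseProblem):
    def __init__(self):
        # calibrate [imperviousness %, Manning's n, depression storage] after a
        # sensitivity analysis has pruned the insensitive parameters
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([10.0, 0.01, 0.5]),
                         xu=np.array([90.0, 0.05, 5.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        simulated = run_swmm(x)
        rmse = np.sqrt(np.mean((simulated - observed) ** 2))
        vol_err = abs(simulated.sum() - observed.sum()) / observed.sum()
        out["F"] = [rmse, vol_err]          # two competing calibration objectives

result = minimize(SWMMCalibration(), NSGA2(pop_size=40), ("n_gen", 60), seed=1)
print(result.F)                             # Pareto front of the two objectives
```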
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Young Ki; Kwon, Junyeon; Hong, Seongin
Various strategies and mechanisms have been suggested to explain Schottky contact behavior in molybdenum disulfide (MoS2) thin-film transistors (TFTs), and the issue remains under debate. As a promising route toward transparent electronics with high device performance, we have realized MoS2 TFTs with source/drain electrodes consisting of transparent bilayers of a conducting oxide over a thin film of a low-work-function metal. Intercalating a low-work-function metal layer, such as aluminum, between MoS2 and the transparent source/drain electrodes makes it possible to optimize the Schottky contact characteristics, resulting in roughly 24-fold and 3-orders-of-magnitude enhancements of the field-effect mobility and on-off current ratio, respectively, as well as a transmittance of 87.4% in the visible wavelength range.
Ahmad, Moiz; Bazalova, Magdalena; Xiang, Liangzhong
2014-01-01
The purpose of this study was to increase the sensitivity of XFCT imaging by optimizing the data acquisition geometry for reduced scatter X-rays. The placement of detectors and detector energy window were chosen to minimize scatter X-rays. We performed both theoretical calculations and Monte Carlo simulations of this optimized detector configuration on a mouse-sized phantom containing various gold concentrations. The sensitivity limits were determined for three different X-ray spectra: a monoenergetic source, a Gaussian source, and a conventional X-ray tube source. Scatter X-rays were minimized using a backscatter detector orientation (scatter direction > 110° to the primary X-ray beam). The optimized configuration simultaneously reduced the number of detectors and improved the image signal-to-noise ratio. The sensitivity of the optimized configuration was 10 µg/mL (10 pM) at 2 mGy dose with the mono-energetic source, which is an order of magnitude improvement over the unoptimized configuration (102 pM without the optimization). Similar improvements were seen with the Gaussian spectrum source and conventional X-ray tube source. The optimization improvements were predicted in the theoretical model and also demonstrated in simulations. The sensitivity of XFCT imaging can be enhanced by an order of magnitude with the data acquisition optimization, greatly enhancing the potential of this modality for future use in clinical molecular imaging. PMID:24770916
Ahmad, Moiz; Bazalova, Magdalena; Xiang, Liangzhong; Xing, Lei
2014-05-01
The purpose of this study was to increase the sensitivity of XFCT imaging by optimizing the data acquisition geometry for reduced scatter X-rays. The placement of detectors and detector energy window were chosen to minimize scatter X-rays. We performed both theoretical calculations and Monte Carlo simulations of this optimized detector configuration on a mouse-sized phantom containing various gold concentrations. The sensitivity limits were determined for three different X-ray spectra: a monoenergetic source, a Gaussian source, and a conventional X-ray tube source. Scatter X-rays were minimized using a backscatter detector orientation (scatter direction > 110° to the primary X-ray beam). The optimized configuration simultaneously reduced the number of detectors and improved the image signal-to-noise ratio. The sensitivity of the optimized configuration was 10 μg/mL (10 pM) at 2 mGy dose with the mono-energetic source, which is an order of magnitude improvement over the unoptimized configuration (102 pM without the optimization). Similar improvements were seen with the Gaussian spectrum source and conventional X-ray tube source. The optimization improvements were predicted in the theoretical model and also demonstrated in simulations. The sensitivity of XFCT imaging can be enhanced by an order of magnitude with the data acquisition optimization, greatly enhancing the potential of this modality for future use in clinical molecular imaging.
Ayvaz, M Tamer
2010-09-20
This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identified results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
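The following is a generic harmony search sketch, not the linked MODFLOW/MT3DMS model: a toy misfit function stands in for the observed-versus-simulated concentration residual, and the decision variables are a hypothetical source position and release rate.

```python
# Generic harmony search sketch (not the linked MODFLOW/MT3DMS model): the misfit
# function below is a stand-in for the transport-simulation residual between observed
# and simulated concentrations; decision variables are source x, y and release rate.
import numpy as np

rng = np.random.default_rng(2)
true_src = np.array([350.0, 120.0, 40.0])            # hidden (x, y, release rate)

def misfit(candidate):
    """Toy surrogate for the observed-vs-simulated concentration residual."""
    return float(np.sum((candidate - true_src) ** 2))

lower, upper = np.array([0.0, 0.0, 0.0]), np.array([1000.0, 500.0, 100.0])
hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 5.0, 5000   # standard HS control parameters

memory = rng.uniform(lower, upper, size=(hms, 3))     # harmony memory
scores = np.array([misfit(h) for h in memory])

for _ in range(iters):
    new = np.empty(3)
    for d in range(3):
        if rng.random() < hmcr:                        # draw from memory...
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:                     # ...with optional pitch adjustment
                new[d] += bw * (2.0 * rng.random() - 1.0)
        else:                                          # or take a fresh random value
            new[d] = rng.uniform(lower[d], upper[d])
    new = np.clip(new, lower, upper)
    worst = int(np.argmax(scores))
    if misfit(new) < scores[worst]:                    # replace the worst harmony
        memory[worst], scores[worst] = new, misfit(new)

print("estimated source:", memory[int(np.argmin(scores))])
```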
Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy
NASA Astrophysics Data System (ADS)
Rendon, A.; Beck, J. C.; Lilge, Lothar
2008-02-01
Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms that have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose-volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
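For readers unfamiliar with the Cimmino iteration, the minimal sketch below applies it to a toy light-dose feasibility problem: find nonnegative source strengths whose delivered fluence lies within a prescribed window at every dose point. The kernel and bounds are random stand-ins, not a clinical model.

```python
# Minimal Cimmino-type iteration for a light-dose feasibility problem (toy numbers,
# not a clinical kernel): find nonnegative source strengths w such that the delivered
# fluence K @ w lies between prescribed lower and upper bounds at every dose point.
import numpy as np

rng = np.random.default_rng(3)
K = rng.uniform(0.1, 1.0, size=(200, 8))     # fluence kernel: dose points x sources
lower, upper = 1.0, 1.5                      # prescribed fluence window (arbitrary units)

w = np.full(8, 0.5)                          # initial source strengths
lam = 1.0                                    # relaxation parameter
for _ in range(2000):
    dose = K @ w
    # residual of each violated constraint (zero when the bound is satisfied)
    low_viol = np.maximum(lower - dose, 0.0)
    high_viol = np.maximum(dose - upper, 0.0)
    # Cimmino step: average the projections onto all violated half-spaces
    row_norm2 = np.sum(K ** 2, axis=1)
    correction = K.T @ ((low_viol - high_viol) / row_norm2) / K.shape[0]
    w = np.maximum(w + lam * correction, 0.0)    # keep strengths nonnegative

dose = K @ w
print("fraction of points inside the window:",
      np.mean((dose >= lower) & (dose <= upper)))
```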
NASA Astrophysics Data System (ADS)
Bostater, Charles R., Jr.; Rebbman, Jan; Hall, Carlton; Provancha, Mark; Vieglais, David
1995-11-01
Measurements of temporal reflectance signatures as a function of growing season for sand live oak (Quercus geminata), myrtle oak (Q. myrtifolia), and saw palmetto (Serenoa repens) were collected during a two-year study period. Canopy-level spectral reflectance signatures, as a function of 252 channels between 368 and 1115 nm, were collected using near-nadir viewing geometry and a consistent sun illumination angle. Leaf-level reflectance measurements were made in the laboratory using a halogen light source and an environmental optics chamber with a barium sulfate reflectance coating. Spectral measurements were related to several biophysical measurements utilizing the optimal passive ambient correlation spectroscopy (OPACS) technique. Biophysical parameters included percent moisture, water potential (MPa), total chlorophyll, and total Kjeldahl nitrogen. Quantitative data processing techniques were used to determine optimal bands based on the utilization of a second-order derivative or inflection estimator. An optical cleanup procedure was then employed that computes the double inflection ratio (DIR) spectra for all possible three-band combinations normalized to the previously computed optimal bands. These results demonstrate a unique approach to the analysis of high spectral resolution reflectance signatures for estimation of several biophysical measures of plants at the leaf and canopy level from optimally selected bands or bandwidths.
Multiple Detector Optimization for Hidden Radiation Source Detection
2015-03-26
important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the ... process of hidden source detection significantly. The model focused on detection of the full-energy peak of a radiation source. Methods to optimize ... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo N-Particle (MCNP) code
Library-based illumination synthesis for critical CMOS patterning.
Yu, Jue-Chin; Yu, Peichen; Chao, Hsueh-Yung
2013-07-01
In optical microlithography, the illumination source for critical complementary metal-oxide-semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources permits us to increase pattern fidelity and relax mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full wafer optimization.
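A toy sketch of the superposition idea follows: a freeform source is expressed as a nonnegative combination of pre-computed per-pattern optimal sources, solved here with SciPy's nonnegative least squares as a stand-in for the quadratic program described in the paper. The library and target are random placeholders.

```python
# Toy sketch of library-based source synthesis (not the paper's algorithm): express a
# freeform source as a nonnegative superposition of pre-computed per-pattern optimal
# sources, solved as a nonnegative least-squares problem with SciPy.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_pix = 64 * 64                                  # pupil-plane source pixels
library = rng.random((n_pix, 6))                 # 6 per-pattern optimal sources (columns)
target = library @ np.array([0.5, 0.0, 0.3, 0.0, 0.2, 0.0])   # desired composite source

weights, residual = nnls(library, target)        # nonnegative superposition weights
synthesized = library @ weights

print("weights:", np.round(weights, 3), "residual:", residual)
```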
Generalized slow roll in the unified effective field theory of inflation
NASA Astrophysics Data System (ADS)
Motohashi, Hayato; Hu, Wayne
2017-07-01
We provide a compact and unified treatment of power spectrum observables for the effective field theory (EFT) of inflation with the complete set of operators that lead to second-order equations of motion in metric perturbations in both space and time derivatives, including Horndeski and Gleyzes-Langlois-Piazza-Vernizzi theories. We relate the EFT operators in ADM form to the four additional free functions of time in the scalar and tensor equations. Using the generalized slow-roll formalism, we show that each power spectrum can be described by an integral over a single source that is a function of its respective sound horizon. With this correspondence, existing model-independent constraints on the source function can be simply reinterpreted in the more general inflationary context. By expanding these sources around an optimized freeze-out epoch, we also provide characterizations of these spectra in terms of five slow-roll hierarchies whose leading-order forms are compact and accurate as long as EFT coefficients vary only on time scales greater than an e-fold. We also clarify the relationship between the unitary gauge observables employed in the EFT and the comoving gauge observables of the postinflationary universe.
A Functional High-Throughput Assay of Myelination in Vitro
2014-07-01
iPS cells derived from human astrocytes. These cell lines will serve as an excellent source of human cells from which our model systems may be ... image the 3D rat dorsal root ganglion (DRG) cultures with sufficiently low background as to detect electrically-evoked depolarization events, as ... stimulation and recording system specifically for this purpose. Further, we found that the limitations inherent in optimizing speed and FOV may
Alternative Energy Sources for United States Air Force Installations
1975-08-01
easy to maintain, and have a relatively long life expectancy. b. Linear Focus Parabolic trough collectors have been fabricated by two primary methods ... engineered and economically manufactured and distributed solar collectors. Development, optimization, production design, and manufacture of these units is ... and domestic hot water heating. These systems function by converting the solar energy incident on a collector surface to thermal energy in a working
NASA Astrophysics Data System (ADS)
Alnifro, M.; Taqvi, S. T.; Ahmad, M. S.; Bensaida, K.; Elkamel, A.
2017-08-01
With increasing global energy demand and the declining energy return on energy invested (EROEI) of crude oil, global energy consumption by the O&G industry has increased drastically over the past few years. This increase has also led to higher GHG emissions, resulting in adverse environmental effects. On the other hand, electricity generation from renewable resources has become relatively cost competitive with fossil-based energy sources in a much 'cleaner' way. In this study, renewable energy is integrated optimally into a refinery considering costs and CO2 emissions. Using Aspen HYSYS, a refinery in the Middle East was simulated to estimate the energy demand of the different processing units. An LP problem was formulated based on existing solar energy systems and the wind potential in the region. The multi-objective function, minimizing cost as well as CO2 emissions, was solved using GAMS to determine the optimal energy distribution from each energy source to units within the refinery. Additionally, an economic feasibility study was carried out to determine the viability of implementing renewable energy technology projects to meet the energy requirements of the refinery. Electricity generation from all renewable energy sources considered (i.e., solar PV, solar CSP and wind) was found feasible based on their low levelized cost of electricity (LCOE). The payback period for a solar CSP project, with an annual capacity of about 411 GWh and a lifetime of 30 years, was found to be 10 years. In contrast, the payback periods for solar PV and wind were calculated to be 7 and 6 years, respectively. This opens up possibilities for integrating renewables into the refining sector as well as optimizing multiple energy carrier systems within the crude oil industry.
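A toy linear-programming sketch of the allocation idea is given below; the costs, emission factors, capacities, and demand are invented for illustration and are not the values of the refinery study.

```python
# Toy LP sketch of the energy-allocation idea (illustrative numbers, not the refinery
# study): allocate electricity from solar PV, CSP, wind and the grid to meet a fixed
# refinery demand while minimizing a weighted sum of cost and CO2 emissions.
import numpy as np
from scipy.optimize import linprog

# decision variables: GWh/yr drawn from [PV, CSP, wind, grid]
cost = np.array([45.0, 60.0, 40.0, 80.0])        # $/MWh-equivalent (assumed)
co2 = np.array([0.0, 0.0, 0.0, 0.6])             # tCO2 per MWh for grid power (assumed)
alpha = 50.0                                     # $ per tCO2 weighting of emissions

demand = 600.0                                   # annual refinery demand, GWh (assumed)
capacity = np.array([250.0, 411.0, 300.0, np.inf])   # available supply per source, GWh

result = linprog(
    c=cost + alpha * co2,                        # combined cost + emissions objective
    A_eq=[[1.0, 1.0, 1.0, 1.0]], b_eq=[demand],  # supply must meet demand exactly
    bounds=[(0.0, cap) for cap in capacity],     # per-source capacity limits
    method="highs",
)
print("optimal mix (GWh):", np.round(result.x, 1))
```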
Prediction of noise constrained optimum takeoff procedures
NASA Technical Reports Server (NTRS)
Padula, S. L.
1980-01-01
An optimization method is used to predict safe, maximum-performance takeoff procedures which satisfy noise constraints at multiple observer locations. The takeoff flight is represented by two-degree-of-freedom dynamical equations with aircraft angle-of-attack and engine power setting as control functions. The engine thrust, mass flow and noise source parameters are assumed to be given functions of the engine power setting and aircraft Mach number. Effective Perceived Noise Levels at the observers are treated as functionals of the control functions. The method is demonstrated by applying it to an Advanced Supersonic Transport aircraft design. The results indicate that automated takeoff procedures (continuously varying controls) can be used to significantly reduce community and certification noise without jeopardizing safety or degrading performance.
NASA Astrophysics Data System (ADS)
Morávek, Zdenek; Rickhey, Mark; Hartmann, Matthias; Bogner, Ludwig
2009-08-01
Treatment plans for intensity-modulated proton therapy may be sensitive to several sources of uncertainty. One source is associated with approximations in the algorithms applied in the treatment planning system, and another depends on how robust the optimization is with regard to intra-fractional tissue movements. The irradiated dose distribution may deteriorate substantially from the planned one when systematic errors occur in the dose algorithm. Such errors can influence proton ranges and lead to improper modeling of the Bragg peak degradation in heterogeneous structures, of particle scatter, or of the nuclear interaction component. Additionally, systematic errors influence the optimization process, which leads to the convergence error. Uncertainties with regard to organ movements are related to the robustness of a chosen beam setup to tissue movements during irradiation. We present the inverse Monte Carlo treatment planning system IKO for protons (IKO-P), which minimizes the errors described above to a large extent. Additionally, robust planning is introduced by beam angle optimization according to an objective function penalizing paths representing strong longitudinal and transversal tissue heterogeneities. The same score function is applied to optimize spot planning through a robust choice of spots. As spots can be positioned on different energy grids or on geometric grids with different space-filling factors, a variety of grids were used to investigate the influence on the spot-weight distribution resulting from optimization. A tighter distribution of spot weights was assumed to result in a more robust plan with respect to movements. IKO-P is described in detail and demonstrated on a test case and a lung cancer case as well. Different spot-planning options and grid types are evaluated; delivering dose to the spots from all beam directions yields a superior plan quality compared with optimized beam directions. This option shows a tighter spot-weight distribution and should therefore be less sensitive to movements compared to optimized directions. However, by accepting a slight loss in plan quality, the latter choice could potentially improve robustness even further by accepting only spots from the most appropriate direction. The choice of a geometric grid instead of an energy grid for spot positioning has only a minor influence on the plan quality, at least for the investigated lung case.
Small unmanned aircraft system for remote contour mapping of a nuclear radiation field
NASA Astrophysics Data System (ADS)
Guss, Paul; McCall, Karen; Malchow, Russell; Fischer, Rick; Lukens, Michael; Adan, Mark; Park, Ki; Abbott, Roy; Howard, Michael; Wagner, Eric; Trainham, Clifford P.; Luke, Tanushree; Mukhopadhyay, Sanjoy; Oh, Paul; Brahmbhatt, Pareshkumar; Henderson, Eric; Han, Jinlu; Huang, Justin; Huang, Casey; Daniels, Jon
2017-09-01
For nuclear disasters involving radioactive contamination, small unmanned aircraft systems (sUASs) equipped with nuclear radiation detection and monitoring capability can be very important tools. Among the advantages of a sUAS are quick deployment, low-altitude flying that enhances sensitivity, wide area coverage, no radiation exposure health safety restriction, and the ability to access highly hazardous or radioactive areas. Additionally, the sUAS can be configured with the nuclear detecting sensor optimized to measure the radiation associated with the event. In this investigation, sUAS platforms were obtained for the installation of sensor payloads for radiation detection and electro-optical systems that were specifically developed for sUAS research, development, and operational testing. The sensor payloads were optimized for the contour mapping of a nuclear radiation field, which will result in a formula for low-cost sUAS platform operations with built-in formation flight control. Additional emphases of the investigation were to develop the relevant contouring algorithms; initiate the sUAS comprehensive testing using the Unmanned Systems, Inc. (USI) Sandstorm platforms and other acquired platforms; and both acquire and optimize the sensors for detection and localization. We demonstrated contour mapping through simulation and validated waypoint detection. We mounted a detector on a sUAS and operated it initially in the counts per second (cps) mode to perform field and flight tests to demonstrate that the equipment was functioning as designed. We performed ground truth measurements to determine the response of the detector as a function of source-to-detector distance. Operation of the radiation detector was tested using different unshielded sources.
A study of optimization techniques in HDR brachytherapy for the prostate
NASA Astrophysics Data System (ADS)
Pokharel, Ghana Shyam
Several studies carried out thus far favor dose escalation to the prostate gland for better local control of the disease. However, the optimal way of delivering higher doses of radiation therapy to the prostate without harming neighboring critical structures is still debated. In this study, we proposed that real-time high-dose-rate (HDR) brachytherapy with highly efficient and effective optimization could be an alternative means of delivering such higher doses precisely. This delivery approach eliminates critical issues such as the treatment setup uncertainties and target localization of external beam radiation therapy. Likewise, dosimetry in HDR brachytherapy is not influenced by organ edema and potential source migration as in permanent interstitial implants. Moreover, recent reports of radiobiological parameters further strengthen the argument for using hypofractionated HDR brachytherapy in the management of prostate cancer. First, we studied the essential features and requirements of a real-time HDR brachytherapy treatment planning system. Automated catheter reconstruction with fast editing tools, a fast yet accurate dose engine, and robust, fast optimization and evaluation engines are some of the essential requirements for such procedures. Moreover, in most of the cases we performed, treatment plan optimization took a significant fraction of the overall procedure time. Making treatment plan optimization automatic or semi-automatic with sufficient speed and accuracy was therefore the goal of the remaining part of the project. Second, we studied the role of the optimization function and constraints in the overall quality of the optimized plan. We studied a gradient-based deterministic algorithm with dose-volume histogram (DVH) and more conventional variance-based objective functions for optimization. In this optimization strategy, the relative weight of a particular objective in the aggregate objective function signifies its importance with respect to the other objectives. Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Third, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The strategy was to create several Pareto-optimal solutions by scanning the clinically relevant part of the Pareto front. This approach was adopted to decouple optimization from decision making, so that the user could select the final solution from a pool of alternatives based on his or her clinical goals. The overall quality of the treatment plan improved with this approach compared to the traditional class-solution approach. In fact, the final optimized plan selected by the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. A simulated annealing algorithm was used to find the optimal catheter distribution, and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in a clinically reasonable time.
Because this algorithm can create clinically acceptable plans automatically within a clinically reasonable time, it is appealing for real-time procedures. Next, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR prostate brachytherapy. With properly tuned algorithm-specific parameters, it was able to create clinically acceptable plans within a clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, fewer than generally considered optimal for such algorithms, in order to keep the time window suitable for real-time procedures. Further study under improved conditions is therefore required to realize the full potential of the algorithm.
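As a hedged illustration of the DVH-type objective discussed above, the sketch below optimizes dwell times by projected gradient descent against under-dose and over-dose penalties; the dose kernel is random and the thresholds are arbitrary, standing in for a TG-43-style dose calculation and a clinical prescription.

```python
# Minimal sketch of DVH-style dwell-time optimization (random kernel stands in for the
# TG-43 dose engine; thresholds and weights are illustrative, not a clinical protocol).
import numpy as np

rng = np.random.default_rng(5)
n_target, n_oar, n_dwell = 400, 150, 60
K_t = rng.uniform(0.5, 2.0, (n_target, n_dwell))   # dose-rate kernel to target points
K_o = rng.uniform(0.0, 0.5, (n_oar, n_dwell))      # dose-rate kernel to organ-at-risk points

d_presc, d_oar_max = 100.0, 75.0                   # prescription and OAR limit (arbitrary)
t = np.full(n_dwell, 1.0)                          # dwell times (to be optimized)

def objective_grad(t):
    under = np.maximum(d_presc - K_t @ t, 0.0)     # target points below prescription
    over = np.maximum(K_o @ t - d_oar_max, 0.0)    # OAR points above their limit
    obj = np.mean(under ** 2) + np.mean(over ** 2)
    grad = (-2.0 * K_t.T @ under / n_target) + (2.0 * K_o.T @ over / n_oar)
    return obj, grad

for _ in range(500):                               # projected gradient descent
    obj, grad = objective_grad(t)
    t = np.maximum(t - 1e-4 * grad, 0.0)           # dwell times must stay nonnegative

print("objective:", round(obj, 3), "target coverage:",
      float(np.mean(K_t @ t >= d_presc)))
```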
Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction.
Xu, Yonghui; Min, Huaqing; Wu, Qingyao; Song, Hengjie; Ye, Bicui
2017-02-06
Multi-instance (MI) learning has been proven effective for genome-wide protein function prediction problems in which each training example is associated with multiple instances. Many studies in this literature have attempted to find an appropriate multi-instance learning (MIL) method for genome-wide protein function prediction under the usual assumption that the underlying distribution of the testing data (target domain, TD) is the same as that of the training data (source domain, SD). However, this assumption may be violated in practice. To tackle this problem, in this paper we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution toward the target domain distribution by utilizing bag weights. Then, we construct a distance metric learning method with the reweighted bags. Finally, we develop an alternating optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.
Zhou, Yangzhong; Cattley, Richard T.; Cario, Clinton L.; Bai, Qing; Burton, Edward A.
2014-01-01
This article describes a method to quantify the movements of larval zebrafish in multi-well plates, using the open-source MATLAB® applications LSRtrack and LSRanalyze. The protocol comprises four stages: generation of high-quality, flatly-illuminated video recordings with exposure settings that facilitate object recognition; analysis of the resulting recordings using tools provided in LSRtrack to optimize tracking accuracy and motion detection; analysis of tracking data using LSRanalyze or custom MATLAB® scripts; implementation of validation controls. The method is reliable, automated and flexible, requires less than one hour of hands-on work for completion once optimized, and shows excellent signal:noise characteristics. The resulting data can be analyzed to determine: positional preference; displacement, velocity and acceleration; duration and frequency of movement events and rest periods. This approach is widely applicable to analyze spontaneous or stimulus-evoked zebrafish larval neurobehavioral phenotypes resulting from a broad array of genetic and environmental manipulations, in a multi-well plate format suitable for high-throughput applications. PMID:24901738
Zhou, Yangzhong; Cattley, Richard T; Cario, Clinton L; Bai, Qing; Burton, Edward A
2014-07-01
This article describes a method to quantify the movements of larval zebrafish in multiwell plates, using the open-source MATLAB applications LSRtrack and LSRanalyze. The protocol comprises four stages: generation of high-quality, flatly illuminated video recordings with exposure settings that facilitate object recognition; analysis of the resulting recordings using tools provided in LSRtrack to optimize tracking accuracy and motion detection; analysis of tracking data using LSRanalyze or custom MATLAB scripts; and implementation of validation controls. The method is reliable, automated and flexible, requires <1 h of hands-on work for completion once optimized and shows excellent signal:noise characteristics. The resulting data can be analyzed to determine the following: positional preference; displacement, velocity and acceleration; and duration and frequency of movement events and rest periods. This approach is widely applicable to the analysis of spontaneous or stimulus-evoked zebrafish larval neurobehavioral phenotypes resulting from a broad array of genetic and environmental manipulations, in a multiwell plate format suitable for high-throughput applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polack, F.; Silly, M.; Chauvet, C.
A new insertion device beamline is now operational on straight section 8 at the SOLEIL synchrotron radiation source in France. The beamline and the experimental station were developed to optimize the study of the dynamics of electronic and magnetic properties of materials. Here we present the main technical characteristics of the installation and the general principles behind them. The source is composed of two APPLE II type insertion devices. The monochromator, with plane gratings and spherical mirrors, operates in the energy range 40-1500 eV. It is equipped with VLS and VGD gratings to allow the user to optimize either flux or higher-harmonic rejection. The observed resonance structures measured in the gas phase enable us to determine the available energy resolution: a resolving power higher than 10000 is obtained at the Ar 2p, N 1s and Ne K-edges when using all the optical elements at full aperture. The total flux as a function of the measured photon energy and the characterization of the focal spot size complete the beamline characterization.
Optimization of Compton Source Performance through Electron Beam Shaping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyzhenkov, Alexander; Yampolsky, Nikolai
2016-09-26
We investigate a novel scheme for significantly increasing the brightness of x-ray light sources based on inverse Compton scattering (ICS) - scattering laser pulses off relativistic electron beams. The brightness of ICS sources is limited by the electron beam quality, since electrons traveling at different angles, and/or having different energies, produce photons with different energies. Therefore, the spectral brightness of the source is defined by the 6d electron phase space shape and size, as well as the laser beam parameters. The peak brightness of the ICS source can therefore be maximized if the electron phase space is transformed so that all electrons scatter x-ray photons of the same frequency in the same direction, arriving at the observer at the same time. We describe the x-ray photon beam quality through the Wigner function (6d photon phase space distribution) and derive it for the ICS source when the electron and laser rms matrices are arbitrary.
Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate
NASA Astrophysics Data System (ADS)
Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.
2008-08-01
The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
NASA Astrophysics Data System (ADS)
Reiser, Fabienne; Schmelzbach, Cedric; Maurer, Hansruedi; Greenhalgh, Stewart; Hellwig, Olaf
2017-04-01
A primary focus of geothermal seismic imaging is to map dipping faults and fracture zones that control rock permeability and fluid flow. Vertical seismic profiling (VSP) is therefore a most valuable means to image the immediate surroundings of an existing borehole to guide, for example, the placing of new boreholes to optimize production from known faults and fractures. We simulated 2D and 3D acoustic synthetic seismic data and processed them through to pre-stack depth migration to optimize VSP survey layouts for mapping moderately to steeply dipping fracture zones within possible basement geothermal reservoirs. Our VSP survey optimization procedure for sequentially selecting source locations to define the area where source points are best located for optimal imaging makes use of a cross-correlation statistic, by which a subset of migrated shot gathers is compared with a target or reference image obtained from a comprehensive set of source gathers. In geothermal exploration at established sites, it is reasonable to assume that sufficient a priori information is available to construct such a target image. We generally obtained good results with a relatively small number of optimally chosen source positions distributed over an ideal source location area for different fracture zone scenarios (different dips, azimuths, and distances from the surveying borehole). Adding further sources outside the optimal source area did not necessarily improve the results, but rather resulted in image distortions. It was found that fracture zones located at borehole-receiver depths and laterally offset from the borehole by 300 m can be imaged reliably for a range of different dips, but more source positions and larger offsets between sources and the borehole are required for imaging steeply dipping interfaces. When such features cross-cut the borehole, they are particularly difficult to image. For fracture zones with different azimuths, 3D effects are observed. Far-offset source positions contribute less to image quality as fracture zone azimuth increases. Our optimization methodology is best suited for designing future field surveys with a favorable benefit-cost ratio in areas with significant a priori knowledge. Moreover, our optimization workflow is valuable for selecting useful subsets of acquired data for optimal target-oriented processing.
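The following sketch illustrates the sequential selection idea with synthetic stand-ins (it is not the seismic workflow itself): candidate shots are added greedily according to how much each improves the normalized cross-correlation of the stacked image with a reference target image.

```python
# Sketch of the sequential source-selection idea (synthetic stand-ins, not the seismic
# workflow): greedily add the candidate shot whose migrated image most improves the
# normalized cross-correlation of the stacked image with a reference target image.
import numpy as np

rng = np.random.default_rng(6)
n_candidates, shape = 40, (120, 120)
target = rng.normal(size=shape)                            # reference image (stand-in)
# each candidate shot contributes a noisy partial view of the target
shots = [0.2 * target + rng.normal(scale=1.0, size=shape) for _ in range(n_candidates)]

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

selected, stack = [], np.zeros(shape)
for _ in range(10):                                        # pick 10 source positions
    gains = [(-np.inf if i in selected else ncc(stack + s, target))
             for i, s in enumerate(shots)]
    best = int(np.argmax(gains))
    selected.append(best)
    stack += shots[best]

print("selected shot indices:", selected)
print("final correlation with target:", round(ncc(stack, target), 3))
```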
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom-made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm), while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as an MR-compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall, the PET subsystem exhibits excellent resolution characteristics, mainly because the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
Traumatic brain injury rehabilitation: case management and insurance-related issues.
Pressman, Helaine Tobey
2007-02-01
Traumatic brain injury (TBI) cases are medically complex, involving the physical, cognitive, behavioral, social, and emotional aspects of the survivor. Often catastrophic, these cases require substantial financial resources not only for the patient's survival but to achieve the optimal outcome of a functional life with return to family and work responsibilities for the long term. TBI cases involve the injured person, the family, medical professionals such as treating physicians, therapists, attorneys, the employer, community resources, and the funding source, usually an insurance company. Case management is required to facilitate achievement of an optimal result by collaborating with all parties involved, assessing priorities and options, coordinating services, and educating and communicating with all concerned.
Computing the Partition Function for Kinetically Trapped RNA Secondary Structures
Lorenz, William A.; Clote, Peter
2011-01-01
An RNA secondary structure is locally optimal if there is no lower energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in polynomial time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far smaller than the total number of structures - indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures, (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA, and (3) the (modified) maximum expected accuracy structure, computed by taking into account base pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in our study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy. Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
Optimization of Typological Requirements for Low-Cost Detached Houses
NASA Astrophysics Data System (ADS)
Kuráň, Jozef
2017-09-01
The presented paper deals with an analysis of the legislative, hygienic, functional and operational requirements for the design of detached houses and individual dwellings in terms of typological requirements. The article also presents a sociological survey about the preferences and subjective requirements of relevant public group segments in terms of living in a detached house or an individual dwelling. The aim of the paper is to define the possibilities for the optimization of typological requirements. The optimization methods are based on principles already applied to contemporary detached house preferences and trends. The main idea is to reduce the amount of floor space, thus lowering construction and operating costs. The goal is to design an optimized floor plan, while preserving the hygienic criteria for individual residential dwellings. By applying optimization methods, a so-called rationalized and conditioned floor plan results in an individual dwelling floor plan design that can be compared to a reference model with an accurate quantification comparison. The significant sources of research are the legislative and normative requirements in the field of house construction in Slovakia, the Czech Republic and abroad.
Li, Ruirui; Gu, Pengfei; Fan, Xiangyu; Shen, Junyu; Wu, Yulian; Huang, Lixuan; Li, Qiang
2018-03-21
A polyhydroxyalkanoate (PHA)-producing strain was isolated from propylene oxide (PO) saponification wastewater activated sludge and was identified as Brevundimonas vesicularis UJN1 through 16S rDNA sequencing and Biolog microbiological identification. Single-factor and response surface methodology experiments were used to optimize the culture medium and conditions. The optimal C/N ratio was 100/1.04, and the optimal carbon and nitrogen sources were sucrose (10 g/L) and NH4Cl (0.104 g/L), respectively. The optimal culture conditions consisted of an initial pH of 6.7 and an incubation temperature of 33.4 °C for 48 h, with 15% inoculum and 100 mL medium at an agitation rate of 180 rpm. The PHA concentration reached 34.1% of the cell dry weight and increased threefold compared with that before optimization. The only previous report of PHA production by Brevundimonas vesicularis showed a PHA conversion rate of 1.67% with glucose as the optimal carbon source. In our research, the conversion rate of PHAs with sucrose as the optimal carbon source was 3.05%, and PHA production using sucrose as the carbon source was much cheaper than that using glucose as the carbon source.
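The response-surface step mentioned above boils down to fitting a second-order polynomial in the factors and locating its stationary point. A minimal sketch with synthetic, purely illustrative design points (not the study's measurements) follows.

```python
# Minimal sketch of the response-surface step: fit a second-order polynomial
# in two factors (here temperature and pH, values purely illustrative) to
# yield data and locate the stationary (optimal) point. Not the study's data.
import numpy as np

# Illustrative design points: (temperature degC, pH, PHA yield % of cell dry weight)
X = np.array([[30, 6.0], [30, 7.5], [37, 6.0], [37, 7.5], [33.5, 6.75],
              [33.5, 6.0], [33.5, 7.5], [30, 6.75], [37, 6.75]], float)
y = np.array([25.0, 24.0, 26.0, 23.5, 34.0, 30.0, 29.0, 28.0, 27.0])

t, p = X[:, 0], X[:, 1]
# Design matrix for y = b0 + b1*t + b2*p + b3*t^2 + b4*p^2 + b5*t*p
A = np.column_stack([np.ones_like(t), t, p, t**2, p**2, t * p])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point: solve grad(y) = 0 for the fitted quadratic surface.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = np.array([b[1], b[2]])
t_opt, p_opt = np.linalg.solve(H, -g)
print(f"fitted optimum near T = {t_opt:.1f} degC, pH = {p_opt:.2f}")
```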
Regulation of the cellular and physiological effects of glutamine.
Chwals, Walter J
2004-10-01
Glutamine is the most abundant amino acid in humans and possesses many functions in the body. It is the major transporter of amino-nitrogen between cells and an important fuel source for rapidly dividing cells such as cells of the immune and gastrointestinal systems. It is important in the synthesis of nucleic acids, glutathione, citrulline, arginine, gamma aminobutyric acid, and glucose. It is important for growth, gastrointestinal integrity, acid-base homeostasis, and optimal immune function. The regulation of glutamine levels in cells via glutaminase and glutamine synthetase is discussed. The cellular and physiologic effects of glutamine on the central nervous system and gastrointestinal function, during metabolic support, and following tissue injury and critical illness are also discussed.
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. In addition, we adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffers less than that of decoy-state QKD using WCS.
Vazquez, Alejandro L; H Sibley, Margaret; Campez, Mileini
2018-06-01
The DSM-5 requires clinicians to link ADHD symptoms to clinically meaningful impairments in daily life functioning. Measuring impairment during ADHD assessments may be particularly challenging in adolescence, when ADHD is often not the sole source of a youth's difficulties. Existing impairment rating scales are criticized for not specifying ADHD as the source of impairment in their instructions, leading to potential problems with rating scale specificity. The current study utilized a within subjects design (N = 107) to compare parent report of impairment on two versions of a global impairment measure: one that specified ADHD as the source of impairment (Impairment Rating Scale-ADHD) and a standard version that did not (Impairment Rating Scale). On the standard family impairment item, parents endorsed greater impairment as compared to the IRS-ADHD. This finding was particularly pronounced when parents reported high levels of parenting stress. More severe ADHD symptoms were associated with greater concordance between the two versions. Findings indicate that adolescent family related impairments reported during ADHD assessments may be due to sources other than ADHD symptoms, such as developmental maladjustment. To prevent false positive diagnoses, symptom-specific wording may optimize impairment measures when assessing family functioning in diagnostic assessments for adolescents with ADHD. Copyright © 2018 Elsevier B.V. All rights reserved.
Personalizing Protein Nourishment
DALLAS, DAVID C.; SANCTUARY, MEGAN R.; QU, YUNYAO; KHAJAVI, SHABNAM HAGHIGHAT; VAN ZANDT, ALEXANDRIA E.; DYANDRA, MELISSA; FRESE, STEVEN A.; BARILE, DANIELA; GERMAN, J. BRUCE
2016-01-01
Proteins are not equally digestible—their proteolytic susceptibility varies by their source and processing method. Incomplete digestion increases colonic microbial protein fermentation (putrefaction), which produces toxic metabolites that can induce inflammation in vitro and have been associated with inflammation in vivo. Individual humans differ in protein digestive capacity based on phenotypes, particularly disease states. To avoid putrefaction-induced intestinal inflammation, protein sources and processing methods must be tailored to the consumer’s digestive capacity. This review explores how food processing techniques alter protein digestibility and examines how physiological conditions alter digestive capacity. Possible solutions to improving digestive function or matching low digestive capacity with more digestible protein sources are explored. Beyond the ileal digestibility measurements of protein digestibility, less invasive, quicker and cheaper techniques for monitoring the extent of protein digestion and fermentation are needed to personalize protein nourishment. Biomarkers of protein digestive capacity and efficiency can be identified with the toolsets of peptidomics, metabolomics, microbial sequencing and multiplexed protein analysis of fecal and urine samples. By monitoring individual protein digestive function, the protein component of diets can be tailored via protein source and processing selection to match individual needs to minimize colonic putrefaction and, thus, optimize gut health. PMID:26713355
Pursiainen, S; Vorwerk, J; Wolters, C H
2016-12-21
The goal of this study is to develop focal, accurate and robust finite element method (FEM) based approaches which can predict the electric potential on the surface of the computational domain given its structure and internal primary source current distribution. While conducting an EEG evaluation, the placement of source currents to the geometrically complex grey matter compartment is a challenging but necessary task to avoid forward errors attributable to tissue conductivity jumps. Here, this task is approached via a mathematically rigorous formulation, in which the current field is modeled via divergence conforming H(div) basis functions. Both linear and quadratic functions are used while the potential field is discretized via the standard linear Lagrangian (nodal) basis. The resulting model includes dipolar sources which are interpolated into a random set of positions and orientations utilizing two alternative approaches: the position based optimization (PBO) and the mean position/orientation (MPO) method. These results demonstrate that the present dipolar approach can reach or even surpass, at least in some respects, the accuracy of two classical reference methods, the partial integration (PI) and St. Venant (SV) approach which utilize monopolar loads instead of dipolar currents.
Solutions to inverse plume in a crosswind problem using a predictor - corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind has led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions with the corrections from the plume strength are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
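A toy version of the predictor-corrector idea is sketched below, with a simple Gaussian plume standing in for the numerical simulations: two forward runs at known strengths give per-sensor interpolation functions, the predictor inverts them for the strength, and the corrector re-estimates the location. All geometry and values are illustrative, and the iteration here is only a loose analogue of the authors' method.

```python
# Hedged sketch: two forward "simulations" at known source strengths give
# per-sensor interpolation functions; the predictor inverts them for the
# strength, the corrector then re-estimates the location, and the two steps
# are iterated. A Gaussian plume stands in for the paper's simulations.
import numpy as np

def plume(x_sensor, x_src, Q, sigma=2.0):
    return Q * np.exp(-((x_sensor - x_src) ** 2) / (2 * sigma ** 2))

sensors = np.array([4.0, 7.0, 10.0])
measured = plume(sensors, x_src=5.5, Q=3.0)   # synthetic "samples"

Q1, Q2 = 1.0, 2.0          # strengths of the two forward simulations
x_est, Q_est = 5.0, 1.0    # initial guesses
candidates = np.linspace(0.0, 12.0, 241)

for _ in range(5):
    # Two simulations at the current location estimate -> interpolation fns.
    c1, c2 = plume(sensors, x_est, Q1), plume(sensors, x_est, Q2)
    slope = (c2 - c1) / (Q2 - Q1)
    # Predictor: concentrations are linear in Q, invert per sensor and average.
    Q_est = float(np.mean((measured - c1) / slope + Q1))
    # Corrector: with Q fixed, choose the location minimizing the residual.
    resid = [np.sum((plume(sensors, xs, Q_est) - measured) ** 2)
             for xs in candidates]
    x_est = float(candidates[int(np.argmin(resid))])

print(f"estimated strength {Q_est:.2f} at location {x_est:.2f}")
```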
NASA Astrophysics Data System (ADS)
Bartlett, M. K.; Detto, M.; Pacala, S. W.
2017-12-01
The accurate prediction of tropical forest carbon fluxes is key to forecasting global climate, but forest responses to projected increases in CO2 and drought are highly uncertain. Here we present a dynamic optimization that derives the trajectory of stomatal conductance (gs) during drought, a key source of model uncertainty, from plant and soil water relations and the carbon economy of the plant hydraulic system. This optimization scheme is novel in two ways. First, by accounting for the ability of capacitance (i.e., the release of water from plant storage tissue; C) to buffer evaporative water loss and maintain gs during drought, this optimization captures both drought tolerant and avoidant hydraulic strategies. Second, by determining the optimal trajectory of plant and soil water potentials, this optimization quantifies species' impacts on the water available to competing plants. These advances allowed us to apply this optimization across the range of physiology trait values observed in tropical species to evaluate shifts in the competitively optimal trait values, or evolutionarily stable hydraulic strategy (ESS), under increased drought and CO2. Increasing the length of the dry season shifted the ESS towards more drought tolerant, rather than avoidant, trait values, and these shifts were larger for longer individual drought periods (i.e., more consecutive days without rainfall), even if the total time spent in drought was the same. Concurrently doubling the CO2 level reduced the magnitude of these shifts and slightly favored drought avoidant strategies under wet conditions. Overall, these analyses predicted that short, frequent droughts would allow elevated CO2 to shift the functional composition in tropical forests towards more drought avoidant species, while infrequent but long drought periods would shift the ESS to more drought tolerant trait values, despite increased CO2. Overall, these analyses quantified the impact of physiology traits on plant performance and competitive ability, and provide a mechanistic, trait-based approach to predict shifts in the functional composition of tropical forests under projected climatic conditions.
Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards
Ackermann, John F.; Landy, Michael S.
2014-01-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wcopt). Subjects’ criteria were not close to optimal relative to wcopt. The slope of SU (c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
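The comparison in the abstract can be illustrated with a small calculation: find the criterion that maximizes objective expected gain, and the criterion that maximizes gain computed from prospect-theory-weighted probabilities and values. The weighting and value parameters below are common textbook values, not the fitted subject parameters from the study, and the task is reduced to a simple two-alternative decision variable.

```python
# Sketch of the comparison in the abstract: the expected-gain-optimal
# criterion versus a criterion based on subjectively weighted probabilities
# and values (Prospect Theory). Parameters are illustrative, not fitted.
import numpy as np
from scipy.stats import norm

d_prime = 2.0                     # detectability
p_A, r_A = 0.75, 1.0              # probability / reward, "target at A"
p_B, r_B = 0.25, 1.0

def w(p, gamma=0.61):             # Tversky-Kahneman probability weighting
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def v(x, alpha=0.88):             # value function for gains
    return x**alpha

def gain(c, pA, rA, pB, rB):
    """Expected gain when responding "A" if the decision variable exceeds c."""
    hit = 1 - norm.cdf(c, loc=+d_prime / 2)    # correct "A" responses
    crj = norm.cdf(c, loc=-d_prime / 2)        # correct "B" responses
    return pA * rA * hit + pB * rB * crj

c_grid = np.linspace(-3, 3, 601)
c_opt = c_grid[np.argmax([gain(c, p_A, r_A, p_B, r_B) for c in c_grid])]
c_subj = c_grid[np.argmax([gain(c, w(p_A), v(r_A), w(p_B), v(r_B)) for c in c_grid])]
print(f"objective optimum c = {c_opt:+.2f}, subjectively weighted c = {c_subj:+.2f}")
```

With these illustrative parameters the subjectively weighted criterion lands closer to the neutral criterion than the objective optimum, which is the conservatism pattern described above.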
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing the DNAPL source in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search strategy, and determines optimal sampling locations that are selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume is updated using a Kalman filter. The updated plume is then compared to the concentration fields that emanate from each individual potential source using a fuzzy set technique. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. Considering our site case, some specific modifications and work have been done as follows. Random fields of hydraulic conductivity (K) are generated after fitting the measured K data to the variogram model. The locations of potential sources that are given initial weights are targeted based on the field survey, with multiple potential source locations around the workshops and wastewater basin. Considering the short history (1999-2010) of manufacturing the optical brightener PF at the site, and the existing sampling data, a preliminary source strength is then estimated, which will later be optimized by the simplex method or a genetic algorithm (GA). The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference [1] Dokou, Zoi, and George F. Pinder. "Optimal search strategy for the definition of a DNAPL source." Journal of Hydrology 376.3 (2009): 542-556. Acknowledgement: Funding support from the National Natural Science Foundation of China (No. 41030746, 40872155) and the DuPont Company is appreciated.
NASA Astrophysics Data System (ADS)
Chang, En-Chih
2018-02-01
This paper presents a high-performance AC power source for precision material machining (PMM) by applying robust stability control technology. The proposed technology combines the benefits of a finite-time convergent sliding function (FTCSF) and the firefly optimization algorithm (FOA). The FTCSF maintains the robustness of the conventional sliding mode and simultaneously speeds up the convergence of the system state. Unfortunately, when a highly nonlinear loading is applied, chatter occurs. The chatter results in high total harmonic distortion (THD) in the output voltage of the AC power source, and even degrades the stability of the PMM. The FOA is therefore used to remove the chatter, while the FTCSF still preserves a finite system-state convergence time. By combining the FTCSF with the FOA, the AC power source of the PMM can yield good steady-state and transient performance. Experimental results are presented in support of the proposed technology.
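For readers unfamiliar with the firefly optimization algorithm, a minimal generic implementation on a toy objective is sketched below. It is not the authors' controller-tuning setup, and the parameters are common textbook defaults rather than the values used in the paper.

```python
# Minimal firefly optimization algorithm (FOA) sketch on a toy objective.
# Parameters (alpha, beta0, gamma) are common textbook defaults.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                      # toy cost: sphere function
    return np.sum(x**2, axis=-1)

n, dim, iters = 20, 2, 100
alpha, beta0, gamma = 0.2, 1.0, 1.0
x = rng.uniform(-5, 5, size=(n, dim))  # firefly positions
f = objective(x)

for _ in range(iters):
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:            # move firefly i toward brighter firefly j
                r2 = np.sum((x[i] - x[j])**2)
                beta = beta0 * np.exp(-gamma * r2)
                x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                f[i] = objective(x[i])
    alpha *= 0.97                      # gradually reduce the random step

best = x[np.argmin(f)]
print("best solution:", best, "cost:", objective(best))
```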
A new Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin
2017-04-01
Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results has become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.
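The core idea of Bayesian posterior sampling for a source parameter can be illustrated with a self-contained Metropolis sampler on a toy forward model. This is only a sketch of the general approach; BEAT itself builds its forward models on pyrocko and samples with pymc3, neither of which is reproduced here, and the "forward model" below is invented for illustration.

```python
# Self-contained illustration of Bayesian posterior sampling for a single
# source parameter from noisy data, using a hand-written Metropolis sampler.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations": a signal that decays with distance from a source
# of unknown strength s_true (toy forward model, not an elastic Green's function).
x = np.linspace(1.0, 10.0, 20)
s_true, sigma = 4.0, 0.05
def forward(s):
    return s / (1.0 + x**2)
obs = forward(s_true) + rng.normal(0, sigma, x.size)

def log_posterior(s):
    if s <= 0 or s > 50:                        # uniform prior on (0, 50]
        return -np.inf
    resid = obs - forward(s)
    return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian likelihood

samples, s = [], 1.0
logp = log_posterior(s)
for _ in range(20000):
    s_prop = s + rng.normal(0, 0.1)
    logp_prop = log_posterior(s_prop)
    if np.log(rng.random()) < logp_prop - logp:  # Metropolis accept/reject
        s, logp = s_prop, logp_prop
    samples.append(s)

post = np.array(samples[5000:])                  # drop burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```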
NASA Astrophysics Data System (ADS)
Osman, Ayat E.
Energy use in commercial buildings constitutes a major proportion of the energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of what is the most beneficial and cost-effective energy source(s) that can be used to meet the energy demands of the building, optimization techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Due to the significant environmental impacts that can result from meeting the energy demands in buildings, building design should incorporate environmental criteria into the decision-making process. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems in order to meet the electrical, heating, and cooling demand by considering the potential life cycle environmental impact that might result from meeting those demands as well as the economic implications. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions that are used in the formulation of the problems include: (1) Minimizing life cycle primary energy consumption, (2) Minimizing global warming potential, (3) Minimizing tropospheric ozone precursor potential, (4) Minimizing acidification potential, (5) Minimizing NOx, SO2 and CO2, and (6) Minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long term planning and operational analysis in buildings by analyzing the hourly energy use of a building during a day and (b) design and quick analysis of building operation based on periodic analysis of energy use of a building in a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emission or primary energy usage value or inversely the minimum environmental indicator and primary energy usage value that can be achieved and the cost required to achieve that value.
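The cost/emissions trade-off behind the Pareto-optimal frontier can be illustrated with a toy dispatch problem: each hour, demand is met either by a cogeneration unit or by grid purchase, and all dispatch patterns are enumerated. The prices and emission factors below are illustrative placeholders, not values from the study, and enumeration stands in for the MILP solver.

```python
# Toy illustration of the cost/emissions trade-off: enumerate all 2^H hourly
# dispatch patterns (cogeneration vs grid) and keep the Pareto frontier.
from itertools import product

demand_kwh = [80, 120, 150, 100]              # four representative hours
grid_price = [0.08, 0.16, 0.20, 0.10]          # $/kWh, time-of-use (illustrative)
chp_price = 0.12                               # $/kWh (illustrative)
grid_co2, chp_co2 = 0.25, 0.45                 # kg CO2/kWh (illustrative)

points = []
for pattern in product(("chp", "grid"), repeat=len(demand_kwh)):
    c = sum((chp_price if s == "chp" else grid_price[h]) * demand_kwh[h]
            for h, s in enumerate(pattern))
    e = sum((chp_co2 if s == "chp" else grid_co2) * demand_kwh[h]
            for h, s in enumerate(pattern))
    points.append((round(c, 2), round(e, 1), pattern))

# Pareto frontier: keep dispatch patterns not dominated in both objectives.
frontier = [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
for c, e, pattern in sorted(frontier):
    print(f"cost ${c:6.2f}   CO2 {e:6.1f} kg   dispatch {pattern}")
```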
PlasmaPy: beginning a community developed Python package for plasma physics
NASA Astrophysics Data System (ADS)
Murphy, Nicholas A.; Huang, Yi-Min; PlasmaPy Collaboration
2016-10-01
In recent years, researchers in several disciplines have collaborated on community-developed open source Python packages such as Astropy, SunPy, and SpacePy. These packages provide core functionality, common frameworks for data analysis and visualization, and educational tools. We propose that our community begins the development of PlasmaPy: a new open source core Python package for plasma physics. PlasmaPy could include commonly used functions in plasma physics, easy-to-use plasma simulation codes, Grad-Shafranov solvers, eigenmode solvers, and tools to analyze both simulations and experiments. The development will include modern programming practices such as version control, embedding documentation in the code, unit tests, and avoiding premature optimization. We will describe early code development on PlasmaPy, and discuss plans moving forward. The success of PlasmaPy depends on active community involvement and a welcoming and inclusive environment, so anyone interested in joining this collaboration should contact the authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutter, John P., E-mail: john.sutter@diamond.ac.uk; Chater, Philip A.; Hillman, Michael R.
2016-07-27
The I15-1 beamline, the new side station to I15 at the Diamond Light Source, will be dedicated to the collection of atomic pair distribution function data. A Laue monochromator will be used consisting of three silicon crystals diffracting X-rays at a common Bragg angle of 2.83°. The crystals use the (1 1 1), (2 2 0), and (3 1 1) planes to select 40, 65, and 76 keV X-rays, respectively, and will be bent meridionally to horizontally focus the selected X-rays onto the sample. All crystals will be cut to the same optimized asymmetry angle in order to eliminate image broadening from the crystal thickness. Finite element calculations show that the thermal distortion of the crystals will affect the image size and bandpass.
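The reflection/energy pairing quoted above can be checked with Bragg's law, using the standard silicon lattice constant; the short calculation below reproduces the quoted energies to within rounding.

```python
# Quick Bragg's-law check of the reflection/energy pairing in the abstract:
# for silicon (a = 5.431 Angstrom) at a common Bragg angle of 2.83 degrees,
# E = hc / (2 * d_hkl * sin(theta)).
import math

a_si = 5.431            # Si lattice constant, Angstrom
hc = 12.398             # keV * Angstrom
theta = math.radians(2.83)

for hkl in [(1, 1, 1), (2, 2, 0), (3, 1, 1)]:
    d = a_si / math.sqrt(sum(i * i for i in hkl))
    energy = hc / (2 * d * math.sin(theta))
    print(f"Si{hkl}: d = {d:.3f} A, E = {energy:.1f} keV")
# Output is close to the 40, 65 and 76 keV quoted for the three crystals.
```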
OsiriX: an open-source software for navigating in multidimensional DICOM images.
Rosset, Antoine; Spadola, Luca; Ratib, Osman
2004-09-01
A multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard, which is widely used for computer games and takes advantage of any available hardware graphic accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device, widely used in the video and movie industry, was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.
NASA Astrophysics Data System (ADS)
Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos
2017-12-01
An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first, only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step, the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria - by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
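A toy sketch of the two-step segregated approach is given below, with a Gaussian plume standing in for the ADREA-HF CFD fields: candidate locations are first ranked by the correlation between measured and unit-rate predicted concentrations, and the rate then follows in closed form from the quadratic cost. All geometry and values are invented for illustration.

```python
# Toy sketch of the two-step segregated estimation: (1) rank candidate source
# locations by the correlation between measured sensor concentrations and the
# pattern predicted for a unit emission rate, then (2) recover the rate from
# the quadratic cost, q = (c_pred . c_meas) / (c_pred . c_pred) for a linear
# forward model. A Gaussian plume stands in for the CFD fields.
import numpy as np

def unit_plume(sensors, src, sigma=1.5):
    d = np.linalg.norm(sensors - src, axis=1)
    return np.exp(-d**2 / (2 * sigma**2))

sensors = np.array([[2.0, 1.0], [4.0, 3.0], [6.0, 1.5], [3.0, 4.5]])
true_src, true_q = np.array([3.5, 2.0]), 7.0
measured = true_q * unit_plume(sensors, true_src)

# Step 1: correlation over a grid of candidate locations (rate-independent).
grid = [np.array([x, y]) for x in np.arange(0, 8.1, 0.5)
                         for y in np.arange(0, 6.1, 0.5)]
corrs = [np.corrcoef(unit_plume(sensors, g), measured)[0, 1] for g in grid]
best = grid[int(np.argmax(corrs))]

# Step 2: closed-form rate from the quadratic cost at the chosen location.
c_pred = unit_plume(sensors, best)
q_hat = float(c_pred @ measured / (c_pred @ c_pred))
print(f"estimated source at {best}, rate {q_hat:.2f}")
```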
Hansen, Scott K.; Vesselinov, Velimir Valentinov
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
Solar radiation on Mars: Stationary photovoltaic array
NASA Technical Reports Server (NTRS)
Appelbaum, J.; Sherman, I.; Landis, G. A.
1993-01-01
Solar energy is likely to be an important power source for surface-based operation on Mars. Photovoltaic cells offer many advantages. In this article we have presented analytical expressions and solar radiation data for stationary flat surfaces (horizontal and inclined) as a function of latitude, season and atmospheric dust load (optical depth). The diffuse component of the solar radiation on Mars can be significant, thus greatly affecting the optimal inclination angle of the photovoltaic surface.
NASA Astrophysics Data System (ADS)
Salehi, Hassan S.; Li, Hai; Kumavor, Patrick D.; Merkulov, Aleksey; Sanders, Melinda; Brewer, Molly; Zhu, Quing
2015-03-01
In this paper, wavelength selection for multispectral photoacoustic/ultrasound tomography was optimized to obtain accurate images of hemoglobin oxygen saturation (sO2) in vivo. Although wavelengths can be selected by theoretical methods, in practice the accuracy of reconstructed images will be affected by wavelength-specific and system-specific factors such as laser source power and ultrasound transducer sensitivity. By performing photoacoustic spectroscopy of mouse tumor models using 14 different wavelengths between 710 and 840 nm, we were able to identify a wavelength set which most accurately reproduced the results obtained using all 14 wavelengths via selection criteria. In clinical studies, the optimal wavelength set was successfully used to image human ovaries in vivo and noninvasively. Although these results are specific to our co-registered photoacoustic/ultrasound imaging system, the approach we developed can be applied to other functional photoacoustic and optical imaging systems.
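Behind multiwavelength sO2 estimation is a linear spectral unmixing step, sketched below for a single pixel. The extinction coefficients are placeholder values chosen for illustration, not tabulated hemoglobin spectra, and this is not the authors' reconstruction chain.

```python
# Sketch of the linear spectral unmixing behind multiwavelength sO2 imaging:
# the photoacoustic signal at each wavelength is modeled as a linear mix of
# oxy- and deoxyhemoglobin absorption, and the two concentrations are found
# by least squares. Coefficients are illustrative placeholders only.
import numpy as np

wavelengths = [710, 750, 780, 810, 840]          # nm
eps_hbo2 = np.array([0.6, 0.7, 0.8, 0.9, 1.1])   # illustrative, arbitrary units
eps_hb   = np.array([1.4, 1.2, 1.0, 0.9, 0.7])

# Synthetic per-wavelength photoacoustic amplitudes for a pixel with 80% sO2.
c_hbo2_true, c_hb_true = 0.8, 0.2
signal = c_hbo2_true * eps_hbo2 + c_hb_true * eps_hb
signal += np.random.default_rng(0).normal(0, 0.01, signal.size)

E = np.column_stack([eps_hbo2, eps_hb])          # unmixing matrix
c, *_ = np.linalg.lstsq(E, signal, rcond=None)
so2 = c[0] / (c[0] + c[1])
print(f"estimated sO2 = {so2:.3f}")
```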
NASA Astrophysics Data System (ADS)
Ayaz-Maierhafer, Birsen; Britt, Carl G.; August, Andrew J.; Qi, Hairong; Seifert, Carolyn E.; Hayward, Jason P.
2017-10-01
In this study, we report on a constrained optimization and tradeoff study of a hybrid, wearable detector array having directional sensing based upon gamma-ray occlusion. One resulting design uses CLYC detectors while the second feasibility design involves the coupling of gamma-ray-sensitive CsI scintillators and a rubber LiCaAlF6 (LiCAF) neutron detector. The detector systems' responses were investigated through simulation as a function of angle in a two-dimensional plane. The expected total counts, peak-to-total ratio, directionality performance, and detection of ⁴⁰K for accurate gain stabilization were considered in the optimization. Source directionality estimation was investigated using Bayesian algorithms. Gamma-ray energies of 122 keV, 662 keV, and 1332 keV were considered. The equivalent neutron capture response compared with ³He was also investigated for both designs.
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinational optimization with a user defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
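The following is not S4 itself, but a minimal greedy illustration of the core idea: repeatedly add the candidate sensor that most improves a diagnostic merit function until a budget is exhausted. The fault signatures and sensor names are invented for illustration, and the merit function here is simply fault-pair distinguishability rather than S4's user-defined cost.

```python
# Minimal greedy sensor selection against a diagnostic merit function.
# Fault signatures and sensor names below are invented for illustration.
from itertools import combinations

# signature[fault][sensor] = 1 if the fault perturbs that sensor's reading
signatures = {
    "fan_fault":  {"N1": 1, "N2": 0, "T45": 1, "P30": 0},
    "compressor": {"N1": 0, "N2": 1, "T45": 1, "P30": 1},
    "turbine":    {"N1": 0, "N2": 1, "T45": 1, "P30": 0},
    "bleed_leak": {"N1": 0, "N2": 0, "T45": 0, "P30": 1},
}
candidates = ["N1", "N2", "T45", "P30"]
budget = 2

def merit(sensor_set):
    """Number of fault pairs whose signatures differ on the selected sensors."""
    faults = list(signatures)
    return sum(any(signatures[a][s] != signatures[b][s] for s in sensor_set)
               for a, b in combinations(faults, 2))

selected = []
while len(selected) < budget:
    best = max((s for s in candidates if s not in selected),
               key=lambda s: merit(selected + [s]))
    selected.append(best)
print("selected sensors:", selected, "merit:", merit(selected))
```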
Yelk, Joseph; Sukharev, Maxim; Seideman, Tamar
2008-08-14
An optimal control approach based on multiple parameter genetic algorithms is applied to the design of plasmonic nanoconstructs with predetermined optical properties and functionalities. We first develop nanoscale metallic lenses that focus an incident plane wave onto a prespecified, spatially confined spot. Our results illustrate the mechanism of energy flow through wires and cavities. Next we design a periodic array of silver particles to modify the polarization of an incident, linearly polarized plane wave in a desired fashion while localizing the light in space. The results provide insight into the structural features that determine the birefringence properties of metal nanoparticles and their arrays. Of the variety of potential applications that may be envisioned, we note the design of nanoscale light sources with controllable coherence and polarization properties that could serve for coherent control of molecular, electronic, or electromechanical dynamics in the nanoscale.
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
GOCI image enhancement using an MTF compensation technique for coastal water applications.
Oh, Eunsong; Choi, Jong-Kuk
2014-11-03
The Geostationary Ocean Color Imager (GOCI) is the first optical sensor in geostationary orbit for monitoring the ocean environment around the Korean Peninsula. This paper discusses on-orbit modulation transfer function (MTF) estimation with the pulse-source method and its compensation results for the GOCI. Additionally, by analyzing the relationship between the MTF compensation effect and the accuracy of the secondary ocean product, we confirmed the optimal MTF compensation parameter for enhancing image quality without variation in the accuracy. In this study, MTF assessment was performed using a natural target because the GOCI system has a spatial resolution of 500 m. For MTF compensation with the Wiener filter, we fitted a point spread function with a Gaussian curve controlled by a standard deviation value (σ). After a parametric analysis for finding the optimal degradation model, the σ value of 0.4 was determined to be an optimal indicator. Finally, the MTF value was enhanced from 0.1645 to 0.2152 without degradation of the accuracy of the ocean color product. Enhanced GOCI images by MTF compensation are expected to recognize small-scale ocean products in coastal areas with sharpened geometric performance.
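A minimal frequency-domain Wiener restoration with a Gaussian PSF controlled by a standard deviation sigma, the same style of degradation model described above, is sketched below on a synthetic image. The sigma value and the noise-to-signal constant are illustrative, not the GOCI parameters.

```python
# Minimal Wiener deconvolution sketch with a Gaussian PSF of width sigma.
# Test image, sigma and noise-to-signal ratio are synthetic and illustrative.
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20:44, 28:36] = 1.0                           # synthetic scene
psf = gaussian_psf(truth.shape, sigma=1.2)          # sigma in pixels, illustrative
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * H))
blurred += rng.normal(0, 0.01, blurred.shape)

restored = wiener_deconvolve(blurred, psf)
print("RMSE blurred :", np.sqrt(np.mean((blurred - truth) ** 2)).round(4))
print("RMSE restored:", np.sqrt(np.mean((restored - truth) ** 2)).round(4))
```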
Technical Note: spektr 3.0—A computational tool for x-ray spectrum modeling and analysis
Punnoose, J.; Xu, J.; Sisniega, A.; Zbijewski, W.; Siewerdsen, J. H.
2016-01-01
Purpose: A computational toolkit (spektr 3.0) has been developed to calculate x-ray spectra based on the tungsten anode spectral model using interpolating cubic splines (TASMICS) algorithm, updating previous work based on the tungsten anode spectral model using interpolating polynomials (TASMIP) spectral model. The toolkit includes a matlab (The Mathworks, Natick, MA) function library and improved user interface (UI) along with an optimization algorithm to match calculated beam quality with measurements. Methods: The spektr code generates x-ray spectra (photons/mm2/mAs at 100 cm from the source) using TASMICS as default (with TASMIP as an option) in 1 keV energy bins over beam energies 20–150 kV, extensible to 640 kV using the TASMICS spectra. An optimization tool was implemented to compute the added filtration (Al and W) that provides a best match between calculated and measured x-ray tube output (mGy/mAs or mR/mAs) for individual x-ray tubes that may differ from that assumed in TASMICS or TASMIP and to account for factors such as anode angle. Results: The median percent difference in photon counts for a TASMICS and TASMIP spectrum was 4.15% for tube potentials in the range 30–140 kV with the largest percentage difference arising in the low and high energy bins due to measurement errors in the empirically based TASMIP model and inaccurate polynomial fitting. The optimization tool reported a close agreement between measured and calculated spectra with a Pearson coefficient of 0.98. Conclusions: The computational toolkit, spektr, has been updated to version 3.0, validated against measurements and existing models, and made available as open source code. Video tutorials for the spektr function library, UI, and optimization tool are available. PMID:27487888
NASA Astrophysics Data System (ADS)
Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André
2018-03-01
There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor symmetric point with mπ ≈ 800 MeV — the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe the calm baryon significantly removes the excited state contamination from the two-nucleon correlation function to as early a time as the single-nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but there is significant excited state contamination in the region where the single calm baryon displays no excited state contamination.
MetaNET--a web-accessible interactive platform for biological metabolic network analysis.
Narang, Pankaj; Khan, Shawez; Hemrom, Anmol Jaywant; Lynn, Andrew Michael
2014-01-01
Metabolic reactions have been extensively studied and compiled over the last century. These have provided a theoretical base to implement models, simulations of which are used to identify drug targets and optimize metabolic throughput at a systemic level. While tools for the perturbation of metabolic networks are available, their applications are limited and restricted as they require varied dependencies and often a commercial platform for full functionality. We have developed MetaNET, an open-source, user-friendly, platform-independent and web-accessible resource consisting of several pre-defined workflows for metabolic network analysis. MetaNET is a web-accessible platform that incorporates a range of functions which can be combined to produce different simulations related to metabolic networks. These include (i) optimization of an objective function for the wild-type strain and gene/catalyst/reaction knock-out/knock-down analysis using flux balance analysis, (ii) flux variability analysis, (iii) chemical species participation, (iv) identification of cycles and extreme paths, and (v) choke point reaction analysis to facilitate identification of potential drug targets. The platform is built using custom scripts along with the open-source Galaxy workflow and Systems Biology Research Tool as components. Pre-defined workflows are available for common processes, and an exhaustive list of over 50 functions is provided for user-defined workflows. MetaNET, available at http://metanet.osdd.net, provides a user-friendly, rich interface allowing the analysis of genome-scale metabolic networks under various genetic and environmental conditions. The framework permits the storage of previous results, the ability to repeat analyses and share results with other users over the internet, as well as the ability to run different tools simultaneously using pre-defined workflows and user-created custom workflows.
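The flux balance analysis step underlying several of these workflows can be illustrated with a tiny invented three-reaction network and an off-the-shelf linear-programming solver. This is only a sketch of FBA, not MetaNET's code or a genome-scale model.

```python
# Tiny flux balance analysis (FBA) sketch: maximize a "biomass" flux subject
# to steady-state mass balance S v = 0 and flux bounds. The three-reaction
# network is invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Columns: v1 = substrate uptake (-> A), v2 = A -> biomass, v3 = A -> byproduct
S = np.array([[1.0, -1.0, -1.0]])       # single internal metabolite A
bounds = [(0, 10.0), (0, None), (0, 2.0)]
c = np.array([0.0, -1.0, 0.0])           # linprog minimizes, so negate biomass

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
v = res.x
print(f"optimal fluxes v = {v}, biomass flux = {v[1]:.2f}")

# A crude "knock-out" is simulated by forcing a reaction flux to zero, e.g.
# setting bounds[0] = (0, 0) removes substrate uptake and biomass drops to 0.
```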
MacLaren, Robert; Brett McQueen, R; Campbell, Jon
2013-04-01
To compare pharmacist and prescriber perceptions of the clinical and financial outcomes of pharmacy services in the intensive care unit (ICU). ICU pharmacists were invited to participate in the survey and were asked to invite two ICU prescriber colleagues to complete questionnaires. ICUs with clinical pharmacy services. The questionnaires were designed to solicit frequency, efficiency, and perceptions about the clinical and financial impact (on a 10-point scale) of pharmacy services including patient care (eight functions), education (three functions), administration (three functions), and scholarship (four functions). Basic services were defined as fundamental, and higher-level services were categorized as desirable or optimal. Respondents were asked to suggest possible sources of funding and reimbursement for ICU pharmacy services. Eighty packets containing one 26-item pharmacy questionnaire and two 16-item prescriber questionnaires were distributed to ICU pharmacists. Forty-one pharmacists (51%) and 46 prescribers (29%) returned questionnaires. Pharmacists had worked in the ICU for 8.3 ± 6.4 years and devoted 50.3 ± 18.7% of their efforts to clinical practice. Prescribers generally rated the impact of pharmacy services more favorably than pharmacists. Fundamental services were provided more frequently and were rated more positively than desirable or optimal services across both groups. The percent efficiencies of providing services without the pharmacist ranged between 40% and 65%. Both groups indicated that salary support for the pharmacist should come from hospital departments of pharmacy or critical care or colleges of pharmacy. Prescribers were more likely to consider other sources of funding for pharmacist salaries. Both groups supported reimbursement of clinical pharmacy services. Critical care pharmacy activities were associated with perceptions of beneficial clinical and financial outcomes. Prescribers valued most services more than pharmacists. Fundamental services were viewed more favorably than desirable or optimal services, possibly because they occurred more frequently or were required for safe patient care. Substantial inefficiencies may occur if pharmacy services disappeared. Considerable support existed for funding and reimbursement of critical care pharmacy services. © 2013 Pharmacotherapy Publications, Inc.
Gientka, Iwona; Błażejak, Stanisław; Stasiak-Różańska, Lidia; Chlebowska-Śmigiel, Anna
2015-01-01
Yeast exopolysaccharides (EPS) are not a well-established group of metabolites. Industrial-scale production of these EPS is limited mainly by low biosynthesis yields. Until now, enzymes and biosynthesis pathways, as well as the role of regulatory genes, have not been described. Some yeast EPS show antitumor, immunostimulatory and antioxidant activity. Others absorb heavy metals and can function as bioactive components of food. Yeast EPS also have potential as thickeners or stabilizers. Optimal conditions for the biosynthesis of yeast exopolysaccharides require strong oxygenation and a low culture temperature, due to the physiology of the producer strains. The medium should contain sucrose as a carbon source and ammonium sulfate as an inorganic nitrogen source, with a C:N ratio in the substrate of 15:1. The cultures are long, and the largest accumulation of polymers is observed after 4 or 5 days of culturing. The structure of yeast EPS is complex and depends on the strain and culture conditions. The EPS from yeast are linear mannans, pullulan, glucooligosaccharides, galactooligosaccharides and other heteropolysaccharides containing α-1,2; α-1,3; α-1,6; β-1,3; β-1,4 bonds. Mannose and glucose are the carbohydrates with the largest share in forming EPS.
Linear diffusion into a Faraday cage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warne, Larry Kevin; Lin, Yau Tang; Merewether, Kimball O.
2011-11-01
Linear lightning diffusion into a Faraday cage is studied. An early-time integral valid for large ratios of enclosure size to enclosure thickness and small relative permeability (μ/μ₀ ≤ 10) is used for this study. Existing solutions for nearby lightning impulse responses of electrically thick-wall enclosures are refined and extended to calculate the nearby lightning magnetic field (H) and time-derivative magnetic field (HDOT) inside enclosures of varying thickness caused by a decaying exponential excitation. For a direct strike scenario, the early-time integral for a worst-case line source outside the enclosure caused by an impulse is simplified and numerically integrated to give the interior H and HDOT at the location closest to the source as well as a function of distance from the source. H and HDOT enclosure response functions for decaying exponentials are considered for an enclosure wall of any thickness. Simple formulas are derived to provide a description of enclosure interior H and HDOT as well. Direct strike voltage and current bounds for a single-turn optimally-coupled loop for all three waveforms are also given.
Jiang, C Y; Tong, X; Brown, D R; Glavic, A; Ambaye, H; Goyette, R; Hoffmann, M; Parizzi, A A; Robertson, L; Lauter, V
2017-02-01
Modern spallation neutron sources generate high intensity neutron beams with a broad wavelength band applied to exploring new nano- and meso-scale materials from a few atomic monolayers thick to complicated prototype device-like systems with multiple buried interfaces. The availability of high performance neutron polarizers and analyzers in neutron scattering experiments is vital for understanding magnetism in systems with novel functionalities. We report the development of a new generation of the in situ polarized ³He neutron polarization analyzer for the Magnetism Reflectometer at the Spallation Neutron Source at Oak Ridge National Laboratory. With a new optical layout and laser system, the ³He polarization reached and maintained 84%, as compared to 76% in the first-generation system. The polarization improvement allows the transmission function to vary from 50% to 15% for the polarized neutron beam over the wavelength band of 2-9 Angstroms. This achievement brings a new class of experiments with optimal performance in sensitivity to very small magnetic moments in nano systems and opens up the horizon for its applications.
Identification of the fitness determinants of budding yeast on a natural substrate
Filteau, Marie; Charron, Guillaume; Landry, Christian R
2017-01-01
The budding yeasts are prime models in genomics and cell biology, but the ecological factors that determine their success in non-human-associated habitats are poorly understood. In North America, Saccharomyces yeasts are present on the bark of deciduous trees, where they feed on bark and sap exudates. In the Northeast, Saccharomyces paradoxus is found on maples, which makes maple sap a natural substrate for this species. We measured growth rates of S. paradoxus natural isolates on maple sap and found variation along a geographical gradient not explained by the inherent variation observed under optimal laboratory conditions. We used a functional genomic screen to reveal the ecologically relevant genes and conditions required for optimal growth in this substrate. We found that the allantoin degradation pathway is required for optimal growth in maple sap, in particular genes necessary for allantoate utilization, which we demonstrate is the major nitrogen source available to yeast in this environment. Growth with allantoin or allantoate as the sole nitrogen source recapitulated the variation in growth rates in maple sap among strains. We also show that two lineages of S. paradoxus display different life-history traits on allantoin and allantoate media, highlighting the ecological relevance of this pathway. PMID:27935595
Forbes, Thomas P.; Degertekin, F. Levent; Fedorov, Andrei G.
2010-01-01
Electrochemistry and ion transport in a planar array of mechanically-driven, droplet-based ion sources are investigated using an approximate time scale analysis and in-depth computational simulations. The ion source is modeled as a controlled-current electrolytic cell, in which the piezoelectric transducer electrode, which mechanically drives the charged droplet generation using ultrasonic atomization, also acts as the oxidizing/corroding anode (positive mode). The interplay between advective and diffusive ion transport of electrochemically generated ions is analyzed as a function of the transducer duty cycle and electrode location. A time scale analysis of the relative importance of advective vs. diffusive ion transport provides valuable insight into optimality, from the ionization perspective, of alternative design and operation modes of the ion source. A computational model based on the solution of time-averaged, quasi-steady advection-diffusion equations for electroactive species transport is used to substantiate the conclusions of the time scale analysis. The results show that electrochemical ion generation at the piezoelectric transducer electrodes located at the back side of the ion source reservoir results in poor ionization efficiency due to insufficient time for the charged analyte to diffuse away from the electrode surface to the ejection location, especially at near 100% duty cycle operation. Reducing the duty cycle of droplet/analyte ejection increases the analyte residence time and, in turn, improves ionization efficiency, but at the expense of reduced device throughput. For applications where this is undesirable, i.e., multiplexed and disposable device configurations, an alternative electrode location is incorporated. By moving the charging electrode to the nozzle surface, the diffusion length scale is greatly reduced, drastically improving ionization efficiency. The ionization efficiency of all operating conditions considered is expressed as a function of the dimensionless Peclet number, which defines the relative effect of advection as compared to diffusion. This analysis is general enough to elucidate an important role of electrochemistry in the ionization efficiency of any arrayed ion sources, be they mechanically-driven or electrosprays, and is vital for determining optimal design and operation conditions. PMID:20607111
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
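To make the architecture-independent category concrete, here is a minimal sketch of one such source-level transformation, constant folding, written in Python with the standard ast module; the input program and class name are invented for illustration and are not from the survey.

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Fold binary operations whose operands are literal constants."""
    def visit_BinOp(self, node):
        self.generic_visit(node)          # fold children first (bottom-up)
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                value = eval(compile(ast.Expression(body=node), "<fold>", "eval"))
            except Exception:
                return node               # leave anything that cannot be folded safely
            return ast.copy_location(ast.Constant(value=value), node)
        return node

source = "area = 60 * 60 * 24 * width"   # hypothetical input program
tree = ConstantFolder().visit(ast.parse(source))
print(ast.unparse(ast.fix_missing_locations(tree)))   # area = 86400 * width
```

The transformation inspects only the program's own syntax tree, never the target instruction set, which is what places it in the architecture-independent group.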
Applications of Elpasolites as a Multimode Radiation Sensor
NASA Astrophysics Data System (ADS)
Guckes, Amber
This study consists of both computational and experimental investigations. The computational results enabled detector design selections and confirmed experimental results. The experimental results determined that the CLYC scintillation detector can be applied as a functional and field-deployable multimode radiation sensor. The computational study utilized MCNP6 code to investigate the response of CLYC to various incident radiations and to determine the feasibility of its application as a handheld multimode sensor and as a single-scintillator collimated directional detection system. These simulations include: • Characterization of the response of the CLYC scintillator to gamma-rays and neutrons; • Study of the isotopic enrichment of 7Li versus 6Li in the CLYC for optimal detection of both thermal neutrons and fast neutrons; • Analysis of collimator designs to determine the optimal collimator for the single CLYC sensor directional detection system to assay gamma rays and neutrons; • Simulations of a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system with the optimized collimator to determine the feasibility of detecting nuclear materials that could be encountered during field operations. These nuclear materials include depleted uranium, natural uranium, low-enriched uranium, highly-enriched uranium, reactor-grade plutonium, and weapons-grade plutonium. The experimental study includes the design, construction, and testing of both a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system. Both were designed in the Inventor CAD software and based on results of the computational study to optimize their performance. The handheld CLYC multimode sensor is modular, scalable, low-power, and optimized for high count rates. Commercial-off-the-shelf components were used where possible in order to optimize size, increase robustness, and minimize cost. The handheld CLYC multimode sensor was successfully tested to confirm its ability for gamma-ray and neutron detection, and gamma-ray and neutron spectroscopy. The sensor utilizes wireless data transfer for possible radiation mapping and network-centric deployment. The handheld multimode sensor was tested by performing laboratory measurements with various gamma-ray sources and neutron sources. The single CLYC scintillator collimated directional detection system is portable, robust, and capable of source localization and identification. The collimator was designed based on the results of the computational study and is constructed with high density polyethylene (HDPE) and lead (Pb). The collimator design and construction allow for the directional detection of gamma rays and fast neutrons utilizing only one scintillator, which is interchangeable. For this study, a CLYC-7 scintillator was used. The collimated directional detection system was tested by performing laboratory directional measurements with various gamma-ray sources, 252Cf and a 239PuBe source.
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
Optimization of the Number and Location of Tsunami Stations in a Tsunami Warning System
NASA Astrophysics Data System (ADS)
An, C.; Liu, P. L. F.; Pritchard, M. E.
2014-12-01
Optimizing the number and location of tsunami stations in designing a tsunami warning system is an important and practical problem. It is always desirable to maximize the capability of the data obtained from the stations to constrain the earthquake source parameters, while minimizing the number of stations. During the 2011 Tohoku tsunami event, 28 coastal gauges and DART buoys in the near field recorded tsunami waves, providing an opportunity for assessing the effectiveness of those stations in identifying the earthquake source parameters. Assuming a single-plane fault geometry, inversions of tsunami data from combinations of various numbers (1~28) of stations and locations are conducted, and their effectiveness is evaluated according to the residuals of the inverse method. Results show that the optimized locations of stations depend on the number of stations used. If the stations are optimally located, 2~4 stations are sufficient to constrain the source parameters. Regarding the optimized location, stations must be spread uniformly in all directions, which is not surprising. It is also found that stations within the source region generally give a worse constraint on the earthquake source than stations farther from the source, which is due to the exaggeration of model error in matching large-amplitude waves at near-source stations. Quantitative discussions of these findings will be given in the presentation. Applying a similar analysis to the Manila Trench based on artificial scenarios of earthquakes and tsunamis, the optimal location of tsunami stations is obtained, which provides guidance for deploying a tsunami warning system in this region.
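The station-subset evaluation described in the abstract can be illustrated with a toy linear inversion: a synthetic Green's-function matrix maps sub-fault slips to station amplitudes, and a least-squares residual is compared across subsets of stations. All sizes, values and the residual metric below are assumptions for illustration, not the actual Tohoku or Manila Trench data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_subfaults = 10, 4
G = rng.normal(size=(n_stations, n_subfaults))          # synthetic Green's functions
slip_true = np.array([1.0, 0.5, 2.0, 0.8])
d = G @ slip_true + 0.05 * rng.normal(size=n_stations)  # noisy synthetic observations

def subset_residual(rows):
    """Relative misfit when the source is inverted from the given stations only."""
    Gs, ds = G[list(rows)], d[list(rows)]
    slip, *_ = np.linalg.lstsq(Gs, ds, rcond=None)
    return np.linalg.norm(G @ slip - d) / np.linalg.norm(d)   # judged against all data

for k in (2, 3, 4):
    best = min(itertools.combinations(range(n_stations), k), key=subset_residual)
    print(k, "stations -> residual", round(subset_residual(best), 4), "at", best)
```

Exhaustive enumeration is feasible only for small networks; the idea of ranking subsets by inversion residual is the same regardless of how the search is carried out.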
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points, where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
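A minimal sketch of the fitting strategy described above, on synthetic ELF samples; SciPy's monotone PCHIP interpolator is used here as a stand-in for the Steffen spline, and this is not the authors' C++ implementation.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Synthetic, non-uniformly sampled energy-loss function values (illustrative only)
energy = np.array([0.5, 1.0, 2.5, 8.0, 20.0, 60.0, 200.0, 1000.0])   # eV
elf    = np.array([0.02, 0.15, 0.90, 2.30, 0.60, 0.08, 0.01, 1e-4])

# Log-log transform reduces the non-uniformity of the sampled data; a
# monotonicity-preserving spline then avoids spurious oscillations between points.
fit = PchipInterpolator(np.log(energy), np.log(elf))

def elf_interp(e):
    """Evaluate the interpolated ELF at energies e (same units as the samples)."""
    return np.exp(fit(np.log(e)))

print(elf_interp(np.array([1.5, 5.0, 100.0])))
```

Like the Steffen spline, PCHIP places local extrema only at the data points, so the back-transformed curve passes through every sample without overshoot.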
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting components of the data by the inverse of the error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple-source detection, because when several events occur almost simultaneously, they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of a deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as initial values, we apply a gradient method to determine the horizontal and vertical components of a hypocenter. Sometimes, several source locations are determined in a time window of 5 minutes. We estimate the resolution, defined as the distance at which sources can be detected separately by the location method, to be about 100 km. The validity of this estimation is confirmed by a numerical test using synthetic waveforms. Applying the method to continuous seismograms in western Japan for over 10 years, the new method detected 27% more tremors than a previous method, owing to the multiple detection and the improvement in accuracy from the appropriate weighting scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houser, Kevin W.; Royer, Michael P.; David, Aurelien
A system for evaluating the color rendition of light sources was recently published as IES TM-30-15, IES Method for Evaluating Light Source Color Rendition. The system includes a fidelity index (Rf) to quantify similarity to a reference illuminant, a relative-gamut index (Rg) to quantify saturation relative to a reference illuminant, and a color vector icon that visually presents information about color rendition. The calculation employs CAM02-UCS and uses a newly-developed set of reflectance functions, comprising 99 color evaluation samples (CES). The CES were down-selected from 105,000 real object samples and are uniformly distributed in color space (fairly representing different colors) and wavelength space (avoiding artificial increase of color rendition values by selective optimization).
Thermal conductivity of microporous layers: Analytical modeling and experimental validation
NASA Astrophysics Data System (ADS)
Andisheh-Tadbir, Mehdi; Kjeang, Erik; Bahrami, Majid
2015-11-01
A new compact relationship is developed for the thermal conductivity of the microporous layer (MPL) used in polymer electrolyte fuel cells as a function of pore size distribution, porosity, and compression pressure. The proposed model is successfully validated against experimental data obtained from a transient plane source thermal constants analyzer. The thermal conductivities of carbon paper samples with and without MPL were measured as a function of load (1-6 bars) and the MPL thermal conductivity was found between 0.13 and 0.17 W m-1 K-1. The proposed analytical model predicts the experimental thermal conductivities within 5%. A correlation generated from the analytical model was used in a multi objective genetic algorithm to predict the pore size distribution and porosity for an MPL with optimized thermal conductivity and mass diffusivity. The results suggest that an optimized MPL, in terms of heat and mass transfer coefficients, has an average pore size of 122 nm and 63% porosity.
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
Optimization of Cellulase Production from Bacteria Isolated from Soil
Sethi, Sonia; Datta, Aparna; Gupta, B. Lal; Gupta, Saksham
2013-01-01
Cellulase-producing bacteria were isolated from soil and identified as Pseudomonas fluorescens, Bacillus subtilis, E. coli, and Serratia marcescens. Optimization of the fermentation medium for maximum cellulase production was carried out. Culture conditions such as pH, temperature, carbon sources, and nitrogen sources were optimized. The optimum conditions found for cellulase production were 40°C at pH 10 with glucose as the carbon source and ammonium sulphate as the nitrogen source, with coconut cake stimulating the production of cellulase. Pseudomonas fluorescens was the best cellulase producer among the four bacteria, followed by Bacillus subtilis, E. coli, and Serratia marcescens. PMID:25937986
On the ground state of Yang-Mills theory
NASA Astrophysics Data System (ADS)
Bakry, Ahmed S.; Leinweber, Derek B.; Williams, Anthony G.
2011-08-01
We investigate the overlap of the ground state meson potential with sets of mesonic trial wave functions corresponding to different gluonic distributions. We probe the transverse structure of the flux tube through the creation of non-uniform smearing profiles for the string of glue connecting two color sources in the Wilson loop operator. The non-uniformly UV-regulated flux-tube operators are found to optimize the overlap with the ground state and display interesting features in the ground state overlap.
2010-03-01
to a graphics card, and not the redesign of XML. The justification is that if XML is going to be prevalent, special optimized hardware is...the answer, similar to the specialized functions of a video card. Given Moore's law that processing power doubles every few years, let the...and numerous multimedia players such as iTunes from Apple. These applications are free to use, but the source is restricted by software licenses
Integration of optical imaging with a small animal irradiator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An
Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances ranging between 0.7 and 1 mm between the actual and calculated source positions. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence of the proposed algorithm's computational efficiency relative to the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial guesses for the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatic regularization parameters. The proposed algorithm exhibited superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
Stone, M; Collins, A L; Silins, U; Emelko, M B; Zhang, Y S
2014-03-01
There is increasing global concern regarding the impacts of large scale land disturbance by wildfire on a wide range of water and related ecological services. This study explores the impact of the 2003 Lost Creek wildfire in the Crowsnest River basin, Alberta, Canada on regional scale sediment sources using a tracing approach. A composite geochemical fingerprinting procedure was used to apportion the sediment efflux among three key spatial sediment sources: 1) unburned (reference), 2) burned, and 3) burned sub-basins that were subsequently salvage logged. Spatial sediment sources were characterized by collecting time-integrated suspended sediment samples using passive devices during the entire ice-free periods in 2009 and 2010. The tracing procedure combines the Kruskal-Wallis H-test, principal component analysis and genetic-algorithm driven discriminant function analysis for source discrimination. Source apportionment was based on a numerical mass balance model deployed within a Monte Carlo framework incorporating both local optimization and global (genetic algorithm) optimization. The mean relative frequency-weighted average median inputs from the three spatial source units were estimated to be 17% (inter-quartile uncertainty range 0-32%) from the reference areas, 45% (inter-quartile uncertainty range 25-65%) from the burned areas and 38% (inter-quartile uncertainty range 14-59%) from the burned-salvage logged areas. High sediment inputs from the burned and burned-salvage logged areas, representing spatial source units 2 and 3, reflect the lasting effects of forest canopy and forest floor organic matter disturbance during the 2003 wildfire, including increased runoff and sediment availability related to high terrestrial erosion, streamside mass wasting and river bank collapse. The results demonstrate the impact of wildfire and incremental pressures associated with salvage logging on catchment spatial sediment sources in higher elevation Montane regions where forest growth and vegetation recovery are relatively slow. Copyright © 2013 Elsevier B.V. All rights reserved.
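The mass-balance unmixing step can be sketched as below for hypothetical tracer data: source proportions are obtained by constrained least squares, and a Monte Carlo loop over resampled source signatures yields inter-quartile uncertainty ranges. The tracer values, group statistics and optimizer choice are illustrative assumptions only, not the study's actual fingerprinting model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Mean and sd of each tracer (columns) for the three source groups (rows):
# unburned (reference), burned, burned + salvage logged.  Values are invented.
src_mean = np.array([[12.0, 3.1, 0.8], [18.0, 5.4, 1.9], [16.0, 4.8, 1.5]])
src_sd   = 0.1 * src_mean
sediment = np.array([16.2, 4.9, 1.6])        # downstream suspended-sediment sample

def unmix(sources):
    """Proportions p (>=0, summing to 1) minimizing relative mixing residuals."""
    obj = lambda p: np.sum(((sediment - p @ sources) / sediment) ** 2)
    cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
    res = minimize(obj, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
                   constraints=cons, method="SLSQP")
    return res.x

# Monte Carlo over source-signature uncertainty
props = np.array([unmix(rng.normal(src_mean, src_sd)) for _ in range(500)])
for name, p in zip(["unburned", "burned", "burned+logged"], props.T):
    print(f"{name}: median {np.median(p):.2f}, "
          f"IQR {np.percentile(p, 25):.2f}-{np.percentile(p, 75):.2f}")
```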
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
Integration and Optimization of Alternative Sources of Energy in a Remote Region
NASA Astrophysics Data System (ADS)
Berberi, Pellumb; Inodnorjani, Spiro; Aleti, Riza
2010-01-01
In a remote coastal region, the supply of energy from the national grid is insufficient for sustainable development. Integration and optimization of local alternative renewable energy sources is one possible solution to the problem. In this paper we have studied the energetic potential of local sources of renewable energy (water, solar, wind and biomass). A bottom-up energy system optimization model is proposed in order to support planning policies for promoting the use of renewable energy sources. A software tool, based on multiple-factor and constraint analysis for optimizing energy flow, is proposed, which provides detailed information on the exploitation of each energy source, power and heat generation, GHG emissions, and end-use sectors. Economic analysis shows that, with existing technologies, both stand-alone and regional facilities may be feasible. Improving specific legislation will foster investments from central or local governments and also from individuals, private companies or small families. The study is carried out in the framework of the FP6 project "Integrated Renewable Energy System."
The performance of the upgraded Los Alamos Neutron Source
NASA Astrophysics Data System (ADS)
Ito, Takeyasu; LANL UCN Source Collaboration
2017-09-01
Los Alamos National Laboratory has been operating an ultracold neutron (UCN) source based on a solid deuterium (SD2) UCN converter driven by spallation neutrons for over 10 years. It has recently been successfully upgraded by replacing the cryostat that contains the cold neutron moderator, SD2 volume, and vertical UCN guide. The horizontal UCN guide that transports UCN out of the radiation shield was also replaced. The new design reflects lessons learned from the 10+ years of operation of the previous version of the UCN source and is optimized to maximize the cold neutron flux at the SD2 volume, featuring a close-coupled cold neutron moderator, and to maximize the transport of the UCN to experiments. During the commissioning of the upgraded UCN source, data were collected to measure its performance, including cold neutron spectra as a function of the cold moderator temperature, and the UCN density in a vessel outside the source. In this talk, after a brief overview of the design of the upgraded source, the results of the performance tests and comparison to prediction will be presented. This work was funded by LANL LDRD.
NASA Astrophysics Data System (ADS)
Harou, J. J.; Hansen, K. M.
2008-12-01
Increased scarcity of world water resources is inevitable given the limited supply and increased human pressures. The idea that "some scarcity is optimal" must be accepted for rational resource use and infrastructure management decisions to be made. Hydro-economic systems models are unique at representing the overlap of economic drivers, socio-political forces and distributed water resource systems. They demonstrate the tangible benefits of cooperation and integrated flexible system management. Further improvement of models, quality control practices and software will be needed for these academic policy tools to become accepted into mainstream water resource practice. Promising features include: calibration methods, limited foresight optimization formulations, linked simulation-optimization approaches (e.g. embedding pre-existing calibrated simulation models), spatial groundwater models, stream-aquifer interactions and stream routing, etc. Conventional user-friendly decision support systems helped spread simulation models on a massive scale. Hydro-economic models must also find a means to facilitate construction, distribution and use. Some of these issues and model features are illustrated with a hydro-economic optimization model of the Sacramento Valley. Carry-over storage value functions are used to limit hydrologic foresight of the multi-period optimization model. Pumping costs are included in the formulation by tracking regional piezometric head of groundwater sub-basins. To help build and maintain this type of network model, an open-source water management modeling software platform is described and initial project work is discussed. The objective is to generically facilitate the connection of models, such as those developed in a modeling environment (GAMS, MatLab, Octave, etc.), to a geographic user interface (drag and drop node-link network) and a database (topology, parameters and time series). These features aim to incrementally move hydro-economic models in the direction of more practical implementation.
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from thousand or more receivers is used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
Finite-fault source inversion using adjoint methods in 3-D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-07-01
Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak-slip velocities. We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from thousand or more receivers is used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
Harmony search optimization for HDR prostate brachytherapy
NASA Astrophysics Data System (ADS)
Panchal, Aditya
In high dose-rate (HDR) prostate brachytherapy, multiple catheters are inserted interstitially into the target volume. The process of treating the prostate involves calculating and determining the best dose distribution to the target and organs-at-risk by means of optimizing the time that the radioactive source dwells at specified positions within the catheters. It is the goal of this work to investigate the use of a new optimization algorithm, known as Harmony Search, in order to optimize dwell times for HDR prostate brachytherapy. The new algorithm was tested on 9 different patients and also compared with the genetic algorithm. Simulations were performed to determine the optimal value of the Harmony Search parameters. Finally, multithreading of the simulation was examined to determine potential benefits. First, a simulation environment was created using the Python programming language and the wxPython graphical interface toolkit, which was necessary to run repeated optimizations. DICOM RT data from Varian BrachyVision was parsed and used to obtain patient anatomy and HDR catheter information. Once the structures were indexed, the volume of each structure was determined and compared to the original volume calculated in BrachyVision for validation. Dose was calculated using the AAPM TG-43 point source model of the GammaMed 192Ir HDR source and was validated against Varian BrachyVision. A DVH-based objective function was created and used for the optimization simulation. Harmony Search and the genetic algorithm were implemented as optimization algorithms for the simulation and were compared against each other. The optimal values for Harmony Search parameters (Harmony Memory Size [HMS], Harmony Memory Considering Rate [HMCR], and Pitch Adjusting Rate [PAR]) were also determined. Lastly, the simulation was modified to use multiple threads of execution in order to achieve faster computational times. Experimental results show that the volume calculation that was implemented in this thesis was within 2% of the values computed by Varian BrachyVision for the prostate, within 3% for the rectum and bladder and 6% for the urethra. The calculation of dose compared to BrachyVision was determined to be different by only 0.38%. Isodose curves were also generated and were found to be similar to BrachyVision. The comparison between Harmony Search and genetic algorithm showed that Harmony Search was over 4 times faster when compared over multiple data sets. The optimal Harmony Memory Size was found to be 5 or lower; the Harmony Memory Considering Rate was determined to be 0.95, and the Pitch Adjusting Rate was found to be 0.9. Ultimately, the effect of multithreading showed that as intensive computations such as optimization and dose calculation are involved, the threads of execution scale with the number of processors, achieving a speed increase proportional to the number of processor cores. In conclusion, this work showed that Harmony Search is a viable alternative to existing algorithms for use in HDR prostate brachytherapy optimization. Coupled with the optimal parameters for the algorithm and a multithreaded simulation, this combination has the capability to significantly decrease the time spent on minimizing optimization problems in the clinic that are time intensive, such as brachytherapy, IMRT and beam angle optimization.
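The Harmony Search loop whose parameters (HMS, HMCR, PAR) the thesis tunes can be sketched generically as below; the objective is a toy stand-in for the DVH-based dwell-time objective, the bandwidth parameter is an added assumption, and the default parameter values follow those reported as optimal in the abstract.

```python
import numpy as np

def harmony_search(objective, bounds, hms=5, hmcr=0.95, par=0.9,
                   bandwidth=0.05, iters=5000, seed=0):
    """Minimize `objective` over the box `bounds` with basic Harmony Search."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    memory = rng.uniform(lo, hi, size=(hms, len(lo)))         # harmony memory
    scores = np.array([objective(h) for h in memory])
    for _ in range(iters):
        new = rng.uniform(lo, hi)                             # random improvisation
        use_memory = rng.random(len(lo)) < hmcr               # memory consideration
        new[use_memory] = memory[rng.integers(hms, size=use_memory.sum()),
                                 np.flatnonzero(use_memory)]
        adjust = use_memory & (rng.random(len(lo)) < par)     # pitch adjustment
        new[adjust] += bandwidth * (hi - lo)[adjust] * rng.uniform(-1, 1, adjust.sum())
        new = np.clip(new, lo, hi)
        score = objective(new)
        worst = scores.argmax()
        if score < scores[worst]:                             # replace the worst harmony
            memory[worst], scores[worst] = new, score
    return memory[scores.argmin()], scores.min()

# Toy stand-in objective (e.g. squared deviation of dwell times from a target plan)
target = np.array([2.0, 0.5, 1.5, 3.0])
best, val = harmony_search(lambda t: np.sum((t - target) ** 2),
                           bounds=[(0.0, 5.0)] * 4)
print(best.round(2), round(val, 6))
```

In a real brachytherapy setting the decision vector would be the dwell times and the objective a dose-volume-histogram-based cost, but the improvisation, memory-consideration and pitch-adjustment steps are unchanged.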
NASA Astrophysics Data System (ADS)
Kühn, S.; Bibinov, N.; Gesche, R.; Awakowicz, P.
2010-01-01
A new miniature high-frequency (HF) plasma source intended for bio-medical applications is studied using a nitrogen/oxygen mixture at atmospheric pressure. This plasma source can be used as an element of a plasma source array for applications in dermatology and surgery. Nitric oxide and ozone, which are produced in this plasma source, are well-known agents for cell proliferation, inhalation therapy for newborn infants, disinfection of wounds and blood ozonation. Using optical emission spectroscopy, microphotography and numerical simulation, the gas temperature in the active plasma region and the plasma parameters (electron density and electron distribution function) are determined for varied nitrogen/oxygen flows. The influence of the gas flows on the plasma conditions is studied. Ozone and nitric oxide concentrations in the effluent of the plasma source are measured using absorption spectroscopy and an electro-chemical NO detector at variable gas flows. Correlations between the plasma parameters and the concentrations of these species in the effluent of the plasma source are discussed. By varying the gas flows, the HF plasma source can be optimized for nitric oxide or ozone production. Maximum concentrations of 2750 ppm NO and 400 ppm O3, respectively, are generated.
Martian resource locations: Identification and optimization
NASA Astrophysics Data System (ADS)
Chamitoff, Gregory; James, George; Barker, Donald; Dershowitz, Adam
2005-04-01
The identification and utilization of in situ Martian natural resources is the key to enable cost-effective long-duration missions and permanent human settlements on Mars. This paper presents a powerful software tool for analyzing Martian data from all sources, and for optimizing mission site selection based on resource collocation. This program, called Planetary Resource Optimization and Mapping Tool (PROMT), provides a wide range of analysis and display functions that can be applied to raw data or imagery. Thresholds, contours, custom algorithms, and graphical editing are some of the various methods that can be used to process data. Output maps can be created to identify surface regions on Mars that meet any specific criteria. The use of this tool for analyzing data, generating maps, and collocating features is demonstrated using data from the Mars Global Surveyor and the Odyssey spacecraft. The overall mission design objective is to maximize a combination of scientific return and self-sufficiency based on utilization of local materials. Landing site optimization involves maximizing accessibility to collocated science and resource features within a given mission radius. Mission types are categorized according to duration, energy resources, and in situ resource utilization. Preliminary optimization results are shown for a number of mission scenarios.
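The threshold-and-collocate idea behind this kind of tool can be illustrated with two synthetic raster layers: cells are flagged where a terrain criterion is met and a resource criterion is satisfied within a given radius. The layer names, thresholds and radius are invented for the example and do not come from PROMT or any Mars data set.

```python
import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(2)
water_equiv = rng.random((50, 50))        # synthetic "subsurface hydrogen" layer
slope_deg   = 25 * rng.random((50, 50))   # synthetic slope layer

resource_ok = water_equiv > 0.8           # threshold each layer
terrain_ok  = slope_deg < 5.0

# A cell qualifies if a resource cell lies within `radius` pixels of a safe cell.
radius = 3
y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
footprint = x ** 2 + y ** 2 <= radius ** 2
candidates = terrain_ok & binary_dilation(resource_ok, structure=footprint)

print("candidate landing cells:", int(candidates.sum()))
```

Real layers would come from georeferenced instrument maps rather than random arrays, and additional criteria are simply further boolean masks combined in the same way.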
NASA Astrophysics Data System (ADS)
D'Urzo, Lucia; Bayana, Hareen; Vandereyken, Jelle; Foubert, Philippe; Wu, Aiwen; Jaber, Jad; Hamzik, James
2017-03-01
Specific "killer-defects", such as micro-line-bridges are one of the key challenges in photolithography's advanced applications, such as multi-pattern. These defects generate from several sources and are very difficult to eliminate. Pointof-use filtration (POU) plays a crucial role on the mitigation, or elimination, of such defects. Previous studies have demonstrated how the contribution of POU filtration could not be studied independently from photoresists design and track hardware settings. Specifically, we investigated how an effective combination of optimized photoresist, filtration rate, filtration pressure, membrane and device cleaning, and single and multilayer filter membranes at optimized pore size could modulate the occurrence of such defects [1, 2, 3 and 4]. However, the ultimate desired behavior for POU filtration is the selective retention of defect precursor molecules contained in commercially available photoresist. This optimal behavior can be achieved via customized membrane functionalization. Membrane functionalization provides additional non-sieving interactions which combined with efficient size exclusion can selectively capture certain defect precursors. The goal of this study is to provide a comprehensive assessment of membrane functionalization applied on an asymmetric ultra-high molecular weight polyethylene (UPE) membrane at different pore size. Defectivity transferred in a 45 nm line 55 nm space (45L/55S) pattern, created through 193 nm immersion (193i) lithography with a positive tone chemically amplified resist (PT-CAR), has been evaluated on organic under-layer coated wafers. Lithography performance, such as critical dimensions (CD), line width roughness (LWR) and focus energy matrix (FEM) is also assessed.
Franco, Alexandre R; Ling, Josef; Caprihan, Arvind; Calhoun, Vince D; Jung, Rex E; Heileman, Gregory L; Mayer, Andrew R
2008-12-01
The human brain functions as an efficient system where signals arising from gray matter are transported via white matter tracts to other regions of the brain to facilitate human behavior. However, with a few exceptions, functional and structural neuroimaging data are typically optimized to maximize the quantification of signals arising from a single source. For example, functional magnetic resonance imaging (FMRI) is typically used as an index of gray matter functioning whereas diffusion tensor imaging (DTI) is typically used to determine white matter properties. While it is likely that these signals arising from different tissue sources contain complementary information, the signal processing algorithms necessary for the fusion of neuroimaging data across imaging modalities are still in a nascent stage. In the current paper we present a data-driven method for combining measures of functional connectivity arising from gray matter sources (FMRI resting state data) with different measures of white matter connectivity (DTI). Specifically, a joint independent component analysis (J-ICA) was used to combine these measures of functional connectivity following intensive signal processing and feature extraction within each of the individual modalities. Our results indicate that one of the most predominantly used measures of functional connectivity (activity in the default mode network) is highly dependent on the integrity of white matter connections between the two hemispheres (corpus callosum) and within the cingulate bundles. Importantly, the discovery of this complex relationship of connectivity was entirely facilitated by the signal processing and fusion techniques presented herein and could not have been revealed through separate analyses of both data types as is typically performed in the majority of neuroimaging experiments. We conclude by discussing future applications of this technique to other areas of neuroimaging and examining potential limitations of the methods.
Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Qing; Whaley, Richard Clint; Qasem, Apan
This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.
Leblond, Frederic; Tichauer, Kenneth M.; Pogue, Brian W.
2010-01-01
The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions. PMID:21258566
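A bare-bones version of the singular-value analysis described above, applied to a random matrix standing in for the fluorescence sensitivity (Jacobian) matrix; the matrix, its crude depth attenuation and the noise level are placeholders, not a model of any particular instrument.

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_voxels = 64, 400                 # e.g. 8 sources x 8 detectors, 400 voxels
depth_attenuation = np.exp(-np.linspace(0, 6, n_voxels))
J = rng.normal(size=(n_meas, n_voxels)) * depth_attenuation   # stand-in Jacobian

U, s, Vt = np.linalg.svd(J, full_matrices=False)

noise_floor = 1e-3 * s[0]                  # assumed relative detection-noise level
usable = int(np.sum(s > noise_floor))
print(f"{usable} of {len(s)} singular modes lie above the noise floor")
# Image-space modes Vt[k] indicate the spatial detail recoverable at mode k;
# data-space modes U[:, k] show which measurements drive that mode.
```

Comparing the count of usable modes (and the spatial frequency content of the retained image modes) across candidate source-detector layouts is one quantitative way to rank geometries before building hardware.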
Selection biases in empirical p(z) methods for weak lensing
Gruen, D.; Brimioulle, F.
2017-02-23
To measure the mass of foreground objects with weak gravitational lensing, one needs to estimate the redshift distribution of lensed background sources. This is commonly done in an empirical fashion, i.e. with a reference sample of galaxies of known spectroscopic redshift, matched to the source population. In this paper, we develop a simple decision tree framework that, under the ideal conditions of a large, purely magnitude-limited reference sample, allows an unbiased recovery of the source redshift probability density function p(z), as a function of magnitude and colour. We use this framework to quantify biases in empirically estimated p(z) caused by selection effects present in realistic reference and weak lensing source catalogues, namely (1) complex selection of reference objects by the targeting strategy and success rate of existing spectroscopic surveys and (2) selection of background sources by the success of object detection and shape measurement at low signal to noise. For intermediate-to-high redshift clusters, and for depths and filter combinations appropriate for ongoing lensing surveys, we find that (1) spectroscopic selection can cause biases above the 10 per cent level, which can be reduced to ≈5 per cent by optimal lensing weighting, while (2) selection effects in the shape catalogue bias mass estimates at or below the 2 per cent level. Finally, this illustrates the importance of completeness of the reference catalogues for empirical redshift estimation.
Boron-based nanostructures: Synthesis, functionalization, and characterization
NASA Astrophysics Data System (ADS)
Bedasso, Eyrusalam Kifyalew
Boron-based nanostructures have not been explored in detail; however, these structures have the potential to revolutionize many fields including electronics and biomedicine. The research discussed in this dissertation focuses on synthesis, functionalization, and characterization of boron-based zero-dimensional nanostructures (core/shell and nanoparticles) and one-dimensional nanostructures (nanorods). The first project investigates the synthesis and functionalization of boron-based core/shell nanoparticles. Two boron-containing core/shell nanoparticles, namely boron/iron oxide and boron/silica, were synthesized. Initially, boron nanoparticles with a diameter between 10-100 nm were prepared by decomposition of nido-decaborane (B10H14), followed by formation of a core/shell structure. The core/shell structures were prepared using the appropriate precursor, an iron source or a silica source, for the shell in the presence of boron nanoparticles. The formation of core/shell nanostructures was confirmed using high resolution TEM. Then, the core/shell nanoparticles underwent a surface modification. Boron/iron oxide core/shell nanoparticles were functionalized with oleic acid, citric acid, amine-terminated polyethylene glycol, folic acid, and dopamine, and boron/silica core/shell nanoparticles were modified with (3-aminopropyl)triethoxysilane, 3-(2-aminoethylamino)propyltrimethoxysilane, citric acid, folic acid, amine-terminated polyethylene glycol, and O-(2-carboxyethyl)polyethylene glycol. UV-Vis and ATR-FTIR analyses established the success of the surface modification. The cytotoxicity of the water-soluble core/shell nanoparticles was studied in the triple negative breast cancer cell line MDA-MB-231, and the results showed that the compounds are not toxic. The second project highlights optimization of reaction conditions for the synthesis of boron nanorods. This synthesis, done via reduction of boron oxide with molten lithium, was studied to produce boron nanorods without any contamination and with a uniform size distribution. Various reaction parameters such as temperature, reaction time, and sonication were altered to find the optimal reaction conditions. Once these conditions were determined, boron nanorods were produced and then functionalized with amine-terminated polyethylene glycol.
A quantitative approach to the loading rate of seismogenic sources in Italy
NASA Astrophysics Data System (ADS)
Caporali, Alessandro; Braitenberg, Carla; Montone, Paola; Rossi, Giuliana; Valensise, Gianluca; Viganò, Alfio; Zurutuza, Joaquin
2018-03-01
To investigate the transfer of elastic energy between a regional stress field and a set of localized faults we project the stress rate tensor inferred from the Italian GNSS velocity field onto faults selected from the Database of Individual Seismogenic Sources (DISS 3.2.0). For given Lamé constants and friction coefficient we compute the loading rate on each fault in terms of the Coulomb Failure Function (CFF) rate. By varying the strike, dip and rake angles around the nominal DISS values, we also estimate the geometry of planes that are optimally oriented for maximal CFF rate. Out of 86 Individual Seismogenic Sources (ISSs), all well covered by GNSS data, 78 to 81 (depending on the assumed friction coefficient) load energy at a rate of 0-4 kPa/yr. The faults displaying larger CFF rates (4 to 6 ± 1 kPa/yr) are located in the central Apennines and are all characterized by a significant strike-slip component. We also find that the loading rate of 75 per cent of the examined sources is less than 1 kPa/yr lower than that of optimally oriented faults. We also analyzed the 24 August and 30 October 2016 central Apennines earthquakes (Mw 6.0 and 6.5, respectively). The strike of their causative faults based on seismological and tectonic data and the geodetically inferred strike differ by < 30°. Some sources exhibit a strike oblique to the direction of maximum strain rate, suggesting that in some instances the present-day stress acts on inherited faults. The choice of the friction coefficient only marginally affects this result.
Multi-Objective Design Of Optimal Greenhouse Gas Observation Networks
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Bergmann, D. J.; Cameron-Smith, P. J.; Gard, E.; Guilderson, T. P.; Rotman, D.; Stolaroff, J. K.
2010-12-01
One of the primary scientific functions of a Greenhouse Gas Information System (GHGIS) is to infer GHG source emission rates and their uncertainties by combining measurements from an observational network with atmospheric transport modeling. Certain features of the observational networks that serve as inputs to a GHGIS (for example, sampling location and frequency) can greatly impact the accuracy of the retrieved GHG emissions. Observation System Simulation Experiments (OSSEs) provide a framework to characterize emission uncertainties associated with a given network configuration. By minimizing these uncertainties, OSSEs can be used to determine optimal sampling strategies. Designing a real-world GHGIS observing network, however, will involve multiple, conflicting objectives; there will be trade-offs between sampling density, coverage and measurement costs. To address these issues, we have added multi-objective optimization capabilities to OSSEs. We demonstrate these capabilities by quantifying the trade-offs between retrieval error and measurement costs for a prototype GHGIS, and deriving GHG observing networks that are Pareto optimal. [LLNL-ABS-452333: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.]
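To make the Pareto-optimality criterion used above concrete, here is a minimal Python sketch of non-dominated screening over candidate networks; the (retrieval error, cost) values are invented placeholders, not results from the prototype GHGIS study.

```python
# Minimal Pareto-front screening for candidate observing networks.
# Each candidate is (retrieval_error, measurement_cost); both are to be minimized.
# The numerical values below are illustrative only.

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    front = []
    for i, (err_i, cost_i) in enumerate(candidates):
        dominated = any(
            (err_j <= err_i and cost_j <= cost_i) and (err_j < err_i or cost_j < cost_i)
            for j, (err_j, cost_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((err_i, cost_i))
    return sorted(front)

networks = [(0.42, 10), (0.35, 14), (0.30, 25), (0.28, 40),
            (0.33, 22), (0.50, 8), (0.45, 30), (0.36, 20)]
print(pareto_front(networks))   # the non-dominated (error, cost) trade-offs
```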
Granato, Daniel; de Castro, I Alves; Ellendersen, L Souza Neves; Masson, M Lucia
2010-04-01
Desserts made with soy cream, which are oil-in-water emulsions, are widely consumed by lactose-intolerant individuals in Brazil. In this regard, this study aimed at using response surface methodology (RSM) to optimize the sensory attributes of a soy-based emulsion over a range of pink guava juice (GJ: 22% to 32%) and soy protein (SP: 1% to 3%). WHC and backscattering were analyzed after 72 h of storage at 7 degrees C. Furthermore, a rating test was performed to determine the degree of liking of color, taste, creaminess, appearance, and overall acceptability. The data showed that the samples were stable against gravity and storage. The models developed by RSM adequately described the creaminess, taste, and appearance of the emulsions. The response surface of the desirability function was used successfully in the optimization of the sensory properties of dairy-free emulsions, suggesting that a product with 30.35% GJ and 3% SP was the best combination of these components. The optimized sample presented suitable sensory properties, in addition to being a source of dietary fiber, iron, copper, and ascorbic acid.
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang
2017-10-01
The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied for many years in various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA is implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate that the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results; however, it also requires the most complex source code. The parallel SCE-UA has bright prospects for application to real-world problems.
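For reference, the Griewank benchmark mentioned above is easy to reproduce, and the costly population-evaluation step that motivates the parallelization can be sketched in plain Python with process-level parallelism; this illustrates the idea only and is not the authors' OpenMP/OpenCL/CUDA implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def griewank(x):
    """Griewank benchmark; global minimum f = 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def evaluate_population(population, workers=4):
    """Evaluate all candidate parameter sets in parallel (the costly step in SCE-UA-like methods)."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(griewank, population))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(-600, 600, size=(64, 10))   # 64 candidates, 10 dimensions
    print(min(evaluate_population(pop)))
```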
Optimal Doppler centroid estimation for SAR data from a quasi-homogeneous source
NASA Technical Reports Server (NTRS)
Jin, M. Y.
1986-01-01
This correspondence briefly describes two Doppler centroid estimation (DCE) algorithms, provides a performance summary for these algorithms, and presents the experimental results. These algorithms include that of Li et al. (1985) and a newly developed one that is optimized for quasi-homogeneous sources. The performance enhancement achieved by the optimal DCE algorithm is clearly demonstrated by the experimental results.
NASA Astrophysics Data System (ADS)
Diallo, M. S.; Holschneider, M.; Kulesh, M.; Scherbaum, F.; Ohrnberger, M.; Lück, E.
2004-05-01
This contribution is concerned with the estimation of attenuation and dispersion characteristics of surface waves observed on a shallow seismic record. The analysis is based on an initial parameterization of the phase and attenuation functions, which are then estimated by minimizing a properly defined merit function. To minimize the effect of random noise on the estimates of dispersion and attenuation, we use cross-correlations (in the Fourier domain) of preselected traces from some region of interest along the survey line. These cross-correlations are then expressed in terms of the parameterized attenuation and phase functions and the auto-correlation of the so-called source trace or reference trace. The cross-correlations that enter the optimization are selected so as to provide an average estimate of both the attenuation function and the phase (group) velocity of the area under investigation. The advantage of the method over the standard two-station method using Fourier techniques is that uncertainties related to phase unwrapping and to estimating the number of 2π cycle skips in the phase are eliminated. However, when multiple mode arrivals are observed, it becomes nearly impossible to obtain reliable estimates of the dispersion curves for the different modes using the optimization method alone. To circumvent this limitation, we use the presented approach in conjunction with the wavelet propagation operator (Kulesh et al., 2003), which allows the application of band-pass filtering in the (ω-t) domain to select a particular mode for the minimization. Also, by expressing the cost function in the wavelet domain, the optimization can be performed either with respect to the phase, the modulus of the transform, or a combination of both. This flexibility in the design of the cost function provides an additional means of constraining the optimization results. Results from the application of this dispersion and attenuation analysis method are shown for both synthetic and real 2D shallow seismic data sets. M. Kulesh, M. Holschneider, M. S. Diallo, Q. Xie and F. Scherbaum, Modeling of Wave Dispersion Using Wavelet Transform (submitted to Pure and Applied Geophysics).
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin
2013-04-01
Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze engineering, hydrology and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, makes it possible to store, display and export all information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis, according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets operational targets (ranked according to priorities) each month while following system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to distributed-parameter models (eigenvalue approach). Such functionality is not typically included in other water DSS. Based on the resulting water resources allocation, the model calculates operating and water scarcity costs caused by supply deficits based on economic demand functions for each demand node. The optimization model allocates the available resource over time based on economic criteria (net benefits from demand curves and cost functions), minimizing the total water scarcity and operating cost of water use. This approach provides solutions that optimize the economic efficiency (as total net benefit) in water resources management over the optimization period. Both models must be used together in water resource planning and management. The optimization model provides initial insight into economically efficient solutions, from which different operating rules can be further developed and tested using the simulation model. The hydro-economic simulation model allows assessment of the economic impacts of alternative policies or operating criteria, avoiding the perfect foresight issues associated with the optimization. The tools have been applied to the Jucar river basin (Spain) in order to assess the economic results corresponding to the current modus operandi of the system and compare them with the solution from the optimization that maximizes economic efficiency. Acknowledgments: The study has been partially supported by the European Community 7th Framework Project (GENESIS project, n. 
226536) and the Plan Nacional I+D+I 2008-2011 of the Spanish Ministry of Science and Innovation (CGL2009-13238-C02-01 and CGL2009-13238-C02-02).
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
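As a rough illustration of the statistical discrimination idea, the sketch below evaluates a log-likelihood ratio between a gamma model (source-like statistics) and the modified Rician model (off-axis speckle); the pdf parameterization follows the standard modified Rician form, and all parameter values are illustrative assumptions rather than values from the SPHERE simulations.

```python
import numpy as np
from scipy.special import i0
from scipy.stats import gamma

def modified_rician_pdf(I, Is, Ic):
    """Modified Rician intensity pdf for off-axis speckle:
    deterministic (PSF) intensity Is, speckle intensity Ic."""
    return np.exp(-(I + Is) / Ic) * i0(2.0 * np.sqrt(I * Is) / Ic) / Ic

def log_likelihood_ratio(I, Is, Ic, k, theta):
    """Log-ratio of a gamma model (shape k, scale theta) to the modified Rician model,
    evaluated on intensity samples I; positive totals favour the gamma (source-like) model."""
    return gamma.logpdf(I, a=k, scale=theta) - np.log(modified_rician_pdf(I, Is, Ic))

samples = np.array([0.8, 1.1, 0.9, 1.3])   # illustrative intensity samples at a candidate position
print(log_likelihood_ratio(samples, Is=1.0, Ic=0.2, k=25.0, theta=0.04).sum())
```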
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series. They cast estimation of a generating partition via the minimization of their objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
Evaluation and inversion of a net ecosystem carbon exchange model for grasslands and croplands
NASA Astrophysics Data System (ADS)
Herbst, M.; Klosterhalfen, A.; Weihermueller, L.; Graf, A.; Schmidt, M.; Huisman, J. A.; Vereecken, H.
2017-12-01
A one-dimensional soil water, heat, and CO2 flux model (SOILCO2), a pool concept of soil carbon turnover (RothC), and a crop growth module (SUCROS) were coupled to predict the net ecosystem exchange (NEE) of carbon. This model, further referred to as AgroC, was extended with routines for managed grassland as well as for root exudation and root decay. In a first step, the coupled model was applied to two winter wheat sites and one upland grassland site in Germany. The model was calibrated based on soil water content, soil temperature, biometric, and soil respiration measurements for each site, and validated in terms of hourly NEE measured with the eddy covariance technique. The overall model performance of AgroC was acceptable, with a model efficiency >0.78 for NEE. In a second step, AgroC was optimized with the eddy covariance NEE measurements to examine the effect of various objective functions, constraints, and data transformations on estimated NEE, which showed a distinct sensitivity to the choice of objective function and the inclusion of soil respiration data in the optimization process. Both daytime and nighttime fluxes were found to be sensitive to the selected optimization strategy. Additional consideration of soil respiration measurements improved the simulation of small positive fluxes remarkably. Even though the model performance of the selected optimization strategies did not diverge substantially, the resulting annual NEE differed considerably. We conclude that data transformation, the definition of objective functions, and the data sources have to be considered carefully when using a terrestrial ecosystem model to determine carbon balances by means of eddy covariance measurements.
Modelling microbial metabolic rewiring during growth in a complex medium.
Fondi, Marco; Bosi, Emanuele; Presta, Luana; Natoli, Diletta; Fani, Renato
2016-11-24
In their natural environment, bacteria face a wide range of environmental conditions that change over time and that impose continuous rearrangements at all cellular levels (e.g. gene expression, metabolism). When facing a nutritionally rich environment, for example, microbes first use the preferred compound(s) and only later start metabolizing the other one(s). A systemic re-organization of the overall microbial metabolic network in response to a variation in the composition/concentration of the surrounding nutrients has been suggested, although the range and the extent of such modifications in organisms other than a few model microbes have been scarcely described up to now. We used multi-step constraint-based metabolic modelling to simulate the growth in a complex medium over several time steps of the Antarctic model organism Pseudoalteromonas haloplanktis TAC125. As each of these phases is characterized by a specific set of amino acids to be used as carbon and energy sources, our modelling framework describes the major consequences of nutrient switching at the system level. The model predicts that a deep metabolic reprogramming might be required to achieve optimal biomass production in different stages of growth (different medium composition), with at least half of the cellular metabolic network involved (more than 50% of the metabolic genes). Additionally, we show that our modelling framework is able to capture metabolic functional association and/or common regulatory features of the genes embedded in our reconstruction (e.g. the presence of common regulatory motifs). Finally, to explore the possibility of a sub-optimal biomass objective function (i.e. that cells use resources in alternative metabolic processes at the expense of optimal growth) we have implemented a MOMA-based approach (called nutritional-MOMA) and compared the outcomes with those obtained with Flux Balance Analysis (FBA). Growth simulations under this scenario revealed the deep impact of choosing among alternative objective functions on the resulting predictions of flux distributions. Here we provide a time-resolved, systems-level scheme of PhTAC125 metabolic re-wiring as a consequence of carbon source switching in a nutritionally complex medium. Our analyses suggest the presence of an efficient metabolic reprogramming machinery to continuously and promptly adapt to this nutritionally changing environment, consistent with adaptation to fast growth in a fairly rich, but probably inconstant and highly competitive, environment. Also, we show i) how functional partnership and co-regulation features can be predicted by integrating multi-step constraint-based metabolic modelling with fed-batch growth data and ii) that performing simulations under a sub-optimal objective function may lead to different flux distributions with respect to canonical FBA.
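For readers unfamiliar with constraint-based modelling, the following toy flux balance analysis (FBA) shows the kind of linear program being solved at each growth phase; the two-metabolite network, bounds and uptake values are invented for illustration and are unrelated to the PhTAC125 reconstruction.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize the "biomass" flux v3 subject to steady-state mass balance
# S @ v = 0 and flux bounds. The 2-metabolite, 3-reaction network is illustrative only.
S = np.array([[ 1, -1,  0],    # metabolite A: produced by uptake R1, consumed by R2
              [ 0,  1, -1]])   # metabolite B: produced by R2, consumed by biomass reaction R3

def fba(uptake_max):
    c = [0.0, 0.0, -1.0]                        # linprog minimizes, so negate the biomass flux
    bounds = [(0, uptake_max), (0, 10), (0, 10)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    return res.x, -res.fun

for uptake in (2.0, 5.0):                       # e.g. two growth phases with different nutrient supply
    fluxes, growth = fba(uptake)
    print(uptake, fluxes, growth)
```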
Optimal cost design of water distribution networks using a decomposition approach
NASA Astrophysics Data System (ADS)
Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon
2016-12-01
Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.
1986-12-01
The optimal value can be stated as: (Marginal Productivity of Good A) / (Price of Good A) = (Marginal Productivity of Good B) / (Price of Good B). ... contractor-proposed production costs could be used. II. CONTRACT PROPOSAL EVALUATION. A. PRICE ANALYSIS. Price analysis, in its broadest sense ... enters the market with a supply function represented by line S2; the new price will then be re-established at price OP2 and quantity OQ2.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
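A stripped-down version of the idea, building an output-to-optimal-input inverse function by solving a constrained minimization on a grid of desired outputs and then spline-interpolating, might look like the following; the plant, cost function and optimizer choice are illustrative assumptions, not the population-based scheme of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import CubicSpline

def output(u):                      # toy multiple-input, single-output plant
    return u[0] + 2.0 * u[1]

def cost(u):                        # effort to be minimized
    return u[0]**2 + 3.0 * u[1]**2

# For each desired output y, find the minimum-cost input that produces y exactly.
targets = np.linspace(0.5, 3.0, 11)
opt_inputs = []
for y in targets:
    res = minimize(cost, x0=[y / 2, y / 4],
                   constraints={"type": "eq", "fun": lambda u, y=y: output(u) - y})
    opt_inputs.append(res.x)
opt_inputs = np.array(opt_inputs)

# Interpolate the associated optimal points: desired output -> optimal input.
inverse = CubicSpline(targets, opt_inputs, axis=0)
print(inverse(1.7))                 # locally optimal (u1, u2) for set point 1.7
```

An operator can then adjust a single set point while the spline supplies both inputs, which is the reduction in control burden the abstract describes.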
NASA Astrophysics Data System (ADS)
Zhu, Jun; Zhang, David Wei; Kuo, Chinte; Wang, Qing; Wei, Fang; Zhang, Chenming; Chen, Han; He, Daquan; Hsu, Stephen D.
2017-07-01
As technology node shrinks, aggressive design rules for contact and other back end of line (BEOL) layers continue to drive the need for more effective full chip patterning optimization. Resist top loss is one of the major challenges for 28 nm and below technology nodes, which can lead to post-etch hotspots that are difficult to predict and eventually degrade the process window significantly. To tackle this problem, we used an advanced programmable illuminator (FlexRay) and the Tachyon SMO (Source Mask Optimization) platform to make resist-aware source optimization possible, which is shown to greatly improve the imaging contrast, enhance focus and exposure latitude, and minimize resist top loss, thus improving the yield.
The impact of realistic source shape and flexibility on source mask optimization
NASA Astrophysics Data System (ADS)
Aoyama, Hajime; Mizuno, Yasushi; Hirayanagi, Noriyuki; Kita, Naonori; Matsui, Ryota; Izumi, Hirohiko; Tajima, Keiichi; Siebert, Joachim; Demmerle, Wolfgang; Matsuyama, Tomoyuki
2013-04-01
Source mask optimization (SMO) is widely used to make state-of-the-art semiconductor devices in high volume manufacturing. To realize mature SMO solutions in production, the Intelligent Illuminator, an illumination system on Nikon scanners, is useful because it can generate freeform sources with high fidelity to the target. Proteus SMO, which employs a co-optimization method and inserts a validation with mask 3D effects and resist properties for an accurate prediction of wafer printing, can take the properties of the Intelligent Illuminator into account. We investigate the impact of the source properties on SMO for a static random-access memory pattern. The quality of a source generated on the scanner, compared to the SMO target, is evaluated with in-situ measurement and with aerial image simulation using the measurement data. Furthermore, we discuss an evaluation of the universality of the source for use on multiple scanners, with a validation against estimated scanner errors.
Modeling Brain Dynamics in Brain Tumor Patients Using the Virtual Brain.
Aerts, Hannelore; Schirner, Michael; Jeurissen, Ben; Van Roost, Dirk; Achten, Eric; Ritter, Petra; Marinazzo, Daniele
2018-01-01
Presurgical planning for brain tumor resection aims at delineating eloquent tissue in the vicinity of the lesion to spare during surgery. To this end, noninvasive neuroimaging techniques such as functional MRI and diffusion-weighted imaging fiber tracking are currently employed. However, taking into account this information is often still insufficient, as the complex nonlinear dynamics of the brain impede straightforward prediction of functional outcome after surgical intervention. Large-scale brain network modeling carries the potential to bridge this gap by integrating neuroimaging data with biophysically based models to predict collective brain dynamics. As a first step in this direction, an appropriate computational model has to be selected, after which suitable model parameter values have to be determined. To this end, we simulated large-scale brain dynamics in 25 human brain tumor patients and 11 human control participants using The Virtual Brain, an open-source neuroinformatics platform. Local and global model parameters of the Reduced Wong-Wang model were individually optimized and compared between brain tumor patients and control subjects. In addition, the relationship between model parameters and structural network topology and cognitive performance was assessed. Results showed (1) significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters; (2) local model parameters that can differentiate between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain; and (3) interesting associations between individually optimized model parameters and structural network topology and cognitive performance.
THz optical design considerations and optimization for medical imaging applications
NASA Astrophysics Data System (ADS)
Sung, Shijun; Garritano, James; Bajwa, Neha; Nowroozi, Bryan; Llombart, Nuria; Grundfest, Warren; Taylor, Zachary D.
2014-09-01
THz imaging system design will play an important role in making possible the imaging of targets with arbitrary properties and geometries. This study discusses design considerations and imaging performance optimization techniques for THz quasioptical imaging system optics. Analysis of field and polarization distortion by off-axis parabolic (OAP) mirrors in THz imaging optics shows how distortions are carried through a series of mirrors while guiding the THz beam. While distortions of the beam profile by individual mirrors are not significant, these effects are compounded by a series of mirrors in antisymmetric orientation. It is shown that symmetric orientation of the OAP mirrors effectively cancels this distortion to recover the original beam profile. Additionally, symmetric orientation can correct for some geometrical off-focusing due to misalignment. We also demonstrate an alternative method to test for overall system optics alignment by investigating the imaging performance of a tilted target plane. An asymmetric signal profile as a function of the target plane's tilt angle indicates when one or more imaging components are misaligned, giving a preferred tilt direction. Such analysis can offer additional insight into often elusive source device misalignment in an integrated system. The imaging-plane tilting characteristics are representative of a 3-D modulation transfer function of the imaging system. A symmetric tilted-plane response is preferred to optimize imaging performance.
Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-27
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.
Joint spectral characterization of photon-pair sources
NASA Astrophysics Data System (ADS)
Zielnicki, Kevin; Garay-Palmett, Karina; Cruz-Delgado, Daniel; Cruz-Ramirez, Hector; O'Boyle, Michael F.; Fang, Bin; Lorenz, Virginia O.; U'Ren, Alfred B.; Kwiat, Paul G.
2018-06-01
The ability to determine the joint spectral properties of photon pairs produced by the processes of spontaneous parametric downconversion (SPDC) and spontaneous four-wave mixing (SFWM) is crucial for guaranteeing the usability of heralded single photons and polarization-entangled pairs for multi-photon protocols. In this paper, we compare six different techniques that yield either a characterization of the joint spectral intensity or of the closely related purity of heralded single photons. These six techniques include: (i) scanning monochromator measurements, (ii) a variant of Fourier transform spectroscopy designed to extract the desired information exploiting a resource-optimized technique, (iii) dispersive fibre spectroscopy, (iv) stimulated-emission-based measurement, (v) measurement of the second-order correlation function g(2) for one of the two photons, and (vi) two-source Hong-Ou-Mandel interferometry. We discuss the relative performance of these techniques for the specific cases of a SPDC source designed to be factorable and SFWM sources of varying purity, and compare the techniques' relative advantages and disadvantages.
Energy & mass-charge distribution peculiarities of ions emitted from a Penning source
NASA Astrophysics Data System (ADS)
Mamedov, N. V.; Kolodko, D. V.; Sorokin, I. A.; Kanshin, I. A.; Sinelnikov, D. N.
2017-05-01
The optimization of hydrogen Penning sources used, in particular, in plasma chemical processing of materials and DLC deposition is still very important. Investigations of the mass-charge composition of the beams emitted by these ion sources are particularly relevant nowadays for miniature linear accelerators (neutron flux generators). The energy and mass-charge ion distributions of a Penning ion source are presented. The relation between abrupt jumps in the discharge current, the increasing plasma density in the discharge center, and the increasing potential whipping (up to 50% of the anode voltage) is shown. The energy spectra in the different discharge modes are also presented as functions of the pressure and anode potential. It is shown that the atomic hydrogen ion concentration is about 5-10%, depends only weakly on the pressure and the discharge current (in the investigated ranges of 1 to 10 mTorr and 50 to 1000 μA), and increases with the anode voltage (from 1 to 3.5 kV).
Optimization of a mirror-based neutron source using differential evolution algorithm
NASA Astrophysics Data System (ADS)
Yurov, D. V.; Prikhodko, V. V.
2016-12-01
This study is dedicated to assessing the capabilities of the gas-dynamic trap (GDT) and the gas-dynamic multiple-mirror trap (GDMT) as potential neutron sources for subcritical hybrids. In mathematical terms, the problem has been formulated as determining the global maximum of the fusion gain (Qpl), represented as a function of the trap parameters. A differential evolution method has been applied to perform the search. All calculations considered a neutron source configuration with a 20 m distance between the mirrors and 100 MW of heating power. It is important to mention that the numerical study has also taken into account a number of constraints on plasma characteristics so as to ensure the physical credibility of the searched-for trap configurations. According to the results obtained, the traps considered demonstrate a fusion gain of up to 0.2, depending on the constraints applied. This enables them to be used either as neutron sources within subcritical reactors for minor actinide incineration or as material-testing facilities.
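The overall optimization set-up, maximizing a gain function over trap parameters with differential evolution while penalizing constraint violations, can be sketched as follows; the surrogate gain, the "beta" proxy and all bounds are toy stand-ins, not the GDT/GDMT physics model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Sketch: maximize a fusion-gain surrogate Q(params) with differential evolution,
# handling a plasma-parameter constraint through a quadratic penalty.
def neg_fusion_gain(p):
    density, temperature, mirror_ratio = p
    q = density * temperature**2 * np.log(mirror_ratio)   # illustrative gain surrogate
    beta = 0.02 * density * temperature                   # illustrative "beta" proxy
    penalty = 1e5 * max(0.0, beta - 0.6)**2               # keep beta below an assumed limit
    return -q + penalty                                   # negate: the optimizer minimizes

bounds = [(0.1, 5.0), (0.5, 15.0), (1.5, 50.0)]           # density, temperature, mirror ratio
result = differential_evolution(neg_fusion_gain, bounds, seed=1, tol=1e-8)
print(result.x, -result.fun)                              # best parameters and surrogate gain
```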
Forbes, Thomas P; Sisco, Edward
2014-08-05
We demonstrate the coupling of desorption electro-flow focusing ionization (DEFFI) with in-source collision induced dissociation (CID) for the mass spectrometric (MS) detection and imaging of explosive device components, including both inorganic and organic explosives and energetic materials. We utilize in-source CID to enhance ion collisions with atmospheric gas, thereby reducing adducts and minimizing organic contaminants. Optimization of the MS signal response as a function of in-source CID potential demonstrated contrasting trends for the detection of inorganic and organic explosive device components. DEFFI-MS and in-source CID enabled isotopic and molecular speciation of inorganic components, providing further physicochemical information. The developed system facilitated the direct detection and chemical mapping of trace analytes collected with Nomex swabs and spatially resolved distributions within artificial fingerprints from forensic lift tape. The results presented here provide the forensic and security sectors a powerful tool for the detection, chemical imaging, and inorganic speciation of explosives device signatures.
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Genetic algorithms - What fitness scaling is optimal?
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac
1993-01-01
The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under the different criteria is presented; it includes both functions that had already proved best empirically and new functions that may be worth trying.
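One classical candidate in this family is linear fitness scaling, sketched below for concreteness; whether it is among the criteria-optimal functions of the paper is not asserted here, and the scaling constant c is an assumed example value.

```python
import numpy as np

def linear_scaling(fitness, c=2.0):
    """Linear fitness scaling f' = a*f + b, chosen so that the mean fitness is
    preserved and the best individual receives c times the mean (c is illustrative)."""
    f = np.asarray(fitness, dtype=float)
    f_avg, f_max = f.mean(), f.max()
    if np.isclose(f_max, f_avg):          # degenerate population: no scaling possible
        return np.full_like(f, f_avg)
    a = (c - 1.0) * f_avg / (f_max - f_avg)
    b = (1.0 - a) * f_avg
    return np.clip(a * f + b, 0.0, None)  # guard against negative scaled fitness

print(linear_scaling([1.0, 2.0, 3.0, 10.0]))   # mean preserved at 4.0, best scaled to 8.0
```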
Jia, Shuying; Yang, Zhen; Ren, Kexin; Tian, Ziqi; Dong, Chang; Ma, Ruixue; Yu, Ge; Yang, Weiben
2016-11-05
Contamination by trace antibiotics is widely found in surface water sources. This work delineates the removal of trace antibiotics (norfloxacin (NOR), sulfadiazine (SDZ) or tylosin (TYL)) from synthetic surface water by flocculation, in the coexistence of inorganic suspended particles (kaolin) and natural organic matter (humic acid, HA). To avoid the extra pollution caused by modification reagents based on petrochemical products, environmentally friendly amino-acid-modified chitosan flocculants, Ctrp and Ctyr, with different functional aromatic-ring structures were employed. Jar tests at various pHs showed that Ctyr, bearing phenol groups as electron donors, was favored for the elimination of cationic NOR (∼50% removal; optimal pH: 6; optimal dosage: 4 mg/L) and TYL (∼60% removal; optimal pH: 7; optimal dosage: 7.5 mg/L), due to the π-π electron donor-acceptor (EDA) effect and unconventional H-bonds. Differently, Ctrp, with indole groups as electron acceptors, had a better removal rate (∼50%) of SDZ anions (electron donors). According to correlation analysis, the coexisting kaolin and HA played positive roles in antibiotic removal. Detailed pairwise interactions at the molecular level among the different components were clarified by spectral analysis and theoretical calculations (density functional theory), which are important both for the structural design of new flocculants aimed at targeted contaminants and for understanding the environmental behavior of antibiotics in water.
Tunability of the circadian action of tetrachromatic solid-state light sources
NASA Astrophysics Data System (ADS)
Žukauskas, A.; Vaicekauskas, R.
2015-01-01
An approach to the optimization of the spectral power distribution of solid-state light sources with the tunable non-image forming photobiological effect on the human circadian rhythm is proposed. For tetrachromatic clusters of model narrow-band (direct-emission) light-emitting diodes (LEDs), the limiting tunability of the circadian action factor (CAF), which is the ratio of the circadian efficacy to luminous efficacy of radiation, was established as a function of constraining color fidelity and luminous efficacy of radiation. For constant correlated color temperatures (CCTs), the CAF of the LED clusters can be tuned above and below that of the corresponding blackbody radiators, whereas for variable CCT, the clusters can have circadian tunability covering that of a temperature-tunable blackbody radiator.
Discontinuity minimization for omnidirectional video projections
NASA Astrophysics Data System (ADS)
Alshina, Elena; Zakharchenko, Vladyslav
2017-09-01
Advances in display technologies for both head-mounted devices and television panels demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be defined to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
The journey from proton to gamma knife.
Ganz, Jeremy C
2014-01-01
It was generally accepted by the early 1960s that proton beam radiosurgery was too complex and impractical. The need was seen for a new machine. The beam design had to be as good as a proton beam. It was also decided that a static design was preferable, even if the evolution of that notion is no longer clear. Complex collimators were designed that, using sources of cobalt-60, could produce beams with characteristics adequately close to those of proton beams. The geometry of the machine was determined, including the distance of the sources from the patient and the optimal distance between the sources. The first gamma unit was built with private money, with no contribution from the Swedish state, which nonetheless required detailed design information in order to ensure radiation safety. This original machine was built with rectangular collimators to produce lesions for thalamotomy for functional work. However, with the introduction of dopamine analogs, this indication virtually disappeared overnight.
NASA Astrophysics Data System (ADS)
Bode, F.; Reuschen, S.; Nowak, W.
2015-12-01
Drinking-water well catchments include many potential sources of contamination, such as gas stations or agriculture. Finding optimal positions for early-warning monitoring wells is challenging because there are various parameters (and their uncertainties) that influence the reliability and optimality of any suggested monitoring location or monitoring network. The overall goal of this project is to develop and establish a concept to assess, design and optimize early-warning systems within well catchments. Such optimal monitoring networks need to optimize three competing objectives: a high detection probability, which can be reached by maximizing the "field of vision" of the monitoring network; a long early-warning time, such that there is enough time left to install countermeasures after first detection; and the overall operating costs of the monitoring network, which should ideally be reduced to a minimum. The method is based on numerical simulation of flow and transport in heterogeneous porous media, coupled with geostatistics and Monte-Carlo or scenario analyses for real data, respectively, wrapped up within the framework of formal multi-objective optimization using a genetic algorithm. In order to speed up the optimization process and to better explore the Pareto front, we developed a concept that forces the algorithm to search only in regions of the search space where promising solutions can be expected. We show how to define these regions beforehand, using knowledge of the optimization problem, but also how to define them independently of problem attributes. With that, our method can be used with and/or without detailed knowledge of the objective functions. In summary, our study helps to improve optimization results in less optimization time through meaningful restrictions of the search space. These restrictions can be made independently of the optimization problem, but also in a problem-specific manner.
Ott, Wayne R; Klepeis, Neil E; Switzer, Paul
2003-08-01
This paper derives the analytical solutions to multi-compartment indoor air quality models for predicting indoor air pollutant concentrations in the home and evaluates the solutions using experimental measurements in the rooms of a single-story residence. The model uses Laplace transform methods to solve the mass balance equations for two interconnected compartments, obtaining analytical solutions that can be applied without a computer. Environmental tobacco smoke (ETS) sources such as the cigarette typically emit pollutants for relatively short times (7-11 min) and are represented mathematically by a "rectangular" source emission time function, or approximated by a short-duration source called an "impulse" time function. Other time-varying indoor sources also can be represented by Laplace transforms. The two-compartment model is more complicated than the single-compartment model and has more parameters, including the cigarette or combustion source emission rate as a function of time, room volumes, compartmental air change rates, and interzonal air flow factors expressed as dimensionless ratios. This paper provides analytical solutions for the impulse, step (Heaviside), and rectangular source emission time functions. It evaluates the indoor model in an unoccupied two-bedroom home using cigars and cigarettes as sources with continuous measurements of carbon monoxide (CO), respirable suspended particles (RSP), and particulate polycyclic aromatic hydrocarbons (PPAH). Fine particle mass concentrations (RSP or PM3.5) are measured using real-time monitors. In our experiments, simultaneous measurements of concentrations at three heights in a bedroom confirm an important assumption of the model: spatial uniformity of mixing. The parameter values of the two-compartment model were obtained using a "grid search" optimization method, and the predicted solutions agreed well with the measured concentration time series in the rooms of the home. The door and window positions in each room had considerable effect on the pollutant concentrations observed in the home. Because of the small volumes and low air change rates of most homes, indoor pollutant concentrations from smoking activity in a home can be very high and can persist at measurable levels indoors for many hours.
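Although the paper's solutions are closed-form Laplace-transform expressions, the same kind of two-compartment mass balance with a rectangular source can be illustrated by direct numerical integration; the volumes, air change rates, interzonal flow and emission rate below are assumed example values, not the measured parameters of the test residence.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-compartment indoor mass-balance model with a "rectangular" source
# (constant emission for a short period, e.g. one cigarette). All parameters illustrative.
V1, V2 = 30.0, 60.0          # compartment volumes, m^3 (source room and rest of house)
a1, a2 = 0.5, 0.5            # air change rates with outdoors, 1/h
w = 20.0                     # interzonal air flow, m^3/h
E, t_on = 80.0, 9.0 / 60.0   # emission rate (mg/h) and duration (h) of the rectangular source

def rhs(t, C):
    C1, C2 = C
    S = E if t <= t_on else 0.0
    dC1 = (S - a1 * V1 * C1 - w * C1 + w * C2) / V1
    dC2 = (    - a2 * V2 * C2 - w * C2 + w * C1) / V2
    return [dC1, dC2]

sol = solve_ivp(rhs, (0.0, 6.0), [0.0, 0.0], max_step=0.01)
print(sol.y[:, -1])          # concentrations (mg/m^3) in both compartments after 6 h
```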
Analysis and optimization of cyclic methods in orbit computation
NASA Technical Reports Server (NTRS)
Pierce, S.
1973-01-01
The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.
Water Quality Planning in Rivers: Assimilative Capacity and Dilution Flow.
Hashemi Monfared, Seyed Arman; Dehghani Darmian, Mohsen; Snyder, Shane A; Azizyan, Gholamreza; Pirzadeh, Bahareh; Azhdary Moghaddam, Mehdi
2017-11-01
Population growth, urbanization and industrial expansion are consequentially linked to increasing pollution around the world. The sources of pollution are vast, including both point and nonpoint sources, and pose an intrinsic challenge for control and abatement. This paper focuses on pollutant concentrations and on the distance over which the pollution is in contact with the river water as objective functions to determine two main characteristics necessary for water quality management in the river. These two characteristics are the assimilative capacity and the dilution flow. The mean area of unacceptable concentration [Formula: see text] and the affected distance (X) are considered as two objective functions to determine the dilution flow by a non-dominated sorting genetic algorithm II (NSGA-II) optimization algorithm. The results demonstrate that the variation of river flow discharge in different seasons can modify the assimilative capacity by up to 97%. Moreover, when using dilution flow as a water quality management tool, the results reveal that [Formula: see text] and X change by up to 97% and 93%, respectively.
NASA Astrophysics Data System (ADS)
Kale, Sumit; Kondekar, Pravin N.
2018-01-01
This paper reports a novel device structure for a charge-plasma-based Schottky Barrier (SB) MOSFET on ultrathin SOI to suppress the ambipolar leakage current and improve the radio frequency (RF) performance. In the proposed device, we employ dual materials for the source and drain formation. Therefore, the source/drain is divided into two parts: the main source/drain and the source/drain extension. Erbium silicide (ErSi1.7) is used as the main source/drain material and Hafnium metal is used as the source/drain extension material. The source extension induces an electron plasma in the ultrathin SOI body, resulting in a reduction of the SB width at the source side. Similarly, the drain extension also induces an electron plasma at the drain side. This significantly increases the SB width due to increased depletion at the drain end. As a result, the ambipolar leakage current can be suppressed. In addition, the drain extension also reduces the parasitic capacitances of the proposed device to improve the RF performance. The optimization of the length and work function of the metal used in the drain extension is performed to achieve improvement in device performance. Moreover, the proposed device makes fabrication simpler, requires a low thermal budget, and is free from random dopant fluctuations.
Comparison of artificial intelligence classifiers for SIP attack data
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Slachta, Jiri
2016-05-01
Honeypot applications are a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server. This server classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier which lower the classification error. The article contains a comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second neural network uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.
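The two optimizations mentioned (input normalization and a cross-entropy cost) can be illustrated with a small scikit-learn pipeline; the feature matrix and labels below are synthetic stand-ins for honeypot attack records, and the network size is an arbitrary choice.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic "attack feature" data: 12 features per record, 4 attack classes
# derived from two of the features so the task is learnable.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# StandardScaler normalizes the inputs; MLPClassifier minimizes log-loss (cross-entropy).
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_va, y_va))
```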
Global Simulation of Aviation Operations
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Sheth, Kapil; Ng, Hok Kwan; Morando, Alex; Li, Jinhua
2016-01-01
The simulation and analysis of global air traffic is limited due to a lack of simulation tools and the difficulty in accessing data sources. This paper provides a global simulation of aviation operations combining flight plans and real air traffic data with historical commercial city-pair aircraft type and schedule data and global atmospheric data. The resulting capability extends the simulation and optimization functions of NASA's Future Air Traffic Management Concept Evaluation Tool (FACET) to global scale. This new capability is used to present results on the evolution of global air traffic patterns from a concentration of traffic inside US, Europe and across the Atlantic Ocean to a more diverse traffic pattern across the globe with accelerated growth in Asia, Australia, Africa and South America. The simulation analyzes seasonal variation in the long-haul wind-optimal traffic patterns in six major regions of the world and provides potential time-savings of wind-optimal routes compared with either great circle routes or current flight-plans if available.
Crystallization and preliminary X-ray analysis of membrane-bound pyrophosphatases.
Kellosalo, Juho; Kajander, Tommi; Honkanen, Riina; Goldman, Adrian
2013-02-01
Membrane-bound pyrophosphatases (M-PPases) are enzymes that enhance the survival of plants, protozoans and prokaryotes in energy constraining stress conditions. These proteins use pyrophosphate, a waste product of cellular metabolism, as an energy source for sodium or proton pumping. To study the structure and function of these enzymes we have crystallized two membrane-bound pyrophosphatases recombinantly produced in Saccharomyces cerevisae: the sodium pumping enzyme of Thermotoga maritima (TmPPase) and the proton pumping enzyme of Pyrobaculum aerophilum (PaPPase). Extensive crystal optimization has allowed us to grow crystals of TmPPase that diffract to a resolution of 2.6 Å. The decisive step in this optimization was in-column detergent exchange during the two-step purification procedure. Dodecyl maltoside was used for high temperature solubilization of TmPPase and then exchanged to a series of different detergents. After extensive screening, the new detergent, octyl glucose neopentyl glycol, was found to be the optimal for TmPPase but not PaPPase.
Pythran: enabling static optimization of scientific Python programs
NASA Astrophysics Data System (ADS)
Guelton, Serge; Brunet, Pierrick; Amini, Mehdi; Merlini, Adrien; Corbillon, Xavier; Raynaud, Alan
2015-01-01
Pythran is an open source static compiler that turns modules written in a subset of Python language into native ones. Assuming that scientific modules do not rely much on the dynamic features of the language, it trades them for powerful, possibly inter-procedural, optimizations. These optimizations include detection of pure functions, temporary allocation removal, constant folding, Numpy ufunc fusion and parallelization, explicit thread-level parallelism through OpenMP annotations, false variable polymorphism pruning, and automatic vector instruction generation such as AVX or SSE. In addition to these compilation steps, Pythran provides a C++ runtime library that leverages the C++ STL to provide generic containers, and the Numeric Template Toolbox for Numpy support. It takes advantage of modern C++11 features such as variadic templates, type inference, move semantics and perfect forwarding, as well as classical idioms such as expression templates. Unlike the Cython approach, Pythran input code remains compatible with the Python interpreter. Output code is generally as efficient as the annotated Cython equivalent, if not more, but without the backward compatibility loss.
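A minimal module in the Pythran style might look like the sketch below; the "#pythran export" type annotation and the OpenMP comment follow our reading of the Pythran documentation and should be treated as an assumed example rather than code from the project. Without Pythran, the annotations are plain comments and the module still runs in the ordinary interpreter.

```python
# dnorm.py -- plain Python that is also intended to compile with Pythran.
import numpy as np

#pythran export row_norms(float64[:,:])
def row_norms(a):
    """Euclidean norm of each row of a 2-D array."""
    out = np.empty(a.shape[0])
    #omp parallel for
    for i in range(a.shape[0]):
        out[i] = np.sqrt(np.sum(a[i] * a[i]))
    return out

if __name__ == "__main__":
    print(row_norms(np.random.rand(4, 3)))
```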
Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach
NASA Technical Reports Server (NTRS)
Das, Santanu; Oza, Nikunj C.
2011-01-01
In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machines (SVMs) learning algorithm - to produce sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class Support Vector Machines algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
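As a baseline for the comparison described above, a standard one-class nu-SVM can be trained in a few lines; the two-dimensional Gaussian data are synthetic, and counting the retained support vectors shows the redundancy that the proposed sparse variant aims to reduce.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Synthetic "nominal" training data and a mixed test set (nominal + anomalous).
rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(400, 2))
X_test = np.vstack([rng.normal(size=(50, 2)),             # nominal samples
                    rng.normal(loc=5.0, size=(50, 2))])   # anomalous samples

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
pred = model.predict(X_test)                               # +1 nominal, -1 anomaly
print("support vectors retained:", model.support_vectors_.shape[0])
print("flagged anomalies:", int(np.sum(pred == -1)))
```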
Falch, Ken Vidar; Detlefs, Carsten; Snigirev, Anatoly; Mathiesen, Ragnvald H
2018-01-01
Analytical expressions for the transmission cross-coefficients of x-ray microscopes based on compound refractive lenses are derived using Gaussian approximations of the source shape and energy spectrum. The effects of partial coherence, defocus, beam convergence, as well as lateral and longitudinal chromatic aberrations are accounted for and discussed. Taking the incoherent limit of the transmission cross-coefficients, a compact analytical expression for the modulation transfer function of the system is obtained, and the resulting point, line and edge spread functions are presented. Finally, analytical expressions for the optimal numerical aperture, coherence ratio, and bandwidth are given. Copyright © 2017 Elsevier B.V. All rights reserved.
REGIONAL SEISMIC CHEMICAL AND NUCLEAR EXPLOSION DISCRIMINATION: WESTERN U.S. EXAMPLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, W R; Taylor, S R; Matzel, E
2006-07-07
We continue exploring methodologies to improve regional explosion discrimination using the western U.S. as a natural laboratory. The western U.S. has abundant natural seismicity, historic nuclear explosion data, and widespread mine blasts, making it a good testing ground to study the performance of regional explosion discrimination techniques. We have assembled and measured a large set of these events to systematically explore how best to optimize discrimination performance. Nuclear explosions can be discriminated from a background of earthquakes using regional phase (Pn, Pg, Sn, Lg) amplitude measures such as high-frequency P/S ratios. The discrimination performance is improved if the amplitudes can be corrected for source size and path length effects. We show good results are achieved using earthquakes alone to calibrate for these effects with the MDAC technique (Walter and Taylor, 2001). We show significant further improvement is then possible by combining multiple MDAC amplitude ratios using an optimized weighting technique such as Linear Discriminant Analysis (LDA). However, this requires data or models for both earthquakes and explosions. In many areas of the world, regional-distance nuclear explosion data are lacking, but mine blast data are available. Mine explosions are often designed to fracture and/or move rock, giving them different frequency and amplitude behavior than contained chemical shots, which seismically look like nuclear tests. Here we explore discrimination performance differences between explosion types, the possible disparity in the optimization parameters that would be chosen if only chemical explosions were available, and the corresponding effect of that disparity on nuclear explosion discrimination. Even after correcting for average path and site effects, regional phase ratios contain a large amount of scatter. This scatter appears to be due to variations in source properties such as depth, focal mechanism, and stress drop, in the near-source material properties (including emplacement conditions in the case of explosions), and in variations from the average path and site correction. Here we look at several kinds of averaging as a means to reduce variance in earthquake and explosion populations and to better understand the factors contributing to a minimum variance level as a function of epicenter (see Anderson et al., this volume). We focus on the performance of P/S ratios over the frequency range from 1 to 16 Hz, finding some improvements in discrimination as frequency increases. We also explore averaging and optimally combining P/S ratios in multiple frequency bands as a means to reduce variance. Similarly, we explore the effects of azimuthally averaging both regional amplitudes and amplitude ratios over multiple stations to reduce variance. Finally, we look at optimal performance as a function of magnitude and path length, as these put limits on the availability of good high-frequency discrimination measures.
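As a rough illustration of the multivariate step described above, the sketch below combines several corrected P/S amplitude ratios with linear discriminant analysis. The features, band count, and labels are invented for the example, and MDAC corrections are assumed to have been applied already.

```python
# Combine MDAC-corrected P/S ratios across frequency bands with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Rows = events, columns = log10(P/S) in several frequency bands (synthetic values).
X_eq = rng.normal(loc=-0.2, scale=0.3, size=(200, 4))   # earthquakes
X_ex = rng.normal(loc=0.3, scale=0.3, size=(40, 4))     # explosions
X = np.vstack([X_eq, X_ex])
y = np.array([0] * len(X_eq) + [1] * len(X_ex))

lda = LinearDiscriminantAnalysis().fit(X, y)
# The fitted weights give the optimized linear combination of band ratios.
print(lda.coef_, lda.score(X, y))
```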
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
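A minimal particle swarm optimization loop of the kind used to coordinate the robot search is sketched below. The fitness function stands in for the estimated odor-source probability map and is an invented placeholder, as are all parameter values.

```python
# Generic PSO loop; fitness() is a stand-in for the estimated source-probability map.
import numpy as np

def fitness(p):                        # placeholder: probability peak near (8, 3)
    return np.exp(-0.5 * np.sum((p - np.array([8.0, 3.0])) ** 2))

rng = np.random.default_rng(2)
n, dim, w, c1, c2 = 12, 2, 0.7, 1.5, 1.5
pos = rng.uniform(0, 10, size=(n, dim))          # robot/particle positions
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([fitness(p) for p in pos])
    improved = val > pbest_val                   # update personal and global bests
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("estimated source location:", gbest)
```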
Topology optimization of two-dimensional elastic wave barriers
NASA Astrophysics Data System (ADS)
Van hoorickx, C.; Sigmund, O.; Schevenels, M.; Lazarov, B. S.; Lombaert, G.
2016-08-01
Topology optimization is a method that optimally distributes material in a given design domain. In this paper, topology optimization is used to design two-dimensional wave barriers embedded in an elastic halfspace. First, harmonic vibration sources are considered, and stiffened material is inserted into a design domain situated between the source and the receiver to minimize wave transmission. At low frequencies, the stiffened material reflects and guides waves away from the surface. At high frequencies, destructive interference is obtained that leads to high values of the insertion loss. To handle harmonic sources at a frequency in a given range, a uniform reduction of the response over a frequency range is pursued. The minimal insertion loss over the frequency range of interest is maximized. The resulting design contains features at depth leading to a reduction of the insertion loss at the lowest frequencies and features close to the surface leading to a reduction at the highest frequencies. For broadband sources, the average insertion loss in a frequency range is optimized. This leads to designs that especially reduce the response at high frequencies. The designs optimized for the frequency averaged insertion loss are found to be sensitive to geometric imperfections. In order to obtain a robust design, a worst case approach is followed.
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
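The sketch below applies an optimized-gradient-style update to a simple least-squares problem. The coefficient recursion follows the published description of the method as best recalled here (the special choice of the parameter on the final iteration is omitted), and the test problem is an arbitrary example; readers should verify the exact update against the original papers.

```python
# Sketch of an OGM-style iteration on a least-squares problem (final-step tweak omitted).
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 20))
b = rng.normal(size=60)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient

def grad(x):
    return A.T @ (A @ x - b)

x = y_prev = np.zeros(20)
theta = 1.0
for _ in range(200):
    y = x - grad(x) / L                  # plain gradient step
    theta_next = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
    # Compared with Nesterov's method, OGM adds a second momentum-like term (y - x).
    x = y + ((theta - 1) / theta_next) * (y - y_prev) + (theta / theta_next) * (y - x)
    y_prev, theta = y, theta_next

print("final cost:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```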
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yaghoobpour Tari, S; Wachowicz, K; Fallone, B
2016-06-15
Purpose: A prototype rotating hybrid MR imaging system and linac has been developed to allow for simultaneous imaging and radiation delivery parallel to B0. However, the design of a compact magnet capable of rotation in a small vault with sufficient patient access and a typical clinical source-to-surface distance (SSD) is challenging. This work presents a novel superconducting magnet design that allows for a reduced SSD and ample patient access by moving the superconducting coils to the side of the yoke. The yoke and pole-plate structures are shaped to direct the magnetic flux appropriately. Methods: The surface of the pole plate for the magnet assembly is optimized. The magnetic field calculations required in this work are performed with the 3D finite element method software package Opera-3D. Each tentative design strategy is virtually modeled in this software package and externally controlled by MATLAB, with its key geometries defined as variables. The particle swarm optimization algorithm is used to optimize the variables subject to the minimization of a cost function. At each iteration, Opera-3D solves the magnetic field over a field-of-view suitable for MR imaging, and the degree of field uniformity is assessed to calculate the value of the cost function associated with that iteration. Results: An optimized magnet assembly that generates a homogeneous 0.2T magnetic field over an ellipsoid with a major axis of 30 cm and minor axes of 20 cm is obtained. Conclusion: The distinct features of this model are the minimal distance between the yoke's top and the isocentre and the improved patient access. On the other hand, having homogeneity over an ellipsoid gives us a larger field-of-view, essential for the geometric accuracy of the MRI system. The increase of B0 from 0.2T in the present model to 0.5T is the subject of future work. Funding Sources: Alberta Innovates - Health Solutions (AIHS). Disclosure and Conflict of Interest: B. Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta biplanar linac MR for commercialization).
NASA Astrophysics Data System (ADS)
Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.
2017-02-01
r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) provides built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir, with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.
unWISE: Unblurred Coadds of the WISE Imaging
NASA Astrophysics Data System (ADS)
Lang, Dustin
2014-05-01
The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.
How Well Do We Know Pareto Optimality?
ERIC Educational Resources Information Center
Mathur, Vijay K.
1991-01-01
Identifies sources of ambiguity in economics textbooks' discussion of the condition for efficient output mix. Points out that diverse statements without accompanying explanations create confusion among students. Argues that conflicting views concerning the concept of Pareto optimality are one source of ambiguity. Suggests clarifying additions to…
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan
2018-05-21
Energy readings are an efficient and attractive measurement for collaborative acoustic source localization in practical applications, owing to their savings in both energy and computational capability. Maximum likelihood problems that fuse the acoustic energy readings transmitted from local sensors are derived. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and a semidefinite relaxation, respectively, are utilized to derive second-order cone programming, semidefinite programming, or mixed formulations for both the sensor self-localization and source localization cases. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are also relaxed via the direct norm relaxation and semidefinite relaxation into convex optimization problems. Performance comparison with existing acoustic energy-based source localization methods is given, and the results show the validity of the proposed methods.
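A commonly used acoustic energy-decay model underlying such estimators, with the source location recovered by (nonconvex) maximum likelihood before any relaxation, can be written as follows. The symbols are generic, and the decay exponent and noise assumptions are illustrative rather than taken from the paper.

```latex
y_i = \frac{g_i\, S}{\lVert \mathbf{x} - \mathbf{r}_i \rVert^{\alpha}} + \varepsilon_i,
\qquad
(\hat{\mathbf{x}}, \hat{S}) = \arg\min_{\mathbf{x},\, S}
\sum_{i=1}^{N} \frac{1}{\sigma_i^{2}}
\left( y_i - \frac{g_i\, S}{\lVert \mathbf{x} - \mathbf{r}_i \rVert^{\alpha}} \right)^{2}
```

Here y_i is the energy reading at sensor i, g_i its gain, r_i its position, S the source energy, x the unknown source location, alpha the decay exponent (typically near 2), and sigma_i^2 the noise variance; the relaxations in the abstract replace this nonconvex least-squares problem with convex surrogates.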
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the design and manufacturing stages, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the optimization objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
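A toy version of the final step, allocating tolerances by minimizing a cost model under an accuracy constraint with a genetic algorithm, might look like the sketch below. The cost model, sensitivity weights, and accuracy budget are all invented for illustration and do not reflect the paper's actual formulation.

```python
# Toy GA for tolerance allocation: minimize a manufacturing-cost model subject to an
# accuracy budget expressed through (invented) sensitivity weights.
import numpy as np

rng = np.random.default_rng(4)
n_src = 10                                    # number of geometric error sources
sens = rng.uniform(0.5, 2.0, n_src)           # sensitivities to end-effector error
budget = 0.05                                 # allowed end-effector error (mm)
lo, hi = 0.001, 0.02                          # tolerance bounds (mm)

def cost(t):                                  # tighter tolerances cost more
    penalty = max(0.0, sens @ t - budget)     # accuracy-budget violation
    return np.sum(1.0 / t) + 1e6 * penalty

pop = rng.uniform(lo, hi, size=(40, n_src))
for _ in range(200):
    f = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(f)[:20]]         # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(n_src) < 0.5        # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.001, n_src)  # Gaussian mutation
        kids.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, kids])

best = pop[np.argmin([cost(ind) for ind in pop])]
print("allocated tolerances (mm):", np.round(best, 4))
```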
A quantitative approach to the loading rate of seismogenic sources in Italy
NASA Astrophysics Data System (ADS)
Caporali, Alessandro; Braitenberg, Carla; Montone, Paola; Rossi, Giuliana; Valensise, Gianluca; Viganò, Alfio; Zurutuza, Joaquin
2018-06-01
To investigate the transfer of elastic energy between a regional stress field and a set of localized faults, we project the stress rate tensor inferred from the Italian GNSS (Global Navigation Satellite Systems) velocity field onto faults selected from the Database of Individual Seismogenic Sources (DISS 3.2.0). For given Lamé constants and friction coefficient, we compute the loading rate on each fault in terms of the Coulomb failure function (CFF) rate. By varying the strike, dip and rake angles around the nominal DISS values, we also estimate the geometry of planes that are optimally oriented for maximal CFF rate. Out of 86 Individual Seismogenic Sources (ISSs), all well covered by GNSS data, 78-81 (depending on the assumed friction coefficient) load energy at a rate of 0-4 kPa yr⁻¹. The faults displaying larger CFF rates (4-6 ± 1 kPa yr⁻¹) are located in the central Apennines and are all characterized by a significant strike-slip component. We also find that the loading rate of 75% of the examined sources is less than 1 kPa yr⁻¹ lower than that of optimally oriented faults. We also analysed the 2016 August 24 and October 30 central Apennines earthquakes (Mw 6.0 and 6.5, respectively). The strike of their causative faults based on seismological and tectonic data and the geodetically inferred strike differ by <30°. Some sources exhibit a strike oblique to the direction of maximum strain rate, suggesting that in some instances the present-day stress acts on inherited faults. The choice of the friction coefficient only marginally affects this result.
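For reference, the Coulomb failure function change commonly used in such loading-rate calculations takes the following standard form, with the shear stress resolved in the slip (rake) direction and an effective friction coefficient absorbing pore-pressure effects; this is the textbook expression, not necessarily the exact convention adopted by the authors.

```latex
\Delta \mathrm{CFF} = \Delta \tau + \mu' \, \Delta \sigma_n
```

Here Delta-tau is the change in shear stress resolved on the fault in the slip direction, Delta-sigma_n the change in normal stress (tension positive), and mu' the effective friction coefficient; the paper's CFF rate is the time derivative of this quantity driven by the geodetically inferred stress rate tensor.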
SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Wang, L.
1994-01-01
SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. This allows the GA kernel to be used with any representation scheme. The SPLICER tool provides representation libraries for binary strings and for permutations. These libraries contain functions for the definition, creation, and decoding of genetic strings, as well as multiple crossover and mutation operators. Furthermore, the SPLICER tool defines the appropriate interfaces to allow users to create new representation libraries. Fitness modules are the only component of the SPLICER system a user will normally need to create or alter to solve a particular problem. Fitness functions are defined and stored in interchangeable fitness modules which must be created using C language. Within a fitness module, a user can create a fitness (or scoring) function, set the initial values for various SPLICER control parameters (e.g., population size), create a function which graphically displays the best solutions as they are found, and provide descriptive information about the problem. The tool comes with several example fitness modules, while the process of developing a fitness module is fully discussed in the accompanying documentation. The user interface is event-driven and provides graphic output in windows. SPLICER is written in Think C for Apple Macintosh computers running System 6.0.3 or later and Sun series workstations running SunOS. The UNIX version is easily ported to other UNIX platforms and requires MIT's X Window System, Version 11 Revision 4 or 5, MIT's Athena Widget Set, and the Xw Widget Set. Example executables and source code are included for each machine version. 
The standard distribution medium for the Macintosh version is a set of three 3.5 inch Macintosh format diskettes. The standard distribution medium for the UNIX version is a .25 inch streaming magnetic tape cartridge in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. SPLICER was developed in 1991.
Optimizing Within-Subject Experimental Designs for jICA of Multi-Channel ERP and fMRI
Mangalathu-Arumana, Jain; Liebenthal, Einat; Beardsley, Scott A.
2018-01-01
Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated as a function of imaging SNR, number of independent representations of the ERP/fMRI data, relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and type of sources (varying parametrically and non-parametrically across representations of the data), using computer simulations. Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10–30), as in a mixed block/event related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear or uncoupled, did not in itself impact jICA performance, and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions, in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected. PMID:29410611
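Conceptually, jICA concatenates the ERP and fMRI feature vectors of each representation of the data and runs a single ICA, so that the recovered components share one set of mixing coefficients across modalities. The sketch below illustrates that idea on synthetic data with a generic ICA implementation; it is not the authors' pipeline, and all sizes and names are invented.

```python
# Minimal joint-ICA illustration: concatenate ERP and fMRI feature vectors per
# representation and run one ICA so sources share mixing coefficients. Synthetic data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n_rep, n_erp, n_fmri = 20, 300, 500         # representations x feature sizes
S = rng.laplace(size=(n_rep, 3))            # 3 shared mixing profiles (ground truth)
A_erp = rng.normal(size=(3, n_erp))         # modality-specific source maps
A_fmri = rng.normal(size=(3, n_fmri))
X = np.hstack([S @ A_erp, S @ A_fmri])      # joint data matrix (reps x features)

ica = FastICA(n_components=3, random_state=0)
profiles = ica.fit_transform(X)             # shared mixing profiles across modalities
joint_maps = ica.mixing_.T                  # concatenated ERP|fMRI component maps
erp_maps, fmri_maps = joint_maps[:, :n_erp], joint_maps[:, n_erp:]
print(profiles.shape, erp_maps.shape, fmri_maps.shape)
```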
Compact blackbody calibration sources for in-flight calibration of spaceborne infrared instruments
NASA Astrophysics Data System (ADS)
Scheiding, S.; Driescher, H.; Walter, I.; Hanbuch, K.; Paul, M.; Hartmann, M.; Scheiding, M.
2017-11-01
High-emissivity blackbodies are mandatory as calibration sources in infrared radiometers. Besides the requirements on high spectral emissivity and low reflectance, constraints regarding energy consumption, installation space and mass must be considered during instrument design. Cavity radiators provide an outstanding spectral emissivity at the price of the installation space and mass of the calibration source. Surface radiation sources are mainly limited by the spectral emissivity of the functional coating and the homogeneity of the temperature distribution. The effective emissivity of a "black" surface can be optimized by structuring the substrate with the aim of enlarging the ratio of the surface to its projection. Based on the experience with the Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) calibration source MBB3, the results of the surface structuring on the effective emissivity are described analytically and compared to the experimental performance. Different geometries are analyzed and the production methods are discussed. The high-emissivity temperature calibration source features emissivity values of 0.99 for wavelengths from 5 μm to 10 μm and emissivity larger than 0.95 for the spectral range from 10 μm to 40 μm.
Real-time Adaptive EEG Source Separation using Online Recursive Independent Component Analysis
Hsu, Sheng-Hsiou; Mullen, Tim; Jung, Tzyy-Ping; Cauwenberghs, Gert
2016-01-01
Independent Component Analysis (ICA) has been widely applied to electroencephalographic (EEG) biosignal processing and brain-computer interfaces. The practical use of ICA, however, is limited by its computational complexity, data requirements for convergence, and assumption of data stationarity, especially for high-density data. Here we study and validate an optimized online recursive ICA algorithm (ORICA) with online recursive least squares (RLS) whitening for blind source separation of high-density EEG data, which offers instantaneous incremental convergence upon presentation of new data. Empirical results of this study demonstrate the algorithm's: (a) suitability for accurate and efficient source identification in high-density (64-channel) realistically-simulated EEG data; (b) capability to detect and adapt to non-stationarity in 64-ch simulated EEG data; and (c) utility for rapidly extracting principal brain and artifact sources in real 61-channel EEG data recorded by a dry and wearable EEG system in a cognitive experiment. ORICA was implemented as functions in BCILAB and EEGLAB and was integrated in an open-source Real-time EEG Source-mapping Toolbox (REST), supporting applications in ICA-based online artifact rejection, feature extraction for real-time biosignal monitoring in clinical environments, and adaptable classifications in brain-computer interfaces. PMID:26685257
Xiong, Yi; Wu, Vincent W.; Lubbe, Andrea; ...
2017-05-03
In Neurospora crassa, the transcription factor COL-26 functions as a regulator of glucose signaling and metabolism. Its loss leads to resistance to carbon catabolite repression. Here, we report that COL-26 is necessary for the expression of amylolytic genes in N. crassa and is required for the utilization of maltose and starch. Additionally, the Δcol-26 mutant shows growth defects on preferred carbon sources, such as glucose, an effect that was alleviated if glutamine replaced ammonium as the primary nitrogen source. This rescue did not occur when maltose was used as a sole carbon source. Transcriptome and metabolic analyses of the Δcol-26 mutant relative to its wild-type parental strain revealed that amino acid and nitrogen metabolism, the TCA cycle and the GABA shunt were adversely affected. Phylogenetic analysis showed a single col-26 homolog in the Sordariales, Ophiostomatales, and Magnaporthales, but an expanded number of col-26 homologs in other filamentous fungal species. Deletion of the closest homolog of col-26 in Trichoderma reesei, bglR, resulted in a mutant with a similar growth deficiency on preferred carbon sources, which was alleviated if glutamine was the sole nitrogen source, suggesting conservation of COL-26 and BglR function. Our findings provide novel insight into the role of COL-26 in the utilization of starch and in integrating carbon and nitrogen metabolism for balanced metabolic activities and optimal carbon and nitrogen distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, B; Liu, B; Li, Y
2016-06-15
Purpose: Treatment plan optimization in multi-Co60 source focused radiotherapy with multiple isocenters is challenging, because the dose distribution is normalized to the maximum dose during optimization and evaluation, and objective functions are traditionally defined based on the relative dosimetric distribution. This study presents an alternative absolute dose-volume constraint (ADC) based deterministic optimization framework (ADC-DOF). Methods: The initial isocenters are placed on the eroded target surface. The collimator size is chosen based on the area of the 2D contour on the corresponding axial slice. The isocenter spacing is determined by adjacent collimator sizes. The weights are optimized by minimizing the deviation from the ADCs using the steepest descent technique. An iterative procedure is developed to reduce the number of isocenters, where the isocenter with the lowest weight is removed without affecting plan quality. The ADC-DOF is compared with a genetic algorithm (GA) using the same arbitrarily shaped target (254 cc), with a 15 mm margin ring structure representing normal tissues. Results: For ADC-DOF, the ADCs imposed on the target and ring are (D100 > 10 Gy; D50, D10, D0 < 12 Gy, 15 Gy and 20 Gy) and (D40 < 10 Gy). The resulting D100, D50, D10, D0 and D40 are (9.9 Gy, 12.0 Gy, 14.1 Gy and 16.2 Gy) and (10.2 Gy). The objectives of the GA are to maximize the 50% isodose target coverage (TC) while minimizing the dose delivered to the ring structure, which results in 97% TC and 47.2% average dose in the ring structure. For the ADC-DOF (GA) techniques, 20 out of 38 (10 out of 12) initial isocenters are used in the final plan, and the computation time is 8.7 s (412.2 s) on an i5 computer. Conclusion: We have developed a new optimization technique using ADCs and deterministic optimization. Compared with the GA, ADC-DOF uses more isocenters but is faster and more robust, and achieves better conformity. For future work, we will focus on developing a more effective mechanism for initial isocenter determination.
Course 4: Density Functional Theory, Methods, Techniques, and Applications
NASA Astrophysics Data System (ADS)
Chrétien, S.; Salahub, D. R.
Contents
1 Introduction
2 Density functional theory
2.1 Hohenberg and Kohn theorems
2.2 Levy's constrained search
2.3 Kohn-Sham method
3 Density matrices and pair correlation functions
4 Adiabatic connection or coupling strength integration
5 Comparing and contrasting KS-DFT and HF-CI
6 Preparing new functionals
7 Approximate exchange and correlation functionals
7.1 The Local Spin Density Approximation (LSDA)
7.2 Gradient Expansion Approximation (GEA)
7.3 Generalized Gradient Approximation (GGA)
7.4 meta-Generalized Gradient Approximation (meta-GGA)
7.5 Hybrid functionals
7.6 The Optimized Effective Potential method (OEP)
7.7 Comparison between various approximate functionals
8 LAP correlation functional
9 Solving the Kohn-Sham equations
9.1 The Kohn-Sham orbitals
9.2 Coulomb potential
9.3 Exchange-correlation potential
9.4 Core potential
9.5 Other choices and sources of error
9.6 Functionality
10 Applications
10.1 Ab initio molecular dynamics for an alanine dipeptide model
10.2 Transition metal clusters: The ecstasy, and the agony...
10.3 The conversion of acetylene to benzene on Fe clusters
11 Conclusions
Wilén, Britt-Marie; Liébana, Raquel; Persson, Frank; Modin, Oskar; Hermansson, Malte
2018-06-01
Granular activated sludge has gained increasing interest due to its potential in treating wastewater in a compact and efficient way. It is well-established that activated sludge can form granules under certain environmental conditions such as batch-wise operation with feast-famine feeding, high hydrodynamic shear forces, and short settling time which select for dense microbial aggregates. Aerobic granules with stable structure and functionality have been obtained with a range of different wastewaters seeded with different sources of sludge at different operational conditions, but the microbial communities developed differed substantially. In spite of this, granule instability occurs. In this review, the available literature on the mechanisms involved in granulation and how it affects the effluent quality is assessed with special attention given to the microbial interactions involved. To be able to optimize the process further, more knowledge is needed regarding the influence of microbial communities and their metabolism on granule stability and functionality. Studies performed at conditions similar to full-scale such as fluctuation in organic loading rate, hydrodynamic conditions, temperature, incoming particles, and feed water microorganisms need further investigations.
Optimization of lamp spectrum for vegetable growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prikupets, L.B.; Tikhomirov, A.A.
1994-12-31
Commercial light sources were evaluated to determine the optimum conditions for the production of tomatoes and cucumbers. Data are presented corresponding to the maximum productivity and optimal spectral ratios. It is suggested that the commercial light sources evaluated were not efficient for growing these vegetables.
Eytan, Danny; Pang, Elizabeth W; Doesburg, Sam M; Nenadovic, Vera; Gavrilovic, Bojan; Laussen, Peter; Guerguerian, Anne-Marie
2016-01-01
Acute brain injury is a common cause of death and critical illness in children and young adults. Fundamental management focuses on early characterization of the extent of injury and optimizing recovery by preventing secondary damage during the days following the primary injury. Currently, bedside technology for measuring neurological function is mainly limited to using electroencephalography (EEG) for detection of seizures and encephalopathic features, and evoked potentials. We present a proof of concept study in patients with acute brain injury in the intensive care setting, featuring a bedside functional imaging set-up designed to map cortical brain activation patterns by combining high density EEG recordings, multi-modal sensory stimulation (auditory, visual, and somatosensory), and EEG source modeling. Use of source-modeling allows for examination of spatiotemporal activation patterns at the cortical region level as opposed to the traditional scalp potential maps. The application of this system in both healthy and brain-injured participants is demonstrated with modality-specific source-reconstructed cortical activation patterns. By combining stimulation obtained with different modalities, most of the cortical surface can be monitored for changes in functional activation without having to physically transport the subject to an imaging suite. The results in patients in an intensive care setting with anatomically well-defined brain lesions suggest a topographic association between their injuries and activation patterns. Moreover, we report the reproducible application of a protocol examining a higher-level cortical processing with an auditory oddball paradigm involving presentation of the patient's own name. This study reports the first successful application of a bedside functional brain mapping tool in the intensive care setting. This application has the potential to provide clinicians with an additional dimension of information to manage critically-ill children and adults, and potentially patients not suited for magnetic resonance imaging technologies.
NASA Astrophysics Data System (ADS)
An, Chenjie; Zhu, Rui; Xu, Jun; Liu, Yaqi; Hu, Xiaopeng; Zhang, Jiasen; Yu, Dapeng
2018-05-01
Electron sources driven by femtosecond lasers have important applications in many areas, and research on their intrinsic emittance is becoming increasingly important. The intrinsic emittance of a polycrystalline copper cathode, illuminated by femtosecond pulses (FWHM pulse duration of about 100 fs) with photon energies above and below the work function, was measured with an extremely low bunch charge (single-electron pulses) based on the free expansion method. A minimum emittance was obtained at a photon energy very close to the effective work function of the cathode. When the photon energy decreased below the effective work function, the emittance increased rather than decreasing or flattening out to a constant. By investigating the dependence of the photocurrent density on the incident laser intensity, we found that the emission excited by pulsed photons with sub-work-function energies contained two-photon photoemission. In addition, the portion of two-photon photoemission current increased as the photon energy was reduced. We attribute the increase of the emittance to the effect of two-photon photoemission. This work shows that the conventional method of reducing the photon energy of the excitation light source to approach the room-temperature limit of the intrinsic emittance may be infeasible for femtosecond lasers. There would be an optimal photon energy near the work function that yields the lowest emittance for a pulsed-laser-pumped photocathode.
Identifying Attributes of CO2 Leakage Zones in Shallow Aquifers Using a Parametric Level Set Method
NASA Astrophysics Data System (ADS)
Sun, A. Y.; Islam, A.; Wheeler, M.
2016-12-01
Leakage through abandoned wells and geologic faults poses the greatest risk to CO2 storage permanence. For shallow aquifers, secondary CO2 plumes emanating from the leak zones may go undetected for a sustained period of time and have the greatest potential to cause large-scale and long-term environmental impacts. Identification of the attributes of leak zones, including their shape, location, and strength, is required for proper environmental risk assessment. This study applies a parametric level set (PaLS) method to characterize the leakage zone. Level set methods are appealing for tracking topological changes and recovering unknown shapes of objects. However, level set evolution using conventional level set methods is challenging. In PaLS, the level set function is approximated using a weighted sum of basis functions, and the level set evolution problem is replaced by an optimization problem. The efficacy of PaLS is demonstrated through recovering the source zone created by CO2 leakage into a carbonate aquifer. Our results show that PaLS is a robust source identification method that can recover the approximate source locations in the presence of measurement errors, model parameter uncertainty, and inaccurate initial guesses of source flux strengths. The PaLS inversion framework introduced in this work is generic and can be adapted for any reactive transport model by switching the pre- and post-processing routines.
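In a typical parametric level set formulation, for example one built from radial basis functions, the level set function and the recovered source region take a form like the one below. The notation is generic and the specific basis choice is an assumption, not necessarily the one used in this study.

```latex
\phi(\mathbf{x}) = \sum_{j=1}^{m} \alpha_j \,\psi\!\left(\frac{\lVert \mathbf{x} - \boldsymbol{\chi}_j \rVert}{\beta_j}\right),
\qquad
\Omega_{\text{source}} = \{\, \mathbf{x} : \phi(\mathbf{x}) > c \,\}
```

Here psi is a (often compactly supported) radial basis function, while the weights alpha_j, widths beta_j and centres chi_j are the low-dimensional parameters optimized in place of the full level set, and c is the level set threshold defining the source region.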
A novel method for energy harvesting simulation based on scenario generation
NASA Astrophysics Data System (ADS)
Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min
2018-06-01
Energy harvesting networks (EHNs) are a new form of computer network. An EHN converts ambient energy into usable electric energy and supplies it as a primary or secondary power source to communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which may not accurately reflect actual conditions. We propose an EHN simulation method based on scenario generation in this paper. Firstly, instead of assuming a probability distribution in advance, it uses optimal scenario reduction to generate representative single-period scenarios from historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences, giving a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we apply the method to optimize network throughput, and the optimal solution and data analysis indicate the feasibility and effectiveness of the proposed approach for energy harvesting simulation.
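A minimal sketch of the kind of scenario-reduction step described above is given below: a standard backward-reduction heuristic that repeatedly drops the scenario whose removal costs the least in probability-weighted distance and reassigns its probability to the nearest remaining scenario. The harvested-energy samples and target scenario count are invented for the example, and this is not the paper's exact algorithm.

```python
# Backward scenario reduction for harvested-energy samples: keep a small representative
# set, reassigning each dropped scenario's probability to its nearest kept scenario.
import numpy as np

rng = np.random.default_rng(6)
scenarios = rng.gamma(shape=2.0, scale=1.5, size=200)   # synthetic hourly energy (J)
prob = np.full(200, 1 / 200)
target = 10                                             # number of scenarios to keep

keep = list(range(len(scenarios)))
while len(keep) > target:
    # cost of dropping scenario i = p_i * distance to its closest remaining scenario
    costs = []
    for i in keep:
        d = min(abs(scenarios[i] - scenarios[j]) for j in keep if j != i)
        costs.append(prob[i] * d)
    drop = keep[int(np.argmin(costs))]
    keep.remove(drop)
    nearest = min(keep, key=lambda j: abs(scenarios[j] - scenarios[drop]))
    prob[nearest] += prob[drop]                         # probability reassignment
    prob[drop] = 0.0

print("representative scenarios:", np.round(scenarios[keep], 2))
print("probabilities:", np.round(prob[keep], 3))
```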
Noise in Charge Amplifiers— A gm/ID Approach
NASA Astrophysics Data System (ADS)
Alvarez, Enrique; Avila, Diego; Campillo, Hernan; Dragone, Angelo; Abusleme, Angel
2012-10-01
Charge amplifiers represent the standard solution for amplifying signals from capacitive detectors in high energy physics experiments. In a typical front-end, the noise due to the charge amplifier, and particularly from its input transistor, limits the achievable resolution. The classic approach to attenuating noise effects in MOSFET charge amplifiers is to use the maximum power available, to use a minimum-length input device, and to choose the input transistor width so as to achieve optimal capacitive matching at the input node. These conclusions, reached by analysis based on simple noise models, lead to sub-optimal results. In this work, a new approach to noise analysis for charge amplifiers, based on an extension of the gm/ID methodology, is presented. This method combines circuit equations and results from SPICE simulations, both valid for all operation regions and including all noise sources. The method, which makes it possible to find the optimal operating point of the charge amplifier input device for maximum resolution, shows that the minimum device length is not necessarily optimal, shows that flicker noise is responsible for the non-monotonic noise-versus-current function, and provides deeper insight into the noise-limiting mechanisms from an alternative and more design-oriented point of view.
Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.
Kopp, O; Markert, S; Tornow, R P
2002-01-01
To develop and test a procedure to measure and compare the light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images was measured as a function of light intensity (I). The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. There were only small differences in linearity. The step response depends on the procedure of integration and read-out.
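The sensitivity and linearity figures described above amount to a straight-line fit of pixel value against intensity; a minimal sketch with synthetic measurements is shown below.

```python
# Sensitivity = slope of the PV(I) line; linearity = correlation coefficient.
import numpy as np

intensity = np.linspace(0.1, 1.0, 10)                   # relative light intensity
pixel_value = 200 * intensity + np.random.default_rng(7).normal(0, 2, 10)

slope, offset = np.polyfit(intensity, pixel_value, 1)   # sensitivity from the slope
r = np.corrcoef(intensity, pixel_value)[0, 1]           # linearity estimate

print(f"sensitivity = {slope:.1f} PV per unit intensity, linearity r = {r:.4f}")
```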
A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.
Leung, Chi-Sing; Wan, Wai Yan; Feng, Ruibin
2017-06-01
Many existing results on fault-tolerant algorithms focus on the single fault source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define the objective function for training fault-tolerant RBF networks. Based on the objective function, we then develop two learning algorithms, one batch mode and one online mode. In addition, the convergence conditions of our online algorithm are investigated. Finally, we develop a formula to estimate the test set error of faulty networks trained with our approach. This formula helps us to optimize some tuning parameters, such as the RBF width.
Acoustic field of a pulsating cylinder in a rarefied gas: Thermoviscous and curvature effects
NASA Astrophysics Data System (ADS)
Ben Ami, Y.; Manela, A.
2017-09-01
We study the acoustic field of a circular cylinder immersed in a rarefied gas and subject to harmonic small-amplitude normal-to-wall displacement and heat-flux excitations. The problem is analyzed in the entire range of gas rarefaction rates and excitation frequencies, considering both single cylinder and coaxial cylinders setups. Numerical calculations are carried out via the direct simulation Monte Carlo method, applying a noniterative algorithm to impose the boundary heat-flux condition. Analytical predictions are obtained in the limits of ballistic- and continuum-flow conditions. Comparing with a reference inviscid continuum solution, the results illustrate the specific impacts of gas rarefaction and boundary curvature on the acoustic source efficiency. Inspecting the far-field properties of the generated disturbance, the continuum-limit solution exhibits an exponential decay of the signal with the distance from the source, reflecting thermoviscous effects, and accompanied by an inverse square-root decay, characteristic of the inviscid problem. Stronger attenuation is observed in the ballistic limit, where boundary curvature results in "geometric reduction" of the molecular layer affected by the source, and the signal vanishes at a distance of few acoustic wavelengths from the cylinder. The combined effects of mechanical and thermal excitations are studied to seek for optimal conditions to monitor the vibroacoustic signal. The impact of boundary curvature becomes significant in the ballistic-flow regime, where the optimal heat-flux amplitude required for sound reduction decreases with the distance from the source and is essentially a function of the acoustic-wavelength-scaled distance only.
Factors influencing the genesis of neurosurgical technology.
Bergman, William C; Schulz, Raymond A; Davis, Deanna S
2009-09-01
For any new technology to gain acceptance, it must not only adequately fill a true need, but must also function optimally within the confines of coexisting technology and concurrently available support systems. As an example, over the first decades of the 20th century, a number of drill designs used to perform cranial bone cuts appeared, fell out of favor, and later reappeared as certain supportive technologies emerged. Ultimately, it was the power source that caused one device to prevail. In contrast, a brilliant imaging device, designed to demonstrate an axial view of the lumbar spine, was never allowed to gain acceptance because it was immediately superseded by another device of no greater innovation, but one that performed optimally with popular support technology. The authors discuss the factors that have bearing on the evolution of neurosurgical technology.
NASA Astrophysics Data System (ADS)
Crouse, Michael; Liebmann, Lars; Plachecki, Vince; Salama, Mohamed; Chen, Yulu; Saulnier, Nicole; Dunn, Derren; Matthew, Itty; Hsu, Stephen; Gronlund, Keith; Goodwin, Francis
2017-03-01
The initial readiness of EUV patterning was demonstrated in 2016 with IBM Alliance's 7nm device technology. The focus has now shifted to driving the 'effective' k1 factor and enabling the second generation of EUV patterning. Thus, Design Technology Co-optimization (DTCO) has become a critical part of technology enablement as scaling has become more challenging and the industry pushes the limits of EUV lithography. The working partnership between the design teams and the process development teams typically involves an iterative approach to evaluate the manufacturability of proposed designs, make subsequent modifications to those designs, and finally produce a design manual for the technology. While this approach has served the industry well for many generations, the challenges at the Beyond 7nm node require a more efficient approach. In this work, we describe the use of "Design Intent" lithographic layout optimization, where we remove the iterative component of DTCO and replace it with an optimization that achieves a "patterning friendly" design while minimizing the well-known EUV stochastic effects. Solved together, this "design intent" approach can more quickly achieve superior lithographic results while still meeting the original device's functional specifications. Specifically, in this work we demonstrate "design intent" optimization for critical BEOL layers using design tolerance bands to guide the source-mask co-optimization. The design tolerance bands can either be supplied as part of the original design or be derived from some basic rules. Additionally, EUV stochastic behavior is mitigated by enhancing the image log slope (ILS) for specific key features as part of the overall optimization. We show the benefit of the "design intent" approach on both bidirectional and unidirectional 28nm minimum-pitch standard logic layouts and compare it with the more typical iterative SMO approach, thus demonstrating the benefit of allowing the design to float within the specified range. Lastly, we discuss how the evolution of this approach could lead to layout optimization based entirely on some minimal set of functional requirements and process constraints.
Zakaria, Siti Maisurah; Kamal, Siti Mazlina Mustapa; Harun, Mohd Razif; Omar, Rozita; Siajam, Shamsul Izhar
2017-07-03
Chlorella sp. microalgae are a potential source of antioxidants and natural bioactive compounds used in the food and pharmaceutical industries. In this study, subcritical water (SW) technology was applied to determine the phenolic content and antioxidant activity of Chlorella sp. This study focused on maximizing the recovery of Chlorella sp. phenolic content and antioxidant activity, measured by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay, as a function of extraction temperature (100-250 °C), time (5-20 min) and microalgae concentration (5-20 wt.%) using response surface methodology. The optimal operating conditions for the extraction process were found to be 5 min at 163 °C with 20 wt.% microalgae concentration, which resulted in products with 58.73 mg gallic acid equivalent (GAE)/g phenolic content and 68.5% inhibition of the DPPH radical. Under optimized conditions, the experimental values were in close agreement with values predicted by the model. The phenolic content was highly correlated (R² = 0.935) with the antioxidant capacity. Results indicated that extraction by SW technology was effective and that Chlorella sp. could be a useful source of natural antioxidants.
Stochastic Optimization for Nuclear Facility Deployment Scenarios
NASA Astrophysics Data System (ADS)
Hays, Ross Daniel
Single-use, low-enriched uranium oxide fuel, consumed through several cycles in a light-water reactor (LWR) before being disposed of, has become the dominant source of commercial-scale nuclear electric generation in the United States and throughout the world. However, it is not without its drawbacks and is not the only potential nuclear fuel cycle available. Numerous alternative fuel cycles have been proposed at various times which, through the use of different reactor and recycling technologies, offer to counteract many of the perceived shortcomings with regard to waste management, resource utilization, and proliferation resistance. However, due to the varying maturity levels of these technologies, the complicated material flow feedback interactions their use would require, and the large capital investments in the current technology, one should not deploy these advanced designs without first investigating the potential costs and benefits of doing so. As the interactions among these systems can be complicated, and the ways in which they may be deployed are many, the application of automated numerical optimization to the simulation of the fuel cycle could potentially be of great benefit to researchers and interested policy planners. To investigate the potential of these methods, a computational program has been developed that applies a parallel, multi-objective simulated annealing algorithm to a computational optimization problem defined by a library of relevant objective functions applied to the Verifiable Fuel Cycle Simulation Model (VISION, developed at the Idaho National Laboratory). The VISION model, when given a specified fuel cycle deployment scenario, computes the numbers and types of, and construction, operation, and utilization schedules for, the nuclear facilities required to meet a predetermined electric power demand function. Additionally, it calculates the location and composition of the nuclear fuels within the fuel cycle, from initial mining through to eventual disposal. By varying the specifications of the deployment scenario, the simulated annealing algorithm will seek to either minimize the value of a single objective function, or enumerate the trade-off surface between multiple competing objective functions. The available objective functions represent key stakeholder values, minimizing such important factors as high-level waste disposal burden, required uranium ore supply, relative proliferation potential, and economic cost and uncertainty. The optimization program itself is designed to be modular, allowing for continued expansion and exploration as research needs and curiosity indicate. The utility and functionality of this optimization program are demonstrated through its application to one potential fuel cycle scenario of interest. In this scenario, an existing legacy LWR fleet is assumed at the year 2000. The electric power demand grows exponentially at a rate of 1.8% per year through the year 2100. Initially, new demand is met by the construction of 1-GW(e) LWRs. However, beginning in the year 2040, 600-MW(e) sodium-cooled, fast-spectrum reactors operating in a transuranic burning regime with full recycling of spent fuel become available to meet demand. By varying the fraction of new capacity allocated to each reactor type, the optimization program is able to explicitly show the relationships that exist between uranium utilization, long-term heat for geologic disposal, and cost-of-electricity objective functions.
The trends associated with these trade-off surfaces tend to confirm many common expectations about the use of nuclear power, namely that while overall it is quite insensitive to variations in the cost of uranium ore, it is quite sensitive to changes in the capital costs of facilities. The optimization algorithm has shown itself to be robust and extensible, with possible extensions to many further fuel cycle optimization problems of interest.
Optimization of the Monte Carlo code for modeling of photon migration in tissue.
Zołek, Norbert S; Liebert, Adam; Maniewski, Roman
2006-10-01
The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, which allow complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithmic and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
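A minimal sketch of the idea, not the authors' exact polynomial/rational approximations: fit a low-order polynomial to the logarithm, measure its error, and use it where photon transport normally calls the exact function (step-length sampling). The interval, degree, and attenuation coefficient below are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Fit a low-order polynomial (Chebyshev basis) to ln(x) on a clipped interval;
# ln diverges at 0, so practical schemes combine this with range reduction.
lo, hi = 1e-2, 1.0
x = np.linspace(lo, hi, 10_000)
ln_approx = Chebyshev.fit(x, np.log(x), deg=9)

err = ln_approx(x) - np.log(x)
print("max |error| of the approximation:", float(np.abs(err).max()))

# Where the approximation is used in photon transport: sampling the free path
# s = -ln(xi) / mu_t from a uniform deviate xi (mu_t is an illustrative value).
mu_t = 10.0                      # total attenuation coefficient, 1/mm (assumed)
rng = np.random.default_rng(0)
xi = rng.uniform(lo, hi, 1_000_000)
print("mean step, exact vs. approximate:",
      float((-np.log(xi) / mu_t).mean()),
      float((-ln_approx(xi) / mu_t).mean()))
```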
Optimal Path Determination for Flying Vehicle to Search an Object
NASA Astrophysics Data System (ADS)
Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.
2018-01-01
In this paper, a method to determine an optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. The paper describes a control-design model for a flying vehicle searching for an object and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here, the cost functional is chosen so that the air vehicle reaches the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; convexity of the cost functional guarantees the existence of the optimal control. Simulations illustrate an optimal path for the flying vehicle searching for an object. The optimization method used to find the optimal control and optimal path in this paper is the Pontryagin Minimum Principle.
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial improvement in the complexity of optimization problems which can be efficiently handled.
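Neither SWAN nor NPSOL is reproduced here; as a hedged sketch of the kind of bound- and constraint-handling such an SQP solver provides, the toy "dose minimization at constant cost" problem below uses SciPy's SLSQP (also a sequential quadratic programming method). The dose and cost models, bounds and budget are invented for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a two-layer shield: x holds the layer thicknesses (cm).
# The dose and cost models below are illustrative, not SWAN physics.
def dose(x):            # exponential attenuation through two layers
    return 100.0 * np.exp(-0.5 * x[0] - 0.8 * x[1])

def cost(x):            # linear material cost
    return 3.0 * x[0] + 7.0 * x[1]

budget = 60.0
res = minimize(
    dose,                              # minimize dose ...
    x0=np.array([5.0, 5.0]),
    method="SLSQP",                    # sequential quadratic programming
    bounds=[(0.0, 20.0), (0.0, 20.0)], # simple bounds on variables
    constraints=[{"type": "ineq",      # ... subject to a cost budget
                  "fun": lambda x: budget - cost(x)}],
)
print(res.x, dose(res.x), cost(res.x))
```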
Comparative study of beam losses and heat loads reduction methods in MITICA beam source
NASA Astrophysics Data System (ADS)
Sartori, E.; Agostinetti, P.; Dal Bello, S.; Marcuzzi, D.; Serianni, G.; Sonato, P.; Veltri, P.
2014-02-01
In negative ion electrostatic accelerators a considerable fraction of extracted ions is lost by collision processes causing efficiency loss and heat deposition over the components. Stripping is proportional to the local density of gas, which is steadily injected in the plasma source; its pumping from the extraction and acceleration stages is a key functionality for the prototype of the ITER Neutral Beam Injector, and it can be simulated with the 3D code AVOCADO. Different geometric solutions were tested aiming at the reduction of the gas density. The parameter space considered is limited by constraints given by optics, aiming, voltage holding, beam uniformity, and mechanical feasibility. The guidelines of the optimization process are presented together with the proposed solutions and the results of numerical simulations.
Constraint-Muse: A Soft-Constraint Based System for Music Therapy
NASA Astrophysics Data System (ADS)
Hölzl, Matthias; Denker, Grit; Meier, Max; Wirsing, Martin
Monoidal soft constraints are a versatile formalism for specifying and solving multi-criteria optimization problems with dynamically changing user preferences. We have developed a prototype tool for interactive music creation, called Constraint Muse, that uses monoidal soft constraints to ensure that a dynamically generated melody harmonizes with input from other sources. Constraint Muse provides an easy to use interface based on Nintendo Wii controllers and is intended to be used in music therapy for people with Parkinson’s disease and for children with high-functioning autism or Asperger’s syndrome.
Optimizing X-Ray Optical Prescriptions for Wide-Field Applications
NASA Technical Reports Server (NTRS)
Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C.
2010-01-01
X-ray telescopes with spatial resolution optimized over the field of view (FOV) are of special interest for missions, such as WFXT, focused on moderately deep and deep surveys of the x-ray sky, and for solar x-ray observations. Here we report on the present status of an on-going study of the properties of Wolter I and polynomial grazing-incidence designs, with a view to gaining a deeper insight into their properties and simplifying the design process. With these goals in mind, we present some results on the complementary topics of (1) properties of Wolter I x-ray optics and (2) polynomial x-ray optic ray tracing. Of crucial importance for the design of wide-field x-ray optics is the optimization criterion. Here we have adopted the minimization of a merit function, M, which measures the spatial resolution averaged over the FOV: M = \left[ \int_0^{2\pi} d\phi \int_0^{\theta_{\rm FOV}} d\theta\, \theta\, w(\theta)\, \sigma^2(\theta,\phi) \right] \Big/ \left[ \int_0^{2\pi} d\phi \int_0^{\theta_{\rm FOV}} d\theta\, \theta\, w(\theta) \right], where w(\theta) is a weighting function and \sigma^2(\theta,\phi) = \sum_{(x,y,z)} [
Stock price change rate prediction by utilizing social network activities.
Deng, Shangkun; Mitsubuchi, Takashi; Sakurai, Akito
2014-01-01
Predicting stock price change rates for providing valuable information to investors is a challenging task. Individual participants may express their opinions in social network service (SNS) before or after their transactions in the market; we hypothesize that stock price change rate is better predicted by a function of social network service activities and technical indicators than by a function of just stock market activities. The hypothesis is tested by accuracy of predictions as well as performance of simulated trading because success or failure of prediction is better measured by profits or losses the investors gain or suffer. In this paper, we propose a hybrid model that combines multiple kernel learning (MKL) and genetic algorithm (GA). MKL is adopted to optimize the stock price change rate prediction models that are expressed in a multiple kernel linear function of different types of features extracted from different sources. GA is used to optimize the trading rules used in the simulated trading by fusing the return predictions and values of three well-known overbought and oversold technical indicators. Accumulated return and Sharpe ratio were used to test the goodness of performance of the simulated trading. Experimental results show that our proposed model performed better than other models including ones using state of the art techniques.
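A hedged sketch of the core idea only: features from different sources enter through a linear combination of per-source kernels, here fed to a support vector regressor with a precomputed kernel. The kernel weights are fixed for illustration rather than learned (as true MKL would do), and all data are synthetic; this is not the authors' model or trading rule.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)
n = 300
X_sns = rng.normal(size=(n, 4))     # social-network activity features (synthetic)
X_ti = rng.normal(size=(n, 6))      # technical-indicator features (synthetic)
y = 0.3 * X_sns[:, 0] - 0.2 * X_ti[:, 1] + 0.05 * rng.normal(size=n)  # toy change rate

tr, te = slice(0, 200), slice(200, n)

def combined_kernel(A_sns, B_sns, A_ti, B_ti, beta=(0.5, 0.5)):
    # Linear combination of per-source RBF kernels; in true MKL the weights beta
    # are learned jointly with the regressor, here they are fixed for illustration.
    return beta[0] * rbf_kernel(A_sns, B_sns) + beta[1] * rbf_kernel(A_ti, B_ti)

K_train = combined_kernel(X_sns[tr], X_sns[tr], X_ti[tr], X_ti[tr])
K_test = combined_kernel(X_sns[te], X_sns[tr], X_ti[te], X_ti[tr])

model = SVR(kernel="precomputed").fit(K_train, y[tr])
print("test R^2:", model.score(K_test, y[te]))
```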
Wu, Jun-Zheng; Liu, Qin; Geng, Xiao-Shan; Li, Kai-Mian; Luo, Li-Juan; Liu, Jin-Ping
2017-03-14
Cassava (Manihot esculenta Crantz) is a major crop extensively cultivated in the tropics as both an important source of calories and a promising source for biofuel production. Although stable gene expression has been used for transgenic breeding and gene function studies, a quick, easy and large-scale transformation platform has been urgently needed for gene functional characterization, especially after the cassava full genome was sequenced. Fully expanded leaves from in vitro plantlets of Manihot esculenta were used to optimize the concentrations of cellulase R-10 and macerozyme R-10 for obtaining protoplasts with the highest yield and viability. Then, the optimum conditions (PEG4000 concentration and transfection time) were determined for cassava protoplast transient gene expression. In addition, the reliability of the established protocol was confirmed for subcellular protein localization. In this work we optimized the main influencing factors and developed an efficient mesophyll protoplast isolation and PEG-mediated transient gene expression system in cassava. The suitable enzyme digestion system was established with the combination of 1.6% cellulase R-10 and 0.8% macerozyme R-10 for 16 h of digestion in the dark at 25 °C, resulting in a high yield (4.4 × 10⁷ protoplasts/g FW) and vitality (92.6%) of mesophyll protoplasts. The maximum transfection efficiency (70.8%) was obtained by incubating the protoplast/vector DNA mixture with 25% PEG4000 for 10 min. We validated the applicability of the system for studying the subcellular localization of MeSTP7 (an H⁺/monosaccharide cotransporter) with our transient expression protocol and a heterologous Arabidopsis transient gene expression system. We optimized the main influencing factors and developed an efficient mesophyll protoplast isolation and transient gene expression system in cassava, which will facilitate large-scale characterization of genes and pathways in cassava.
Pathak, Lakshmi; Singh, Vineeta; Niwas, Ram; Osama, Khwaja; Khan, Saif; Haque, Shafiul; Tripathi, C K M; Mishra, B N
2015-01-01
Cholesterol oxidase (COD) is a bi-functional FAD-containing oxidoreductase which catalyzes the oxidation of cholesterol into 4-cholesten-3-one. The wider biological functions and clinical applications of COD have urged the screening, isolation and characterization of newer microbes from diverse habitats as a source of COD, and the optimization and over-production of COD for various uses. The practicability of statistical and artificial intelligence techniques, such as response surface methodology (RSM), artificial neural networks (ANN) and genetic algorithms (GA), has been tested to optimize the medium composition for the production of COD from the novel strain Streptomyces sp. NCIM 5500. All experiments were performed according to a five-factor central composite design (CCD) and the generated data were analysed using RSM and ANN. GA was employed to optimize the models generated by RSM and ANN. Based upon the predicted COD concentration, the model developed with ANN was found to be superior to the model developed with RSM. The RSM-GA approach predicted a maximum of 6.283 U/mL COD production, whereas the ANN-GA approach predicted a maximum of 9.93 U/mL COD concentration. The optimum concentrations of the medium variables predicted through the ANN-GA approach were: 1.431 g/50 mL soybean, 1.389 g/50 mL maltose, 0.029 g/50 mL MgSO4, 0.45 g/50 mL NaCl and 2.235 mL/50 mL glycerol. The experimental COD concentration was concurrent with the GA-predicted yield and led to 9.75 U/mL COD production, which was nearly two times higher than the yield (4.2 U/mL) obtained with the un-optimized medium. To our knowledge, this is the first report of statistical versus artificial-intelligence-based modeling and optimization of COD production by Streptomyces sp. NCIM 5500.
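A hedged sketch of the model-then-search workflow described above: fit a small neural network to designed-experiment data and then search its predicted response surface with an evolutionary optimizer. SciPy's differential evolution stands in for the paper's GA, and the data, factor ranges and fitted model below are synthetic placeholders, not the study's results.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Synthetic stand-in for a 5-factor central composite design:
# soybean, maltose, MgSO4, NaCl, glycerol (per 50 mL), arbitrary ranges.
bounds = [(0.5, 2.5), (0.5, 2.5), (0.01, 0.05), (0.1, 0.9), (0.5, 3.0)]
X = np.column_stack([rng.uniform(lo, hi, 60) for lo, hi in bounds])
y = (6 - ((X[:, 0] - 1.4) ** 2 + (X[:, 1] - 1.4) ** 2)   # fake COD response (U/mL)
     + 0.1 * rng.normal(size=60))

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)

# Evolutionary search over the fitted response surface (maximize => minimize -f).
res = differential_evolution(lambda x: -ann.predict(x.reshape(1, -1))[0],
                             bounds, seed=0, tol=1e-6)
print("predicted optimum medium:", res.x, "predicted response:", -res.fun)
```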
Guo, Liang; Zhang, Jiawen; Yin, Li; Zhao, Yangguo; Gao, Mengchun; She, Zonglian
2015-01-01
Acidification metabolites such as volatile fatty acids (VFAs) and ethanol can be used as denitrification carbon sources, addressing the problems of carbon source shortage and low nitrogen removal efficiency. Proper control of environmental factors is essential for obtaining optimal contents of VFAs and ethanol. In this study, suspended solids (SS), oxidation reduction potential (ORP) and shaking rate were chosen to investigate their interactive effects on VFA and ethanol production from waste sludge. The results indicated that the total VFA (T-VFA) yield could be enhanced at lower ORP and shaking rate. Changing the SS, ORP and shaking rate could influence the distribution of acetic, propionic, butyric and valeric acids and ethanol. The optimal conditions for VFA and ethanol production for use as a denitrification carbon source were predicted by response surface methodology (RSM).
PredictProtein—an open resource for online prediction of protein structural and functional features
Yachdav, Guy; Kloppmann, Edda; Kajan, Laszlo; Hecht, Maximilian; Goldberg, Tatyana; Hamp, Tobias; Hönigschmid, Peter; Schafferhans, Andrea; Roos, Manfred; Bernhofer, Michael; Richter, Lothar; Ashkenazy, Haim; Punta, Marco; Schlessinger, Avner; Bromberg, Yana; Schneider, Reinhard; Vriend, Gerrit; Sander, Chris; Ben-Tal, Nir; Rost, Burkhard
2014-01-01
PredictProtein is a meta-service for sequence analysis that has been predicting structural and functional features of proteins since 1992. Queried with a protein sequence it returns: multiple sequence alignments, predicted aspects of structure (secondary structure, solvent accessibility, transmembrane helices (TMSEG) and strands, coiled-coil regions, disulfide bonds and disordered regions) and function. The service incorporates analysis methods for the identification of functional regions (ConSurf), homology-based inference of Gene Ontology terms (metastudent), comprehensive subcellular localization prediction (LocTree3), protein–protein binding sites (ISIS2), protein–polynucleotide binding sites (SomeNA) and predictions of the effect of point mutations (non-synonymous SNPs) on protein function (SNAP2). Our goal has always been to develop a system optimized to meet the demands of experimentalists not highly experienced in bioinformatics. To this end, the PredictProtein results are presented as both text and a series of intuitive, interactive and visually appealing figures. The web server and sources are available at http://ppopen.rostlab.org. PMID:24799431
Luo, Yan; Wu, Wanxing; Chen, Dan; Lin, Yuping; Ma, Yage; Chen, Chaoyin; Zhao, Shenglan
2017-12-01
Walnut is a traditional food as well as a traditional medicine recorded in the Chinese Pharmacopoeia; however, the large amounts of walnut flour (WF) generated in walnut oil production have not been well utilized. This study maximized the total polyphenolic yield (TPY) from WF by optimizing simultaneous ultrasound/microwave-assisted hydroalcoholic extraction (SUMAE). Response surface methodology was used to optimize the processing parameters for the TPY, including microwave power (20-140 W), ultrasonic power (75-525 W), extraction temperature (25-55 °C), and time (0.5-9.5 min). The polyphenol components were analysed by LC-MS. A second-order polynomial model satisfactorily fit the experimental TPY data (R² = 0.9932, P < 0.0001 and adjusted R² = 0.9868). The optimized conditions for this rapid extraction were microwave power 294.38 W, ultrasonic power 93.5 W, temperature 43.38 °C and time 4.33 min, giving a maximum TPY of 34.91 mg GAE/g. The major phenolic components in the WF extracts were glansreginin A, ellagic acid, and gallic acid, with peak areas of 22.15%, 14.99% and 10.96%, respectively, which might be used as functional components for health food, cosmetics and medicines. The results indicated that walnut flour, a waste product from the oil industry, is a rich source of polyphenolic compounds and thus could be used as a high-value functional food ingredient.
NASA Astrophysics Data System (ADS)
Fefer, M.; Dogan, M. S.; Herman, J. D.
2017-12-01
Long-term shifts in the timing and magnitude of reservoir inflows will potentially have significant impacts on water supply reliability in California, though projections remain uncertain. Here we assess the vulnerability of the statewide system to changes in total annual runoff (a function of precipitation) and the fraction of runoff occurring during the winter months (primarily a function of temperature). An ensemble of scenarios is sampled using a bottom-up approach and compared to the most recent available streamflow projections from the state's 4th Climate Assessment. We evaluate these scenarios using a new open-source version of the CALVIN model, a network flow optimization model encompassing roughly 90% of the urban and agricultural water demands in California, which is capable of running scenario ensembles on a high-performance computing cluster. The economic representation of water demand in the model yields several advantages for this type of analysis: optimized reservoir operating policies to minimize shortage cost and the marginal value of adaptation opportunities, defined by shadow prices on infrastructure and regulatory constraints. Results indicate a shift in optimal reservoir operations and high marginal value of additional reservoir storage in the winter months. The collaborative management of reservoirs in CALVIN yields increased storage in downstream reservoirs to store the increased winter runoff. This study contributes an ensemble evaluation of a large-scale network model to investigate uncertain climate projections, and an approach to interpret the results of economic optimization through the lens of long-term adaptation strategies.
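CALVIN itself is a large economically driven network flow optimization model and is not reproduced here; the toy NetworkX min-cost-flow sketch below only illustrates the basic construct of allocating scarce water by penalizing shortages, with invented nodes, capacities and costs.

```python
import networkx as nx

# Tiny water network: a reservoir supplies an urban and an agricultural demand.
# Negative node "demand" is supply, positive is demand (NetworkX convention).
G = nx.DiGraph()
G.add_node("reservoir", demand=-80)        # only 80 units of inflow this year
G.add_node("shortage", demand=-20)         # dummy source covering unmet demand
G.add_node("urban", demand=60)
G.add_node("ag", demand=40)

# Physical delivery links: capacity and conveyance cost per unit.
G.add_edge("reservoir", "urban", capacity=100, weight=2)
G.add_edge("reservoir", "ag", capacity=100, weight=1)
# Shortage links: the high weight plays the role of an economic shortage penalty,
# so the optimization allocates scarce water to the higher-valued use first.
G.add_edge("shortage", "urban", capacity=100, weight=50)
G.add_edge("shortage", "ag", capacity=100, weight=20)

flow = nx.min_cost_flow(G)                 # allocation minimizing total cost
print(flow)
```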
Homotopy method for optimization of variable-specific-impulse low-thrust trajectories
NASA Astrophysics Data System (ADS)
Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng
2017-11-01
The homotopy method has been used as a useful tool in solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is often variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method for optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly-used homotopy functions are employed for trajectory optimization. The optimal power throttle level and the optimal specific impulse are coupled with the commonly-used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions of both the optimal power throttle level and the optimal specific impulse. The homotopy method based on this homotopy function is proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.
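The paper's modified logarithmic homotopy function is not reproduced here. As background, and under the assumption that the "commonly-used quadratic and logarithmic homotopy functions" refer to the forms standard in the low-thrust literature, they read (up to a constant mass-flow factor, with throttle u in [0,1] and homotopy parameter epsilon continued from 1 toward 0, connecting the smooth problem to the fuel-optimal one):

```latex
J_{\epsilon}^{\mathrm{quad}} = \int_{t_0}^{t_f} \big[\, u - \epsilon\, u\,(1-u) \,\big]\, \mathrm{d}t ,
\qquad
J_{\epsilon}^{\mathrm{log}} = \int_{t_0}^{t_f} \big[\, u - \epsilon \ln\!\big(u\,(1-u)\big) \,\big]\, \mathrm{d}t .
```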
NASA Astrophysics Data System (ADS)
Subhash, Hrebesh M.; Wang, Ruikang K.; Chen, Fangyi; Nuttall, Alfred L.
2013-03-01
Most optical coherence tomography (OCT) systems for high-resolution imaging of biological specimens are based on refractive microscope objectives, which are optimized for a specific wavelength of the optical source. In this study, we present the feasibility of using a commercially available reflective objective for highly sensitive, high-resolution structural and functional imaging of the cochlear microstructures of an excised guinea pig through the intact temporal bone. Unlike conventional refractive microscope objectives, reflective objectives are free from chromatic aberrations due to their all-reflecting nature and can support a broadband spectrum with very high light collection efficiency.
Interplay between intrinsic noise and the stochasticity of the cell cycle in bacterial colonies.
Canela-Xandri, Oriol; Sagués, Francesc; Buceta, Javier
2010-06-02
Herein we report on the effects that different stochastic contributions induce in bacterial colonies in terms of protein concentration and production. In particular, we consider for what we believe to be the first time cell-to-cell diversity due to the unavoidable randomness of the cell-cycle duration and its interplay with other noise sources. To that end, we model a recent experimental setup that implements a protein dilution protocol by means of division events to characterize the gene regulatory function at the single cell level. This approach allows us to investigate the effect of different stochastic terms upon the total randomness experimentally reported for the gene regulatory function. In addition, we show that the interplay between intrinsic fluctuations and the stochasticity of the cell-cycle duration leads to different constructive roles. On the one hand, we show that there is an optimal value of protein concentration (alternatively an optimal value of the cell cycle phase) such that the noise in protein concentration attains a minimum. On the other hand, we reveal that there is an optimal value of the stochasticity of the cell cycle duration such that the coherence of the protein production with respect to the colony average production is maximized. The latter can be considered as a novel example of the recently reported phenomenon of diversity induced resonance.
Jin, Zhengyu; Chang, Fengmin; Meng, Fanlin; Wang, Cuiping; Meng, Yao; Liu, Xiaoji; Wu, Jing; Zuo, Jiane; Wang, Kaijun
2017-10-01
Aiming at closed-loop sustainable sewage sludge treatment, an optimal and economical pyrolytic temperature was found at 400-450 °C, considering its pyrolysis efficiency of 65%, fast cracking of hydrocarbons, proteins and lipids, and development of an aromatized porous structure. Fourier-transform infrared (FTIR) and X-ray diffraction (XRD) tests demonstrated the development of adsorptive functional groups and crystallographic phases of adsorptive minerals. The optimal sludge-char, with a medium specific surface area of 39.6 m² g⁻¹ and an iodine number of 327 mg I₂ g⁻¹, showed low heavy-metal lixiviation. The application of sludge-char in raw sewage could remove 30% of soluble chemical oxygen demand (SCOD), along with an acetic acid adsorption capacity of 18.0 mg g⁻¹. The developed mesopore and/or macropore structures, containing rich acidic and basic functional groups, provided good biofilm matrices for enhanced microbial activities and improved autotrophic nitrification in the anoxic stage of an A/O reactor through adsorbed extra carbon source, and hence achieved total nitrogen (TN) removal of up to 50.3%. It is demonstrated that closed-loop sewage sludge treatment that incorporates pyrolytic sludge-char into in-situ biological sewage treatment can be a promising sustainable strategy with further optimization.
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm relies on this combination of traits from the parents to provide an improved solution relative to either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers: its only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
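The toolbox itself is MATLAB and is not reproduced here; the following is a hedged, minimal single-objective PSO in Python showing the velocity/position update that swarming-based search relies on. The inertia and acceleration coefficients are typical textbook values, not those used by the toolbox.

```python
import numpy as np

def sopso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective particle swarm optimizer (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))      # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)   # personal bests
    g = pbest[pbest_f.argmin()].copy()                        # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x) # swarm update
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Example: 5-D sphere function as the user-supplied "black box" objective.
best_x, best_f = sopso(lambda z: float(np.sum(z ** 2)), [(-5, 5)] * 5)
print(best_x, best_f)
```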
NASA Astrophysics Data System (ADS)
Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves
2017-10-01
Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
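The joint calibration-and-imaging algorithm is not reproduced here; as a hedged sketch of the imaging block only, the numpy snippet below runs a forward-backward (ISTA-style) iteration, a gradient step on a data-fidelity term followed by soft-thresholding for sparsity, with a fixed, fully calibrated measurement operator and synthetic point sources.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 128                              # image size, number of synthetic visibilities
A = rng.normal(size=(m, n)) / np.sqrt(m)    # stand-in for a calibrated measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1, 3, 5)   # sparse point sources
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05                                  # sparsity regularization weight
gamma = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1 / Lipschitz constant

def soft(v, t):                             # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(500):                        # forward-backward iterations
    x = soft(x - gamma * A.T @ (A @ x - y), gamma * lam)

print("recovered support:", np.nonzero(x)[0], "true:", np.sort(np.nonzero(x_true)[0]))
```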
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
A Risk-Based Multi-Objective Optimization Concept for Early-Warning Monitoring Networks
NASA Astrophysics Data System (ADS)
Bode, F.; Loschko, M.; Nowak, W.
2014-12-01
Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts could help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. Our considered objectives are to maximize the probability of detecting all contaminations and the early-warning time, and to minimize the installation and operating costs of the monitoring network. A qualitative risk ranking is used to prioritize the known risk sources for monitoring. The unknown risk sources can neither be located nor ranked. Instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four different categories: severe, medium and tolerable for known risk sources, and an extra category for the unknown ones. With that, early-warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks which are valid for controlling the top risk sources, and evaluate the capabilities (or search for a least-cost upgrade) to also cover medium, tolerable and unknown risk sources. Monitoring networks which are valid for the remaining risk also cover all other risk sources, but the early-warning time suffers. The data provided for the optimization algorithm are calculated in a preprocessing step by a flow and transport model. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte Carlo simulations. To avoid numerical dispersion during the transport simulations we use the particle-tracking random walk method.
NASA Astrophysics Data System (ADS)
Peralta, Richard C.; Forghani, Ali; Fayad, Hala
2014-04-01
Many real water resources optimization problems involve conflicting objectives, for which the main goal is to find a set of optimal solutions on, or near to, the Pareto front. ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective genetic algorithms (MGA) have previously been proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective, multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing the operating costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
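A hedged helper showing how a Pareto-optimal (non-dominated) set is extracted once candidate solutions have been evaluated on several objectives; all objectives are assumed to be minimized, and this is not the authors' MGA itself.

```python
import numpy as np

def pareto_mask(F):
    """Boolean mask of non-dominated rows of F (every objective minimized)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # j dominates i if it is no worse in every objective and better in at least one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Example with two conflicting objectives (e.g. operating cost vs. shortage):
F = np.array([[1.0, 9.0], [2.0, 4.0], [3.0, 3.0], [4.0, 3.5], [6.0, 1.0]])
print(F[pareto_mask(F)])
```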
Multi Dimensional Honey Bee Foraging Algorithm Based on Optimal Energy Consumption
NASA Astrophysics Data System (ADS)
Saritha, R.; Vinod Chandra, S. S.
2017-10-01
In this paper a new nature-inspired algorithm is proposed based on the natural foraging behavior of multi-dimensional honey bee colonies. This method handles issues that arise when food is shared from multiple sources by multiple swarms at multiple destinations. The self-organizing nature of natural honey bee swarms in multiple colonies is based on the principle of energy consumption. Swarms of multiple colonies select a food source to optimally fulfill the requirements of their colonies, based on the energy required to transport food between a source and a destination. Minimum use of energy leads to maximum profit in each colony. The mathematical model proposed here is based on this principle. It has been successfully evaluated by applying it to a multi-objective transportation problem for optimizing cost and time. The algorithm optimizes the needs at each destination in linear time.
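The bee-colony algorithm itself is not reproduced; the sketch below only sets up the kind of multi-source, multi-destination transportation problem it targets and solves it with a linear-programming solver so the energy-minimizing allocation is easy to inspect. Supplies, demands and unit energies are invented.

```python
import numpy as np
from scipy.optimize import linprog

# 2 food sources x 3 destination colonies; entry = energy to move one unit of food.
energy = np.array([[4.0, 6.0, 9.0],
                   [5.0, 3.0, 7.0]])
supply = np.array([50.0, 60.0])        # food available at each source
demand = np.array([30.0, 40.0, 40.0])  # food required at each destination

c = energy.ravel()                     # decision variables x[i, j], flattened row-wise
# Source capacity constraints: sum_j x[i, j] <= supply[i]
A_ub = np.kron(np.eye(2), np.ones(3))
# Destination requirements: sum_i x[i, j] == demand[j]
A_eq = np.kron(np.ones(2), np.eye(3))

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(2, 3), "total energy:", res.fun)
```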
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
The Scale-Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time; this constitutes a first step toward automating mapping processes such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored for different images and parameter values, finding optimized values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
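A hedged sketch with OpenCV: its SIFT implementation exposes five tuning parameters, which may not coincide exactly with the five optimized in the paper, but it illustrates how moving away from the defaults (here biased toward larger, more stable features) changes the tie-points found. Image filenames and parameter values are placeholders.

```python
import cv2

img1 = cv2.imread("epoch_2005.tif", cv2.IMREAD_GRAYSCALE)   # placeholder filenames
img2 = cv2.imread("epoch_2015.tif", cv2.IMREAD_GRAYSCALE)

# Non-default SIFT parameters (illustrative values, not the paper's optimum):
sift = cv2.SIFT_create(
    nfeatures=0,              # keep all features that pass the thresholds
    nOctaveLayers=3,
    contrastThreshold=0.08,   # stricter than the default 0.04 -> fewer, stronger features
    edgeThreshold=8,
    sigma=2.0,                # coarser base smoothing favors large features
)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive (likely correct) matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.7 * n.distance]
print(len(kp1), len(kp2), "tie-point candidates:", len(matches))
```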
NASA Astrophysics Data System (ADS)
Massin, F.; Malcolm, A. E.
2017-12-01
Knowing earthquake source mechanisms gives valuable information for earthquake response planning and hazard mitigation. Earthquake source mechanisms can be analyzed using long period waveform inversion (for moderate size sources with sufficient signal to noise ratio) and body-wave first motion polarity or amplitude ratio inversion (for micro-earthquakes with sufficient data coverage). A robust approach that gives both source mechanisms and their associated probabilities across all source scales would greatly simplify the determination of source mechanisms and allow for more consistent interpretations of the results. Following previous work on shift and stack approaches, we develop such a probabilistic source mechanism analysis, using waveforms, which does not require polarity picking. For a given source mechanism, the first period of the observed body-waves is selected for all stations, multiplied by their corresponding theoretical polarity and stacked together. (The first period is found from a manually picked travel time by measuring the central period where the signal power is concentrated, using the second moment of the power spectral density function.) As in other shift and stack approaches, our method is not based on the optimization of an objective function through an inversion. Instead, the power of the polarity-corrected stack is a proxy for the likelihood of the trial source mechanism, with the most powerful stack corresponding to the most likely source mechanism. Using synthetic data, we test our method for robustness to the data coverage, coverage gap, signal to noise ratio, travel-time picking errors and non-double couple component. We then present results for field data in a volcano-tectonic context. Our results are reliable when constrained by 15 body-wavelets, with gap below 150 degrees, signal to noise ratio over 1 and arrival time error below a fifth of the period (0.2T) of the body-wave. We demonstrate that the source scanning approach for source mechanism analysis has similar advantages to waveform inversion (full waveform data, no manual intervention, probabilistic approach) and similar applicability to polarity inversion (any source size, any instrument type).
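A hedged numpy sketch of the stack-power proxy described above: each station's first-period wavelet is multiplied by the trial mechanism's theoretical polarity and stacked, and the most powerful stack flags the most likely mechanism. The wavelets and polarity patterns are synthetic, and the double-couple radiation-pattern computation that would normally supply the predicted polarities is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sta, n_samp = 15, 40

# Synthetic "observed" first-period wavelets: a common pulse whose sign per station
# follows the (unknown) true mechanism, plus noise.
pulse = np.sin(np.linspace(0, np.pi, n_samp))
true_pol = rng.choice([-1.0, 1.0], size=n_sta)
obs = true_pol[:, None] * pulse[None, :] + 0.3 * rng.normal(size=(n_sta, n_samp))

# Trial mechanisms, each represented here only by its predicted polarity pattern
# (in practice these come from the radiation pattern of the trial source).
trials = [rng.choice([-1.0, 1.0], size=n_sta) for _ in range(200)] + [true_pol]

def stack_power(predicted_polarity):
    stack = (predicted_polarity[:, None] * obs).sum(axis=0)   # polarity-corrected stack
    return float((stack ** 2).sum())

powers = np.array([stack_power(p) for p in trials])
best = trials[int(powers.argmax())]
print("recovered the true polarity pattern (up to sign):",
      bool(np.all(best == true_pol) or np.all(best == -true_pol)))
```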
Optimal Placement of Dynamic Var Sources by Using Empirical Controllability Covariance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Huang, Weihong; Sun, Kai
In this paper, the empirical controllability covariance (ECC), which is calculated around the considered operating condition of a power system, is applied to quantify the degree of controllability of system voltages under specific dynamic var source locations. An optimal dynamic var source placement method addressing fault-induced delayed voltage recovery (FIDVR) issues is further formulated as an optimization problem that maximizes the determinant of ECC. The optimization problem is effectively solved by the NOMAD solver, which implements the mesh adaptive direct search algorithm. The proposed method is tested on an NPCC 140-bus system and the results show that the proposed method with fault-specified ECC can solve the FIDVR issue caused by the most severe contingency with fewer dynamic var sources than the voltage sensitivity index (VSI)-based method. The proposed method with fault-unspecified ECC does not depend on the settings of the contingency and can address more FIDVR issues than the VSI method when placing the same number of SVCs under different fault durations. It is also shown that the proposed method can help mitigate voltage collapse.
Comparison of two optimized readout chains for low light CIS
NASA Astrophysics Data System (ADS)
Boukhayma, A.; Peizerat, A.; Dupret, A.; Enz, C.
2014-03-01
We compare the noise performance of two optimized readout chains that are based on 4T pixels and feature the same bandwidth of 265 kHz (enough to read 1 megapixel at 50 frames/s). Both chains contain a 4T pixel, a column amplifier and a single-slope analog-to-digital converter performing CDS. In one case the pixel operates in source follower configuration, and in common source configuration in the other. Based on analytical noise calculations for both readout chains, an optimization methodology is presented. Analytical results are confirmed by transient simulations using a 130 nm process. A total input-referred noise below 0.4 electrons RMS is reached for a simulated conversion gain of 160 μV/e-. Both optimized readout chains show the same input-referred 1/f noise. The common source based readout chain shows better performance for thermal noise and requires a smaller silicon area. We discuss the possible drawbacks of the common source configuration and provide the reader with a comparative table between the two readout chains. The table contains several variants (column amplifier gain, in-pixel transistor sizes and type).
NASA Astrophysics Data System (ADS)
Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein
2018-05-01
Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.
2011-04-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale-mismatch, and accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying a better data source and allocating a higher priority for them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.
Adaptive behaviors in multi-agent source localization using passive sensing.
Shaukat, Mansoor; Chitre, Mandar
2016-12-01
In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. An agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient in localizing the source. Source localization is achieved as an emergent property through agent's adaptive interactions with the neighbors and the environment. Given a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based-adaptation and connectivity-based-adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two phase evolutionary optimization process. The optimized behaviors are estimated with analytical models and the resulting collective behavior is validated against the agent's sensor and actuator noise, strong multi-path interference due to environment variability, initialization distance sensitivity and loss of source signal.
Refrigeration of rainbow trout gametes and embryos.
Babiak, Igor; Dabrowski, Konrad
2003-12-01
Prolonged access to early embryos composed of undifferentiated, totipotent blastomeres is desirable in situations when multiple collections of gametes are not possible. The objective of the present study is to examine whether the refrigeration of rainbow trout Oncorhynchus mykiss gametes and early embryos would be a suitable, reliable, and efficient tool for prolonging the availability of early developmental stages up to the advanced blastula stage. The study was conducted continuously during the fall, winter, and spring spawning seasons. In all, more than 500 experimental variants were performed involving individual samples from 26 females and 33 males derived from three strains. These strains represented three possible circumstances. In the optimal one, gametes from good-quality donors were obtained soon after ovulation. In the two non-optimal sources, either donors were of poor genetic quality or gametes were collected from a distant location and transported as unfertilized gametes. A highly significant effect of the variability of individual sample quality on the efficiency of gamete and embryo refrigeration was revealed. The source of gametes significantly affected the viability of refrigerated oocytes and embryos, but not spermatozoa. On average, oocytes from the optimal source retained full fertilization viability for seven days of chilled storage, significantly longer than those from non-optimal sources. Spermatozoa, regardless of storage method, retained full fertilization ability for the first week of storage. Refrigeration of embryos at 1.4+/-0.4 degrees C significantly slowed development. Two-week-old embryos were still at the blastula stage. The average survival rate of embryos refrigerated for 10 days and then transferred to regular incubation temperatures of 9-14 degrees C was 92% in optimal and 51 and 71% in non-optimal source variants. No effect of gamete and embryo refrigeration on the occurrence of developmental abnormalities was observed. Cumulative refrigeration of oocytes and embryos resulted in an average embryo survival rate of 71% in optimal source variants after 17 days of refrigeration (7 days oocytes + 10 days embryos). The study shows that both gamete and embryo refrigeration can be successfully used as an efficient tool for prolonging the availability of rainbow trout embryos in early developmental stages.
Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study
NASA Astrophysics Data System (ADS)
Caldararu, S.; Purves, D. W.; Smith, M. J.
2014-12-01
Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional optimality. A structural constraint would mean that plants aim to achieve a certain structural characteristic such as an allometric relationship or nutrient content that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions are applied on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth which affect the structural constraint.
[GIS and scenario analysis aid to water pollution control planning of river basin].
Wang, Shao-ping; Cheng, Sheng-tong; Jia, Hai-feng; Ou, Zhi-dan; Tan, Bin
2004-07-01
The forward and backward algorithms for watershed water pollution control planning are summarized in this paper, together with their advantages and shortcomings. The spatial databases of water environmental function regions, pollution sources, monitoring sections and sewer outlets were built with ARCGIS8.1 as the platform in a case study of the Ganjiang valley, Jiangxi province. Based on the principles of the forward algorithm, four scenarios were designed for watershed pollution control. Under these scenarios, ten sets of planning schemes were generated to implement cascade pollution source control. The investment costs of sewage treatment for these schemes were estimated by means of a series of cost functions; with pollution source prediction, the water quality was modeled with a CSTR model for each planning scheme. The modeled results of the different planning schemes were visualized through GIS to aid decision-making. With the results of investment cost and water quality attainment as decision-making criteria, and based on an analysis of the economically endurable capacity for water pollution control in the Ganjiang river basin, two optimized schemes were proposed. The research shows that GIS technology and scenario analysis can provide good guidance on the synthesis, integrity and sustainability aspects of river basin water quality planning.
NASA Astrophysics Data System (ADS)
Utama, D. N.; Ani, N.; Iqbal, M. M.
2018-03-01
Optimization is the process of finding the parameter (or parameters) that delivers an optimal value of an objective function. Seeking a generic optimization model is a computer science problem that has been pursued by numerous researchers. A generic model is a model that can be operated, technically, to solve a wide variety of optimization problems. Using an object-oriented method, a generic model for optimization was constructed. Moreover, two optimization methods, simulated annealing and hill climbing, were implemented within the model and then compared to find the better of the two. The results show that both methods gave the same objective-function value, and the hill-climbing-based model consumed the shortest running time.
Internal heat gain from different light sources in the building lighting systems
NASA Astrophysics Data System (ADS)
Suszanowicz, Dariusz
2017-10-01
EU directives and the Construction Law have for some time required investors to report the energy consumption of buildings, and this has indeed caused low energy consumption buildings to proliferate. Of particular interest, internal heat gains from installed lighting affect the final energy consumption for heating of both public and residential buildings. This article presents the results of analyses of the electricity consumption and the luminous flux and heat flux emitted by different types of light sources used in buildings. Incandescent, halogen, compact fluorescent, and LED bulbs from various manufacturers were individually placed in a closed and isolated chamber, and the parameters of their functioning under identical conditions were recorded. The heat flux emitted per 1 W of nominal power of each light source was determined. Based on the study results, empirical coefficients of heat emission and energy efficiency ratios for different types of lighting sources (as functions of lamp power and light output) were determined. In the heat balance of a building, these coefficients allow precise determination of the internal heat gains coming from lighting systems using various light sources and also enable optimization of the lighting systems of buildings that are used in different ways.
Roebeling, P C; Cunha, M C; Arroja, L; van Grieken, M E
2015-01-01
Marine ecosystems are affected by water pollution originating from coastal catchments. The delivery of water pollutants can be reduced through water pollution abatement as well as water pollution treatment. Hence, sustainable economic development of coastal regions requires balancing of the marginal costs from water pollution abatement and/or treatment and the associated marginal benefits from marine resource appreciation. Water pollution delivery reduction costs are, however, not equal across abatement and treatment options. In this paper, an optimal control approach is developed and applied to explore welfare maximizing rates of water pollution abatement and/or treatment for efficient diffuse source water pollution management in terrestrial-marine systems. For the case of diffuse source dissolved inorganic nitrogen water pollution in the Tully-Murray region, Queensland, Australia, (agricultural) water pollution abatement cost, (wetland) water pollution treatment cost and marine benefit functions are determined to explore welfare maximizing rates of water pollution abatement and/or treatment. Considering partial (wetland) treatment costs and positive water quality improvement benefits, results show that welfare gains can be obtained, primarily, through diffuse source water pollution abatement (improved agricultural management practices) and, to a minor extent, through diffuse source water pollution treatment (wetland restoration).
The optimal on-source region size for detections with counting-type telescopes
NASA Astrophysics Data System (ADS)
Klepser, S.
2017-03-01
Source detection in counting type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what is the θ that maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞² ≈ 2.51 times the squared PSF width σ_PSF². While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
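The following Python sketch illustrates the underlying optimization: it scans the on-source radius θ and evaluates the expected Li & Ma (1983, Eq. 17) significance for a Gaussian PSF on a flat background. The PSF width, signal counts, background density and on/off exposure ratio are illustrative assumptions, not values from the paper.

```python
# Scan the on-source radius theta and evaluate the expected Li & Ma significance
# for a Gaussian PSF on a flat background; all counts and densities are assumed.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Classical Li & Ma (1983) Eq. (17) for on/off counting experiments."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

sigma_psf = 0.1          # deg, PSF width (assumed)
n_signal_total = 100.0   # expected signal events (assumed)
bkg_density = 2000.0     # background events per deg^2 (assumed)
alpha = 0.2              # on/off exposure ratio (assumed)

thetas = np.linspace(0.02, 0.5, 200)
sig = []
for theta in thetas:
    n_sig = n_signal_total * (1 - np.exp(-theta**2 / (2 * sigma_psf**2)))
    n_bkg = bkg_density * np.pi * theta**2
    n_on = n_sig + n_bkg
    n_off = n_bkg / alpha            # expected off counts
    sig.append(li_ma_significance(n_on, n_off, alpha))

theta_opt = thetas[np.argmax(sig)]
print(f"optimal theta ~ {theta_opt:.3f} deg, "
      f"theta^2 / sigma^2 ~ {theta_opt**2 / sigma_psf**2:.2f} (≈2.51 in the high-count limit)")
```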
Optimizing acoustical conditions for speech intelligibility in classrooms
NASA Astrophysics Data System (ADS)
Yang, Wonyoung
High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with SNS = 4 dB and increased to 0.8 and 1.2 s with decreased SNS = 0 dB, for both normal and hearing-impaired listeners. Hearing-impaired listeners required more early energy than normal-hearing listeners. Reflective ceiling barriers and ceiling reflectors---in particular, parallel front-back rows of semi-circular reflectors---achieved the goal of decreasing reverberation with the least speech-level reduction.
NASA Astrophysics Data System (ADS)
Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki
2015-06-01
Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function suffers from sensitivity to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
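A minimal sketch of the three misfit functions compared above is given below; the Student's t form is the standard negative log-likelihood up to constants, and the degrees of freedom and scale are assumed tuning parameters rather than the paper's settings.

```python
# Compare l2, l1 and Student's t misfits of a residual wavefield with and
# without outliers; nu and the scale s are assumed tuning parameters.
import numpy as np

def misfit_l2(r):
    return 0.5 * np.sum(r**2)

def misfit_l1(r):
    return np.sum(np.abs(r))

def misfit_student_t(r, nu=4.0, s=1.0):
    # negative log-likelihood of a Student's t residual model, up to constants
    return 0.5 * (nu + 1.0) * np.sum(np.log(1.0 + (r / s)**2 / nu))

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=1000)          # well-fit residuals
outliers = clean.copy()
outliers[::50] += rng.normal(0.0, 5.0, size=20)  # a few strong outliers (noise bursts)

for name, f in [("l2", misfit_l2), ("l1", misfit_l1), ("Student-t", misfit_student_t)]:
    print(f"{name:10s} clean: {f(clean):9.2f}   with outliers: {f(outliers):9.2f}")
# The l2 misfit is inflated by the outliers, while l1 and Student's t grow slowly,
# which is why the corresponding FWI gradients are less distorted by noise.
```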
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works of other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similarly close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h and 0.5-0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
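For reference, a minimal sketch of the PSCF ingredient follows: the PSCF value of a grid cell is the fraction of trajectory endpoints in that cell associated with receptor concentrations above a threshold. The endpoints, concentrations and threshold below are synthetic.

```python
# Minimal PSCF sketch on a regular lon/lat grid with synthetic trajectories.
import numpy as np

rng = np.random.default_rng(1)
n_traj = 500
lons = rng.uniform(0.0, 40.0, n_traj)        # endpoint longitudes (synthetic)
lats = rng.uniform(40.0, 70.0, n_traj)       # endpoint latitudes (synthetic)
conc = rng.lognormal(mean=0.0, sigma=1.0, size=n_traj)  # receptor concentrations

threshold = np.percentile(conc, 75)          # "polluted" criterion (assumed)
lon_edges = np.linspace(0.0, 40.0, 21)
lat_edges = np.linspace(40.0, 70.0, 16)

# n_ij: all endpoints per cell; m_ij: endpoints linked to high concentrations
n_ij, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
m_ij, _, _ = np.histogram2d(lons[conc > threshold], lats[conc > threshold],
                            bins=[lon_edges, lat_edges])

with np.errstate(invalid="ignore", divide="ignore"):
    pscf = np.where(n_ij > 0, m_ij / n_ij, 0.0)
print("max PSCF value on the grid:", pscf.max())
```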
A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)
NASA Astrophysics Data System (ADS)
Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.
2017-12-01
Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters and often consider both geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org), and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, in particular the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: a full moment tensor, a Mogi source, and a moderate strike-slip earthquake.
Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation.
Dmochowski, Jacek P; Koessler, Laurent; Norcia, Anthony M; Bikson, Marom; Parra, Lucas C
2017-08-15
To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4-7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
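The sketch below is illustrative only and is not the closed-form solution derived in the paper: given a lead-field matrix that, by reciprocity, maps injected electrode currents to electric fields at candidate brain sources, it computes zero-sum electrode currents whose induced field best matches (in a ridge-regularized least-squares sense) the source pattern estimated from the EEG. The lead field, source pattern, regularization and current budget are all assumptions.

```python
# Illustrative reciprocity-based targeting sketch (not the paper's closed form).
import numpy as np

rng = np.random.default_rng(2)
n_elec, n_src = 32, 500
L = rng.normal(size=(n_src, n_elec))     # lead field (synthetic stand-in)

s_hat = np.zeros(n_src)                  # estimated source activity (synthetic focal patch)
s_hat[100:110] = 1.0

lam = 1e-2                               # ridge regularization (assumed)
A = L.T @ L + lam * np.eye(n_elec)
I = np.linalg.solve(A, L.T @ s_hat)      # unconstrained ridge least-squares currents

I -= I.mean()                            # enforce zero net injected current
total = np.sum(np.abs(I)) / 2.0          # total injected current of the montage
I *= 2.0e-3 / total                      # scale to a 2 mA budget (assumed safety limit)
print("electrode currents (mA):", np.round(I * 1e3, 2))
```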
Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation
Dmochowski, Jacek P.; Koessler, Laurent; Norcia, Anthony M.; Bikson, Marom; Parra, Lucas C.
2018-01-01
To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4–7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. PMID:28578130
Pepper, Andrew R.; Pawlick, Rena L.; Gala-Lopez, Boris
2016-01-01
Clinical islet transplantation has routinely been demonstrated to be an efficacious means of restoring glycemic control in select patients with autoimmune diabetes. Notwithstanding marked progress and improvements, the broad-spectrum application of this treatment option is restricted by the complications associated with intrahepatic portal cellular infusion and the scarcity of human donor pancreata. Recent progress in stem cell biology has demonstrated that the potential to expand new β cells for clinical transplantation is now a reality. As such, research focus is being directed toward optimizing safe extrahepatic transplant sites to house future alternative β cell sources for clinical use. The present study expands on our previous development of a prevascularized subcutaneous device-less (DL) technique for cellular transplantation, by demonstrating long-term (>365 d) durable syngeneic murine islet graft function. Furthermore, histological analysis of tissue specimens collected immediately post-DL site creation and acutely post-human islet transplantation demonstrates that this technique results in close apposition of the neovascularized collagen to the transplanted cells without dead space, thereby avoiding a hypoxic luminal dead space. Murine islets transplanted into the DL site created by a larger luminal diameter (6-Fr.) (n = 11) reversed diabetes to a similar capacity as our standard DL method (5-Fr.) (n = 9). Furthermore, glucose tolerance testing did not differ between these 2 transplant groups (p > 0.05). Taken together, this further refinement of the DL transplant approach facilitates a simplistic means of islet infusion, increases the transplant volume capacity and may provide an effective microenvironment to house future alternative β cell sources. PMID:27820660
Pepper, Andrew R; Bruni, Antonio; Pawlick, Rena L; Gala-Lopez, Boris; Rafiei, Yasmin; Wink, John; Kin, Tatsuya; Shapiro, A M James
2016-11-01
Clinical islet transplantation has routinely been demonstrated to be an efficacious means of restoring glycemic control in select patients with autoimmune diabetes. Notwithstanding marked progress and improvements, the broad-spectrum application of this treatment option is restricted by the complications associated with intrahepatic portal cellular infusion and the scarcity of human donor pancreata. Recent progress in stem cell biology has demonstrated that the potential to expand new β cells for clinical transplantation is now a reality. As such, research focus is being directed toward optimizing safe extrahepatic transplant sites to house future alternative β cell sources for clinical use. The present study expands on our previous development of a prevascularized subcutaneous device-less (DL) technique for cellular transplantation, by demonstrating long-term (>365 d) durable syngeneic murine islet graft function. Furthermore, histological analysis of tissue specimens collected immediately post-DL site creation and acutely post-human islet transplantation demonstrates that this technique results in close apposition of the neovascularized collagen to the transplanted cells without dead space, thereby avoiding a hypoxic luminal dead space. Murine islets transplanted into the DL site created by a larger luminal diameter (6-Fr.) (n = 11) reversed diabetes to a similar capacity as our standard DL method (5-Fr.) (n = 9). Furthermore, glucose tolerance testing did not differ between these 2 transplant groups (p > 0.05). Taken together, this further refinement of the DL transplant approach facilitates a simplistic means of islet infusion, increases the transplant volume capacity and may provide an effective microenvironment to house future alternative β cell sources.
Deformation data modeling through numerical models: an efficient method for tracking magma transport
NASA Astrophysics Data System (ADS)
Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.
2017-12-01
Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecasting of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose a FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observational points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.
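As a simplified stand-in for the workflow described above (which uses FE Green's functions in a heterogeneous medium and a Genetic Algorithm), the sketch below fits a homogeneous half-space Mogi point source to synthetic vertical displacements by brute-force grid search over depth and volume change; all values are invented.

```python
# Fit a Mogi point source to synthetic vertical surface displacements.
import numpy as np

NU = 0.25  # Poisson's ratio (assumed)

def mogi_uz(r, depth, dV):
    """Vertical surface displacement of a Mogi point source in a half-space."""
    return (1.0 - NU) * dV * depth / (np.pi * (r**2 + depth**2) ** 1.5)

# synthetic "observed" data: source at 3 km depth, dV = 1e6 m^3, plus 1 mm noise
r_obs = np.linspace(500.0, 10000.0, 25)
rng = np.random.default_rng(3)
d_obs = mogi_uz(r_obs, 3000.0, 1.0e6) + rng.normal(0.0, 1e-3, r_obs.size)

depths = np.linspace(1000.0, 6000.0, 60)
dVs = np.linspace(1e5, 5e6, 80)
best = (np.inf, None, None)
for d in depths:
    for dv in dVs:
        misfit = np.sum((mogi_uz(r_obs, d, dv) - d_obs) ** 2)
        if misfit < best[0]:
            best = (misfit, d, dv)
print(f"best-fitting depth ~ {best[1]:.0f} m, dV ~ {best[2]:.2e} m^3")
```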
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
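The following sketch illustrates the line-segment-spread idea: a star spot moving at constant image-plane velocity during the exposure is the convolution of a Gaussian PSF with a line segment of length vT, and the centroid of the noisy smeared spot can then be compared to the true segment midpoint. The pixel scale, flux, smear length and Gaussian radius are assumed values, not the paper's parameters.

```python
# 1-D smeared star spot as a Gaussian convolved with a line segment, plus shot noise.
import numpy as np
from scipy.special import erf

def smeared_profile(x, x0, smear_len, sigma):
    """Energy profile of a Gaussian spot smeared uniformly over [x0, x0+smear_len]."""
    a = (x - x0) / (np.sqrt(2) * sigma)
    b = (x - x0 - smear_len) / (np.sqrt(2) * sigma)
    return 0.5 * (erf(a) - erf(b)) / max(smear_len, 1e-9)

pixels = np.arange(0.0, 32.0)            # pixel centers
sigma = 0.8                              # Gaussian radius in pixels (assumed)
x0, smear = 12.0, 6.0                    # smear start position and length (assumed)
flux = 2.0e4                             # total incident photons (assumed)

rng = np.random.default_rng(4)
expected = flux * smeared_profile(pixels, x0, smear, sigma)
observed = rng.poisson(expected)         # photon (shot) noise

centroid = np.sum(pixels * observed) / np.sum(observed)
true_centre = x0 + smear / 2
print(f"true centre {true_centre:.2f} px, estimated {centroid:.2f} px, "
      f"centroiding error {centroid - true_centre:+.3f} px")
```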
Optimization of Regional Geodynamic Models for Mantle Dynamics
NASA Astrophysics Data System (ADS)
Knepley, M.; Isaac, T.; Jadamec, M. A.
2016-12-01
The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.
NASA Astrophysics Data System (ADS)
Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin
2018-02-01
This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when the a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
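Two of the ingredients above can be sketched compactly in Python: a quickselect-style partial sort that returns only the k largest values in sorted order, and a simple knee detector on the sorted prefix (here, the largest drop between consecutive values). The step-wise choice of k from a prior distribution of the knee position, which is the core of the paper, is not reproduced here.

```python
# Quickselect-style top-k partial sort plus a simple knee detector.
import random

def top_k_sorted(values, k):
    """Return the k largest values in descending order without fully sorting."""
    if k >= len(values):
        return sorted(values, reverse=True)
    pivot = random.choice(values)
    greater = [v for v in values if v > pivot]
    equal = [v for v in values if v == pivot]
    if len(greater) >= k:
        return top_k_sorted(greater, k)
    if len(greater) + len(equal) >= k:
        return sorted(greater, reverse=True) + equal[: k - len(greater)]
    rest = top_k_sorted([v for v in values if v < pivot], k - len(greater) - len(equal))
    return sorted(greater, reverse=True) + equal + rest

def knee_index(sorted_desc):
    """Index of the largest drop between consecutive sorted values."""
    drops = [sorted_desc[i] - sorted_desc[i + 1] for i in range(len(sorted_desc) - 1)]
    return max(range(len(drops)), key=drops.__getitem__)

data = [random.expovariate(1.0) for _ in range(10_000)] + [50, 45, 40]  # a few "anomalies"
prefix = top_k_sorted(data, 20)
print("knee at rank", knee_index(prefix), "with value", prefix[knee_index(prefix)])
```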
Oxide vapor distribution from a high-frequency sweep e-beam system
NASA Astrophysics Data System (ADS)
Chow, R.; Tassano, P. L.; Tsujimoto, N.
1995-03-01
Oxide vapor distributions have been determined as a function of the operating parameters of a high frequency sweep e-beam source combined with a programmable sweep controller. We will show which parameters are significant, the parameters that yield the broadest oxide deposition distribution, and the procedure used to arrive at these conclusions. A design-of-experiments strategy was used with five operating parameters: evaporation rate, sweep speed, sweep pattern (pre-programmed), phase speed (azimuthal rotation of the pattern), and profile (dwell time as a function of radial position). A design was chosen that would show which of the parameters and parameter pairs have a statistically significant effect on the vapor distribution. Witness flats were placed symmetrically across a 25-inch-diameter platen. The stationary platen was centered 24 inches above the e-gun crucible. An oxide material was evaporated under 27 different conditions. Thickness measurements were made with a stylus profilometer. The information will enable users of high frequency e-gun systems to optimally locate the source in a vacuum system and understand which parameters have a major effect on the vapor distribution.
Gonda, Sándor; Kiss-Szikszai, Attila; Szűcs, Zsolt; Máthé, Csaba; Vasas, Gábor
2014-10-01
Tissue cultures of a medicinal plant, Plantago lanceolata L., were screened for phenylethanoid glycosides (PGs) and other natural products (NPs) with LC-ESI-MS(3). The effects of N source concentration and NH4(+)/NO3(-) ratio were evaluated in a full-factorial (FF) experiment. N concentrations of 10, 20, 40 and 60 mM, and NH4(+)/NO3(-) ratios of 0, 0.11, 0.20 and 0.33 (ratio of NH4(+) in total N source) were tested. Several peaks could be identified as PGs, of which 16 could be putatively identified from the MS/MS/MS spectra. N source concentration and NH4(+)/NO3(-) ratio had significant effects on the metabolome; their effects on individual PGs differed even though these metabolites are of the same biosynthetic class. The chief PGs were plantamajoside and acteoside (verbascoside); their highest concentrations were 3.54±0.83% and 1.30±0.40% of dry weight, on media 10(0.33) and 40(0.33), respectively. NH4(+)/NO3(-) ratio and N source concentration effects were examined on a set of 89 NPs. For most NPs, large increases in abundance were observed compared to Murashige-Skoog medium. The abundances of 42 and 10 NPs were significantly influenced by the N source concentration and the NH4(+)/NO3(-) ratio, respectively. Optimal media for the production of different NP clusters were 10(0), 10(0.11) and 40(0.33). Interaction was observed between NH4(+)/NO3(-) ratio and N source concentration for many NPs. It was shown in simulated experiments that one-factor-at-a-time (OFAT) experimental designs lead to sub-optimal media compositions for the production of many NPs, and alternative experimental designs (e.g. FF) should be preferred when optimizing the medium N source for optimal yield of NPs. If using OFAT, the N source concentration should be optimized first, followed by the NH4(+)/NO3(-) ratio, as this reduces the likelihood of suboptimal yield results. Copyright © 2014 Elsevier Ltd. All rights reserved.
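The closing point about OFAT versus full-factorial designs can be illustrated with a toy response surface containing an interaction term: optimizing the N concentration first and the NH4+/NO3- ratio second misses the optimum that the full factorial finds. The response function below is invented; only the factor levels are taken from the study design.

```python
# Toy comparison of full-factorial (FF) and one-factor-at-a-time (OFAT) designs
# on a response surface with an N x ratio interaction (hypothetical response).
import numpy as np

n_levels = np.array([10.0, 20.0, 40.0, 60.0])        # total N, mM (from the study design)
ratio_levels = np.array([0.0, 0.11, 0.20, 0.33])     # NH4+ fraction (from the study design)

def yield_response(n, r):
    # hypothetical metabolite yield with a strong N x ratio interaction
    return -0.002 * (n - 30) ** 2 + 2.0 * r - 0.15 * r * (n - 40) ** 2 / 100 + 1.0

# full factorial: evaluate every combination
ff = [(yield_response(n, r), n, r) for n in n_levels for r in ratio_levels]
best_ff = max(ff)

# OFAT: optimize N at the default ratio first, then optimize the ratio at that N
r0 = 0.0
n_best = max(n_levels, key=lambda n: yield_response(n, r0))
r_best = max(ratio_levels, key=lambda r: yield_response(n_best, r))
best_ofat = (yield_response(n_best, r_best), n_best, r_best)

print("full factorial optimum (yield, N, ratio):", best_ff)
print("OFAT optimum           (yield, N, ratio):", best_ofat)
```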
Energy Efficiency Optimization in Relay-Assisted MIMO Systems With Perfect and Statistical CSI
NASA Astrophysics Data System (ADS)
Zappone, Alessio; Cao, Pan; Jorswieck, Eduard A.
2014-01-01
A framework for energy-efficient resource allocation in a single-user, amplify-and-forward relay-assisted MIMO system is devised in this paper. Previous results in this area have focused on rate maximization or sum power minimization problems, whereas fewer results are available when bits/Joule energy efficiency (EE) optimization is the goal. The performance metric to optimize is the ratio between the system's achievable rate and the total consumed power. The optimization is carried out with respect to the source and relay precoding matrices, subject to QoS and power constraints. Such a challenging non-convex problem is tackled by means of fractional programming and alternating maximization algorithms, for various CSI assumptions at the source and relay. In particular, the scenarios of perfect CSI and those of statistical CSI for either the source-relay or the relay-destination channel are addressed. Moreover, sufficient conditions for beamforming optimality are derived, which is useful in simplifying the system design. Numerical results are provided to corroborate the validity of the theoretical findings.
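The fractional-programming step can be sketched with a Dinkelbach-style iteration on a single scalar transmit power; the rate and power models below are generic textbook forms and the gain, circuit power and budget are assumptions, not the system model of the paper.

```python
# Dinkelbach-style iteration for bits/Joule energy-efficiency maximization,
# reduced to a single scalar transmit power for illustration.
import numpy as np

g = 4.0          # effective channel gain (assumed)
p_circuit = 1.0  # static circuit power, W (assumed)
p_max = 10.0     # transmit power budget, W (assumed)

rate = lambda p: np.log2(1.0 + g * p)        # achievable rate, bit/s/Hz
power = lambda p: p + p_circuit              # total consumed power, W

grid = np.linspace(1e-6, p_max, 10_000)
lam = 0.0
for _ in range(30):                          # Dinkelbach iterations
    p_star = grid[np.argmax(rate(grid) - lam * power(grid))]
    lam_new = rate(p_star) / power(p_star)
    if abs(lam_new - lam) < 1e-9:
        break
    lam = lam_new
print(f"EE-optimal power ~ {p_star:.3f} W, energy efficiency ~ {lam:.3f} bit/Hz/J")
```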
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinlan, D.; Yi, Q.; Buduc, R.
2005-02-17
ROSE is an object-oriented software infrastructure for source-to-source translation that provides an interface for programmers to write their own specialized translators for optimizing scientific applications. ROSE is a part of current research on telescoping languages, which provides optimizations of the use of libraries in scientific applications. ROSE defines approaches to extend the optimization techniques, common in well defined languages, to the optimization of scientific applications using well defined libraries. ROSE includes a rich set of tools for generating customized transformations to support optimization of applications codes. We currently support full C and C++ (including template instantiation etc.), with Fortran 90 support under development as part of a collaboration and contract with Rice to use their version of the open source Open64 F90 front-end. ROSE represents an attempt to define an open compiler infrastructure to handle the full complexity of full scale DOE applications codes using the languages common to scientific computing within DOE. We expect that such an infrastructure will also be useful for the development of numerous tools that may then realistically expect to work on DOE full scale applications.
Wearable Large-Scale Perovskite Solar-Power Source via Nanocellular Scaffold.
Hu, Xiaotian; Huang, Zengqi; Zhou, Xue; Li, Pengwei; Wang, Yang; Huang, Zhandong; Su, Meng; Ren, Wanjie; Li, Fengyu; Li, Mingzhu; Chen, Yiwang; Song, Yanlin
2017-11-01
Dramatic advances in perovskite solar cells (PSCs) and the blossoming of wearable electronics have triggered tremendous demands for flexible solar-power sources. However, the fracturing of functional crystalline films and transmittance wastage from flexible substrates are critical challenges to approaching the high-performance PSCs with flexural endurance. In this work, a nanocellular scaffold is introduced to architect a mechanics buffer layer and optics resonant cavity. The nanocellular scaffold releases mechanical stresses during flexural experiences and significantly improves the crystalline quality of the perovskite films. The nanocellular optics resonant cavity optimizes light harvesting and charge transportation of devices. More importantly, these flexible PSCs, which demonstrate excellent performance and mechanical stability, are practically fabricated in modules as a wearable solar-power source. A power conversion efficiency of 12.32% for a flexible large-scale device (polyethylene terephthalate substrate, indium tin oxide-free, 1.01 cm 2 ) is achieved. This ingenious flexible structure will enable a new approach for development of wearable electronics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Marine Fungi: A Source of Potential Anticancer Compounds
Deshmukh, Sunil K.; Prakash, Ved; Ranjan, Nihar
2018-01-01
Metabolites from marine fungi have hogged the limelight in drug discovery because of their promise as therapeutic agents. A number of metabolites related to marine fungi have been discovered from various sources which are known to possess a range of activities as antibacterial, antiviral and anticancer agents. Although, over a thousand marine fungi based metabolites have already been reported, none of them have reached the market yet which could partly be related to non-comprehensive screening approaches and lack of sustained lead optimization. The origin of these marine fungal metabolites is varied as their habitats have been reported from various sources such as sponge, algae, mangrove derived fungi, and fungi from bottom sediments. The importance of these natural compounds is based on their cytotoxicity and related activities that emanate from the diversity in their chemical structures and functional groups present on them. This review covers the majority of anticancer compounds isolated from marine fungi during 2012–2016 against specific cancer cell lines. PMID:29354097
OPTiM: Optical projection tomography integrated microscope using open-source hardware and software
Andrews, Natalie; Davis, Samuel; Bugeon, Laurence; Dallman, Margaret D.; McGinty, James
2017-01-01
We describe the implementation of an OPT plate to perform optical projection tomography (OPT) on a commercial wide-field inverted microscope, using our open-source hardware and software. The OPT plate includes a tilt adjustment for alignment and a stepper motor for sample rotation as required by standard projection tomography. Depending on magnification requirements, three methods of performing OPT are detailed using this adaptor plate: a conventional direct OPT method requiring only the addition of a limiting aperture behind the objective lens; an external optical-relay method allowing conventional OPT to be performed at magnifications >4x; a remote focal scanning and region-of-interest method for improved spatial resolution OPT (up to ~1.6 μm). All three methods use the microscope’s existing incoherent light source (i.e. arc-lamp) and all of its inherent functionality is maintained for day-to-day use. OPT acquisitions are performed on in vivo zebrafish embryos to demonstrate the implementations’ viability. PMID:28700724
Speckle-based portable device for in-situ metrology of x-ray mirrors at Diamond Light Source
NASA Astrophysics Data System (ADS)
Wang, Hongchang; Kashyap, Yogesh; Zhou, Tunhe; Sawhney, Kawal
2017-09-01
For modern synchrotron light sources, the push toward diffraction-limited and coherence-preserved beams demands accurate metrology on X-ray optics. Moreover, it is important to perform in-situ characterization and optimization of X-ray mirrors since their ultimate performance is critically dependent on the working conditions. Therefore, it is highly desirable to develop a portable metrology device, which can be easily implemented on a range of beamlines for in-situ metrology. An X-ray speckle-based portable device for in-situ metrology of synchrotron X-ray mirrors has been developed at Diamond Light Source. Ultra-high angular sensitivity is achieved by scanning the speckle generator in the X-ray beam. In addition to the compact setup and ease of implementation, a user-friendly graphical user interface has been developed to ensure that characterization and alignment of X-ray mirrors is simple and fast. The functionality and feasibility of this device is presented with representative examples.
Optimizing moderation of He-3 neutron detectors for shielded fission sources
Rees, Lawrence B.; Czirr, J. Bart
2012-07-10
The response of 3He neutron detectors is highly dependent on the amount of moderator incorporated into the detector system. If there is too little moderation, neutrons will not react with the 3He. If there is too much moderation, neutrons will not reach the 3He. In applications for portal or border monitors where 3He detectors are used to interdict illicit importation of plutonium, the fission source is always shielded to some extent. Since the energy distribution of neutrons emitted from the source depends on the amount and type of shielding present, the optimum placement of moderating material around 3He tubes is a function of shielding. In this paper, we use Monte Carlo techniques to model the response of 3He tubes placed in polyethylene boxes for moderation. To model the shielded fission neutron source, we use a 252Cf source placed in the center of spheres of water of varying radius. Detector efficiency as a function of box geometry and shielding is explored. We find that increasing the amount of moderator behind and to the sides of the detector generally improves the detector response, but that benefits are limited if the thickness of the polyethylene moderator is greater than about 5-7 cm. The thickness of the moderator in front of the 3He tubes, however, is very important. For bare sources, about 5-6 cm of moderator is optimum, but as the shielding increases, the optimum thickness of this moderator decreases to 0-1 cm. A two-tube box with a moderator thickness of 5 cm in front of the first tube and a thickness of 1 cm in front of the second tube is proposed to improve the detector's sensitivity to lower-energy neutrons.
Wohlt, J E; Ritter, D E; Evans, J L
1986-11-01
Three supplemental sources of inorganic calcium (calcite flour, aragonite, albacar), each differing in particle size and rate of reactivity, provided 0.6 or 0.9% calcium in corn silage:grain (1:1 dry matter) diets of high producing dairy cows. All cows were fed calcite flour at 0.6% calcium during the first 4 wk of lactation. On d 29 of lactation, 5 cows were assigned to each of the six diets. Peak milk yield paralleled dry matter intake and was higher when calcite flour and aragonite provided 0.9% calcium, intermediate when all sources provided 0.6% calcium, and lower when albacar provided 0.9% calcium. However, adaptations to calcium source and to particle sizes of a calcium source (0.35 to 1190 μ) were made within 40 d by lactating Holsteins. Starch increased and pH decreased in feces of cows fed albacar. Increasing calcium in the diet provided more buffering capacity in the gastrointestinal tract. True absorption of calcium did not differ from linearity due to source when fecal calcium was regressed on ingested calcium but did vary as a function of diet percentage. Thus, calcium retention was increased when cows were fed 0.9 vs. 0.6% calcium. These data suggest that a slow reacting (coarser) inorganic calcium source should be fed at a higher amount to optimize feed intake and milk production.
Sokhey, Taegh; Gaebler-Spira, Deborah; Kording, Konrad P.
2017-01-01
Background: It is important to understand the motor deficits of children with Cerebral Palsy (CP). Our understanding of this motor disorder can be enriched by computational models of motor control. One crucial stage in generating movement involves combining uncertain information from different sources, and deficits in this process could contribute to reduced motor function in children with CP. Healthy adults can integrate previously-learned information (prior) with incoming sensory information (likelihood) in a close-to-optimal way when estimating object location, consistent with the use of Bayesian statistics. However, there are few studies investigating how children with CP perform sensorimotor integration. We compare sensorimotor estimation in children with CP and age-matched controls using a model-based analysis to understand the process. Methods and findings: We examined Bayesian sensorimotor integration in children with CP, aged between 5 and 12 years old, with Gross Motor Function Classification System (GMFCS) levels 1–3, and compared their estimation behavior with that of age-matched typically-developing (TD) children. We used a simple sensorimotor estimation task which requires participants to combine probabilistic information from different sources: a likelihood distribution (current sensory information) with a prior distribution (learned target information). In order to examine sensorimotor integration, we quantified how participants weighted the statistical information from the two sources (prior and likelihood) and compared this to the statistically optimal weighting. We found that the weighting of statistical information in children with CP was as statistically efficient as that of TD children. Conclusions: We conclude that Bayesian sensorimotor integration is not impaired in children with CP and therefore does not contribute to their motor deficits. Future research has the potential to enrich our understanding of motor disorders by investigating the stages of motor processing set out by computational models. Therapeutic interventions should exploit the ability of children with CP to use statistical information. PMID:29186196
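For reference, the optimal (Bayesian) combination rule tested in such tasks reduces, for a Gaussian prior and Gaussian likelihood, to inverse-variance weighting; the sketch below uses illustrative numbers.

```python
# Optimal Bayesian combination of a learned prior and a noisy observation.
import numpy as np

mu_prior, var_prior = 0.0, 4.0       # learned target distribution (assumed)
x_sensed, var_like = 3.0, 1.0        # current noisy observation and its variance (assumed)

# inverse-variance weighting of the likelihood relative to the prior
w_like = (1.0 / var_like) / (1.0 / var_like + 1.0 / var_prior)
estimate = w_like * x_sensed + (1.0 - w_like) * mu_prior
print(f"likelihood weight = {w_like:.2f}, optimal estimate = {estimate:.2f}")
# Comparing a participant's empirical weighting against w_like is how the
# statistical efficiency of their sensorimotor integration is assessed.
```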
An Optimal Design for Placements of Tsunami Observing Systems Around the Nankai Trough, Japan
NASA Astrophysics Data System (ADS)
Mulia, I. E.; Gusman, A. R.; Satake, K.
2017-12-01
Presently, there are numerous tsunami observing systems deployed in several major tsunamigenic regions throughout the world. However, documentation on how and where to optimally place such measurement devices is limited. This study presents a methodological approach to select the best and fewest observation points for the purpose of tsunami source characterization, particularly in the form of fault slip distributions. We apply the method to design a new tsunami observation network around the Nankai Trough, Japan. In brief, our method can be divided into two stages: initialization and optimization. The initialization stage aims to identify favorable locations of observation points, as well as to determine the initial number of observations. These points are generated based on the extrema of empirical orthogonal function (EOF) spatial modes derived from 11 hypothetical tsunami events in the region. In order to further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search (MADS) to remove redundant measurements from the points initially generated in the first stage. A combinatorial search by the MADS improves the accuracy and reduces the number of observations simultaneously. The EOF analysis of the hypothetical tsunamis, using the first 2 leading modes with 4 extrema on each mode, results in 30 observation points spread along the trench. This is obtained after replacing clustered points within a radius of 30 km with a single representative. Furthermore, the MADS optimization can improve the accuracy of the EOF-generated points by approximately 10-20% with fewer observations (23 points). Finally, we compare our result with the existing observation points (68 stations) in the region. The result shows that the optimized design with a smaller number of observations can produce better source characterizations, with approximately 20-60% improvement in accuracy in all 11 hypothetical cases. It should be noted, however, that our design is a tsunami-based approach; some of the existing observing systems are equipped with additional devices to measure other parameters of interest, e.g., for monitoring seismic activity.
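A minimal sketch of the initialization stage described above: compute EOF spatial modes of a set of simulated tsunami fields (events x grid points) via SVD and take extrema of the leading modes as candidate observation points. The fields here are synthetic, the number of extrema per mode is arbitrary, and the MADS pruning stage is not reproduced.

```python
# EOF-based selection of candidate observation points from synthetic tsunami fields.
import numpy as np

rng = np.random.default_rng(5)
n_events, n_points = 11, 400
fields = rng.normal(size=(n_events, n_points)) @ rng.normal(size=(n_points, n_points)) * 1e-3
fields -= fields.mean(axis=0)                   # remove the event mean

# EOF spatial modes are the right singular vectors of the (events x points) matrix
_, _, vt = np.linalg.svd(fields, full_matrices=False)

candidates = []
for mode in vt[:2]:                             # first 2 leading modes
    order = np.argsort(mode)
    candidates.extend(order[:4])                # 4 strongest minima
    candidates.extend(order[-4:])               # 4 strongest maxima
candidates = sorted(set(int(i) for i in candidates))
print("candidate observation points (grid indices):", candidates)
```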
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J; Sisniega, A; Zbijewski, W
Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point-of-care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of x-ray scatter correction necessary for detection of low contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. The task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10–25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of a system geometry with a nominal source-detector distance of 1100 mm and an optimal magnification of 1.50. A focal spot size of ∼0.6 mm was sufficient for the spatial resolution requirements of ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model provided quantitation of tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved with incorporation of model-based iterative reconstruction techniques, also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.
ERIC Educational Resources Information Center
Chatzarakis, G. E.
2009-01-01
This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. There have been several penalty functions suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
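A minimal sketch of the static-penalty approach discussed above follows: infeasible individuals are scored with the objective plus a penalty proportional to their squared constraint violations, and a simple GA minimizes the penalized fitness. The test problem and penalty weight are illustrative, not those used with COMETBOARDS.

```python
# Static penalty function with a simple real-coded genetic algorithm.
import random

PENALTY_WEIGHT = 1000.0  # static penalty weight (assumed)

def objective(x):
    # minimize a simple quadratic (hypothetical structural-weight surrogate)
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def constraint_violations(x):
    # constraints in g(x) <= 0 form; positive values are violations
    g1 = x[0] + x[1] - 2.5          # resource limit (assumed)
    g2 = -x[0]                      # x0 >= 0
    return [max(0.0, g1), max(0.0, g2)]

def penalized_fitness(x):
    return objective(x) + PENALTY_WEIGHT * sum(v * v for v in constraint_violations(x))

def genetic_algorithm(pop_size=40, generations=200):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=penalized_fitness)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # blend recombination plus Gaussian mutation
            child = [(ai + bi) / 2.0 + random.gauss(0.0, 0.1) for ai, bi in zip(a, b)]
            children.append(child)
        pop = parents + children
    return min(pop, key=penalized_fitness)

best = genetic_algorithm()
print("best design:", [round(v, 3) for v in best],
      "penalized fitness:", round(penalized_fitness(best), 4))
```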
Relay selection in energy harvesting cooperative networks with rateless codes
NASA Astrophysics Data System (ADS)
Zhu, Kaiyan; Wang, Fei
2018-04-01
This paper investigates relay selection in energy harvesting cooperative networks, where the relays harvest energy from the radio frequency (RF) signals transmitted by a source, and the optimal relay is selected and uses the harvested energy to assist the information transmission from the source to its destination. Both the source and the selected relay transmit information using rateless codes, which allow the destination to recover the original information once the collected code bits marginally surpass the entropy of the original information. In order to improve transmission performance and efficiently utilize the harvested power, the optimal relay is selected. The optimization problem is formulated to maximize the achievable information rate of the system. Simulation results demonstrate that our proposed relay selection scheme outperforms other strategies.
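The selection idea can be sketched as follows: each candidate relay first harvests energy from the source's RF signal, and the relay whose harvested power and channel gains yield the largest end-to-end achievable rate is selected. The harvesting model, rate expressions and parameters below are generic assumptions, not the paper's exact formulation.

```python
# Select the relay maximizing a two-hop achievable rate under harvested power.
import numpy as np

rng = np.random.default_rng(6)
n_relays = 5
P_s = 1.0                  # source transmit power, W (assumed)
eta = 0.7                  # energy-harvesting efficiency (assumed)
T_harvest = 0.3            # fraction of the block used for harvesting (assumed)

h_sr = rng.exponential(1.0, n_relays)   # |h|^2 source -> relay
h_rd = rng.exponential(1.0, n_relays)   # |h|^2 relay -> destination

# power available at each relay from harvested energy, spread over the data phase
P_r = eta * T_harvest * P_s * h_sr / (1.0 - T_harvest)

# decode-and-forward style two-hop rate: limited by the weaker hop
rate_sr = 0.5 * np.log2(1.0 + P_s * h_sr)
rate_rd = 0.5 * np.log2(1.0 + P_r * h_rd)
rates = np.minimum(rate_sr, rate_rd)

best = int(np.argmax(rates))
print(f"selected relay {best}, achievable rate {rates[best]:.3f} bit/s/Hz")
```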
Analysis-Driven Design Optimization of a SMA-Based Slat-Cove Filler for Aeroacoustic Noise Reduction
NASA Technical Reports Server (NTRS)
Scholten, William; Hartl, Darren; Turner, Travis
2013-01-01
Airframe noise is a significant component of environmental noise in the vicinity of airports. The noise associated with the leading-edge slat of typical transport aircraft is a prominent source of airframe noise. Previous work suggests that a slat-cove filler (SCF) may be an effective noise treatment. Hence, development and optimization of a practical slat-cove-filler structure is a priority. The objectives of this work are to optimize the design of a functioning SCF which incorporates superelastic shape memory alloy (SMA) materials as flexures that permit the deformations involved in the configuration change. The goal of the optimization is to minimize the actuation force needed to retract the slat-SCF assembly while satisfying constraints on the maximum SMA stress and on the SCF deflection under static aerodynamic pressure loads, while also satisfying the condition that the SCF self-deploy during slat extension. A finite element analysis model based on a physical bench-top model is created in Abaqus such that automated iterative analysis of the design could be performed. In order to achieve an optimized design, several design variables associated with the current SCF configuration are considered, such as the thicknesses of SMA flexures and the dimensions of various components, SMA and conventional. Designs of experiment (DOE) are performed to investigate structural response to an aerodynamic pressure load and to slat retraction and deployment. DOE results are then used to inform the optimization process, which determines a design minimizing actuator forces while satisfying the required constraints.
NASA Astrophysics Data System (ADS)
Aittokoski, Timo; Miettinen, Kaisa
2008-07-01
Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
iQIST v0.7: An open source continuous-time quantum Monte Carlo impurity solver toolkit
NASA Astrophysics Data System (ADS)
Huang, Li
2017-12-01
In this paper, we present a new version of the iQIST software package, which is capable of solving various quantum impurity models using the hybridization expansion (strong coupling expansion) continuous-time quantum Monte Carlo algorithm. In the revised version, the software architecture is completely redesigned. A new basis (the intermediate representation, or singular value decomposition representation) for the single-particle and two-particle Green's functions is introduced. Many useful physical observables are added, such as the charge susceptibility, fidelity susceptibility, Binder cumulant, and autocorrelation time. In particular, we optimize the measurement of the two-particle Green's functions; both the particle-hole and particle-particle channels are supported. In addition, the block structure of the two-particle Green's functions is exploited to accelerate the calculation. Finally, we fix some known bugs and limitations. The computational efficiency of the code is greatly enhanced.
The influences of delay time on the stability of a market model with stochastic volatility
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-02-01
The effects of delay time on the stability of a market model are investigated using a modified Heston model with a cubic nonlinearity and cross-correlated noise sources. The results indicate that: (i) there is an optimal delay time τo which maximally enhances the stability of the stock price under strong demand elasticity of the stock price, and maximally reduces it under weak demand elasticity; (ii) the cross-correlation coefficient of the noises and the delay time play opposite roles in the stability for delay times below τo and the same role for delay times above τo. Moreover, the probability density function of the escape time of stock price returns, the probability density function of the returns and the correlation function of the returns are compared with those reported in other studies.
Intermittent metabolic switching, neuroplasticity and brain health
Mattson, Mark P.; Moehl, Keelin; Ghena, Nathaniel; Schmaedick, Maggie; Cheng, Aiwu
2018-01-01
During evolution, individuals whose brains and bodies functioned well in a fasted state were successful in acquiring food, enabling their survival and reproduction. With fasting and extended exercise, liver glycogen stores are depleted and ketones are produced from adipose-cell-derived fatty acids. This metabolic switch in cellular fuel source is accompanied by cellular and molecular adaptations of neural networks in the brain that enhance their functionality and bolster their resistance to stress, injury and disease. Here, we consider how intermittent metabolic switching, repeating cycles of a metabolic challenge that induces ketosis (fasting and/or exercise) followed by a recovery period (eating, resting and sleeping), may optimize brain function and resilience throughout the lifespan, with a focus on the neuronal circuits involved in cognition and mood. Such metabolic switching impacts multiple signalling pathways that promote neuroplasticity and resistance of the brain to injury and disease. PMID:29321682
Manikan, Vidyah; Kalil, Mohd Sahaid; Hamid, Aidil Abdul
2015-02-27
Docosahexaenoic acid (DHA, C22:6n-3) plays a vital role in the enhancement of human health, particularly for cognitive, neurological, and visual functions. Marine microalgae, such as members of the genus Aurantiochytrium, are rich in DHA and represent a promising source of omega-3 fatty acids. In this study, levels of glucose, yeast extract, sodium glutamate and sea salt were optimized for enhanced lipid and DHA production by a Malaysian isolate of thraustochytrid, Aurantiochytrium sp. SW1, using response surface methodology (RSM). The optimized medium contained 60 g/L glucose, 2 g/L yeast extract, 24 g/L sodium glutamate and 6 g/L sea salt. This combination produced 17.8 g/L biomass containing 53.9% lipid (9.6 g/L) which contained 44.07% DHA (4.23 g/L). The optimized medium was used in a scale-up run, where a 5 L bench-top bioreactor was employed to verify the applicability of the medium at larger scale. This produced 24.46 g/L biomass containing 38.43% lipid (9.4 g/L), of which 47.87% was DHA (4.5 g/L). The total amount of DHA produced was 25% higher than that produced in the original medium prior to optimization. This result suggests that Aurantiochytrium sp. SW1 could be developed for industrial application as a commercial DHA-producing microorganism.
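As a rough illustration of the RSM workflow mentioned above, the sketch below fits a second-order response surface to a small central-composite design and locates its stationary point. The two factors, the coded design, and the synthetic response values are invented placeholders, not the measured data of this study.

```python
import numpy as np

# Coded central-composite design in two hypothetical factors (e.g., glucose and
# yeast extract levels) with synthetic responses; a real study would use the
# measured DHA yields from the CCD runs instead.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
              [0, 0], [0, 0], [0, 0]], dtype=float)
y = np.array([3.1, 3.8, 3.5, 4.4, 2.9, 4.1, 3.0, 3.9, 4.5, 4.6, 4.4])

# Second-order model: b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point of the fitted surface (candidate optimum in coded units).
B = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
grad = np.array([b[1], b[2]])
x_stat = np.linalg.solve(B, -grad)
print("fitted coefficients:", b.round(3))
print("stationary point (coded units):", x_stat.round(3))
```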
Optimization of fixture layouts of glass laser optics using multiple kernel regression.
Su, Jianhua; Cao, Enhua; Qiao, Hong
2014-05-10
We aim to build an integrated fixturing model to describe the structural properties and thermal properties of the support frame of glass laser optics. Therefore, (a) a near global optimal set of clamps can be computed to minimize the surface shape error of the glass laser optic based on the proposed model, and (b) a desired surface shape error can be obtained by adjusting the clamping forces under various environmental temperatures based on the model. To construct the model, we develop a new multiple kernel learning method and call it multiple kernel support vector functional regression. The proposed method uses two layer regressions to group and order the data sources by the weights of the kernels and the factors of the layers. Because of that, the influences of the clamps and the temperature can be evaluated by grouping them into different layers.
Cyclic Parameter Refinement of 4S-10 Hybrid Flux-Switching Motor for Lightweight Electric Vehicle
NASA Astrophysics Data System (ADS)
Rani, J. Abd; Sulaiman, E.; Kumar, R.
2017-08-01
A great deal of attention has been given to reducing vehicle weight, because a lighter vehicle consumes comparatively less energy. Hence, the lightweight electric vehicle was introduced to lower the carbon footprint and reduce the size of the vehicle itself. One of the components that can reduce the weight of the vehicle is the propulsion system, which comprises an electric motor serving as the source of torque to drive the propulsion system of the machine. This paper presents a refinement methodology for the optimized design of the 4S-10P E-Core hybrid excitation flux switching motor. The purpose of the refinement methodology is to improve the torque production of the optimized motor. The improved torque production justifies the use of the motor to drive the propulsion system of a lightweight electric vehicle.
Graphene oxide as an optimal candidate material for methane storage.
Chouhan, Rajiv K; Ulman, Kanchan; Narasimhan, Shobhana
2015-07-28
Methane, the primary constituent of natural gas, binds too weakly to nanostructured carbons to meet the targets set for on-board vehicular storage to be viable. We show, using density functional theory calculations, that replacing graphene by graphene oxide increases the adsorption energy of methane by 50%. This enhancement is sufficient to achieve the optimal binding strength. In order to gain insight into the sources of this increased binding, that could also be used to formulate design principles for novel storage materials, we consider a sequence of model systems that progressively take us from graphene to graphene oxide. A careful analysis of the various contributions to the weak binding between the methane molecule and the graphene oxide shows that the enhancement has important contributions from London dispersion interactions as well as electrostatic interactions such as Debye interactions, aided by geometric curvature induced primarily by the presence of epoxy groups.
An analytic approach to optimize tidal turbine fields
NASA Astrophysics Data System (ADS)
Pelz, P.; Metzler, M.
2013-12-01
Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources have been developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to generate electrical energy. Since the available power of a hydrokinetic turbine is proportional to its projected cross-sectional area, fields of turbines are installed to scale up shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operating point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is taken to calculate the coefficient of performance of hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
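For orientation only, the following sketch evaluates the classical unbounded actuator-disk (Betz) power coefficient as a function of the axial induction factor. It illustrates the kind of momentum and energy balance involved, but it is not the blocked-channel model with bypass and turbine head derived in the paper.

```python
import numpy as np

def power_coefficient(a):
    """Classical actuator-disk result Cp = 4*a*(1 - a)**2, where a is the axial
    induction factor obtained from the momentum and energy balances for an
    unbounded flow (no channel blockage, bypass head, or free-surface effect)."""
    return 4.0 * a * (1.0 - a) ** 2

a = np.linspace(0.0, 0.5, 501)
cp = power_coefficient(a)
i = int(np.argmax(cp))
print(f"optimal induction a = {a[i]:.3f}, Cp_max = {cp[i]:.4f}")  # about 1/3 and 16/27
```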
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323
NASA Astrophysics Data System (ADS)
Verma, Madhulika; Sharma, Dheeraj; Pandey, Sunil; Nigam, Kaushal; Kondekar, P. N.
2017-01-01
In this work, we perform a comparative analysis between single and dual metal dielectrically modulated tunnel field-effect transistors (DMTFETs) for application as label-free biosensors. For this purpose, two different gate materials with work-functions ϕM1 and ϕM2 are used in the short-gate DMTFET, where ϕM1 is the work-function of gate M1 near the drain end and ϕM2 is the work-function of gate M2 near the source end. A nanogap cavity in the gate dielectric is formed by removing a selected portion of the gate oxide for sensing the biomolecules. To investigate the sensitivity of these biosensors, the dielectric constant and the charge density within the cavity region are considered as governing parameters. The work-function of gate M2 is optimized and kept lower than that of M1 to achieve abruptness at the source/channel junction, which results in better tunneling and improved ON-state current. ATLAS device simulations show that the dual metal SG-DMTFET attains higher ON-state current and drain current sensitivity than its counterpart device. Finally, the dual metal short-gate (DSG) biosensor is compared with the single metal short-gate (SG), single metal full-gate (FG), and dual metal full-gate (DFG) biosensors to analyse the effect of structurally enhanced conjugation on gate-channel coupling.
Analytic solution of magnetic induction distribution of ideal hollow spherical field sources
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-12-01
Halbach-type hollow spherical permanent magnet arrays (HSPMA) are volume-compact, energy-efficient field sources capable of producing multi-tesla fields in the cavity of the array, and they have attracted intense interest for many practical applications. Here, we present analytical solutions of the magnetic induction of the ideal HSPMA in the entire space: outside the array, within the cavity of the array, and in the interior of the magnet. We obtain the solutions using the concept of magnetic charge to solve the Poisson and Laplace equations for the HSPMA. Using these analytical field expressions inside the material, a scalar demagnetization function is defined to approximately indicate the regions of magnetization reversal, partial demagnetization, and inverse magnetic saturation. The analytical field solution provides deeper insight into the nature of the HSPMA and offers guidance in designing optimized arrays.
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Marini, J. W.
1977-01-01
A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. The GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
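A minimal sketch of the GA-MLR separation of variables on a synthetic biexponential decay: a small hand-rolled GA searches only the nonlinear decay times, while the amplitudes are recovered at each evaluation by linear least squares. The decay times, noise level, and GA settings are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

# Synthetic biexponential decay (decay times 0.8 and 3.5) with a little noise.
y = 2.0 * np.exp(-t / 0.8) + 1.0 * np.exp(-t / 3.5) + rng.normal(0.0, 0.01, t.size)

def mlr_step(taus):
    """MLR step: for fixed decay times, amplitudes follow from linear least squares."""
    A = np.column_stack([np.exp(-t / tau) for tau in taus])
    amps = np.linalg.lstsq(A, y, rcond=None)[0]
    chi2 = float(np.sum((A @ amps - y) ** 2))
    return chi2, amps

def ga_mlr(pop_size=30, generations=80, bounds=(0.1, 10.0)):
    """GA step: only the nonlinear decay times are encoded in the chromosomes."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        chi2 = np.array([mlr_step(ind)[0] for ind in pop])
        parents = pop[np.argsort(chi2)][: pop_size // 2]      # truncation selection
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = parents[idx, np.arange(2)]                  # uniform crossover
        children += rng.normal(0.0, 0.1, children.shape)       # Gaussian mutation
        children[0] = parents[0]                               # elitism: keep the best
        pop = np.clip(children, lo, hi)
    best = min(pop, key=lambda ind: mlr_step(ind)[0])
    return best, mlr_step(best)[1]

taus, amps = ga_mlr()
print("recovered decay times:", taus.round(2), "amplitudes:", amps.round(2))
```

Because the amplitudes are eliminated analytically at every fitness evaluation, the GA only has to search the two-dimensional space of decay times, which is the efficiency gain the GA-MLR idea relies on.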
Li, Kaiming; Guo, Lei; Zhu, Dajiang; Hu, Xintao; Han, Junwei; Liu, Tianming
2013-01-01
Studying connectivities among functional brain regions and the functional dynamics on brain networks has drawn increasing interest. A fundamental issue that affects functional connectivity and dynamics studies is how to determine the best possible functional brain regions or ROIs (regions of interest) for a group of individuals, since the connectivity measurements are heavily dependent on ROI locations. Essentially, identification of accurate, reliable and consistent corresponding ROIs is challenging due to the unclear boundaries between brain regions, variability across individuals, and nonlinearity of the ROIs. In response to these challenges, this paper presents a novel methodology to computationally optimize ROI locations derived from task-based fMRI data for individuals so that the optimized ROIs are more consistent, reproducible and predictable across brains. Our computational strategy is to formulate the individual ROI location optimization as a group variance minimization problem, in which group-wise consistencies in functional/structural connectivity patterns and anatomic profiles are defined as optimization constraints. Our experimental results from multimodal fMRI and DTI data show that the optimized ROIs have significantly improved consistency in structural and functional profiles across individuals. These improved functional ROIs with better consistency could contribute to further study of functional interaction and dynamics in the human brain. PMID:22281931
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a general method to carry out sensitivity analysis. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • identification of unknown parameters, and • identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
A flexible, interactive software tool for fitting the parameters of neuronal models
Friedrich, Péter; Vella, Michael; Gulyás, Attila I.; Freund, Tamás F.; Káli, Szabolcs
2014-01-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool. PMID:25071540
Benomar, Lahcen; Lamhamedi, Mohammed S.; Rainville, André; Beaulieu, Jean; Bousquet, Jean; Margolis, Hank A.
2016-01-01
Assisted population migration (APM) is the intentional movement of populations within a species range to sites where future environmental conditions are projected to be more conducive to growth. APM has been proposed as a proactive adaptation strategy to maintain forest productivity and to reduce the vulnerability of forest ecosystems to projected climate change. The validity of such a strategy will depend on the adaptation capacity of populations, which can partially be evaluated by the ecophysiological response of different genetic sources along a climatic gradient. This adaptation capacity results from the compromise between (i) the degree of genetic adaptation of seed sources to their environment of origin and (ii) the phenotypic plasticity of functional traits, which can make it possible for transferred seed sources to respond positively to new growing conditions. We examined phenotypic variation in morphophysiological traits of six seed sources of white spruce (Picea glauca [Moench] Voss) along a regional climatic gradient in Québec, Canada. Seedlings from the seed sources were planted at three forest sites representing a mean annual temperature (MAT) gradient of 2.2°C. During the second growing season, we measured height growth (H2014) and traits related to resource-use efficiency and photosynthetic rate (Amax). All functional traits showed an adaptive response to the climatic gradient. Traits such as H2014, Amax, stomatal conductance (gs), the ratio of mesophyll to stomatal conductance, water use efficiency, and photosynthetic nitrogen-use efficiency showed significant variation in both physiological plasticity due to the planting site and seed source variation related to local genetic adaptation. However, the amplitude of seed source variation was much less than that related to plantation sites in the area investigated. The six seed sources showed a similar level of physiological plasticity. H2014, Amax and gs, but not carboxylation capacity (Vcmax), were correlated and decreased with a reduction of the average temperature of the growing season at the seed origin. The clinal variation in H2014 and Amax appeared to be driven by CO2 conductance. The presence of locally adapted functional traits suggests that the use of APM may have advantages for optimizing seed source productivity in future local climates. PMID:26870067
NASA Astrophysics Data System (ADS)
Asano, K.
2017-12-01
An MJMA 6.5 earthquake occurred offshore the Kii peninsula, southwest Japan, on April 1, 2016. This event was interpreted as a thrust event on the plate boundary along the Nankai trough (Wallace et al., 2016). It is the largest plate-boundary earthquake in the source region of the 1944 Tonankai earthquake (MW 8.0) since that event. A significant point of this event regarding seismic observation is that it occurred beneath an ocean-bottom seismic network called DONET1, which is jointly operated by NIED and JAMSTEC. Since moderate-to-large earthquakes of this focal type have been very rare in this region during the last half century, this is a good opportunity to investigate the source characteristics relating to strong motion generation of subduction-zone plate-boundary earthquakes along the Nankai trough. Knowledge obtained from the study of this earthquake would contribute to ground motion prediction and seismic hazard assessment for future megathrust earthquakes expected in the Nankai trough. In this study, the source model of the 2016 offshore Kii peninsula earthquake was estimated by broadband strong motion waveform modeling using the empirical Green's function method (Irikura, 1986). The source model is characterized by a strong motion generation area (SMGA) (Miyake et al., 2003), which is defined as a rectangular area with high stress drop or high slip velocity. An SMGA source model based on the empirical Green's function method has great potential to reproduce ground motion time histories over a broadband frequency range. We used strong motion data from offshore stations (DONET1 and LTBMS) and onshore stations (NIED F-net and DPRI). The records of an MJMA 3.2 aftershock at 13:04 on April 1, 2016 were selected for the empirical Green's functions. The source parameters of the SMGA were optimized by waveform modeling in the frequency range 0.4-10 Hz. The best estimate of the SMGA size is 19.4 km², and the SMGA of this event does not follow the source scaling relationship for past plate-boundary earthquakes along the Japan Trench, northeast Japan. This finding implies that the source characteristics of plate-boundary events in the Nankai trough are different from those in the Japan Trench, and it could be important information for considering regional variation in ground motion prediction.
Beyer, Hans-Georg
2014-01-01
The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated by considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective function leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit and up to a scalar factor, the inverse of the Hessian of the objective function considered.
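The ranking-based utility idea discussed above can be sketched with a simple NES-style iteration on an ellipsoid function, shown below. The learning rates, utility weights, and the isotropic step-size update are simplified assumptions rather than the exact algorithms analyzed in the paper.

```python
import numpy as np

def ellipsoid(x):
    # Positive quadratic (ellipsoid) test function with graded conditioning.
    n = x.size
    return float(np.sum(10.0 ** (np.arange(n) / max(n - 1, 1)) * x ** 2))

def simple_nes(n=5, iters=400, pop=20, seed=0):
    rng = np.random.default_rng(seed)
    mean, sigma = np.ones(n), 1.0
    eta_mean, eta_sigma = 1.0, 0.1
    # Rank-based utilities: better samples get larger weight, weights sum to zero.
    ranks = np.arange(pop)
    util = np.maximum(0.0, np.log(pop / 2 + 1) - np.log(ranks + 1))
    util = util / util.sum() - 1.0 / pop
    for _ in range(iters):
        z = rng.standard_normal((pop, n))
        x = mean + sigma * z
        order = np.argsort([ellipsoid(xi) for xi in x])        # best first
        z = z[order]
        mean = mean + eta_mean * sigma * (util @ z)            # natural-gradient step on the mean
        sigma *= np.exp(0.5 * eta_sigma * (util @ (np.sum(z ** 2, axis=1) - n)) / n)
    return mean, sigma

mean, sigma = simple_nes()
print("final objective:", ellipsoid(mean), "step size:", sigma)
```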
Central Chemoreceptors: Locations and Functions
Nattie, Eugene; Li, Aihua
2016-01-01
Central chemoreception traditionally refers to a change in ventilation attributable to changes in CO2/H+ detected within the brain. Interest in central chemoreception has grown substantially since the previous Handbook of Physiology published in 1986. Initially, central chemoreception was localized to areas on the ventral medullary surface, a hypothesis complemented by the recent identification of neurons with specific phenotypes near one of these areas as putative chemoreceptor cells. However, there is substantial evidence that many sites participate in central chemoreception, some located at a distance from the ventral medulla. Functionally, central chemoreception, via the sensing of brain interstitial fluid H+, serves to detect and integrate information on 1) alveolar ventilation (arterial PCO2), 2) brain blood flow and metabolism, and 3) acid-base balance, and, in response, can affect breathing, airway resistance, blood pressure (sympathetic tone) and arousal. In addition, central chemoreception provides a tonic 'drive' (source of excitation) at the normal, baseline PCO2 level that maintains a degree of functional connectivity among brainstem respiratory neurons necessary to produce eupneic breathing. Central chemoreception responds to small variations in PCO2 to regulate normal gas exchange and to large changes in PCO2 to minimize acid-base changes. Central chemoreceptor sites vary in function with sex and with development. From an evolutionary perspective, central chemoreception grew out of the demands posed by air vs. water breathing, homeothermy, sleep, optimization of the work of breathing with the 'ideal' arterial PCO2, and the maintenance of the appropriate pH at 37°C for optimal protein structure and function. PMID:23728974
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or another suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
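A minimal sketch of an integer-restricted exploratory search in the spirit of Hooke and Jeeves, with a crude penalty standing in for constraint handling. It is not the IESIP code (which was written in TURBO Pascal), the rounding scheme, greedy refinements, and pattern moves are omitted, and the small test problem is hypothetical.

```python
def exploratory_search_integer(f, x0, max_iter=200):
    """Hooke-and-Jeeves-style exploratory moves restricted to unit integer steps."""
    x, best = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for step in (1, -1):            # unit neighborhood in each coordinate
                trial = list(x)
                trial[i] += step
                value = f(trial)
                if value < best:
                    x, best, improved = trial, value, True
        if not improved:                    # no improving unit move: stop
            break
    return x, best

# Small hypothetical test problem: an integer quadratic with a penalized
# linear constraint x1 + x2 >= 6 (not taken from the IESIP documentation).
def objective(x):
    penalty = 1000 * max(0, 6 - (x[0] + x[1]))
    return (x[0] - 2) ** 2 + (x[1] - 5) ** 2 + penalty

print(exploratory_search_integer(objective, [0, 0]))   # expected to reach [2, 5]
```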
Mehri, Mehran
2014-07-01
The optimization algorithm of a model may have significant effects on the final optimal values of nutrient requirements in poultry enterprises. In poultry nutrition, the optimal values of dietary essential nutrients are very important for feed formulation to optimize profit through minimizing feed cost and maximizing bird performance. This study was conducted to introduce a novel multi-objective algorithm, the desirability function, for optimizing bird response models based on response surface methodology (RSM) and artificial neural networks (ANN). The growth databases from the central composite design (CCD) were used to construct the RSM and ANN models, and optimal values for 3 essential amino acids, lysine, methionine, and threonine, in broiler chicks were reevaluated using the desirability function in both analytical approaches from 3 to 16 d of age. Multi-objective optimization results showed that the highest desirability was obtained for the ANN-based model (D = 0.99), where the optimal levels of digestible lysine (dLys), digestible methionine (dMet), and digestible threonine (dThr) for maximum desirability were 13.2, 5.0, and 8.3 g/kg of diet, respectively. However, the optimal levels of dLys, dMet, and dThr in the RSM-based model were estimated at 11.2, 5.4, and 7.6 g/kg of diet, respectively. This research documented that the application of ANN in the broiler chicken model along with a multi-objective optimization algorithm such as the desirability function could be a useful tool for optimization of dietary amino acids in fractional factorial experiments, in which the use of the global desirability function may be able to overcome the underestimations of dietary amino acids resulting from the RSM model. © 2014 Poultry Science Association Inc.
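To illustrate the desirability-function idea in general terms, the sketch below combines two hypothetical single-factor response models into an overall desirability (geometric mean of individual desirabilities, in the Derringer-Suich style) and picks the most desirable factor level. The response models, targets and ranges are invented and are not the fitted broiler-chick models of this study.

```python
import numpy as np

def desirability_maximize(y, low, high, weight=1.0):
    """Larger-is-better desirability: 0 at or below `low`, 1 at or above `high`."""
    d = (np.asarray(y) - low) / (high - low)
    return np.clip(d, 0.0, 1.0) ** weight

# Hypothetical fitted responses in one coded factor x (e.g., a dLys level):
gain = lambda x: 50 + 10 * x - 6 * x ** 2        # body-weight-gain model (invented)
fcr_inv = lambda x: 30 + 4 * x - 3 * x ** 2      # inverse feed-conversion model (invented)

x = np.linspace(-1.5, 1.5, 301)
d1 = desirability_maximize(gain(x), low=40, high=60)
d2 = desirability_maximize(fcr_inv(x), low=25, high=33)
D = np.sqrt(d1 * d2)                             # overall desirability = geometric mean

best = x[np.argmax(D)]
print(f"most desirable factor level (coded): {best:.2f}, D = {D.max():.3f}")
```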
Advances in the Knowledge about Kidney Decellularization and Repopulation
Destefani, Afrânio Côgo; Sirtoli, Gabriela Modenesi; Nogueira, Breno Valentim
2017-01-01
End-stage renal disease (ESRD) is characterized by the progressive deterioration of renal function that may compromise different tissues and organs. The major treatment indicated for patients with ESRD is kidney transplantation. However, the shortage of available organs, as well as the high rate of organ rejection, supports the need for new therapies. Thus, the implementation of tissue bioengineering to organ regeneration has emerged as an alternative to traditional organ transplantation. Decellularization of organs with chemical, physical, and/or biological agents generates natural scaffolds, which can serve as basis for tissue reconstruction. The recellularization of these scaffolds with different cell sources, such as stem cells or adult differentiated cells, can provide an organ with functionality and no immune response after in vivo transplantation on the host. Several studies have focused on improving these techniques, but until now, there is no optimal decellularization method for the kidney available yet. Herein, an overview of the current literature for kidney decellularization and whole-organ recellularization is presented, addressing the pros and cons of the actual techniques already developed, the methods adopted to evaluate the efficacy of the procedures, and the challenges to be overcome in order to achieve an optimal protocol. PMID:28620603
Computational Support for Technology- Investment Decisions
NASA Technical Reports Server (NTRS)
Adumitroaie, Virgil; Hua, Hook; Lincoln, William; Block, Gary; Mrozinski, Joseph; Shelton, Kacie; Weisbin, Charles; Elfes, Alberto; Smith, Jeffrey
2007-01-01
Strategic Assessment of Risk and Technology (START) is a user-friendly computer program that assists human managers in making decisions regarding research-and-development investment portfolios in the presence of uncertainties and of non-technological constraints that include budgetary and time limits, restrictions related to infrastructure, and programmatic and institutional priorities. START facilitates quantitative analysis of technologies, capabilities, missions, scenarios and programs, and thereby enables the selection and scheduling of value-optimal development efforts. START incorporates features that, variously, perform or support a unique combination of functions, most of which are not systematically performed or supported by prior decision- support software. These functions include the following: Optimal portfolio selection using an expected-utility-based assessment of capabilities and technologies; Temporal investment recommendations; Distinctions between enhancing and enabling capabilities; Analysis of partial funding for enhancing capabilities; and Sensitivity and uncertainty analysis. START can run on almost any computing hardware, within Linux and related operating systems that include Mac OS X versions 10.3 and later, and can run in Windows under the Cygwin environment. START can be distributed in binary code form. START calls, as external libraries, several open-source software packages. Output is in Excel (.xls) file format.
Comparison of weighting techniques for acoustic full waveform inversion
NASA Astrophysics Data System (ADS)
Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo
2017-12-01
To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points: applying the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not derived directly from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition. This phenomenon occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is derived directly from the objective function, while retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.
Comparative analysis of methods and sources of financing of the transport organizations activity
NASA Astrophysics Data System (ADS)
Gorshkov, Roman
2017-10-01
The article analyses methods of financing transport organizations under conditions of limited investment resources. A comparative analysis of these methods is carried out, and a classification of investments, methods and sources of financial support for projects implemented to date is presented. In order to select the optimal sources of financing for the projects, various methods of financial management and financial support for the activities of the transport organization were analyzed and considered from the perspective of their advantages and limitations. The result of the study is a set of recommendations on the selection of optimal sources and methods of financing for transport organizations.
Gis-Based Route Finding Using ANT Colony Optimization and Urban Traffic Data from Different Sources
NASA Astrophysics Data System (ADS)
Davoodi, M.; Mesgari, M. S.
2015-12-01
Nowadays traffic data are obtained from multiple sources, including GPS, Video Vehicle Detectors (VVD), Automatic Number Plate Recognition (ANPR), Floating Car Data (FCD), VANETs, etc. All such data can be used for route finding. This paper proposes a model for finding the optimum route based on the integration of traffic data from different sources. Ant Colony Optimization is applied in this paper because the concept of the method (movement of ants in a network) is similar to an urban road network and the movement of cars. The results indicate that this model is capable of incorporating data from different sources, which may even be inconsistent.
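A compact sketch of ant colony optimization for route finding on a toy weighted graph follows. The graph, the edge costs (standing in for costs fused from several traffic data sources), and the pheromone parameters are illustrative assumptions, not the paper's GIS model.

```python
import random

# Toy road network: edge weights stand in for travel costs fused from several
# traffic data sources (the values are invented for illustration).
graph = {
    'A': {'B': 2.0, 'C': 5.0},
    'B': {'C': 2.0, 'D': 6.0},
    'C': {'D': 2.0},
    'D': {},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def build_route(start, goal, alpha=1.0, beta=2.0):
    """One ant walks from start to goal, picking edges by pheromone and 1/cost."""
    route, node = [start], start
    while node != goal:
        options = [(v, c) for v, c in graph[node].items() if v not in route]
        if not options:
            return None, float('inf')
        weights = [pheromone[(node, v)] ** alpha * (1.0 / c) ** beta for v, c in options]
        node = random.choices([v for v, _ in options], weights=weights)[0]
        route.append(node)
    return route, sum(graph[u][v] for u, v in zip(route, route[1:]))

best_route, best_cost = None, float('inf')
for _ in range(50):                                   # colony iterations
    ants = [build_route('A', 'D') for _ in range(10)]
    for edge in pheromone:                            # evaporation
        pheromone[edge] *= 0.9
    for route, cost in ants:
        if route is None:
            continue
        for edge in zip(route, route[1:]):            # deposit, proportional to quality
            pheromone[edge] += 1.0 / cost
        if cost < best_cost:
            best_route, best_cost = route, cost

print("best route:", best_route, "cost:", best_cost)
```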
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.
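A minimal harmony search sketch on a generic continuous test function is shown below. The fixed HMCR/PAR/bandwidth values are ordinary defaults, whereas the almost-parameter-free variant described above adapts such parameters internally, and a real application would replace the test function with the misfit between simulated and observed concentrations.

```python
import random

def harmony_search(f, bounds, hms=20, iters=2000, hmcr=0.9, par=0.3, bw=0.05):
    """Basic harmony search; the paper's almost-parameter-free variant adapts
    hmcr/par/bw internally, which is omitted in this sketch."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                 # memory consideration
                value = random.choice(memory)[j]
                if random.random() < par:              # pitch adjustment
                    value += random.uniform(-bw, bw) * (hi - lo)
            else:                                      # random re-initialization
                value = random.uniform(lo, hi)
            new.append(min(hi, max(lo, value)))
        worst = max(range(hms), key=lambda i: scores[i])
        new_score = f(new)
        if new_score < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Stand-in objective: in a real identification problem this would be the misfit
# between simulated and observed concentrations for a candidate source.
sphere = lambda x: sum(v * v for v in x)
print(harmony_search(sphere, [(-5.0, 5.0)] * 3))
```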
Personal sound zone reproduction with room reflections
NASA Astrophysics Data System (ADS)
Olik, Marek
Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a single performance criterion, the entropy rate index based on Shannon's information theory, which takes both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used to optimize the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
Conceptual design optimization study
NASA Technical Reports Server (NTRS)
Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.
1990-01-01
The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.
Purpose-driven biomaterials research in liver-tissue engineering.
Ananthanarayanan, Abhishek; Narmada, Balakrishnan Chakrapani; Mo, Xuejun; McMillian, Michael; Yu, Hanry
2011-03-01
Bottom-up engineering of microscale tissue ("microtissue") constructs to recapitulate partially the complex structure-function relationships of liver parenchyma has been realized through the development of sophisticated biomaterial scaffolds, liver-cell sources, and in vitro culture techniques. With regard to in vivo applications, the long-lived stem/progenitor cell constructs can improve cell engraftment, whereas the short-lived, but highly functional hepatocyte constructs stimulate host liver regeneration. With regard to in vitro applications, microtissue constructs are being adapted or custom-engineered into cell-based assays for testing acute, chronic and idiosyncratic toxicities of drugs or pathogens. Systems-level methods and computational models that represent quantitative relationships between biomaterial scaffolds, cells and microtissue constructs will further enable their rational design for optimal integration into specific biomedical applications. Copyright © 2010 Elsevier Ltd. All rights reserved.
The CARIBU EBIS control and synchronization system
NASA Astrophysics Data System (ADS)
Dickerson, Clayton; Peters, Christopher
2015-01-01
The Californium Rare Isotope Breeder Upgrade (CARIBU) Electron Beam Ion Source (EBIS) charge breeder has been built and tested. The bases of the CARIBU EBIS electrical system are four voltage platforms on which both DC and pulsed high voltage outputs are controlled. The high voltage output pulses are created with either a combination of a function generator and a high voltage amplifier, or two high voltage DC power supplies and a high voltage solid state switch. Proper synchronization of the pulsed voltages, fundamental to optimizing the charge breeding performance, is achieved with triggering from a digital delay pulse generator. The control system is based on National Instruments realtime controllers and LabVIEW software implementing Functional Global Variables (FGV) to store and access instrument parameters. Fiber optic converters enable network communication and triggering across the platforms.
SpcAudace: Spectroscopic processing and analysis package of Audela software
NASA Astrophysics Data System (ADS)
Mauclaire, Benjamin
2017-11-01
SpcAudace processes long-slit spectra with automated pipelines and performs astrophysical analysis of the resulting data. These powerful pipelines carry out all the required steps in one pass: standard preprocessing, masking of bad pixels, geometric corrections, registration, optimized spectrum extraction, wavelength calibration, and instrumental response computation and correction. Both high- and low-resolution long-slit spectra are managed for stellar and non-stellar targets. Many types of publication-quality figures can easily be produced: pdf and png plots or annotated time-series plots. Astrophysical quantities can be derived from individual spectra or from large sets of spectra with advanced functions: from line-profile characteristics to equivalent width and periodograms. More than 300 documented functions are available and can be used in TCL scripts for automation. SpcAudace is based on the Audela open source software.
Robert, Amélie; Boyer, Lucie; Pineault, Nicolas
2011-03-01
The development of culture processes for hematopoietic progenitors could lead to the development of a complementary source of platelets for therapeutic purposes. However, functional characterization of culture-derived platelets remains limited, which raises some uncertainties about the quality of platelets produced in vitro. The aim of this study was to define the proportion of functional platelets produced in cord blood CD34+ cell cultures. Toward this, the morphological and functional properties of culture-derived platelet-like particles (PLPs) were critically compared to that of blood platelets. Flow cytometry combined with transmission electron microscopy analyses revealed that PLPs formed a more heterogeneous population of platelets at a different stage of maturation than blood platelets. The majority of PLPs harbored the fibrinogen receptor αIIbβ3, but a significant proportion failed to maintain glycoprotein (GP)Ibα surface expression, a component of the vWF receptor essential for platelet functions. Importantly, GPIbα extracellular expression correlated closely with platelet function, as the GPIIb+ GPIbα+ PLP subfraction responded normally to agonist stimulation as evidenced by α-granule release, adhesion, spreading, and aggregation. In contrast, the GPIIb+ GPIbα⁻ subfraction was unresponsive in most functional assays and appeared to be metabolically inactive. The present study confirms that functional platelets can be generated in cord blood CD34+ cell cultures, though these are highly susceptible to ectodomain shedding of receptors associated with loss of function. Optimization of culture conditions to prevent these deleterious effects and to homogenize PLPs is necessary to improve the quality and yields of culture-derived platelets before they can be recognized as a suitable complementary source for therapeutic purposes.
Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo
2007-01-01
A brain–computer interface (BCI) can be defined as any system that can track a person's intent as embedded in his or her brain activity and, from it alone, translate that intention into commands for a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable, and MEG could provide more specific information that could later be exploited through EEG signals as well. The first two BCI steps require setting up an appropriate experimental protocol while recording the brain signal and then extracting interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to give an overview of a new procedure we recently developed, named functional source separation (FSS). Since it derives from blind source separation algorithms, it exploits the most valuable information provided by the electrophysiological techniques, i.e. the waveform signal properties, while remaining blind to the biophysical nature of the signal sources. FSS returns the single-trial source activity, estimates the time course of a neuronal pool across different experimental states on the basis of a specific functional requirement in a specific time period, and uses simulated annealing as the optimization procedure, allowing non-differentiable functional constraints to be exploited. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features – spatial and time-frequency properties – are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful for developing BCI feedback control systems. This review provides a description of the FSS technique, a promising tool for the BCI community for online electrophysiological feature extraction, and offers interesting information for developing BCI applications to sustain hand control in stroke patients. PMID:17331989
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés
2015-09-28
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
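As a sketch of this parameterization, the field below is a linear combination of linearly chirped Gaussian components whose coefficients form the genome of a simple genetic algorithm. The fitness is a placeholder; in the paper it would be the dissociation yield returned by a quantum-dynamics simulation, and the selection/mutation scheme here is only illustrative.

```python
import numpy as np

def chirped_pulse(t, amp, t0, sigma, omega, beta):
    """Gaussian envelope with a linear chirp; instantaneous frequency omega + 2*beta*(t - t0)."""
    return amp * np.exp(-(t - t0) ** 2 / (2 * sigma ** 2 + 1e-6)) * np.cos(omega * (t - t0) + beta * (t - t0) ** 2)

def field(t, genome, n_pulses=3):
    """Total field: linear combination of n_pulses chirped components, 5 parameters each."""
    return sum(chirped_pulse(t, *row) for row in genome.reshape(n_pulses, 5))

def fitness(genome, t):
    """Placeholder multi-target fitness; here: penalize deviation from unit fluence."""
    E = field(t, genome)
    return -np.abs(np.trapz(E ** 2, t) - 1.0)

def genetic_search(t, n_pulses=3, pop=60, gens=200, seed=1):
    rng = np.random.default_rng(seed)
    population = rng.uniform(-1, 1, size=(pop, 5 * n_pulses))
    for _ in range(gens):
        scores = np.array([fitness(g, t) for g in population])
        parents = population[np.argsort(scores)[-pop // 2:]]              # truncation selection
        children = parents + 0.05 * rng.standard_normal(parents.shape)    # Gaussian mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(g, t) for g in population])
    return population[np.argmax(scores)]
```

A call such as `genetic_search(np.linspace(0, 10, 2000))` would return the best genome, i.e. the coefficients of the optimal chirped-pulse combination under this toy fitness.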
NASA Astrophysics Data System (ADS)
Hashemi-Dezaki, Hamed; Mohammadalizadeh-Shabestary, Masoud; Askarian-Abyaneh, Hossein; Rezaei-Jegarluei, Mohammad
2014-01-01
In electrical distribution systems, a great amount of power is wasted across the lines, and the power factors, voltage profiles, and total harmonic distortions (THDs) of most loads are not as desired. These parameters therefore play a highly important role in wasting money and energy, and both consumers and sources suffer from high levels of distortion and even instability. Active power filters (APFs) are an innovative means of addressing this problem and have recently made use of the instantaneous reactive power theory. In this paper, a novel method is proposed to optimize the allocation of APFs. The method is based on the instantaneous reactive power theory in vectorial representation, which makes it possible to assess different compensation strategies. Proper placement of APFs in the system also plays a crucial role in both reducing loss costs and improving power quality. To optimize the APF placement, a new objective function is defined on the basis of five terms: total losses, power factor, voltage profile, THD, and cost. A genetic algorithm is used to solve the optimization problem. The results of applying this method to a distribution network illustrate its advantages.
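A minimal sketch of such a composite objective is given below. The `grid.evaluate` call stands in for the load-flow and harmonic analysis of the network with APFs at the candidate buses, and the weights, thresholds, and penalty forms are assumptions rather than the paper's exact formulation; a genetic algorithm would then minimize this cost over candidate placement vectors.

```python
import numpy as np

def apf_objective(placement, grid, weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Composite cost over five terms: total losses, power-factor penalty,
    voltage-profile deviation, THD, and APF installation cost.
    `grid.evaluate(placement)` is a placeholder for a network solver that
    returns (losses, per-bus power factor, per-bus voltage p.u., per-bus THD, cost)."""
    losses, pf, v_profile, thd, cost = grid.evaluate(placement)
    w1, w2, w3, w4, w5 = weights
    return (w1 * losses
            + w2 * np.maximum(0.95 - pf, 0).sum()    # penalize buses below 0.95 power factor
            + w3 * np.abs(v_profile - 1.0).sum()     # per-unit voltage deviation
            + w4 * thd.sum()
            + w5 * cost)
```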
Design and optimization analysis of dual material gate on DG-IMOS
NASA Astrophysics Data System (ADS)
Singh, Sarabdeep; Raman, Ashish; Kumar, Naveen
2017-12-01
The impact ionization MOSFET (IMOS) was developed to achieve a sub-threshold slope (SS) below the roughly 60 mV/decade limit of conventional MOSFETs at room temperature. In this work, the device performance of the p-type double gate impact ionization MOSFET (DG-IMOS) is first optimized by adjusting the device design parameters: the ratio of gate to intrinsic length, the gate dielectric thickness, and the gate work function. Secondly, the dual material gate (DMG) DG-IMOS is proposed and investigated, and it is further optimized to obtain the best possible performance. Simulation results reveal that the DMG DG-IMOS, compared with the DG-IMOS, shows better I_ON, I_ON/I_OFF ratio, and RF parameters. By properly tuning the lengths of the two gate materials at a ratio of 1.5, the DMG DG-IMOS achieves optimized performance including an I_ON/I_OFF ratio of 2.87 × 10^9, with I_ON of 11.87 × 10^-4 A/μm and a transconductance of 1.06 × 10^-3 S/μm. The analysis also shows that the drain-side material should be longer than the source-side material to attain higher transconductance in the DMG DG-IMOS.
van Riel, N A; Giuseppin, M L; Verrips, C T
2000-01-01
The theory of dynamic optimal metabolic control (DOMC), as developed by Giuseppin and Van Riel (Metab. Eng., 2000), is applied to model the central nitrogen metabolism (CNM) in Saccharomyces cerevisiae. The CNM represents a typical system encountered in advanced metabolic engineering. The CNM is the source of the cellular amino acids and proteins, including flavors and potentially valuable biomolecules; therefore, it is also of industrial interest. In the DOMC approach the cell is regarded as an optimally controlled system. Given the metabolic genotype, the cell faces a control problem to maintain an optimal flux distribution in a changing environment. The regulation is based on strategies that balance feedback control of homeostasis with feedforward regulation for adaptation. The DOMC approach is an integrative, holistic approach, not based on mechanistic descriptions and (therefore) not biased by the variation present in biochemical and molecular biological data. It is an effective tool to structure the rapidly increasing amount of data on the function of genes and pathways. The DOMC model is used successfully to predict the responses of nitrogen-limited continuous cultures of a wild-type strain and a glutamine synthetase-negative mutant to pulses of ammonia and glutamine. The simulation results are validated with experimental data.
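A toy illustration of one DOMC-style control step (an assumption-laden sketch, not the published model) is given below: fluxes are chosen to drive metabolite levels toward homeostatic setpoints (feedback) while staying close to an adaptation-derived reference flux distribution (feedforward), with the stoichiometric matrix S defining dx/dt = S v.

```python
import numpy as np
from scipy.optimize import minimize

def domc_step(S, x, x_set, v_ref, dt, alpha=1.0, beta=0.1):
    """One illustrative control step: choose non-negative fluxes v that move
    metabolite levels x toward the setpoints x_set (feedback term) while
    remaining close to the reference flux distribution v_ref (feedforward term)."""
    def cost(v):
        x_next = x + dt * (S @ v)                       # forward-Euler metabolite update
        return alpha * np.sum((x_next - x_set) ** 2) + beta * np.sum((v - v_ref) ** 2)
    res = minimize(cost, v_ref, bounds=[(0.0, None)] * len(v_ref))  # irreversible fluxes
    return res.x
```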
Carvalho, Henrique F; Barbosa, Arménio J M; Roque, Ana C A; Iranzo, Olga; Branco, Ricardo J F
2017-01-01
Recent advances in de novo protein design have gained considerable insight from the intrinsic dynamics of proteins, through the integration of molecular dynamics simulation protocols into the state-of-the-art de novo protein design workflows used nowadays. With this protocol we illustrate how to set up and run a molecular dynamics simulation followed by a functional protein dynamics analysis. New users are introduced to several useful open-source computational tools, including the GROMACS molecular dynamics simulation software package and ProDy for protein structural dynamics analysis.
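A minimal ProDy sketch of the post-simulation analysis step might look as follows; the file names are placeholders, and the essential dynamics analysis (EDA) shown here is only one of the analyses such a protocol typically covers.

```python
from prody import parsePDB, parseDCD, EDA

# Load the reference structure and the MD trajectory (placeholder file names).
structure = parsePDB('protein.pdb')
calphas = structure.select('calpha')

ensemble = parseDCD('trajectory.dcd')   # frames written by the MD engine
ensemble.setCoords(structure)           # reference coordinates for superposition
ensemble.setAtoms(calphas)              # restrict the analysis to C-alpha atoms
ensemble.superpose()

# Essential dynamics: principal components of the positional covariance matrix.
eda = EDA('MD essential dynamics')
eda.buildCovariance(ensemble)
eda.calcModes(n_modes=10)
print(eda.getVariances()[:3])           # variance captured by the leading modes
```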
NASA Astrophysics Data System (ADS)
Bergstrom, Lars; Reppy, John
Compilers for polymorphic languages are required to treat values in programs in an abstract and generic way at the source level. The challenges of optimizing the boxing of raw values, flattening argument tuples, and raising the arity of functions that handle complex structures to reduce memory usage are old ones, but they take on newfound importance on modern processors that provide twice as many registers. We present a novel strategy that uses both control-flow and type information to provide an arity-raising implementation addressing these problems. The strategy is conservative: no matter the execution path, the transformed program will not perform extra operations.
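The flattening part of the transformation can be illustrated at the source level with a toy example; the actual optimization operates on the typed intermediate representation of a polymorphic language, guided by control-flow and type information, not on Python source.

```python
# Before: the caller allocates a tuple ("boxed" argument) that the callee immediately unpacks.
def norm_boxed(point):
    x, y = point
    return (x * x + y * y) ** 0.5

# After arity raising: the tuple is flattened into scalar parameters, so no
# intermediate tuple needs to be allocated on the call path.
def norm_flat(x, y):
    return (x * x + y * y) ** 0.5

# The transformation is semantics-preserving: both versions compute the same value.
assert norm_boxed((3.0, 4.0)) == norm_flat(3.0, 4.0) == 5.0
```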
Features of Modularly Assembled Compounds That Impart Bioactivity Against an RNA Target
Rzuczek, Suzanne G.; Gao, Yu; Tang, Zhen-Zhi; Thornton, Charles A.; Kodadek, Thomas; Disney, Matthew D.
2013-01-01
Transcriptomes provide a myriad of potential RNAs that could be the targets of therapeutics or chemical genetic probes of function. Cell permeable small molecules, however, generally do not exploit these targets, owing to the difficulty in the design of high affinity, specific small molecules targeting RNA. As part of a general program to study RNA function using small molecules, we designed bioactive, modularly assembled small molecules that target the non-coding expanded RNA repeat that causes myotonic dystrophy type 1 (DM1), r(CUG)exp. Herein, we present a rigorous study to elucidate features in modularly assembled compounds that afford bioactivity. Different modular assembly scaffolds were investigated including polyamines, α-peptides, β-peptides, and peptide tertiary amides (PTAs). Based on activity as assessed by improvement of DM1-associated defects, stability against proteases, cellular permeability, and toxicity, we discovered that constrained backbones, namely PTAs, are optimal. Notably, we determined that r(CUG)exp is the target of the optimal PTA in cellular models and that the optimal PTA improves DM1-associated defects in a mouse model. Biophysical analyses were employed to investigate potential sources of bioactivity. These investigations show that modularly assembled compounds have increased residence times on their targets and faster on rates than the RNA-binding modules from which they were derived and faster on rates than the protein that binds r(CUG)exp, the inactivation of which gives rise to DM1-associated defects. These studies provide information about features of small molecules that are programmable for targeting RNA, allowing for the facile optimization of therapeutics or chemical probes against other cellular RNA targets. PMID:24032410
Features of modularly assembled compounds that impart bioactivity against an RNA target.
Rzuczek, Suzanne G; Gao, Yu; Tang, Zhen-Zhi; Thornton, Charles A; Kodadek, Thomas; Disney, Matthew D
2013-10-18
Transcriptomes provide a myriad of potential RNAs that could be the targets of therapeutics or chemical genetic probes of function. Cell-permeable small molecules, however, generally do not exploit these targets, owing to the difficulty in the design of high affinity, specific small molecules targeting RNA. As part of a general program to study RNA function using small molecules, we designed bioactive, modularly assembled small molecules that target the noncoding expanded RNA repeat that causes myotonic dystrophy type 1 (DM1), r(CUG)(exp). Herein, we present a rigorous study to elucidate features in modularly assembled compounds that afford bioactivity. Different modular assembly scaffolds were investigated, including polyamines, α-peptides, β-peptides, and peptide tertiary amides (PTAs). On the basis of activity as assessed by improvement of DM1-associated defects, stability against proteases, cellular permeability, and toxicity, we discovered that constrained backbones, namely, PTAs, are optimal. Notably, we determined that r(CUG)(exp) is the target of the optimal PTA in cellular models and that the optimal PTA improves DM1-associated defects in a mouse model. Biophysical analyses were employed to investigate potential sources of bioactivity. These investigations show that modularly assembled compounds have increased residence times on their targets and faster on rates than the RNA-binding modules from which they were derived. Moreover, they have faster on rates than the protein that binds r(CUG)(exp), the inactivation of which gives rise to DM1-associated defects. These studies provide information about features of small molecules that are programmable for targeting RNA, allowing for the facile optimization of therapeutics or chemical probes against other cellular RNA targets.
Tannase production by Paecilomyces variotii.
Battestin, Vania; Macedo, Gabriela Alves
2007-07-01
Surface response methodology was applied to the optimization of laboratory-scale tannase production using a strain of Paecilomyces variotii. A preliminary study evaluated the effects of several variables, including temperature (°C), residue content (%, coffee husk:wheat bran), tannic acid (%) and salt solutions (%), on tannase production after 3, 5 and 7 days of fermentation. Among these variables, temperature, residue and tannic acid had significant effects on tannase production and were optimized using surface response methodology. The best conditions for tannase production were: temperature of 29-34 °C; tannic acid at 8.5-14%; residue of coffee husk:wheat bran 50:50; and an incubation time of 5 days. The effect of supplementing external nitrogen and carbon sources at 0.4%, 0.8% and 1.2% concentrations on tannase production was then studied in the optimized medium. Three nitrogen sources (yeast extract, ammonium nitrate and sodium nitrate) and a carbon source (starch) were tested. Only ammonium nitrate showed a significant effect on tannase production. After the optimization process, tannase activity increased 8.6-fold.
Zajicek, J.L.; Brown, L.; Brown, S.B.; Honeyfield, D.C.; Fitzsimons, J.D.; Tillitt, D.E.
2009-01-01
The source of thiaminase in the Great Lakes food web remains unknown. Biochemical characterization of the thiaminase I activities observed in forage fish was undertaken to provide insights into potential thiaminase sources and to optimize catalytic assay conditions. We measured the thiaminase I activities of crude extracts from five forage fish species and one strain of Paenibacillus thiaminolyticus over a range of pH values. The clupeids, alewife Alosa pseudoharengus and gizzard shad Dorosoma cepedianum, had very similar thiaminase I pH dependencies, with optimal activity ranges (≥90% of maximum activity) between pH 4.6 and 5.5. Rainbow smelt Osmerus mordax and spottail shiner Notropis hudsonius had optimal activity ranges between pH 5.5 and 6.6. The thiaminase I activity pH dependence profile of P. thiaminolyticus had an optimal activity range between pH 5.4 and 6.3, which was similar to the optimal range for rainbow smelt and spottail shiners. Incubation of P. thiaminolyticus extracts with extracts from bloater Coregonus hoyi (normally, bloaters have little or no detectable thiaminase I activity) did not significantly alter the pH dependence profile of P. thiaminolyticus-derived thiaminase I, such that it continued to resemble that of the rainbow smelt and spottail shiner, with an apparent optimal activity range between pH 5.7 and 6.6. These data are consistent with the hypothesis of a bacterial source for thiaminase I in the nonclupeid species of forage fish; however, the data also suggest different sources of thiaminase I enzymes in the clupeid species.
Atomically-thin molecular layers for electrode modification of organic transistors
NASA Astrophysics Data System (ADS)
Gim, Yuseong; Kang, Boseok; Kim, Bongsoo; Kim, Sun-Guk; Lee, Joong-Hee; Cho, Kilwon; Ku, Bon-Cheol; Cho, Jeong Ho
2015-08-01
Atomically-thin molecular layers of aryl-functionalized graphene oxides (GOs) were used to modify the surface characteristics of source-drain electrodes to improve the performances of organic field-effect transistor (OFET) devices. The GOs were functionalized with various aryl diazonium salts, including 4-nitroaniline, 4-fluoroaniline, or 4-methoxyaniline, to produce several types of GOs with different surface functional groups (NO2-Ph-GO, F-Ph-GO, or CH3O-Ph-GO, respectively). The deposition of aryl-functionalized GOs or their reduced derivatives onto metal electrode surfaces dramatically enhanced the electrical performances of both p-type and n-type OFETs relative to the performances of OFETs prepared without the GO modification layer. Among the functionalized rGOs, CH3O-Ph-rGO yielded the highest hole mobility of 0.55 cm^2 V^-1 s^-1 and electron mobility of 0.17 cm^2 V^-1 s^-1 in p-type and n-type FETs, respectively. Two governing factors were systematically investigated to reveal the origin of the performance improvements: (1) the work function of the modified electrodes and (2) the crystalline microstructures of the benchmark semiconductors grown on the modified electrode surface. Our simple, inexpensive, and scalable electrode modification technique provides a significant step toward optimizing the device performance by engineering the semiconductor-electrode interfaces in OFETs.
3D synthetic aperture for controlled-source electromagnetics
NASA Astrophysics Data System (ADS)
Knaak, Allison
Locating hydrocarbon reservoirs has become more challenging with smaller, deeper or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM) is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low spatial resolution, and the depth of the reservoir. To reduce the impact of complicated settings and improve the detecting capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method that adapts to the information in the CSEM data to find the optimal weights for synthetic aperture arrays. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture responses amplify the anomaly from the reservoir, lower the noise floor, and reduce noise streaks in noisy CSEM responses from sources offset kilometers from the receivers. Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets correctly, which allows use of the method in locations where the subsurface models are built only from estimates. In addition to the technical work in this thesis, I explore the interface between science, government, and society by examining the controversy over hydraulic fracturing and by suggesting a process to aid the debate and possibly other future controversies.
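The core synthetic-aperture operation is a weighted sum of the fields recorded from the individual source positions. The sketch below shows that sum together with a simple linear-phase steering weight in the inline direction; it is a simplified illustration, not the thesis's optimized weighting.

```python
import numpy as np

def synthetic_aperture(responses, weights):
    """Combine the complex fields from individual CSEM source positions into a
    synthetic-aperture response at each receiver.
    responses: (n_sources, n_receivers) complex array; weights: (n_sources,)."""
    return weights @ responses

def steering_weights(src_x, kx, amplitudes=None):
    """Steered weights: a linear phase across the inline source positions src_x
    tilts the radiated field; kx is a chosen steering wavenumber (illustrative)."""
    amplitudes = np.ones_like(src_x, dtype=float) if amplitudes is None else amplitudes
    return amplitudes * np.exp(1j * kx * src_x)
```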
NASA Astrophysics Data System (ADS)
Morgenthaler, George; Khatib, Nader; Kim, Byoungsoo
with information to improve their crop's vigor has been a major topic of interest. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, the efficiency of farming must increase to meet future food requirements and to make farming a sustainable occupation for the farmer. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The goal is to increase farm revenue by increasing crop yield and decreasing applications of costly chemical and water treatments. In addition, this methodology will decrease the environmental costs of farming, i.e., reduce air, soil, and water pollution. Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now available. Commercial satellite systems can image the Earth (multi-spectrally) with a resolution of approximately 2.5 m. Variable precision dispensing systems using GPS are available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been formulated. Personal computers and internet access are in place in most farm homes and can provide a mechanism to periodically (e.g. bi-weekly) disseminate advice on what quantities of water and chemicals are needed in individual regions of the field. What is missing is a model that fuses the disparate sources of information on the current state of the crop and soil and on the remaining available resources with the decisions farmers are required to make. This must be a product that is easy for the farmer to understand and to implement. A "Constrained Optimization Feed-back Control Model" to fill this void will be presented. The objective function of the model will be used to maximize the farmer's profit by increasing yields while decreasing environmental costs and decreasing application of costly treatments. This model will incorporate information from remote sensing, in-situ weather sources, soil measurements, crop models, and tacit farmer knowledge of the relative productivity of the selected control regions of the farm to provide incremental advice throughout the growing season on water and chemical treatments. Genetic and meta-heuristic algorithms will be used to solve the constrained optimization problem that possesses complex constraints and a non-linear objective function.
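A hedged sketch of such a constrained optimization step is shown below. The crop model, prices, and environmental penalty are placeholder callables (the abstract does not specify them), and a meta-heuristic (here SciPy's differential evolution) searches per-region water and fertilizer applications that maximize profit under soft resource-budget constraints.

```python
import numpy as np
from scipy.optimize import differential_evolution

def profit(u, yield_model, price, input_cost, env_cost):
    """u = flattened per-region water and fertilizer applications.
    yield_model, price, input_cost, env_cost are placeholders for the crop model,
    market price, treatment costs, and environmental penalty named in the abstract."""
    y = yield_model(u)                      # expected yield per control region
    return price * y.sum() - input_cost @ u - env_cost(u)

def optimize_treatments(n_regions, yield_model, price, input_cost, env_cost,
                        water_budget, fert_budget):
    bounds = [(0.0, 50.0)] * (2 * n_regions)            # per-region application limits
    def neg_profit(u):
        water, fert = u[:n_regions], u[n_regions:]
        penalty = 1e3 * (max(water.sum() - water_budget, 0.0)   # soft resource constraints
                         + max(fert.sum() - fert_budget, 0.0))
        return -profit(u, yield_model, price, input_cost, env_cost) + penalty
    res = differential_evolution(neg_profit, bounds, seed=0, maxiter=200)
    return res.x[:n_regions], res.x[n_regions:]          # water and fertilizer advice
```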
High performance GPU processing for inversion using uniform grid searches
NASA Astrophysics Data System (ADS)
Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios
2017-04-01
Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, either Monte Carlo sampling or exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common CPU-based computers. An alternative is a GPU-based computing platform, which nowadays is affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on both platforms, and execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 grid-points require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high performance platforms, such as a GPU, in cases where nearly real time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
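The essence of one TOPINV-style scan can be sketched with NumPy (the same embarrassingly parallel loop is what maps naturally onto a CUDA kernel); the forward model, observations, and standard errors below are placeholders for the magma-source or fault-model equations.

```python
import numpy as np

def topinv_scan(forward, obs, sigma, grids, k):
    """One illustrative scan: keep the grid points whose forward-model predictions
    satisfy all observation inequalities |f_i(m) - d_i| <= k * sigma_i.
    grids: list of 1-D arrays, one per unknown; forward: maps an
    (n_points, n_unknowns) array of candidate models to (n_points, n_obs) predictions."""
    mesh = np.stack(np.meshgrid(*grids, indexing="ij"), axis=-1).reshape(-1, len(grids))
    pred = forward(mesh)                                    # forward computations only
    ok = np.all(np.abs(pred - obs) <= k * sigma, axis=1)    # observation inequalities
    cluster = mesh[ok]
    mean = cluster.mean(axis=0)                             # first moment: optimal solution
    cov = np.cov(cluster, rowvar=False)                     # second moment: variance-covariance
    return mean, cov, cluster
```

Repeating the scan for decreasing values of k shrinks the accepted cluster toward the stochastic optimal solution, as described above.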
Reference tissue quantification of DCE-MRI data without a contrast agent calibration
NASA Astrophysics Data System (ADS)
Walker-Samuel, Simon; Leach, Martin O.; Collins, David J.
2007-02-01
The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration) in conjunction with a reference tissue-derived vascular input function (VIF) avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences between reference tissue and region of interest proton density and native T1 values). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference, compared with relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.
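The two dynamic time-series parameters compared here are straightforward to compute from the dynamic signal; a minimal sketch follows, with the number of pre-contrast frames as the only assumed input.

```python
import numpy as np

def baseline(signal, n_pre):
    """Mean pre-contrast signal S0 over the first n_pre dynamic time points
    (signal has shape (n_timepoints, ...), e.g. time x voxels)."""
    return signal[:n_pre].mean(axis=0)

def signal_difference(signal, n_pre):
    """S(t) - S0: approximately proportional to concentration when the spoiled
    gradient-echo sequence is optimized for a linear signal-concentration response."""
    return signal - baseline(signal, n_pre)

def relative_enhancement(signal, n_pre):
    """(S(t) - S0) / S0: removes proton-density and coil-sensitivity scaling but
    retains the reference-tissue vs. ROI native T1 differences discussed above."""
    s0 = baseline(signal, n_pre)
    return (signal - s0) / s0
```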
Sotiropoulos, A; Malamis, D; Michailidis, P; Krokida, M; Loizidou, M
2016-01-01
Domestic food waste drying aims at a significant reduction of household food waste mass through the hygienic removal of its moisture content at source. In this manuscript, a new approach for the development and optimization of an innovative household waste dryer for the effective dehydration of food waste at source is presented. Food waste samples were dehydrated with the heated air-drying technique under different air-drying conditions, namely air temperature and air velocity, in order to investigate their drying kinetics. Different thin-layer drying models were applied, in which the drying constant is a function of the process variables. The Midilli model showed the best fit to the experimental data for all tested samples, and food waste drying was found to be strongly affected by temperature and, to a lesser extent, by air velocity. Because of the high moisture content of food waste, an appropriate configuration of the drying process variables can reduce its total mass by 87% w/w while achieving a sustainable residence time and energy consumption level. Thus, a domestic food waste dryer could prove to be economically and environmentally viable in the future.
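For reference, the Midilli thin-layer model is commonly written as MR(t) = a·exp(−k·t^n) + b·t, where MR is the dimensionless moisture ratio. A hedged fitting sketch is shown below; the drying-curve data are hypothetical, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model: MR(t) = a*exp(-k*t**n) + b*t."""
    return a * np.exp(-k * t ** n) + b * t

# Hypothetical drying curve: time in hours, moisture ratio from sample weighings.
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
mr = np.array([1.00, 0.78, 0.62, 0.40, 0.27, 0.19, 0.10, 0.06])

params, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.4, 1.0, 0.0],
                      bounds=([0.0, 0.0, 0.1, -0.5], [2.0, 5.0, 3.0, 0.5]))
a, k, n, b = params
print(f"a={a:.3f}, k={k:.3f}, n={n:.3f}, b={b:.4f}")
```

Repeating the fit for each temperature/velocity condition and expressing k (and n) as functions of the process variables gives the kind of condition-dependent drying-constant models the study compares.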