OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by ''self-organized criticality,'' a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called ''avalanches,'' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
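The core move of extremal optimization is compact enough to sketch. The toy below applies basic EO to a random spin-glass instance: each spin's local fitness measures its agreement with its neighbours, and at every step the worst-fitness spin is flipped unconditionally while the best configuration seen is recorded. The instance, fitness definition, and move rule are illustrative choices, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of basic extremal optimization (EO) on a random spin-glass
# instance: at every step the spin with the worst local fitness is replaced
# (here, flipped) unconditionally. Illustrative toy, not the authors' code.

rng = np.random.default_rng(0)
n = 64
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T           # symmetric couplings, zero diagonal
s = rng.choice([-1.0, 1.0], size=n)      # random initial configuration

def local_fitness(s):
    return s * (J @ s)                   # lambda_i = s_i * sum_j J_ij s_j

best_s, best_E = s.copy(), -0.5 * s @ J @ s
for step in range(20000):
    lam = local_fitness(s)
    worst = np.argmin(lam)               # the extremal (most undesirable) component
    s[worst] *= -1                       # replace it with a new value: flip the spin
    E = -0.5 * s @ J @ s                 # energy of the new configuration
    if E < best_E:
        best_s, best_E = s.copy(), E     # remember the best state visited
print("best energy found:", best_E)
```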
Extreme Trust Region Policy Optimization for Active Object Recognition.
Liu, Huaping; Wu, Yupei; Sun, Fuchun
2018-06-01
In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is realized by an extreme learning machine and therefore leads to an efficient optimization algorithm. Experimental results on a publicly available data set show the advantages of the developed extreme trust region optimization method.
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than ''breeding'' better components. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
Combining local search with co-evolution in a remarkably simple way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.
2000-05-01
The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene-pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
Neighboring extremal optimal control design including model mismatch errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T.J.; Hull, D.G.
1994-11-01
The mismatch control technique that is used to simplify model equations of motion in order to determine analytic optimal control laws is extended using neighboring extremal theory. The first variation optimal control equations are linearized about the extremal path to account for perturbations in the initial state and the final constraint manifold. A numerical example demonstrates that the tuning procedure inherent in the mismatch control method increases the performance of the controls to the level of a numerically-determined piecewise-linear controller.
Optimizing Illumina next-generation sequencing library preparation for extremely AT-biased genomes.
Oyola, Samuel O; Otto, Thomas D; Gu, Yong; Maslen, Gareth; Manske, Magnus; Campino, Susana; Turner, Daniel J; Macinnis, Bronwyn; Kwiatkowski, Dominic P; Swerdlow, Harold P; Quail, Michael A
2012-01-03
Massively parallel sequencing technology is revolutionizing approaches to genomic and genetic research. Since its advent, the scale and efficiency of Next-Generation Sequencing (NGS) have rapidly improved. In spite of this success, sequencing genomes or genomic regions with extremely biased base composition is still a great challenge to the currently available NGS platforms. The genomes of some important pathogenic organisms like Plasmodium falciparum (high AT content) and Mycobacterium tuberculosis (high GC content) display extremes of base composition. The standard library preparation procedures that employ PCR amplification have been shown to cause uneven read coverage, particularly across AT- and GC-rich regions, leading to problems in genome assembly and variation analyses. Alternative library-preparation approaches that omit PCR amplification require large quantities of starting material and hence are not suitable for small amounts of DNA/RNA such as those from clinical isolates. We have developed and optimized library-preparation procedures suitable for low-quantity starting material and tolerant to extremely high AT content sequences. We have used our optimized conditions in parallel with standard methods to prepare Illumina sequencing libraries from a non-clinical and a clinical isolate (containing ~53% host contamination). By analyzing and comparing the quality of sequence data generated, we show that our optimized conditions, which involve a PCR additive (TMAC), produce amplified libraries with improved coverage of extremely AT-rich regions and reduced bias toward GC-neutral templates. We have developed a robust and optimized Next-Generation Sequencing library amplification method suitable for extremely AT-rich genomes. The new amplification conditions significantly reduce bias and retain the complexity of either extreme of base composition. This development will greatly benefit the sequencing of clinical samples, which often require amplification due to the low mass of DNA starting material.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
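To make the model structure concrete, here is a hedged sketch of a kernel ELM with a weighted composite kernel. The kernel weights, kernel parameters, and regularization constant C, fixed by hand below, are exactly the quantities QPSO would tune in the paper; the data, classes, and weight values are invented for illustration.

```python
import numpy as np

# Sketch of a kernel ELM (KELM) with a weighted composite kernel. The weights
# w, the kernel parameters and C are simply fixed here (QPSO would tune them).

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def poly_kernel(A, B, degree=2, c=1.0):
    return (A @ B.T + c) ** degree

def composite_kernel(A, B, w=(0.7, 0.3), sigma=1.0, degree=2):
    return w[0] * gaussian_kernel(A, B, sigma) + w[1] * poly_kernel(A, B, degree)

def kelm_train(X, T, C=10.0):
    K = composite_kernel(X, X)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)   # beta = (I/C + K)^-1 T

def kelm_predict(Xnew, X, beta):
    return composite_kernel(Xnew, X) @ beta

# Toy usage with one-hot targets for a two-class problem.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]
beta = kelm_train(X, T)
pred = kelm_predict(X, X, beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```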
Reconstructing metabolic flux vectors from extreme pathways: defining the alpha-spectrum.
Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø
2003-10-07
The move towards genome-scale analysis of cellular functions has necessitated the development of analytical (in silico) methods to understand such large and complex biochemical reaction networks. One such method is extreme pathway analysis, which uses stoichiometry and thermodynamic irreversibility to define mathematically unique, systemic metabolic pathways. These extreme pathways form the edges of a high-dimensional convex cone in the flux space that contains all the attainable steady state solutions, or flux distributions, for the metabolic network. By definition, any steady state flux distribution can be described as a nonnegative linear combination of the extreme pathways. To date, much effort has been focused on calculating, defining, and understanding these extreme pathways. However, little work has been performed to determine how these extreme pathways contribute to a given steady state flux distribution. This study represents an initial effort aimed at defining how physiological steady state solutions can be reconstructed from a network's extreme pathways. In general, there is not a unique set of nonnegative weightings on the extreme pathways that produce a given steady state flux distribution but rather a range of possible values. This range can be determined using linear optimization to maximize and minimize the weightings of a particular extreme pathway in the reconstruction, resulting in what we have termed the alpha-spectrum. The alpha-spectrum defines which extreme pathways can and cannot be included in the reconstruction of a given steady state flux distribution and to what extent they individually contribute to the reconstruction. It is shown that accounting for transcriptional regulatory constraints can considerably shrink the alpha-spectrum. The alpha-spectrum is computed and interpreted for two cases: first, optimal states of a skeleton representation of core metabolism that include transcriptional regulation, and second, human red blood cell metabolism under various physiological, non-optimal conditions.
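The alpha-spectrum computation reduces to a pair of linear programs per pathway, which is easy to sketch with scipy: for each extreme pathway i, minimize and maximize its weighting alpha_i subject to P alpha = v and alpha >= 0. The pathway matrix P and flux vector v below are invented toy values, not networks from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy alpha-spectrum: min and max weighting of each extreme pathway subject to
# P @ alpha = v, alpha >= 0. P and v are invented illustrations.

P = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # rows: fluxes, columns: extreme pathways
v = np.array([2.0, 1.0])                 # a steady-state flux distribution

n_paths = P.shape[1]
spectrum = []
for i in range(n_paths):
    c = np.zeros(n_paths); c[i] = 1.0
    lo = linprog(c,  A_eq=P, b_eq=v, bounds=[(0, None)] * n_paths)
    hi = linprog(-c, A_eq=P, b_eq=v, bounds=[(0, None)] * n_paths)
    spectrum.append((lo.fun, -hi.fun))   # (min, max) weighting of pathway i
print(spectrum)
```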
Optimized extreme learning machine for urban land cover classification using hyperspectral imagery
NASA Astrophysics Data System (ADS)
Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam
2017-12-01
This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems
Li, Xiguang; Zhao, Liang; Gong, Changqing; Liu, Xiaojing
2017-01-01
Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, the dandelion algorithm (DA), is proposed in this paper for the global optimization of complex functions. In DA, the dandelion population is divided into two subpopulations, and the subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. To validate DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm performs much better than the other algorithms. The proposed algorithm is also applied to optimize an extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, and these fusion classifiers achieve higher accuracy and better stability to some extent. PMID:29085425
NASA Astrophysics Data System (ADS)
Kuroki, R.; Yamashiki, Y. A.; Varlamov, S.; Miyazawa, Y.; Gupta, H. V.; Racault, M.; Troselj, J.
2017-12-01
We estimated the effects of extreme fluvial outflow events from river mouths on the salinity distribution in Japanese coastal zones. The targeted extreme event was a typhoon from 06/09/2015 to 12/09/2015, and we generated a set of hourly simulated river outflow data for all Japanese first-class rivers draining to the Pacific Ocean and the Sea of Japan during this period, using our model "Cell Distributed Runoff Model Version 3.1.1 (CDRMV3.1.1)". The model simulated fresh water discharges for the typhoon passage over Japan. We used these data with the coupled hydrological-oceanographic model JCOPE-T, developed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), to estimate the circulation and salinity distribution in Japanese coastal zones. With this model, the coastal oceanic circulation was reproduced adequately, which was verified by satellite remote sensing. In addition, we successfully optimized five parameters (soil roughness coefficient, river roughness coefficient, effective porosity, saturated hydraulic conductivity, and effective rainfall) using the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA), an optimization method for hydrological models. Increasing the accuracy of peak discharge prediction for extreme typhoon events at river mouths is essential for modeling continental-oceanic interaction.
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power law is the proper probability distribution for driving the evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems such as graph partitioning, graph coloring, and spin glasses. In this study, we find that the exponential distributions or hybrid ones (e.g., power laws with exponential cutoff) popularly used in network science can replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA) and, based on experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances, provide better performance than other statistical-physics-oriented methods such as simulated annealing, τ-EO and SOA. From the perspective of optimization, our results indicate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
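The distributions under comparison enter only through the rank-selection step, which is easy to sketch. Below, components are ranked from worst (rank 1) to best and the rank to be updated is drawn either from a power law P(k) proportional to k^(-tau) or from an exponential P(k) proportional to exp(-mu*k); this is a generic illustration of the selection step, not the SOA implementation, and the parameter values are arbitrary.

```python
import numpy as np

# Rank selection in tau-EO-style methods: draw the rank to be replaced from a
# power law or an exponential over ranks (the two families compared above).

rng = np.random.default_rng(1)

def pick_rank(n, law="power", tau=1.4, mu=0.1):
    k = np.arange(1, n + 1, dtype=float)      # rank 1 = worst component
    if law == "power":
        p = k ** (-tau)
    else:                                     # exponential distribution over ranks
        p = np.exp(-mu * k)
    p /= p.sum()
    return rng.choice(k.astype(int), p=p)

# Draw a few ranks under each law for n = 100 components.
print([pick_rank(100, "power") for _ in range(5)])
print([pick_rank(100, "exp") for _ in range(5)])
```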
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982 to 2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results show that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
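The two fitting routes in the abstract map directly onto scipy's maximum-likelihood fitters. The sketch below uses synthetic rainfall-like data; the L-moment estimation, threshold optimization via the adapted Hill estimator, and de-clustering steps are not reproduced, and the numbers are illustrative only.

```python
import numpy as np
from scipy import stats

# Annual Maximum -> GEV and Partial Duration (peaks over threshold) -> GPD,
# fitted by maximum likelihood on synthetic rainfall-like data.

rng = np.random.default_rng(3)
daily = rng.gamma(shape=0.4, scale=12.0, size=(30, 365))   # 30 synthetic years

# Annual Maximum series -> Generalized Extreme Value distribution
am = daily.max(axis=1)
gev_shape, gev_loc, gev_scale = stats.genextreme.fit(am)

# Partial Duration series (exceedances over a high threshold) -> Generalized Pareto
threshold = np.quantile(daily, 0.995)
exceedances = daily[daily > threshold] - threshold
gpd_shape, gpd_loc, gpd_scale = stats.genpareto.fit(exceedances, floc=0.0)

# 100-year return level from the GEV fit (one annual maximum per year)
rl_100 = stats.genextreme.ppf(1 - 1 / 100, gev_shape, gev_loc, gev_scale)
print("100-year return level (GEV):", rl_100)
```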
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple and easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. The system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique. This novel optimization technique gives accurate results besides being the fastest of the techniques considered.
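For readers unfamiliar with the modelling stage, a minimal extreme learning machine is only a few lines: random hidden-layer weights and one least-squares solve for the output weights. The toy regression below is an invented stand-in for cutting-parameter data, and the PSO tuning described in the abstract is omitted.

```python
import numpy as np

# Minimal extreme learning machine (ELM): random hidden layer, least-squares
# output weights. Toy data stand in for the turning-process measurements.

rng = np.random.default_rng(4)

def elm_train(X, T, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights
    b = rng.normal(size=n_hidden)                    # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: a response as a function of three "cutting parameters".
X = rng.uniform(size=(100, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]
W, b, beta = elm_train(X, y[:, None])
print("mean absolute error:", np.abs(elm_predict(X, W, b, beta).ravel() - y).mean())
```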
NASA Astrophysics Data System (ADS)
Ivashkin, V. V.; Krylov, I. V.
2015-09-01
A method to optimize flight trajectories to the asteroid Apophis is developed; it allows a set of Pontryagin extremals to be formed reliably for various boundary conditions of the flight, and a global optimum of the problem to be searched for effectively among its elements.
MRF energy minimization and beyond via dual decomposition.
Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios
2011-03-01
This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.
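Dual decomposition itself can be illustrated on a problem far smaller than an MRF. The toy below is entirely my own construction: it splits the minimization of f1(x) + f2(x) into two subproblems with private copies of x and enforces agreement through a subgradient update on a single dual variable. The MRF-specific machinery (tree-structured subproblems, message passing, tight LP relaxations) is not shown.

```python
# Toy dual decomposition with a subgradient dual update:
# minimize f1(x) + f2(x) with f1(x) = (x - a)^2 and f2(x) = (x - b)^2,
# by giving each term its own copy of x and enforcing x1 = x2 via lambda.

a, b = 1.0, 5.0
lam = 0.0
step = 0.2
for it in range(200):
    x1 = a - lam / 2.0       # argmin_x f1(x) + lam * x  (closed form)
    x2 = b + lam / 2.0       # argmin_x f2(x) - lam * x  (closed form)
    lam += step * (x1 - x2)  # subgradient ascent on the dual function
print(x1, x2, "expected consensus:", (a + b) / 2)
```

The two subproblem copies converge to the same value, which is the minimizer of the original sum; the same agreement-through-duals pattern underlies the MRF decompositions in the paper, with far richer subproblems.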
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
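For reference, the return-level quantity that such regionalized GEV fits deliver can be written explicitly. This is the standard textbook formula with location, scale and shape parameters (mu, sigma, xi), added here as background rather than taken from the paper.

```latex
% T-year return level of a GEV(mu, sigma, xi) distribution (standard result):
\begin{equation}
  z_T =
  \begin{cases}
    \mu + \dfrac{\sigma}{\xi}\left[\bigl(-\log(1-1/T)\bigr)^{-\xi}-1\right], & \xi \neq 0,\\[1ex]
    \mu - \sigma \log\bigl(-\log(1-1/T)\bigr), & \xi = 0.
  \end{cases}
\end{equation}
```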
Minimal time spiking in various ChR2-controlled neuron models.
Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel
2018-02-01
We use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal-time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to investigate the existence of singular optimal controls theoretically, we observe numerically the optimal bang-bang controls.
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events; in this approach, extreme losses occur in the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) and also the two other main criteria of optimal sensor placement, the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of the Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among objective functions. Also, sensitivity analysis is done to investigate the importance of each criterion on the PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests six sensors with a suitable distribution that approximately covers all regions of the WDS. Optimal values of the CVaR of affected population and detection time as well as the probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 min and 0.045%, respectively. The obtained results of the proposed methodology in the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme losses in a WDS.
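The CVaR criterion has a simple sample-based reading: it is the average loss in the worst (1 - alpha) fraction of scenarios. The sketch below computes it for hypothetical Monte Carlo losses; the loss distribution is invented and merely stands in for the simulated affected-population values.

```python
import numpy as np

# Sample-based VaR and CVaR of a set of scenario losses (placeholder data).

rng = np.random.default_rng(5)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=10000)   # hypothetical losses
alpha = 0.95

var = np.quantile(losses, alpha)           # Value at Risk at level alpha
cvar = losses[losses >= var].mean()        # mean loss in the worst (1-alpha) tail
print(f"VaR_{alpha}: {var:.0f}, CVaR_{alpha}: {cvar:.0f}")
```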
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for solving the two phases, respectively. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between the nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which can guarantee obtaining the optimal mapping with the minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
Anam, Khairul; Al-Jumaily, Adel
2014-01-01
The use of a small number of surface electromyography (EMG) channels on a transradial amputee in a myoelectric controller is a big challenge. This paper proposes a pattern recognition system using an extreme learning machine (ELM) optimized by particle swarm optimization (PSO). The PSO is mutated by a wavelet function to avoid being trapped in local minima. The proposed system is used to classify eleven imagined finger motions on five amputees using only two EMG channels. The optimal performance of wavelet-PSO was compared to a grid-search method and standard PSO. The experimental results show that the proposed system is the most accurate among the tested classifiers. It could classify 11 finger motions with an average accuracy of about 94% across five amputees.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
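A hedged sketch of the local-refinement half of such a parameter search: scipy's Nelder-Mead simplex minimizing a validation error of a Gaussian-kernel KELM over (C, gamma). The CSA global stage, the real sensor data, and the exact objective are not reproduced; the data, split, and objective below are invented stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

# Nelder-Mead refinement of KELM hyperparameters (C, gamma) on synthetic data.

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 3))                     # stand-in sensor readings
y = X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=80)

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_cv_error(log_params):
    C, gamma = np.exp(log_params)                # search in log space
    half = len(X) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
    beta = np.linalg.solve(np.eye(half) / C + rbf(Xtr, Xtr, gamma), ytr)
    pred = rbf(Xte, Xtr, gamma) @ beta
    return np.mean((pred - yte) ** 2)            # hold-out validation error

res = minimize(kelm_cv_error, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("tuned C, gamma:", np.exp(res.x), "validation MSE:", res.fun)
```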
Mozheiko, E Yu; Prokopenko, S V; Alekseevich, G V
The aim was to justify the choice of methods for restoring hand motor function depending on the severity of motor impairment in the upper extremity. Eighty-eight patients were randomized into three groups: 1) the mCIMT group, 2) the 'touch glove' group, 3) the control group. Motor function of the upper extremity was assessed with the Fugl-Meyer Assessment Upper Extremity, the Nine-Hole Peg Test, and the Motor Assessment Scale. The non-use phenomenon was assessed with the Motor Activity Log scale. At the stage of severe motor dysfunction, recovery of the proximal segments of the arm occurred in all groups, and no method was superior to the others. In cases of moderate motor deficit of the upper extremity, the most effective method was the one based on the principle of biological feedback, the 'touch glove'. In the group with mild motor dysfunction, the best recovery was achieved in the mCIMT group.
NASA Astrophysics Data System (ADS)
Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu
2017-10-01
Environmental changes have brought significant changes and challenges to water resources and their management around the world; these include increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and especially much more frequent extreme precipitation events, all of which greatly affect water resources and socio-economic development. In this study, we take extreme precipitation events in the midwest of Jilin Province as an example; daily precipitation data during 1960-2014 are used. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR), and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and the Kolmogorov-Smirnov (K-S) test is then employed to determine the optimal probability distribution function of each indicator. On this basis, a nonparametric copula estimation method and the Akaike Information Criterion (AIC) are adopted to determine the bivariate copula function. Finally, we analyze the characteristics of the single-variable extremes and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in semi-arid areas is far lower than that in subhumid areas. The extreme precipitation frequency shows a significant decline while the extreme precipitation intensity shows a growing trend; there are significant spatiotemporal differences in extreme precipitation events. The joint return period becomes shorter from west to east, whereas the spatial distribution of the co-occurrence return period shows the opposite pattern and is longer than the joint return period.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
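As a companion to the abstract, here is a minimal real-number-encoded GA on a smooth two-dimensional test function, in the spirit of the report's simple hill-climbing problem. The blend crossover, Gaussian mutation, and truncation selection are common real-coded choices, not necessarily the operators used in the report, and the flow solvers are of course absent.

```python
import numpy as np

# Minimal real-coded genetic algorithm maximizing a smooth 2-D "hill".

rng = np.random.default_rng(7)

def fitness(x):                        # simple two-dimensional hill, peak at (0.3, 0.3)
    return -np.sum((x - 0.3) ** 2, axis=-1)

pop = rng.uniform(-1, 1, size=(40, 2))
for gen in range(100):
    f = fitness(pop)
    order = np.argsort(f)[::-1]
    parents = pop[order[:20]]                                  # truncation selection
    i, j = rng.integers(0, 20, size=(2, 40))
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents[i] + (1 - alpha) * parents[j]   # blend crossover
    children += 0.05 * rng.normal(size=children.shape)         # Gaussian mutation
    pop = children
    pop[0] = parents[0]                                        # elitism: keep the best
print("best point found:", pop[np.argmax(fitness(pop))])
```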
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers in large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers in small numbers of burns; there seems to be a trend that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are of obvious interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions in large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones; an orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.
Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method
NASA Astrophysics Data System (ADS)
Huang, Feng; Li, Jing
2017-12-01
The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cable and is not the optimal operating state for a cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the constraint that the temperature of every cable stays below its maximum permissible temperature. The interior point method, which is very effective for nonlinear problems, is put forward to solve this extremum problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method increases the total load current by about 5%, greatly improving the economic performance of the cable cluster.
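A toy version of this formulation, with scipy's 'trust-constr' solver standing in for the interior point method: maximize total current subject to each conductor temperature staying below its limit. The thermal model (ambient temperature plus a mutual-heating matrix acting on the squared currents) and all numbers are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Maximize total load current of a 3-cable cluster subject to temperature limits.
# Thermal model and coefficients are invented stand-ins for the real calculation.

T_amb, T_max = 25.0, 90.0
R = np.array([[0.030, 0.010, 0.005],
              [0.010, 0.030, 0.010],
              [0.005, 0.010, 0.030]])       # degC per A^2, invented values

def temperatures(I):
    return T_amb + R @ (I ** 2)             # conductor temperatures

def objective(I):
    return -np.sum(I)                       # maximize total current -> minimize negative

con = NonlinearConstraint(temperatures, -np.inf, T_max)
res = minimize(objective, x0=np.full(3, 10.0), method="trust-constr",
               constraints=[con], bounds=[(0, None)] * 3)
print("optimal loop currents:", res.x, "total:", res.x.sum())
```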
3D sensor placement strategy using the full-range pheromone ant colony system
NASA Astrophysics Data System (ADS)
Shuo, Feng; Jingqing, Jia
2016-07-01
An optimized sensor placement strategy is extremely beneficial for ensuring safety and reducing the cost of structural health monitoring (SHM) systems. The sensors must be placed such that important dynamic information is obtained and the number of sensors is minimized. The usual practice is to select individual sensor directions with 1D sensor placement methods and then place triaxial sensors in these directions for monitoring. However, this may lead to non-optimal placement of many triaxial sensors. In this paper, a new method, called FRPACS, is proposed based on the ant colony system (ACS) to solve the optimal placement of triaxial sensors. The triaxial sensors are placed as single units in an optimal fashion. The new method is then compared with other algorithms on the Dalian North Bridge. The computational precision and iteration efficiency of FRPACS have been greatly improved compared with the original ACS and the EFI method.
A modified estimation distribution algorithm based on extreme elitism.
Gao, Shujun; de Silva, Clarence W
2016-12-01
An existing estimation distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlights the effect of a few top best solutions in the evolution, advancing the EDA to form a primary evolution direction and obtain a fast convergence rate. At the same time, this selection keeps the population diverse enough for the EDA to avoid premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from this extreme elitism selection. In addition, the no-free-lunch theorem was considered in the analysis of the effect of this new selection on EDAs.
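A sketch of a univariate marginal Gaussian EDA with an elitism-biased selection, reflecting one reading of the abstract: the top-ranked solutions receive much larger weights when the Gaussian model is re-estimated. The benchmark objective and the specific weighting scheme are illustrative choices, not the paper's exact design.

```python
import numpy as np

# Univariate marginal Gaussian EDA with rank-weighted (elite-biased) model fitting.

rng = np.random.default_rng(8)

def sphere(X):                                        # benchmark objective to minimize
    return np.sum(X ** 2, axis=1)

dim, pop_size, n_sel = 10, 100, 30
pop = rng.uniform(-5, 5, size=(pop_size, dim))
for gen in range(200):
    f = sphere(pop)
    sel = pop[np.argsort(f)[:n_sel]]                  # truncation selection
    w = 1.0 / np.arange(1, n_sel + 1)                 # heavy weight on the few best
    w /= w.sum()
    mu = w @ sel                                      # weighted marginal means
    sigma = np.sqrt(w @ (sel - mu) ** 2) + 1e-12      # weighted marginal stds
    pop = rng.normal(mu, sigma, size=(pop_size, dim)) # sample the next population
    pop[0] = sel[0]                                   # keep the best-so-far
print("best value:", sphere(pop).min())
```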
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2005-10-01
A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.
A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine
Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini
2013-01-01
A new machine learning method referred to as F-score_ELM was proposed to classify the lying and truth-telling using the electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses from these subjects. Then, a recently-developed classifier called extreme learning machine (ELM) was combined with F-score, a simple but effective feature selection method, to jointly optimize the number of the hidden nodes of ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models including the training and testing time, sensitivity and specificity from the training and testing sets, as well as network size. The experimental results showed that the number of the hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
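The F-score ranking step is small enough to sketch. The code below uses a common two-class F-score definition (between-class separation of feature means over within-class variances) on random placeholder data; the real EEG features and the grid search that jointly tunes the ELM are not reproduced.

```python
import numpy as np

# Two-class F-score feature ranking on placeholder "probe response" features.

rng = np.random.default_rng(9)
X = rng.normal(size=(56, 31))                 # 56 subjects x 31 features (placeholder)
y = np.array([1] * 28 + [0] * 28)             # guilty vs innocent labels

def f_scores(X, y):
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / den                          # larger = more discriminative

ranking = np.argsort(f_scores(X, y))[::-1]    # best features first
print("top five features:", ranking[:5])
```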
Data Transfer Advisor with Transport Profiling Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Liu, Qiang; Yun, Daqing
The network infrastructures have been rapidly upgraded in many high-performance networks (HPNs). However, such infrastructure investment has not led to corresponding performance improvement in big data transfer, especially at the application layer, largely due to the complexity of optimizing transport control on end hosts. We design and implement ProbData, a PRofiling Optimization Based DAta Transfer Advisor, to help users determine the most effective data transfer method with the most appropriate control parameter values to achieve the best data transfer performance. ProbData employs a profiling optimization based approach to exploit the optimal operational zone of various data transfer methods in support of big data transfer in extreme scale scientific applications. We present a theoretical framework of the optimized profiling approach employed in ProbData as well as its detailed design and implementation. The advising procedure and performance benefits of ProbData are illustrated and evaluated by proof-of-concept experiments in real-life networks.
EUVL back-insertion layout optimization
NASA Astrophysics Data System (ADS)
Civay, D.; Laffosse, E.; Chesneau, A.
2018-03-01
Extreme ultraviolet lithography (EUVL) is targeted for front-up insertion at advanced technology nodes but will be evaluated for back insertion at more mature nodes. EUVL can combine two or more mask levels back onto one mask, depending upon at which level(s) in the process insertion occurs. In this paper, layout optimization methods are discussed that can be applied when EUVL back insertion is adopted. The layout optimizations can focus on improving yield, reliability or density, depending upon the design needs. The proposed methodology modifies the original two or more colored layers and generates an optimized single-color EUVL layout design.
Optimal helicopter trajectory planning for terrain following flight
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1990-01-01
Helicopters operating in high-threat areas have to fly close to the earth's surface to minimize the risk of being detected by adversaries. Techniques are presented for low-altitude helicopter trajectory planning. These methods are based on optimal control theory and appear to be implementable onboard in real time. Second-order necessary conditions are obtained to provide a criterion for finding the optimal trajectory when more than one extremal passes through a given point. A second trajectory planning method incorporating a quadratic performance index is also discussed. The trajectory planning problem is then formulated as a differential game, with the objective of synthesizing optimal trajectories in the presence of an actively maneuvering adversary. Numerical methods for obtaining solutions to these problems are outlined. As an alternative to numerical methods, feedback linearizing transformations are combined with linear quadratic game results to synthesize explicit nonlinear feedback strategies for helicopter pursuit-evasion. Some of the trajectories generated from this research are evaluated on a six-degree-of-freedom helicopter simulation incorporating an advanced autopilot. The optimal trajectory planning methods presented are also useful for autonomous land vehicle guidance.
A Neuroscience Approach to Optimizing Brain Resources for Human Performance in Extreme Environments
Paulus, Martin P.; Potterat, Eric G.; Taylor, Marcus K.; Van Orden, Karl F.; Bauman, James; Momen, Nausheen; Padilla, Genieleah A.; Swain, Judith L.
2009-01-01
Extreme environments requiring optimal cognitive and behavioral performance occur in a wide variety of situations ranging from complex combat operations to elite athletic competitions. Although a large literature characterizes psychological and other aspects of individual differences in performances in extreme environments, virtually nothing is known about the underlying neural basis for these differences. This review summarizes the cognitive, emotional, and behavioral consequences of exposure to extreme environments, discusses predictors of performance, and builds a case for the use of neuroscience approaches to quantify and understand optimal cognitive and behavioral performance. Extreme environments are defined as an external context that exposes individuals to demanding psychological and/or physical conditions, and which may have profound effects on cognitive and behavioral performance. Examples of these types of environments include combat situations, Olympic-level competition, and expeditions in extreme cold, at high altitudes, or in space. Optimal performance is defined as the degree to which individuals achieve a desired outcome when completing goal-oriented tasks. It is hypothesized that individual variability with respect to optimal performance in extreme environments depends on a well “contextualized” internal body state that is associated with an appropriate potential to act. This hypothesis can be translated into an experimental approach that may be useful for quantifying the degree to which individuals are particularly suited to performing optimally in demanding environments. PMID:19447132
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
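For readers unfamiliar with the method, the homotopy-analysis construction referred to above starts from the zeroth-order deformation equation below. This is the standard generic form (auxiliary function taken as 1), with c_0 the convergence-control parameter and L the auxiliary linear operator, the two quantities optimally selected in the paper; the notation is not copied from the paper itself.

```latex
% Zeroth-order deformation equation of the homotopy analysis method
% (standard form; the auxiliary function is taken to be 1 here):
\begin{equation}
  (1-q)\,\mathcal{L}\bigl[\phi(x,t;q)-u_0(x,t)\bigr]
  = q\,c_0\,\mathcal{N}\bigl[\phi(x,t;q)\bigr],
  \qquad q \in [0,1],
\end{equation}
% so that phi(x,t;0) = u_0(x,t), the initial guess, and phi(x,t;1) = u(x,t),
% the solution of N[u] = 0. Expanding phi in powers of q gives the successive
% approximations whose residual errors are minimized over c_0 and over the
% family of auxiliary linear operators.
```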
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by practically simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper is to strengthen the ability of the fruit fly optimization algorithm (FOA) to search for the optimal solution, in order to avoid settling on local extremum solutions. Evolutionary computation has grown to include concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the algorithms' ability to compute the extreme values of three mathematical functions, as well as the algorithm execution speed and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization in algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
Impacts of climate change on rainfall extremes and urban drainage systems: a review.
Arnbjerg-Nielsen, K; Willems, P; Olsson, J; Beecham, S; Pathirana, A; Bülow Gregersen, I; Madsen, H; Nguyen, V-T-V
2013-01-01
A review is made of current methods for assessing future changes in urban rainfall extremes and their effects on urban drainage systems, due to anthropogenic-induced climate change. The review concludes that in spite of significant advances there are still many limitations in our understanding of how to describe precipitation patterns in a changing climate in order to design and operate urban drainage infrastructure. Climate change may well be the driver that ensures that changes in urban drainage paradigms are identified and suitable solutions implemented. Design and optimization of urban drainage infrastructure considering climate change impacts and co-optimizing these with other objectives will become ever more important to keep our cities habitable into the future.
NASA Astrophysics Data System (ADS)
Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung
2016-12-01
In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms that are integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms have noteworthy shortcomings, such as trapping of solutions at local extrema, limits on the number of design variables, or difficulty in dealing with discrete design variables. Thus, to overcome these limitations and develop an efficient computational tool for the optimal design of MRBs, an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA) is proposed in this paper. The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of the MRBs are treated as discrete variables in the optimization process. The obtained optimal design results are compared with those of available optimal designs in the literature. The results reveal that the proposed method outperforms some traditional approaches.
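A sketch of the DE-plus-simulation coupling pattern using scipy's differential_evolution: the braking-torque "model" inside the objective is a fabricated smooth surrogate standing in for the finite element analysis, and the variable names and bounds are invented, so this only illustrates how the optimizer drives the simulation, not a real MRB design.

```python
import numpy as np
from scipy.optimize import differential_evolution

# DE driving a fabricated surrogate of an FEA braking-torque model.

def negative_torque(x):
    coil_width, coil_height, housing_thickness = x
    # invented smooth surrogate standing in for the FEA-computed braking torque
    torque = 50 * coil_width * coil_height - 200 * (housing_thickness - 0.004) ** 2
    return -torque                              # DE minimizes, so negate

bounds = [(0.005, 0.02), (0.005, 0.02), (0.003, 0.008)]   # metres, invented ranges
result = differential_evolution(negative_torque, bounds, seed=0, tol=1e-8)
print("optimal design:", result.x, "surrogate torque:", -result.fun)
```

In a real design loop, the objective function would call the finite element solver and round or penalize the discrete design variables, which is where most of the computational cost lies.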
Toward an optimal online checkpoint solution under a two-level HPC checkpoint model
Di, Sheng; Robert, Yves; Vivien, Frederic; ...
2016-03-29
The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
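For context (background added here, not a result from the paper): the classical single-level analysis that two-level models generalize gives a simple first-order rule, usually attributed to Young and Daly, for the optimal amount of work between checkpoints.

```latex
% Classical single-level first-order optimum (Young/Daly): with checkpoint
% cost C and platform MTBF mu, the work between two consecutive checkpoints
% that minimizes the expected overhead is approximately
\begin{equation}
  W_{\mathrm{opt}} \approx \sqrt{2\,C\,\mu}.
\end{equation}
% The paper's contribution is the analogous online, periodic optimum when two
% checkpoint levels with different costs and failure rates are combined.
```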
Methods of Constructing a Blended Performance Function Suitable for Formation Flight
NASA Technical Reports Server (NTRS)
Ryan, Jack
2017-01-01
Two methods for constructing performance functions for formation flight for drag reduction, suitable for use with an extreme-seeking control system, are presented. The first method approximates an a priori measured or estimated drag-reduction performance function by combining real-time measurements of readily available parameters. The parameters are combined with weightings determined from a least-squares optimization to form a blended performance function.
Optimal feedback strategies for pursuit-evasion and interception in a plane
NASA Technical Reports Server (NTRS)
Rajan, N.; Ardema, M. D.
1983-01-01
Variable-speed pursuit-evasion and interception for two aircraft moving in a horizontal plane are analyzed in terms of a coordinate frame fixed in the plane at termination. Each participant's optimal motion can be represented by extremal trajectory maps. These maps are used to discuss sub-optimal approximations that are independent of the other participant. A method of constructing sections of the barrier, dispersal, and control-level surfaces and thus determining feedback strategies is described. Some examples are shown for pursuit-evasion and the minimum-time interception of a straight-flying target.
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimation of the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of the nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence making a false estimation of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, one kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic search algorithm based on the population, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimation presented by PSO-CNOP is closer to the true value than the one by ADJ-CNOP with the forecast time increasing.
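A toy version of the PSO-CNOP idea is sketched below for the two-dimensional Ikeda map: a plain particle swarm maximizes the nonlinear prediction error over initial perturbations projected back onto a norm-ball constraint. The map parameter, constraint radius, horizon, and swarm settings are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def ikeda(state, n_steps, u=0.83):
    # Two-dimensional Ikeda map iterated n_steps times.
    x, y = state
    for _ in range(n_steps):
        t = 0.4 - 6.0 / (1.0 + x*x + y*y)
        x, y = 1.0 + u*(x*np.cos(t) - y*np.sin(t)), u*(x*np.sin(t) + y*np.cos(t))
    return np.array([x, y])

x0 = np.array([0.5, 0.5])      # reference initial state (assumed)
delta_max = 0.1                # constraint radius on the initial perturbation (assumed)
T = 10                         # prediction horizon in map iterations

def objective(pert):
    # Prediction error caused by the initial perturbation (to be maximized).
    return np.linalg.norm(ikeda(x0 + pert, T) - ikeda(x0, T))

def project(pert):
    n = np.linalg.norm(pert)
    return pert if n <= delta_max else pert * (delta_max / n)

def pso_cnop(n_particles=40, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    pos = np.array([project(rng.uniform(-delta_max, delta_max, 2)) for _ in range(n_particles)])
    vel = np.zeros_like(pos)
    pbest, pval = pos.copy(), np.array([objective(p) for p in pos])
    g = pbest[np.argmax(pval)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(g - pos)
        pos = np.array([project(p) for p in pos + vel])   # enforce the constraint
        val = np.array([objective(p) for p in pos])
        improved = val > pval
        pbest[improved], pval[improved] = pos[improved], val[improved]
        g = pbest[np.argmax(pval)].copy()
    return g, objective(g)

cnop, err = pso_cnop()
print("approximate CNOP:", cnop, "prediction error:", err)
```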
Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong
2017-01-01
Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues on the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation on proteins can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation on identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to specify myristoylated and non-myristoylated sites. Then, feature selection methods including maximum relevance and minimum redundancy (mRMR), incremental feature selection (IFS), and a machine learning algorithm (extreme learning machine method) were adopted to extract optimal features for the algorithm to identify myristoylation sites in protein sequences, thereby building an optimal prediction model. As a result, 41 key features were extracted and used to build an optimal prediction model. The effectiveness of the optimal prediction model was further validated by its performance on a test dataset. Furthermore, detailed analyses were also performed on the extracted 41 features to gain insight into the mechanism of myristoylation modification. This study provided a new computational method for identifying myristoylation sites in protein sequences. We believe that it can be a useful tool to predict myristoylation sites from protein sequences.
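The incremental feature selection loop can be sketched as follows; mutual information ranking stands in for mRMR, a logistic regression stands in for the extreme learning machine, and the data are randomly generated placeholders rather than the peptide-segment features of the study.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Stand-in data: rows are peptide segments encoded as numeric features, y marks myristoylation.
X = rng.normal(size=(280, 60))
y = rng.integers(0, 2, size=280)

# Rank features (mutual information used here as a simple stand-in for mRMR).
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

# Incremental feature selection: grow the feature set along the ranking and
# keep the prefix with the best cross-validated performance.
scores = []
for k in range(1, len(ranking) + 1):
    clf = LogisticRegression(max_iter=1000)   # stand-in for the extreme learning machine
    scores.append(cross_val_score(clf, X[:, ranking[:k]], y, cv=5).mean())

best_k = int(np.argmax(scores)) + 1
print("optimal feature subset size:", best_k, "CV accuracy: %.3f" % scores[best_k - 1])
```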
A variational approach to probing extreme events in turbulent dynamical systems
Farazmand, Mohammad; Sapsis, Themistoklis P.
2017-01-01
Extreme events are ubiquitous in a wide range of dynamical systems, including turbulent fluid flows, nonlinear waves, large-scale networks, and biological systems. We propose a variational framework for probing conditions that trigger intermittent extreme events in high-dimensional nonlinear dynamical systems. We seek the triggers as the probabilistically feasible solutions of an appropriately constrained optimization problem, where the function to be maximized is a system observable exhibiting intermittent extreme bursts. The constraints are imposed to ensure the physical admissibility of the optimal solutions, that is, significant probability for their occurrence under the natural flow of the dynamical system. We apply the method to a body-forced incompressible Navier-Stokes equation, known as the Kolmogorov flow. We find that the intermittent bursts of the energy dissipation are independent of the external forcing and are instead caused by the spontaneous transfer of energy from large scales to the mean flow via nonlinear triad interactions. The global maximizer of the corresponding variational problem identifies the responsible triad, hence providing a precursor for the occurrence of extreme dissipation events. Specifically, monitoring the energy transfers within this triad allows us to develop a data-driven short-term predictor for the intermittent bursts of energy dissipation. We assess the performance of this predictor through direct numerical simulations. PMID:28948226
New adaptive method to optimize the secondary reflector of linear Fresnel collectors
Zhu, Guangdong
2017-01-16
Performance of linear Fresnel collectors may largely depend on the secondary-reflector profile design when small-aperture absorbers are used. Optimization of the secondary-reflector profile is an extremely challenging task because there is no established theory to ensure superior performance of derived profiles. In this work, an innovative optimization method is proposed to optimize the secondary-reflector profile of a generic linear Fresnel configuration. The method correctly and accurately captures impacts of both geometric and optical aspects of a linear Fresnel collector to secondary-reflector design. The proposed method is an adaptive approach that does not assume a secondary shape of any particular form, but rather, starts at a single edge point and adaptively constructs the next surface point to maximize the reflected power to be reflected to absorber(s). As a test case, the proposed optimization method is applied to an industrial linear Fresnel configuration, and the results show that the derived optimal secondary reflector is able to redirect more than 90% of the power to the absorber in a wide range of incidence angles. The proposed method can be naturally extended to other types of solar collectors as well, and it will be a valuable tool for solar-collector designs with a secondary reflector.
Nanofocusing of the free-space optical energy with plasmonic Tamm states.
Niu, Linyu; Xiang, Yinxiao; Luo, Weiwei; Cai, Wei; Qi, Jiwei; Zhang, Xinzheng; Xu, Jingjun
2016-12-20
To achieve extreme electromagnetic enhancement, we propose a plasmonic Tamm states (PTSs) configuration based on the metal-insulator-metal Bragg reflector, which is realized by periodically modulating the width of the insulator. Both the thick (2D) and thin (3D) structures are discussed. Through optimization performed by the impedance-based transfer matrix method and the finite difference time domain method, we find that both the electric field and magnetic field intensities can be increased by three orders of magnitude. The field enhancement inside the PTSs configuration is not limited to an extremely sharp waveguide terminal, which can greatly reduce processing difficulties.
Early Reconstructions of Complex Lower Extremity Battlefield Soft Tissue Wounds
Ebrahimi, Ali; Nejadsarvari, Nasrin; Ebrahimi, Azin; Rasouli, Hamid Reza
2017-01-01
BACKGROUND Severe lower extremity trauma, a devastating combat-related injury, is on the rise, and it presents reconstructive surgeons with significant challenges in reaching optimal cosmetic and functional outcomes. This study assessed early reconstructions of complex lower extremity battlefield soft tissue wounds. METHODS This was a prospective case series study of battlefield-injured patients conducted in the Department of Plastic Surgery, Baqiyatallah University of Medical Sciences hospitals, Tehran, Iran, between 2013 and 2015. In this survey, 73 patients were operated on for reconstruction of lower extremity soft tissue defects due to battlefield injuries. RESULTS Seventy-three patients (65 men, 8 women) ranging from 21 to 48 years old (mean: 35 years) were enrolled. Our study showed that early debridement and bone stabilization, followed by coverage of complex battlefield soft tissue wounds with suitable lower extremity flaps and grafts, was an effective method for managing difficult wounds, with fewer amputations and infections. CONCLUSION Serial debridement and bone stabilization before early soft tissue reconstruction according to the reconstructive ladder were shown to be essential steps. PMID:29218283
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus imposes an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
Optimization of entanglement witnesses
NASA Astrophysics Data System (ADS)
Lewenstein, M.; Kraus, B.; Cirac, J. I.; Horodecki, P.
2000-11-01
An entanglement witness (EW) is an operator that allows the detection of entangled states. We give necessary and sufficient conditions for such operators to be optimal, i.e., to detect entangled states in an optimal way. We show how to optimize general EW, and then we particularize our results to the nondecomposable ones; the latter are those that can detect positive partial transpose entangled states (PPTES's). We also present a method to systematically construct and optimize this last class of operators based on the existence of ``edge'' PPTES's, i.e., states that violate the range separability criterion [Phys. Lett. A 232, 333 (1997)] in an extreme manner. This method also permits a systematic construction of nondecomposable positive maps (PM's). Our results lead to a sufficient condition for entanglement in terms of nondecomposable EW's and PM's. Finally, we illustrate our results by constructing optimal EW acting on H=C2⊗C4. The corresponding PM's constitute examples of PM's with minimal ``qubit'' domains, or-equivalently-minimal Hermitian conjugate codomains.
Approximation of Nash equilibria and the network community structure detection problem
2017-01-01
Game theory based methods designed to solve the problem of community structure detection in complex networks have emerged in recent years as an alternative to classical and optimization based approaches. The Mixed Nash Extremal Optimization uses a generative relation for the characterization of Nash equilibria to identify the community structure of a network by converting the problem into a non-cooperative game. This paper proposes a method to enhance this algorithm by reducing the number of payoff function evaluations. Numerical experiments performed on synthetic and real-world networks show that this approach is efficient, with results better or just as good as other state-of-the-art methods. PMID:28467496
Costa, Filippo; Monorchio, Agostino; Manara, Giuliano
2016-01-01
A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry and arranged in a periodic lattice characterized by a repetition period larger than one wavelength which induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized in order to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of FSS geometry is performed through a genetic algorithm in conjunction with periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need of a high-resolution printing process. PMID:27181841
Barriers and dispersal surfaces in minimum-time interception. [for optimizing aircraft flight paths
NASA Technical Reports Server (NTRS)
Rajan, N.; Ardema, M. D.
1984-01-01
A method is proposed for mapping the barrier, dispersal, and control-level surfaces for a class of minimum-time interception and pursuit-evasion problems. Minimum-time interception of a target moving in a horizontal plane is formulated in a coordinate system whose origin is at the interceptor's terminal position and whose x-axis is along the terminal line of sight. This approach makes it possible to discuss the nature of the interceptor's extremals, using its extremal trajectory maps (ETMs), independently of target motion. The game surfaces are constructed by drawing sections of the isochrones, or constant minimum-time loci, from the interceptor and target ETMs. In this way, feedback solutions for the optimal controls are obtained. An example involving the interception of a target moving in a straight line at constant speed is presented.
Game theory and extremal optimization for community detection in complex dynamic networks.
Lung, Rodica Ioana; Chira, Camelia; Andreica, Anca
2014-01-01
The detection of evolving communities in dynamic complex networks is a challenging problem that recently received attention from the research community. Dynamics clearly add another complexity dimension to the difficult task of community detection. Methods should be able to detect changes in the network structure and produce a set of community structures corresponding to different timestamps and reflecting the evolution in time of network data. We propose a novel approach based on game theory elements and extremal optimization to address dynamic communities detection. Thus, the problem is formulated as a mathematical game in which nodes take the role of players that seek to choose a community that maximizes their profit viewed as a fitness function. Numerical results obtained for both synthetic and real-world networks illustrate the competitive performance of this game theoretical approach.
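A minimal sketch of an extremal-optimization move for community detection is given below: nodes are scored by a local fitness (the fraction of their edges kept inside their own community), and the worst-fit node is reassigned greedily. This simple fitness and greedy reassignment are stand-ins for the game-theoretic payoff and mutation used by the authors.

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import modularity

rng = np.random.default_rng(3)
G = nx.karate_club_graph()                       # small test network
comm = {v: rng.integers(0, 4) for v in G}        # random initial community labels

def node_fitness(v):
    # Fraction of v's edges that stay inside its own community (local fitness).
    nbrs = list(G.neighbors(v))
    if not nbrs:
        return 1.0
    return sum(comm[u] == comm[v] for u in nbrs) / len(nbrs)

def eo_step():
    # Extremal-optimization move: pick the worst-fit node and reassign it to the
    # neighboring community that maximizes its own fitness.
    worst = min(G, key=node_fitness)
    candidates = {comm[u] for u in G.neighbors(worst)}
    comm[worst] = max(candidates,
                      key=lambda c: sum(comm[u] == c for u in G.neighbors(worst)))

for _ in range(500):
    eo_step()

labels = [{v for v in G if comm[v] == c} for c in set(comm.values())]
print("modularity of EO partition: %.3f" % modularity(G, labels))
```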
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
Quantification of Uncertainty in the Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.
2017-12-01
Flood frequency analysis (FFA) is usually carried out for planning and designing of water resources and hydraulic structures. Owing to the existence of variability in sample representation, selection of distribution and estimation of distribution parameters, the estimation of flood quantile has been always uncertain. Hence, suitable approaches must be developed to quantify the uncertainty in the form of prediction interval as an alternate to deterministic approach. The developed framework in the present study to include uncertainty in the FFA discusses a multi-objective optimization approach to construct the prediction interval using ensemble of flood quantile. Through this approach, an optimal variability of distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow river at Calgary and Banff, Canada) are used. The major focus of the present study was to evaluate the changes in magnitude of flood quantiles due to the recent extreme flood event occurred during the year 2013. In addition, the efficacy of the proposed method was further verified using standard bootstrap based sampling approaches and found that the proposed method is reliable in modeling extreme floods as compared to the bootstrap methods.
Theory and Computation of Optimal Low- and Medium- Thrust Orbit Transfers
NASA Technical Reports Server (NTRS)
Goodson, Troy D.; Chuang, Jason C. H.; Ledsinger, Laura A.
1996-01-01
This report presents new theoretical results which lead to new algorithms for the computation of fuel-optimal multiple-burn orbit transfers of low and medium thrust. Theoretical results introduced herein show how to add burns to an optimal trajectory and show that the traditional set of necessary conditions may be replaced with a much simpler set of equations. Numerical results are presented to demonstrate the utility of the theoretical results and the new algorithms. Two indirect methods from the literature are shown to be effective for the optimal orbit transfer problem with relatively small numbers of burns. These methods are the Minimizing Boundary Condition Method (MBCM) and BOUNDSCO. Both of these methods make use of the first-order necessary conditions exactly as derived by optimal control theory. Perturbations due to Earth's oblateness and atmospheric drag are considered. These perturbations are of greatest interest for transfers that take place between low Earth orbit altitudes and geosynchronous orbit altitudes. Example extremal solutions including these effects and computed by the aforementioned methods are presented. An investigation is also made into a suboptimal multiple-burn guidance scheme. The FORTRAN code developed for this study has been collected together in a package named ORBPACK. ORBPACK's user manual is provided as an appendix to this report.
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data without being sensitive to noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
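A compact illustration of PCA-based stacking is given below; a rank-1 SVD of a synthetic gather stands in for the paper's fast PCA algorithm, and the weighted stack built from the leading singular vector is compared against the conventional mean stack.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic gather: 60 noisy copies of one reflectivity trace (rows = traces).
n_traces, n_samples = 60, 500
t = np.linspace(0, 1, n_samples)
signal = np.sin(40*t) * np.exp(-((t - 0.5) / 0.1)**2)
D = np.tile(signal, (n_traces, 1)) + 1.5 * rng.normal(size=(n_traces, n_samples))

# Conventional stack: plain average over traces.
mean_stack = D.mean(axis=0)

# PCA stack: weight traces by the leading left singular vector, which captures
# the coherent component of the gather (a truncated SVD stands in for fast PCA).
U, s, Vt = np.linalg.svd(D, full_matrices=False)
weights = U[:, 0] / U[:, 0].sum()          # trace weights from the first singular vector
pca_stack = weights @ D                    # weighted stack emphasizing coherent energy

snr = lambda x: 10 * np.log10(np.sum(signal**2) / np.sum((x - signal)**2))
print("SNR mean stack %.1f dB, PCA stack %.1f dB" % (snr(mean_stack), snr(pca_stack)))
```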
New numerical methods for open-loop and feedback solutions to dynamic optimization problems
NASA Astrophysics Data System (ADS)
Ghosh, Pradipto
The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and, unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis, particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. A notable revelation from this development is that the resulting control law has an algebraic closed-form structure. The proposed method uses an optimal spatial statistical predictor called universal kriging to construct the surrogate model of a feedback controller, which is capable of quickly predicting an optimal control estimate based on current state (and time) information. With universal kriging, an approximation to the optimal feedback map is computed by conceptualizing a set of state-control samples from pre-computed extremals to be a particular realization of a jointly Gaussian spatial process. Feedback policies are computed for a variety of example dynamic optimization problems in order to evaluate the effectiveness of this methodology. This feedback synthesis approach is found to combine good numerical accuracy with low computational overhead, making it a suitable candidate for real-time applications. Particle swarm and universal kriging are combined for a capstone example: a near-optimal, near-admissible, full-state feedback control law is computed and tested for the heat-load-limited atmospheric-turn guidance of an aeroassisted transfer vehicle. The performance of this explicit guidance scheme is found to be very promising; initial errors in atmospheric entry due to simulated thruster misfirings are found to be accurately corrected while closely respecting the algebraic state-inequality constraint.
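The extremal-field feedback idea can be sketched with an off-the-shelf Gaussian process regressor standing in for universal kriging: state-control samples taken from a known linear feedback law play the role of pre-computed extremals, and the surrogate is queried at the current state. The gain, sampling ranges, and kernel below are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

# Toy "pre-computed extremals": optimal feedback samples u = -K x for a double
# integrator under an LQR cost (gain K assumed known for this illustration).
K = np.array([1.0, 1.732])                       # illustrative LQR gain
states = rng.uniform(-2, 2, size=(200, 2))       # sampled states along extremals
controls = -(states @ K)                         # corresponding optimal controls

# Kriging-style surrogate of the feedback map (sklearn GP regression stands in
# for the universal kriging predictor used in the thesis).
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                              normalize_y=True)
gp.fit(states, controls)

# Real-time use: query the surrogate for the control at the current state.
x_now = np.array([[0.7, -0.3]])
u_pred, u_std = gp.predict(x_now, return_std=True)
print("predicted feedback control %.3f (exact %.3f, std %.3f)"
      % (u_pred[0], -(x_now @ K)[0], u_std[0]))
```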
NASA Astrophysics Data System (ADS)
Matsukuma, Hiraku; Hosoda, Tatsuya; Suzuki, Yosuke; Yogo, Akifumi; Yanagida, Tatsuya; Kodama, Takeshi; Nishimura, Hiroaki
2016-08-01
The two-color, double-pulse method is an efficient scheme to generate extreme ultraviolet light for fabricating the next generation semiconductor microchips. In this method, a Nd:YAG laser pulse is used to expand a several-tens-of-micrometers-scale tin droplet, and a CO2 laser pulse is subsequently directed at the expanded tin vapor after an appropriate delay time. We propose the use of shadowgraphy with a CO2 laser probe-pulse scheme to optimize the CO2 main-drive laser. The distribution of absorption coefficients is derived from the experiment, and the results are converted to a practical absorption rate for the CO2 main-drive laser.
Adaptive photoacoustic imaging quality optimization with EMD and reconstruction
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.
2016-10-01
Biomedical photoacoustic (PA) signals are characterized by extremely low signal-to-noise ratios, which yield significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot provide useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed across the IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively improve the quality of reconstructed PAT images. However, searching for optimal parameters with brute-force algorithms costs too much time, which prevents this method from practical use. To find parameters within reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for optimal IMF parameters in this paper. The effectiveness of our proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which shows its potential for future clinical PA imaging de-noising.
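A sketch of the IMF-weighting idea with a simulated annealing search is given below. It assumes the PyEMD package is available for the decomposition and, for illustration only, scores candidate weights against a known clean signal; in practice an image-quality metric would take that role.

```python
import numpy as np
from PyEMD import EMD          # PyEMD package assumed available (pip install EMD-signal)

rng = np.random.default_rng(6)

# Synthetic PA-like signal: clean reference plus strong noise.
t = np.linspace(0, 1, 2000)
clean = np.sin(2*np.pi*5*t) * np.exp(-((t - 0.5) / 0.15)**2)
noisy = clean + 0.8 * rng.normal(size=t.size)

imfs = EMD().emd(noisy)                      # adaptive decomposition into IMFs

def reconstruct(weights):
    return weights @ imfs                    # weighted recombination of the IMFs

def cost(weights):
    return np.mean((reconstruct(weights) - clean)**2)

# Simulated annealing over the IMF weights.
w = np.ones(imfs.shape[0])
best_w, best_c = w.copy(), cost(w)
T = 1.0
for it in range(3000):
    cand = np.clip(w + rng.normal(scale=0.1, size=w.size), 0.0, 1.0)
    dc = cost(cand) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis acceptance
        w = cand
        if cost(w) < best_c:
            best_w, best_c = w.copy(), cost(w)
    T *= 0.999                                     # geometric cooling schedule

print("optimized IMF weights:", np.round(best_w, 2), "MSE: %.4f" % best_c)
```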
1978-12-01
multinational corporation in the 1960s placed extreme emphasis on the need for effective and efficient noise suppression devices. Phase I of work...through model and engine testing applicable to an afterburning turbojet engine. Suppressor designs were based primarily on empirical methods. Phase II...using "ray" acoustics. This method is in contrast to the purely empirical method which consists of the curve-fitting of normalized data. In order to
NASA Astrophysics Data System (ADS)
Fix, Miranda J.; Cooley, Daniel; Hodzic, Alma; Gilleland, Eric; Russell, Brook T.; Porter, William C.; Pfister, Gabriele G.
2018-03-01
We conduct a case study of observed and simulated maximum daily 8-h average (MDA8) ozone (O3) in three US cities for summers during 1996-2005. The purpose of this study is to evaluate the ability of a high resolution atmospheric chemistry model to reproduce observed relationships between meteorology and high or extreme O3. We employ regional coupled chemistry-transport model simulations to make three types of comparisons between simulated and observational data, comparing (1) tails of the O3 response variable, (2) distributions of meteorological predictor variables, and (3) sensitivities of high and extreme O3 to meteorological predictors. This last comparison is made using two methods: quantile regression, for the 0.95 quantile of O3, and tail dependence optimization, which is used to investigate even higher O3 extremes. Across all three locations, we find substantial differences between simulations and observational data in both meteorology and meteorological sensitivities of high and extreme O3.
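For the quantile-regression comparison, the following sketch fits a 0.95-quantile (and median) regression of O3 on two synthetic meteorological predictors using statsmodels; the predictors and coefficients are placeholders, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic stand-ins for daily meteorology and MDA8 O3 (ppb); the real study
# uses observed and simulated summer data for three US cities.
n = 900
temp = rng.normal(28, 5, n)                  # daily max temperature (deg C)
wind = rng.gamma(2.0, 1.5, n)                # daily mean wind speed (m/s)
o3 = 20 + 1.8*temp - 2.5*wind + rng.gumbel(0, 6, n)

X = sm.add_constant(np.column_stack([temp, wind]))

# Sensitivity of the upper tail of O3 to meteorology: 0.95-quantile regression.
res_95 = sm.QuantReg(o3, X).fit(q=0.95)
res_50 = sm.QuantReg(o3, X).fit(q=0.50)      # median fit for comparison
print("0.95-quantile slopes (temp, wind):", np.round(res_95.params[1:], 2))
print("0.50-quantile slopes (temp, wind):", np.round(res_50.params[1:], 2))
```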
Extremality of Gaussian quantum states.
Wolf, Michael M; Giedke, Geza; Cirac, J Ignacio
2006-03-03
We investigate Gaussian quantum states in view of their exceptional role within the space of all continuous variables states. A general method for deriving extremality results is provided and applied to entanglement measures, secret key distillation and the classical capacity of bosonic quantum channels. We prove that for every given covariance matrix the distillable secret key rate and the entanglement, if measured appropriately, are minimized by Gaussian states. This result leads to a clearer picture of the validity of frequently made Gaussian approximations. Moreover, it implies that Gaussian encodings are optimal for the transmission of classical information through bosonic channels, if the capacity is additive.
Optimal bounds and extremal trajectories for time averages in dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles
2017-11-01
For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
Fast exploration of an optimal path on the multidimensional free energy surface
Chen, Changjun
2017-01-01
In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves lots of degrees of freedom. For simple models, one can build an initial path in the collective variable space by the interpolation method first and then update the whole path constantly in the optimization. However, such interpolation method could be risky in the high dimensional space for large molecules. On the path, steric clashes between neighboring atoms could cause extremely high energy barriers and thus fail the optimization. Moreover, performing simulations for all the snapshots on the path is also time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy can let the path avoid the high energy states in the growing process and save the precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
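The growing strategy can be sketched on a toy two-dimensional surface: at each step the path advances by a fixed-length step along a mix of the downhill free-energy direction and the direction toward the product. The surface, endpoints, and mixing weights below are illustrative assumptions.

```python
import numpy as np

# Toy two-dimensional free energy surface with two minima (reactant and product).
def F(p):
    x, y = p
    return (x**2 - 1.0)**2 + 2.0*y**2

def gradF(p):
    x, y = p
    return np.array([4.0*x*(x**2 - 1.0), 4.0*y])

reactant = np.array([-1.0, 0.1])
product = np.array([1.0, 0.0])
step, alpha, beta = 0.05, 1.0, 1.0      # step length and mixing weights (assumed)

path = [reactant.copy()]
p = reactant.copy()
for _ in range(200):
    to_product = product - p
    if np.linalg.norm(to_product) < step:        # reached the product basin
        path.append(product.copy())
        break
    # Growing direction: mix the downhill free-energy direction with the
    # direction pointing at the product, then take a fixed-length step.
    d = -alpha*gradF(p) + beta*to_product/np.linalg.norm(to_product)
    p = p + step * d / np.linalg.norm(d)
    path.append(p.copy())

path = np.array(path)
print("path of %d points, max free energy along path: %.3f" % (len(path), F(path.T).max()))
```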
Inverse lithography using sparse mask representations
NASA Astrophysics Data System (ADS)
Ionescu, Radu C.; Hurley, Paul; Apostol, Stefan
2015-03-01
We present a novel optimisation algorithm for inverse lithography, based on optimization of the mask derivative, a domain inherently sparse, and for rectilinear polygons, invertible. The method is first developed assuming a point light source, and then extended to general incoherent sources. What results is a fast algorithm, producing manufacturable masks (the search space is constrained to rectilinear polygons), and flexible (specific constraints such as minimal line widths can be imposed). One inherent trick is to treat polygons as continuous entities, thus making aerial image calculation extremely fast and accurate. Requirements for mask manufacturability can be integrated in the optimization without too much added complexity. We also explain how to extend the scheme for phase-changing mask optimization.
NASA Astrophysics Data System (ADS)
Troselj, Josko; Sayama, Takahiro; Varlamov, Sergey M.; Sasaki, Toshiharu; Racault, Marie-Fanny; Takara, Kaoru; Miyazawa, Yasumasa; Kuroki, Ryusuke; Yamagata, Toshio; Yamashiki, Yosuke
2017-12-01
This study demonstrates the importance of accurate extreme discharge input in hydrological and oceanographic combined modeling by introducing two extreme typhoon events. We investigated the effects of extreme freshwater outflow events from river mouths on sea surface salinity distribution (SSS) in the coastal zone of the north-eastern Japan. Previous studies have used observed discharge at the river mouth, as well as seasonally averaged inter-annual, annual, monthly or daily simulated data. Here, we reproduced the hourly peak discharge during two typhoon events for a targeted set of nine rivers and compared their impact on SSS in the coastal zone based on observed, climatological and simulated freshwater outflows in conjunction with verification of the results using satellite remote-sensing data. We created a set of hourly simulated freshwater outflow data from nine first-class Japanese river basins flowing to the western Pacific Ocean for the two targeted typhoon events (Chataan and Roke) and used it with the integrated hydrological (CDRMV3.1.1) and oceanographic (JCOPE-T) model, to compare the case using climatological mean monthly discharges as freshwater input from rivers with the case using our hydrological model simulated discharges. By using the CDRMV model optimized with the SCE-UA method, we successfully reproduced hindcasts for peak discharges of extreme typhoon events at the river mouths and could consider multiple river basin locations. Modeled SSS results were verified by comparison with Chlorophyll-a distribution, observed by satellite remote sensing. The projection of SSS in the coastal zone became more realistic than without including extreme freshwater outflow. These results suggest that our hydrological models with optimized model parameters calibrated to the Typhoon Roke and Chataan cases can be successfully used to predict runoff values from other extreme precipitation events with similar physical characteristics. Proper simulation of extreme typhoon events provides more realistic coastal SSS and may allow a different scenario analysis with various precipitation inputs for developing a nowcasting analysis in the future.
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To optimally make use of the valuable porosity data, a semi-supervised machine learning method was proposed, Transductive Conditional Random Field Regression (TCRFR), showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR for extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the labor-intensive, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.
Optimal thrust level for orbit insertion
NASA Astrophysics Data System (ADS)
Cerf, Max
2017-07-01
The minimum-fuel orbital transfer is analyzed in the case of a launcher upper stage using a constantly thrusting engine. The thrust level is assumed to be constant and its value is optimized together with the thrust direction. A closed-loop solution for the thrust direction is derived from the extremal analysis for a planar orbital transfer. The optimal control problem reduces to two unknowns, namely the thrust level and the final time. Guessing and propagating the costates is no longer necessary and the optimal trajectory is easily found from a rough initialization. On the other hand the initial costates are assessed analytically from the initial conditions and they can be used as initial guess for transfers at different thrust levels. The method is exemplified on a launcher upper stage targeting a geostationary transfer orbit.
Optimization of Low-Thrust Spiral Trajectories by Collocation
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Dankanich, John W.
2012-01-01
As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a more optimal approach to a DTCO study in terms of accuracy and speed compared to a Cahn Hilliard Equation solver. DO's shorter run time (10X to 100X faster) and linear scaling makes it scalable to the area required for a DTCO study. However, the lack of temporal data output, as opposed to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns. The calibration pattern's design is extremely important when temporal data is absent to obtain robust model parameters. A model calibrated to a Hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model prediction using programmed defects on chemo-epitaxy shows encouraging results and agree qualitatively well with theoretical predictions from a strong segregation theory.
Using SpF to Achieve Petascale for Legacy Pseudospectral Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Jiang, Weiyuan
2014-01-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.
Bailey-Wilson, Joan E.; Brennan, Jennifer S.; Bull, Shelley B; Culverhouse, Robert; Kim, Yoonhee; Jiang, Yuan; Jung, Jeesun; Li, Qing; Lamina, Claudia; Liu, Ying; Mägi, Reedik; Niu, Yue S.; Simpson, Claire L.; Wang, Libo; Yilmaz, Yildiz E.; Zhang, Heping; Zhang, Zhaogong
2012-01-01
Group 14 of Genetic Analysis Workshop 17 examined several issues related to analysis of complex traits using DNA sequence data. These issues included novel methods for analyzing rare genetic variants in an aggregated manner (often termed collapsing rare variants), evaluation of various study designs to increase power to detect effects of rare variants, and the use of machine learning approaches to model highly complex heterogeneous traits. Various published and novel methods for analyzing traits with extreme locus and allelic heterogeneity were applied to the simulated quantitative and disease phenotypes. Overall, we conclude that power is (as expected) dependent on locus-specific heritability or contribution to disease risk, large samples will be required to detect rare causal variants with small effect sizes, extreme phenotype sampling designs may increase power for smaller laboratory costs, methods that allow joint analysis of multiple variants per gene or pathway are more powerful in general than analyses of individual rare variants, population-specific analyses can be optimal when different subpopulations harbor private causal mutations, and machine learning methods may be useful for selecting subsets of predictors for follow-up in the presence of extreme locus heterogeneity and large numbers of potential predictors. PMID:22128066
Hierarchical extreme learning machine based reinforcement learning for goal localization
NASA Astrophysics Data System (ADS)
AlDahoul, Nouar; Zaw Htike, Zaw; Akmeliawati, Rini
2017-03-01
The objective of goal localization is to find the location of goals in noisy environments. Simple actions are performed to move the agent towards the goal. The goal detector should be capable of minimizing the error between the predicted locations and the true ones. Only a few regions need to be processed by the agent, to reduce the computational effort and increase the speed of convergence. In this paper, a reinforcement learning (RL) method was utilized to find an optimal series of actions to localize the goal region. The visual data, a set of images, are high-dimensional unstructured data and need to be represented efficiently to obtain a robust detector. Different deep reinforcement learning models have already been used to localize a goal, but most of them take a long time to learn the model. This long learning time results from the weight fine-tuning stage that is applied iteratively to find an accurate model. The Hierarchical Extreme Learning Machine (H-ELM) was used as a fast deep model that does not fine-tune the weights: hidden weights are generated randomly and output weights are calculated analytically. The H-ELM algorithm was used in this work to find good features for effective representation. This paper proposes a combination of the Hierarchical Extreme Learning Machine and reinforcement learning to find an optimal policy directly from visual input. This combination outperforms other methods in terms of accuracy and learning speed. The simulations and results were analysed using MATLAB.
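The ELM building block behind H-ELM, random fixed hidden weights with analytically computed output weights, can be written in a few lines; the regression target below is a generic stand-in (for example, a value-function approximation), not the goal-localization pipeline of the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

class ELM:
    """Minimal single-hidden-layer extreme learning machine: hidden weights are
    random and fixed, output weights are computed analytically (ridge least squares)."""
    def __init__(self, n_hidden=200, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg

    def fit(self, X, Y):
        d = X.shape[1]
        self.W = rng.normal(size=(d, self.n_hidden))   # random input weights (not trained)
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)               # random feature map
        # Analytic, ridge-regularized solution for the output weights.
        self.beta = np.linalg.solve(H.T @ H + self.reg*np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Example: regression of a nonlinear target (stand-in for a Q-value function).
X = rng.uniform(-1, 1, size=(500, 4))
Y = np.sin(3*X[:, 0]) + X[:, 1]*X[:, 2]
model = ELM().fit(X, Y)
print("train RMSE: %.4f" % np.sqrt(np.mean((model.predict(X) - Y)**2)))
```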
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
Simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as a surrogate of the high-fidelity physical model during simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high-dimensions and discontinuities of the data. Furthermore, the stability and accuracy of MARS model can be improved by bootstrap aggregating methods, namely, bagging. In this paper, Bagging MARS (BMARS) method is integrated to a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using BMARS algorithm. The surrogate model, which is fitted and validated using training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. Normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrated feasibility of this high-efficient calibration framework.
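The surrogate-based calibration loop can be sketched as follows; bagged regression trees stand in for the BMARS surrogate, a cheap analytic function stands in for the MODFLOW run, and a random search replaces the optimizer, so all parameter names and values are illustrative.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(9)

# Stand-in "physical model": maps 3 sensitive parameters to heads at 5 wells.
# In the real workflow each evaluation is a time-consuming MODFLOW run.
def physical_model(theta):
    k, rch, sy = theta
    wells = np.arange(1, 6)
    return 100 - 5*np.log(k)*wells**0.3 + 40*rch - 8*sy*wells**0.5

observed = physical_model(np.array([12.0, 0.6, 0.15])) + rng.normal(0, 0.2, 5)

def nrmse(sim):
    return np.sqrt(np.mean((sim - observed)**2)) / (observed.max() - observed.min())

# Training data for the surrogate: a modest number of expensive model runs.
lo, hi = np.array([1, 0.1, 0.05]), np.array([50, 1.0, 0.3])
thetas = rng.uniform(lo, hi, size=(200, 3))
misfits = np.array([nrmse(physical_model(t)) for t in thetas])

# Bagged regression trees stand in here for the BMARS surrogate of the misfit surface.
surrogate = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
surrogate.fit(thetas, misfits)

# Cheap optimization on the surrogate (random search for brevity).
candidates = rng.uniform(lo, hi, size=(50000, 3))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("calibrated parameters:", np.round(best, 3),
      "true NRMSE at optimum: %.4f" % nrmse(physical_model(best)))
```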
Extracting archaeal populations from iron oxidizing systems
NASA Astrophysics Data System (ADS)
Whitmore, L. M.; Hutchison, J.; Chrisler, W.; Jay, Z.; Moran, J.; Inskeep, W.; Kreuzer, H.
2013-12-01
Unique environments in Yellowstone National Park offer exceptional conditions for studying microorganisms in extreme and constrained systems. However, samples from some extreme systems often contain inorganic components that pose complications during microbial and molecular analysis. Several archaeal species are found in acidic, geothermal ferric-oxyhydroxide mats; these species have been shown to adhere to mineral surfaces in flocculated colonies. For optimal microbial analysis, (microscopy, flow cytometry, genomic extractions, proteomic analysis, stable isotope analysis, and others), improved techniques are needed to better facilitate cell detachment and separation from mineral surfaces. As a requirement, these techniques must preserve cell structure while simultaneously minimizing organic carryover to downstream analysis. Several methods have been developed for removing sediments from mixed prokaryotic populations, including ultra-centrifugation, nycodenz gradient, sucrose cushions, and cell straining. In this study we conduct a comparative analysis of mechanisms used to detach archaeal cell populations from the mineral interface. Specifically, we evaluated mechanical and chemical approaches for cell separation and homogenization. Methods were compared using confocal microscopy, flow cytometry analyses, and real-time PCR detection. The methodology and approaches identified will be used to optimize biomass collection from environmental specimens or isolates grown with solid phases.
A Pathological Brain Detection System based on Extreme Learning Machine Optimized by Bat Algorithm.
Lu, Siyuan; Qiu, Xin; Shi, Jianping; Li, Na; Lu, Zhi-Hai; Chen, Peng; Yang, Meng-Meng; Liu, Fang-Yuan; Jia, Wen-Juan; Zhang, Yudong
2017-01-01
It is beneficial to classify brain images as healthy or pathological automatically, because 3D brain images generate so much information that manual analysis is time consuming and tedious. Among various 3D brain imaging techniques, magnetic resonance (MR) imaging is the most suitable for the brain, and it is now widely applied in hospitals, because it is helpful in four ways: diagnosis, prognosis, pre-surgical, and post-surgical procedures. There are automatic detection methods; however, they suffer from low accuracy. Therefore, we proposed a novel approach which employed 2D discrete wavelet transform (DWT) and calculated the entropies of the subbands as features. Then, a bat algorithm optimized extreme learning machine (BA-ELM) was trained to identify pathological brains from healthy controls. A 10x10-fold cross validation was performed to evaluate the out-of-sample performance. The method achieved a sensitivity of 99.04%, a specificity of 93.89%, and an overall accuracy of 98.33% over 132 MR brain images. The experimental results suggest that the proposed approach is accurate and robust in pathological brain detection.
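A minimal sketch of the feature-extraction stage, 2D DWT followed by subband entropies, is shown below using PyWavelets; random arrays stand in for MR slices and a logistic regression stands in for the bat-algorithm-optimized ELM.

```python
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)

def subband_entropies(img, wavelet='db4', level=2):
    """2D DWT of a brain slice; return the Shannon entropy of each subband."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for b in bands:
        p, _ = np.histogram(b, bins=64, density=True)
        p = p[p > 0]
        p = p / p.sum()
        feats.append(-(p * np.log2(p)).sum())
    return np.array(feats)

# Stand-in data: random 64x64 "MR slices" with two synthetic classes.
imgs = rng.normal(size=(120, 64, 64))
imgs[60:] += rng.normal(0, 0.5, size=(60, 64, 64)).cumsum(axis=1)  # add structure to class 1
labels = np.r_[np.zeros(60), np.ones(60)]

X = np.array([subband_entropies(im) for im in imgs])
clf = LogisticRegression(max_iter=1000)      # stand-in for the BA-optimized ELM
print("cross-validated accuracy: %.3f" % cross_val_score(clf, X, labels, cv=10).mean())
```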
Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.
Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar
2017-03-01
We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data that is revealed to itself. On the other hand, the aim of the multiagent system is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for the applications involving big data.
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
The existing sparse representation-based visual trackers mostly suffer from being time consuming and lacking robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to remove most of the candidate samples related to background content efficiently, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities of being a target) of samples on the ELM classification function are used to construct a new manifold learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be evaluated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
A systematic optimization for graphene-based supercapacitors
NASA Astrophysics Data System (ADS)
Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho
2017-08-01
Increasing the energy-storage density of supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing the mixing ratio that maximizes the performance of each candidate material have been insufficient, which hinders progress in the technology. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of the three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be rGO:AB:PVDF = 0.95:0.00:0.05. The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.
NASA Technical Reports Server (NTRS)
Chuang, C.-H.; Goodson, Troy D.; Ledsinger, Laura A.
1995-01-01
This report describes current work in the numerical computation of multiple burn, fuel-optimal orbit transfers and presents an analysis of the second variation for extremal multiple burn orbital transfers as well as a discussion of a guidance scheme which may be implemented for such transfers. The discussion of numerical computation focuses on the use of multivariate interpolation to aid the computation in the numerical optimization. The second variation analysis includes the development of the conditions for the examination of both fixed and free final time transfers. Evaluations for fixed final time are presented for extremal one, two, and three burn solutions of the first variation. The free final time problem is considered for an extremal two burn solution. In addition, corresponding changes of the second variation formulation over thrust arcs and coast arcs are included. The guidance scheme discussed is an implicit scheme which implements a neighboring optimal feedback guidance strategy to calculate both thrust direction and thrust on-off times.
Using Extreme Groups Strategy When Measures Are Not Normally Distributed.
ERIC Educational Resources Information Center
Fowler, Robert L.
1992-01-01
A Monte Carlo simulation explored how to optimize power in the extreme groups strategy when sampling from nonnormal distributions. Results show that the optimum percent for the extreme group selection was approximately the same for all population shapes, except the extremely platykurtic (uniform) distribution. (SLD)
Takahashi, Fumihiro; Morita, Satoshi
2018-02-08
Phase II clinical trials are conducted to determine the optimal dose of the study drug for use in Phase III clinical trials while also balancing efficacy and safety. In conducting these trials, it may be important to consider subpopulations of patients grouped by background factors such as drug metabolism and kidney and liver function. Determining the optimal dose, as well as maximizing the effectiveness of the study drug by analyzing patient subpopulations, requires a complex decision-making process. In extreme cases, drug development has to be terminated due to inadequate efficacy or severe toxicity. Such a decision may be based on a particular subpopulation. We propose a Bayesian utility approach (BUART) to randomized Phase II clinical trials which uses a first-order bivariate normal dynamic linear model for efficacy and safety in order to determine the optimal dose and study population in a subsequent Phase III clinical trial. We carried out a simulation study under a wide range of clinical scenarios to evaluate the performance of the proposed method in comparison with a conventional method separately analyzing efficacy and safety in each patient population. The proposed method showed more favorable operating characteristics in determining the optimal population and dose.
Lang, Catherine E.; Bland, Marghuretta D.; Bailey, Ryan R.; Schaefer, Sydney Y.; Birkenmeier, Rebecca L.
2012-01-01
The purpose of this review is to provide a comprehensive approach for assessing the upper extremity (UE) after stroke. First, common upper extremity impairments and how to assess them are briefly discussed. While multiple UE impairments are typically present after stroke, the severity of one impairment, paresis, is the primary determinant of UE functional loss. Second, UE function is operationally defined and a number of clinical measures are discussed. It is important to consider how impairment and loss of function affect UE activity outside of the clinical environment. Thus, this review also identifies accelerometry as an objective method for assessing UE activity in daily life. Finally, the role that each of these levels of assessment should play in clinical decision making is discussed in order to optimize the provision of stroke rehabilitation services. PMID:22975740
Extremal entanglement witnesses
NASA Astrophysics Data System (ADS)
Hansen, Leif Ove; Hauge, Andreas; Myrheim, Jan; Sollid, Per Øyvind
2015-02-01
We present a study of extremal entanglement witnesses on a bipartite composite quantum system. We define the cone of witnesses as the dual of the set of separable density matrices, thus TrΩρ≥0 when Ω is a witness and ρ is a pure product state, ρ=ψψ† with ψ=ϕ⊗χ. The set of witnesses of unit trace is a compact convex set, uniquely defined by its extremal points. The expectation value f(ϕ,χ)=TrΩρ as a function of vectors ϕ and χ is a positive semidefinite biquadratic form. Every zero of f(ϕ,χ) imposes strong real-linear constraints on f and Ω. The real and symmetric Hessian matrix at the zero must be positive semidefinite. Its eigenvectors with zero eigenvalue, if such exist, we call Hessian zeros. A zero of f(ϕ,χ) is quadratic if it has no Hessian zeros, otherwise it is quartic. We call a witness quadratic if it has only quadratic zeros, and quartic if it has at least one quartic zero. A main result we prove is that a witness is extremal if and only if no other witness has the same, or a larger, set of zeros and Hessian zeros. A quadratic extremal witness has a minimum number of isolated zeros depending on dimensions. If a witness is not extremal, then the constraints defined by its zeros and Hessian zeros determine all directions in which we may search for witnesses having more zeros or Hessian zeros. A finite number of iterated searches in random directions, by numerical methods, leads to an extremal witness which is nearly always quadratic and has the minimum number of zeros. We discuss briefly some topics related to extremal witnesses, in particular the relation between the facial structures of the dual sets of witnesses and separable states. We discuss the relation between extremality and optimality of witnesses, and a conjecture of separability of the so-called structural physical approximation (SPA) of an optimal witness. Finally, we discuss how to treat the entanglement witnesses on a complex Hilbert space as a subset of the witnesses on a real Hilbert space.
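As a numerical illustration of the defining property (not taken from the paper), the sketch below samples random product vectors and checks that f(ϕ,χ) ≥ 0 for the swap operator on C^3 ⊗ C^3, a standard example of a witness that is not a positive operator.

```python
import numpy as np

d = 3
# swap (flip) operator V: V |phi (x) chi> = |chi (x) phi>
V = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        V[j * d + i, i * d + j] = 1.0

def random_state(dim, rng):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
vals = []
for _ in range(10000):
    phi, chi = random_state(d, rng), random_state(d, rng)
    psi = np.kron(phi, chi)
    vals.append(np.real(psi.conj() @ V @ psi))   # f(phi, chi) = |<phi|chi>|^2 >= 0

print("min f over sampled product states:", min(vals))            # non-negative
print("min eigenvalue of the witness:    ", np.linalg.eigvalsh(V).min())  # -1 < 0
```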
Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo
2017-01-01
This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic, and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called the (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). The method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified, and then a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases: first, a series of extremely powerful reduction techniques, which do not lose the optimal solution, is employed; second, a metaheuristic search identifies the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
Optimization of a Small Scale Linear Reluctance Accelerator
NASA Astrophysics Data System (ADS)
Barrera, Thor; Beard, Robby
2011-11-01
Reluctance accelerators are an extremely promising future method of transportation, but several problems still plague these devices, most prominently low efficiency. The variables involved in overcoming the efficiency problem are many, and it is difficult to correlate how each affects the accelerator. This study examined several variables that present potential challenges in optimizing the efficiency of reluctance accelerators, including coil and projectile design, power supplies, switching, and the elusive gradient inductance problem. Extensive research in these areas, ranging from computational and theoretical work to experiment, has been performed. The findings show that these parameters share significant similarity with transformer design elements; the currently optimized parameters are therefore suggested as a baseline for further research and design. A demonstration of these findings will be offered at the time of presentation.
Implicit methods for efficient musculoskeletal simulation and optimal control
van den Bogert, Antonie J.; Blana, Dimitra; Heinrich, Dieter
2011-01-01
The ordinary differential equations for musculoskeletal dynamics are often numerically stiff and highly nonlinear. Consequently, simulations require small time steps, and optimal control problems are slow to solve and have poor convergence. In this paper, we present an implicit formulation of musculoskeletal dynamics, which leads to new numerical methods for simulation and optimal control, with the expectation that we can mitigate some of these problems. A first-order Rosenbrock method was developed for solving forward dynamic problems using the implicit formulation. It was used to perform real-time dynamic simulation of a complex shoulder-arm system with extreme dynamic stiffness. Simulations had an RMS error of only 0.11 degrees in joint angles when running at real-time speed. For optimal control of musculoskeletal systems, a direct collocation method was developed for implicitly formulated models. The method was applied to predict gait with a prosthetic foot and ankle. Solutions were obtained in well under one hour of computation time and demonstrated how patients may adapt their gait to compensate for limitations of a specific prosthetic limb design. The optimal control method was also applied to a state estimation problem in sports biomechanics, where forces during skiing were estimated from noisy and incomplete kinematic data. Using a full musculoskeletal dynamics model for state estimation had the additional advantage that forward dynamic simulations could be done with the same implicitly formulated model to simulate injuries and perturbation responses. While these methods are powerful and allow solution of previously intractable problems, there are still considerable numerical challenges, especially related to the convergence of gradient-based solvers. PMID:22102983
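For readers unfamiliar with the integrator family, here is a minimal first-order Rosenbrock (linearly implicit Euler) step applied to a generic stiff test problem; the musculoskeletal model itself is far richer, and this sketch only illustrates the numerical idea.

```python
import numpy as np

def rosenbrock_euler(f, jac, y0, t_end, h):
    """Advance y' = f(y) with steps y+ = y + (I - h*J)^{-1} (h*f(y))."""
    y = np.array(y0, dtype=float)
    I = np.eye(len(y))
    t = 0.0
    while t < t_end - 1e-12:
        J = jac(y)
        y = y + np.linalg.solve(I - h * J, h * f(y))
        t += h
    return y

if __name__ == "__main__":
    # stiff linear test problem: y1' = -1000*y1 + y2, y2' = -y2
    A = np.array([[-1000.0, 1.0], [0.0, -1.0]])
    f = lambda y: A @ y
    jac = lambda y: A
    y = rosenbrock_euler(f, jac, [1.0, 1.0], 1.0, 0.01)
    exact = np.array([(998 / 999) * np.exp(-1000.0) + np.exp(-1.0) / 999.0,
                      np.exp(-1.0)])
    print("Rosenbrock-Euler:", y, " exact:", exact)
```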
Optimality of Thermal Expansion Bounds in Three Dimensions
Watts, Seth E.; Tortorelli, Daniel A.
2015-02-20
In this short note, we use topology optimization to design multi-phase isotropic three-dimensional composite materials with extremal combinations of isotropic thermal expansion and bulk modulus. In so doing, we provide evidence that the theoretical bounds for this combination of material properties are optimal. This has been shown in two dimensions, but not heretofore in three dimensions. Finally, we also show that restricting the design space by enforcing material symmetry by construction does not prevent one from obtaining extremal designs.
Interception in three dimensions - An energy formulation
NASA Technical Reports Server (NTRS)
Rajan, N.; Ardema, M. D.
1983-01-01
The problem of minimum-time interception of a target flying in three dimensional space is analyzed with the interceptor aircraft modeled through energy-state approximation. A coordinate transformation that uncouples the interceptor's extremals from the target motion in an open-loop sense is introduced, and the necessary conditions for optimality and the optimal controls are derived. Example extremals are shown.
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
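The "numerical derivatives with complex variables" mentioned above refers to the complex-step technique; a small self-contained comparison with forward differences is sketched below, using a made-up stand-in for the fuel-cell cost function.

```python
import numpy as np

def cost(x):                       # hypothetical smooth cost function (classic test case)
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5
h = 1e-20                          # complex step can be tiny: no subtractive cancellation
d_complex = np.imag(cost(x0 + 1j * h)) / h
d_forward = (cost(x0 + 1e-7) - cost(x0)) / 1e-7

# analytic derivative for reference: f = e^x * s^{-1/2}, s = sin^3 x + cos^3 x
s = np.sin(x0) ** 3 + np.cos(x0) ** 3
ds = 3 * np.sin(x0) ** 2 * np.cos(x0) - 3 * np.cos(x0) ** 2 * np.sin(x0)
d_exact = (np.exp(x0) * s - 0.5 * np.exp(x0) * ds) / s ** 1.5
print("complex-step:", d_complex, " forward-diff:", d_forward, " exact:", d_exact)
```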
Li, Ke; Deb, Kalyanmoy; Zhang, Qingfu; Zhang, Qiang
2017-09-01
Nondominated sorting (NDS), which divides a population into several nondomination levels (NDLs), is a basic step in many evolutionary multiobjective optimization (EMO) algorithms. It has been widely studied in the generational evolution model, where environmental selection is performed after generating a whole population of offspring. However, in the steady-state evolution model, where the population is updated right after the generation of each new candidate, NDS can be extremely time consuming, especially when the number of objectives and the population size become large. In this paper, we propose an efficient NDL update method to reduce the cost of maintaining the NDL structure in steady-state EMO. Instead of performing the NDS from scratch, our method only updates the NDLs of a limited number of solutions by extracting knowledge from the current NDL structure. The NDL update is performed twice at each iteration: once after reproduction and once after environmental selection. Extensive experiments demonstrate that, compared to five other state-of-the-art NDS methods, the proposed method avoids a significant number of unnecessary comparisons, not only on synthetic data sets but also in some real optimization scenarios. Last but not least, we find that the proposed method is also useful for the generational evolution model.
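For reference, the baseline fast nondominated sort that the proposed incremental NDL update is designed to avoid re-running from scratch can be sketched as follows (minimization of all objectives assumed; this is the textbook procedure, not the paper's update method).

```python
import numpy as np

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def nondominated_sort(F):
    """F: (N, M) objective matrix. Returns a list of index lists, one per NDL."""
    N = len(F)
    dominated_by = [[] for _ in range(N)]   # solutions that i dominates
    n_dom = np.zeros(N, dtype=int)          # how many solutions dominate i
    for i in range(N):
        for j in range(N):
            if i != j and dominates(F[i], F[j]):
                dominated_by[i].append(j)
            elif i != j and dominates(F[j], F[i]):
                n_dom[i] += 1
    levels, current = [], [i for i in range(N) if n_dom[i] == 0]
    while current:
        levels.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:
                    nxt.append(j)
        current = nxt
    return levels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    F = rng.random((200, 3))
    levels = nondominated_sort(F)
    print("number of NDLs:", len(levels), " sizes:", [len(l) for l in levels][:5])
```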
A Scalable and Robust Multi-Agent Approach to Distributed Optimization
NASA Technical Reports Server (NTRS)
Tumer, Kagan
2005-01-01
Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper, we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
Design of optimal and ideal 2-D concentrators with the collector immersed in a dielectric tube
NASA Astrophysics Data System (ADS)
Minano, J. C.; Ruiz, J. M.; Luque, A.
1983-12-01
A method is presented for designing ideal and optimal 2-D concentrators when the collector is placed inside a dielectric tube, for the particular case of a bifacial solar collector. The prototype 2-D (cylindrical geometry) concentrator is the compound parabolic concentrator or CPC, and from the beginning of development, it was found by Winston (1978) that filling up the concentrator with a transparent dielectric medium results in a substantial improvement of the optical properties. The method reported here is based on the extreme ray principle of design and avoids the use of differential equations by means of a proper application of Fermat's principle. One advantage of these concentrators is that they allow the size to be small compared with classical CPCs.
Rapid convergence of optimal control in NMR using numerically-constructed toggling frames
NASA Astrophysics Data System (ADS)
Coote, Paul; Anklin, Clemens; Massefski, Walter; Wagner, Gerhard; Arthanari, Haribabu
2017-08-01
We present a numerical method for rapidly solving the Bloch equation for an arbitrary time-varying spin-1/2 Hamiltonian. The method relies on fast, vectorized computations such as summation and quaternion multiplication, rather than slow computations such as matrix exponentiation. A toggling frame is constructed in which the Hamiltonian is time-invariant, and therefore has a simple analytical solution. The key insight is that constructing this frame is faster than solving the system dynamics in the original frame. Rapidly solving the Bloch equations for an arbitrary Hamiltonian is particularly useful in the context of NMR optimal control. Optimal control theory can be used to design pulse shapes for a range of tasks in NMR spectroscopy. However, it requires multiple simulations of the Bloch equations at each stage of the algorithm, and for each relevant set of parameters (e.g. chemical shift frequencies). This is typically time consuming. We demonstrate that by working in an appropriate toggling frame, optimal control pulses can be generated much faster. We present a new alternative to the well-known GRAPE algorithm to continuously update the toggling-frame as the optimal pulse is generated, and demonstrate that this approach is extremely fast. The use and benefit of rapid optimal pulse generation is demonstrated for 19F fragment screening experiments.
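A toy version of the idea, assuming a piecewise-constant field and neglecting relaxation: each step of the Bloch equation dM/dt = γ M × B is a rotation about the instantaneous field, and the rotations are composed with quaternion products rather than matrix exponentials. The cross-check against a fine explicit integration is included only to validate the sign conventions; none of this is the authors' code.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotation_quat(axis, angle):
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def rotate(q, v):
    qv = np.concatenate([[0.0], v])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])
    return qmul(qmul(q, qv), qc)[1:]

gamma, dt = 2 * np.pi, 1e-3
t = np.arange(0, 1.0, dt)
B = np.stack([0.2 * np.cos(5 * t), 0.2 * np.sin(5 * t), np.ones_like(t)], axis=1)

# compose one rotation per step: angle -gamma*|B|*dt about B (matches dM/dt = gamma M x B)
q = np.array([1.0, 0.0, 0.0, 0.0])
for Bk in B:
    q = qmul(rotation_quat(Bk, -gamma * np.linalg.norm(Bk) * dt), q)
M_quat = rotate(q, np.array([0.0, 0.0, 1.0]))

# cross-check with a fine explicit integration of the same ODE
M = np.array([0.0, 0.0, 1.0])
for Bk in B:
    for _ in range(20):                      # sub-steps for accuracy
        M = M + (dt / 20) * gamma * np.cross(M, Bk)
        M = M / np.linalg.norm(M)            # renormalize to keep |M| = 1
print("quaternion propagation:", M_quat)
print("explicit integration:  ", M)
```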
Weighted mining of massive collections of [Formula: see text]-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines (think of computational genomics and observational cosmology) often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
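For orientation, the weighted multiple-testing rules that such optimized weights feed into can be sketched as below (standard weighted Bonferroni and weighted Benjamini-Hochberg procedures with mean-one weights; the weights and data here are synthetic and are not produced by Princessp).

```python
import numpy as np
from scipy.stats import norm

def weighted_bonferroni(p, w, alpha=0.05):
    """FWER control: reject H_i when p_i <= w_i * alpha / m (weights average to one)."""
    m = len(p)
    w = w * m / w.sum()
    return p <= w * alpha / m

def weighted_bh(p, w, alpha=0.05):
    """FDR control via the step-up rule on weighted p-values q_i = p_i / w_i."""
    m = len(p)
    w = w * m / w.sum()
    q = p / w
    order = np.argsort(q)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = np.nonzero(q[order] <= thresh)[0]
    k = passed.max() + 1 if passed.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = 10000
    signal = np.arange(m) < 200                       # 200 true effects
    z = rng.normal(size=m) + 3.0 * signal
    p = norm.sf(z)                                    # one-sided p-values
    prior_weight = np.where(signal, 2.0, 1.0)         # pretend prior knowledge
    print("weighted Bonferroni rejections:", weighted_bonferroni(p, prior_weight).sum())
    print("weighted BH rejections:        ", weighted_bh(p, prior_weight).sum())
```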
Optimal Shakedown of the Thin-Wall Metal Structures Under Strength and Stiffness Constraints
NASA Astrophysics Data System (ADS)
Alawdin, Piotr; Liepa, Liudas
2017-06-01
Classical optimization problems for metal structures are mainly confined to class 1 cross-sections. In practice, however, cross-sections of higher classes are commonly used. In this paper, a new mathematical model is presented for the shakedown optimization problem for metal structures whose elements are designed with class 1 to class 4 cross-sections under variable quasi-static loads. The limited plastic redistribution of forces in structures with thin-walled elements is taken into account. The authors assume elastic-plastic flexural buckling in one plane, without lateral-torsional buckling of members. Design formulae for Methods 1 and 2 for members are analyzed. Stiffness constraints are also incorporated in order to satisfy the serviceability limit state requirements. With the help of mathematical programming theory and extremum principles, the structural optimization algorithm is developed and verified with a numerical experiment on metal plane frames.
Wang, Geng; Xing, Fei; Wei, Minsong; You, Zheng
2017-10-16
Strong stray light severely interferes with the detection of weak and small optical signals and is difficult to suppress. In this paper, a miniaturized baffle with angled vanes is proposed and a rapid optimization model for strong-light elimination is built; the angled vanes suppress stray light better than conventional vanes, and the model optimizes the vane positions efficiently and accurately. Furthermore, a light energy distribution model is built based on the light projection at a specific angle, and light propagation models of the vanes and sidewalls are built based on Lambertian scattering; both serve as the basis of a stray-light calculation method. Moreover, the Monte Carlo method is employed for the Point Source Transmittance (PST) simulation; the simulation result is consistent with the calculation based on our models, and the PST is improved by a factor of 2-3 at small incident angles for the baffle designed with the new method. The simulation result is also verified by laboratory tests, and the new model, with its derived analytical expressions, reduces the simulation time significantly.
Li, Yang; Li, Guoqing; Wang, Zhenhao
2015-01-01
In order to overcome the poor understandability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on an extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by application results on the New England 39-bus power system and a practical power system, the southern power system of Hebei province.
L^1-optimality conditions for the circular restricted three-body problem
NASA Astrophysics Data System (ADS)
Chen, Zheng
2016-11-01
In this paper, the L^1-minimization for the translational motion of a spacecraft in the circular restricted three-body problem (CRTBP) is considered. Necessary conditions are derived by using the Pontryagin Maximum Principle (PMP), revealing the existence of bang-bang and singular controls. Singular extremals are analyzed, recalling the existence of the Fuller phenomenon according to the theories developed in (Marchal in J Optim Theory Appl 11(5):441-486, 1973; Zelikin and Borisov in Theory of Chattering Control with Applications to Astronautics, Robotics, Economics, and Engineering. Birkhäuser, Basel, 1994; in J Math Sci 114(3):1227-1344, 2003). The sufficient optimality conditions for the L^1-minimization problem with fixed endpoints have been developed in (Chen et al. in SIAM J Control Optim 54(3):1245-1265, 2016). In the current paper, we establish second-order conditions for optimal control problems with more general final conditions defined by a smooth submanifold target. In addition, the numerical implementation to check these optimality conditions is given. Finally, approximating the Earth-Moon-Spacecraft system by the CRTBP, an L^1-minimization trajectory for the translational motion of a spacecraft is computed by combining a shooting method with a continuation method in (Caillau et al. in Celest Mech Dyn Astron 114:137-150, 2012; Caillau and Daoud in SIAM J Control Optim 50(6):3178-3202, 2012). The local optimality of the computed trajectory is asserted thanks to the second-order optimality conditions developed.
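For reference, a standard statement of the planar L^1-minimization problem in the CRTBP rotating frame (nondimensional units; mass variation and the target-submanifold boundary conditions are omitted here, and this form is not copied from the paper) is:

```latex
% Planar CRTBP in the rotating frame (nondimensional units), L^1 control cost
\begin{aligned}
\min_{u(\cdot)}\ J[u] &= \int_{0}^{t_f} \lVert u(t)\rVert \,\mathrm{d}t,
  \qquad \lVert u(t)\rVert \le u_{\max},\\
\ddot{x} - 2\dot{y} &= \frac{\partial \Omega}{\partial x} + u_x,\qquad
\ddot{y} + 2\dot{x} = \frac{\partial \Omega}{\partial y} + u_y,\\
\Omega(x,y) &= \tfrac12\,(x^2+y^2) + \frac{1-\mu}{r_1} + \frac{\mu}{r_2},\qquad
r_1 = \sqrt{(x+\mu)^2+y^2},\ \ r_2 = \sqrt{(x-1+\mu)^2+y^2}.
\end{aligned}
```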
Activity Monitors Help Users Get Optimum Sun Exposure
NASA Technical Reports Server (NTRS)
2015-01-01
Goddard scientist Shahid Aslam was investigating alternative methods for measuring extreme ultraviolet radiation on the Solar Dynamics Observatory when he hit upon semiconductors that measured wavelengths pertinent to human health. As a result, he and a partner established College Park, Maryland-based Sensor Sensor LLC and developed UVA+B SunFriend, a wrist monitor that lets people know when they've received their optimal amounts of sunlight for the day.
Johnson, Mitchell E; Landers, James P
2004-11-01
Laser-induced fluorescence is an extremely sensitive method for detection in chemical separations. In addition, it is well suited to detection in small volumes and as such is widely used for capillary electrophoresis and microchip-based separations. This review explores the detailed instrumental conditions required for sub-zeptomole, sub-picomolar detection limits. The key to achieving the best sensitivity is to use an excitation and emission volume that is matched to the separation system and that, simultaneously, keeps scattering and luminescence background to a minimum. We discuss how this is accomplished with confocal detection, 90-degree on-capillary detection, and sheath-flow detection. Each of these methods has its advantages and disadvantages, but all can be used to produce extremely sensitive detectors for capillary- or microchip-based separations. Analysis of these capabilities allows prediction of the optimal means of achieving ultrasensitive detection on microchips.
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as meteoroid impacts and radiation. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design space. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy in which the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation covering up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of the tool as well as a technique for finding the optimal GA search parameters.
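A highly simplified sketch of the GA mechanics is given below; the individuals are layer thicknesses for three hypothetical materials, and the densities, shielding factors, and required protection level are invented numbers used only to show how a constrained objective can drive the search.

```python
import numpy as np

rng = np.random.default_rng(0)
density   = np.array([2.7, 0.97, 1.4])    # hypothetical: Al, polyethylene, composite (g/cm^3)
shielding = np.array([1.0, 2.2, 1.6])     # hypothetical relative shielding per g/cm^2
P_REQ = 12.0                              # required protection level (arbitrary units)

def fitness(thickness):                    # thickness in cm, one entry per material
    areal = thickness * density                       # areal density per layer (g/cm^2)
    protection = (areal * shielding).sum()
    # minimize up-mass while meeting the protection requirement (penalty form)
    return -areal.sum() - 50.0 * max(0.0, P_REQ - protection)

def evolve(pop_size=60, n_gen=100, n_layers=3):
    pop = rng.uniform(0.0, 10.0, size=(pop_size, n_layers))
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        # binary tournament selection
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover + Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, mates)
        children += rng.normal(scale=0.3, size=children.shape)
        pop = np.clip(children, 0.0, 10.0)
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("best layer thicknesses (cm):", np.round(evolve(), 2))
```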
Extrinsic and intrinsic index finger muscle attachments in an OpenSim upper-extremity model.
Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L
2015-04-01
Musculoskeletal models allow estimation of muscle function during complex tasks. We used objective methods to determine possible attachment locations for index finger muscles in an OpenSim upper-extremity model. Data-driven optimization algorithms, Simulated Annealing and Hooke-Jeeves, estimated tendon locations crossing the metacarpophalangeal (MCP), proximal interphalangeal (PIP) and distal interphalangeal (DIP) joints by minimizing the difference between model-estimated and experimentally measured moment arms. Sensitivity analysis revealed that multiple sets of muscle attachments with similar optimized moment arms are possible, requiring additional assumptions or data to select a single set of values. The smoothest muscle paths were assumed to be biologically reasonable. Estimated tendon attachments resulted in variance accounted for (VAF) between calculated moment arms and measured values of 78% for flexion/extension and 81% for ab/adduction at the MCP joint. VAF averaged 67% at the PIP joint and 54% at the DIP joint. VAF values at PIP and DIP joints partially reflected the constant moment arms reported for muscles about these joints. However, all moment arm values found through optimization were non-linear and non-constant. Relationships between moment arms and joint angles were best described with quadratic equations for tendons at the PIP and DIP joints.
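The fitting idea can be sketched with a toy planar tendon model and scipy's dual_annealing (a simulated-annealing-style global optimizer); the via point, bounds, and moment-arm geometry below are assumptions, not the OpenSim model, and, as the study notes, the recovered attachment need not be unique.

```python
import numpy as np
from scipy.optimize import dual_annealing

P = np.array([-3.0, 1.0])                 # assumed fixed via point on the proximal segment (cm)

def moment_arm(q, theta):
    """Perpendicular distance from the joint centre (origin) to the tendon line."""
    c, s = np.cos(theta), np.sin(theta)
    q_world = np.array([c * q[0] - s * q[1], s * q[0] + c * q[1]])  # attachment rotates with joint
    d = q_world - P
    return abs(P[0] * d[1] - P[1] * d[0]) / np.linalg.norm(d)

angles = np.deg2rad(np.linspace(0, 90, 10))
q_true = np.array([1.2, 0.4])
measured = np.array([moment_arm(q_true, th) for th in angles])
measured += np.random.default_rng(0).normal(scale=0.02, size=measured.size)  # noisy "experiment"

def objective(q):
    model = np.array([moment_arm(q, th) for th in angles])
    return np.sum((model - measured) ** 2)

res = dual_annealing(objective, bounds=[(-3, 3), (-3, 3)], seed=1)
print("recovered attachment:", np.round(res.x, 3), " true:", q_true)
```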
NASA Astrophysics Data System (ADS)
Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.
2017-12-01
To ensure the structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best available science has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these methods to define an updated and broadly acceptable state of the practice for the evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal of better understanding and characterizing extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and an improved understanding of the limitations of older estimation methods and assumptions. Decision making in the midst of uncertainty is a major part of this study. We will discuss how the three approaches may be used in concert to manage risk and enhance resiliency in the midst of uncertainty. Finally, the presentation will also address the implications of including climate change in future extreme precipitation estimation studies.
Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package
NASA Astrophysics Data System (ADS)
Cheng, L.; AghaKouchak, A.; Gilleland, E.
2013-12-01
Numerous studies show that climatic extremes increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for the estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for the analysis of climate extremes under both stationary and nonstationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) package provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters with a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and which offers simplicity, speed of calculation, and better convergence than conventional MCMC. NEVA also provides confidence intervals and uncertainty bounds for the estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization, and visualization, explicitly designed to facilitate the analysis of extremes in the geosciences. The generalized input and output files of this software package make it attractive for users across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
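As a stationary-only illustration of the return-level calculation that NEVA generalizes (NEVA itself is a MATLAB package using Bayesian DE-MC sampling), a maximum-likelihood GEV fit with scipy might look like this, on synthetic annual maxima:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_maxima = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=60, random_state=rng)

c, loc, scale = genextreme.fit(annual_maxima)          # ML fit of the GEV parameters
for T in (10, 50, 100):
    level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)   # T-year return level
    print(f"{T:4d}-year return level: {level:6.1f}")
```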
Extremity War Injuries VIII: sequelae of combat injuries.
Andersen, Romney C; D'Alleyrand, Jean-Claude G; Swiontkowski, Marc F; Ficke, James R
2014-01-01
The 2013 Extremity War Injury symposium focused on the sequelae of combat-related injuries, including posttraumatic osteoarthritis, amputations, and infections. Much remains to be learned about posttraumatic arthritis, and there are few circumstances in which a definitive arthroplasty should be performed in an acutely injured and open joint. Although the last decade has seen tremendous advances in the treatment of combat upper extremity injuries, many questions remain unanswered, and continued research focusing on improving the reconstruction of large segmental defects remains critical. Discussion of infection centered on the need for novel methods to reduce the bacterial load following the initial débridement procedures. Novel methods of delivering antimicrobial therapy and anti-inflammatory medications directly to the wound were discussed, as well as the need for near real-time assessment of bacterial and fungal burden, further means of preventing and treating biofilm formation, and the importance of animal models for testing such therapies. Moderators and lecturers of focus groups noted the continuing need for improved prehospital care in the management of junctional injuries, identified optimal strategies for surgical repair and/or reconstruction of the ligaments in multiligamentous injuries, and noted the need to mitigate bone mineral density loss following amputation and/or limb salvage, as well as the necessity of developing better methods of anticipating and managing heterotopic ossification.
Inada, Satoshi; Masuda, Takanori; Maruyama, Naoya; Yamashita, Yukari; Sato, Tomoyasu; Imada, Naoyuki
2016-01-01
To evaluate image quality and the effect on radiation dose reduction of the settings of a computed tomography automatic exposure control system (CT-AEC) in computed tomographic angiography (CTA) of the lower extremity arteries. Two CT-AEC settings were compared: a conventional method and a contrast-to-noise ratio (CNR) method. The conventional method used a noise index (NI) of 14 and a tube current range of 10-750 mA. The CNR method used an NI of 18, a minimum tube current of (X+Y)/2 mA (where X and Y are the maximum x- and y-axis tube current values for the leg at NI 14), and a maximum tube current of 750 mA. Image quality was evaluated by CNR, and radiation dose reduction was evaluated by dose-length product (DLP). With the conventional method, mean CNRs for the pelvis, femur, and leg were 19.9±4.8, 20.4±5.4, and 16.2±4.3, respectively; there were significant differences between the CNRs of the pelvis and leg (P<0.001) and between the femur and leg (P<0.001). With the CNR method, mean CNRs for the pelvis, femur, and leg were 15.2±3.3, 15.3±3.2, and 15.3±3.1, respectively, with no significant difference among the pelvis, femur, and leg (P=0.973). Mean DLPs were 1457±434 mGy·cm with the conventional method and 1049±434 mGy·cm with the CNR method, a significant difference (P<0.001). The CNR method gave equal CNRs for the pelvis, femur, and leg and was beneficial for radiation dose reduction in CTA of the lower extremity arteries.
Postpartum contraceptive use among women with a recent preterm birth.
Robbins, Cheryl L; Farr, Sherry L; Zapata, Lauren B; D'Angelo, Denise V; Callaghan, William M
2015-10-01
The objective of the study was to evaluate the associations between postpartum contraception and having a recent preterm birth. Population-based data from the Pregnancy Risk Assessment Monitoring System in 9 states were used to estimate the postpartum use of highly or moderately effective contraception (sterilization, intrauterine device, implants, shots, pills, patch, and ring) and user-independent contraception (sterilization, implants, and intrauterine device) among women with recent live births (2009-2011). We assessed the differences in contraception by gestational age (≤27, 28-33, or 34-36 weeks vs term [≥37 weeks]) and modeled the associations using multivariable logistic regression with weighted data. A higher percentage of women with recent extreme preterm birth (≤27 weeks) reported using no postpartum method (31%) compared with all other women (15-16%). Women delivering extreme preterm infants had a decreased odds of using highly or moderately effective methods (adjusted odds ratio, 0.5; 95% confidence interval, 0.4-0.6) and user-independent methods (adjusted odds ratio, 0.5; 95% confidence interval, 0.4-0.7) compared with women having term births. Wanting to get pregnant was more frequently reported as a reason for contraceptive nonuse by women with an extreme preterm birth overall (45%) compared with all other women (15-18%, P < .0001). Infant death occurred in 41% of extreme preterm births and more than half of these mothers (54%) reported wanting to become pregnant as the reason for contraceptive nonuse. During contraceptive counseling with women who had recent preterm births, providers should address an optimal pregnancy interval and consider that women with recent extreme preterm birth, particularly those whose infants died, may not use contraception because they want to get pregnant.
Kara, Fatih; Yucel, Ismail
2015-09-01
This study investigates the impact of climate change on mean and extreme flows under current and future climate conditions in the Omerli Basin of Istanbul, Turkey. Output from 15 regional climate models from the EU-ENSEMBLES project and a downscaling method based on local implications from geophysical variables were used for the comparative analyses. An automated calibration algorithm is used to optimize the parameters of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model for the study catchment using observed daily temperature and precipitation. The calibrated HBV model was then used to simulate daily flows using precipitation and temperature data from the climate models, with and without the downscaling method, for the reference (1960-1990) and scenario (2071-2100) periods. Flood indices were derived from the daily flows, and their changes throughout the four seasons and the year were evaluated by comparing the values derived from simulations corresponding to the current and future climate. All climate models strongly underestimate precipitation, while downscaling improves this underestimation, particularly for extreme events. Depending on the precipitation input from the climate models with and without downscaling, the HBV model also significantly underestimates daily mean and extreme flows in all seasons. However, this underestimation is substantially improved for all seasons, especially spring and winter, through the use of downscaled inputs. Changes in extreme flows from the reference to the future period increased for winter and spring and decreased for fall and summer. These changes were more pronounced with downscaled inputs. Compared with the current climate, higher flow magnitudes for given return periods will be experienced in the future; hence, in the planning of the Omerli reservoir, the effective storage and water use should be sustained.
SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models
NASA Astrophysics Data System (ADS)
Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.
2013-12-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.
Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L
2014-01-01
Objectives: The purpose of the CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as to optimize the technique of OA. Background: Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments, such as OA, with unique versatility in modifying calcific lesions above and below the knee. Methods: Patients (3135) undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an "all-comers" basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Results: Treatment with OA reduced pre-procedural stenosis from an average of 88% to 35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications, which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. Conclusion: The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. PMID:23737432
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ), that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations, is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10-petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
NASA Astrophysics Data System (ADS)
Li, Peng-fei; Zhou, Xiao-jun
2015-12-01
Subsea tunnel lining structures should be designed to sustain the loads transmitted from the surrounding ground and groundwater during excavation. Extremely high pore-water pressure reduces the effective strength of the country rock that surrounds a tunnel, thereby lowering the arching effect and stratum stability of the structure. In this paper, the mechanical behavior and shape optimization of the lining structure for the Xiang'an tunnel excavated in weathered slots are examined. Eight cross sections with different geometric parameters are adopted to study the mechanical behavior and shape optimization of the lining structure. The hyperstatic reaction method is applied within the finite element analysis software ANSYS. The mechanical behavior of the lining structure is strongly affected by the geometric parameters of the cross-sectional shape. The minimum safety factor of the lining structure elements is set as the objective function, and the tunnel shape that maximizes this minimum safety factor is identified. The minimum safety factor increases significantly after optimization. The optimized cross section significantly improves the mechanical characteristics of the lining structure and effectively reduces its deformation. The optimization process and program are formulated parametrically so that the method can be applied to the optimization design of other similar structures. The results obtained from this study enhance our understanding of the mechanical behavior of lining structures for subsea tunnels and are also beneficial to the optimal design of lining structures in general.
NASA Astrophysics Data System (ADS)
Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan
2016-12-01
In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, could help engineers in designing a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure that can generalize complex nonlinear processes. In in-situ bioremediation management, a physically based model is used for the simulation, and the simulated data is utilized by the optimization model to minimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is extremely tedious and time consuming, so a surrogate simulator that reduces the computational burden is needed. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. ELM was selected after a comparative analysis with Artificial Neural Networks (ANN) and Support Vector Machines (SVM), which were used successfully in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
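A conceptual sketch of the surrogate-plus-swarm loop follows; the "simulator" is a toy cost surface standing in for BIOPLUME III, and the ELM size, PSO coefficients, and bounds are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(x):                 # toy stand-in for the physically based simulator
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.05 * np.sin(10 * x).sum(axis=-1)

# --- train an ELM surrogate on a modest number of simulator runs ------------
X = rng.uniform(0, 1, size=(200, 2))
y = simulator(X)
W, b = rng.normal(size=(2, 60)), rng.normal(size=60)
H = np.tanh(X @ W + b)
beta = np.linalg.pinv(H) @ y
surrogate = lambda x: np.tanh(x @ W + b) @ beta

# --- minimal particle swarm optimization over the surrogate -----------------
n, dim, iters = 30, 2, 200
pos = rng.uniform(0, 1, size=(n, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), surrogate(pos)
gbest = pbest[np.argmin(pbest_val)]
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = surrogate(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("surrogate optimum:", gbest, " true cost there:", simulator(gbest))
```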
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for final-drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, the wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Probability classifiers are constructed from single-failure samples based on a paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate an optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, samples containing both single and simultaneous failure modes are used together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
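The threshold-generation step can be illustrated with a small grid search that picks the probability cut-off maximizing the micro-F1 score on a labelled, possibly multi-label validation set; the data below are synthetic, and the use of a single global threshold is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_modes = 300, 4
truth = rng.random((n, n_modes)) < 0.2                                # true failure labels
probs = np.clip(truth * 0.7 + rng.random((n, n_modes)) * 0.5, 0, 1)   # classifier outputs

def micro_f1(pred, truth):
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

best = max(((micro_f1(probs >= t, truth), t) for t in np.linspace(0.05, 0.95, 19)))
print(f"best threshold {best[1]:.2f} gives micro-F1 {best[0]:.3f}")
```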
Kogovšek, P; Hodgetts, J; Hall, J; Prezelj, N; Nikolić, P; Mehle, N; Lenarčič, R; Rotter, A; Dickinson, M; Boonham, N; Dermastia, M; Ravnikar, M
2015-01-01
In Europe the most devastating phytoplasma associated with grapevine yellows (GY) diseases is a quarantine pest, flavescence dorée (FDp), from the 16SrV taxonomic group. The on-site detection of FDp with an affordable device would contribute to faster and more efficient decisions on the control measures for FDp. Therefore, a real-time isothermal LAMP assay for detection of FDp was validated according to the EPPO standards and MIQE guidelines. The LAMP assay was shown to be specific and extremely sensitive, because it detected FDp in all leaf samples that were determined to be FDp infected using quantitative real-time PCR. The whole procedure of sample preparation and testing was designed and optimized for on-site detection and can be completed in one hour. The homogenization procedure of the grapevine samples (leaf vein, flower or berry) was optimized to allow direct testing of crude homogenates with the LAMP assay, without the need for DNA extraction, and was shown to be extremely sensitive. PMID:26146413
Xu, Tianhong; Cao, Juncheng; Montrosset, Ivo
2015-01-01
The dynamical regimes and performance optimization of quantum dot monolithic passively mode-locked lasers with extremely low repetition rate are investigated using the numerical method. A modified multisection delayed differential equation model is proposed to accomplish simulations of both two-section and three-section passively mode-locked lasers with long cavity. According to the numerical simulations, it is shown that fundamental and harmonic mode-locking regimes can be multistable over a wide current range. These dynamic regimes are studied, and the reasons for their existence are explained. In addition, we demonstrate that fundamental pulses with higher peak power can be achieved when the laser is designed to work in a region with smaller differential gain.
Development of Decision Analysis Specifically for Arctic Offshore Drilling Islands.
1985-12-01
The decision analysis method will give tradeoffs between costs and design wave height, production and depth of water for an oil platform, etc...optimizing the type of platform that is best suited for a particular site has become an extremely difficult decision. Over fifty-one different types of...drilling and production platforms have been identified for the Arctic environment, with new concepts being developed every year, Boslov et al (198j
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1992 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model. However, under extremely unfavorable market conditions, results indicate that f(4) can be a more valid measure of risk than volatility.
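For a sampled return distribution, the extreme-risk measure described here, a conditional expectation over the lower tail, reduces to averaging returns that fall below a partitioning point. The sketch below is a minimal illustration with a synthetic return series and an assumed 5% partitioning point, not the PMRM formulation itself.

```python
import numpy as np

def f4_lower_tail(returns, alpha=0.05):
    """Conditional expectation of portfolio return given that it falls in the
    lower alpha-tail -- an f(4)-style extreme-risk measure (partition point assumed)."""
    cutoff = np.quantile(returns, alpha)        # partition between "normal" and extreme losses
    tail = returns[returns <= cutoff]
    return tail.mean()

returns = np.random.normal(0.0005, 0.01, size=5000)   # simulated daily portfolio returns
print(f4_lower_tail(returns, alpha=0.05))
```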
NASA Astrophysics Data System (ADS)
Sánchez, H. T.; Estrems, M.; Franco, P.; Faura, F.
2009-11-01
In recent years, the market of heat exchangers is increasingly demanding new products in short cycle time, which means that both the design and manufacturing stages must be extremely reduced. The design stage can be reduced by means of CAD-based parametric design techniques. The methodology presented in this proceeding is based on the optimized control of geometric parameters of a service chamber of a heat exchanger by means of the Application Programming Interface (API) provided by the Solidworks CAD package. Using this implementation, a set of different design configurations of the service chamber made of stainless steel AISI 316 are studied by means of the FE method. As a result of this study, a set of knowledge rules based on the fatigue behaviour are constructed and integrated into the design optimization process.
Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2006-01-01
Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve the computational efficiency and to expand the problem scale is of great significance to the research of the microstructure of the material. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of the three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model obviously improves the computational efficiency of the three-dimensional phase-field model, which is 13 times that of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization has the better performance, which is 1.7 times that of the basic multi-GPU model, when 21 GPUs are used.
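The overlap-of-communication-and-computation scheme can be illustrated, in spirit, with a non-blocking halo exchange that is posted before the interior of the local domain is updated. The sketch below uses mpi4py with a NumPy stencil standing in for the CUDA kernel; the slab decomposition, buffer sizes, and neighbor layout are assumptions, not the authors' code.

```python
# Minimal mpi4py sketch of overlapping halo exchange with interior computation.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
up   = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

phi = np.random.rand(130, 128)                    # local slab with one ghost row at top and bottom

send_top, send_bot = phi[1].copy(), phi[-2].copy()
recv_top, recv_bot = np.empty(128), np.empty(128)
reqs = [comm.Isend(send_top, dest=up),   comm.Irecv(recv_top, source=up),
        comm.Isend(send_bot, dest=down), comm.Irecv(recv_bot, source=down)]

# Interior update proceeds while the halo messages are in flight
# (a CUDA kernel would run here in the real multi-GPU code).
phi[2:-2, 1:-1] = 0.25 * (phi[1:-3, 1:-1] + phi[3:-1, 1:-1] +
                          phi[2:-2, :-2] + phi[2:-2, 2:])

MPI.Request.Waitall(reqs)                         # complete the halo exchange
if up   != MPI.PROC_NULL: phi[0]  = recv_top      # install ghost rows received from neighbors
if down != MPI.PROC_NULL: phi[-1] = recv_bot
# The two boundary rows would be updated next, now that fresh ghost data are in place.
```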
Quantifying the consequences of changing hydroclimatic extremes on protection levels for the Rhine
NASA Astrophysics Data System (ADS)
Sperna Weiland, Frederiek; Hegnauer, Mark; Buiteveld, Hendrik; Lammersen, Rita; van den Boogaard, Henk; Beersma, Jules
2017-04-01
The Dutch method for quantifying the magnitude and frequency of occurrence of discharge extremes in the Rhine basin, and the potential influence of climate change hereon, are presented. In the Netherlands, flood protection design requires estimates of discharge extremes for return periods of 1000 up to 100,000 years. Observed discharge records are too short to derive such extreme return discharges, therefore extreme value assessment is based on very long synthetic discharge time-series generated with the Generator of Rainfall And Discharge Extremes (GRADE). The GRADE instrument consists of (1) a stochastic weather generator based on time series resampling of historical rainfall and temperature, (2) a hydrological model optimized following the GLUE methodology, and (3) a hydrodynamic model to simulate the propagation of flood waves based on the generated hydrological time-series. To assess the potential influence of climate change, the four KNMI'14 climate scenarios are applied. These four scenarios represent a large part of the uncertainty provided by the GCMs used for the IPCC 5th assessment report (the CMIP5 GCM simulations under different climate forcings) and are for this purpose tailored to the Rhine and Meuse river basins. To derive the probability distributions of extreme discharges under climate change, the historical synthetic rainfall and temperature series simulated with the weather generator are transformed to the future following the KNMI'14 scenarios. For this transformation the Advanced Delta Change method, which allows the changes in the extremes to differ from those in the means, is used. Subsequently the hydrological model is forced with the historical and future (i.e., transformed) synthetic time-series, after which the propagation of the flood waves is simulated with the hydrodynamic model to obtain the extreme discharge statistics both for current and future climate conditions. The study shows that both for 2050 and 2085, increases in discharge extremes for the river Rhine at Lobith are projected by all four KNMI'14 climate scenarios. This poses increased requirements for flood protection design in order to prepare for changing climate conditions.
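Reading design return levels off a long (synthetic) series of annual discharge maxima is typically done by fitting an extreme value distribution and evaluating its upper quantiles. The sketch below fits a GEV with SciPy to a synthetic stand-in series; it illustrates the return-level step only, not the GRADE instrument or the Advanced Delta Change transformation.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual maximum discharges (m^3/s) standing in for a GRADE-style long series
annual_max = np.random.gumbel(loc=6000, scale=1200, size=50000)

c, loc, scale = genextreme.fit(annual_max)        # fit a GEV distribution to the annual maxima

for T in (1000, 10000, 100000):                   # design return periods used for Dutch flood protection
    level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>6}-year return level: {level:8.0f} m^3/s")
```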
NASA Technical Reports Server (NTRS)
Engberg, Robert; Ooi, Teng K.
2004-01-01
New methods for structural health monitoring are being assessed, especially in high-performance, extreme environment, safety-critical applications. One such application is for composite cryogenic fuel tanks. The work presented here attempts to characterize and investigate the feasibility of using imbedded piezoelectric sensors to detect cracks and delaminations under cryogenic and ambient conditions. A variety of damage detection methods and different sensors are employed in the different composite plate samples to aid in determining an optimal algorithm, sensor placement strategy, and type of imbedded sensor to use. Variations of frequency, impedance measurements, and pulse echoing techniques of the sensors are employed and compared. Statistical and analytic techniques are then used to determine which method is most desirable for a specific type of damage. These results are furthermore compared with previous work using externally mounted sensors. Results and optimized methods from this work can then be incorporated into a larger composite structure to validate and assess its structural health. This could prove to be important in the development and qualification of any 2nd-generation reusable launch vehicle using composites as a structural element.
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching problems to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to approach this critical computational problem. Besides, to get a robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its sub-modular property ensures a sub-problem optimality which is easy to exploit in parallel; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; local propagation and randomization can easily generate the initial solution without using external methods. Middlebury experiments show that our method can achieve higher accuracy than other MRF-based algorithms.
Modelling Schumann resonances from ELF measurements using non-linear optimization methods
NASA Astrophysics Data System (ADS)
Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo
2017-04-01
Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station located in the national park of Sierra Nevada. The first three modes, contained in the frequency band between 6 and 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of nonlinear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as the optimization process, is the procedure followed to obtain Schumann resonances from the natural electromagnetic noise. The optimization methods that have been analysed are: Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The functions that the different methods fit to the data are three Lorentzian curves plus a straight line. Gaussian curves have also been considered. The conclusions of this study are outlined in the following paragraphs: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges less often and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.
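The fit itself, three Lorentzians plus a linear baseline adjusted to an ELF amplitude spectrum, can be reproduced with a standard Levenberg-Marquardt least-squares routine. The sketch below uses SciPy on a synthetic spectrum; the parameter values and noise level are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, f):
    """Three Lorentzian resonances plus a linear baseline (the fit function described above)."""
    y = p[9] + p[10] * f
    for A, f0, w in p[:9].reshape(3, 3):
        y = y + A / (1.0 + ((f - f0) / w) ** 2)
    return y

f = np.linspace(6, 25, 400)                                   # ELF band containing the first three modes
p_true = np.array([1.0, 7.8, 1.5, 0.7, 14.1, 1.8, 0.5, 20.3, 2.0, 0.1, 0.0])
spectrum = model(p_true, f) + 0.05 * np.random.randn(f.size)  # synthetic amplitude spectrum

p0 = np.array([1, 8, 2, 1, 14, 2, 1, 20, 2, 0, 0], dtype=float)          # rough initial guess
fit = least_squares(lambda p: model(p, f) - spectrum, p0, method="lm")   # Levenberg-Marquardt
print(fit.x[:9].reshape(3, 3))                                # (amplitude, centre frequency, width) per mode
```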
Manhard, Mary Kate; Harkins, Kevin D; Gochberg, Daniel F; Nyman, Jeffry S; Does, Mark D
2017-03-01
MRI of cortical bone has the potential to offer new information about fracture risk. Current methods are typically performed with 3D acquisitions, which suffer from long scan times and are generally limited to extremities. This work proposes using 2D UTE with half pulses for quantitatively mapping bound and pore water in cortical bone. Half-pulse 2D UTE methods were implemented on a 3T Philips Achieva scanner using an optimized slice-select gradient waveform, with preparation pulses to selectively image bound or pore water. The 2D methods were quantitatively compared with previously implemented 3D methods in the tibia in five volunteers. The mean difference between bound and pore water concentrations acquired from the 3D and 2D sequences was 0.6 and 0.9 mol 1H/L bone (3 and 12%, respectively). While 2D pore water methods tended to slightly overestimate concentrations relative to 3D methods, differences were less than scan-rescan uncertainty and expected differences between healthy and fracture-prone bones. Quantitative bound and pore water concentration mapping in cortical bone can be accelerated by 2 orders of magnitude using 2D protocols with optimized half-pulse excitation. Magn Reson Med 77:945-950, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Wei, Ping; Li, Xinyang; Luo, Xi; Li, Jianfeng
2018-02-01
The centroid method is commonly adopted to locate the spot in the sub-apertures of the Shack-Hartmann wavefront sensor (SH-WFS), in which image preprocessing is required before calculating the spot location because the centroid method is extremely sensitive to noise. In this paper, the SH-WFS image was simulated according to the characteristics of the noise, background and intensity distribution. The optimal parameters of the SH-WFS image preprocessing method were put forward for different signal-to-noise ratio (SNR) conditions, with the wavefront reconstruction error considered as the evaluation index. Two methods of image preprocessing, the thresholding method and windowing combined with thresholding, were compared by studying the applicable range of SNR and analyzing the stability of the two methods, respectively.
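A thresholded (and optionally windowed) centroid of a sub-aperture image is straightforward to express; the sketch below is a generic NumPy illustration. The mean-plus-k-sigma threshold and the window convention are assumptions, not the preprocessing parameters derived in the paper.

```python
import numpy as np

def spot_centroid(subap, threshold_sigma=3.0, window=None):
    """Thresholded centroid of a Shack-Hartmann sub-aperture image.
    Pixels below a simple mean + k*sigma threshold are suppressed before the
    centroid is taken; an optional window (y0, y1, x0, x1) restricts the
    calculation to a region around the spot."""
    img = subap.astype(float)
    if window is not None:
        y0, y1, x0, x1 = window
        mask = np.zeros_like(img)
        mask[y0:y1, x0:x1] = 1.0
        img = img * mask
    thr = img.mean() + threshold_sigma * img.std()
    img = np.where(img > thr, img - thr, 0.0)     # subtractive thresholding suppresses noise bias
    total = img.sum()
    if total == 0:
        return np.nan, np.nan                     # no spot detected above threshold
    yy, xx = np.indices(img.shape)
    return (yy * img).sum() / total, (xx * img).sum() / total
```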
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is most widely used in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of the eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. Besides, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
Implementation of an optimized microfluidic mixer in alumina employing femtosecond laser ablation
NASA Astrophysics Data System (ADS)
Juodėnas, M.; Tamulevičius, T.; Ulčinas, O.; Tamulevičius, S.
2018-01-01
Manipulation of liquids at the lowest levels of volume and dimension is at the forefront of materials science, chemistry and medicine, offering important time and resource saving applications. However, manipulation by mixing is troublesome at the microliter and lower scales. One approach to overcome this problem is to use passive mixers, which exploit structural obstacles within microfluidic channels or the geometry of channels themselves to enforce and enhance fluid mixing. Some applications require the manipulation and mixing of aggressive substances, which makes conventional microfluidic materials, along with their fabrication methods, inappropriate. In this work, implementation of an optimized full scale three port microfluidic mixer is presented in a slide of a material that is very hard to process but possesses extreme chemical and physical resistance—alumina. The viability of the selected femtosecond laser fabrication method as an alternative to conventional lithography methods, which are unable to process this material, is demonstrated. For the validation and optimization of the microfluidic mixer, a finite element method (FEM) based numerical modeling of the influence of the mixer geometry on its mixing performance is completed. Experimental investigation of the laminar flow geometry demonstrated very good agreement with the numerical simulation results. Such a laser ablation microfabricated passive mixer structure is intended for use in a capillary force assisted nanoparticle assembly setup (CAPA).
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783
Aircraft symmetric flight optimization. [gradient techniques for supersonic aircraft control
NASA Technical Reports Server (NTRS)
Falco, M.; Kelley, H. J.
1973-01-01
Review of the development of gradient techniques and their application to aircraft optimal performance computations in the vertical plane of flight. Results obtained using the method of gradients are presented for attitude- and throttle-control programs which extremize the fuel, range, and time performance indices subject to various trajectory and control constraints, including boundedness of engine throttle control. A penalty function treatment of state inequality constraints which generally appear in aircraft performance problems is outlined. Numerical results for maximum-range, minimum-fuel, and minimum-time climb paths for a hypothetical supersonic turbojet interceptor are presented and discussed. In addition, minimum-fuel climb paths subject to various levels of ground overpressure intensity constraint are indicated for a representative supersonic transport. A variant of the Gel'fand-Tsetlin 'method of ravines' is reviewed, and two possibilities for further development of continuous gradient processes are cited - namely, a projection version of conjugate gradients and a curvilinear search.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
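One common way to realize this kind of design loop is to fit a Gaussian process to evaluations of the limit-state function and then sample next where the surrogate is least certain about the sign of g, i.e., closest to the failure boundary relative to its predictive standard deviation. The sketch below does this one point at a time with scikit-learn; the toy limit state, the candidate sampling, and the acquisition score are assumptions, and the paper's method selects multiple points per step rather than one.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):                                # stand-in for an expensive model: failure when g < 0
    return 1.5 - x[:, 0] ** 2 - 0.5 * x[:, 1]

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(12, 2))               # initial design
y = limit_state(X)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
for _ in range(10):                                # sequential experimental design loop
    gp.fit(X, y)
    cand = rng.uniform(-2, 2, size=(2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    score = np.abs(mu) / np.maximum(sd, 1e-9)      # small => uncertain about the sign of g (near the boundary)
    x_new = cand[np.argmin(score)][None, :]
    X, y = np.vstack([X, x_new]), np.append(y, limit_state(x_new))

mc = rng.uniform(-2, 2, size=(200000, 2))          # failure probability estimated from the cheap surrogate
print("P(failure) ~", (gp.predict(mc) < 0).mean())
```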
Effects of anthropogenic activity emerging as intensified extreme precipitation over China
NASA Astrophysics Data System (ADS)
Li, Huixin; Chen, Huopo; Wang, Huijun
2017-07-01
This study aims to provide an assessment of the effects of anthropogenic (ANT) forcings and other external factors on observed increases in extreme precipitation over China from 1961 to 2005. Extreme precipitation is represented by the annual maximum 1 day precipitation (RX1D) and the annual maximum 5 day consecutive precipitation (RX5D), and these variables are investigated using observations and simulations from the Coupled Model Intercomparison Project phase 5. The analyses mainly focus on the probability-based index (PI), which is derived from RX1D and RX5D by fitting generalized extreme value distributions. The results indicate that the simulations that include the ANT forcings provide the best representation of the spatial and temporal characteristics of extreme precipitation over China. We use the optimal fingerprint method to obtain the univariate and multivariate fingerprints of the responses to external forcings. The results show that only the ANT forcings are detectable at a 90% confidence level, both individually and when natural forcings are considered simultaneously. The impact of the forcing associated with greenhouse gases (GHGs) is also detectable in RX1D, but its effects cannot be separated from those of combinations of forcings that exclude the GHG forcings in the two-signal analyses. Besides, the estimated changes of PI, extreme precipitation, and events with a 20 year return period under nonstationary climate states are potentially attributable to ANT or GHG forcings, and the relationships between extreme precipitation and temperature from ANT forcings show agreement with observations.
Intra-arterial Ultra-low-Dose CT Angiography of Lower Extremity in Diabetic Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Özgen, Ali, E-mail: draliozgen@hotmail.com; Sanioğlu, Soner; Bingöl, Uğur Anıl
2016-08-15
Purpose: To image lower extremity arteries by CT angiography using a very low dose of intra-arterial contrast medium in patients with high risk of developing contrast-induced nephropathy (CIN). Materials and Methods: Three cases with long-standing diabetes mellitus and signs of lower extremity atherosclerotic disease were evaluated by CT angiography using 0.1 ml/kg of body weight of contrast medium given via a 10-cm-long 4F introducer by puncturing the CFA. Images were evaluated by an interventional radiologist and a cardiovascular surgeon. Density values of the lower extremity arteries were also calculated. Findings in two cases were compared with digital subtraction angiography images performed for percutaneous revascularization. Blood creatinine levels were followed for possible CIN. Results: Intra-arterial CT angiography images were considered diagnostic in all patients and optimal in one patient. No patient developed CIN after intra-arterial CT angiography, while one patient developed CIN after percutaneous intervention. Conclusion: Intra-arterial CT angiography of the lower extremity might be performed in selected patients with high risk of developing CIN. Our limited experience suggests that as low as 0.1 ml/kg of body weight of contrast medium may result in adequate diagnostic imaging.
Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali
2015-01-01
In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to evaluate the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted and the results show that the proposed method significantly improves the speech emotion recognition performance compared to previous works published in the literature. PMID:25799141
Data entry errors and design for model-based tight glycemic control in critical care.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of errors was clinically significant and was typically by 10.0 mmol/liter or an order of magnitude but only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use on a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.
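The safety check mentioned at the end, automatic flagging of physiologically implausible blood glucose entries so they can be confirmed or re-entered, amounts to a simple range test. The thresholds in the sketch below mirror the extreme values quoted in the abstract but are illustrative, not the protocol's actual limits.

```python
def check_bg_entry(bg_mmol_per_l, low=2.0, high=20.0):
    """Flag blood glucose entries outside a plausible range so the operator is
    prompted to confirm or re-enter them; thresholds are illustrative only."""
    if bg_mmol_per_l < low or bg_mmol_per_l > high:
        return False, (f"BG {bg_mmol_per_l:.1f} mmol/L is outside {low}-{high} mmol/L: "
                       "please confirm or re-enter")
    return True, "accepted"

print(check_bg_entry(51.0))   # a misplaced decimal (5.1 entered as 51.0) is caught
print(check_bg_entry(5.1))    # a plausible value passes through
```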
[Optimal solution and analysis of muscular force during standing balance].
Wang, Hongrui; Zheng, Hui; Liu, Kun
2015-02-01
The present study was aimed at finding the optimal distribution of the main muscular forces in the lower extremity during human standing balance. The musculoskeletal system of the lower extremity was simplified to a physical model with 3 joints and 9 muscles. On the basis of this model, an optimization model was then built to solve the problem of redundant muscle forces. A particle swarm optimization (PSO) algorithm was used to solve the single-objective and multi-objective problems, respectively. The numerical results indicated that the multi-objective optimization gives a more reasonable distribution and variation of the 9 muscular forces. Finally, the coordination of each muscle group while maintaining standing balance under passive movement was qualitatively analyzed using the simulation results obtained.
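A single-objective version of this formulation, minimizing a cost over redundant muscle forces while a penalty enforces joint-torque equilibrium, can be handled by a bare-bones PSO like the sketch below. The moment-arm matrix, required torques, cost function, and PSO constants are all made-up illustrations, not the study's model.

```python
import numpy as np

def pso_min(objective, dim, n_particles=40, iters=200, lb=0.0, ub=1000.0, seed=0):
    """Bare-bones global-best particle swarm optimizer for a single objective."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)   # inertia + cognitive + social terms
        x = np.clip(x + v, lb, ub)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical single-objective case: minimize summed squared muscle forces while a
# penalty drives the joint torques R @ F toward the required values tau.
R = np.random.rand(3, 9)             # 3 joints x 9 muscles moment-arm matrix (illustrative)
tau = np.array([30.0, 45.0, 10.0])   # required joint torques in N*m (illustrative)
obj = lambda F: np.sum(F ** 2) + 1e4 * np.sum((R @ F - tau) ** 2)
F_opt, _ = pso_min(obj, dim=9)
print(np.round(F_opt, 1))
```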
NASA Astrophysics Data System (ADS)
Xu, Chuanpei; Niu, Junhao; Ling, Jing; Wang, Suyan
2018-03-01
In this paper, we present a parallel test strategy for bandwidth division multiplexing under the test access mechanism bandwidth constraint. The Pareto solution set is combined with a cloud evolutionary algorithm to optimize the test time and power consumption of a three-dimensional network-on-chip (3D NoC). In the proposed method, all individuals in the population are sorted in non-dominated order and allocated to the corresponding level. Individuals with extreme and similar characteristics are then removed. To increase the diversity of the population and prevent the algorithm from becoming stuck around local optima, a competition strategy is designed for the individuals. Finally, we adopt an elite reservation strategy and update the individuals according to the cloud model. Experimental results show that the proposed algorithm converges to the optimal Pareto solution set rapidly and accurately. This not only obtains the shortest test time, but also optimizes the power consumption of the 3D NoC.
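The non-dominated sorting at the core of any Pareto-based scheme reduces, for two objectives such as test time and test power, to a pairwise dominance check. The sketch below extracts the first (non-dominated) front from a set of candidate schedules; the random objective values are placeholders.

```python
import numpy as np

def pareto_front(objs):
    """Indices of non-dominated rows when all columns are minimized
    (here: test time and test power of candidate 3D NoC test schedules)."""
    n = objs.shape[0]
    nondom = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # i is dominated if some other point is <= in every objective and < in at least one
        dominated_by = (np.all(objs[others] <= objs[i], axis=1) &
                        np.any(objs[others] < objs[i], axis=1))
        if dominated_by.any():
            nondom[i] = False
    return np.where(nondom)[0]

objs = np.random.rand(50, 2)          # columns: (test time, test power) of candidate schedules
print(pareto_front(objs))             # indices of the Pareto-optimal schedules
```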
Application of Hyperspectral Imaging to Detect Sclerotinia sclerotiorum on Oilseed Rape Stems
Kong, Wenwen; Zhang, Chu; Huang, Weihao
2018-01-01
Hyperspectral imaging covering the spectral range of 384–1034 nm combined with chemometric methods was used to detect Sclerotinia sclerotiorum (SS) on oilseed rape stems by two sample sets (60 healthy and 60 infected stems for each set). Second derivative spectra and PCA loadings were used to select the optimal wavelengths. Discriminant models were built and compared to detect SS on oilseed rape stems, including partial least squares-discriminant analysis, radial basis function neural network, support vector machine and extreme learning machine. The discriminant models using full spectra and optimal wavelengths showed good performance with classification accuracies of over 80% for the calibration and prediction set. Comparing all developed models, the optimal classification accuracies of the calibration and prediction set were over 90%. The similarity of selected optimal wavelengths also indicated the feasibility of using hyperspectral imaging to detect SS on oilseed rape stems. The results indicated that hyperspectral imaging could be used as a fast, non-destructive and reliable technique to detect plant diseases on stems. PMID:29300315
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou
2013-01-01
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation. PMID:24013491
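Maximum-likelihood fusion of readings from neighboring sensors, under the usual independent-Gaussian-error assumption, is the inverse-variance weighted mean. The sketch below shows that step only; the per-sensor variances are assumed known, which is a simplification of the networked correction scheme described above.

```python
import numpy as np

def fuse_distances(readings, variances):
    """Maximum-likelihood fusion of distance readings from neighboring ultrasonic
    sensors assuming independent Gaussian errors: the inverse-variance weighted
    mean, returned together with the variance of the fused estimate."""
    readings = np.asarray(readings, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * readings) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Hypothetical neighbor set: three sensors observing the same target distance (cm)
print(fuse_distances([101.2, 99.8, 100.5], [0.8, 0.3, 0.5]))
```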
From Neutron Star Observables to the Equation of State. I. An Optimal Parametrization
NASA Astrophysics Data System (ADS)
Raithel, Carolyn A.; Özel, Feryal; Psaltis, Dimitrios
2016-11-01
The increasing number and precision of measurements of neutron star masses, radii, and, in the near future, moments of inertia offer the possibility of precisely determining the neutron star equation of state (EOS). One way to facilitate the mapping of observables to the EOS is through a parametrization of the latter. We present here a generic method for optimizing the parametrization of any physically allowed EOS. We use mock EOS that incorporate physically diverse and extreme behavior to test how well our parametrization reproduces the global properties of the stars, by minimizing the errors in the observables of mass, radius, and the moment of inertia. We find that using piecewise polytropes and sampling the EOS with five fiducial densities between ~1-8 times the nuclear saturation density results in optimal errors for the smallest number of parameters. Specifically, it recreates the radii of the assumed EOS to within less than 0.5 km for the extreme mock EOS and to within less than 0.12 km for 95% of a sample of 42 proposed, physically motivated EOS. Such a parametrization is also able to reproduce the maximum mass to within 0.04 M⊙ and the moment of inertia of a 1.338 M⊙ neutron star to within less than 10% for 95% of the proposed sample of EOS.
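Between fiducial densities a piecewise polytrope is a straight line in log P-log rho space, so evaluating such a parametrization amounts to log-log interpolation. The sketch below illustrates this with made-up anchor pressures; the density spacing loosely follows the ~1-8 rho_sat range quoted above but is not the paper's optimized parametrization.

```python
import numpy as np

# Piecewise-polytropic EOS: between anchor points P = K * rho^Gamma, i.e. a straight
# line in log P - log rho, so log-log interpolation reproduces it exactly.
rho_sat = 2.7e14                                                        # nuclear saturation density, g/cm^3
rho_fid = rho_sat * np.array([1.0, 1.4, 2.2, 3.3, 4.9, 7.4])            # illustrative spacing over ~1-8 rho_sat
P_fid = np.array([3.5e33, 1.2e34, 6.0e34, 2.5e35, 9.0e35, 2.8e36])      # made-up anchor pressures, dyn/cm^2

def pressure(rho):
    """Pressure at density rho from the piecewise-polytropic parametrization."""
    return 10 ** np.interp(np.log10(rho), np.log10(rho_fid), np.log10(P_fid))

def gamma_segments():
    """Polytropic index of each segment: Gamma_i = d(log P) / d(log rho)."""
    return np.diff(np.log10(P_fid)) / np.diff(np.log10(rho_fid))

print(pressure(2.0 * rho_sat), gamma_segments())
```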
Selection criteria for wear resistant powder coatings under extreme erosive wear conditions
NASA Astrophysics Data System (ADS)
Kulu, P.; Pihl, T.
2002-12-01
Wear-resistant thermal spray coatings for sliding wear are hard but brittle (such as carbide and oxide based coatings), which makes them useless under impact loading conditions and sensitive to fatigue. Under extreme conditions of erosive wear (impact loading, high hardness of abrasives, and high velocity of abradant particles), composite coatings ensure optimal properties of hardness and toughness. The article describes tungsten carbide-cobalt (WC-Co) systems and self-fluxing alloys, containing tungsten carbide based hardmetal particles [NiCrSiB-(WC-Co)] deposited by the detonation gun, continuous detonation spraying, and spray fusion processes. Different powder compositions and processes were studied, and the effect of the coating structure and wear parameters on the wear resistance of coatings are evaluated. The dependence of the wear resistance of sprayed and fused coatings on their hardness is discussed, and hardness criteria for coating selection are proposed. The so-called “double cemented” structure of WC-Co based hardmetal or metal matrix composite coatings, as compared with a simple cobalt matrix containing particles of WC, was found optimal. Structural criteria for coating selection are provided. To assist the end user in selecting an optimal deposition method and materials, coating selection diagrams of wear resistance versus hardness are given. This paper also discusses the cost-effectiveness of coatings in the application areas that are more sensitive to cost, and composite coatings based on recycled materials are offered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the IC/BC option. Simulation generally benefits from finer resolutions up to 5 km. At the 15 km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5 km level. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15 km or 15 km-5 km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands of extreme storm event forecasting and analyses for design, operations and risk assessment of large water infrastructures.
Spectral CT of the extremities with a silicon strip photon counting detector
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.
2015-03-01
Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report development of a Si-strip PCXD system originally developed for mammography with potential application to spectral CT of musculoskeletal extremities, including challenges associated with sparse sampling, spectral calibration, and optimization for higher energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, fixed anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft-tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for (50 mg/mL) bone and <8% for (5 mg/mL) iodine with strong regularization. For smaller inserts, errors of 20-40% were observed and motivate improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.
Ab initio atomic recombination reaction energetics on model heat shield surfaces
NASA Technical Reports Server (NTRS)
Senese, Fredrick; Ake, Robert
1992-01-01
Ab initio quantum mechanical calculations on small hydration complexes involving the nitrate anion are reported. The self-consistent field method with accurate basis sets has been applied to compute completely optimized equilibrium geometries, vibrational frequencies, thermochemical parameters, and stable site labilities of complexes involving 1, 2, and 3 waters. The most stable geometries in the first hydration shell involve in-plane waters bridging pairs of nitrate oxygens with two equal and bent hydrogen bonds. A second extremely labile local minimum involves out-of-plane waters with a single hydrogen bond and lies about 2 kcal/mol higher. The potential in the region of the second minimum is extremely flat and qualitatively sensitive to changes in the basis set; it does not correspond to a true equilibrium structure.
Neural architecture design based on extreme learning machine.
Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis
2013-12-01
Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and their corresponding interconnection weights. This problem has been widely studied in many research works, but the proposed solutions usually involve excessive computational cost in most problems and they do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
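As a rough illustration of architecture selection built around ELM, the sketch below trains an over-sized ELM, ranks hidden neurons by the norm of their output weights, keeps the strongest ones, and refits. This is only a generic pruning heuristic under assumed data; it is not the selection criterion proposed in the paper.

```python
import numpy as np

def elm_fit(X, Y, W, b):
    """Solve the ELM output weights by least squares for fixed random hidden parameters."""
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return beta

def prune_elm(X, Y, n_hidden=100, keep=20, seed=0):
    """Train an over-sized ELM, keep the hidden neurons with the largest output-weight
    norms, and refit -- a crude stand-in for architecture selection around ELM."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta = elm_fit(X, Y, W, b)
    order = np.argsort(-np.linalg.norm(beta, axis=1))[:keep]   # most influential hidden units
    W2, b2 = W[:, order], b[order]
    return W2, b2, elm_fit(X, Y, W2, b2)

# Hypothetical 3-class problem with one-hot targets
X = np.random.randn(300, 8)
Y = np.eye(3)[np.random.randint(0, 3, 300)]
W2, b2, beta2 = prune_elm(X, Y)
pred = np.argmax(np.tanh(X @ W2 + b2) @ beta2, axis=1)
print((pred == Y.argmax(axis=1)).mean())          # training accuracy of the pruned network
```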
Tan, Qunyou; Zhang, Li; Zhang, Liangke; Teng, Yongzhen; Zhang, Jingqing
2012-01-01
Pyridostigmine bromide (PTB) is a highly soluble and extremely bitter drug. Here, an economical complexation technology combined with a direct tablet compression method has been developed to meet the requirements of a patient-friendly dosage form known as taste-masked dispersible tablets loaded with PTB (TPDPTs): (1) TPDPTs should have optimal disintegration and good physical resistance (hardness); (2) a low-cost, simple but practical preparation method suitable for industrial production is preferred from a cost perspective. Physicochemical properties of the inclusion complex of PTB with beta-cyclodextrin were investigated by Fourier transform infrared spectroscopy, differential scanning calorimetry and UV spectroscopy. An orthogonal design was chosen to properly formulate TPDPTs. All volunteers regarded the bitterness of TPDPTs as acceptable. The properties of TPDPTs, including disintegration time, weight variation, friability, hardness, dispersible uniformity and drug content, were evaluated. The dissolution profile of TPDPTs in distilled water exhibited a fast rate. Pharmacokinetic results demonstrated that TPDPTs and the commercial tablets were bioequivalent.
Nonprincipal plane scattering of flat plates and pattern control of horn antennas
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Polka, Lesley A.; Liu, Kefeng
1989-01-01
Using the geometrical theory of diffraction, the traditional method of high frequency scattering analysis, the prediction of the radar cross section of a perfectly conducting, flat, rectangular plate is limited to principal planes. Part A of this report predicts the radar cross section in nonprincipal planes using the method of equivalent currents. This technique is based on an asymptotic end-point reduction of the surface radiation integrals for an infinite wedge and enables nonprincipal plane prediction. The predicted radar cross sections for both horizontal and vertical polarizations are compared to moment method results and experimental data from Arizona State University's anechoic chamber. In part B, a variational calculus approach to the pattern control of the horn antenna is outlined. The approach starts with the optimization of the aperture field distribution so that the control of the radiation pattern in a range of directions can be realized. A control functional is thus formulated. Next, a spectral analysis method is introduced to solve for the eigenfunctions from the extremal condition of the formulated functional. Solutions to the optimized aperture field distribution are then obtained.
Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David
2011-12-01
Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the hereditability of complex traits. Rare variants may, in part, explain some of the missing hereditability. Here, we explored the advantage of the extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or traditional two-stage design. We then discussed the analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, an extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach where multiple rare-variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.
Smartphone Assessment of Knee Flexion Compared to Radiographic Standards
Dietz, Matthew J.; Sprando, Daniel; Hanselman, Andrew E.; Regier, Michael D.; Frye, Benjamin M.
2017-01-01
Purpose Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Methods Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. Results The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC = 0.94; 95% CI: 0.91–0.96). Visual estimation was found to be the least reliable method of measurement. Conclusions The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity. PMID:28179062
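A common way to turn static accelerometer readings into a flexion angle is to take each segment's tilt relative to gravity and difference the two placements (distal femur, proximal tibia). The sketch below is a generic tilt-based calculation under an assumed device orientation; it is not necessarily the algorithm used by the smartphone application evaluated in the study.

```python
import numpy as np

def segment_tilt_deg(ax, ay, az):
    """Tilt of a limb segment relative to gravity from a static accelerometer
    reading, assuming the device lies on top of the segment with its long axis along x."""
    return np.degrees(np.arctan2(ax, np.sqrt(ay ** 2 + az ** 2)))

def knee_flexion_deg(femur_reading, tibia_reading):
    """Flexion angle as the difference between the distal-femur and proximal-tibia
    segment tilts, mirroring the two placement sites used in the study."""
    return abs(segment_tilt_deg(*femur_reading) - segment_tilt_deg(*tibia_reading))

# Illustrative static readings in units of g: (ax, ay, az)
femur = (0.50, 0.02, 0.86)    # thigh segment
tibia = (-0.64, 0.01, 0.77)   # shank segment
print(round(knee_flexion_deg(femur, tibia), 1), "degrees")
```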
Optimized photonic gauge of extreme high vacuum with Petawatt lasers
NASA Astrophysics Data System (ADS)
Paredes, Ángel; Novoa, David; Tommasini, Daniele; Mas, Héctor
2014-03-01
One of the latest proposed applications of ultra-intense laser pulses is their possible use to gauge extreme high vacuum by measuring the photon radiation resulting from nonlinear Thomson scattering within a vacuum tube. Here, we provide a complete analysis of the process, computing the expected rates and spectra, both for linear and circular polarizations of the laser pulses, taking into account the effect of the time envelope in a slowly varying envelope approximation. We also design a realistic experimental configuration allowing for the implementation of the idea and compute the corresponding geometric efficiencies. Finally, we develop an optimization procedure for this photonic gauge of extreme high vacuum at high repetition rate Petawatt and multi-Petawatt laser facilities, such as VEGA, JuSPARC and ELI.
Mei, Wenjuan; Zeng, Xianping; Yang, Chenglin; Zhou, Xiuyun
2017-01-01
The insulated gate bipolar transistor (IGBT) is a switching device with excellent performance that is widely used in power electronic systems. How to estimate the remaining useful life (RUL) of an IGBT to ensure the safety and reliability of the power electronics system is currently a challenging issue in the field of IGBT reliability. The aim of this paper is to develop a prognostic technique for estimating IGBTs' RUL. There is a need for an efficient prognostic algorithm that is able to support in-situ decision-making. In this paper, a novel prediction model with a complete structure based on the optimally pruned extreme learning machine (OPELM) and the Volterra series is proposed to track the IGBT's degradation trace and estimate its RUL; we refer to this model as the Volterra k-nearest neighbor OPELM prediction (VKOPP) model. This model uses the minimum entropy rate method and the Volterra series to reconstruct the phase space for the IGBTs' ageing samples, and a new weight update algorithm, which can effectively reduce the influence of outliers and noise, is utilized to establish the VKOPP network; then a combination of the k-nearest neighbor method (KNN) and the least squares estimation (LSE) method is used to calculate the output weights of the OPELM and predict the RUL of the IGBT. The prognostic results show that the proposed approach can predict the RUL of IGBT modules with small error and achieve higher prediction precision and lower time cost than some classic prediction approaches. PMID:29099811
2015-06-01
...efficient parallel code for applying the operator. Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. We show...apply the underlying operator. Such a preconditioner can be very attractive in scenarios where one has a highly efficient parallel code for applying...repeatedly solve a large system of linear equations where one has an extremely fast parallel code for applying an underlying fixed linear operator.
Oyebode, Femi
2014-04-01
This is a brief commentary on the value of optimism in therapy. It draws on the philosophical writings of Schopenhauer and Aristotle. It suggests that the modern preoccupation with optimism may be as extreme as the bleak pessimistic outlook favoured by Schopenhauer.
Current Concepts in Examination and Treatment of Elbow Tendon Injury
Ellenbecker, Todd S.; Nirschl, Robert; Renstrom, Per
2013-01-01
Context: Injuries to the tendons of the elbow occur frequently in the overhead athlete, creating a significant loss of function and dilemma to sports medicine professionals. A detailed review of the anatomy, etiology, and pathophysiology of tendon injury coupled with comprehensive evaluation and treatment information is needed for clinicians to optimally design treatment programs for rehabilitation and prevention. Evidence Acquisitions: The PubMed database was searched in January 2012 for English-language articles pertaining to elbow tendon injury. Results: Detailed information on tendon pathophysiology was found along with incidence of elbow injury in overhead athletes. Several evidence-based reviews were identified, providing a thorough review of the recommended rehabilitation for elbow tendon injury. Conclusions: Humeral epicondylitis is an extra-articular tendon injury that is common in athletes subjected to repetitive upper extremity loading. Research is limited on the identification of treatment modalities that can reduce pain and restore function to the elbow. Eccentric exercise has been studied in several investigations and, when coupled with a complete upper extremity strengthening program, can produce positive results in patients with elbow tendon injury. Further research is needed in high-level study to delineate optimal treatment methods. PMID:24427389
Alpha-amylase from the Hyperthermophilic Archaeon Thermococcus thioreducens
NASA Technical Reports Server (NTRS)
Bernhardsdotter, E. C. M. J.; Pusey, M. L.; Ng, M. L.; Garriott, O. K.
2003-01-01
Extremophiles are microorganisms that thrive in, from an anthropocentric view, extreme environments such as hot springs. The ability to survive under extreme conditions has made enzymes from extremophiles of interest for industrial applications. One approach to producing these extremozymes entails the expression of the enzyme-encoding gene in a mesophilic host such as E. coli. This method has been employed in the effort to produce an alpha-amylase from a hyperthermophile (an organism that displays optimal growth above 80 °C) isolated from a hydrothermal vent at the Rainbow vent site in the Atlantic Ocean. Alpha-amylases catalyze the hydrolysis of starch to produce smaller sugars and constitute a class of industrial enzymes accounting for approximately 25% of the enzyme market. One application for thermostable alpha-amylases is the starch liquefaction process, in which starch is converted into fructose and glucose syrups. The alpha-amylase-encoding gene from the hyperthermophile Thermococcus thioreducens was cloned and sequenced, revealing high similarity with other archaeal hyperthermophilic alpha-amylases. The gene encoding the mature protein was expressed in E. coli. Initial characterization of this enzyme has revealed an optimal amylolytic activity between 85-90 °C and around pH 5.3-6.0.
Optimal and fast rotational alignment of volumes with missing data in Fourier space.
Shatsky, Maxim; Arbelaez, Pablo; Glaeser, Robert M; Brenner, Steven E
2013-11-01
Electron tomography of intact cells has the potential to reveal the entire cellular content at a resolution corresponding to individual macromolecular complexes. Characterization of macromolecular complexes in tomograms is nevertheless an extremely challenging task due to the high level of noise, and due to the limited tilt angle that results in missing data in Fourier space. By identifying particles of the same type and averaging their 3D volumes, it is possible to obtain a structure at a more useful resolution for biological interpretation. Currently, classification and averaging of sub-tomograms is limited by the speed of computational methods that optimize alignment between two sub-tomographic volumes. The alignment optimization is hampered by the fact that the missing data in Fourier space has to be taken into account during the rotational search. A similar problem appears in single particle electron microscopy where the random conical tilt procedure may require averaging of volumes with a missing cone in Fourier space. We present a fast implementation of a method guaranteed to find an optimal rotational alignment that maximizes the constrained cross-correlation function (cCCF) computed over the actual overlap of data in Fourier space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
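As a rough illustration of the kind of score such an alignment search maximizes, the following Python sketch computes a normalized cross-correlation restricted to the Fourier coefficients observed in both volumes (a simplified stand-in for the cCCF; the volumes, the missing-wedge mask, and all sizes are invented for the example, and no rotational search is performed).

```python
import numpy as np

def constrained_cross_correlation(vol_a, vol_b, mask_a, mask_b):
    """Correlation score restricted to Fourier coefficients observed in BOTH
    volumes -- a simplified stand-in for the constrained CCF (cCCF)."""
    fa, fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
    overlap = mask_a & mask_b                    # shared (non-missing) region
    a, b = fa[overlap], fb[overlap]
    a = a - a.mean()                             # zero-mean over the overlap
    b = b - b.mean()
    num = np.real(np.vdot(a, b))
    den = np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)
    return num / den if den > 0 else 0.0

# Toy usage: one random volume compared with itself under a crude missing-wedge mask.
rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 32))
kz, ky, kx = np.meshgrid(*(np.fft.fftfreq(32),) * 3, indexing="ij")
wedge = np.abs(kz) <= np.abs(kx) + 1e-9          # crude missing-wedge geometry
print(constrained_cross_correlation(vol, vol, wedge, wedge))  # ~1.0 for identical volumes
```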
1987-09-21
objectives of our program are to isolate and characterize a fully active DNA-dependent RNA polymerase from the extremely halophilic archaebacteria of the genus...operons in H. marismortui. The Halobacteriaceae are extreme halophiles. They require 3.5 M NaCl for optimal growth and no growth is observed below 2...was difficult to perform due to the extreme genetic instability in this strain (6). In contrast, the genome of the extreme halophilic and prototrophic
Phakthong, Wilaiwan; Liawruangrath, Boonsom; Liawruangrath, Saisunee
2014-12-01
A reversed flow injection (rFI) system was designed and constructed for gallic acid determination. Gallic acid was determined based on the formation of a chromogen between gallic acid and rhodanine, resulting in a colored product with a λmax at 520 nm. The optimum conditions for determining gallic acid were also investigated. Optimization of the experimental conditions was first carried out based on the so-called univariate method. The conditions obtained were 0.6% (w/v) rhodanine, 70% (v/v) ethanol, 0.9 mol L(-1) NaOH, 2.0 mL min(-1) flow rate, 75 μL injection loop and 600 cm mixing tubing length, respectively. Comparative optimization of the experimental conditions was also carried out by a multivariate (simplex) optimization method. The conditions obtained were 1.2% (w/v) rhodanine, 70% (v/v) ethanol, 1.2 mol L(-1) NaOH, flow rate 2.5 mL min(-1), 75 μL injection loop and 600 cm mixing tubing length, respectively. It was found that the optimum conditions obtained by the former optimization method were mostly similar to those obtained by the latter method. A linear relationship between peak height and the concentration of gallic acid was obtained over the range of 0.1-35.0 mg L(-1) with a detection limit of 0.081 mg L(-1). The relative standard deviations were found to be in the range 0.46-1.96% for 1, 10, 30 mg L(-1) of gallic acid (n=11). The method has the advantages of simplicity, extremely high selectivity, and high precision. The proposed method was successfully applied to the determination of gallic acid in longan samples without interference from other common phenolic compounds that might be present in the longan samples collected in northern Thailand. Copyright © 2014 Elsevier B.V. All rights reserved.
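For readers unfamiliar with the simplex approach mentioned above, the sketch below shows how a downhill-simplex (Nelder-Mead) search could tune reagent conditions against a response function; the response surface and starting values here are invented stand-ins for measured peak heights, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth response surface: peak height as a function of
# (rhodanine % w/v, NaOH mol/L, flow rate mL/min).  In practice each
# evaluation would be a measured peak height from the rFI manifold.
def negative_peak_height(x):
    rhodanine, naoh, flow = x
    return -(np.exp(-((rhodanine - 1.0) / 0.5) ** 2)
             * np.exp(-((naoh - 1.1) / 0.4) ** 2)
             * np.exp(-((flow - 2.3) / 0.8) ** 2))

result = minimize(negative_peak_height,
                  x0=[0.6, 0.9, 2.0],        # univariate-style starting conditions
                  method="Nelder-Mead")      # downhill simplex search
print(result.x)                              # conditions maximizing the simulated peak
```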
Mironov, Vladimir; Kasyanov, Vladimir; Markwald, Roger R
2008-06-01
The existing methods of biofabrication for vascular tissue engineering are still bioreactor-based, extremely expensive, laborious and time consuming and, furthermore, not automated, which would be essential for an economically successful large-scale commercialization. The advances in nanotechnology can bring additional functionality to vascular scaffolds, optimize internal vascular graft surface and even help to direct the differentiation of stem cells into the vascular cell phenotype. The development of rapid nanotechnology-based methods of vascular tissue biofabrication represents one of most important recent technological breakthroughs in vascular tissue engineering because it dramatically accelerates vascular tissue assembly and, importantly, also eliminates the need for a bioreactor-based scaffold cellularization process.
Prince, Linda M
2015-01-01
Inter-simple sequence repeat PCR (ISSR-PCR) is a fast, inexpensive genotyping technique based on length variation in the regions between microsatellites. The method requires no species-specific prior knowledge of microsatellite location or composition. Very small amounts of DNA are required, making this method ideal for organisms of conservation concern, or where the quantity of DNA is extremely limited due to organism size. ISSR-PCR can be highly reproducible but requires careful attention to detail. Optimization of DNA extraction, fragment amplification, and normalization of fragment peak heights during fluorescent detection are critical steps to minimizing the downstream time spent verifying and scoring the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiribella, G.; D'Ariano, G. M.; Perinotti, P.
We investigate the problem of cloning a set of states that is invariant under the action of an irreducible group representation. We then characterize the cloners that are extremal in the convex set of group covariant cloning machines, among which one can restrict the search for optimal cloners. For a set of states that is invariant under the discrete Weyl-Heisenberg group, we show that all extremal cloners can be unitarily realized using the so-called double-Bell states, whence providing a general proof of the popular ansatz used in the literature for finding optimal cloners in a variety of settings. Our result can also be generalized to continuous-variable optimal cloning in infinite dimensions, where the covariance group is the customary Weyl-Heisenberg group of displacements.
Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems
NASA Astrophysics Data System (ADS)
Tobasco, Ian; Goluskin, David; Doering, Charles R.
2018-02-01
For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
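As a concrete illustration of the quantity being bounded, the sketch below (an assumption-laden toy, not the paper's semidefinite programs) integrates the Lorenz system and computes the long-time average of z along one trajectory; auxiliary-function methods bound the supremum of such averages over all trajectories.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Long-time average of z along a single chaotic trajectory.
sol = solve_ivp(lorenz, (0.0, 500.0), [1.0, 1.0, 1.0], max_step=0.01)
t, z = sol.t, sol.y[2]
keep = t > 50.0                                   # discard the initial transient
avg_z = np.trapz(z[keep], t[keep]) / (t[-1] - t[keep][0])
# The chaotic value sits below z = rho - 1 = 27, the average at the nonzero equilibria,
# which is the kind of value an optimal auxiliary function can certify as an upper bound.
print(avg_z)
```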
[New population curves in Spanish extremely preterm neonates].
García-Muñoz Rodrigo, F; García-Alix Pérez, A; Figueras Aloy, J; Saavedra Santana, P
2014-08-01
Most anthropometric reference data for extremely preterm infants used in Spain are outdated and based on non-Spanish populations, or are derived from small hospital-based samples that failed to include neonates of borderline viability. To develop gender-specific, population-based curves for birth weight, length, and head circumference in extremely preterm Caucasian infants, using a large contemporary sample size of Spanish singletons. Anthropometric data from neonates ≤ 28 weeks of gestational age were collected between January 2002 and December 2010 using the Spanish database SEN1500. Gestational age was estimated according to obstetric data (early pregnancy ultrasound). The data were analyzed with the SPSS.20 package, and centile tables were created for males and females using the Cole and Green LMS method. This study presents the first population-based growth curves for extremely preterm infants, including those of borderline viability, in Spain. A sexual dimorphism is evident for all of the studied parameters, starting at early gestation. These new gender-specific and population-based data could be useful for the improvement of growth assessments of extremely preterm infants in our country, for the development of epidemiological studies, for the evaluation of temporal trends, and for clinical or public health interventions seeking to optimize fetal growth. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
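To make the iteration structures concrete, the following generic sketch contrasts Jacobi with Gauss-Seidel/SOR sweeps on a toy diagonally dominant linear system; it stands in for the radiative transfer operator equation only schematically and uses none of the paper's operator splitting details.

```python
import numpy as np

def jacobi_step(A, x, b):
    """One Jacobi sweep: every unknown updated from the previous iterate."""
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def sor_step(A, x, b, omega=1.5):
    """One SOR sweep: omega = 1 recovers Gauss-Seidel, 1 < omega < 2 over-relaxes.
    Updated values are used as soon as they are available (in place)."""
    x = x.copy()
    for i in range(len(b)):
        sigma = A[i] @ x - A[i, i] * x[i]
        x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# Toy diagonally dominant system standing in for the discretized problem.
rng = np.random.default_rng(1)
A = rng.random((50, 50)) + 50 * np.eye(50)
b = rng.random(50)
x_jac = np.zeros(50)
x_sor = np.zeros(50)
for _ in range(30):
    x_jac = jacobi_step(A, x_jac, b)
    x_sor = sor_step(A, x_sor, b)
print(np.linalg.norm(A @ x_jac - b), np.linalg.norm(A @ x_sor - b))  # SOR residual is smaller
```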
Technical Parameters Modeling of a Gas Probe Foaming Using an Active Experimental Type Research
NASA Astrophysics Data System (ADS)
Tîtu, A. M.; Sandu, A. V.; Pop, A. B.; Ceocea, C.; Tîtu, S.
2018-06-01
The present paper deals with a current and complex topic, namely the solution of a technical problem regarding the modeling and subsequent optimization of technical parameters related to the natural gas extraction process. The subject of the study is to optimize gas probe foaming using experimental research methods and data processing, by regular probe intervention with different foaming agents. This procedure reduces the hydrostatic pressure through foam formation from the deposit water and the foaming agent, which can then be removed from the surface by the produced gas flow. The probe production data were analyzed and the so-called candidate for the research itself emerged. This is an extremely complex study carried out on field works; it was found that, due to severe gas field depletion, the well flows decrease and the wells begin to load with deposit water. Regular foaming of the wells was required to optimize the daily production flow and to dispose of the water accumulated in the wellbore. In order to analyze the process of natural gas production, the factorial experiment and other methods were used. The reason for this choice is that the method can offer very good research results with a small number of experimental data. Finally, through this study the extraction process problems were identified by analyzing and optimizing the technical parameters, which led to a quality improvement of the extraction process.
Improve accuracy for automatic acetabulum segmentation in CT images.
Liu, Hao; Zhao, Jianning; Dai, Ning; Qian, Hongbo; Tang, Yuehong
2014-01-01
Separation of the femoral head and acetabulum is one of the main difficulties in the diseased hip joint, due to deformed shapes and extreme narrowness of the joint space. Improving segmentation accuracy is the key point of existing automatic or semi-automatic segmentation methods. In this paper, we propose a new method to improve the accuracy of the segmented acetabulum using surface fitting techniques, which essentially consists of three parts: (1) design a surface iteration process to obtain an optimized surface; (2) change the ellipsoid fitting to a two-phase quadric surface fitting; (3) introduce a normal-matching method and an optimization-region method to capture edge points for the fitted quadric surface. Furthermore, this paper uses in vivo CT data sets of 40 actual patients (with 79 hip joints). Test results for these clinical cases show that: (1) the average error of the quadric surface fitting method is 2.3 mm; (2) the accuracy ratio of automatically recognized contours is larger than 89.4%; (3) the error ratio of section contours is less than 10% for acetabulums without severe malformation and less than 30% for acetabulums with severe malformation. Compared with similar methods, the accuracy of our method, which is applied in a software system, is significantly enhanced.
Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L
2014-01-01
The purpose of the CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as to optimize the technique of OA. Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments, such as OA, with unique versatility in modifying calcific lesions above and below the knee. Patients (3135) undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an "all-comers" basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Treatment with OA reduced pre-procedural stenosis from an average of 88% to 35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications, which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. Copyright © 2013 The Authors. Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Hughes, Nikki J.
The optimal combination of whole-body vibration (WBV) amplitude and frequency has not been established. Purpose. To determine the optimal combination of WBV amplitude and frequency that will enhance acute mean and peak power (MP and PP) output and EMG activity in the lower extremity muscles. Methods. Resistance-trained males (n = 13) completed the following testing sessions: On day 1, power spectrum testing of bilateral leg press (BLP) movement was performed on the OMNI. Days 2 and 3 consisted of WBV testing with either average (5.8 mm) or high (9.8 mm) amplitude combined with either 0 (sham control), 10, 20, 30, 40 and 50 Hz frequency. Bipolar surface electrodes were placed on the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GA) muscles for EMG analysis. MP and PP output and EMG activity of the lower extremity were assessed pre-, post-WBV treatments and after sham-controls on the OMNI while participants performed one set of five repetitions of BLP at the optimal resistance determined on Day 1. Results. No significant differences were found between pre- and sham-control on MP and PP output and on EMG activity in RF, VL, BF and GA. Completely randomized one-way ANOVA with repeated measures demonstrated no significant interaction of WBV amplitude and frequency on MP and PP output and peak and mean EMGrms amplitude and EMGrms area under the curve. RF and VL EMGrms area under the curve significantly decreased (p < 0.05) with high WBV amplitude, whereas low amplitude significantly decreased GA mean and peak EMGrms amplitude and EMGrms area under the curve. VL mean EMGrms amplitude and BF mean and peak EMGrms amplitudes were significantly decreased (p < 0.05) with high WBV amplitude when compared to sham-control. WBV frequency significantly decreased (p < 0.05) VL mean and peak EMGrms amplitude. WBV frequency at 30 and 40 Hz significantly decreased (p < 0.05) GA mean EMGrms amplitude and 20 and 30 Hz significantly decreased GA peak EMGrms amplitude. MP and PP output was not significantly affected by either treatment. Conclusions. It is concluded that WBV combined with plyometric exercise does not induce alterations in subsequent MP and PP output and EMGrms activity of the lower extremity. Future studies need to address the time of WBV exposure and magnitude of external loads that will maximize strength and/or power output.
NASA Astrophysics Data System (ADS)
Kong, Wenwen; Liu, Fei; Zhang, Chu; Bao, Yidan; Yu, Jiajia; He, Yong
2014-01-01
Tomatoes are cultivated around the world, and gray mold is one of their most prominent and destructive diseases. An early disease detection method can decrease losses caused by plant diseases and prevent their spread. The activity of peroxidase (POD) is a very important indicator of disease stress in plants. The objective of this study is to examine the possibility of fast detection of POD activity in tomato leaves infected with Botrytis cinerea using hyperspectral imaging data. Five pre-treatment methods were investigated. Genetic algorithm-partial least squares (GA-PLS) was applied to select optimal wavelengths. A new fast-learning neural algorithm named the extreme learning machine (ELM) was employed as the multivariate analysis tool in this study. Twenty-one optimal wavelengths were selected by GA-PLS and used as inputs to three calibration models. The optimal prediction result was achieved by the ELM model with the selected wavelengths, and the r and RMSEP in validation were 0.8647 and 465.9880, respectively. The results indicated that hyperspectral imaging could be considered a valuable tool for POD activity prediction. The selected wavelengths could be potential resources for instrument development.
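The core of an extreme learning machine is small enough to show directly; the sketch below is a minimal ELM regressor (random hidden layer, least-squares output weights) run on synthetic data standing in for the 21 selected wavelengths and the POD response, not on the study's hyperspectral measurements.

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: a random hidden layer followed by
    output weights fitted in one shot by least squares (no iterative training)."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)               # random feature map
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Synthetic stand-in: 100 samples x 21 "selected wavelengths" -> POD-like response.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 21))
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(100)
model = ELMRegressor(n_hidden=60).fit(X[:80], y[:80])
print(np.corrcoef(model.predict(X[80:]), y[80:])[0, 1])   # hold-out correlation (r)
```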
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy, with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular, the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big-data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
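The forward-backward building block the authors rely on can be sketched in a few lines; the toy below solves an ℓ1-regularized least-squares problem with a random measurement matrix, which only loosely mimics a radio-interferometric measurement operator, and omits the parallel, distributed, and randomized machinery described above.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, y, lam, n_iter=200):
    """ISTA-style proximal-gradient iterations for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                      # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x

# Toy compressive-sensing recovery of a sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
x_hat = forward_backward(A, A @ x_true, lam=0.02)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative recovery error
```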
NASA Astrophysics Data System (ADS)
Fan, Benhui; Liu, Yu; He, Delong; Bai, Jinbo
2018-01-01
Sandwich-structured composites of a polydimethylsiloxane/carbon nanotube (PDMS/CNT) bulk between two neat PDMS thin films with different thicknesses are prepared by the spin-coating method. Taking advantage of the CNTs' percolation behavior, the composite retains a relatively high dielectric constant (ɛ' = 40) at low frequency (100 Hz). Meanwhile, due to the insulating PDMS outer layers, which limit the conductivity of the composite, the composite maintains an extremely low dielectric loss (tan δ = 0.01) at 100 Hz. Moreover, because the outer layers and the bulk share the same matrix, excellent interfacial adhesion is achieved, and the thickness of the coating layer can be controlled in a multi-cycle way. Then, based on the experimental results, a calculation combining percolation theory and a core-shell model is used to analyze the effect of the coating-layer thickness on ɛ'. The obtained relationship between the ɛ' of the composite and the thickness of the coating layer can help to optimize the sandwich structure in order to obtain an adjustable ɛ' and an extremely low tan δ.
Minimax confidence intervals in geomagnetism
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-03-10
DESTINY is a comprehensive tool for modeling 3D and 2D cache designs using SRAM, embedded DRAM (eDRAM), spin transfer torque RAM (STT-RAM), resistive RAM (ReRAM), and phase change RAM (PCM). In its purpose, it is similar to CACTI, CACTI-3DD or NVSim. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g., latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e., 2D vs. 3D) for a given optimization target, etc. DESTINY has been validated against several cache prototypes. DESTINY is expected to boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers.
Protein Sequence Classification with Improved Extreme Learning Machine Algorithms
2014-01-01
Precisely classifying a protein sequence from a large biological protein sequence database plays an important role in developing competitive pharmacological products. Conventional methods, which compare the unseen sequence with all the identified protein sequences and return the category index of the protein with the highest similarity score, are usually time-consuming. Therefore, it is urgent and necessary to build an efficient protein sequence classification system. In this paper, we study the performance of protein sequence classification using single hidden layer feedforward networks (SLFNs). The recent efficient extreme learning machine (ELM) and its variants are utilized as the training algorithms. The optimally pruned ELM is first employed for protein sequence classification in this paper. To further enhance the performance, an ensemble-based SLFN structure is constructed in which multiple SLFNs with the same number of hidden nodes and the same activation function are used as ensemble members. For each ensemble member, the same training algorithm is adopted. The final category index is derived using the majority voting method. Two approaches, namely, the basic ELM and the OP-ELM, are adopted for the ensemble-based SLFNs. The performance is analyzed and compared with several existing methods using datasets obtained from the Protein Information Resource center. The experimental results show the superiority of the proposed algorithms. PMID:24795876
Sakadzić, Sava; Roussakis, Emmanuel; Yaseen, Mohammad A; Mandeville, Emiri T; Srinivasan, Vivek J; Arai, Ken; Ruvinskaya, Svetlana; Devor, Anna; Lo, Eng H; Vinogradov, Sergei A; Boas, David A
2010-09-01
Measurements of oxygen partial pressure (pO(2)) with high temporal and spatial resolution in three dimensions is crucial for understanding oxygen delivery and consumption in normal and diseased brain. Among existing pO(2) measurement methods, phosphorescence quenching is optimally suited for the task. However, previous attempts to couple phosphorescence with two-photon laser scanning microscopy have faced substantial difficulties because of extremely low two-photon absorption cross-sections of conventional phosphorescent probes. Here we report to our knowledge the first practical in vivo two-photon high-resolution pO(2) measurements in small rodents' cortical microvasculature and tissue, made possible by combining an optimized imaging system with a two-photon-enhanced phosphorescent nanoprobe. The method features a measurement depth of up to 250 microm, sub-second temporal resolution and requires low probe concentration. The properties of the probe allowed for direct high-resolution measurement of cortical extravascular (tissue) pO(2), opening many possibilities for functional metabolic brain studies.
Optimization of the Surface Structure on Black Silicon for Surface Passivation
NASA Astrophysics Data System (ADS)
Jia, Xiaojie; Zhou, Chunlan; Wang, Wenjing
2017-03-01
Black silicon shows excellent anti-reflection and thus is extremely useful for photovoltaic applications. However, its high surface recombination velocity limits the efficiency of solar cells. In this paper, the effective minority carrier lifetime of black silicon is improved by optimizing metal-catalyzed chemical etching (MCCE) method, using an Al2O3 thin film deposited by atomic layer deposition (ALD) as a passivation layer. Using the spray method to eliminate the impact on the rear side, single-side black silicon was obtained on n-type solar grade silicon wafers. Post-etch treatment with NH4OH/H2O2/H2O mixed solution not only smoothes the surface but also increases the effective minority lifetime from 161 μs of as-prepared wafer to 333 μs after cleaning. Moreover, adding illumination during the etching process results in an improvement in both the numerical value and the uniformity of the effective minority carrier lifetime.
Rossum, Huub H van; Kemperman, Hans
2017-07-26
General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize the MA for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure. Optimization was graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days and MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated and were caused by ion selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (a significant difference from the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, in which MA alarms required follow-up, a manageable number of MA alarms was generated, and these alarms proved valuable. For the management of MA alarms, several applications/requirements in the MA management software will simplify the use of MA procedures.
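A minimal sketch of the idea follows; the window length, truncation limits, and control limits below are invented placeholders, not the optimized settings obtained from the bias-detection simulations.

```python
import numpy as np

def moving_average_qc(results, window=20, low=2.0, high=8.0,
                      alarm_lo=4.3, alarm_hi=4.7):
    """Flag an MA alarm whenever the truncated moving average of the last
    `window` patient results leaves the control limits (all values illustrative)."""
    kept = [r for r in results if low <= r <= high]     # truncate extreme results
    alarms = []
    for i in range(window, len(kept) + 1):
        ma = np.mean(kept[i - window:i])
        if not (alarm_lo <= ma <= alarm_hi):
            alarms.append((i, ma))
    return alarms

rng = np.random.default_rng(3)
baseline = rng.normal(4.5, 0.4, 300)            # in-control patient results
shifted = rng.normal(4.9, 0.4, 100)             # simulated assay bias
print(len(moving_average_qc(np.concatenate([baseline, shifted]))))   # alarms after the shift
```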
De, Rajat K; Tomar, Namrata
2012-12-01
Metabolism is a complex process for energy production for cellular activity. It consists of a cascade of reactions that form a highly branched network in which the product of one reaction is the reactant of the next reaction. Metabolic pathways efficiently produce maximal amount of biomass while maintaining a steady-state behavior. The steady-state activity of such biochemical pathways necessarily incorporates feedback inhibition of the enzymes. This observation motivates us to incorporate feedback inhibition for modeling the optimal activity of metabolic pathways using flux balance analysis (FBA). We demonstrate the effectiveness of the methodology on a synthetic pathway with and without feedback inhibition. Similarly, for the first time, the Central Carbon Metabolic (CCM) pathways of Saccharomyces cerevisiae and Homo sapiens have been modeled and compared based on the above understanding. The optimal pathway, which maximizes the amount of the target product(s), is selected from all those obtained by the proposed method. For this, we have observed the concentration of the product inhibited enzymes of CCM pathway and its influence on its corresponding metabolite/substrate. We have also studied the concentration of the enzymes which are responsible for the synthesis of target products. We further hypothesize that an optimal pathway would opt for higher flux rate reactions. In light of these observations, we can say that an optimal pathway should have lower enzyme concentration and higher flux rates. Finally, we demonstrate the superiority of the proposed method by comparing it with the extreme pathway analysis.
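To illustrate the flux balance analysis step, here is a minimal linear program for a toy three-reaction pathway (invented stoichiometry and bounds); the crude capped bound on v2 only gestures at feedback inhibition and is not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network:  ->(v1) A ->(v2) B ->(v3) biomass
# Steady state S v = 0; maximize the biomass flux v3.
S = np.array([[1, -1,  0],      # metabolite A balance
              [0,  1, -1]])     # metabolite B balance
c = np.array([0, 0, -1])        # linprog minimizes, so negate v3
bounds = [(0, 10), (0, 10), (0, 10)]

# Crude stand-in for feedback inhibition: cap v2 when its product accumulates.
inhibited_bounds = [(0, 10), (0, 4), (0, 10)]

for bnds in (bounds, inhibited_bounds):
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bnds)
    print(res.x)   # optimal flux distribution; inhibition lowers the optimum
```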
Clinical Considerations for the Use of Lower Extremity Arthroplasty in the Elderly.
Otero-López, Antonio; Beaton-Comulada, David
2017-11-01
There is an increase in the aging population that has led to a surge of reported cases of osteoarthritis and a greater demand for lower extremity arthroplasty. This article aims to review the current treatment options and expectations when considering lower extremity arthroplasty in the elderly patient with an emphasis on the following subjects: (1) updated clinical guidelines for the management of osteoarthritis in the lower extremity, (2) comorbidities and risk factors in the surgical patient, (3) preoperative evaluation and optimization of the surgical patient, (4) surgical approach and implant selection, and (5) rehabilitation and life after lower extremity arthroplasty. Published by Elsevier Inc.
Controlling extreme events on complex networks
NASA Astrophysics Data System (ADS)
Chen, Yu-Zhong; Huang, Zi-Gang; Lai, Ying-Cheng
2014-08-01
Extreme events, a type of collective behavior in complex networked dynamical systems, often can have catastrophic consequences. To develop effective strategies to control extreme events is of fundamental importance and practical interest. Utilizing transportation dynamics on complex networks as a prototypical setting, we find that making the network ``mobile'' can effectively suppress extreme events. A striking, resonance-like phenomenon is uncovered, where an optimal degree of mobility exists for which the probability of extreme events is minimized. We derive an analytic theory to understand the mechanism of control at a detailed and quantitative level, and validate the theory numerically. Implications of our finding to current areas such as cybersecurity are discussed.
The pitch of short-duration fundamental frequency glissandos.
d'Alessandro, C; Rosset, S; Rossi, J P
1998-10-01
Pitch perception for short-duration fundamental frequency (F0) glissandos was studied. In the first part, new measurements using the method of adjustment are reported. Stimuli were F0 glissandos centered at 220 Hz. The parameters under study were: F0 glissando extents (0, 0.8, 1.5, 3, 6, and 12 semitones, i.e., 0, 10.17, 18.74, 38.17, 76.63, and 155.56 Hz), F0 glissando durations (50, 100, 200, and 300 ms), F0 glissando directions (rising or falling), and the extremity of F0 glissandos matched (beginning or end). In the second part, the main results are discussed: (1) perception seems to correspond to an average of the frequencies present in the vicinity of the extremity matched; (2) the higher extremities of the glissando seem more important; (3) adjustments at the end are closer to the extremities than adjustments at the beginning. In the third part, numerical models accounting for the experimental data are proposed: a time-average model and a weighted time-average model. Optimal parameters for these models are derived. The weighted time-average model achieves a 94% accurate prediction rate for the experimental data. The numerical model is successful in predicting the pitch of short-duration F0 glissandos.
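A rough rendering of the two averaging models named above, applied to a synthetic glissando; the averaging window and weighting exponent are arbitrary choices for illustration, not the optimal parameters derived in the paper.

```python
import numpy as np

def time_average_pitch(f0, frac=0.4):
    """Plain average over the final `frac` of the glissando (near the matched end)."""
    n = len(f0)
    return f0[int((1 - frac) * n):].mean()

def weighted_time_average_pitch(f0, power=2.0):
    """Weighted average giving more weight to samples near the matched extremity."""
    w = np.linspace(0, 1, len(f0)) ** power
    return np.sum(w * f0) / np.sum(w)

# 200 ms rising glissando of 6 semitones centred on 220 Hz, sampled at 1 kHz.
t = np.linspace(0, 0.2, 200)
f0 = 220 * 2 ** ((-3 + 6 * t / 0.2) / 12)
print(time_average_pitch(f0), weighted_time_average_pitch(f0))
```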
Nagy, Eszter; Apfaltrer, Georg; Riccabona, Michael; Singer, Georg; Stücklschweiger, Georg; Guss, Helmuth; Sorantin, Erich
2017-01-01
Objectives To evaluate and compare surface doses of a cone beam computed tomography (CBCT) and a multidetector computed tomography (MDCT) device in pediatric ankle and wrist phantoms. Methods Thermoluminescent dosimeters (TLD) were used to measure and compare surface doses between CBCT and MDCT in a left ankle and a right wrist pediatric phantom. In both modalities adapted pediatric dose protocols were utilized to achieve realistic imaging conditions. All measurements were repeated three times to prove test-retest reliability. Additionally, objective and subjective image quality parameters were assessed. Results Average surface doses were 3.8 ±2.1 mGy for the ankle, and 2.2 ±1.3 mGy for the wrist in CBCT. The corresponding surface doses in optimized MDCT were 4.5 ±1.3 mGy for the ankle, and 3.4 ±0.7 mGy for the wrist. Overall, mean surface dose was significantly lower in CBCT (3.0 ±1.9 mGy vs. 3.9 ±1.2 mGy, p<0.001). Subjectively rated general image quality was not significantly different between the study protocols (p = 0.421), whereas objectively measured image quality parameters were in favor of CBCT (p<0.001). Conclusions Adapted extremity CBCT imaging protocols have the potential to fall below optimized pediatric ankle and wrist MDCT doses at comparable image qualities. These possible dose savings warrant further development and research in pediatric extremity CBCT applications. PMID:28570626
Yamashita, Taiji; Miyamoto, Kenji; Yonenobu, Hitoshi
2018-06-20
A new pretreatment method using a room-temperature ionic liquid (IL) was proposed for observing wood specimens in scanning electron microscopy (SEM). A variety of concentrations of the ethanol solution of the IL, [Emim][MePO3Me], were examined to determine an optimal pretreatment procedure. It was concluded that a 10% ethanol solution of the IL was the most adequate for acquiring good SEM images. Using the optimized procedure, SEM images were taken for typical anatomical types of modern softwood and hardwood species and for archeological wood. The SEM images taken were sufficiently good for observing wood cells. The pretreatment method was also effective for archeological wood dated ca. 1600 years ago. It was thus concluded that the method developed in this study is more useful than those conventionally used. Additionally, pretreatment at high temperature was performed to confirm morphological changes in softwood. Deformation of latewood cells (tracheids) occurred when treating with undiluted IL at a high temperature of 50°C, probably due to higher accessibility of the IL into the intercellular space. Nonetheless, it was confirmed that this happens under far more extreme conditions than those of our proposed method.
Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0
NASA Astrophysics Data System (ADS)
Millán-Otoya, Jorge E.; Boettcher, Stefan
2014-03-01
Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible.[1] We approach the study of the error threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO).[2] EO has proven to be an effective heuristic to explore ground state configurations of glassy spin-systems.[3]
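For orientation, here is a generic τ-EO sketch applied to an unconstrained ±J Ising spin glass; it illustrates the rank-based update at the heart of EO but not the constrained bimodal three-spin couplings of the TSC mapping.

```python
import numpy as np

def tau_eo_ground_state(J, tau=1.4, n_steps=20000, seed=0):
    """Tau-EO heuristic for a +/-1 spin glass with symmetric coupling matrix J:
    rank the spins by local fitness and flip a poorly fit spin chosen with
    probability proportional to rank**(-tau)."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    best_s, best_E = s.copy(), -0.5 * s @ J @ s
    ranks = np.arange(1, n + 1)
    p = ranks ** (-tau)
    p /= p.sum()
    for _ in range(n_steps):
        fitness = s * (J @ s)                 # local satisfaction of each spin
        order = np.argsort(fitness)           # worst (most frustrated) spins first
        k = order[rng.choice(n, p=p)]         # low rank = bad spin = high pick probability
        s[k] = -s[k]                          # unconditional flip (no acceptance test)
        E = -0.5 * s @ J @ s
        if E < best_E:
            best_s, best_E = s.copy(), E
    return best_s, best_E

# Toy symmetric +/-1 spin glass (not the constrained three-spin TSC model).
rng = np.random.default_rng(1)
J = rng.choice([-1, 1], size=(64, 64))
J = np.triu(J, 1)
J = J + J.T
print(tau_eo_ground_state(J)[1])              # best energy found
```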
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
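A toy Monte Carlo version of the reliability calculation can make the workflow concrete; the response-surface coefficients and the distributions below are invented placeholders, not the report's fitted model or data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 200_000

# Illustrative response surface for axial buckling strength (stands in for the
# report's fitted second-order model; the coefficients are invented).
def buckling_strength(E1, D, t):
    return 1.9 * E1 * t ** 2 / D              # kips, purely illustrative

E1 = rng.normal(20.0e3, 1.0e3, N)   # ksi, modulus in the fiber direction
D  = rng.normal(15.0, 0.15, N)      # in, average cylinder diameter
t  = rng.normal(0.10, 0.005, N)     # in, wall thickness
P  = rng.normal(18.0, 2.0, N)       # kips, applied axial load

pf = np.mean(buckling_strength(E1, D, t) < P)     # Monte Carlo failure probability
print(pf, -norm.ppf(pf))                          # and the corresponding reliability index
```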
Extreme Experiences and Asking the Unaskable: An Interview with Ted Sizer.
ERIC Educational Resources Information Center
Minton, Elaine
1996-01-01
The renowned educational reformer talks about how memorable, "extreme" learning experiences have shaped his views on education; how to create collegial support; the things that have given him satisfaction; his father's influence on him; the irrepressible optimism of teenagers; taking advantage of serendipitous events; and how questioning…
Fabrication of highly efficient ZnO nanoscintillators
NASA Astrophysics Data System (ADS)
Procházková, Lenka; Gbur, Tomáš; Čuba, Václav; Jarý, Vítězslav; Nikl, Martin
2015-09-01
Photo-induced synthesis of high-efficiency ultrafast ZnO nanoparticle scintillators was demonstrated. Controlled doping with Ga(III) and La(III) ions, together with the optimized method of ZnO synthesis and subsequent two-step annealing in air and under a reducing atmosphere, allows a very high intensity of UV exciton luminescence to be achieved, up to 750% of the BGO intensity. The fabricated nanoparticles feature extremely short sub-nanosecond photoluminescence decay times. The temperature dependence of the photoluminescence spectrum within the 8-340 K range was investigated and shows the absence of visible defect-related emission over the entire temperature interval.
A genetic algorithm approach in interface and surface structure optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jian
The thesis is divided into two parts. In the first part a global optimization method is developed for the optimization of interface and surface structures. Two prototype systems are chosen to be studied. One is Si[001] symmetric tilted grain boundaries and the other is the Ag/Au-induced Si(111) surface. It is found that the Genetic Algorithm is very efficient in finding lowest-energy structures in both cases. Not only can existing structures from the experiments be reproduced, but many new structures can also be predicted using the Genetic Algorithm. Thus it is shown that the Genetic Algorithm is an extremely powerful tool for material structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seem astounding and confusing, yet the theoretical models in the paper reveal the physics insight behind the phenomena and can reproduce the experimental results well.
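A bare-bones genetic algorithm loop of the kind the thesis builds on can be sketched as follows; the "energy" here is a synthetic multi-minimum test function, not a grain-boundary or surface energy from atomistic calculations.

```python
import numpy as np

def toy_energy(x):
    """Stand-in 'structure energy': a Rastrigin-like landscape with many local minima."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def genetic_algorithm(dim=8, pop_size=40, n_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    for _ in range(n_gen):
        energies = np.array([toy_energy(x) for x in pop])
        order = np.argsort(energies)
        parents = pop[order[: pop_size // 2]]             # selection: keep the fittest half
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5                  # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, 0.1, dim)              # mutation
            kids.append(child)
        pop = np.vstack([parents, kids])
    energies = np.array([toy_energy(x) for x in pop])
    return pop[np.argmin(energies)], energies.min()

print(genetic_algorithm()[1])   # lowest "energy" structure found
```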
Adams, Bradley J; Aschheim, Kenneth W
2016-01-01
Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the conventional coding and sorting algorithms most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
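To give a sense of percentage-of-matches ranking, here is a small sketch; the codes, the unscorable marker, and the candidate records below are invented, and this is not WinID3's or the study's actual algorithm.

```python
def percent_match(postmortem, antemortem, unscorable="X"):
    """Fraction of comparable teeth whose simplified codes agree
    (illustrative scoring only)."""
    comparable = [(p, a) for p, a in zip(postmortem, antemortem)
                  if p != unscorable and a != unscorable]
    if not comparable:
        return 0.0
    return sum(p == a for p, a in comparable) / len(comparable)

# Rank hypothetical antemortem records against one postmortem record.
pm = list("VVFMCVVX")                      # e.g. V=virgin, F=filled, M=missing, C=crown
candidates = {"case_017": list("VVFMCVVV"),
              "case_042": list("VVVMCFVV"),
              "case_108": list("MMFMCVVV")}
ranked = sorted(candidates, key=lambda k: percent_match(pm, candidates[k]), reverse=True)
print(ranked)                              # best-scoring candidates first
```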
Copeland, Kari L; Anderson, Julie A; Farley, Adam R; Cox, James R; Tschumper, Gregory S
2008-11-13
To examine the effects of pi-stacking interactions between aromatic amino acid side chains and adenine-bearing ligands in crystalline protein structures, 26 toluene/(N9-methyl)adenine model configurations have been constructed from protein/ligand crystal structures. Full geometry optimizations with the MP2 method cause the 26 crystal structures to collapse to six unique structures. The complete basis set (CBS) limit of the CCSD(T) interaction energies has been determined for all 32 structures by combining explicitly correlated MP2-R12 computations with a correction for higher-order correlation effects from CCSD(T) calculations. The CCSD(T) CBS limit interaction energies of the 26 crystal structures range from -3.19 to -6.77 kcal mol(-1) and average -5.01 kcal mol(-1). The CCSD(T) CBS limit interaction energies of the optimized complexes increase by roughly 1.5 kcal mol(-1) on average to -6.54 kcal mol(-1) (ranging from -5.93 to -7.05 kcal mol(-1)). Corrections for higher-order correlation effects are extremely important for both sets of structures and are responsible for the modest increase in the interaction energy after optimization. The MP2 method overbinds the crystal structures by 2.31 kcal mol(-1) on average compared to 4.50 kcal mol(-1) for the optimized structures.
Management of Lower Extremity Long-bone Fractures in Spinal Cord Injury Patients.
Schulte, Leah M; Scully, Ryan D; Kappa, Jason E
2017-09-01
The AO classification system, used as a guide for modern fracture care and fixation, follows a basic philosophy of care that emphasizes early mobility and return to function. Lower extremity long-bone fractures in patients with spinal cord injury often are pathologic injuries that present unique challenges, to which the AO principles may not be entirely applicable. Optimal treatment achieves healing without affecting the functional level of the patient. These injuries often result from low-energy mechanisms in nonambulatory patients with osteopenic bone and a thin, insensate soft-tissue envelope. The complication rate can be high, and the outcomes can be catastrophic without proper care. Satisfactory results can be obtained through various methods of immobilization. Less frequently, internal fixation is applied. In certain cases, after discussion with the patient, amputation may be suitable. Prevention strategies aim to minimize bone loss and muscle atrophy.
NASA Astrophysics Data System (ADS)
Graves, Mark; Smith, Alexander; Batchelor, Bruce G.; Palmer, Stephen C.
1994-10-01
In the food industry there is an ever-increasing need to control and monitor food quality. In recent years fully automated x-ray inspection systems have been used to inspect food on-line for foreign-body contamination. These systems involve a complex integration of x-ray imaging components with state-of-the-art high-speed image processing. The quality of the x-ray image obtained by such systems is very poor compared with images obtained from other inspection processes; this makes reliable detection of very small, low-contrast defects extremely difficult. It is therefore extremely important to optimize the x-ray imaging components to give the very best image possible. In this paper we present a method of analyzing the x-ray imaging system in order to consider the contrast obtained when viewing small defects.
Plastic Surgery Challenges in War Wounded I: Flap-Based Extremity Reconstruction
Sabino, Jennifer M.; Slater, Julia; Valerio, Ian L.
2016-01-01
Scope and Significance: Reconstruction of traumatic injuries requiring tissue transfer begins with aggressive resuscitation and stabilization. Systematic advances in acute casualty care at the point of injury have improved survival and allowed for increasingly complex treatment before definitive reconstruction at tertiary medical facilities outside the combat zone. As a result, the complexity of the limb salvage algorithm has increased over 14 years of combat activities in Iraq and Afghanistan. Problem: Severe poly-extremity trauma in combat casualties has led to a large number of extremity salvage cases. Advanced reconstructive techniques coupled with regenerative medicine applications have played a critical role in the restoration, recovery, and rehabilitation of functional limb salvage. Translational Relevance: The past 14 years of war trauma have increased our understanding of tissue transfer for extremity reconstruction in the treatment of combat casualties. Injury patterns, flap choice, and reconstruction timing are critical variables to consider for optimal outcomes. Clinical Relevance: Subacute reconstruction with specifically chosen flap tissue and donor site location based on individual injuries result in successful tissue transfer, even in critically injured patients. These considerations can be combined with regenerative therapies to optimize massive wound coverage and limb salvage form and function in previously active patients. Summary: Traditional soft tissue reconstruction is integral in the treatment of war extremity trauma. Pedicle and free flaps are a critically important part of the reconstructive ladder for salvaging extreme extremity injuries that are seen as a result of the current practice of war. PMID:27679751
Low-loss, submicron chalcogenide integrated photonics with chlorine plasma etching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiles, Jeff; Malinowski, Marcin; Rao, Ashutosh
A chlorine plasma etching-based method for the fabrication of high-performance chalcogenide-based integrated photonics on silicon substrates is presented. By optimizing the etching conditions, chlorine plasma is employed to produce extremely low-roughness etched sidewalls on waveguides with minimal penalty to propagation loss. Using this fabrication method, microring resonators with record-high intrinsic Q-factors as high as 450 000 and a corresponding propagation loss as low as 0.42 dB/cm are demonstrated in submicron chalcogenide waveguides. Furthermore, the developed chlorine plasma etching process is utilized to demonstrate fiber-to-waveguide grating couplers in chalcogenide photonics with high power coupling efficiency of 37% for transverse-electric polarized modes.
Method for the production of wideband THz radiation
Krafft, Geoffrey A [Newport News, VA
2008-01-01
A method for the production of extremely wide bandwidth THz radiation comprising: delivering an electron beam from a source to an undulator that does not deflect the angle or transversely move the electron beam; and optimizing the undulator to yield peak emission in the middle of the THz band (1 THz). These objectives are accomplished by magnetically bending the orbit of the incoming electron beam in the undulator according to the function x(z) = x_0 exp(-z^2/2σ^2) and controlling the transverse magnetic field to be B(z) = B_0 (1 - z^2/σ^2) exp(-z^2/2σ^2).
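The two expressions are consistent under the usual small-angle assumption that the transverse field is proportional to the curvature of the design orbit, i.e. B(z) ∝ -x''(z). The short numerical check below (a sketch, not part of the patent) differentiates x(z) and compares the result with the stated B(z) up to the overall scale x_0/σ^2.

```python
import numpy as np

x0, sigma = 1.0, 1.0
z = np.linspace(-5 * sigma, 5 * sigma, 2001)

x = x0 * np.exp(-z**2 / (2 * sigma**2))                       # orbit bend x(z)
B = (1 - z**2 / sigma**2) * np.exp(-z**2 / (2 * sigma**2))    # field shape B(z)/B0

# Second derivative of the orbit by finite differences.
d2x = np.gradient(np.gradient(x, z), z)

# -x''(z) should match B(z) up to the constant x0/sigma^2.
scale = x0 / sigma**2
print(np.max(np.abs(-d2x / scale - B)))   # small residual (finite-difference error only)
```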
Sakuma, Kaname; Tanaka, Akira; Mataga, Izumi
2016-12-01
The collagen gel droplet-embedded culture drug sensitivity test (CD-DST) is an anticancer drug sensitivity test that uses a method of three-dimensional culture of extremely small samples, and it is suited to primary cultures of human cancer cells. It is a useful method for oral squamous cell carcinoma (OSCC), in which the cancer tissues available for testing are limited. However, since the optimal contact concentrations of anticancer drugs have yet to be established in OSCC, CD-DST for detecting drug sensitivities of OSCC is currently performed by applying the optimal contact concentrations for stomach cancer. In the present study, squamous carcinoma cell lines from human oral cancer were used to investigate the optimal contact concentrations of cisplatin (CDDP) and fluorouracil (5-FU) during CD-DST for OSCC. CD-DST was performed in 7 squamous cell carcinoma cell lines derived from human oral cancers (Ca9-22, HSC-3, HSC-4, HO-1-N-1, KON, OSC-19 and SAS) using CDDP (0.15, 0.3, 1.25, 2.5, 5.0 and 10.0 µg/ml) and 5-FU (0.4, 0.9, 1.8, 3.8, 7.5, 15.0 and 30.0 µg/ml), and the optimal contact concentrations were calculated from the clinical response rate of OSCC to single-drug treatment and the in vitro efficacy rate curve. The optimal concentrations were 0.5 µg/ml for CDDP and 0.7 µg/ml for 5-FU. The antitumor efficacy of CDDP at this optimal contact concentration in CD-DST was compared to the antitumor efficacy in the nude mouse method. The T/C values, which were calculated as the ratio of the colony volume of the treatment group and the colony volume of the control group, at the optimal contact concentration of CDDP and of the nude mouse method were almost in agreement (P<0.05) and predicted clinical efficacy, indicating that the calculated optimal contact concentration is valid. Therefore, chemotherapy for OSCC based on anticancer drug sensitivity tests offers patients a greater freedom of choice and is likely to assume a greater importance in the selection of treatment from the perspectives of function preservation and quality of life, as well as representing a treatment option for unresectable, intractable or recurrent cases.
Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav
2018-04-01
A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).
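A minimal sketch of the SNR-matched model selection idea: estimate the SNR of the incoming audio and hand it to the model pre-trained at the nearest SNR lattice point. The SNR estimator and the model registry below are placeholders, not the authors' implementation.

```python
import numpy as np

# Hypothetical registry of speaker-ID models pre-trained at a lattice of SNR levels (dB).
snr_lattice = [-10, -5, 0, 5, 10, 20, 30]
models = {snr: f"sid_model_trained_at_{snr}dB" for snr in snr_lattice}   # placeholders

def estimate_snr_db(signal, noise_floor):
    """Crude SNR estimate from average powers (placeholder for a real SAD-based estimator)."""
    p_sig = np.mean(signal**2)
    p_noise = np.mean(noise_floor**2) + 1e-12
    return 10 * np.log10(max(p_sig / p_noise - 1, 1e-6))

def select_model(signal, noise_floor):
    snr = estimate_snr_db(signal, noise_floor)
    nearest = min(snr_lattice, key=lambda s: abs(s - snr))
    return models[nearest], snr

rng = np.random.default_rng(0)
noise = 0.1 * rng.standard_normal(16000)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000) + noise
print(select_model(speech, noise))
```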
NASA Astrophysics Data System (ADS)
Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.
2018-01-01
A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.
Should psychology be ‘positive’? Letting the philosophers speak
Oyebode, Femi
2014-01-01
This is a brief commentary on the value of optimism in therapy. It draws on the philosophical writings of Schopenhauer and Aristotle. It suggests that the modern preoccupation with optimism may be as extreme as the bleak pessimistic outlook favoured by Schopenhauer. PMID:25237498
Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm
NASA Astrophysics Data System (ADS)
Moumen, Abdelkader; Sissaoui, Hocine
2017-03-01
The vulnerability of digital-image communication is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It combines symmetric encryption, asymmetric encryption and steganography. The image is encrypted using a symmetric algorithm; then the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least-significant-bit (LSB) steganographic scheme. The analysis results show that while enjoying the faster computation, our method performs close to optimal in terms of accuracy.
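The scheme can be sketched with standard primitives: AES for the image, RSA-OAEP to wrap the AES session key, and LSB embedding of the wrapped key in the ciphered image. The sketch below uses the pycryptodome package and numpy; the choice of AES-EAX, 2048-bit RSA and a random stand-in image are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)       # stand-in plaintext image

# --- Sender ---
rsa_key = RSA.generate(2048)                                      # receiver's key pair
session_key = get_random_bytes(16)
aes = AES.new(session_key, AES.MODE_EAX)                          # CTR-based mode
cipher_img = np.frombuffer(aes.encrypt(image.tobytes()), dtype=np.uint8).copy()

wrapped = PKCS1_OAEP.new(rsa_key.publickey()).encrypt(session_key)
bits = np.unpackbits(np.frombuffer(wrapped, dtype=np.uint8))
cipher_img[:bits.size] = (cipher_img[:bits.size] & 0xFE) | bits   # LSB-embed wrapped key

# --- Receiver (holds the RSA private key and the nonce) ---
extracted = np.packbits(cipher_img[:bits.size] & 1).tobytes()
key = PKCS1_OAEP.new(rsa_key).decrypt(extracted)
decrypted = np.frombuffer(AES.new(key, AES.MODE_EAX, nonce=aes.nonce)
                          .decrypt(cipher_img.tobytes()), dtype=np.uint8).reshape(image.shape)

# Embedding only disturbs the least significant bit of the first few pixels.
assert np.array_equal(decrypted.ravel()[bits.size:], image.ravel()[bits.size:])
```

Because EAX encrypts in counter mode, overwriting ciphertext LSBs only disturbs the least significant bit of the corresponding decrypted pixels, so the embedding cost is a bounded, localized artifact.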
NASA Astrophysics Data System (ADS)
Ozheredov, V. A.; Breus, T. K.; Gurfinkel, Yu. I.; Matveeva, T. A.
2014-12-01
A new approach to finding the dependence between heliophysical and meteorological factors and physiological parameters is considered, based on the preliminary filtering of precedents (outliers). The sought-after dependence is masked by extraneous influences which cannot be taken into account. Therefore, the typically calculated correlation between the external-influence (x) and physiology (y) parameters is extremely low and does not allow their interdependence to be conclusively proved. A robust method for removing precedents (outliers) from the database is proposed, based on the intelligent sorting of the polynomial curves of possible dependences y(x), followed by filtering out the precedents which are far away from y(x) and optimizing the coefficient of nonlinear correlation between the regular, i.e., remaining, precedents. This optimization problem is shown to be a search for a maximum in the absence of a gradient, and it requires the use of a genetic algorithm based on the Gray code. The relationships between various medical and biological parameters and characteristics of space and terrestrial weather are obtained and verified using the cross-validation method. It is shown that, by filtering out no more than 20% of precedents, it is possible to obtain a nonlinear correlation coefficient of no less than 0.5. A comparison of the proposed precedent (outlier) filtering method with the least-squares method (LSM) for determining the optimal polynomial, using multiple independent tests (Monte Carlo method) of models that are as close as possible to real dependences, has shown that LSM performs considerably worse than the proposed method.
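A minimal sketch of the precedent-filtering idea: fit a candidate polynomial y(x), discard the precedents that lie farthest from it (within a 20% budget), and report the correlation over the remaining "regular" precedents. Ordinary least squares replaces the paper's Gray-code genetic algorithm here, and the polynomial degree, data and outlier fraction are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
y = 0.8 * x**3 - 0.5 * x + 0.1 * rng.standard_normal(n)          # weak underlying dependence
y[rng.choice(n, 40, replace=False)] += rng.normal(0, 2.0, 40)    # 20% masking "precedents"

def corr_after_filtering(x, y, degree=3, drop_fraction=0.2):
    coeffs = np.polyfit(x, y, degree)                      # candidate dependence y(x)
    resid = np.abs(y - np.polyval(coeffs, x))
    keep = resid <= np.quantile(resid, 1 - drop_fraction)  # discard the farthest precedents
    # "nonlinear correlation": correlation between y and the fitted y(x) on the kept points
    return np.corrcoef(y[keep], np.polyval(coeffs, x[keep]))[0, 1]

print("correlation, all points :", np.corrcoef(y, np.polyval(np.polyfit(x, y, 3), x))[0, 1])
print("correlation, filtered   :", corr_after_filtering(x, y))
```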
Preliminary Design of Low-Thrust Interplanetary Missions
NASA Technical Reports Server (NTRS)
Sims, Jon A.; Flanagan, Steve N.
1997-01-01
For interplanetary missions, highly efficient electric propulsion systems can be used to increase the mass delivered to the destination and/or reduce the trip time over typical chemical propulsion systems. This technology is being demonstrated on the Deep Space 1 mission, part of NASA's New Millennium Program, which validates technologies that can lower the cost and risk and enhance the performance of future missions. With the successful demonstration on Deep Space 1, future missions can consider electric propulsion as a viable propulsion option. Electric propulsion systems, while highly efficient, produce only a small amount of thrust. As a result, the engines operate during a significant fraction of the trajectory. This characteristic makes it much more difficult to find optimal trajectories. The methods for optimizing low-thrust trajectories are typically categorized as either indirect or direct. Indirect methods are based on the calculus of variations, resulting in a two-point boundary value problem that is solved by satisfying terminal constraints and targeting conditions. These methods are subject to extreme sensitivity to the initial guess of the variables, some of which are not physically intuitive. Adding a gravity assist to the trajectory compounds the sensitivity. Direct methods parameterize the problem and use nonlinear programming techniques to optimize an objective function by adjusting a set of variables. A variety of methods of this type have been examined with varying results. These methods are subject to the limitations of the nonlinear programming techniques. In this paper we present a direct method intended to be used primarily for preliminary design of low-thrust interplanetary trajectories, including those with multiple gravity assists. Preliminary design implies a willingness to accept limited accuracy to achieve an efficient algorithm that executes quickly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, William; Zucker, Jeremy; Baxter, Douglas
We report the application of a recently proposed approach for modeling biological systems using a maximum entropy production rate principle in lieu of having in vivo rate constants. The method is applied in four steps: (1) a new ODE-based optimization approach based on Marcelin's 1910 mass action equation is used to obtain the maximum entropy distribution, (2) the predicted metabolite concentrations are compared to those generally expected from experiment using a loss function from which post-translational regulation of enzymes is inferred, (3) the system is re-optimized with the inferred regulation, from which rate constants are determined from the metabolite concentrations and reaction fluxes, and finally (4) a full ODE-based, mass action simulation with rate parameters and allosteric regulation is obtained. From the last step, the power characteristics and resistance of each reaction can be determined. The method is applied to the central metabolism of Neurospora crassa, and the flow of material through the three competing pathways of upper glycolysis, the non-oxidative pentose phosphate pathway, and the oxidative pentose phosphate pathway is evaluated as a function of the NADP/NADPH ratio. It is predicted that regulation of phosphofructokinase (PFK) and flow through the pentose phosphate pathway are essential for preventing an extreme level of fructose 1,6-bisphosphate accumulation. Such an extreme level of fructose 1,6-bisphosphate would otherwise result in a glassy cytoplasm with limited diffusion, dramatically decreasing the entropy and energy production rate and, consequently, biological competitiveness.
HARPS-N OBSERVES THE SUN AS A STAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumusque, Xavier; Glenday, Alex; Phillips, David F.
Radial velocity (RV) perturbations induced by stellar surface inhomogeneities including spots, plages and granules currently limit the detection of Earth-twins using Doppler spectroscopy. Such stellar noise is poorly understood for stars other than the Sun because their surface is unresolved. In particular, the effects of stellar surface inhomogeneities on observed stellar radial velocities are extremely difficult to characterize, and thus developing optimal correction techniques to extract true stellar radial velocities is extremely challenging. In this paper, we present preliminary results of a solar telescope built to feed full-disk sunlight into the HARPS-N spectrograph, which is in turn calibrated with an astro-comb. This setup enables long-term observation of the Sun as a star with state-of-the-art sensitivity to RV changes. Over seven days of observing in 2014, we show an average 50 cm s⁻¹ RV rms over a few hours of observation. After correcting observed radial velocities for spot and plage perturbations using full-disk photometry of the Sun, we lower the weekly RV rms by a factor of two, to 60 cm s⁻¹. The solar telescope is now entering routine operation, and will observe the Sun every clear day for several hours. We will use these radial velocities combined with data from solar satellites to improve our understanding of stellar noise and develop optimal correction methods. If successful, these new methods should enable the detection of Venus over the next two to three years, thus demonstrating the possibility of detecting Earth-twins around other solar-like stars using the RV technique.
NASA Astrophysics Data System (ADS)
Afentoulis, Vasileios; Mohammadi, Bijan; Tsoukala, Vasiliki
2017-04-01
The coastal zone is a distinctive geographical region: it concentrates a wide range of social and human activities and behaves as a complex and fragile system of natural variables. Coastal communities are increasingly at risk from serious coastal hazards, such as shoreline erosion and flooding related to extreme hydro-meteorological events: storm surges, heavy precipitation, tsunamis and tides. In order to investigate the impact of these extreme events on the coastal zone, it is necessary to describe the driving mechanisms which contribute to its destabilization, and more precisely the interaction between the wave forces and the transport of sediment. The aim of the present study is to examine the capability of simulating coastal zone processes under extreme wave events, using numerical models, in the coastal area of Rethymno, Greece. Rethymno city is one of the eleven case study areas of the PEARL (Preparing for Extreme And Rare events in coastal regions) project, an EU-funded research project which aims at developing adaptive risk management strategies for coastal communities, focusing on extreme hydro-meteorological events, with a multidisciplinary approach integrating social, environmental and technical research and innovation so as to increase the resilience of coastal regions all over the world. Within this framework, three different numerical models have been used: MIKE 21 (DHI), the XBeach model, and a numerical formulation for sea bed evolution developed by Afaf Bouharguane and Bijan Mohammadi (2013). For the determination of the wave and hydrodynamic conditions, as well as the assessment of the sediment transport components, the MIKE 21 SW and MIKE 21 FM modules have been applied, and the bathymetry of Rethymno is arranged into a 2D unstructured mesh. This method of discretization was selected because of its ability to easily represent the complex geometry of the coastal zone. It allows smaller-scale wave characteristics to be represented at a finer resolution near the shore and the shoreline structures, and the corresponding offshore characteristics at a coarser resolution. For the investigation of the morphological evolution of the sandy bed, a new numerical model has been used. The proposed model is based on shallow water equations and on minimization principles, in order to investigate the coupling between the flow and the sediment, considering the sea bed as a structure with low stiffness. Minimization principles have been used many times in the past to design defense structures against beach erosion. In previous works, the designed structures were independent of time and were built once and for all. The present method goes one step further by allowing the structure to change in time. The fundamental assumption of this method is that the bed adapts to the flow through a form of optimal sand transport that minimizes an energy expression; optimal transport is understood here as minimal change in the bed shape. Finally, in order to verify the accuracy of this formulation, the output is compared with the results of the XBeach model under the same simulation conditions.
Taguchi method of experimental design in materials education
NASA Technical Reports Server (NTRS)
Weiser, Martin W.
1993-01-01
Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
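To make the classroom point concrete, the sketch below builds the standard L4(2³) orthogonal array, runs a mock experiment, and computes Taguchi's larger-is-better signal-to-noise ratio together with the factor main effects; the response model and factor labels are invented for illustration.

```python
import numpy as np

# L4 orthogonal array: 4 trials, 3 two-level factors (levels coded 0/1).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def run_trial(levels, n_repeats=3, rng=np.random.default_rng(0)):
    """Mock response: an (assumed) process where factor A helps, B hurts, C is neutral."""
    a, b, c = levels
    return 10 + 3 * a - 2 * b + 0 * c + rng.normal(0, 0.5, n_repeats)

def sn_larger_is_better(y):
    """Taguchi larger-is-better S/N ratio: -10 log10( mean(1 / y^2) )."""
    return -10 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

sn = np.array([sn_larger_is_better(run_trial(row)) for row in L4])

# Main effect of each factor = mean S/N at level 1 minus mean S/N at level 0.
for j, name in enumerate("ABC"):
    effect = sn[L4[:, j] == 1].mean() - sn[L4[:, j] == 0].mean()
    print(f"factor {name}: effect on S/N = {effect:+.2f} dB")
```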
Image deblurring based on nonlocal regularization with a non-convex sparsity constraint
NASA Astrophysics Data System (ADS)
Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi
2018-04-01
In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to the traditional local regularization methods. Despite the success of this technique, in order to obtain computational efficiency, a convex regularizing functional is exploited in most existing methods, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., should be characterized by an extremely heavy-tailed distribution rather than a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.
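A toy one-dimensional analogue of the formulation: a quadratic data-fit term plus a non-convex (p < 1) penalty on differences, minimized with a general-purpose optimizer. A local difference operator stands in for the paper's nonlocal one, and the kernel, exponent, weight and smoothing constant are assumed values.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n); x_true[20:28] = 1.0; x_true[40] = 2.0         # piecewise-sparse signal
k = np.array([0.25, 0.5, 0.25])                                     # known blur kernel
y = np.convolve(x_true, k, mode="same") + 0.01 * rng.standard_normal(n)

p, lam, eps = 0.5, 0.05, 1e-6                                       # non-convex exponent p < 1

def objective(x):
    resid = np.convolve(x, k, mode="same") - y                      # data-fit term
    dx = np.diff(x)                                                 # local differences (stand-in
    return 0.5 * np.sum(resid**2) + lam * np.sum((dx**2 + eps)**(p / 2))  # for nonlocal ones)

res = minimize(objective, x0=y.copy(), method="L-BFGS-B")
print("data misfit:", np.linalg.norm(np.convolve(res.x, k, mode="same") - y))
```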
Optimizing prescribed fire allocation for managing fire risk in central Catalonia.
Alcasena, Fermín J; Ager, Alan A; Salis, Michele; Day, Michelle A; Vega-Garcia, Cristina
2018-04-15
We used spatial optimization to allocate and prioritize prescribed fire treatments in the fire-prone Bages County, central Catalonia (northeastern Spain). The goal of this study was to identify suitable strategic locations on forest lands for fuel treatments in order to: 1) disrupt major fire movements, 2) reduce ember emissions, and 3) reduce the likelihood of large fires burning into residential communities. We first modeled fire spread, hazard and exposure metrics under historical extreme fire weather conditions, including node influence grid for surface fire pathways, crown fraction burned and fire transmission to residential structures. Then, we performed an optimization analysis on individual planning areas to identify production possibility frontiers for addressing fire exposure and explore alternative prescribed fire treatment configurations. The results revealed strong trade-offs among different fire exposure metrics, showed treatment mosaics that optimize the allocation of prescribed fire, and identified specific opportunities to achieve multiple objectives. Our methods can contribute to improving the efficiency of prescribed fire treatment investments and wildfire management programs aimed at creating fire resilient ecosystems, facilitating safe and efficient fire suppression, and safeguarding rural communities from catastrophic wildfires. The analysis framework can be used to optimally allocate prescribed fire in other fire-prone areas within the Mediterranean region and elsewhere.
Optimal simultaneous superpositioning of multiple structures with missing data.
Theobald, Douglas L; Steindel, Phillip A
2012-08-01
Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.
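For context, the sketch below implements the common practice the paper improves on: least-squares (Kabsch) superposition restricted to the positions present in both structures, with missing points handled by a boolean mask. The paper's EM-based approach generalizes this; the coordinates here are synthetic.

```python
import numpy as np

def kabsch_superpose(P, Q, mask):
    """Rotate/translate Q onto P using only the rows where mask is True."""
    Pm, Qm = P[mask], Q[mask]
    Pc, Qc = Pm - Pm.mean(0), Qm - Qm.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (Q - Qm.mean(0)) @ R.T + Pm.mean(0), R

rng = np.random.default_rng(3)
P = rng.normal(size=(30, 3))                          # reference structure
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0]) + 0.05 * rng.normal(size=P.shape)
mask = np.ones(30, dtype=bool); mask[10:15] = False   # five 'missing' positions

Q_fit, _ = kabsch_superpose(P, Q, mask)
rmsd = np.sqrt(np.mean(np.sum((Q_fit[mask] - P[mask]) ** 2, axis=1)))
print(f"RMSD over shared points: {rmsd:.3f}")
```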
Evolutionary Strategies for Protein Folding
NASA Astrophysics Data System (ADS)
Murthy Gopal, Srinivasa; Wenzel, Wolfgang
2006-03-01
The free-energy approach to predicting protein tertiary structure describes the native state of a protein as the global minimum of an appropriate free-energy force field. The low-energy region of the free-energy landscape of a protein is extremely rugged. Efficient optimization methods must therefore speed up the search for the global optimum by avoiding high-energy transition states, adopting large-scale moves, or accepting unphysical intermediates. Here we investigate evolutionary strategies (ES) for optimizing a protein conformation in our all-atom free-energy force field [1,2]. A set of random conformations is evolved using an ES to obtain a diverse population containing low-energy structures. The ES is shown to balance energy improvement while maintaining diversity in structures. The ES is implemented as a master-client model for distributed computing. Starting from random structures and using this optimization technique, we were able to fold a 20 amino-acid helical protein and a 16 amino-acid beta hairpin [3]. We compare ES to the basin-hopping method. [1] T. Herges and W. Wenzel, Biophys. J. 87, 3100 (2004). [2] A. Verma and W. Wenzel, Stabilization and folding of beta-sheet and alpha-helical proteins in an all-atom free energy model (submitted, 2005). [3] S. M. Gopal and W. Wenzel, Evolutionary Strategies for Protein Folding (in preparation).
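A minimal (mu+lambda) evolution strategy of the kind described, applied to a rugged stand-in energy (the Rastrigin function) rather than an all-atom force field; the population sizes, step-size schedule and test function are assumptions for illustration.

```python
import numpy as np

def energy(x):
    """Rugged stand-in for a free-energy landscape (Rastrigin function)."""
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def mu_plus_lambda_es(dim=10, mu=15, lam=60, sigma=0.5, generations=300, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.12, 5.12, size=(mu, dim))          # random starting "conformations"
    for _ in range(generations):
        parents = pop[rng.integers(0, mu, lam)]
        offspring = parents + sigma * rng.standard_normal((lam, dim))
        union = np.vstack([pop, offspring])                 # (mu + lambda) selection
        pop = union[np.argsort(energy(union))[:mu]]
        sigma *= 0.995                                      # slow decay keeps diversity early on
    return pop[0], energy(pop[0])

best, e = mu_plus_lambda_es()
print(f"best energy after optimization: {e:.3f}")
```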
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Cells that are repeated in large numbers, such as SRAM cells, usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimates, because yield calculation typically requires a large number of SPICE simulations, which account for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over the design and process variables. The model is constructed from a set of sample points obtained by SPICE simulation, and the mixture surrogate is trained on these points with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the failure-rate calculation. Based on this model, a further accelerated algorithm enhances the speed of the yield calculation even more. The method is suitable for high-dimensional process variables and multi-performance applications.
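A stripped-down sketch of the surrogate idea: draw a modest number of (mock) SPICE samples over design and process variables, fit a sparse lasso model on a polynomial feature basis, and then evaluate the cheap surrogate wherever the yield calculation would otherwise call SPICE. The synthetic response, feature basis and failure threshold are assumptions; the paper's mixture surrogate is richer than a single lasso fit.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)

def fake_spice(x):
    """Stand-in for an expensive SPICE metric (e.g., read margin) over design/process variables."""
    return (0.4 + 0.8 * x[:, 0] - 0.5 * x[:, 1] * x[:, 2] + 0.3 * x[:, 3] ** 2
            + 0.02 * rng.standard_normal(len(x)))

n_train, n_vars = 200, 6                                   # a few hundred "simulations"
X = rng.standard_normal((n_train, n_vars))
y = fake_spice(X)

surrogate = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                          Lasso(alpha=0.01, max_iter=50000))
surrogate.fit(X, y)

X_test = rng.standard_normal((10000, n_vars))              # cheap to query the surrogate
pred = surrogate.predict(X_test)
print("surrogate failure-rate estimate:", np.mean(pred < 0.0))   # assumed failure threshold
```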
A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.
Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei
2017-05-18
The relationship between the fatigue crack growth rate (da/dN) and the stress intensity factor range (ΔK) is not always linear, even in the Paris region. The effects of stress ratio on fatigue crack growth rate differ among materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. Machine learning provides a flexible approach to modeling fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic-algorithm-optimized back propagation network (GABP). The MLA-based method is validated using test data from different materials. The three MLAs are compared with each other as well as with the classical two-parameter model (K* approach). The results show that the predictions of the MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM-based algorithm shows the best overall agreement with the experimental data of the three MLAs, owing to its global optimization and extrapolation ability.
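An extreme learning machine is a single hidden layer with random, fixed input weights and a least-squares solve for the output weights. The sketch below fits one to a synthetic da/dN-ΔK relation on logarithmic axes; the hidden-layer size, activation, ridge term and Paris-law-style data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic crack-growth data, roughly Paris-law shaped on log-log axes with some curvature.
dK = np.linspace(10, 60, 120)                            # stress intensity range (toy units)
log_dadN = (-11 + 3.2 * np.log10(dK) + 0.15 * np.sin(np.log10(dK) * 8)
            + 0.05 * rng.standard_normal(dK.size))

X = np.log10(dK).reshape(-1, 1)
y = log_dadN

# Extreme learning machine: random hidden layer, ridge-regularized least-squares readout.
n_hidden = 40
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                                   # hidden-layer activations
beta = np.linalg.solve(H.T @ H + 1e-6 * np.eye(n_hidden), H.T @ y)

pred = np.tanh(X @ W + b) @ beta
print("training RMSE (log units):", np.sqrt(np.mean((pred - y) ** 2)))
```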
NASA Astrophysics Data System (ADS)
Smith, Joshua; Hinterberger, Michael; Hable, Peter; Koehler, Juergen
2014-12-01
Extended battery system lifetime and reduced costs are essential to the success of electric vehicles. An effective thermal management strategy is one method of enhancing system lifetime and increasing vehicle range. Vehicle-typical space restrictions favor the minimization of battery thermal management system (BTMS) size and weight, making BTMS production and subsequent vehicle integration extremely difficult and complex. Due to these space requirements, a cooling plate as part of a water-glycerol cooling circuit is commonly implemented. This paper presents a computational fluid dynamics (CFD) model and multi-objective analysis technique for determining the thermal effect of coolant flow rate and inlet temperature in a cooling plate, over a range of vehicle operating conditions, on a battery system, thereby providing a dynamic input for one-dimensional models. Traditionally, one-dimensional vehicular thermal management system models assume a static heat input from components such as a battery system: as a result, the components are designed for a set coolant input (flow rate and inlet temperature). Such a design method is insufficient for dynamic thermal management models and control strategies, thereby compromising system efficiency. The presented approach allows for optimal BTMS design and integration in the vehicular coolant circuit.
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.
Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G
2014-12-01
Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ~1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars. The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
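The least-squares optimization step can be illustrated at toy scale: given a random, sparse dose-influence matrix standing in for the MC-generated one, solve for non-negative spot weights that best match a uniform prescription. The problem size and prescription are invented values; the clinical optimizer additionally handles DVH and critical-structure objectives.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

n_voxels, n_spots = 500, 80
D = rng.random((n_voxels, n_spots)) * (rng.random((n_voxels, n_spots)) < 0.1)  # sparse influence
target = np.full(n_voxels, 2.0)          # prescribed dose per voxel (toy units)

# Spot weights must be non-negative, so solve min ||D w - target||_2 subject to w >= 0.
w, residual = nnls(D, target)

dose = D @ w
print(f"mean dose {dose.mean():.2f}, min {dose.min():.2f}, max {dose.max():.2f}, "
      f"active spots {np.count_nonzero(w)}")
```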
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by selecting a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from that of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes to the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the extreme potential of our approach.
Have human activities changed the frequencies of absolute extreme temperatures in eastern China?
NASA Astrophysics Data System (ADS)
Wang, Jun; Tett, Simon F. B.; Yan, Zhongwei; Feng, Jinming
2018-01-01
Extreme temperatures affect populous regions, like eastern China, causing substantial socio-economic losses. It is beneficial to explore whether the frequencies of absolute or threshold-based extreme temperatures have been changed by human activities, such as anthropogenic emissions of greenhouse gases (GHGs). In this study, we compared observed and multi-model-simulated changes in the frequencies of summer days, tropical nights, icy days and frosty nights in eastern China for the years 1960-2012 by using an optimal fingerprinting method. The observed long-term trends in the regional mean frequencies of these four indices were +2.36, +1.62, -0.94, -3.02 days decade⁻¹. The models performed better in simulating the observed frequency change in daytime extreme temperatures than nighttime ones. Anthropogenic influences are detectable in the observed frequency changes of these four temperature extreme indices. The influence of natural forcings could not be detected robustly in any indices. Further analysis found that the effects of GHGs changed the frequencies of summer days (tropical nights, icy days, frosty nights) by +3.48 ± 1.45 (+2.99 ± 1.35, -2.52 ± 1.28, -4.11 ± 1.48) days decade⁻¹. Other anthropogenic forcing agents (dominated by anthropogenic aerosols) offset the GHG effect and changed the frequencies of these four indices by -1.53 ± 0.78, -1.49 ± 0.94, +1.84 ± 1.07, +1.45 ± 1.26 days decade⁻¹, respectively. Little influence of natural forcings was found in the observed frequency changes of these four temperature extreme indices.
Optimizing oil spill cleanup efforts: A tactical approach and evaluation framework.
Grubesic, Tony H; Wei, Ran; Nelson, Jake
2017-12-15
Although anthropogenic oil spills vary in size, duration and severity, their broad impacts on complex social, economic and ecological systems can be significant. Questions pertaining to the operational challenges associated with the tactical allocation of human resources, cleanup equipment and supplies to areas impacted by a large spill are particularly salient when developing mitigation strategies for extreme oiling events. The purpose of this paper is to illustrate the application of advanced oil spill modeling techniques in combination with a developed mathematical model to spatially optimize the allocation of response crews and equipment for cleaning up an offshore oil spill. The results suggest that the detailed simulations and optimization model are a good first step in allowing both communities and emergency responders to proactively plan for extreme oiling events and develop response strategies that minimize the impacts of spills.
NASA Technical Reports Server (NTRS)
Chamitoff, Gregory Errol
1992-01-01
Intelligent optimization methods are applied to the problem of real-time flight control for a class of airbreathing hypersonic vehicles (AHSV). The extreme flight conditions that will be encountered by single-stage-to-orbit vehicles, such as the National Aerospace Plane, present a tremendous challenge to the entire spectrum of aerospace technologies. Flight control for these vehicles is particularly difficult due to the combination of nonlinear dynamics, complex constraints, and parametric uncertainty. An approach that utilizes all available a priori and in-flight information to perform robust, real time, short-term trajectory planning is presented.
Shapoval, S D; Savon, I L; Sofilkanych, M M
2015-03-01
General principles of treatment of patients suffering from diabetic foot syndrome are presented. It was shown that patient recovery depends not only on the quality of comprehensive treatment, but also on the optimal choice of anesthesia method and its impact on the postoperative course. Prolonged blockade of n. ischiadicus makes it possible to perform the full scope of operative intervention on the lower extremity, guarantees sufficient motor and sensory block, allows patients to forgo narcotic analgesics and to reduce the dose of strong non-narcotic analgesics, shortens the transition from phase I to phase II of the wound healing process, promotes early postoperative mobilization of patients, and constitutes an alternative to other methods of anesthesiological support.
A finite-element toolbox for the stationary Gross-Pitaevskii equation with rotation
NASA Astrophysics Data System (ADS)
Vergez, Guillaume; Danaila, Ionut; Auliac, Sylvain; Hecht, Frédéric
2016-12-01
We present a new numerical system using classical finite elements with mesh adaptivity for computing stationary solutions of the Gross-Pitaevskii equation. The programs are written as a toolbox for FreeFem++ (www.freefem.org), a free finite-element software available for all existing operating systems. This offers the advantage of hiding all technical issues related to the implementation of the finite element method, allowing various numerical algorithms to be coded easily. Two robust and optimized numerical methods were implemented to minimize the Gross-Pitaevskii energy: a steepest descent method based on Sobolev gradients and a minimization algorithm based on the state-of-the-art optimization library Ipopt. For both methods, mesh adaptivity strategies are used to reduce the computational time and increase the local spatial accuracy when vortices are present. Different run cases are made available for 2D and 3D configurations of Bose-Einstein condensates in rotation. An optional graphical user interface is also provided, allowing predefined cases, or cases with user-defined parameter files, to be run easily. We also provide several post-processing tools (like the identification of quantized vortices) that could help in extracting physical features from the simulations. The toolbox is extremely versatile and can be easily adapted to deal with different physical models.
Finding Statistically Significant Communities in Networks
Lancichinetti, Andrea; Radicchi, Filippo; Ramasco, José J.; Fortunato, Santo
2011-01-01
Community structure is one of the main structural features of networks, revealing both their internal organization and the similarity of their elementary units. Despite the large variety of methods proposed to detect communities in graphs, there is a clear need for multi-purpose techniques able to handle different types of datasets and the subtleties of community structure. In this paper we present OSLOM (Order Statistics Local Optimization Method), the first method capable of detecting clusters in networks accounting for edge directions, edge weights, overlapping communities, hierarchies and community dynamics. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations, which is estimated with tools of Extreme and Order Statistics. OSLOM can be used alone or as a refinement procedure of partitions/covers delivered by other techniques. We have also implemented sequential algorithms combining OSLOM with other fast techniques, so that the community structure of very large networks can be uncovered. Our method has performance comparable to the best existing algorithms on artificial benchmark graphs. Several applications on real networks are shown as well. OSLOM is implemented in freely available software (http://www.oslom.org), and we believe it will be a valuable tool in the analysis of networks. PMID:21559480
NASA Astrophysics Data System (ADS)
Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin
2015-11-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model as well as these four stand-alone surrogate models was compared. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, which is a high approximation accuracy, and which indicates that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constrained condition. Besides, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
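A condensed sketch of the ensemble-surrogate idea: fit two individual surrogates to samples of the simulator, weight them by inverse cross-validation error, and use the weighted ensemble as the cheap predictor inside an optimizer. Kernel ridge regression stands in for KELM here, an analytic test function stands in for the SEAR simulation model, and the inverse-error weighting rule is one common choice rather than the authors'.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

def simulator(x):
    """Stand-in for the expensive SEAR simulation (removal rate vs. remediation design)."""
    return (np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]
            + 0.02 * rng.standard_normal(len(x)))

X = rng.uniform(-1, 1, size=(60, 2))          # a small design of experiments
y = simulator(X)

models = {"KRG": GaussianProcessRegressor(normalize_y=True),
          "KRR (stand-in for KELM)": KernelRidge(kernel="rbf", alpha=1e-2, gamma=2.0)}

weights, fitted = {}, {}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    weights[name] = 1.0 / mse                  # more accurate surrogate gets a larger weight
    fitted[name] = model.fit(X, y)

total = sum(weights.values())
def ensemble_predict(Xq):
    return sum(w / total * fitted[n].predict(Xq) for n, w in weights.items())

Xq = rng.uniform(-1, 1, size=(500, 2))
print("ensemble RMSE vs simulator:",
      np.sqrt(np.mean((ensemble_predict(Xq) - simulator(Xq)) ** 2)))
```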
Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N
2000-05-01
We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV, or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than do D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory, and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
Challenges and opportunities in the manufacture and expansion of cells for therapy.
Maartens, Joachim H; De-Juan-Pardo, Elena; Wunner, Felix M; Simula, Antonio; Voelcker, Nicolas H; Barry, Simon C; Hutmacher, Dietmar W
2017-10-01
Laboratory-based ex vivo cell culture methods are largely manual in their manufacturing processes. This makes it extremely difficult to meet regulatory requirements for process validation, quality control and reproducibility. Cell culture concepts with a translational focus need to embrace a more automated approach where cell yields are able to meet the quantitative production demands, the correct cell lineage and phenotype is readily confirmed and reagent usage has been optimized. Areas covered: This article discusses the obstacles inherent in classical laboratory-based methods, their concomitant impact on cost-of-goods and that a technology step change is required to facilitate translation from bed-to-bedside. Expert opinion: While traditional bioreactors have demonstrated limited success where adherent cells are used in combination with microcarriers, further process optimization will be required to find solutions for commercial-scale therapies. New cell culture technologies based on 3D-printed cell culture lattices with favourable surface to volume ratios have the potential to change the paradigm in industry. An integrated Quality-by-Design /System engineering approach will be essential to facilitate the scaled-up translation from proof-of-principle to clinical validation.
Slow Cooling Cryopreservation Optimized to Human Pluripotent Stem Cells.
Miyazaki, Takamichi; Suemori, Hirofumi
2016-01-01
Human pluripotent stem cells (hPSCs) have the potential for unlimited expansion and differentiation into cells that form all three germ layers. Cryopreservation is one of the key processes for successful applications of hPSCs, because it allows semi-permanent preservation of cells and their easy transportation. Most animal cell lines, including mouse embryonic stem cells, are standardly cryopreserved by slow cooling; however, hPSCs have been difficult to preserve and their cell viability has been extremely low whenever cryopreservation has been attempted.Here, we investigate the reasons for failure of slow cooling in hPSC cryopreservation. Cryopreservation involves a series of steps and is not a straightforward process. Cells may die due to various reasons during cryopreservation. Indeed, hPSCs preserved by traditional methods often suffer necrosis during the freeze-thawing stages, and the colony state of hPSCs prior to cryopreservation is a major factor contributing to cell death.It has now become possible to cryopreserve hPSCs using conventional cryopreservation methods without any specific equipment. This review summarizes the advances in this area and discusses the optimization of slow cooling cryopreservation for hPSC storage.
Anderson, Jeffrey R; Barrett, Steven F
2009-01-01
Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1]. This is because computers do not have the capability to understand images and recognize complex objects within the image. However, computer segmentation methods [2], requiring user input, have been developed to quickly segment objects in serial sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double threshold segmentation method has been investigated [3] to separate objects in gray-scale images, where the gray level of the object is among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective in object recognition and understanding. The GUI provides the user with the ability to define the gray scale range of the object of interest. The lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray scale values, while observing the corresponding changes to the image. This interactive approach gives the user the power to make optimal choices in the contrast enhancement parameters.
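The two interactive controls described above amount to a linear histogram stretch between the user-selected gray bounds followed by a gamma correction. The sketch below shows that mapping on a synthetic low-contrast image; the function name, bounds and test image are ours, not the GUI's.

```python
import numpy as np

def stretch_and_gamma(img, low, high, gamma=1.0):
    """Map the user-selected gray range [low, high] to [0, 1], then apply gamma correction."""
    x = (img.astype(float) - low) / float(high - low)
    x = np.clip(x, 0.0, 1.0)                 # gray levels outside the range saturate
    return x ** gamma                        # gamma < 1 brightens midtones, > 1 darkens

# Synthetic low-contrast 8-bit image: the object's gray levels sit inside the background's range.
rng = np.random.default_rng(2)
img = rng.normal(120, 10, size=(128, 128))
img[40:80, 40:80] += 15                      # the object of interest
img = np.clip(img, 0, 255).astype(np.uint8)

enhanced = stretch_and_gamma(img, low=100, high=150, gamma=0.8)
print("original range:", img.min(), img.max(), "enhanced range:", enhanced.min(), enhanced.max())
```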
NASA Astrophysics Data System (ADS)
Taha, Ahmad Fayez
Transportation networks, wearable devices, energy systems, and the book you are reading now are all ubiquitous cyber-physical systems (CPS). These inherently uncertain systems combine physical phenomena with communication, data processing, control and optimization. Many CPSs are controlled and monitored by real-time control systems that use communication networks to transmit and receive data from systems modeled by physical processes. Existing studies have addressed a breadth of challenges related to the design of CPSs. However, there is a lack of studies on uncertain CPSs subject to dynamic unknown inputs and cyber-attacks---an artifact of the insertion of communication networks and the growing complexity of CPSs. The objective of this dissertation is to create secure, computational foundations for uncertain CPSs by establishing a framework to control, estimate and optimize the operation of these systems. With major emphasis on power networks, the dissertation deals with the design of secure computational methods for uncertain CPSs, focusing on three crucial issues---(1) cyber-security and risk-mitigation, (2) network-induced time-delays and perturbations and (3) the encompassed extreme time-scales. The dissertation consists of four parts. In the first part, we investigate dynamic state estimation (DSE) methods and rigorously examine the strengths and weaknesses of the proposed routines under dynamic attack-vectors and unknown inputs. In the second part, and utilizing high-frequency measurements in smart grids and the developed DSE methods in the first part, we present a risk mitigation strategy that minimizes the encountered threat levels, while ensuring the continual observability of the system through available, safe measurements. The developed methods in the first two parts rely on the assumption that the uncertain CPS is not experiencing time-delays, an assumption that might fail under certain conditions. To overcome this challenge, networked unknown input observers---observers/estimators for uncertain CPSs---are designed such that the effect of time-delays and cyber-induced perturbations are minimized, enabling secure DSE and risk mitigation in the first two parts. The final part deals with the extreme time-scales encompassed in CPSs, generally, and smart grids, specifically. Operational decisions for long time-scales can adversely affect the security of CPSs for faster time-scales. We present a model that jointly describes steady-state operation and transient stability by combining convex optimal power flow with semidefinite programming formulations of an optimal control problem. This approach can be jointly utilized with the aforementioned parts of the dissertation work, considering time-delays and DSE. The research contributions of this dissertation furnish CPS stakeholders with insights on the design and operation of uncertain CPSs, whilst guaranteeing the system's real-time safety. Finally, although many of the results of this dissertation are tailored to power systems, the results are general enough to be applied for a variety of uncertain CPSs.
Jaeschke, Roman; Stevens, Scott M.; Goodacre, Steven; Wells, Philip S.; Stevenson, Matthew D.; Kearon, Clive; Schunemann, Holger J.; Crowther, Mark; Pauker, Stephen G.; Makdissi, Regina; Guyatt, Gordon H.
2012-01-01
Background: Objective testing for DVT is crucial because clinical assessment alone is unreliable and the consequences of misdiagnosis are serious. This guideline focuses on the identification of optimal strategies for the diagnosis of DVT in ambulatory adults. Methods: The methods of this guideline follow those described in Methodology for the Development of Antithrombotic Therapy and Prevention of Thrombosis Guidelines: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Results: We suggest that clinical assessment of pretest probability of DVT, rather than performing the same tests in all patients, should guide the diagnostic process for a first lower extremity DVT (Grade 2B). In patients with a low pretest probability of first lower extremity DVT, we recommend initial testing with D-dimer or ultrasound (US) of the proximal veins over no diagnostic testing (Grade 1B), venography (Grade 1B), or whole-leg US (Grade 2B). In patients with moderate pretest probability, we recommend initial testing with a highly sensitive D-dimer, proximal compression US, or whole-leg US rather than no testing (Grade 1B) or venography (Grade 1B). In patients with a high pretest probability, we recommend proximal compression or whole-leg US over no testing (Grade 1B) or venography (Grade 1B). Conclusions: Favored strategies for diagnosis of first DVT combine use of pretest probability assessment, D-dimer, and US. There is lower-quality evidence available to guide diagnosis of recurrent DVT, upper extremity DVT, and DVT during pregnancy. PMID:22315267
Improving Efficiency in Multi-Strange Baryon Reconstruction in d-Au at STAR
NASA Astrophysics Data System (ADS)
Leight, William
2003-10-01
We report preliminary multi-strange baryon measurements for d-Au collisions recorded at RHIC by the STAR experiment. After using classical topological analysis, in which cuts for each discriminating variable are adjusted by hand, we investigate improvements in signal-to-noise optimization using Linear Discriminant Analysis (LDA). LDA is an algorithm for finding, in the n-dimensional space of the n discriminating variables, the axis on which the signal and noise distributions are most separated. LDA is the first step in moving towards more sophisticated techniques for signal-to-noise optimization, such as Artificial Neural Nets. Because of their relatively low background and sufficiently high yields, d-Au collisions form an ideal system for studying these possibilities for improving reconstruction methods. Such improvements will be extremely important for forthcoming Au-Au runs, in which the size of the combinatoric background is a major problem in reconstruction efforts.
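Fisher's linear discriminant gives the most-separating axis described above in closed form. A minimal sketch with NumPy (the arrays and cut below are illustrative placeholders, not the STAR analysis code):

```python
import numpy as np

def lda_axis(signal, noise):
    """Fisher LDA: direction w that best separates two classes.

    signal, noise: (n_samples, n_vars) arrays of discriminating variables.
    Returns the unit vector onto which candidates are projected.
    """
    mu_s, mu_n = signal.mean(axis=0), noise.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(signal, rowvar=False) * (len(signal) - 1) \
       + np.cov(noise, rowvar=False) * (len(noise) - 1)
    w = np.linalg.solve(Sw, mu_s - mu_n)      # S_w^{-1} (mu_s - mu_n)
    return w / np.linalg.norm(w)

# Example: project candidates onto the LDA axis and cut on the 1-D score
rng = np.random.default_rng(0)
signal_mc = rng.normal(1.0, 1.0, size=(500, 4))     # stand-in signal candidates
background = rng.normal(0.0, 1.5, size=(5000, 4))   # stand-in combinatoric background
w = lda_axis(signal_mc, background)
scores = background @ w                             # higher score = more signal-like
```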
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup that improves the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25-times-smaller spot size is achieved, together with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide will establish a top-class spectrometer for studying novel exotic properties of matter under more stringent sample environment conditions, such as extreme pressures combined with small sample sizes.
Motion Compensation in Extremity Cone-Beam CT Using a Penalized Image Sharpness Criterion
Sisniega, A.; Stayman, J. W.; Yorkston, J.; Siewerdsen, J. H.; Zbijewski, W.
2017-01-01
Cone-beam CT (CBCT) for musculoskeletal imaging would benefit from a method to reduce the effects of involuntary patient motion. In particular, the continuing improvement in spatial resolution of CBCT may enable tasks such as quantitative assessment of bone microarchitecture (0.1 mm – 0.2 mm detail size), where even subtle, sub-mm motion blur might be detrimental. We propose a purely image based motion compensation method that requires no fiducials, tracking hardware or prior images. A statistical optimization algorithm (CMA-ES) is used to estimate a motion trajectory that optimizes an objective function consisting of an image sharpness criterion augmented by a regularization term that encourages smooth motion trajectories. The objective function is evaluated using a volume of interest (VOI, e.g. a single bone and surrounding area) where the motion can be assumed to be rigid. More complex motions can be addressed by using multiple VOIs. Gradient variance was found to be a suitable sharpness metric for this application. The performance of the compensation algorithm was evaluated in simulated and experimental CBCT data, and in a clinical dataset. Motion-induced artifacts and blurring were significantly reduced across a broad range of motion amplitudes, from 0.5 mm to 10 mm. Structure Similarity Index (SSIM) against a static volume was used in the simulation studies to quantify the performance of the motion compensation. In studies with translational motion, the SSIM improved from 0.86 before compensation to 0.97 after compensation for 0.5 mm motion, from 0.8 to 0.94 for 2 mm motion and from 0.52 to 0.87 for 10 mm motion (~70% increase). Similar reduction of artifacts was observed in a benchtop experiment with controlled translational motion of an anthropomorphic hand phantom, where SSIM (against a reconstruction of a static phantom) improved from 0.3 to 0.8 for 10 mm motion. Application to a clinical dataset of a lower extremity showed dramatic reduction of streaks and improvement in delineation of tissue boundaries and trabecular structures throughout the whole volume. The proposed method will support new applications of extremity CBCT in areas where patient motion may not be sufficiently managed by immobilization, such as imaging under load and quantitative assessment of subchondral bone architecture. PMID:28327471
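As a rough illustration of the objective described above, a gradient-variance sharpness term penalized by motion-trajectory roughness might be written as follows. This is only a sketch: `reconstruct` stands in for the CBCT reconstruction of the VOI under a candidate motion trajectory, and the 6-DOF parameterization and weight are assumptions, not the paper's implementation.

```python
import numpy as np

def gradient_variance(vol):
    """Image sharpness metric: variance of the gradient magnitude over the VOI."""
    gx, gy, gz = np.gradient(vol.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    return gmag.var()

def objective(motion_params, reconstruct, beta=1e-2):
    """Negative (sharpness - smoothness penalty), to be minimized.

    motion_params: flattened rigid-motion parameters, one 6-DOF node per row.
    reconstruct:   callable producing the VOI volume for those parameters.
    beta:          weight of the trajectory-roughness regularization.
    """
    traj = motion_params.reshape(-1, 6)               # 6-DOF rigid motion per time node
    vol = reconstruct(traj)
    roughness = np.sum(np.diff(traj, axis=0) ** 2)    # encourages smooth trajectories
    return -(gradient_variance(vol) - beta * roughness)
```

A derivative-free optimizer such as CMA-ES would then minimize `objective` over the motion parameters.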
A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models
NASA Astrophysics Data System (ADS)
Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele
2017-11-01
Extreme and rare events are a challenging topic in the field of turbulence. Investigating such instances with traditional numerical tools turns out to be notoriously difficult, as these tools fail to systematically sample the fluctuations around them. We propose instead that an importance-sampling Monte Carlo method can selectively highlight extreme events in remote areas of phase space and induce their occurrence. We present a new computational approach, based on the path-integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers' equation, subject to a random noise that is white in time and power-law correlated in Fourier space, we prove our concept and benchmark our results against standard CFD methods. Furthermore, we present our first results on constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
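For readers unfamiliar with the method, a bare-bones Hybrid (Hamiltonian) Monte Carlo step for a generic action S(x) looks like the sketch below. This is the textbook algorithm, not the authors' accelerated lattice implementation; `S` and `grad_S` are assumed callables for the discretized path-integral action and its gradient.

```python
import numpy as np

def hmc_step(x, S, grad_S, eps=0.01, n_leapfrog=50, rng=np.random.default_rng()):
    """One HMC update for a target density proportional to exp(-S(x)); x is a 1-D array."""
    p = rng.normal(size=x.shape)                 # refresh Gaussian momenta
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the fictitious Hamiltonian dynamics
    p_new -= 0.5 * eps * grad_S(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new
        p_new -= eps * grad_S(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_S(x_new)
    # Metropolis accept/reject on the total "energy" change
    dH = (S(x_new) + 0.5 * p_new @ p_new) - (S(x) + 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < -dH else x
```

Importance sampling of extremes then amounts to biasing S (e.g. constraining an observable) and reweighting the resulting samples.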
Application of a fast skyline computation algorithm for serendipitous searching problems
NASA Astrophysics Data System (ADS)
Koizumi, Kenichi; Hiraki, Kei; Inaba, Mary
2018-02-01
Skyline computation is a method of extracting interesting entries from a large population with multiple attributes. These entries, called skyline or Pareto-optimal entries, are known to have extreme characteristics that cannot be found by outlier detection methods. Skyline computation is an important task for characterizing large amounts of data and selecting interesting entries with extreme features. When the population changes dynamically, the task of calculating a sequence of skyline sets is called continuous skyline computation. This task is known to be difficult to perform for the following reasons: (1) information on non-skyline entries must be stored, since they may join the skyline in the future; (2) the appearance or disappearance of even a single entry can change the skyline drastically; (3) it is difficult to adopt a geometric acceleration algorithm for skyline computation tasks with high-dimensional datasets. Our new algorithm, called jointed rooted-tree (JR-tree), manages entries using a rooted tree structure. The JR-tree delays extending the tree to deep levels in order to accelerate tree construction and traversal. In this study, we present the difficulties in extracting entries tagged with a rare label in high-dimensional space and the potential of fast skyline computation in low-latency cell identification technology.
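For reference, a static (non-continuous) skyline of a small point set can be computed with a straightforward double loop. The JR-tree is the authors' own structure; the sketch below only illustrates the dominance test that defines skyline membership, assuming smaller attribute values are preferred.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is <= b in every attribute and strictly < in at least one."""
    return np.all(a <= b) and np.any(a < b)

def skyline(points):
    """Return the Pareto-optimal (skyline) rows of a 2-D array (O(n^2) brute force)."""
    pts = np.asarray(points, dtype=float)
    keep = [i for i, p in enumerate(pts)
            if not any(dominates(q, p) for j, q in enumerate(pts) if j != i)]
    return pts[keep]

data = np.random.default_rng(1).random((200, 3))
print(len(skyline(data)), "skyline entries out of", len(data))
```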
Two normal incidence collimators designed for the calibration of the extreme ultraviolet explorer
NASA Technical Reports Server (NTRS)
Jelinsky, Sharon R.; Welsh, Barry; Jelinsky, Patrick; Spiller, Eberhard
1988-01-01
Two Dall-Kirkham normal-incidence collimators have been designed to calibrate the imaging properties of the Extreme Ultraviolet Explorer over the wavelength region from 114 to 2000 A. The mirrors of the short-wavelength, 25-cm diameter collimator are superpolished Zerodur, multilayer coated for optimal reflectivity at 114 A. The mirrors of the long-wavelength, 41.25-cm diameter collimator are gold-coated Zerodur for high reflectance above 300 A. The design, performance, and future use of these collimators in the extreme ultraviolet are discussed.
An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming
NASA Astrophysics Data System (ADS)
Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu
In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general cases, we propose a dynamic programming based algorithm and prove it is optimal in special cases where the correlation only exists between p immediate adjacent observations.
Systematic parameter estimation in data-rich environments for cell signalling dynamics
Nim, Tri Hieu; Luo, Le; Clément, Marie-Véronique; White, Jacob K.; Tucker-Kellogg, Lisa
2013-01-01
Motivation: Computational models of biological signalling networks, based on ordinary differential equations (ODEs), have generated many insights into cellular dynamics, but the model-building process typically requires estimating rate parameters based on experimentally observed concentrations. New proteomic methods can measure concentrations for all molecular species in a pathway; this creates a new opportunity to decompose the optimization of rate parameters. Results: In contrast with conventional parameter estimation methods that minimize the disagreement between simulated and observed concentrations, the SPEDRE method fits spline curves through observed concentration points, estimates derivatives and then matches the derivatives to the production and consumption of each species. This reformulation of the problem permits an extreme decomposition of the high-dimensional optimization into a product of low-dimensional factors, each factor enforcing the equality of one ODE at one time slice. Coarsely discretized solutions to the factors can be computed systematically. Then the discrete solutions are combined using loopy belief propagation, and refined using local optimization. SPEDRE has unique asymptotic behaviour with runtime polynomial in the number of molecules and timepoints, but exponential in the degree of the biochemical network. SPEDRE performance is comparatively evaluated on a novel model of Akt activation dynamics including redox-mediated inactivation of PTEN (phosphatase and tensin homologue). Availability and implementation: Web service, software and supplementary information are available at www.LtkLab.org/SPEDRE Supplementary information: Supplementary data are available at Bioinformatics online. Contact: LisaTK@nus.edu.sg PMID:23426255
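The core reformulation, fitting splines to the measured concentrations, differentiating them, and matching the derivatives to the production/consumption terms, can be illustrated for a single species with SciPy. This is a sketch under assumed names and a toy one-species pathway; the actual SPEDRE factorization and loopy belief propagation are not shown.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import least_squares

# Toy pathway: d[A]/dt = -k * [A], sampled noisily at 15 time points
t = np.linspace(0.0, 10.0, 15)
k_true = 0.4
A_obs = np.exp(-k_true * t) + np.random.default_rng(0).normal(0, 0.01, t.size)

# Step 1: fit a smooth spline through the observed concentrations and differentiate it
spl = UnivariateSpline(t, A_obs, k=4, s=len(t) * 1e-4)
dA_dt = spl.derivative()(t)            # estimated derivative at each time slice

# Step 2: enforce the ODE (derivative = production - consumption) at every time slice
def residuals(params):
    (k,) = params
    return dA_dt - (-k * spl(t))

k_fit = least_squares(residuals, x0=[0.1]).x[0]
print(f"estimated rate constant k = {k_fit:.3f}")
```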
Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun
2017-01-01
Detecting plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods, which are time-consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure. Here, mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy to detect Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied between the sample sets. The optimal wavenumbers selected by the second-derivative spectra were similar for the two sample sets, indicating the efficacy of the selection. Chemometric methods, including partial least squares-discriminant analysis, support vector machines and extreme learning machines, were further used to quantitatively detect the oilseed rape leaves infected by SSR. The discriminant models using the full spectra and the optimal wavenumbers of the two sample sets were effective, achieving classification accuracies over 80%. The discriminant results for the two sample sets varied due to variations in the samples. The use of two sample sets validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves, and the similarities among the selected optimal wavenumbers in different sample sets made it feasible to simplify the models and build practical ones. Mid-infrared spectroscopy is thus a reliable and promising technique for SSR control, and this study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics for plant disease detection.
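A minimal version of such a chemometric classifier, here a support vector machine on standardized spectra with scikit-learn, could look as follows. The synthetic spectra and the injected band difference are stand-ins for the measured mid-infrared data, not the study's dataset.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 600
X = rng.normal(size=(n_samples, n_wavenumbers))   # stand-in absorbance spectra
y = rng.integers(0, 2, n_samples)                 # 0 = healthy, 1 = SSR-infected
X[y == 1, 100:120] += 0.8                         # inject a spectral difference for the infected class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```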
Extreme learning machine based optimal embedding location finder for image steganography
Aljeroudi, Yazan
2017-01-01
In image steganography, determining the optimum location for embedding the secret message precisely, with minimum distortion of the host medium, remains a challenging issue. An effective approach for selecting the best embedding location with the least deformation is still far from being achieved. To attain this goal, we propose a novel high-performance approach for image steganography, in which the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. The ELM is first trained on part of an image or any host medium and then tested in regression mode, which allows the optimal embedding location to be chosen according to the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric, and the developed ELM is further exploited to counter over-fitting during training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, fusion matrices, and mean square error (MSE). The modified ELM is found to outperform existing approaches in terms of imperceptibility. The experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image, with an improvement in imperceptibility of as much as 28% compared to existing state-of-the-art methods. PMID:28196080
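The core of an extreme learning machine is a randomly fixed hidden layer whose output weights are obtained in a single least-squares step. A minimal regression-mode ELM, independent of the steganographic pipeline above (feature and score names are invented for illustration):

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: random hidden layer + pseudo-inverse solve."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # fixed random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # fixed random biases
        H = np.tanh(X @ self.W + self.b)                            # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y                           # single least-squares solve
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Usage sketch: rank candidate embedding blocks by a predicted quality score
X = np.random.default_rng(1).random((500, 8))    # e.g. texture features per image block
y = 0.5 * X[:, 0] + X[:, 1] ** 2                 # stand-in quality metric
model = ELMRegressor(n_hidden=50).fit(X, y)
best_blocks = np.argsort(model.predict(X))[-10:] # candidate embedding locations
```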
Safety and improvement of movement function after stroke with atomoxetine: A pilot randomized trial
Ward, Andrea; Carrico, Cheryl; Powell, Elizabeth; Westgate, Philip M.; Nichols, Laurie; Fleischer, Anne; Sawaki, Lumy
2016-01-01
Background: Intensive, task-oriented motor training has been associated with neuroplastic reorganization and improved upper extremity movement function after stroke. However, to optimize such training for people with moderate-to-severe movement impairment, pharmacological modulation of neuroplasticity may be needed as an adjuvant intervention. Objective: Evaluate safety, as well as improvement in movement function, associated with motor training paired with a drug to upregulate neuroplasticity after stroke. Methods: In this double-blind, randomized, placebo-controlled study, 12 subjects with chronic stroke received either atomoxetine or placebo paired with motor training. Safety was assessed using vital signs. Upper extremity movement function was assessed using Fugl-Meyer Assessment, Wolf Motor Function Test, and Action Research Arm Test at baseline, post-intervention, and 1-month follow-up. Results: No significant between-groups differences were found in mean heart rate (95% CI, –12.4–22.6; p = 0.23), mean systolic blood pressure (95% CI, –1.7–29.6; p = 0.21), or mean diastolic blood pressure (95% CI, –10.4–13.3; p = 0.08). A statistically significant between-groups difference on Fugl-Meyer at post-intervention favored the atomoxetine group (95% CI, 1.6–12.7; p = 0.016). Conclusion: Atomoxetine combined with motor training appears safe and may optimize motor training outcomes after stroke. PMID:27858723
NASA Astrophysics Data System (ADS)
Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.
2018-05-01
To date, the problems associated with detecting errors in the digital equipment (DE) systems that automate explosion-hazardous facilities of the oil and gas complex are extremely pressing. The problem is especially acute for facilities where a loss of DE accuracy will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of an error-detection system according to a validation criterion is solved. Known methods for solving such problems have exponential computational complexity. Thus, to reduce the solution time, the validation criterion is embedded in an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on their combination [1].
An intelligent agent for optimal river-reservoir system management
NASA Astrophysics Data System (ADS)
Rieker, Jeffrey D.; Labadie, John W.
2012-09-01
A generalized software package is presented for developing an intelligent agent for stochastic optimization of complex river-reservoir system management and operations. Reinforcement learning is an approach to artificial intelligence for developing a decision-making agent that learns the best operational policies without the need for explicit probabilistic models of hydrologic system behavior. The agent learns these strategies experientially in a Markov decision process through observational interaction with the environment and simulation of the river-reservoir system using well-calibrated models. The graphical user interface for the reinforcement learning process controller includes numerous learning method options and dynamic displays for visualizing the adaptive behavior of the agent. As a case study, the generalized reinforcement learning software is applied to developing an intelligent agent for optimal management of water stored in the Truckee river-reservoir system of California and Nevada for the purpose of streamflow augmentation for water quality enhancement. The intelligent agent successfully learns long-term reservoir operational policies that specifically focus on mitigating water temperature extremes during persistent drought periods that jeopardize the survival of threatened and endangered fish species.
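A tabular Q-learning loop conveys the reinforcement-learning core described above. The toy reservoir environment below is invented for illustration and is far simpler than the calibrated Truckee river-reservoir models used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_storage_levels, n_release_actions = 10, 3
Q = np.zeros((n_storage_levels, n_release_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1          # learning rate, discount, exploration

def step(state, action):
    """Hypothetical environment: storage changes with random inflow minus release."""
    inflow = rng.integers(0, 3)
    next_state = int(np.clip(state + inflow - action, 0, n_storage_levels - 1))
    reward = -abs(next_state - 6)            # stand-in for a water-quality objective
    return next_state, reward

state = 5
for _ in range(50_000):
    # epsilon-greedy action selection
    action = rng.integers(n_release_actions) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update toward observed reward plus discounted future value
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)                    # learned release decision per storage level
```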
Design of focused and restrained subsets from extremely large virtual libraries.
Jamois, Eric A; Lin, Chien T; Waldman, Marvin
2003-11-01
With the current and ever-growing offering of reagents, along with the vast palette of organic reactions, virtual libraries accessible to combinatorial chemists can reach sizes of billions of compounds or more. Extracting practical-size subsets for experimentation has remained an essential step in the design of combinatorial libraries. A typical approach to computational library design involves enumeration of structures and properties for the entire virtual library, which may be impractical for such large libraries. This study describes a new approach, termed on-the-fly optimization (OTFO), where descriptors are computed as needed within the subset-optimization cycle and without intermediate enumeration of structures. Results reported herein highlight the advantages of coupling an ultra-fast descriptor calculation engine to subset optimization capabilities. We also show that enumeration of properties for the entire virtual library may be not only impractical but also wasteful. Successful design of focused and restrained subsets can be achieved while sampling only a small fraction of the virtual library. We also investigate the stability of the method and compare results obtained from simulated annealing (SA) and genetic algorithms (GA).
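A sketch of the simulated-annealing variant of such a subset search, with descriptors computed only for products actually visited. Function names, the target property ranges, and the cooling schedule are placeholders, not the OTFO engine; duplicate subset members are ignored for brevity.

```python
import math, random
from functools import lru_cache

random.seed(0)
N_VIRTUAL, SUBSET_SIZE = 100_000, 500

@lru_cache(maxsize=None)
def descriptors(product_id):
    """Placeholder for the ultra-fast descriptor engine; computed on demand and cached."""
    r = random.Random(product_id)            # stand-in properties, e.g. MW and logP
    return r.gauss(350, 60), r.gauss(3, 1)

def score(subset):
    """Penalty for deviating from target property ranges (lower is better)."""
    return sum(abs(mw - 350) / 350 + abs(lp - 3) for mw, lp in map(descriptors, subset))

current = random.sample(range(N_VIRTUAL), SUBSET_SIZE)
current_score, T = score(current), 1.0
for _ in range(20_000):
    cand = current.copy()
    cand[random.randrange(SUBSET_SIZE)] = random.randrange(N_VIRTUAL)  # swap one member
    delta = score(cand) - current_score
    if delta < 0 or random.random() < math.exp(-delta / T):            # Metropolis acceptance
        current, current_score = cand, current_score + delta
    T *= 0.9997                                                        # geometric cooling
```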
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods show clearly the high explanatory power of deductive methods in morphology, but they also make one open end most explicit: neutral issues do exist. Full explanation of living structure asks for three entries: functional design within architectural and transformational constraints. The transformational constraint brings necessarily in a stochastic component: an at random variation being a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilling. Nevertheless, finally the question comes up whether for animal structure a similar situation exists as in Greek Classical temples. This means that the at random variation, that is found when the optimal design is used to explain structure, comprises apart from a stochastic part also real deviations being yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".
NASA Astrophysics Data System (ADS)
Keilis-Borok, V. I.; Soloviev, A.; Gabrielov, A.
2011-12-01
We describe a uniform approach to predicting different extreme events, also known as critical phenomena, disasters, or crises. The following types of such events are considered: strong earthquakes; economic recessions (their onset and termination); surges of unemployment; surges of crime; and electoral changes of the governing party. A uniform approach is possible due to a common feature of these events: each of them is generated by a certain hierarchical dissipative complex system. After coarse-graining, such systems exhibit regular behavior patterns; among these we look for "premonitory patterns" that signal the approach of an extreme event. We introduce a methodology, based on optimal control theory, that assists disaster management in choosing an optimal set of preparedness measures undertaken in response to a prediction. Predictions with their currently realistic (limited) accuracy do allow a considerable part of the damage to be prevented through a hierarchy of preparedness measures. The accuracy of a prediction should be known, but it need not be high.
Two-phase computerized planning of cryosurgery using bubble-packing and force-field analogy.
Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed
2006-02-01
Cryosurgery is the destruction of undesired tissues by freezing, as in prostate cryosurgery, for example. Minimally invasive cryosurgery is currently performed by means of an array of cryoprobes, each in the shape of a long hypodermic needle. The optimal arrangement of the cryoprobes, which is known to have a dramatic effect on the quality of the cryoprocedure, remains an art held by the cryosurgeon, based on the cryosurgeon's experience and "rules of thumb." An automated computerized technique for cryosurgery planning is the subject matter of the current paper, in an effort to improve the quality of cryosurgery. A two-phase optimization method is proposed for this purpose, based on two previous and independent developments by this research team. Phase I is based on a bubble-packing method, previously used as an efficient method for finite element meshing. Phase II is based on a force-field analogy method, which has proven to be robust at the expense of a typically long runtime. As a proof-of-concept, results are demonstrated on a two-dimensional case of a prostate cross section. The major contribution of this study is to affirm that in many instances cryosurgery planning can be performed without extremely expensive simulations of bioheat transfer, achieved in Phase I. This new method of planning has proven to reduce planning runtime from hours to minutes, making automated planning practical in a clinical time frame.
Extreme 3D reconstruction of the final ROSETTA/PHILAE landing site
NASA Astrophysics Data System (ADS)
Capanna, Claire; Jorda, Laurent; Lamy, Philippe; Gesquiere, Gilles; Delmas, Cédric; Durand, Joelle; Garmier, Romain; Gaudon, Philippe; Jurado, Eric
2016-04-01
The Philae lander aboard the Rosetta spacecraft successfully landed on the surface of comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) after two rebounds on November 12, 2014. The final landing site, now known as "Abydos", has been identified on images acquired by the OSIRIS imaging system onboard the Rosetta orbiter [1]. The available images of Abydos are very limited in number and reveal extreme topography containing cliffs and overhangs. Furthermore, the surface is only observed under very high incidence angles of 60° on average, which implies that the images also exhibit many cast shadows. This makes it very difficult to reconstruct the 3D topography with standard methods such as photogrammetry or standard clinometry. We apply a new method called ''Multiresolution PhotoClinometry by Deformation'' (MPCD, [2]) to retrieve the 3D topography of the area around Abydos. The method works in two main steps: (i) a DTM of this region is extracted from a low-resolution MPCD global shape model of comet 67P/C-G, and (ii) the resulting triangular mesh is progressively deformed at increasing spatial sampling, down to 0.25 m, in order to match a set of 14 images of Abydos with projected pixel scales between 1 and 8 m. The image matching is performed with a quasi-Newton non-linear optimization method called L-BFGS-B [3], especially suited to large-scale problems. Finally, we also checked the compatibility of the final MPCD digital terrain model with a set of five panoramic images obtained by the CIVA-P instrument aboard Philae [4]. [1] Lamy et al., 2016, submitted. [2] Capanna et al., Three-dimensional reconstruction using multiresolution photoclinometry by deformation, The Visual Computer, v. 29(6-8) pp. 825-835, 2013. [3] Morales et al., Remark on "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization", v. 38(1) pp. 1-4, ACM Trans. Math. Softw., 2011. [4] Bibring et al., 67P/Churyumov-Gerasimenko surface properties as derived from CIVA panoramic images, Science, v. 349(6247), 2015.
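In SciPy terms, a bound-constrained quasi-Newton step of this kind corresponds to `scipy.optimize.minimize` with `method='L-BFGS-B'`. The cost function below is a placeholder for the synthetic-image-versus-OSIRIS-image mismatch, and the height parameterization and bounds are assumptions, not the MPCD code itself.

```python
import numpy as np
from scipy.optimize import minimize

def image_mismatch(vertex_heights, render, observed_images):
    """Sum of squared differences between rendered and observed images
    for a candidate set of terrain heights (placeholder implementation)."""
    return sum(np.sum((render(vertex_heights, img_id) - img) ** 2)
               for img_id, img in enumerate(observed_images))

def reconstruct(x0, render, observed_images, max_disp=5.0):
    """x0: initial heights from the low-resolution global shape model.
    Bounds keep each vertex within a plausible displacement of the prior surface."""
    bounds = [(h - max_disp, h + max_disp) for h in x0]
    res = minimize(image_mismatch, x0, args=(render, observed_images),
                   method='L-BFGS-B', bounds=bounds, options={'maxiter': 200})
    return res.x
```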
Finger Muscle Attachments for an OpenSim Upper-Extremity Model
Lee, Jong Hwa; Asakawa, Deanna S.; Dennerlein, Jack T.; Jindrich, Devin L.
2015-01-01
We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements. PMID:25853869
NASA Astrophysics Data System (ADS)
Raseman, W. J.; Kasprzyk, J. R.; Rosario-Ortiz, F.; Summers, R. S.; Stewart, J.; Livneh, B.
2016-12-01
To promote public health, the United States Environmental Protection Agency (US EPA), and similar entities around the world enact strict laws to regulate drinking water quality. These laws, such as the Stage 1 and 2 Disinfectants and Disinfection Byproducts (D/DBP) Rules, come at a cost to water treatment plants (WTPs) which must alter their operations and designs to meet more stringent standards and the regulation of new contaminants of concern. Moreover, external factors such as changing influent water quality due to climate extremes and climate change, may force WTPs to adapt their treatment methods. To grapple with these issues, decision support systems (DSSs) have been developed to aid WTP operation and planning. However, there is a critical need to better address long-term decision making for WTPs. In this poster, we propose a DSS framework for WTPs for long-term planning, which improves upon the current treatment of deep uncertainties within the overall potable water system including the impact of climate on influent water quality and uncertainties in treatment process efficiencies. We present preliminary results exploring how a multi-objective evolutionary algorithm (MOEA) search can be coupled with models of WTP processes to identify high-performing plans for their design and operation. This coupled simulation-optimization technique uses Borg MOEA, an auto-adaptive algorithm, and the Water Treatment Plant Model, a simulation model developed by the US EPA to assist in creating the D/DBP Rules. Additionally, Monte Carlo sampling methods were used to study the impact of uncertainty of influent water quality on WTP decision-making and generate plans for robust WTP performance.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, with a progressive relationship in terms of both structure type and uncertainty variables, are demonstrated to justify the computational applicability, accuracy and efficiency of the proposed method.
Summers, R. J.; Boudreaux, D. P.; Srinivasan, V. R.
1979-01-01
Steady-state continuous culture was used to optimize lean chemically defined media for a Cellulomonas sp. and Bacillus cereus strain T. Both organisms were extremely sensitive to variations in trace-metal concentrations. However, medium optimization by this technique proved rapid, and multifactor screening was easily conducted by using a minimum of instrumentation. The optimized media supported critical dilution rates of 0.571 and 0.467 h−1 for Cellulomonas and Bacillus, respectively. These values approximated maximum growth rate values observed in batch culture. PMID:16345417
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred
2015-04-01
The Big Data era has begun also in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data amount, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014 Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Also here, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates by means of three examples that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change related risks.
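As one concrete instance of the second example, comparing Pearson's and Spearman's correlation measures under heavy-tailed noise takes only a few lines of SciPy (synthetic data, for illustration only; the study's Monte Carlo designs are far more elaborate):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.5 * x + rng.standard_t(df=2, size=2000)   # heavy-tailed noise with outliers

r_pearson, _ = pearsonr(x, y)                   # sensitive to outliers
r_spearman, _ = spearmanr(x, y)                 # rank-based, more robust
print(f"Pearson r = {r_pearson:.2f}, Spearman rho = {r_spearman:.2f}")
```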
Extremely efficient flexible organic light-emitting diodes with modified graphene anode
NASA Astrophysics Data System (ADS)
Han, Tae-Hee; Lee, Youngbin; Choi, Mi-Ri; Woo, Seong-Hoon; Bae, Sang-Hoon; Hong, Byung Hee; Ahn, Jong-Hyun; Lee, Tae-Woo
2012-02-01
Although graphene films have a strong potential to replace indium tin oxide anodes in organic light-emitting diodes (OLEDs), to date, the luminous efficiency of OLEDs with graphene anodes has been limited by a lack of efficient methods to improve the low work function and reduce the sheet resistance of graphene films to the levels required for electrodes. Here, we fabricate flexible OLEDs by modifying the graphene anode to have a high work function and low sheet resistance, and thus achieve extremely high luminous efficiencies (37.2 lm W-1 in fluorescent OLEDs, 102.7 lm W-1 in phosphorescent OLEDs), which are significantly higher than those of optimized devices with an indium tin oxide anode (24.1 lm W-1 in fluorescent OLEDs, 85.6 lm W-1 in phosphorescent OLEDs). We also fabricate flexible white OLED lighting devices using the graphene anode. These results demonstrate the great potential of graphene anodes for use in a wide variety of high-performance flexible organic optoelectronics.
NASA Astrophysics Data System (ADS)
Soummer, Rémi; Pueyo, Laurent; Ferrari, André; Aime, Claude; Sivaramakrishnan, Anand; Yaitskova, Natalia
2009-04-01
We study the application of Lyot coronagraphy to future Extremely Large Telescopes (ELTs), showing that Apodized Pupil Lyot Coronagraphs enable high-contrast imaging for exoplanet detection and characterization with ELTs. We discuss the properties of the optimal pupil apodizers for this application (generalized prolate spheroidal functions). The case of a circular-aperture telescope with a central obstruction is considered in detail, and we discuss the effects of primary mirror segmentation and secondary mirror support structures as a function of the occulting mask size. In most cases where inner working distance is critical, e.g., for exoplanet detection, these additional features do not alter the solutions derived with just the central obstruction, although certain applications such as quasar-host galaxy coronagraphic observations could benefit from designs that explicitly accommodate ELT spider geometries. We illustrate coronagraphic designs for several ELT geometries, including ESO/OWL, the Thirty Meter Telescope and the Giant Magellan Telescope, and describe numerical methods for generating these designs.
The role of ensemble post-processing for modeling the ensemble tail
NASA Astrophysics Data System (ADS)
Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecasting and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches based on extreme value theory have been proposed and applied to real-world systems, allowing the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change in predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions and the use of different covariates (ensemble mean, maximum, spread, ...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peaks-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing step in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol. Soc. 134: 2051-2066. Buizza and Leutbecher, 2015: The forecast skill horizon, Q. J. R. Meteorol. Soc. 141: 3366-3382. Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting 22 (5), 1089-1100. Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes 13, 109-132. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141: 807-818.
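The peaks-over-threshold step mentioned above amounts to fitting a generalized Pareto distribution to exceedances and reading off extreme quantiles. With SciPy this can be sketched as follows (illustrative synthetic data, not the ensemble experiments of the study):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
forecast_errors = rng.gumbel(loc=0.0, scale=1.0, size=5000)   # stand-in variable

# Peaks over threshold: keep exceedances above a high quantile
u = np.quantile(forecast_errors, 0.95)
exceedances = forecast_errors[forecast_errors > u] - u

# Fit the generalized Pareto distribution to the exceedances (location fixed at 0)
shape, loc, scale = genpareto.fit(exceedances, floc=0)

# Extreme quantile of the original variable, e.g. the 99.9th percentile:
# P(X > x) = P(X > u) * (1 - F_GPD(x - u))
p_exceed = exceedances.size / forecast_errors.size
q = 0.999
x_q = u + genpareto.ppf(1 - (1 - q) / p_exceed, shape, loc=0, scale=scale)
print(f"threshold u = {u:.2f}, 99.9% quantile ~ {x_q:.2f}")
```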
Structural Design of a Horizontal-Axis Tidal Current Turbine Composite Blade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bir, G. S.; Lawson, M. J.; Li, Y.
2011-10-01
This paper describes the structural design of a tidal composite blade. The structural design is preceded by two steps: hydrodynamic design and determination of extreme loads. The hydrodynamic design provides the chord and twist distributions along the blade length that result in optimal performance of the tidal turbine over its lifetime. The extreme loads, i.e. the extreme flap and edgewise loads that the blade would likely encounter over its lifetime, are associated with extreme tidal flow conditions and are obtained using a computational fluid dynamics (CFD) software. Given the blade external shape and the extreme loads, we use a laminate-theory-based structural design to determine the optimal layout of composite laminas such that the ultimate-strength and buckling-resistance criteria are satisfied at all points in the blade. The structural design approach allows for arbitrary specification of the chord, twist, and airfoil geometry along the blade and an arbitrary number of shear webs. In addition, certain fabrication criteria are imposed, for example, each composite laminate must be an integral multiple of its constituent ply thickness. In the present effort, the structural design uses only static extreme loads; dynamic-loads-based fatigue design will be addressed in the future. Following the blade design, we compute the distributed structural properties, i.e. flap stiffness, edgewise stiffness, torsion stiffness, mass, moments of inertia, elastic-axis offset, and center-of-mass offset along the blade. Such properties are required by hydro-elastic codes to model the tidal current turbine and to perform modal, stability, loads, and response analyses.
NASA Astrophysics Data System (ADS)
von Trentini, F.; Willkofer, F.; Wood, R. R.; Schmid, F. J.; Ludwig, R.
2017-12-01
The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. Therefore, a hydro-meteorological model chain is applied. It employs high performance computing capacity of the Leibniz Supercomputing Centre facility SuperMUC to dynamically downscale 50 members of the Global Circulation Model CanESM2 over European and Eastern North American domains using the Canadian Regional Climate Model (RCM) CRCM5. Over Europe, the unique single model ensemble is conjointly analyzed with the latest information provided through the CORDEX-initiative, to better assess the influence of natural climate variability and climatic change in the dynamics of extreme events. Furthermore, these 50 members of a single RCM will enhance extreme value statistics (extreme return periods) by exploiting the available 1500 model years for the reference period from 1981 to 2010. Hence, the RCM output is applied to drive the process based, fully distributed, and deterministic hydrological model WaSiM in high temporal (3h) and spatial (500m) resolution. WaSiM and the large ensemble are further used to derive a variety of hydro-meteorological patterns leading to severe flood events. A tool for virtual perfect prediction shall provide a combination of optimal lead time and management strategy to mitigate certain flood events following these patterns.
Jones, Siana; Shun-Shin, Matthew J; Cole, Graham D; Sau, Arunashis; March, Katherine; Williams, Suzanne; Kyriacou, Andreas; Hughes, Alun D; Mayet, Jamil; Frenneaux, Michael; Manisty, Charlotte H; Whinnett, Zachary I; Francis, Darrel P
2014-04-01
This is a full-disclosure study describing Doppler patterns during iterative atrioventricular delay (AVD) optimization of biventricular pacemakers (cardiac resynchronization therapy, CRT). Doppler traces of the first 50 eligible patients undergoing iterative Doppler AVD optimization in the BRAVO trial were examined. Three experienced observers classified conformity to guideline-described patterns. Each observer then selected the optimum AVD on two separate occasions: blinded and unblinded to AVD. Four Doppler E-A patterns occurred: A (always merged, 18% of patients), B (incrementally less fusion at short AVDs, 12%), C (full separation at short AVDs, as described by the guidelines, 28%), and D (always separated, 42%). In Groups A and D (60%), the iterative guidelines therefore cannot specify one single AVD. On the kappa scale (0 = chance alone; 1 = perfect agreement), observer agreement for the ideal AVD in Classes B and C was poor (0.32) and appeared worse in Groups A and D (0.22). Blinding widened the scatter of the AVDs selected as optimal (standard deviation rising from 37 to 49 ms, P < 0.001). Under blinding, 28% of the selected optimum AVDs were ≤60 or ≥200 ms. All 50 Doppler datasets are presented to support future methodological testing. In most patients, the iterative method does not clearly specify one AVD. In all patients, agreement on the ideal AVD between skilled observers viewing identical images is poor. The iterative protocol may successfully exclude some extremely unsuitable AVDs, but so might simply accepting the factory default. Irreproducibility of the gold standard also prevents alternative physiological optimization methods from being validated honestly.
Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Men Chunhua; Romeijn, H. Edwin; Jia Xun
2010-11-15
Purpose: To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. Methods: The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with the consideration of MLC mechanical constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. Results: The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. Conclusions: The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.
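To give a concrete sense of the master-problem step, choosing nonnegative dose rates for the apertures generated so far, a least-squares stand-in for the dose-fidelity term can be written with NNLS. This sketch ignores the dose-rate smoothness term and the MLC aperture subproblem, and the matrices are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_apertures = 2000, 40

# D[i, j]: dose to voxel i per unit dose rate of (already generated) aperture j
D = rng.random((n_voxels, n_apertures))
d_prescribed = rng.random(n_voxels) * 20.0      # desired dose distribution

# Master problem (dose-fidelity term only): min ||D r - d||^2  subject to  r >= 0
dose_rates, residual = nnls(D, d_prescribed)
print("nonzero apertures:", np.count_nonzero(dose_rates), "residual:", round(residual, 2))
```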
Pan, Tonya M; Mills, Sarah D; Fox, Rina S; Baik, Sharon H; Harry, Kadie M; Roesch, Scott C; Sadler, Georgia Robins; Malcarne, Vanessa L
2017-12-01
The Life Orientation Test-Revised (LOT-R) is a widely used measure of optimism and pessimism, with three positively worded and three negatively worded content items. This study examined the structural validity and invariance, internal consistency reliability, and convergent and divergent validity of the English and Spanish versions of the LOT-R among Hispanic Americans. A community sample of Hispanic Americans ( N = 422) completed self-report measures, including the LOT-R, Patient Health Questionnaire-9, and Generalized Anxiety Disorder-7, in their preferred language of English or Spanish. Based on the literature, four structural models were tested: one-factor , oblique two-factor , orthogonal two-factor method effects with positive specific factor , and orthogonal two-factor method effects with negative specific factor . Baseline support for both of the English and Spanish versions was not achieved for any model; in all models, the negatively worded items in Spanish had non-significant factor loadings. Therefore, the positively worded three-item optimism subscale of the LOT-R was examined separately and fit the data, with factor loadings equivalent across language-preference groups. Coefficient alphas for the optimism subscale were consistent across both language-preference groups (αs = .61 [English] and .66 [Spanish]). In contrast, the six-item total score and three-item pessimism subscale demonstrated extremely low or inconsistent alphas. Convergent and divergent validity were established for the optimism subscale in both languages. In sum, the optimism subscale of the LOT-R demonstrated minimally acceptable to good psychometric properties across English and Spanish language-preference groups. However, neither the total score nor the pessimism subscale showed adequate psychometric properties for Spanish-speaking Hispanic Americans, likely due to translation and cultural adaptation issues, and thus are not supported for use with this population.
Gu, Ruiting; Zhou, Yi; Song, Xiaoyue; Xu, Shaochun; Zhang, Xiaomei; Lin, Haiying; Xu, Shuai; Yue, Shidong; Zhu, Shuyu
2018-01-01
Seeds are important materials for the restoration of globally threatened marine angiosperm (seagrass) populations. In this study, we investigated the differences between different Ruppia sinensis seed types and developed two feasible long-term R. sinensis seed storage methods. The ability of R. sinensis seeds to tolerate short-term desiccation and extreme cold was examined, and their tolerance to long-term exposure to high salinity, cold temperature, and desiccation was assessed as the basis for potential long-term storage methods. In addition, three morphological and nine physiological indices were measured and compared between two types of seeds, Shape L and Shape S. We found that: (1) wet storage at a salinity of 30-40 psu and 0°C was the optimal long-term storage condition, with the proportion of viable seeds remaining above 90% after a storage period of 11 months from collection of the seeds from the reproductive shoots; (2) dry conditions were not the optimal choice for long-term storage of R. sinensis seeds; however, storing seeds dry at 5°C and 33 ± 10% relative humidity for 9 months still yielded a relatively high percentage (74.44 ± 2.22%) of viable seeds, so desiccation exposure could also be an acceptable storage method; (3) R. sinensis seeds lose vigor under the combination of extreme cold (-27°C) and desiccation; (4) there were significant differences in seed weight, seed curvature, and endocarp thickness between the two types of seeds. These findings provide fundamental physiological information for R. sinensis seeds and support the long-term storage of its seeds. Our results may also serve as a useful reference for seed storage of other threatened seagrass species and facilitate their ex situ conservation and habitat restoration.
USDA-ARS?s Scientific Manuscript database
Solution blow spinning (SBS) is a process to produce non-woven fiber sheets with high porosity and an extremely large amount of surface area. In this study, a Box-Behnken experimental design (BBD) was used to optimize the processing parameters for the production of nanofibers from polymer solutions ...
Optimal radiotherapy dose schedules under parametric uncertainty
NASA Astrophysics Data System (ADS)
Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin
2016-01-01
We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
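A Monte Carlo check of the kind of chance constraint described above can be sketched with the standard linear-quadratic biologically effective dose. The parameter values and distributions below are illustrative assumptions only, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bed(n, d, alpha_beta):
    """Biologically effective dose of n fractions of size d (linear-quadratic model)."""
    return n * d * (1.0 + d / alpha_beta)

def constraint_probability(n, d, n_samples=100_000,
                           oar_limit=100.0, sparing_mean=0.7, sparing_sd=0.05,
                           ab_oar_mean=3.0, ab_oar_sd=0.3):
    """P(OAR BED <= limit) when the sparing factor and alpha/beta ratio are random."""
    sparing = rng.normal(sparing_mean, sparing_sd, n_samples)
    ab_oar = rng.normal(ab_oar_mean, ab_oar_sd, n_samples)
    oar_bed = bed(n, sparing * d, ab_oar)
    return np.mean(oar_bed <= oar_limit)

# Compare a conventional schedule with a hypofractionated one
for n, d in [(30, 2.0), (5, 8.0)]:
    print(f"n={n:>2}, d={d:.1f} Gy: P(OAR constraint met) = "
          f"{constraint_probability(n, d):.3f}")
```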
Impacts of climate extremes on gross primary production under global warming
Williams, I. N.; Torn, M. S.; Riley, W. J.; ...
2014-09-24
The impacts of historical droughts and heat-waves on ecosystems are often considered indicative of future global warming impacts, under the assumption that water stress sets in above a fixed high temperature threshold. Historical and future (RCP8.5) Earth system model (ESM) climate projections were analyzed in this study to illustrate changes in the temperatures for onset of water stress under global warming. The ESMs examined here predict sharp declines in gross primary production (GPP) at warm temperature extremes in historical climates, similar to the observed correlations between GPP and temperature during historical heat-waves and droughts. However, soil moisture increases at the warm end of the temperature range, and the temperature at which soil moisture declines with temperature shifts to a higher temperature. The temperature for onset of water stress thus increases under global warming and is associated with a shift in the temperature for maximum GPP to warmer temperatures. Despite the shift in this local temperature optimum, the impacts of warm extremes on GPP are approximately invariant when extremes are defined relative to the optimal temperature within each climate period. The GPP sensitivity to these relative temperature extremes therefore remains similar between future and present climates, suggesting that the heat- and drought-induced GPP reductions seen recently can be expected to be similar in the future, and may be underestimates of future impacts given model projections of increased frequency and persistence of heat-waves and droughts. The local temperature optimum can be understood as the temperature at which the combination of water stress and light limitations is minimized, and this concept gives insights into how GPP responds to climate extremes in both historical and future climate periods. Both cold (temperature and light-limited) and warm (water-limited) relative temperature extremes become more persistent in future climate projections, and the time taken to return to locally optimal climates for GPP following climate extremes increases by more than 25% over many land regions.
Assessing the Value of Information for Identifying Optimal Floodplain Management Portfolios
NASA Astrophysics Data System (ADS)
Read, L.; Bates, M.; Hui, R.; Lund, J. R.
2014-12-01
Floodplain management is a complex portfolio problem that can be analyzed from an integrated perspective incorporating traditional structural and nonstructural options. One method to identify effective strategies for preparing for, responding to, and recovering from floods is to optimize for a portfolio of temporary (emergency) and permanent floodplain management options. A risk-based optimization approach to this problem assigns probabilities to specific flood events and calculates the associated expected damages. This approach is currently limited by: (1) the assumption of perfect flood forecast information, i.e. implementing temporary management activities according to the actual flood event may differ from optimizing based on forecasted information, and (2) the inability to assess system resilience across a range of possible future events (a risk-centric approach). Resilience is defined here as the ability of a system to absorb and recover from a severe disturbance or extreme event. In our analysis, resilience is a system property that requires integration of physical, social, and information domains. This work employs a 3-stage linear program to identify the optimal mix of floodplain management options using conditional probabilities to represent perfect and imperfect flood stages (forecast vs. actual events). We assess the value of information in terms of minimizing damage costs for two theoretical cases - urban and rural systems. We use portfolio analysis to explore how the set of optimal management options differs depending on whether the goal is for the system to be risk-averse to a specified event or resilient over a range of events.
Particle swarm optimization of ascent trajectories of multistage launch vehicles
NASA Astrophysics Data System (ADS)
Pontani, Mauro
2014-02-01
Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of such a problem is not trivial and has been pursued with different methods for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state and costate components, the coast duration, and the upper stage thrust duration. In addition, a simple approach is introduced and successfully applied with the purpose of exactly satisfying the path constraint related to the maximum dynamic pressure in the atmospheric phase. The basic version of the swarming technique, which is used in this research, is extremely simple and easy to program. Nevertheless, the algorithm proves to be capable of yielding the optimal rocket trajectory with a very satisfactory numerical accuracy.
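As the abstract notes, the core of particle swarm optimization is the velocity update, which combines inertia, attraction to each particle's personal best, and attraction to the swarm's best position with stochastic weights. The sketch below is a generic PSO on a toy objective, not the authors' ascent-trajectory implementation; the coefficient values, bounds, and test function are illustrative assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: returns the best position and value found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive term + social term (stochastic weights r1, r2).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy usage: minimize the Rosenbrock function in 2D.
rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
best, value = pso(rosen, (np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
print(best, value)
```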
Optimal simultaneous superpositioning of multiple structures with missing data
Theobald, Douglas L.; Steindel, Phillip A.
2012-01-01
Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369
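The least-squares criterion mentioned above has a closed-form solution in the pairwise, complete-data case: the optimal rotation comes from the SVD of the cross-covariance matrix (the Kabsch algorithm). The sketch below shows only that complete-data building block on synthetic coordinates; THESEUS's expectation-maximization handling of missing positions and its maximum-likelihood weighting are substantially more involved and are not reproduced here.

```python
import numpy as np

def kabsch_superpose(mobile, target):
    """Least-squares superposition of two Nx3 coordinate sets with full correspondence.
    Returns the rotated/translated mobile coordinates and the RMSD to the target."""
    mu_m, mu_t = mobile.mean(axis=0), target.mean(axis=0)
    A, B = mobile - mu_m, target - mu_t
    U, _, Vt = np.linalg.svd(A.T @ B)
    # Correct for a possible reflection so that R is a proper rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    fitted = A @ R + mu_t
    rmsd = np.sqrt(np.mean(np.sum((fitted - target) ** 2, axis=1)))
    return fitted, rmsd

# Toy usage with random coordinates standing in for aligned C-alpha positions.
rng = np.random.default_rng(1)
target = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
mobile = target @ Rz + np.array([5.0, -2.0, 1.0])
print(kabsch_superpose(mobile, target)[1])  # RMSD ~ 0
```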
Ultra-high field upper extremity peripheral nerve and non-contrast enhanced vascular imaging
Raval, Shailesh B.; Britton, Cynthia A.; Zhao, Tiejun; Krishnamurthy, Narayanan; Santini, Tales; Gorantla, Vijay S.; Ibrahim, Tamer S.
2017-01-01
Objective: The purpose of this study was to explore the efficacy of Ultra-high field [UHF] 7 Tesla [T] MRI as compared to 3T MRI in non-contrast enhanced [nCE] imaging of structural anatomy in the elbow, forearm, and hand [upper extremity]. Materials and method: A wide range of sequences including T1 weighted [T1] volumetric interpolate breath-hold exam [VIBE], T2 weighted [T2] double-echo steady state [DESS], susceptibility weighted imaging [SWI], time-of-flight [TOF], diffusion tensor imaging [DTI], and diffusion spectrum imaging [DSI] were optimized and incorporated with a radiofrequency [RF] coil system composed of a transverse electromagnetic [TEM] transmit coil combined with an 8-channel receive-only array for 7T upper extremity [UE] imaging. In addition, Siemens optimized protocol/sequences were used on a 3T scanner and the resulting images from T1 VIBE and T2 DESS were compared to those obtained at 7T qualitatively and quantitatively [SWI was only qualitatively compared]. DSI studio was utilized to identify nerves based on analysis of diffusion weighted derived fractional anisotropy images. Images of forearm vasculature were extracted using a paint grow manual segmentation method based on MIPAV [Medical Image Processing, Analysis, and Visualization]. Results: High resolution and high quality signal-to-noise ratio [SNR] and contrast-to-noise ratio [CNR] images of the hand, forearm, and elbow were acquired with nearly homogeneous 7T excitation. Measured [performed on the T1 VIBE and T2 DESS sequences] SNR and CNR values were almost doubled at 7T vs. 3T. Cartilage, synovial fluid and tendon structures could be seen with higher clarity in the 7T T1 and T2 weighted images. SWI allowed high resolution and better quality imaging of large and medium sized arteries and veins, capillary networks and arteriovenous anastomoses at 7T when compared to 3T. The 7T diffusion weighted sequence [not performed at 3T] demonstrates that the forearm nerves are clearly delineated by fiber tractography. The proper digital palmar arteries and superficial palmar arch could also be clearly visualized using TOF nCE 7T MRI. Conclusion: Ultra-high resolution neurovascular imaging in upper extremities is possible at 7T without use of renal toxic intravenous contrast. 7T MRI can provide superior peripheral nerve [based on fiber anisotropy and diffusion coefficient parameters derived from diffusion tensor/spectrum imaging] and vascular [nCE MRA and vessel segmentation] imaging. PMID:28662061
Two-phase Computerized Planning of Cryosurgery Using Bubble-packing and Force-field Analogy
Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed
2007-01-01
Background: Cryosurgery is the destruction of undesired tissues by freezing, as in prostate cryosurgery, for example. Minimally-invasive cryosurgery is currently performed by means of an array of cryoprobes, each in the shape of a long hypodermic needle. The optimal arrangement of the cryoprobes, which is known to have a dramatic effect on the quality of the cryoprocedure, remains an art held by the cryosurgeon, based on the cryosurgeon's experience and “rules of thumb.” An automated computerized technique for cryosurgery planning is the subject matter of the current report, in an effort to improve the quality of cryosurgery. Method of Approach: A two-phase optimization method is proposed for this purpose, based on two previous and independent developments by this research team. Phase I is based on a bubble-packing method, previously used as an efficient method for finite elements meshing. Phase II is based on a force-field analogy method, which has proven to be robust at the expense of a typically long runtime. Results: As a proof-of-concept, results are demonstrated on a 2D case of a prostate cross-section. The major contribution of this study is to affirm that in many instances cryosurgery planning can be performed without extremely expensive simulations of bioheat transfer, achieved in Phase I. Conclusions: This new method of planning has proven to reduce planning runtime from hours to minutes, making automated planning practical in a clinical time frame. PMID:16532617
Energy Center Structure Optimization by using Smart Technologies in Process Control System
NASA Astrophysics Data System (ADS)
Shilkina, Svetlana V.
2018-03-01
The article deals with the practical application of fuzzy logic methods in process control systems. The control object considered is an agroindustrial greenhouse complex that includes its own energy center. The paper analyzes the facility's power supply options, taking into account connection to external power grids and/or installation of its own power generating equipment in various layouts. The main problem of the greenhouse facility's basic process is extremely uneven power consumption, which forces the purchase of redundant generating equipment that idles most of the time and significantly reduces project profitability. Optimization of the energy center structure is therefore largely a matter of how the facility's process control system is constructed. To cut the investor's costs, it was proposed to optimize power consumption by building an energy-saving production control system based on a fuzzy logic controller. The developed algorithm for the automated process control system ensured more even electric and thermal energy consumption and made it possible to propose an energy center with a smaller number of units due to their more even utilization. As a result, it is shown how the practical use of a fuzzy control system for microclimate parameters during facility operation leads to the optimization of the agroindustrial complex's energy facility structure, which contributes to a significant reduction in construction and operation costs.
Watts, Seth; Tortorelli, Daniel A.
2017-04-13
Topology optimization is a methodology for assigning material or void to each point in a design domain in a way that extremizes some objective function, such as the compliance of a structure under given loads, subject to various imposed constraints, such as an upper bound on the mass of the structure. Geometry projection is a means to parameterize the topology optimization problem, by describing the design in a way that is independent of the mesh used for analysis of the design's performance; it results in many fewer design parameters, necessarily resolves the ill-posed nature of the topology optimization problem, and provides sharp descriptions of the material interfaces. We extend previous geometric projection work to 3 dimensions and design unit cells for lattice materials using inverse homogenization. We perform a sensitivity analysis of the geometric projection and show it has smooth derivatives, making it suitable for use with gradient-based optimization algorithms. The technique is demonstrated by designing unit cells comprised of a single constituent material plus void space to obtain light, stiff materials with cubic and isotropic material symmetry. Here, we also design a single-constituent isotropic material with negative Poisson's ratio and a light, stiff material comprised of 2 constituent solids plus void space.
Liu, Guo-hai; Jiang, Hui; Xiao, Xia-hong; Zhang, Dong-juan; Mei, Cong-li; Ding, Yu-han
2012-04-01
Fourier transform near-infrared (FT-NIR) spectroscopy was attempted for determining pH, one of the key process parameters in the solid-state fermentation of crop straws. First, near-infrared spectra of 140 solid-state fermented product samples were obtained with a near-infrared spectroscopy system in the wavenumber range of 10 000-4 000 cm(-1), and the reference measurements of pH were obtained with a pH meter. Thereafter, an extreme learning machine (ELM) was employed to calibrate the model. In the calibration model, the optimal number of PCs and the optimal number of hidden-layer nodes of the ELM network were determined by cross-validation. Experimental results showed that the optimal ELM model was achieved with a 1040-1 topology, with Rp = 0.9618 and RMSEP = 0.1044 in the prediction set. The research achievement could provide a technological basis for the on-line measurement of process parameters in solid-state fermentation.
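An extreme learning machine of the kind used for this pH calibration is a single-hidden-layer network whose input weights are drawn at random and whose output weights are obtained in one least-squares solve. The sketch below is a generic ELM regressor on synthetic data, not the authors' FT-NIR calibration; the hidden-layer size, the 10-component input (standing in for PCA-compressed spectra), and the synthetic response are assumptions for illustration.

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer extreme learning machine: random hidden layer, least-squares output weights."""
    def __init__(self, n_hidden=40, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random input weights (never trained)
        self.b = self.rng.normal(size=self.n_hidden)                 # random biases (never trained)
        H = np.tanh(X @ self.W + self.b)                             # hidden-layer activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # output weights in one solve
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Synthetic stand-in for PCA-compressed spectra (10 components) and a pH-like response.
rng = np.random.default_rng(1)
X = rng.normal(size=(140, 10))
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=140)
model = ELMRegressor(n_hidden=40).fit(X[:100], y[:100])
rmsep = np.sqrt(np.mean((model.predict(X[100:]) - y[100:]) ** 2))
print(f"RMSEP on held-out samples: {rmsep:.3f}")
```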
An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.
Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P
2012-01-01
The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for the Volumetric Modulated Arc Therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (eclipse(TM) for rapidarc(TM)) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions and their respective linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions for the solution, such as the avoidance of negative dose during optimization and Monitor Unit reduction. The modeling by the projection method assures a unique treatment plan with beneficial properties, such as the explicit relation between organ weightings and the final dose distribution. Clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and Monitor Units to those obtained by the clinical treatment planning system eclipse(TM). The resulting dose distributions for a prostate (with rectum and bladder as organs at risk), and for a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that similar plan qualities for quadratic programming (QP) and rapidarc(TM) could be achieved at significantly more efficient computational and planning effort using QP. Additionally, results for the quasimodo phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the estro quasimodo group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example for an extreme concave case. Quadratic programming is an alternative approach for inverse planning which generates clinically satisfying plans in comparison to the clinical system and constitutes an efficient optimization process characterized by uniqueness and reproducibility of the solution.
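One way to see the quadratic-programming formulation described above is as a non-negatively constrained least-squares fit of arclet weights to a prescribed dose. The toy sketch below uses scipy's NNLS solver on a made-up dose-influence matrix; it illustrates only the non-negativity-constrained QP idea, and the matrix, prescription, and organ weightings are hypothetical, not the authors' MLC motion decomposition or projection method.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical dose-influence matrix: dose to 200 voxels from 40 arclet basis functions.
n_voxels, n_arclets = 200, 40
D = rng.random((n_voxels, n_arclets))

# Prescribed dose: 60 Gy to the first 80 voxels (target), 0 elsewhere (normal tissue).
d_prescribed = np.zeros(n_voxels)
d_prescribed[:80] = 60.0

# Organ weightings enter as row scalings of the least-squares system.
weights = np.where(d_prescribed > 0, 1.0, 0.3)
A = D * weights[:, None]
b = d_prescribed * weights

# Quadratic program: minimize ||A w - b||^2 subject to w >= 0 (no negative arclet weights).
w, residual = nnls(A, b)
print("non-zero arclet weights:", np.count_nonzero(w), "residual:", round(residual, 2))
```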
Zhao, Xian-En; Yan, Ping; Wang, Renjun; Zhu, Shuyun; You, Jinmao; Bai, Yu; Liu, Huwei
2016-08-01
Quantitative analysis of cholesterol and its metabolic steroid hormones plays a vital role in diagnosing endocrine disorders and understanding disease progression, as well as in clinical medicine studies. Because of their extremely low abundance in body fluids, it remains a challenging task to develop a sensitive detection method. A hyphenated technique of dual ultrasonic-assisted dispersive liquid-liquid microextraction (dual-UADLLME) coupled with microwave-assisted derivatization (MAD) was proposed for cleansing, enrichment and sensitivity enhancement. 4'-Carboxy-substituted rosamine (CSR) was synthesized and used as derivatization reagent. An ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) method was developed for determination of cholesterol and its metabolic steroid hormones in the multiple reaction monitoring mode. Parameters of dual-UADLLME, MAD and UHPLC-MS/MS were all optimized. Satisfactory linearity, recovery, repeatability, accuracy and precision, absence of matrix effect and extremely low limits of detection (LODs, 0.08-0.15 pg mL(-1)) were achieved. Through the combination of dual-UADLLME and MAD, a determination method for cholesterol and its metabolic steroid hormones in human plasma, serum and urine samples was developed and validated with high sensitivity, selectivity, accuracy and perfect matrix effect results. Copyright © 2016 John Wiley & Sons, Ltd.
Optimization of VLF/ELF Wave Generation using Beam Painting
NASA Astrophysics Data System (ADS)
Robinson, A.; Moore, R. C.
2017-12-01
A novel optimized beam painting (OBP) algorithm is used to generate high amplitude very low frequency (VLF) and extremely low frequency (ELF) waves in the D-region of the ionosphere above the High-frequency Active Auroral Research Program (HAARP) observatory. The OBP method creates a phased array of sources in the ionosphere by varying the azimuth and zenith angles of the high frequency (HF) transmitter to capitalize on the constructive interference of propagating VLF/ELF waves. OBP generates higher amplitude VLF/ELF signals than any other previously proposed method. From April through June 2014, OBP was performed at HAARP over 1200 times. We compare the OBP-generated signals against vertical amplitude-modulated transmissions at 50% duty cycle (V), oblique amplitude-modulated transmissions at 15 degrees zenith and 81 degrees azimuth at 50% duty cycle (O), and geometric (circle-sweep) modulation (GM) at 15 degrees off-zenith angle, at 1562.5 Hz, 3125 Hz, and 5000 Hz. We present an analysis of the directional dependence of each signal, its polarization, and its dependence on the properties of the different source region elements. We find that OBP increases the received signal amplitudes of VLF and ELF waves when compared to the V, O, and GM methods over a statistically significant number of trials.
Einspieler, Christa; Marschik, Peter B.; Urlesberger, Berndt; Pansy, Jasmin; Scheuchenegger, Anna; Krieber, Magdalena; Yang, Hong; Kornacka, Maria K.; Rowinska, Edyta; Soloveichick, Marina; Ferrari, Fabrizio; Guzzetta, Andrea; Cioni, Giovanni; Bos, Arend F.
2018-01-01
Aim: To explore the appropriateness of applying a detailed assessment of general movements (GMs) and characterise the relationship between global and detailed assessment. Method: The analysis was based on 783 video-recordings of 233 infants (79 females) who had been videoed from 27 to 45 weeks postmenstrual age. Apart from assessing the global GM categories (normal, poor repertoire [PR], cramped-synchronised [CS] or chaotic GMs), we scored the amplitude, speed, spatial range, proximal and distal rotations, onset and offset, tremulous and cramped components of the upper and lower extremities. Applying the optimality concept, the maximum GM optimality score of 42 indicates the optimal performance. Results: GM optimality scores differentiated between normal GMs (Median=39; P75=41, P25=37); PR GMs (Median=25; P75=29, P25=22), and CS GMs (Median=12; P75=14, P25=10; p<0.01). The optimality score for chaotic GMs (mainly occurring at late preterm age) was similar to that for CS GMs (Median=14; P75=17, P25=12). Short-lasting tremulous movements occurred from very preterm age to postterm age across all GM categories, including normal GMs. The detailed score at postterm age was slightly lower compared to the scores at preterm and term age for both normal (p=0.02) and PR GMs (p<0.01). Interpretation: Further research might demonstrate that the GM optimality score provides a solid base for the prediction of improvement vs. deterioration within an individual GM trajectory. PMID:26365130
Canganella, Francesco; Wiegel, Juergen
2014-01-01
The term “extremophile” was introduced to describe any organism capable of living and growing under extreme conditions. With the further development of studies on microbial ecology and taxonomy, a variety of “extreme” environments have been found and an increasing number of extremophiles are being described. Extremophiles have also been investigated with regard to the search for life on other planets and even to evaluate the hypothesis that life on Earth originally came from space. The first extreme environments to be investigated extensively were those characterized by elevated temperatures. The naturally “hot environments” on Earth range from solar-heated surface soils and water with temperatures up to 65 °C, subterranean sites such as oil reserves and terrestrial geothermal sites with temperatures ranging from slightly above ambient to above 100 °C, to submarine hydrothermal systems with temperatures exceeding 300 °C. There are also human-made environments with elevated temperatures such as compost piles, slag heaps, industrial processes and water heaters. Thermophilic anaerobic microorganisms have been known for a long time, but scientists have often resisted the belief that some organisms do not only survive at high temperatures, but actually thrive under those hot conditions. They are perhaps one of the most interesting varieties of extremophilic organisms. These microorganisms can thrive at temperatures over 50 °C and, based on their optimal temperature, anaerobic thermophiles can be subdivided into three main groups: thermophiles with an optimal temperature between 50 °C and 64 °C and a maximum at 70 °C, extreme thermophiles with an optimal temperature between 65 °C and 80 °C, and finally hyperthermophiles with an optimal temperature above 80 °C and a maximum above 90 °C. The finding of novel extremely thermophilic and hyperthermophilic anaerobic bacteria in recent years, and the fact that a large fraction of them belong to the Archaea, has definitely made this area of investigation more exciting. Particularly fascinating are their structural and physiological features, which allow them to withstand extremely selective environmental conditions. These properties are often due to specific biomolecules (DNA, lipids, enzymes, osmolytes, etc.) that have been studied for years as novel sources for biotechnological applications. In some cases (DNA polymerases, thermostable enzymes), the search and its applications have successfully exceeded preliminary expectations, but further exploitation is certainly still needed. PMID:25370030
Operational load estimation of a smart wind turbine rotor blade
NASA Astrophysics Data System (ADS)
White, Jonathan R.; Adams, Douglas E.; Rumsey, Mark A.
2009-03-01
Rising energy prices and carbon emission standards are driving a fundamental shift from fossil fuels to alternative sources of energy such as biofuel, solar, wind, clean coal and nuclear. In 2008, the U.S. installed 8,358 MW of new wind capacity increasing the total installed wind power by 50% to 25,170 MW. A key technology to improve the efficiency of wind turbines is smart rotor blades that can monitor the physical loads being applied by the wind and then adapt the airfoil for increased energy capture. For extreme wind and gust events, the airfoil could be changed to reduce the loads to prevent excessive fatigue or catastrophic failure. Knowledge of the actual loading to the turbine is also useful for maintenance planning and design improvements. In this work, an array of uniaxial and triaxial accelerometers was integrally manufactured into a 9m smart rotor blade. DC type accelerometers were utilized in order to estimate the loading and deflection from both quasi-steady-state and dynamic events. A method is presented that designs an estimator of the rotor blade static deflection and loading and then optimizes the placement of the sensor(s). Example results show that the method can identify the optimal location for the sensor for both simple example cases and realistic complex loading. The optimal location of a single sensor shifts towards the tip as the curvature of the blade deflection increases with increasingly complex wind loading. The framework developed is practical for the expansion of sensor optimization in more complex blade models and for higher numbers of sensors.
NASA Astrophysics Data System (ADS)
Lin, Chao-Yuan; Fu, Kuei-Lin; Lin, Cheng-Yu
2016-11-01
Recent extreme rainfall events have led to many landslides in Taiwan due to climate change. How to effectively promote post-disaster treatment and/or management works in a watershed/drainage basin is a crucial issue. In the process of watershed treatment and/or management, disaster hotspot scanning and the setting of treatment priorities should be carried out in advance. A scanning method that uses the landslide ratio to determine the appropriate outlet of a watershed of interest, and an optimal subdivision system with better homogeneity and accuracy in landslide ratio estimation, were developed to support efficient execution of treatment and/or management works. Topography is a key factor affecting the watershed landslide ratio. Considering the complexity and uncertainty of the natural phenomenon, multivariate analysis was applied to understand the relationship between topographic factors and landslide ratio in the watershed of interest. The concept of the species-area curve, which is usually adopted in on-site vegetation investigation to determine the suitable quadrat size, was used to derive the optimal threshold for subdivision. Results show that three main component axes, including factors of scale, network and shape extracted from the Digital Terrain Model coupled with landslide areas, can effectively explain the characteristics of the landslide ratio in the watershed of interest, and a relation curve between the accuracy of landslide ratio classification and the number of subdivisions can be established to derive the optimal subdivision of the watershed. The subdivision method promoted in this study could be further used for priority ranking and benefit assessment of landslide treatment in a watershed.
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
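Standard empirical quantile mapping corrects a modelled value by replacing its quantile in the model's calibration-period distribution with the value at the same quantile of the observations. The sketch below is that baseline (a QMα-style correction) with a simple constant-extrapolation rule for new extremes beyond the calibration range; it does not reproduce the authors' parametric/nonparametric QMβ variants, and the synthetic gamma-distributed data are purely illustrative.

```python
import numpy as np

def empirical_quantile_map(obs_cal, mod_cal, mod_new):
    """Map modelled values onto the observed distribution via matched empirical quantiles.
    Values outside the calibration range receive the correction of the nearest calibrated quantile."""
    quantiles = np.linspace(0.01, 0.99, 99)
    mod_q = np.quantile(mod_cal, quantiles)
    obs_q = np.quantile(obs_cal, quantiles)
    correction = obs_q - mod_q
    # Interpolate the correction as a function of the modelled value (constant beyond the ends).
    return mod_new + np.interp(mod_new, mod_q, correction)

# Toy usage with synthetic daily precipitation (model drier than observations).
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.8, scale=6.0, size=3000)
mod_hist = rng.gamma(shape=0.8, scale=4.0, size=3000)
mod_scen = rng.gamma(shape=0.8, scale=4.5, size=3000)   # scenario run, includes "new" extremes
corrected = empirical_quantile_map(obs, mod_hist, mod_scen)
print("raw 99th percentile:", round(np.percentile(mod_scen, 99), 1),
      "corrected:", round(np.percentile(corrected, 99), 1))
```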
Understanding neuromotor strategy during functional upper extremity tasks using symbolic dynamics.
Nathan, Dominic E; Guastello, Stephen J; Prost, Robert W; Jeutter, Dean C
2012-01-01
The ability to model and quantify brain activation patterns that pertain to natural neuromotor strategy of the upper extremities during functional task performance is critical to the development of therapeutic interventions such as neuroprosthetic devices. The mechanisms of information flow, activation sequence and patterns, and the interaction between anatomical regions of the brain that are specific to movement planning, intention and execution of voluntary upper extremity motor tasks were investigated here. This paper presents a novel method using symbolic dynamics (orbital decomposition) and nonlinear dynamic tools of entropy, self-organization and chaos to describe the underlying structure of activation shifts in regions of the brain that are involved with the cognitive aspects of functional upper extremity task performance. Several questions were addressed: (a) How is it possible to distinguish deterministic or causal patterns of activity in brain fMRI from those that are really random or non-contributory to the neuromotor control process? (b) Can the complexity of activation patterns over time be quantified? (c) What are the optimal ways of organizing fMRI data to preserve patterns of activation, activation levels, and extract meaningful temporal patterns as they evolve over time? Analysis was performed using data from a custom developed time resolved fMRI paradigm involving human subjects (N=18) who performed functional upper extremity motor tasks with varying time delays between the onset of intention and onset of actual movements. The results indicate that there is structure in the data that can be quantified through entropy and dimensional complexity metrics and statistical inference, and furthermore, orbital decomposition is sensitive in capturing the transition of states that correlate with the cognitive aspects of functional task performance.
Final Technical Report: Distributed Controls for High Penetrations of Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byrne, Raymond H.; Neely, Jason C.; Rashkin, Lee J.
2015-12-01
The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., a two-area, three-area, and four-area power system). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderate-sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate its utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.
ERIC Educational Resources Information Center
Imangulova, Tatiyana; Makogonov, Aleksandr; Kulakhmetova, Gulbaram; Sardarov, Osman
2016-01-01
The development of desert areas for industrial, tourist, and educational purposes involves the implementation of physical activity in extreme conditions. The complex set of hot-climate factors causes deep adaptive adjustment of the body and affects health and human physical performance. Optimization of physical activity in hot climates is of particular…
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
NASA Astrophysics Data System (ADS)
Migawa, Klaudiusz
2012-12-01
The issues presented in this paper concern the control of the exploitation process implemented in complex exploitation systems for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, formulated as a semi-Markov decision process. The method consists of preparing the decision model of the exploitation process for the technical objects (the semi-Markov model) and then selecting the best control strategy (the optimal strategy) from among the feasible decision variants, according to the adopted criterion (or criteria) for evaluating the operation of the exploitation system. In this method, determining the optimal availability-control strategy means choosing the sequence of control decisions, made in the individual states of the modelled exploitation process, for which the evaluation-criterion function reaches its extreme value. A genetic algorithm was chosen to find the optimal control strategy. The approach is illustrated with the exploitation process of means of transport in a real municipal bus transport system; the model of the exploitation process was prepared on the basis of data collected in that system, and the mathematical model was built on the assumption that the process is a homogeneous semi-Markov process.
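A minimal illustration of the final step, choosing one decision per state of the modelled process with a genetic algorithm, is sketched below. The number of states, the decision sets, and the reward and coupling tables are hypothetical stand-ins for the availability criterion, and the one-gene-per-state encoding is only one of many possible choices; this is not the authors' model of the bus transport system.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_decisions = 8, 3
# Hypothetical expected contribution to availability of taking decision d in state s,
# plus a small coupling between decisions taken in consecutive states of the process.
reward = rng.random((n_states, n_decisions))
coupling = 0.5 * rng.random((n_decisions, n_decisions))

def criterion(strategy):
    """Evaluation criterion of a strategy, i.e. one decision index per process state."""
    base = reward[np.arange(n_states), strategy].sum()
    return base + coupling[strategy[:-1], strategy[1:]].sum()

def genetic_algorithm(pop_size=40, n_gen=150, p_mut=0.1):
    pop = rng.integers(0, n_decisions, (pop_size, n_states))
    for _ in range(n_gen):
        scores = np.array([criterion(ind) for ind in pop])
        # Tournament selection: each slot in the mating pool gets the better of two random parents.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # One-point crossover between consecutive parents in the mating pool.
        cut = rng.integers(1, n_states, pop_size)
        mask = np.arange(n_states) < cut[:, None]
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Mutation: occasionally replace a decision with a random alternative.
        mutate = rng.random(children.shape) < p_mut
        children[mutate] = rng.integers(0, n_decisions, mutate.sum())
        pop = children
    scores = np.array([criterion(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_strategy, best_value = genetic_algorithm()
print("decision per state:", best_strategy, "criterion value:", round(best_value, 3))
```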
A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events
NASA Astrophysics Data System (ADS)
Zorzetto, E.; Marani, M.
2017-12-01
The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. We use here, and tailor to remotely sensed rainfall estimates, an alternative approach based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short records of satellite-sensed rainfall, while taking full advantage of their high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows (i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and (ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
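In its commonly used form, the metastatistical extreme value distribution approximates the annual-maximum CDF as the average over years of F_j(x)^(n_j), where F_j is a distribution (often Weibull) fitted to the ordinary wet-day amounts of year j and n_j is their number. The sketch below implements that idea on synthetic daily rainfall; the Weibull choice, the fitting method, the wet-day threshold, and the synthetic record are illustrative assumptions rather than the exact procedure of this study.

```python
import numpy as np
from scipy.stats import weibull_min

def mev_cdf(x, yearly_params):
    """MEVD estimate of the annual-maximum CDF: average over years of F_j(x)^(n_j)."""
    terms = [weibull_min.cdf(x, c, scale=scale) ** n for (c, scale, n) in yearly_params]
    return np.mean(terms, axis=0)

def fit_yearly(daily_by_year, wet_threshold=1.0):
    """Fit a Weibull to each year's wet-day amounts and record the number of wet days."""
    params = []
    for year in daily_by_year:
        wet = year[year > wet_threshold]
        c, _, scale = weibull_min.fit(wet, floc=0)
        params.append((c, scale, len(wet)))
    return params

# Synthetic record: 20 years of daily rainfall with roughly 80-100 wet days per year.
rng = np.random.default_rng(0)
years = [rng.weibull(0.7, size=100) * 12.0 for _ in range(20)]
params = fit_yearly(years)

# Daily rainfall amount with a 50-year return period: solve F(x) = 1 - 1/50 on a grid.
grid = np.linspace(10, 400, 2000)
idx = np.searchsorted(mev_cdf(grid, params), 1 - 1 / 50)
print(f"estimated 50-year daily rainfall: {grid[min(idx, grid.size - 1)]:.0f} (synthetic units)")
```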
2011-01-01
Background: Recovery patterns of upper extremity motor function have been described in several longitudinal studies, but most of these studies have had selected samples, short follow-up times or insufficient outcomes on motor function. The general understanding is that improvements in the upper extremity occur mainly during the first month after the stroke incident and little, if any, significant recovery can be gained after 3-6 months. The purpose of this study is to describe the recovery of upper extremity function longitudinally in a non-selected sample initially admitted to a stroke unit with first-ever stroke, living in the Gothenburg urban area. Methods/Design: A sample of 120 participants with a first-ever stroke and impaired upper extremity function will be consecutively included from an acute stroke unit and followed longitudinally for one year. Assessments are performed on eight occasions: at day 3 and 10, week 3, 4 and 6, month 3, 6 and 12 after onset of stroke. The primary clinical outcome measures are the Action Research Arm Test and the Fugl-Meyer Assessment for Upper Extremity. As additional measures, two new computer-based objective methods with kinematic analysis of arm movements are used. The ABILHAND questionnaire of manual ability, Stroke Impact Scale, grip strength, spasticity, pain, passive range of motion and cognitive function will be assessed as well. At the one-year follow-up, two patient-reported outcomes, Impact on Participation and Autonomy and the EuroQol Quality of Life Scale, will be added to cover the status of participation and aspects of health-related quality of life. Discussion: This study comprises a non-selected population with first-ever stroke and impaired arm function. Measurements are performed using both traditional clinical assessments and computer-based measurement systems providing objective kinematic data. The ICF classification of functioning, disability and health is used as the framework for the selection of assessment measures. The study design, with several repeated measurements of motor function, will give us more confident information about recovery patterns after stroke. This knowledge is essential both for optimizing rehabilitation planning and for providing important information to the patient about the recovery perspectives. Trial registration: ClinicalTrials.gov NCT01115348 PMID:21612620
An Ensemble-Based Forecasting Framework to Optimize Reservoir Releases
NASA Astrophysics Data System (ADS)
Ramaswamy, V.; Saleh, F.
2017-12-01
The increasing frequency of extreme precipitation events is stressing the need to manage water resources on shorter timescales. Short-term management of water resources becomes proactive when inflow forecasts are available and this information can be effectively used in the control strategy. This work investigates the utility of short-term hydrological ensemble forecasts for operational decision making during extreme weather events. An advanced automated hydrologic prediction framework integrating a regional-scale hydrologic model, GIS datasets and the meteorological ensemble predictions from the European Centre for Medium-Range Weather Forecasts (ECMWF) was coupled to an implicit multi-objective dynamic programming model to optimize releases from a water supply reservoir. The proposed methodology was evaluated by retrospectively forecasting the inflows to the Oradell reservoir in the Hackensack River basin in New Jersey during an extreme hydrologic event, Hurricane Irene. Additionally, the flexibility of the forecasting framework was investigated by forecasting the inflows from a moderate rainfall event to provide important perspectives on using the framework to assist reservoir operations during moderate events. The proposed forecasting framework seeks to provide a flexible, assistive tool to alleviate the complexity of operational decision-making.
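A minimal way to couple an inflow ensemble to a release decision is to simulate the storage trajectory for every ensemble member and pick the smallest release that keeps a flood-pool constraint satisfied for a chosen fraction of members. The sketch below does exactly that on synthetic forecasts; it is a loose stand-in for the study's multi-objective dynamic programming model, and the reservoir numbers, the 90% reliability level, and the synthetic ensemble are all assumptions for illustration. The lower storage bound is ignored for brevity.

```python
import numpy as np

def min_reliable_release(inflow_ensemble, storage0, flood_pool, releases, reliability=0.9):
    """Smallest constant release over the forecast horizon that keeps simulated storage
    below the flood pool in at least `reliability` of the ensemble members."""
    for r in sorted(releases):
        # Member-by-member water balance in consistent volume-per-timestep units.
        storage = storage0 + np.cumsum(inflow_ensemble - r, axis=1)
        ok = np.all(storage <= flood_pool, axis=1)
        if ok.mean() >= reliability:
            return r
    return max(releases)  # even the largest release cannot meet the reliability target

# Synthetic 51-member, 72-step inflow forecast standing in for a meteorologically driven ensemble.
rng = np.random.default_rng(0)
ensemble = rng.gamma(shape=4.0, scale=25.0, size=(51, 72))
release = min_reliable_release(ensemble, storage0=5_000.0, flood_pool=8_000.0,
                               releases=np.arange(0, 301, 10))
print("recommended constant release:", release)
```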
A Novel Designed Bioreactor for Recovering Precious Metals from Waste Printed Circuit Boards
Jujun, Ruan; Jie, Zheng; Jian, Hu; Zhang, Jianwen
2015-01-01
For recovering precious metals from waste printed circuit boards (PCBs), a novel hybrid technology combining physical and biological methods was developed. It consists of crushing, corona-electrostatic separation, and bioleaching. The bioleaching process is the focus of this paper. A novel bioreactor for bioleaching was designed, and bioleaching was carried out using Pseudomonas chlororaphis. Bioleaching experiments using mixed particles of Au and Cu were performed, and the leachate contained 0.006 mg/L Au+ and 2823 mg/L Cu2+, respectively. This showed that when Cu was present, the concentration of Au in the leachate was extremely small, which made it feasible to separate Cu from Au. The method of orthogonal experimental design was employed in the simulated bioleaching experiments. Experimental results showed that the optimized parameters for separating Cu from Au particles were pH 7.0, temperature 22.5 °C, and rotation speed 80 r/min. Based on the optimized parameters obtained, the bioreactor was operated for recovering mixed Au and Cu particles; 88.1 wt.% of Cu and 76.6 wt.% of Au were recovered. The paper contributes important information for recovering precious metals from waste PCBs. PMID:26316021
Uncertainty reasoning in expert systems
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik
1993-01-01
Intelligent control is a very successful way to transform the expert's knowledge of the type 'if the velocity is big and the distance from the object is small, hit the brakes and decelerate as fast as possible' into an actual control. To apply this transformation, one must choose appropriate methods for reasoning with uncertainty, i.e., one must: (1) choose the representation for words like 'small', 'big'; (2) choose operations corresponding to 'and' and 'or'; (3) choose a method that transforms the resulting uncertain control recommendations into a precise control strategy. The wrong choice can drastically affect the quality of the resulting control, so the problem of choosing the right procedure is very important. From a mathematical viewpoint these choice problems correspond to non-linear optimization and are therefore extremely difficult. In this project, a new mathematical formalism (based on group theory) is developed that allows us to solve the problem of optimal choice and thus: (1) explain why the existing choices are really the best (in some situations); (2) explain a rather mysterious fact that fuzzy control (i.e., control based on the experts' knowledge) is often better than the control by these same experts; and (3) give choice recommendations for the cases when traditional choices do not work.
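The three choices listed above (how words like 'small' and 'big' are represented, which operations stand for 'and' and 'or', and how the fuzzy recommendation is turned into a crisp control) can be made concrete in a few lines. The sketch below is a generic two-rule Mamdani-style controller with triangular memberships, min/max operators, and centroid defuzzification; the membership breakpoints and the braking example are illustrative assumptions, not the optimal choices derived in the report.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_brake(velocity, distance):
    """Two-rule controller: IF velocity is big AND distance is small THEN brake hard;
    IF velocity is small OR distance is big THEN brake gently."""
    big_v = tri(velocity, 20.0, 40.0, 60.0)      # choice 1: representation of 'big' and 'small'
    small_d = tri(distance, 0.0, 5.0, 15.0)
    small_v = tri(velocity, 0.0, 10.0, 30.0)
    big_d = tri(distance, 10.0, 30.0, 50.0)

    fire_hard = min(big_v, small_d)              # choice 2: 'and' = min, 'or' = max
    fire_gentle = max(small_v, big_d)

    brake = np.linspace(0.0, 1.0, 101)           # output universe: brake fraction
    hard = np.minimum(fire_hard, tri(brake, 0.6, 1.0, 1.4))
    gentle = np.minimum(fire_gentle, tri(brake, -0.4, 0.0, 0.4))
    aggregated = np.maximum(hard, gentle)
    # choice 3: centroid defuzzification turns the fuzzy recommendation into a crisp command.
    return float(np.sum(brake * aggregated) / (np.sum(aggregated) + 1e-12))

print(fuzzy_brake(velocity=45.0, distance=4.0))   # close and fast -> strong braking
print(fuzzy_brake(velocity=12.0, distance=35.0))  # slow and far  -> gentle braking
```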
Stress fractures of the ribs and upper extremities: causation, evaluation, and management.
Miller, Timothy L; Harris, Joshua D; Kaeding, Christopher C
2013-08-01
Stress fractures are common troublesome injuries in athletes and non-athletes. Historically, stress fractures have been thought to predominate in the lower extremities secondary to the repetitive stresses of impact loading. Stress injuries of the ribs and upper extremities are much less common and often unrecognized. Consequently, these injuries are often omitted from the differential diagnosis of rib or upper extremity pain. Given the infrequency of this diagnosis, few case reports or case series have reported on their precipitating activities and common locations. Appropriate evaluation for these injuries requires a thorough history and physical examination. Radiographs may be negative early, requiring bone scintigraphy or MRI to confirm the diagnosis. Nonoperative and operative treatment recommendations are made based on location, injury classification, and causative activity. An understanding of the most common locations of upper extremity stress fractures and their associated causative activities is essential for prompt diagnosis and optimal treatment.
Extreme Terrestrial Environments: Life in Thermal Stress and Hypoxia. A Narrative Review
Burtscher, Martin; Gatterer, Hannes; Burtscher, Johannes; Mairbäurl, Heimo
2018-01-01
Living, working and exercising in extreme terrestrial environments are challenging tasks even for healthy humans of the modern new age. The issue is not just survival in remote environments but rather the achievement of optimal performance in everyday life, occupation, and sports. Various adaptive biological processes can take place to cope with the specific stressors of extreme terrestrial environments like cold, heat, and hypoxia (high altitude). This review provides an overview of the physiological and morphological aspects of adaptive responses in these environmental stressors at the level of organs, tissues, and cells. Furthermore, adjustments existing in native people living in such extreme conditions on the earth as well as acute adaptive responses in newcomers are discussed. These insights into general adaptability of humans are complemented by outcomes of specific acclimatization/acclimation studies adding important information how to cope appropriately with extreme environmental temperatures and hypoxia. PMID:29867589
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
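The following Python sketch illustrates the general two-stage idea behind OCLO (maximize the ordinal fit via Kendall's τ, then settle the metric scale by least squares among τ-optimal candidates) on synthetic two-predictor data. It is a toy grid search for illustration only, not the authors' estimator or code, and all variable names are ours.

```python
# Toy sketch of the order-constrained idea behind OCLO (not the authors' code):
# maximize the ordinal fit (Kendall's tau between the linear index and y) first,
# then, among tau-optimal directions, pick the best least-squares rescaling.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.standard_t(df=2, size=n)  # fat-tailed noise

candidates = []
for theta in np.linspace(0.0, 2.0 * np.pi, 721):   # direction of the coefficient vector
    b = np.array([np.cos(theta), np.sin(theta)])
    z = X @ b
    tau, _ = kendalltau(z, y)
    # rescale the ordinal index by OLS (slope and intercept) to get metric predictions
    A = np.column_stack([z, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((y - A @ coef) ** 2))
    candidates.append((tau, sse, b * coef[0], coef[1]))

tau_max = max(c[0] for c in candidates)
# among (numerically) tau-maximal directions, keep the smallest squared error
tau_opt = min((c for c in candidates if c[0] >= tau_max - 1e-9), key=lambda c: c[1])
print("tau =", round(tau_opt[0], 3), "coefficients =", tau_opt[2],
      "intercept =", round(tau_opt[3], 3))
```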
Translational research to improve the treatment of severe extremity injuries.
Brown, Kate V; Penn-Barwell, J G; Rand, B C; Wenke, J C
2014-06-01
Severe extremity injuries are the most significant injuries sustained in combat wounds. Despite optimal clinical management, non-union and infection remain common complications. In a concerted effort to dovetail research efforts, there has been a collaboration between the UK and USA, with British military surgeons conducting translational studies under the auspices of the US Institute of Surgical Research. This paper describes 3 years of work. A variety of studies were conducted using, and developing, a previously validated rat femur critical-sized defect model. Timing of surgical debridement and irrigation, different types of irrigants, and different means of delivery of antibiotics and growth factors for infection control and to promote bone healing were investigated. Early debridement and irrigation were independently shown to reduce infection. Normal saline was the optimal irrigant, superior to disinfectant solutions. A biodegradable gel demonstrated superior antibiotic delivery compared with standard polymethylmethacrylate beads. A polyurethane scaffold was shown to be able to deliver both antibiotics and growth factors. The importance of early transit times to Role 3 capabilities for definitive surgical care has been underlined. Novel and superior methods of antibiotic and growth factor delivery, compared with current clinical standards of care, have been demonstrated. There is the potential for translation to clinical studies to promote infection control and bone healing in these devastating injuries. Published by the BMJ Publishing Group Limited.
Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall
NASA Astrophysics Data System (ADS)
Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik
2016-02-01
Rainfall is one of the climatic elements with high variability, and extreme rainfall in particular has many negative impacts. Several methods are therefore required to minimize the damage that may occur. So far, global circulation models (GCMs) are the best method to forecast global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique to develop the relationship between GCM output as global-scale independent variables and rainfall as a local-scale response variable. Using GCM output directly is difficult when assessed against observations because it has high dimension and multicollinearity between the variables. The common methods used to handle this problem are principal component analysis (PCA) and partial least squares regression. A newer method that can be used is the lasso. The lasso has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression is a method that can be used to detect extreme rainfall at both the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (extreme wet conditions in January, February and December) in Indramayu could be predicted properly by the model at the 90th quantile.
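As a hedged illustration of the modelling approach described above, the sketch below fits an L1-penalized (lasso) quantile regression for the 90th percentile using scikit-learn's QuantileRegressor, which minimizes the pinball loss with an L1 penalty. The data are synthetic stand-ins for GCM predictors and rainfall, not the Indramayu series used in the study.

```python
# Minimal sketch of L1-penalized (lasso) quantile regression for the 90th percentile,
# in the spirit of the statistical-downscaling setup described above (synthetic data,
# not the Indramayu GCM/rainfall series).
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(1)
n_samples, n_gcm_vars = 300, 40          # many correlated GCM grid-point predictors
X = rng.normal(size=(n_samples, n_gcm_vars))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n_samples)   # built-in multicollinearity
rain = 20 + 5 * X[:, 0] + rng.gamma(shape=2.0, scale=4.0, size=n_samples)

# alpha controls the lasso penalty; larger alpha pushes more coefficients to zero
model = QuantileRegressor(quantile=0.9, alpha=0.05, solver="highs")
model.fit(X, rain)

selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
print("predictors kept by the lasso penalty:", selected)
print("estimated 90th-percentile rainfall for the first sample:", model.predict(X[:1])[0])
```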
2013-10-01
This study will recruit wounded warriors with severe extremity trauma, which places them at high risk for heterotopic ossification (HO), so that therapies for prevention or mitigation of HO can be optimally targeted. This study seeks to contribute to advancement in each of these key areas. Subject terms: wound healing.
Extremophilic Enzymatic Response for Protection against UV-Radiation Damage
2012-09-17
The superoxide dismutase from the thermophile E1 is a very active enzyme and extremely efficient in its function as an antioxidant, capturing superoxide radicals ... Ollivet-Besson, Papić, L., Blamey J.M., "Optimization of the antioxidant activity of the enzyme superoxide dismutase from the thermophile E1 induced by ..." ... antioxidant enzymes, superoxide dismutase and catalase, from selected microorganisms, and the contribution of these enzymes to the resistance to extreme and ...
Mechanisms of Stability of Robust Chaperones from Hyperthermophiles
2009-02-03
The basis for high-temperature stability is still under active study. Activity and stability of enzymes at high temperature is an obvious and critically ... important adaptation for the survival of thermophiles at the extremes of their temperature ranges. One of the novel aspects of our project is that we ... with optimal growth at 100°C, with homologous proteins from Methanococcus jannaschii, an 88°C extreme thermophile. We have previously shown that ...
NASA Astrophysics Data System (ADS)
Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen
2017-12-01
Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high dimensional features and manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weight and bias. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR via minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized for extracting both the spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, the logistic regression via the variable splitting and the augmented Lagrangian (LORSAL) is adopted in the proposed framework for reducing the computational time. Experiments are conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, which have shown the fast and robust performance of the proposed ESMLR framework.
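A minimal sketch of the core ESMLR ingredient, projecting spectra through a randomly weighted hidden layer and then training a sparse multinomial logistic regression on the projected features, is given below. It uses synthetic data and scikit-learn rather than the authors' EMAP/MFL/LORSAL pipeline, and all sizes and parameters are assumptions.

```python
# Sketch of the core ESMLR idea: map spectral features through a randomly weighted
# hidden layer (as in extreme learning machines), then fit a sparse multinomial
# logistic regression on the projected features. EMAPs, MFL and LORSAL are omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pixels, n_bands, n_hidden, n_classes = 1000, 200, 300, 9
X = rng.normal(size=(n_pixels, n_bands))               # stand-in for HSI pixel spectra
y = X[:, :n_classes].argmax(axis=1)                    # stand-in for class labels

W = rng.uniform(-1.0, 1.0, size=(n_bands, n_hidden))   # randomly generated weights
b = rng.uniform(-1.0, 1.0, size=n_hidden)              # randomly generated bias
H = np.tanh(X @ W + b)                                 # new (random) feature space

clf = LogisticRegression(penalty="l1", C=1.0, solver="saga", max_iter=2000)
clf.fit(H, y)
print("training accuracy in the random feature space:", round(clf.score(H, y), 3))
```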
Relating precipitation to fronts at a sub-daily basis
NASA Astrophysics Data System (ADS)
Hénin, Riccardo; Ramos, Alexandre M.; Liberato, Margarida L. R.; Gouveia, Célia
2017-04-01
High-impact events over Western Iberia include precipitation extremes that are cause for concern, as they lead to flooding, landslides, extensive property damage and human casualties. These events are usually associated with low-pressure systems over the North Atlantic moving eastward towards the European western coasts (Liberato and Trigo, 2014). A method to detect fronts and to associate amounts of precipitation with each front is tested, distinguishing between warm and cold fronts. The 6-hourly ERA-Interim 1979-2012 reanalysis with 1°x1° horizontal resolution is used for this purpose. An objective front identification method (the Thermal Method described in Shemm et al., 2014) is applied to locate fronts over the whole Northern Hemisphere, using the equivalent potential temperature as the thermal parameter in the model. In parallel, we set up a square search box of tunable size (from 2 to 10 degrees) to look for a front in the neighbourhood of a grid point affected by precipitation. A sensitivity analysis is performed and the optimal dimension of the box is assessed in order to avoid over- or underestimation of precipitation. This is done in the light of the variability and typical dynamics of warm/cold frontal systems in the Western Europe region. Afterwards, using the extreme event ranking over Iberia proposed by Ramos et al. (2014), the top-ranked extreme events are selected in order to validate the method with specific case studies. Finally, climatological and trend maps of frontal activity are produced on both annual and seasonal scales. Trend maps show a decrease of frontal precipitation over north-western Europe and a slight increase over south-western Europe, mainly due to warm fronts. REFERENCES Liberato M.L.R. and R.M. Trigo (2014) Extreme precipitation events and related impacts in Western Iberia. Hydrology in a Changing World: Environmental and Human Dimensions. IAHS Red Book No 363, 171-176. ISSN: 0144-7815. Ramos A.M., R.M. Trigo and M.L.R. Liberato (2014) A ranking of high-resolution daily precipitation extreme events for the Iberian Peninsula, Atmospheric Science Letters 15, 328-334. doi: 10.1002/asl2.507. Shemm S., I. Rudeva and I. Simmonds (2014) Extratropical fronts in the lower troposphere - global perspectives obtained from two automated methods. Quarterly Journal of the Royal Meteorological Society, 141: 1686-1698, doi: 10.1002/qj.2471. ACKNOWLEDGEMENTS This work is supported by FCT - project UID/GEO/50019/2013 - Instituto Dom Luiz. Fundação para a Ciência e a Tecnologia, Portugal (FCT) is also providing R. Hénin's doctoral grant (PD/BD/114479/2016) and A.M. Ramos' postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
NASA Technical Reports Server (NTRS)
Roth, Don J.; Kautz, Harold E.; Abel, Phillip B.; Whalen, Mike F.; Hendricks, J. Lynne; Bodis, James R.
2000-01-01
Surface topography, which significantly affects the performance of many industrial components, is normally measured with diamond-tip profilometry over small areas or with optical scattering methods over larger areas. To develop air-coupled surface profilometry, the NASA Glenn Research Center at Lewis Field initiated a Space Act Agreement with Sonix, Inc., through two Glenn programs, the Advanced High Temperature Engine Materials Program (HITEMP) and COMMTECH. The work resulted in quantitative surface topography profiles obtained using only high-frequency, focused ultrasonic pulses in air. The method is nondestructive, noninvasive, and noncontact, and it does not require light-reflective surfaces. Air surface profiling may be desirable when diamond-tip or laser-based methods are impractical, such as over large areas, when a significant depth range is required, or for curved surfaces. When the configuration is optimized, the method is reasonably rapid and all the quantitative analysis facilities are online, including two- and three-dimensional visualization, extreme value filtering (for faulty data), and leveling.
Fully implicit adaptive mesh refinement solver for 2D MHD
NASA Astrophysics Data System (ADS)
Philip, B.; Chacon, L.; Pernice, M.
2008-11-01
Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
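To make the Jacobian-free Newton-Krylov building block concrete, the sketch below solves one implicit time step of a toy 1-D nonlinear diffusion problem with scipy.optimize.newton_krylov. The AMR hierarchy, the FAC preconditioner, and the full resistive-MHD equations of the paper are not reproduced; this is only an assumed minimal analogue.

```python
# Minimal Jacobian-free Newton-Krylov sketch on a toy 1-D nonlinear diffusion problem
# (one implicit backward-Euler step). It only illustrates the JFNK building block; the
# paper's AMR, FAC preconditioner, and resistive-MHD system are not reproduced.
import numpy as np
from scipy.optimize import newton_krylov

n, dx, dt = 64, 1.0 / 64, 1e-3
u_old = np.sin(np.pi * np.linspace(0.0, 1.0, n))      # previous time level

def residual(u):
    # F(u) = u - u_old - dt * d/dx( D(u) du/dx ), with D(u) = 1 + u^2 and u = 0 at ends
    d = 1.0 + u ** 2
    flux = np.zeros(n + 1)
    flux[1:-1] = 0.5 * (d[1:] + d[:-1]) * (u[1:] - u[:-1]) / dx
    F = u - u_old - dt * (flux[1:] - flux[:-1]) / dx
    F[0], F[-1] = u[0], u[-1]                         # Dirichlet boundary conditions
    return F

u_new = newton_krylov(residual, u_old, method="lgmres", f_tol=1e-8)
print("max residual after the implicit step:", np.max(np.abs(residual(u_new))))
```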
NASA Astrophysics Data System (ADS)
Koneva, M. S.; Rudenko, O. V.; Usatikov, S. V.; Bugaets, N. A.; Tereshchenko, I. V.
2018-05-01
To reduce the duration of the process and to ensure the microbiological purity of the germinated material, an improved germination method has been developed based on the combined use of physical factors: electrochemically activated water (ECHA-water) and an electromagnetic field of extremely low frequency (EMF ELF), with round-the-clock artificial illumination by LED lamps. The paper considers how to increase the efficiency of the numerical approach to the computational problems of parametric optimization of the technological process of hydroponic germination of wheat grains. In this setting, the quality criteria are contradictory and some of them are given by implicit functions of many variables. A solution algorithm is proposed that avoids constructing a Pareto set: a relatively small number of alternatives is evaluated with a linear convolution of the criteria with given weights, with each criterion normalized to its "ideal" value obtained from the corresponding single-criterion optimization. The use of the proposed mathematical models describing the processes of hydroponic germination of wheat grains made it possible to intensify the germination process and to shorten the time needed to obtain "Altayskaya 105" wheat sprouts by 27 hours.
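The scalarization step described above (a linear convolution of criteria, each normalized to its single-criterion "ideal" value) can be sketched in a few lines. The criteria values and weights below are invented for illustration and are not taken from the study.

```python
# Sketch of the scalarization described above: each criterion is normalized by its
# "ideal" value (the optimum of the corresponding single-criterion problem) and the
# alternatives are ranked by a weighted sum. All numbers here are illustrative.
import numpy as np

# rows = candidate germination regimes, columns = criteria (to be maximized)
criteria = np.array([
    [0.82, 35.0, 4.1],
    [0.78, 42.0, 4.6],
    [0.91, 30.0, 3.9],
    [0.85, 38.0, 4.4],
])
weights = np.array([0.5, 0.3, 0.2])

ideal = criteria.max(axis=0)              # "ideal" value of each criterion
normalized = criteria / ideal             # each column now lies in (0, 1]
score = normalized @ weights              # linear convolution of the criteria
best = int(np.argmax(score))
print("scores:", np.round(score, 3), "-> best alternative:", best)
```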
Dynamically Reconfigurable Approach to Multidisciplinary Problems
NASA Technical Reports Server (NTRS)
Alexandrov, Natalie M.; Lewis, Robert Michael
2003-01-01
The complexity and autonomy of the constituent disciplines and the diversity of the disciplinary data formats make the task of integrating simulations into a multidisciplinary design optimization problem extremely time-consuming and difficult. We propose a dynamically reconfigurable approach to MDO problem formulation wherein an appropriate implementation of the disciplinary information results in basic computational components that can be combined into different MDO problem formulations and solution algorithms, including hybrid strategies, with relative ease. The ability to re-use the computational components is due to the special structure of the MDO problem. We believe that this structure can and should be used to formulate and solve optimization problems in the multidisciplinary context. The present work identifies the basic computational components in several MDO problem formulations and examines the dynamically reconfigurable approach in the context of a popular class of optimization methods. We show that if the disciplinary sensitivity information is implemented in a modular fashion, the transfer of sensitivity information among the formulations under study is straightforward. This enables not only experimentation with a variety of problem formulations in a research environment, but also the flexible use of formulations in a production design environment.
NASA Astrophysics Data System (ADS)
Tapia, V.; González, A.; Finger, R.; Mena, F. P.; Monasterio, D.; Reyes, N.; Sánchez, M.; Bronfman, L.
2017-03-01
We present the design, implementation, and characterization of the optics of ALMA Band 1, the lowest frequency band in the most advanced radio astronomical telescope. Band 1 covers the broad frequency range from 35 to 50 GHz, with the goal of minor degradation up to 52 GHz. This is, up to now, the largest fractional bandwidth of all ALMA bands. Since the optics is the first subsystem of any receiver, low noise figure and maximum aperture efficiency are fundamental for best sensitivity. However, a conjunction of several factors (small cryostat apertures, mechanical constraints, and cost limitations) makes it extremely challenging to achieve these goals. To overcome these problems, the optics presented here includes two innovative solutions, a compact optimized-profile corrugated horn and a modified Fresnel lens. The horn profile was optimized for optimum performance and easy fabrication by a single-piece manufacturing process in a lathe. In this way, manufacturability is eased when compared with traditional fabrication methods. To minimize the noise contribution of the optics, a one-step zoned lens was designed. Its parameters were carefully optimized to maximize the frequency coverage and reduce losses. The optical assembly reported here fully complies with ALMA specifications.
Necessary and sufficient criterion for extremal quantum correlations in the simplest Bell scenario
NASA Astrophysics Data System (ADS)
Ishizaka, Satoshi
2018-05-01
In the study of quantum nonlocality, one obstacle is that the analytical criterion for identifying the boundaries between quantum and postquantum correlations has not yet been given, even in the simplest Bell scenario. We propose a plausible, analytical, necessary and sufficient condition ensuring that a nonlocal quantum correlation in the simplest scenario is an extremal boundary point. Our extremality condition amounts to certifying an information-theoretical quantity; the probability of guessing a measurement outcome of a distant party optimized using any quantum instrument. We show that this quantity can be upper and lower bounded from any correlation in a device-independent way, and we use numerical calculations to confirm that coincidence of the upper and lower bounds appears to be necessary and sufficient for the extremality.
Interventional Therapy for Upper Extremity Deep Vein Thrombosis
Carlon, Timothy A.; Sudheendra, Deepak
2017-01-01
Approximately 10% of all deep vein thromboses occur in the upper extremity, and that number is increasing due to the use of peripherally inserted central catheters. Sequelae of upper extremity deep vein thrombosis (UEDVT) are similar to those for lower extremity deep vein thrombosis (LEDVT) and include postthrombotic syndrome and pulmonary embolism. In addition to systemic anticoagulation, there are multiple interventional treatment options for UEDVT with the potential to reduce the incidence of these sequelae. To date, there have been no randomized trials to define the optimal management strategy for patients presenting with UEDVT, so many conclusions are drawn from smaller, single-center studies or from LEDVT research. In this article, the authors describe the evidence for the currently available treatment options and an approach to a patient with acute UEDVT. PMID:28265130
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Yao, Chung-Sheng; Allan, Brian G.
2006-01-01
Recent efforts in extreme short takeoff and landing aircraft configurations have renewed the interest in circulation control wing design and optimization. The key to accurately designing and optimizing these configurations rests in the modeling of the complex physics of these flows. This paper will highlight the physics of the stagnation and separation regions on two typical circulation control airfoil sections.
Hard beta and gamma emissions of 124I. Impact on occupational dose in PET/CT.
Kemerink, G J; Franssen, R; Visser, M G W; Urbach, C J A; Halders, S G E A; Frantzen, M J; Brans, B; Teule, G J J; Mottaghy, F M
2011-01-01
The hard beta and gamma radiation of 124I can cause high doses to PET/CT workers. In this study we tried to quantify this occupational exposure and to optimize radioprotection. Thin MCP-Ns thermoluminescent dosimeters suitable for measuring beta and gamma radiation were used for extremity dosimetry, active personal dosimeters for whole-body dosimetry. Extremity doses were determined during dispensing of 124I and oral administration of the activity to the patient, the body dose during all phases of the PET/CT procedure. In addition, dose rates of vials and syringes as used in clinical practice were measured. The procedure for dispensing 124I was optimized using newly developed shielding. Skin dose rates up to 100 mSv/min were measured when in contact with the manufacturer's vial containing 370 MBq of 124I. For an unshielded 5 ml syringe the positron skin dose was about seven times the gamma dose. Before optimization of the preparation of 124I, using an already reasonably safe technique, the highest mean skin dose caused by handling 370 MBq was 1.9 mSv (max. 4.4 mSv). After optimization the skin dose was below 0.2 mSv. The highly energetic positrons emitted by 124I can cause high skin doses if radioprotection is poor. Under optimized conditions occupational doses are acceptable. Education of workers is of paramount importance.
Hoppe, Andreas; Hoffmann, Sabrina; Holzhütter, Hermann-Georg
2007-01-01
Background In recent years, constrained optimization – usually referred to as flux balance analysis (FBA) – has become a widely applied method for the computation of stationary fluxes in large-scale metabolic networks. The striking advantage of FBA as compared to kinetic modeling is that it basically requires only knowledge of the stoichiometry of the network. On the other hand, results of FBA are to a large degree hypothetical because the method relies on plausible but hardly provable optimality principles that are thought to govern metabolic flux distributions. Results To augment the reliability of FBA-based flux calculations we propose an additional side constraint which assures thermodynamic realizability, i.e. that the flux directions are consistent with the corresponding changes of Gibbs free energies. The latter depend on metabolite levels for which plausible ranges can be inferred from experimental data. Computationally, our method results in the solution of a mixed integer linear optimization problem with quadratic scoring function. An optimal flux distribution together with a metabolite profile is determined which assures thermodynamic realizability with minimal deviations of metabolite levels from their expected values. We applied our novel approach to two exemplary metabolic networks of different complexity, the metabolic core network of erythrocytes (30 reactions) and the metabolic network iJR904 of Escherichia coli (931 reactions). Our calculations show that increasing network complexity entails increasing sensitivity of predicted flux distributions to variations of standard Gibbs free energy changes and metabolite concentration ranges. We demonstrate the usefulness of our method for assessing critical concentrations of external metabolites preventing attainment of a metabolic steady state. Conclusion Our method incorporates the thermodynamic link between flux directions and metabolite concentrations into a practical computational algorithm. The weakness of conventional FBA to rely on intuitive assumptions about the reversibility of biochemical reactions is overcome. This enables the computation of reliable flux distributions even under extreme conditions of the network (e.g. enzyme inhibition, depletion of substrates or accumulation of end products) where metabolite concentrations may be drastically altered. PMID:17543097
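For readers unfamiliar with FBA itself, the sketch below sets up the underlying linear program (steady-state constraint S v = 0, flux bounds, maximize a target flux) on a toy three-reaction network with scipy.optimize.linprog. The paper's thermodynamic side constraint and mixed-integer formulation are deliberately omitted, and the network is an invented example.

```python
# Plain flux-balance-analysis sketch on a toy network (the paper's thermodynamic
# side constraint and mixed-integer formulation are not reproduced here).
# Maximize the "biomass" flux v3 subject to steady state S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Columns: v1 (uptake of A), v2 (A -> B), v3 (B -> biomass); rows: metabolites A, B
S = np.array([
    [1, -1,  0],
    [0,  1, -1],
])
c = np.array([0, 0, -1])                       # linprog minimizes, so negate v3
bounds = [(0, 10), (0, 10), (0, 10)]           # irreversible reactions, capacity 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)     # expected: all three fluxes at 10
```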
Rahman Prize Lecture: Lattice Boltzmann simulation of complex states of flowing matter
NASA Astrophysics Data System (ADS)
Succi, Sauro
Over the last three decades, the Lattice Boltzmann (LB) method has gained a prominent role in the numerical simulation of complex flows across an impressively broad range of scales, from fully-developed turbulence in real-life geometries, to multiphase flows in micro-fluidic devices, all the way down to biopolymer translocation in nanopores and lately, even quark-gluon plasmas. After a brief introduction to the main ideas behind the LB method and its historical developments, we shall present a few selected applications to complex flow problems at various scales of motion. Finally, we shall discuss prospects for extreme-scale LB simulations of outstanding problems in the physics of fluids and its interfaces with material sciences and biology, such as the modelling of fluid turbulence, the optimal design of nanoporous gold catalysts and protein folding/aggregation in crowded environments.
GAME: GAlaxy Machine learning for Emission lines
NASA Astrophysics Data System (ADS)
Ucci, G.; Ferrara, A.; Pallottini, A.; Gallerani, S.
2018-06-01
We present an updated, optimized version of GAME (GAlaxy Machine learning for Emission lines), a code designed to infer key interstellar medium physical properties from emission line intensities of ultraviolet /optical/far-infrared galaxy spectra. The improvements concern (a) an enlarged spectral library including Pop III stars, (b) the inclusion of spectral noise in the training procedure, and (c) an accurate evaluation of uncertainties. We extensively validate the optimized code and compare its performance against empirical methods and other available emission line codes (PYQZ and HII-CHI-MISTRY) on a sample of 62 SDSS stacked galaxy spectra and 75 observed HII regions. Very good agreement is found for metallicity. However, ionization parameters derived by GAME tend to be higher. We show that this is due to the use of too limited libraries in the other codes. The main advantages of GAME are the simultaneous use of all the measured spectral lines and the extremely short computational times. We finally discuss the code potential and limitations.
Automatic yield-line analysis of slabs using discontinuity layout optimization
Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.
2014-01-01
The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905
Tipping points? Curvilinear associations between activity level and mental development in toddlers.
Flom, Megan; Cohen, Madeleine; Saudino, Kimberly J
2017-05-01
The Theory of Optimal Stimulation (Zentall & Zentall, Psychological Bulletin, 94, 1983, 446) posits that the relation between activity level (AL) and cognitive performance follows an inverted U shape where midrange AL predicts better cognitive performance than AL at the extremes. We explored this by fitting linear and quadratic models predicting mental development from AL assessed via multiple methods (parent ratings, observations, and actigraphs) and across multiple situations (laboratory play, laboratory test, home) in over 600 twins (2- and 3-year olds). Only observed AL in the laboratory was curvilinearly related to mental development scores. Results replicated across situations, age, and twin samples, providing strong support for the optimal stimulation model for this measure of AL in early childhood. Different measures of AL provide different information. Observations of AL which include both qualitative and quantitative aspects of AL within structured situations are able to capture beneficial aspects of normative AL as well as detriments of both low and high AL. © 2016 Association for Child and Adolescent Mental Health.
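A minimal sketch of the kind of linear-versus-quadratic comparison implied by the inverted-U hypothesis is shown below on synthetic data; it is not the twin dataset or the study's actual modelling.

```python
# Sketch of testing for an inverted-U relation: compare a linear and a quadratic fit
# of mental-development scores on activity level (synthetic data, not the twin sample).
import numpy as np

rng = np.random.default_rng(3)
activity = rng.uniform(-2, 2, size=400)
mdi = 100 - 3.0 * activity ** 2 + rng.normal(scale=5.0, size=400)   # true inverted U

for degree in (1, 2):
    coefs = np.polyfit(activity, mdi, deg=degree)
    pred = np.polyval(coefs, activity)
    ss_res = np.sum((mdi - pred) ** 2)
    ss_tot = np.sum((mdi - mdi.mean()) ** 2)
    print(f"degree {degree}: R^2 = {1 - ss_res / ss_tot:.3f}")
# a markedly higher R^2 for degree 2, with a negative squared term, indicates the
# curvilinear (optimal-stimulation) pattern described above
```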
Live biospeckle laser imaging of root tissues.
Braga, Roberto A; Dupuy, L; Pasqual, M; Cardoso, R R
2009-06-01
Live imaging is now a central component for the study of plant developmental processes. Currently, most techniques are extremely constraining: they rely on the marking of specific cellular structures which generally apply to model species because they require genetic transformations. The biospeckle laser (BSL) system was evaluated as an instrument to measure biological activity in plant tissues. The system allows collecting biospeckle patterns from roots which are grown in gels. Laser illumination has been optimized to obtain the images without undesirable specular reflections from the glass tube. Data on two different plant species were obtained and the ability of three different methods to analyze the biospeckle patterns are presented. The results showed that the biospeckle could provide quantitative indicators of the molecular activity from roots which are grown in gel substrate in tissue culture. We also presented a particular experimental configuration and the optimal approach to analyze the images. This may serve as a basis to further works on live BSL in order to study root development.
Narcissism and Discrepancy between Self and Friends’ Perceptions of Personality
Park, Sun W.; Colvin, C. Randall
2013-01-01
Objective Most research on narcissism and person perception has used strangers as perceivers. However, research has demonstrated that strangers’ ratings are influenced by narcissists’ stylish appearance (Back, Schmukle, & Egloff, 2010). In the present study, we recruited participants and their close friends, individuals whose close relationship should immunize them to participants’ superficial appearance cues. We investigated the relation between narcissism and personality ratings by self and friends. Method Participants (N = 66; 38 women; mean age = 20.83) completed the Narcissistic Personality Inventory (Raskin & Terry, 1988) and described their personality on the 100-item California Adult Q-sort (CAQ; Block, 2008). Participants’ personality was also described on the CAQ by close friends. The “optimally adjusted individual” prototype was used to summarize participant and friend personality ratings (Block, 2008). Results Participants with high narcissism scores were ascribed higher optimal adjustment by self than by friends. Conclusion Narcissistic individuals’ self-ratings are extremely positive and more favorable than friends’ ratings of them. PMID:23799917
Ecklund, M M
1995-11-01
Critically ill patients have multiple risk factors for deep vein thrombosis and pulmonary embolism. The majority of patients with pulmonary embolism have a lower extremity deep vein thrombosis as a source of origin. Pulmonary embolism causes a high mortality rate in the hemodynamically compromised individual. Awareness of risk factors relative to the development of deep vein thrombosis and pulmonary embolism is important for the critical care nurse. Understanding the pathophysiology can help guide prophylaxis and treatment plans. The therapies, from invasive to mechanical, all carry risks and benefits, and are weighed for each patient. The advanced practice nurse, whether in the direct or indirect role, has an opportunity to impact the care of the high risk patient. Options range from teaching the nurse who is new to critical care, to teaching patients and families. Development of multidisciplinary protocols and clinical pathways are ways to impact the standard of care. Improved delivery of care methods can optimize the care rendered in an ever changing field of critical care.
Theoretical investigation and optimization of fiber grating based slow light
NASA Astrophysics Data System (ADS)
Wang, Qi; Wang, Peng; Du, Chao; Li, Jin; Hu, Haifeng; Zhao, Yong
2017-07-01
At the edge of the bandgap of a fiber grating, narrow peaks of high transmittivity exist at frequencies where light interferes constructively in the forward direction. In the vicinity of these transmittivity peaks, light reflects back and forth numerous times across the periodic structure and experiences a large group delay. In order to generate extremely slow light in fiber gratings for applications, this work first reviews the formation mechanism of slow light in fiber gratings. The means of producing and operating fiber gratings that support structural slow light, with a group index which can in principle be as high as several thousand, are then studied. Simulations based on the transfer matrix method are presented to elucidate how the fiber grating parameters affect the group refractive index. The main parameters to be optimized include grating length, refractive index contrast, grating period, loss coefficient, and chirp and apodization functions, all of which influence the fiber grating characteristics.
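As a rough illustration of the transfer-matrix calculation mentioned above, the sketch below evaluates the standard coupled-mode transmission of a single uniform fiber Bragg grating section and estimates the group index from the transmission phase. The parameter values are assumptions, not the optimized designs of the paper, and loss, chirp and apodization are ignored.

```python
# Sketch of the transfer-matrix calculation of transmission phase and group index for
# a single uniform fiber Bragg grating section (standard coupled-mode formulas);
# the numbers below are illustrative only.
import numpy as np

c0 = 2.998e8                 # speed of light, m/s
n_eff = 1.45                 # effective index
L = 0.02                     # grating length, m
lam_B = 1550e-9              # Bragg wavelength, m
kappa = 300.0                # ac coupling coefficient, 1/m

lam = np.linspace(lam_B - 0.4e-9, lam_B + 0.4e-9, 4001)
delta = 2 * np.pi * n_eff * (1 / lam - 1 / lam_B)          # detuning from the Bragg condition
gamma = np.sqrt(kappa ** 2 - delta ** 2 + 0j)
T11 = np.cosh(gamma * L) - 1j * (delta / gamma) * np.sinh(gamma * L)
t = 1.0 / T11                                              # complex amplitude transmission

omega = 2 * np.pi * c0 / lam
phase = np.unwrap(np.angle(t))
group_delay = np.gradient(phase, omega)                    # d(phase)/d(omega)
n_group = c0 * np.abs(group_delay) / L                     # group index seen by the pulse
print("peak group index near the band edge:", round(float(n_group.max()), 1))
```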
A method for aircraft concept exploration using multicriteria interactive genetic algorithms
NASA Astrophysics Data System (ADS)
Buonanno, Michael Alexander
2005-08-01
The problem of aircraft concept selection has become increasingly difficult in recent years due to changes in the primary evaluation criteria of concepts. In the past, performance was often the primary discriminator, whereas modern programs have placed increased emphasis on factors such as environmental impact, economics, supportability, aesthetics, and other metrics. The revolutionary nature of the vehicles required to simultaneously meet these conflicting requirements has prompted a shift from design using historical data regression techniques for metric prediction to the use of sophisticated physics-based analysis tools that are capable of analyzing designs outside of the historical database. The use of optimization methods with these physics-based tools, however, has proven difficult because of the tendency of optimizers to exploit assumptions present in the models and drive the design towards a solution which, while promising to the computer, may be infeasible due to factors not considered by the computer codes. In addition to this difficulty, the number of discrete options available at this stage may be unmanageable due to the combinatorial nature of the concept selection problem, leading the analyst to select a sub-optimum baseline vehicle. Some extremely important concept decisions, such as the type of control surface arrangement to use, are frequently made without sufficient understanding of their impact on the important system metrics due to a lack of historical guidance, computational resources, or analysis tools. This thesis discusses the difficulties associated with revolutionary system design, and introduces several new techniques designed to remedy them. First, an interactive design method has been developed that allows the designer to provide feedback to a numerical optimization algorithm during runtime, thereby preventing the optimizer from exploiting weaknesses in the analytical model. This method can be used to account for subjective criteria, or as a crude measure of un-modeled quantitative criteria. Other contributions of the work include a modified Structured Genetic Algorithm that enables the efficient search of large combinatorial design hierarchies and an improved multi-objective optimization procedure that can effectively optimize several objectives simultaneously. A new conceptual design method has been created by drawing upon each of these new capabilities and aspects of more traditional design methods. The ability of this new technique to assist in the design of revolutionary vehicles has been demonstrated using a problem of contemporary interest: the concept exploration of a supersonic business jet. This problem was found to be a good demonstration case because of its novelty and unique requirements, and the results of this proof of concept exercise indicate that the new method is effective at providing additional insight into the relationship between a vehicle's requirements and its favorable attributes.
Microsurgery within reconstructive surgery of extremities.
Pheradze, I; Pheradze, T; Tsilosani, G; Goginashvili, Z; Mosiava, T
2006-05-01
Reconstructive surgery of the extremities is an object of special attention for surgeons. Damage to vessels and nerves and deficiency of soft tissue and bone, associated with infection, result in a complete loss of extremity function and raise the question of amputation. The goal of the study was to improve the role of microsurgery in reconstructive surgery of the limbs. We operated on 294 patients with various diseases and injuries of the extremities: pathology of nerves, vessels, and tissue loss. An original method of treatment of large simultaneous functional defects of the limbs was used. Good functional and aesthetic results were obtained. Results of reconstructive operations on the extremities can be improved by using microsurgical methods. Microsurgery is deemed the method of choice for reconstructive surgery of the extremities, as the outcomes achieved through application of microsurgical technique significantly surpass those obtained through routine surgical methods.
Sun, Yu; Tamarit, Daniel
2017-01-01
The major codon preference model suggests that codons read by tRNAs in high concentrations are preferentially utilized in highly expressed genes. However, the identity of the optimal codons differs between species although the forces driving such changes are poorly understood. We suggest that these questions can be tackled by placing codon usage studies in a phylogenetic framework and that bacterial genomes with extreme nucleotide composition biases provide informative model systems. Switches in the background substitution biases from GC to AT have occurred in Gardnerella vaginalis (GC = 32%), and from AT to GC in Lactobacillus delbrueckii (GC = 62%) and Lactobacillus fermentum (GC = 63%). We show that despite the large effects on codon usage patterns by these switches, all three species evolve under selection on synonymous sites. In G. vaginalis, the dramatic codon frequency changes coincide with shifts of optimal codons. In contrast, the optimal codons have not shifted in the two Lactobacillus genomes despite an increased fraction of GC-ending codons. We suggest that all three species are in different phases of an on-going shift of optimal codons, and attribute the difference to a stronger background substitution bias and/or longer time since the switch in G. vaginalis. We show that comparative and correlative methods for optimal codon identification yield conflicting results for genomes in flux and discuss possible reasons for the mispredictions. We conclude that switches in the direction of the background substitution biases can drive major shifts in codon preference patterns even under sustained selection on synonymous codon sites. PMID:27540085
NASA Astrophysics Data System (ADS)
Steckiewicz, Adam; Butrylo, Boguslaw
2017-08-01
In this paper we discuss the results of a multi-criteria optimization scheme as well as numerical calculations of periodic conductive structures with selected geometries. Thin printed structures embedded on a flexible dielectric substrate may be applied as simple, cheap, passive low-pass filters with an adjustable cutoff frequency in the low radio frequency range (up to 1 MHz). The analysis of electromagnetic phenomena in the presented structures was based on a three-dimensional numerical model of three proposed geometries of periodic elements. The finite element method (FEM) was used to obtain the solution of the electromagnetic harmonic field. Equivalent lumped electrical parameters of the printed cells obtained in this manner determine the shape of the amplitude transmission characteristic of a low-pass filter. The nonlinear influence of the printed cell geometry on the equivalent parameters of the cell's electrical model makes it difficult to find the desired optimal solution. Therefore, the problem of estimating the optimal cell geometry, with regard to approximating a prescribed amplitude transmission characteristic with an adjusted cutoff frequency, was solved by the particle swarm optimization (PSO) algorithm. A dynamically adjusted inertia factor was also introduced into the algorithm to improve convergence to the global extremum of the multimodal objective function. Numerical results as well as PSO simulation results were characterized in terms of the approximation accuracy of the predefined amplitude characteristics in the pass-band, stop-band and at the cutoff frequency. Three geometries of varying degrees of complexity were considered and their use in signal processing systems was evaluated.
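The sketch below shows a generic particle swarm optimizer with a linearly decreasing inertia weight, the kind of dynamic inertia factor mentioned above, applied to a standard multimodal test function rather than the FEM-derived objective of the paper. All parameters are assumptions.

```python
# Generic particle-swarm-optimization sketch with a linearly decreasing inertia weight,
# run on the multimodal Rastrigin test function (not the paper's FEM-coupled objective).
import numpy as np

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(0)
n_particles, dim, iters = 40, 4, 300
c1 = c2 = 1.5
w_start, w_end = 0.9, 0.4                         # inertia decays over the run

x = rng.uniform(-5.12, 5.12, size=(n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), rastrigin(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for k in range(iters):
    w = w_start - (w_start - w_end) * k / (iters - 1)       # dynamic inertia factor
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, -5.12, 5.12)
    f = rastrigin(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best objective value found:", float(pbest_f.min()))
```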
Minimum-fuel, 3-dimensional flightpath guidance of transfer jets
NASA Technical Reports Server (NTRS)
Neuman, F.; Kreindler, E.
1984-01-01
Minimum fuel, three dimensional flightpaths for commercial jet aircraft are discussed. The theoretical development is divided into two sections. In both sections, the necessary conditions of optimal control, including singular arcs and state constraints, are used. One section treats the initial and final portions (below 10,000 ft) of long optimal flightpaths. Here all possible paths can be derived by generating fields of extremals. Another section treats the complete intermediate length, three dimensional terminal area flightpaths. Here only representative sample flightpaths can be computed. Sufficient detail is provided to give the student of optimal control a complex example of a useful application of optimal control theory.
Serrano, Ana; van Bommel, Maarten; Hallett, Jessica
2013-11-29
An evaluation was undertaken of ultrahigh pressure liquid chromatography (UHPLC) in comparison to high-performance liquid chromatography (HPLC) for characterizing natural dyes in cultural heritage objects. A new UHPLC method was optimized by testing several analytical parameters adapted from prior UHPLC studies developed in diverse fields of research. Different gradient elution programs were tested on seven UHPLC columns with different dimensions and stationary phase compositions by applying several mobile phases, flow rates, temperatures, and runtimes. The UHPLC method provided better results than the HPLC method. Although carminic acid showed circa 146% higher resolution with HPLC, UHPLC resulted in an increase of 41-61% in resolution and a decrease of 91-422% in limit of detection, depending on the dye compound. The optimized method was subsequently used to analyse 59 natural reference materials, in which 85 different components with different physicochemical properties were identified, in order to create a spectral database for future characterization of dyes in cultural heritage objects. The majority of these reference samples could be successfully distinguished with one single method through examination of the compounds' retention times and their spectra acquired with a photodiode array detector. These results demonstrate that UHPLC analyses are extremely valuable for the acquisition of more precise chromatographic information concerning natural dyes with complex mixtures of different and/or closely related physicochemical properties, essential for distinguishing similar species of plants and animals used to colour cultural heritage objects. Copyright © 2013 Elsevier B.V. All rights reserved.
Tsunami Modeling and Prediction Using a Data Assimilation Technique with Kalman Filters
NASA Astrophysics Data System (ADS)
Barnier, G.; Dunham, E. M.
2016-12-01
Earthquake-induced tsunamis cause dramatic damage along densely populated coastlines. It is difficult to predict and anticipate tsunami waves in advance, but if the earthquake occurs far enough from the coast, there may be enough time to evacuate the zones at risk. Therefore, any real-time information on the tsunami wavefield (as it propagates towards the coast) is extremely valuable for early warning systems. After the 2011 Tohoku earthquake, a dense tsunami-monitoring network (S-net) based on cabled ocean-bottom pressure sensors has been deployed along the Pacific coast in Northeastern Japan. Maeda et al. (GRL, 2015) introduced a data assimilation technique to reconstruct the tsunami wavefield in real time by combining numerical solution of the shallow water wave equations with additional terms penalizing the numerical solution for not matching observations. The penalty or gain matrix is determined through optimal interpolation and is independent of time. Here we explore a related data assimilation approach using the Kalman filter method to evolve the gain matrix. While more computationally expensive, the Kalman filter approach potentially provides more accurate reconstructions. We test our method on a 1D tsunami model derived from the Kozdon and Dunham (EPSL, 2014) dynamic rupture simulations of the 2011 Tohoku earthquake. For appropriate choices of model and data covariance matrices, the method reconstructs the tsunami wavefield prior to wave arrival at the coast. We plan to compare the Kalman filter method to the optimal interpolation method developed by Maeda et al. (GRL, 2015) and then to implement the method for 2D.
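For orientation, the sketch below runs a generic linear Kalman filter (predict/update with a time-evolving gain) on a small toy advection system; the shallow-water dynamics, bathymetry, and S-net observation geometry of the study are not modelled, and all matrices are invented.

```python
# Generic Kalman-filter assimilation sketch (predict/update on a small linear system);
# only the filtering step that evolves the gain matrix over time is illustrated.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, steps = 20, 5, 50
A = np.roll(np.eye(n_state), 1, axis=1)              # cyclic shift: crude wave propagation
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs) * 4] = 1.0      # sparse "pressure gauge" observations
Q = 1e-4 * np.eye(n_state)                           # model-error covariance
R = 1e-2 * np.eye(n_obs)                             # observation-error covariance

x_true = rng.normal(size=n_state)
x_est, P = np.zeros(n_state), np.eye(n_state)

for _ in range(steps):
    x_true = A @ x_true
    obs = H @ x_true + rng.normal(scale=0.1, size=n_obs)
    # predict
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    # update: the Kalman gain evolves with P instead of being fixed in time
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_est = x_est + K @ (obs - H @ x_est)
    P = (np.eye(n_state) - K @ H) @ P

print("final reconstruction error:", float(np.linalg.norm(x_est - x_true)))
```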
Improved mine blast algorithm for optimal cost design of water distribution systems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon
2015-12-01
The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
NASA Astrophysics Data System (ADS)
Kong, Wenwen; Liu, Fei; Zhang, Chu; Zhang, Jianfeng; Feng, Hailin
2016-10-01
The feasibility of hyperspectral imaging in the 400-1000 nm range was investigated for detecting malondialdehyde (MDA) content in oilseed rape leaves under herbicide stress. After comparing the performance of different preprocessing methods and linear and nonlinear calibration models, the optimal prediction performance was achieved by an extreme learning machine (ELM) model with only 23 wavelengths selected by competitive adaptive reweighted sampling (CARS), with RP = 0.929 and RMSEP = 2.951. Furthermore, an MDA distribution map was successfully obtained by a partial least squares (PLS) model with CARS. This study indicated that hyperspectral imaging technology provides a fast and nondestructive solution for MDA content detection in plant leaves.
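A minimal extreme-learning-machine regression, a random hidden layer plus analytic least-squares output weights, is sketched below on synthetic data standing in for the 23 CARS-selected wavelengths; it is not the authors' calibration model, and the target values are simulated.

```python
# Minimal extreme-learning-machine regression sketch: random hidden layer plus a
# least-squares (pseudo-inverse) output layer. Synthetic spectra stand in for the
# CARS-selected hyperspectral wavelengths.
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_wavelengths, n_hidden = 120, 23, 100
X = rng.normal(size=(n_samples, n_wavelengths))          # reflectance at selected bands
mda = 5 + X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=n_samples)   # simulated target

W = rng.uniform(-1, 1, size=(n_wavelengths, n_hidden))   # fixed random input weights
b = rng.uniform(-1, 1, size=n_hidden)
H = np.tanh(X @ W + b)                                   # hidden-layer outputs
beta = np.linalg.pinv(H) @ mda                           # analytic output weights

pred = H @ beta
rmse = float(np.sqrt(np.mean((pred - mda) ** 2)))
print("calibration RMSE of the ELM sketch:", round(rmse, 3))
```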
The multi-purpose three-axis spectrometer (TAS) MIRA at FRM II
NASA Astrophysics Data System (ADS)
Georgii, R.; Weber, T.; Brandl, G.; Skoulatos, M.; Janoschek, M.; Mühlbauer, S.; Pfleiderer, C.; Böni, P.
2018-02-01
The cold-neutron three-axis spectrometer MIRA is an instrument optimized for low-energy excitations. Its excellent intrinsic Q-resolution makes it ideal for studying incommensurate magnetic systems (elastic and inelastic). MIRA is at the forefront of using advanced neutron focusing optics such as elliptic guides, which enable the investigation of small samples under extreme conditions. Another advantage of MIRA is the modular assembly allowing for instrumental adaption to the needs of the experiment within a few hours. The development of new methods such as the spin-echo technique MIEZE is another important application at MIRA. Scientific topics include the investigation of complex inter-metallic alloys and spectroscopy on incommensurate magnetic structures.
The Construction and Validation of the Heat Vulnerability Index, a Review
Bao, Junzhe; Li, Xudong; Yu, Chuanhua
2015-01-01
The occurrence of extreme heat and its adverse effects will be exacerbated with the trend of global warming. An increasing number of researchers have been working on aggregating multiple heat-related indicators to create composite indices for heat vulnerability assessments and have visualized the vulnerability through geographic information systems to provide references for reducing the adverse effects of extreme heat more effectively. This review includes 15 studies concerning heat vulnerability assessment. We have studied the indicators utilized and the methods adopted in these studies for the construction of the heat vulnerability index (HVI) and then further reviewed some of the studies that validated the HVI. We concluded that the HVI is useful for targeting the intervention of heat risk, and that heat-related health outcomes could be used to validate and optimize the HVI. In the future, more studies should be conducted to provide references for the selection of heat-related indicators and the determination of weight values of these indicators in the development of the HVI. Studies concerning the application of the HVI are also needed. PMID:26132476
NASA Astrophysics Data System (ADS)
Castelle, Bruno; Dodet, Guillaume; Masselink, Gerd; Scott, Tim
2017-02-01
A pioneering and replicable method based on a 66-year numerical weather and wave hindcast is developed to optimize a climate index based on sea level pressure (SLP) that best explains winter wave height variability along the coast of western Europe, from Portugal to the UK (36-52°N). The resulting index, called the Western Europe Pressure Anomaly (WEPA), is based on the sea level pressure gradient between the stations Valentia (Ireland) and Santa Cruz de Tenerife (Canary Islands). The WEPA positive phase reflects an intensified and southward-shifted SLP difference between the Icelandic low and the Azores high, driving severe storms that funnel high-energy waves toward western Europe southward of 52°N. WEPA outscores the other leading atmospheric modes by 25-150% in explaining winter-averaged significant wave height, and by an even larger margin for winter-averaged extreme wave heights. WEPA is also the only index capturing the 2013/2014 extreme winter that caused widespread coastal erosion and flooding in western Europe.
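The construction of a two-station pressure-anomaly index and its correlation with winter wave height can be sketched as follows. The series below are synthetic stand-ins, whereas the actual WEPA uses the Valentia and Santa Cruz de Tenerife SLP records and a 66-year wave hindcast.

```python
# Sketch of building a two-station pressure-anomaly index and relating it to winter
# wave height, in the spirit of WEPA (synthetic series only).
import numpy as np

rng = np.random.default_rng(42)
n_winters = 66
slp_north = 1010 + 8 * rng.normal(size=n_winters)     # stand-in for Valentia SLP
slp_south = 1020 + 4 * rng.normal(size=n_winters)     # stand-in for Santa Cruz SLP

gradient = slp_south - slp_north                       # meridional pressure gradient
index = (gradient - gradient.mean()) / gradient.std()  # standardized winter index
hs_winter = 3.0 + 0.6 * index + 0.3 * rng.normal(size=n_winters)  # synthetic wave heights

r = np.corrcoef(index, hs_winter)[0, 1]
print(f"correlation between the index and winter-mean Hs: r = {r:.2f}",
      f"(explained variance = {r**2:.0%})")
```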
Development of a miniature Stirling cryocooler for LWIR small satellite applications
NASA Astrophysics Data System (ADS)
Kirkconnell, C. S.; Hon, R. C.; Perella, M. D.; Crittenden, T. M.; Ghiaasiaan, S. M.
2017-05-01
The optimum small satellite (SmallSat) cryocooler system must be extremely compact and lightweight, achieved in this paper by operating a linear cryocooler at a frequency of approximately 300 Hz. Operation at this frequency, which is well in excess of the 100-150 Hz reported in recent papers on related efforts, requires an evolution beyond the traditional Oxford-class, flexure-based methods of setting the mechanical resonance. A novel approach that optimizes the electromagnetic design and the mechanical design together to simultaneously achieve the required dynamic and thermodynamic performances is described. Since highly miniaturized pulse tube coolers are fundamentally ill-suited for the sub-80K temperature range of interest because the boundary layer losses inside the pulse tube become dominant at the associated very small pulse tube size, a moving displacer Stirling cryocooler architecture is used. Compact compressor mechanisms developed on a previous program are reused for this design, and they have been adapted to yield an extremely compact Stirling warm end motor mechanism. Supporting thermodynamic and electromagnetic analysis results are reported.
Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)
NASA Astrophysics Data System (ADS)
Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand
2018-03-01
Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
NASA Astrophysics Data System (ADS)
Wang, Yaohui; Xin, Xuegang; Guo, Lei; Chen, Zhifeng; Liu, Feng
2018-05-01
The switching of a gradient coil current in magnetic resonance imaging will induce an eddy current in the surrounding conducting structures while the secondary magnetic field produced by the eddy current is harmful for the imaging. To minimize the eddy current effects, the stray field shielding in the gradient coil design is usually realized by minimizing the magnetic fields on the cryostat surface or the secondary magnetic fields over the imaging region. In this work, we explicitly compared these two active shielding design methods. Both the stray field and eddy current on the cryostat inner surface were quantitatively discussed by setting the stray field constraint with an ultra-low maximum intensity of 2 G and setting the secondary field constraint with an extreme small shielding ratio of 0.000 001. The investigation revealed that the secondary magnetic field control strategy can produce coils with a better performance. However, the former (minimizing the magnetic fields) is preferable when designing a gradient coil with an ultra-low eddy current that can also strictly control the stray field leakage at the edge of the cryostat inner surface. A wrapped-edge gradient coil design scheme was then optimized for a more effective control of the stray fields. The numerical simulation on the wrapped-edge coil design shows that the optimized wrapping angles for the x and z coils in terms of our coil dimensions are 40° and 90°, respectively.
Peikert, Tobias; Duan, Fenghai; Rajagopalan, Srinivasan; Karwoski, Ronald A; Clay, Ryan; Robb, Richard A; Qin, Ziling; Sicks, JoRean; Bartholmai, Brian J; Maldonado, Fabien
2018-01-01
Optimization of the clinical management of screen-detected lung nodules is needed to avoid unnecessary diagnostic interventions. Herein we demonstrate the potential value of a novel radiomics-based approach for the classification of screen-detected indeterminate nodules. Independent quantitative variables assessing various radiologic nodule features such as sphericity, flatness, elongation, spiculation, lobulation and curvature were developed from the NLST dataset using 726 indeterminate nodules (all ≥ 7 mm; benign, n = 318; malignant, n = 408). Multivariate analysis was performed using the least absolute shrinkage and selection operator (LASSO) method for variable selection and regularization in order to enhance the prediction accuracy and interpretability of the multivariate model. The bootstrapping method was then applied for internal validation and the optimism-corrected AUC was reported for the final model. Eight of the originally considered 57 quantitative radiologic features were selected by LASSO multivariate modeling. These 8 features include variables capturing location: vertical location (Offset carina centroid z); size: volume estimate (Minimum enclosing brick); shape: flatness; density: texture analysis (Score Indicative of Lesion/Lung Aggression/Abnormality (SILA) texture); and surface characteristics: surface complexity (Maximum shape index and Average shape index) and estimates of surface curvature (Average positive mean curvature and Minimum mean curvature), all with P < 0.01. The optimism-corrected AUC for these 8 features is 0.939. Our novel radiomic LDCT-based approach for indeterminate screen-detected nodule characterization appears extremely promising; however, independent external validation is needed.
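A minimal sketch of the kind of workflow the abstract above describes (L1-penalized selection followed by a bootstrap optimism correction of the AUC); the feature matrix X, labels y, and all parameter values are placeholders, not the NLST variables or the authors' settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def lasso_auc(X, y, C=1.0):
    """Fit an L1-penalized logistic model and return it with its apparent AUC."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C, max_iter=5000)
    model.fit(X, y)
    return model, roc_auc_score(y, model.decision_function(X))

def optimism_corrected_auc(X, y, n_boot=200, C=1.0, seed=0):
    """Harrell-style bootstrap optimism correction of the apparent AUC."""
    rng = np.random.RandomState(seed)
    _, apparent = lasso_auc(X, y, C)
    optimism = []
    for _ in range(n_boot):
        Xb, yb = resample(X, y, random_state=rng, stratify=y)
        mb, auc_boot = lasso_auc(Xb, yb, C)                    # AUC on the bootstrap sample
        auc_orig = roc_auc_score(y, mb.decision_function(X))   # same model scored on original data
        optimism.append(auc_boot - auc_orig)
    return apparent - np.mean(optimism)
```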
Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase
NASA Technical Reports Server (NTRS)
Born, G. J.; Kai, T.
1975-01-01
A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. A control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phases. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and is interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between accuracy and response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system. The measure is to be determined by the system requirements.
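To make the role of the weighting matrices concrete, the sketch below solves a standard continuous-time LQR problem for an arbitrary linear model; the matrices A, B, Q, R and the scalar ratio scan are illustrative assumptions, not the helicopter model or the exact OWEM procedure from the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation and return the
    state-feedback gain K such that u = -K x minimizes the quadratic cost."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    return K, P

# Toy 2-state example: Q weights state error ("accuracy"), R weights control
# effort ("response speed"); scanning a single scalar ratio between them mimics
# the report's one-parameter trade-off.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
for ratio in (0.1, 1.0, 10.0):
    Q = ratio * np.eye(2)   # generalized accuracy weight
    R = np.eye(1)           # generalized effort weight
    K, _ = lqr_gain(A, B, Q, R)
    eigs = np.linalg.eigvals(A - B @ K)
    print(f"ratio={ratio:5.1f}  K={K.ravel()}  closed-loop eigs={eigs}")
```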
Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon
Regressions has successfully been incorporated into memetic algorithm (MA) to build surrogate models for the objective or constraint landscape of optimization problems. This helps to alleviate the needs for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classifications can alternatively be used to assist MA on the choice of individuals that would experience refinements. Support-vector-assisted MA were recently proposed to alleviate needs for function evaluations in the inequality-constrained optimization problems by distinguishing regions of feasible solutions from those of the infeasible ones based on some past solutions such that search efforts can be focussed on some potential regions only. For problems having equality constraints, however, the feasible space would obviously be extremely small. It is thus extremely difficult for the global search component of the MA to produce feasible solutions. Hence, the classification of feasible and infeasible space would become ineffective. In this paper, a novel strategy to overcome such limitation is proposed, particularly for problems having one and only one equality constraint. The raw constraint value of an individual, instead of its feasibility class, is utilized in this work.
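A minimal sketch of the strategy the abstract above outlines: instead of classifying individuals as feasible or infeasible, regress the raw value of the single equality constraint h(x) and spend local refinement only on candidates whose predicted |h| is smallest. The SVR surrogate, the toy constraint, and the thresholds are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR

def toy_equality_constraint(x):
    """Illustrative equality constraint h(x) = 0 (not from the paper)."""
    return np.sum(x**2, axis=1) - 1.0

rng = np.random.default_rng(0)
archive_X = rng.uniform(-2, 2, size=(200, 3))        # previously evaluated individuals
archive_h = toy_equality_constraint(archive_X)       # their raw constraint values

surrogate = SVR(kernel="rbf", C=10.0).fit(archive_X, archive_h)

population = rng.uniform(-2, 2, size=(50, 3))        # new candidates from the global search
predicted_h = surrogate.predict(population)

# Local refinement budget goes only to candidates predicted to lie closest to h(x)=0.
n_refine = 5
chosen = np.argsort(np.abs(predicted_h))[:n_refine]
print("indices selected for local refinement:", chosen)
```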
Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale
Engelmann, Christian; Hukerikar, Saurabh
2017-09-01
Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. Here, the overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.
Sergeyeva, Tetyana; Yarynka, Daria; Piletska, Elena; Lynnik, Rostyslav; Zaporozhets, Olga; Brovko, Oleksandr; Piletsky, Sergey; El'skaya, Anna
2017-12-01
Nanostructured polymeric membranes for the selective recognition of aflatoxin B1 were synthesized in situ and used as highly sensitive recognition elements in the developed fluorescent sensor. Artificial binding sites capable of selective recognition of aflatoxin B1 were formed in the structure of the polymeric membranes using the method of molecular imprinting. The composition of the molecularly imprinted polymer (MIP) membranes was optimized using computational modeling. The MIP membranes were synthesized using a non-toxic close structural analogue of aflatoxin B1, ethyl-2-oxocyclopentanecarboxylate, as a dummy template. The MIP membranes with the optimized composition demonstrated extremely high selectivity towards aflatoxin B1 (AFB1). Negligible binding of close structural analogues of AFB1 - aflatoxin B2 (AFB2), aflatoxin G2 (AFG2), and ochratoxin A (OTA) - was demonstrated. Binding of AFB1 by the MIP membranes was investigated as a function of both the type and concentration of the functional monomer in the initial monomer composition used for the membranes' synthesis, as well as the sample composition. The conditions of the solid-phase extraction of the mycotoxin using the MIP membrane as a stationary phase (pH, ionic strength, buffer concentration, volume of the solution, ratio between water and organic solvent, filtration rate) were optimized. The fluorescent sensor system based on the optimized MIP membranes enabled AFB1 detection within the range 14-500 ng mL⁻¹, with a detection limit (3σ) of 14 ng mL⁻¹. The developed technique was successfully applied for the analysis of model solutions and waste waters from bread-making plants. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
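For readers unfamiliar with the coupling, a generic mixed (Robin) interface condition for two heat-conducting domains takes the schematic form below; the single weight θ is a placeholder, whereas the CHAMP scheme derives its weights from the local stability analysis mentioned above.

```latex
% Sketch of a generalized Robin (mixed) interface condition on the shared
% boundary \Gamma. With n the unit normal pointing from domain 1 into domain 2,
% the exact solution satisfies T_1 = T_2 and k_1 \partial_n T_1 = k_2 \partial_n T_2,
% so any weighted blend of the two continuity conditions also holds:
\[
  \theta\, T_1 + (1-\theta)\, k_1 \frac{\partial T_1}{\partial n}
  \;=\;
  \theta\, T_2 + (1-\theta)\, k_2 \frac{\partial T_2}{\partial n}
  \qquad \text{on } \Gamma, \quad 0 \le \theta \le 1 .
\]
% \theta = 1 recovers a Dirichlet-type exchange and \theta = 0 a flux (Neumann)
% exchange; CHAMP instead chooses its weights by optimizing a locally derived
% stability condition rather than fixing a single \theta by hand.
```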
Data Capture Technique for High Speed Signaling
Barrett, Wayne Melvin; Chen, Dong; Coteus, Paul William; Gara, Alan Gene; Jackson, Rory; Kopcsay, Gerard Vincent; Nathanson, Ben Jesse; Vranas, Paylos Michael; Takken, Todd E.
2008-08-26
A data capture technique for high speed signaling to allow for optimal sampling of an asynchronous data stream. This technique allows for extremely high data rates and does not require that a clock be sent with the data as is done in source synchronous systems. The present invention also provides a hardware mechanism for automatically adjusting transmission delays for optimal two-bit simultaneous bi-directional (SiBiDi) signaling.
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation: minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for use in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the Levenberg-Marquardt conventional deterministic algorithm is also presented.
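A minimal sketch of a generalized-extremal-optimization-style move on a binary design vector, applied to a toy spectral-matching objective; the LED basis spectra, the target, and the value of τ are placeholders, this is a single-objective variant, and the real design additionally handles the CCT, CRI, and flux objectives.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: choose which of n candidate LED spectra to switch on so that
# their sum approximates a target spectrum (placeholders, not measured data).
n_leds, n_wavelengths = 16, 64
led_spectra = rng.random((n_leds, n_wavelengths))
target = led_spectra[rng.choice(n_leds, 6, replace=False)].sum(axis=0)

def fitness(bits):
    """Squared spectral error of the selected LED combination vs. the target."""
    return np.sum((bits @ led_spectra - target) ** 2)

def geo_step(bits, tau):
    """One extremal-optimization-style move: rank every bit flip, then pick one
    to apply with a power-law probability governed by the single parameter tau."""
    n = len(bits)
    delta = np.empty(n)
    for i in range(n):                 # fitness obtained by flipping each bit
        trial = bits.copy()
        trial[i] ^= 1
        delta[i] = fitness(trial)
    order = np.argsort(delta)          # rank k=1 is the most favorable flip
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(1, n + 1)
    probs = ranks ** (-tau)
    probs /= probs.sum()
    flip = rng.choice(n, p=probs)
    bits[flip] ^= 1                    # flip unconditionally (allows "avalanches")
    return bits

bits = rng.integers(0, 2, n_leds)
best_bits, best_f = bits.copy(), fitness(bits)
for _ in range(2000):
    bits = geo_step(bits, tau=1.5)
    f = fitness(bits)
    if f < best_f:
        best_bits, best_f = bits.copy(), f
print("best spectral error:", best_f)
```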
NASA Astrophysics Data System (ADS)
Lazoglou, Georgia; Anagnostopoulou, Christina; Tolika, Konstantia; Kolyva-Machera, Fotini
2018-04-01
The increasing trend in the intensity and frequency of temperature and precipitation extremes during the past decades has substantial environmental and socioeconomic impacts. Thus, the objective of the present study is the comparison of several statistical methods of extreme value theory (EVT) in order to identify which is the most appropriate for analyzing the behavior of extreme precipitation and high and low temperature events in the Mediterranean region. Extremes were selected using both the block maxima and the peaks-over-threshold (POT) techniques, and consequently both the generalized extreme value (GEV) and generalized Pareto distributions (GPDs) were used to fit them. The results were compared in order to select the most appropriate distribution for characterizing extremes. Moreover, this study evaluates the maximum likelihood estimation, L-moments, and Bayesian methods, based on both graphical and statistical goodness-of-fit tests. It was revealed that the GPD can characterize both precipitation and temperature extreme events accurately. Additionally, the GEV distribution with the Bayesian method proves appropriate, especially for the largest extreme values. Another important objective of this investigation was the estimation of precipitation and temperature return levels for three return periods (50, 100, and 150 years), after classifying the data into groups with similar characteristics. Finally, the return levels were estimated with both the GEV and GPD and with the three estimation methods, revealing that the selected method can affect the return level values for both precipitation and temperature.
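A minimal SciPy sketch of the two fitting routes compared in the abstract above, with return levels read off each fit; the synthetic data, block size, and return periods are placeholders, and the Bayesian and L-moments estimators evaluated in the study are not shown.

```python
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(42)

# Placeholder "daily" series (e.g., precipitation); 60 years of 365 values.
daily = rng.gamma(shape=0.8, scale=5.0, size=(60, 365))

# --- Block maxima / GEV ------------------------------------------------------
annual_max = daily.max(axis=1)
c, loc, scale = genextreme.fit(annual_max)          # ML fit (SciPy's shape c = -xi)
for T in (50, 100, 150):                            # T-year return levels
    level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"GEV {T:3d}-year return level: {level:.2f}")

# --- Peaks over threshold / GPD ----------------------------------------------
threshold = np.quantile(daily, 0.99)
exceedances = daily[daily > threshold] - threshold
c_gp, _, scale_gp = genpareto.fit(exceedances, floc=0.0)
rate = exceedances.size / daily.size                # exceedance probability per observation
n_per_year = daily.shape[1]
for T in (50, 100, 150):
    p = 1.0 / (T * n_per_year * rate)               # tail probability within the GPD
    level = threshold + genpareto.isf(p, c_gp, loc=0.0, scale=scale_gp)
    print(f"GPD {T:3d}-year return level: {level:.2f}")
```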
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
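To make the role of the amplitude multiplier tangible, the sketch below scans a multiplier over two normalized signals and computes a Katz-style planar-curve fractal dimension for each value; the Katz estimator stands in for the paper's own robust algorithm (it mixes time and amplitude in one length, so the multiplier matters), and the signals and multiplier grid are placeholders.

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a waveform treated as a planar curve with
    unit time steps; because time and amplitude are mixed in one length,
    multiplying the amplitude changes the estimate."""
    x = np.asarray(x, dtype=float)
    n = x.size - 1
    L = np.sum(np.sqrt(1.0 + np.diff(x) ** 2))                      # total curve length
    d = np.max(np.sqrt(np.arange(x.size) ** 2 + (x - x[0]) ** 2))   # planar extent from first point
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def normalize(x):
    return (x - np.mean(x)) / np.std(x)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
smooth = np.sin(2 * np.pi * 3 * t)                      # low-complexity placeholder signal
noisy = smooth + 0.5 * rng.standard_normal(t.size)      # high-complexity placeholder signal

# Scan the amplitude multiplier of the normalized (dimensionless) signals and
# keep the value that maximizes the separation between the two fractal dimensions.
multipliers = np.logspace(-2, 2, 41)
gaps = [abs(katz_fd(a * normalize(noisy)) - katz_fd(a * normalize(smooth)))
        for a in multipliers]
best = multipliers[int(np.argmax(gaps))]
print(f"multiplier maximizing FD separation: {best:.3g} (gap {max(gaps):.3f})")
```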
The Engineering for Climate Extremes Partnership
NASA Astrophysics Data System (ADS)
Holland, G. J.; Tye, M. R.
2014-12-01
Hurricane Sandy and the recent floods in Thailand have demonstrated not only how sensitive the urban environment is to the impact of severe weather, but also the associated global reach of the ramifications. These, together with other growing extreme weather impacts and the increasing interdependence of global commercial activities, point towards a growing vulnerability to weather and climate extremes. The Engineering for Climate Extremes Partnership brings academia, industry and government together with the goal of encouraging joint activities aimed at developing new, robust, and well-communicated responses to this increasing vulnerability. Integral to the approach is the concept of 'graceful failure', in which flexible designs are adopted that protect against failure by combining engineering or network strengths with a plan for efficient and rapid recovery if and when they fail. Such an approach enables optimal planning for both known future scenarios and their assessed uncertainty.
Congenital Differences of the Upper Extremity: Classification and Treatment Principles
2011-01-01
For hand surgeons, the treatment of children with congenital differences of the upper extremity is challenging because of the diverse spectrum of conditions encountered, but the task is also rewarding because it provides surgeons with the opportunity to impact a child's growth and development. An ideal classification of congenital differences of the upper extremity would reflect the full spectrum of morphologic abnormalities, encompass etiology, guide treatment, and provide prognoses. In this report, I review current classification systems and discuss their contradictions and limitations. In addition, I present a modified classification system and provide treatment principles. As our understanding of the etiology of congenital differences of the upper extremity increases and as experience of treating difficult cases accumulates, even an ideal classification system and optimal treatment strategies will undoubtedly continue to evolve. PMID:21909463
NASA Astrophysics Data System (ADS)
Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.
2017-05-01
Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution can not easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem specific parameter sets.
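A minimal sketch of a PSO loop with two of the additions the abstract above highlights, an exponential velocity-decay factor and an "execution best" particle shared across the run; the fitness function is a placeholder rather than the MSER stop-sign detection score, the exec-best term is used here as a simplified social attractor, and all coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    """Placeholder objective standing in for the MSER detection score
    (lower is better); the real fitness would run MSER on simulated imagery."""
    return np.sum((x - 0.3) ** 2, axis=-1)

dim, n_particles, iters = 4, 20, 200
w0, c1, c2, decay = 0.9, 1.5, 1.5, 0.01   # inertia, cognitive, social, velocity decay rate

pos = rng.uniform(0, 1, (n_particles, dim))
vel = rng.uniform(-0.1, 0.1, (n_particles, dim))
pbest = pos.copy()
pbest_f = fitness(pos)
exec_best = pbest[np.argmin(pbest_f)].copy()     # "execution best": best seen over the whole run
exec_best_f = pbest_f.min()

for t in range(iters):
    w = w0 * np.exp(-decay * t)                  # exponential decay of the velocity/inertia term
    r1, r2 = rng.random((2, n_particles, dim))
    vel = (w * vel
           + c1 * r1 * (pbest - pos)
           + c2 * r2 * (exec_best - pos))        # social pull toward the execution best
    pos = np.clip(pos + vel, 0, 1)
    f = fitness(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    if pbest_f.min() < exec_best_f:
        exec_best_f = pbest_f.min()
        exec_best = pbest[np.argmin(pbest_f)].copy()

print("best parameters found:", exec_best, "fitness:", exec_best_f)
```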
Biota and Biomolecules in Extreme Environments on Earth: Implications for Life Detection on Mars
Aerts, Joost W.; Röling, Wilfred F.M.; Elsaesser, Andreas; Ehrenfreund, Pascale
2014-01-01
The three main requirements for life as we know it are the presence of organic compounds, liquid water, and free energy. Several groups of organic compounds (e.g., amino acids, nucleobases, lipids) occur in all life forms on Earth and are used as diagnostic molecules, i.e., biomarkers, for the characterization of extant or extinct life. Due to their indispensability for life on Earth, these biomarkers are also prime targets in the search for life on Mars. Biomarkers degrade over time; in situ environmental conditions influence the preservation of those molecules. Nonetheless, upon shielding (e.g., by mineral surfaces), particular biomarkers can persist for billions of years, making them of vital importance in answering questions about the origins and limits of life on early Earth and Mars. The search for organic material and biosignatures on Mars is particularly challenging due to the hostile environment and its effect on organic compounds near the surface. In support of life detection on Mars, it is crucial to investigate analogue environments on Earth that resemble best past and present Mars conditions. Terrestrial extreme environments offer a rich source of information allowing us to determine how extreme conditions affect life and molecules associated with it. Extremophilic organisms have adapted to the most stunning conditions on Earth in environments with often unique geological and chemical features. One challenge in detecting biomarkers is to optimize extraction, since organic molecules can be low in abundance and can strongly adsorb to mineral surfaces. Methods and analytical tools in the field of life science are continuously improving. Amplification methods are very useful for the detection of low concentrations of genomic material but most other organic molecules are not prone to amplification methods. Therefore, a great deal depends on the extraction efficiency. The questions “what to look for”, “where to look”, and “how to look for it” require more of our attention to ensure the success of future life detection missions on Mars. PMID:25370528
Optimal accelerometer placement on a robot arm for pose estimation
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.
2017-05-01
The performance of robots to carry out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with angle joint encoders that are used to estimate the position and orientation (or the pose) of its end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every, or any joint. In this paper we study the use of inertial sensors, in particular accelerometers, placed on the robot that can be used to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement and Signal-to-Noise Ratio (SNR) are included in our study of their effects for robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated pose estimation gains resulting from using increasingly large numbers of sensors. Monte-Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with increasing number of accelerometers, whereas for a SNR model that scales inversely to the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.
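The sketch below runs a deliberately simplified version of such a Monte Carlo study: a static planar two-link arm, accelerometers that sense only the (gravity-aligned) link orientation vector with additive noise, link angles estimated by averaging the sensors on each link, and end-effector error collected over random poses. The noise model, link lengths, and estimator are assumptions, not the paper's fixed-SNR versus footprint-scaled models or its integration- and kinematic-constraint-based estimators.

```python
import numpy as np

rng = np.random.default_rng(7)
L1, L2 = 0.5, 0.4          # link lengths (m), placeholders
noise_std = 0.05           # per-sensor noise on the measured orientation components

def end_effector(theta1, theta2):
    """Planar two-link forward kinematics (absolute link angles)."""
    return np.array([L1 * np.cos(theta1) + L2 * np.cos(theta2),
                     L1 * np.sin(theta1) + L2 * np.sin(theta2)])

def estimate_angle(theta_true, n_sensors):
    """Estimate one link's absolute angle from n accelerometers, each reading
    the link orientation unit vector corrupted by independent noise."""
    gx = np.cos(theta_true) + noise_std * rng.standard_normal(n_sensors)
    gy = np.sin(theta_true) + noise_std * rng.standard_normal(n_sensors)
    return np.arctan2(gy.mean(), gx.mean())

def mc_error(n_per_link, trials=2000):
    errs = []
    for _ in range(trials):
        th1, th2 = rng.uniform(-np.pi, np.pi, 2)
        th1_hat = estimate_angle(th1, n_per_link)
        th2_hat = estimate_angle(th2, n_per_link)
        errs.append(np.linalg.norm(end_effector(th1, th2) - end_effector(th1_hat, th2_hat)))
    return np.mean(errs)

# Under this fixed per-sensor noise model the tip error shrinks as sensors are added,
# mirroring the fixed-SNR trend reported in the abstract.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} accelerometers per link -> mean tip error {mc_error(n) * 1000:.2f} mm")
```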
Impact of extreme precipitation events in the Miño-Sil river basin
NASA Astrophysics Data System (ADS)
Fernández-González, Manuel; Añel, Juan Antonio; de la Torre, Laura
2015-04-01
We herein research the impact of extreme rainfall events in the Miño-Sil basin, a heavily dammed basin located in the northwestern Iberian Peninsula. Extreme rainfall events are very important in this basin because, with 106 dams, it is the most dammed basin in Spain. These dams are almost exclusively used for hydropower generation; the installed generating capacity exceeds 2700 MW and represents almost 9% of the total installed electrical generation capacity of the Iberian Peninsula, and therefore has a potential impact on the energy market. We study extreme rainfall events and their return periods, first reproducing past extreme events and their return periods to verify the proper functioning of the adapted model, so that future extreme rainfall events in the basin can be forecast. This research aims to optimize the storage of the dams and adapt their management to problems such as climate change. The results obtained are very relevant for hydroelectric generation because the operation of the hydropower system depends primarily on the availability of stored water.
Tissue expansion in the treatment of giant congenital melanocytic nevi of the upper extremity
Ma, Tengxiao; Fan, Ke; Li, Lei; Xie, Feng; Li, Hao; Chou, Haiyan; Zhang, Zhengwen
2017-01-01
The aim of our study was to use tissue expansion for the treatment of giant congenital melanocytic nevi of the upper extremity and examine potential advantages over traditional techniques. There were 3 stages in the treatment of giant congenital melanocytic nevi of the upper extremities using tissue expansion: first, the expander was inserted into the subcutaneous pocket; second, the expander was removed, lesions were excised, and the wound of the upper extremity was placed into the pocket to delay healing; third, the residual lesion was excised and the pedicle was removed. The pedicle flap was then unfolded to resurface the wound. Between June 2007 and December 2015, 11 patients with giant congenital melanocytic nevi of the upper extremities underwent reconstruction with skin expansion at our department. Few complications were noted in each stage of treatment. The functional and aesthetic results were observed and discussed in this study. Optimal aesthetic and functional results were obtained using tissue expansion to reconstruct upper extremities affected by giant congenital melanocytic nevi. PMID:28353563
Multi-window detection for P-wave in electrocardiograms based on bilateral accumulative area.
Chen, Riqing; Huang, Yingsong; Wu, Jian
2016-11-01
P-wave detection is one of the most challenging aspects of electrocardiogram (ECG) analysis due to the P-wave's low amplitude, low frequency, and variable waveforms. This work introduces a novel multi-window detection method for P-wave delineation based on the bilateral accumulative area. The bilateral accumulative area is calculated by summing the areas covered by the P-wave curve within left and right sliding windows. The onset and offset of a positive P-wave correspond to the local maxima of the area detector. The position drift and the difference in area variation of local extreme points under different windows are used to systematically combine multi-window and 12-lead synchronous detection, which screens the optimal boundary points from all extreme points of different window widths and adaptively matches the P-wave location. The proposed method was validated with ECG signals from various databases, including the Standard CSE Database, T-Wave Alternans Challenge Database, PTB Diagnostic ECG Database, and the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database. The average sensitivity Se was 99.44% with a positive predictivity P+ of 99.37% for P-wave detection. Standard deviations of 3.7 and 4.3 ms were achieved for the onset and offset of P-waves, respectively, which is in agreement with the accepted tolerances required by the CSE committee. Compared with well-known delineation methods, this method achieves high sensitivity and positive predictivity using a simple calculation process. The experimental results suggest that the bilateral accumulative area could be an effective detection tool for ECG signal analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
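A minimal sketch of the bilateral accumulative area idea: for each sample, accumulate the signal area inside a left window and a right window, combine them, and take local maxima of the resulting detector as coarse P-wave location candidates. The window lengths, the synthetic P-like wave, and the peak-picking tolerance are placeholders, and the paper's precise onset/offset rule and the multi-window/12-lead combination logic are not reproduced.

```python
import numpy as np
from scipy.signal import argrelextrema

def bilateral_area(x, w_left, w_right):
    """For each sample, area of the curve over the preceding w_left samples
    plus the area over the following w_right samples (cumulative-sum trick)."""
    c = np.concatenate(([0.0], np.cumsum(x)))
    n = x.size
    idx = np.arange(n)
    left = c[idx + 1] - c[np.maximum(idx + 1 - w_left, 0)]
    right = c[np.minimum(idx + 1 + w_right, n)] - c[idx + 1]
    return left + right

fs = 500                                                 # Hz, placeholder sampling rate
t = np.arange(0, 0.6, 1 / fs)
p_wave = 0.1 * np.exp(-((t - 0.2) / 0.02) ** 2)          # synthetic P-like bump at 0.2 s
signal = p_wave + 0.005 * np.random.default_rng(0).standard_normal(t.size)

detector = bilateral_area(signal, w_left=int(0.04 * fs), w_right=int(0.04 * fs))
candidates = argrelextrema(detector, np.greater, order=int(0.05 * fs))[0]
print("candidate indices:", candidates, "times (s):", t[candidates])
```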
NASA Astrophysics Data System (ADS)
Okada, Yukimasa; Ono, Kouichi; Eriguchi, Koji
2017-06-01
Aggressive shrinkage and the geometrical transition to three-dimensional structures in metal-oxide-semiconductor field-effect transistors (MOSFETs) lead to potentially serious problems in plasma processing, such as plasma-induced physical damage (PPD). For the precise control of material processing and for future device designs, it is extremely important to clarify the depth and energy profiles of PPD. Conventional methods for estimating the PPD profile (e.g., wet etching) are time-consuming. In this study, we propose an advanced method using a simple capacitance-voltage (C-V) measurement. The method first assumes the depth and energy profiles of defects in Si substrates, and then optimizes these assumed profiles to reproduce the measured C-V curves. We applied this methodology to evaluate defect generation in (100), (111), and (110) Si substrates. No orientation dependence was found for the surface-oxide layers, whereas a large number of defects was assigned in the case of (110). The damaged-layer thickness and areal defect density were estimated. This method provides the highly sensitive PPD prediction indispensable for designing future low-damage plasma processes.
Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.
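A minimal sketch of the kernel extreme learning machine at the core of such a model: with an RBF kernel matrix Ω over the training inputs, the output weights have the closed form β = (I/C + Ω)⁻¹ y. The kernel width, regularization C, lag embedding, and toy series are placeholders, and the SSA filtering and GSA parameter search are not shown.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine for regression (closed-form training)."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma
    def fit(self, X, y):
        self.X = X
        omega = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + omega, y)
        return self
    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Toy one-step-ahead prediction from a lag embedding (the embedding dimension
# plays the role of the reconstructed phase space; data are placeholders).
rng = np.random.default_rng(0)
series = np.sin(np.arange(600) * 0.1) + 0.1 * rng.standard_normal(600)
lags = 6
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
split = 500
model = KELM(C=100.0, gamma=0.5).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```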
Sh, Jiying; Jin, Dan; Lu, Wei; Zhang, Xiaoyu; Zhang, Chao; Li, Liang; Ma, Ruiqiang; Xiao, Lei; Wang, Yiding; Lin, Min
2008-06-01
The aim was to isolate and characterize a glyphosate-resistant strain from an extremely polluted environment. A glyphosate-resistant strain was isolated from extremely polluted soil using glyphosate as the selection pressure. Its glyphosate resistance, optimal growth pH, and antibiotic sensitivity were determined, and its morphology, cultural characteristics, physiological and biochemical properties, chemotaxonomy, and 16S rDNA sequence were studied. Based on these results, the strain was identified according to the ninth edition of Bergey's Manual of Determinative Bacteriology. The isolate was named SL06500. It could grow in M9 minimal medium containing up to 500 mmol/L glyphosate. The optimal pH for growth of SL06500 was 4.0. It was resistant to ampicillin, kanamycin, tetracycline and chloromycetin. The 16S rDNA of SL06500 was amplified by PCR and sequenced. Compared with published 16S rDNA nucleotide sequences in NCBI (National Center for Biotechnology Information), SL06500 showed high identity with Achromobacter and Alcaligenes. Based on its morphological, physiological and biochemical characteristics, the strain was identified as Alcaligenes xylosoxidans subsp. xylosoxidans SL06500 according to the ninth edition of Bergey's Manual of Determinative Bacteriology. Strain SL06500 is worth further study because of its high glyphosate resistance.
Integrated assessment of water-power grid systems under changing climate
NASA Astrophysics Data System (ADS)
Yan, E.; Zhou, Z.; Betrie, G.
2017-12-01
Energy and water systems are intrinsically interconnected. Due to an increase in climate variability and extreme weather events, the interdependency between these two systems has recently intensified, resulting in significant impacts on both systems and on energy output. To address this challenge, an Integrated Water-Energy Systems Assessment Framework (IWESAF) is being developed to integrate multiple existing or newly developed models from various sectors. In this presentation, we focus on recent improvements in the development of a thermoelectric power plant water use simulator, a power grid operation and cost optimization model, and the model integration that facilitates interaction between water and electricity generation under extreme climate events. The process-based thermoelectric power water use simulator includes heat-balance, climate, and cooling system modules that account for power plant characteristics, fuel types, and cooling technology. The model is validated against more than 800 fossil-fired, nuclear, and gas-turbine power plants with different cooling systems. The power grid operation and cost optimization model was implemented for a selected region in the Midwest. A case study will be demonstrated to evaluate the sensitivity and resilience of thermoelectricity generation and the power grid under various climate and hydrologic extremes and the potential economic consequences.
A unified RANS–LES model: Computational development, accuracy and cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopalan, Harish, E-mail: hgopalan@uwyo.edu; Heinz, Stefan, E-mail: heinz@uwyo.edu; Stöllinger, Michael K., E-mail: MStoell@uwyo.edu
2013-09-15
Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier–Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS–LES methods are currently in use such that there is the question of which hybrid RANS–LES method represents the optimal approach. The properties of an optimal hybrid RANS–LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS–LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS–LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS–LES model and to show that this computational model, which is referred to as linear unified model (LUM), does also have all the properties of an optimal hybrid RANS–LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07 Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS–LES models, it is shown that the LUM provides significantly improved predictions.
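To put the quoted scaling in perspective, a quick worked number follows; the Reynolds number chosen is an arbitrary illustration, not a case from the paper.

```latex
% The quoted LUM-versus-LES cost-reduction factor, evaluated at an illustrative
% Reynolds number:
\[
  0.07\,\mathrm{Re}^{0.46}\Big|_{\mathrm{Re}=10^{6}}
  \;=\; 0.07 \times 10^{2.76}
  \;\approx\; 0.07 \times 575
  \;\approx\; 40,
\]
% i.e., at Re = 10^6 the unified model is estimated to be roughly forty times
% cheaper than the corresponding wall-resolved LES.
```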
Translator for Optimizing Fluid-Handling Components
NASA Technical Reports Server (NTRS)
Landon, Mark; Perry, Ernest
2007-01-01
A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and-data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation-and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.
Yeganeh, Ali; Otoukesh, Babak; Kaghazian, Peyman; Yeganeh, Nima; Boddohi, Bahram; Moghtadaei, Mehdi
2015-01-01
Background: Orthopedic implants are important tools for the treatment of bone fractures. Despite available recommendations for designing and making implants, there are multiple cases of fracture of these implants in the body. Hence, in this study the frequency of failure of implants in the long bones of the lower extremities was evaluated. Methods and Materials: In this cross-sectional study, two types of implants that had fractured in the body were analyzed and underwent metallurgical, mechanical, and modeling and stress-bending analyses. Results: The results revealed that the main cause of the fractures was decreased mechanical resistance due to inappropriate chemical composition (especially decreased percentages of nickel and molybdenum). Conclusions: It may be concluded that adherence to the standard chemical composition and the use of an optimal manufacturing method are the most important measures for preventing implant failure. PMID:26843735
Query Auto-Completion Based on Word2vec Semantic Similarity
NASA Astrophysics Data System (ADS)
Shao, Taihua; Chen, Honghui; Chen, Wanyu
2018-04-01
Query auto-completion (QAC) is the first step of information retrieval, helping users formulate the entire query after inputting only a few prefix characters. Traditional QAC models ignore the contribution of semantic relevance between queries, even though similar queries often express extremely similar search intentions. In this paper, we propose a hybrid model, FS-QAC, based on query semantic similarity as well as query frequency. We choose the word2vec method to measure the semantic similarity between intended queries and previously submitted queries. By combining both features, our experiments show that the FS-QAC model improves performance when predicting the user's query intention and helping formulate the right query. Our experimental results show that the optimal hybrid model contributes a 7.54% improvement in terms of MRR against a state-of-the-art baseline using the public AOL query logs.
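A minimal sketch of a frequency-plus-semantics ranking of completion candidates; the embeddings, the candidate list, and the mixing weight λ are placeholders, and the scoring form is a generic linear hybrid rather than the exact FS-QAC formulation.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Placeholder query embeddings (in practice, e.g., averaged word2vec vectors)
# and placeholder query-log frequencies.
rng = np.random.default_rng(0)
vocab = ["weather today", "weather forecast", "python tutorial", "python list sort"]
emb = {q: rng.standard_normal(50) for q in vocab}
freq = {"weather today": 900, "weather forecast": 700,
        "python tutorial": 800, "python list sort": 300}

def rank_completions(prefix, context_query, lam=0.6):
    """Score candidates matching the prefix by a linear mix of normalized
    log-frequency and cosine similarity to the user's previous query."""
    cands = [q for q in vocab if q.startswith(prefix)]
    if not cands:
        return []
    logf = np.array([np.log1p(freq[q]) for q in cands])
    logf = logf / logf.max()
    sim = np.array([cosine(emb[q], emb[context_query]) for q in cands])
    score = lam * sim + (1 - lam) * logf
    return [c for _, c in sorted(zip(score, cands), reverse=True)]

print(rank_completions("python", context_query="python tutorial"))
```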
Coaxial printing method for directly writing stretchable cable as strain sensor
NASA Astrophysics Data System (ADS)
Yan, Hai-liang; Chen, Yan-qiu; Deng, Yong-qiang; Zhang, Li-long; Hong, Xiao; Lau, Woon-ming; Mei, Jun; Hui, David; Yan, Hui; Liu, Yu
2016-08-01
By applying a liquid metal and an elastomer as the core and shell materials, respectively, a coaxial printing method is developed in this work for preparing a stretchable and conductive cable. When the liquid metal alloy eutectic gallium-indium is embedded into the elastomer matrix under optimized control, the cable exhibits excellent mechanical performance under extreme stretching of more than 350%. Under the developed compression test, the fabricated cable also demonstrates the ability to recover its original properties, owing to the high flowability of the liquid metal and the super-elasticity of the elastomeric shell. The written cable presents high cycling reliability in both its stretchability and conductivity, two properties that can be clearly predicted by theoretical calculation. This work can be further developed as a strain sensor for monitoring motion status, including the frequency and amplitude of a curved object, with extensive applications in wearable devices, soft robots, electronic skins, and wireless communication.
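For context, the theoretical prediction alluded to above is often taken to be the constant-volume resistance law for a liquid conductor; this is a standard idealization used for liquid-metal strain sensors, not necessarily the exact calculation in the paper.

```latex
% Constant-volume resistance law for a liquid conductor of resistivity \rho,
% initial length L_0 and cross-section A_0, stretched uniaxially by
% \lambda = L/L_0 (so A = A_0/\lambda at constant volume):
\[
  R \;=\; \frac{\rho L}{A} \;=\; \frac{\rho\,\lambda L_0}{A_0/\lambda}
    \;=\; \lambda^{2} R_0,
  \qquad \frac{\Delta R}{R_0} \;=\; \lambda^{2} - 1 .
\]
% Under this idealization a 350% strain (\lambda = 4.5) multiplies the resistance
% by roughly 20; the smooth, monotonic relation is what makes the printed cable
% usable as a strain sensor.
```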
Information Gain Based Dimensionality Selection for Classifying Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumidu Wijayasekara; Milos Manic; Miles McQueen
2013-06-01
Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
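A minimal sketch of the core idea: compute each dimension's information gain once, up front, and let it modulate the per-gene mutation probability inside a GA. Here mutual_info_classif stands in for the information-gain computation, and the data, the probability mapping, and the surrounding GA are placeholders rather than the paper's exact rule.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

# Placeholder classification data (a real text problem would be much wider).
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

# Information gain per dimension, computed a priori (so GA complexity is unchanged).
gain = mutual_info_classif(X, y, random_state=0)
gain = gain / gain.max()

p_min, p_max = 0.01, 0.2
# One plausible mapping (an assumption): high-gain dimensions mutate rarely,
# low-gain dimensions mutate often.
mutation_prob = p_max - (p_max - p_min) * gain

def mutate(chromosome):
    """Bit-flip mutation where each gene uses its own, gain-derived probability."""
    flips = rng.random(chromosome.size) < mutation_prob
    out = chromosome.copy()
    out[flips] ^= 1
    return out

chromosome = rng.integers(0, 2, X.shape[1])   # 1 = keep dimension, 0 = drop it
print("mutated chromosome:", mutate(chromosome))
```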
Enhancement method for rendered images of home decoration based on SLIC superpixels
NASA Astrophysics Data System (ADS)
Dai, Yutong; Jiang, Xiaotong
2018-04-01
Rendering technology has been widely used in the home decoration industry in recent years to produce images of home decoration designs. However, because rendered images of home decoration designs depend heavily on the renderer parameters and the scene lighting, most rendered images in this industry require further optimization afterwards. To reduce this workload and enhance rendered images automatically, an algorithm utilizing neural networks is proposed in this manuscript. In addition, to handle a few extreme conditions such as strong sunlight and artificial lights, SLIC-superpixel-based segmentation is used to select the bright areas of an image and enhance them independently. Finally, these selected areas are merged with the entire image. Experimental results show that the proposed method effectively enhances rendered images compared with some existing algorithms. Moreover, the proposed strategy proves adaptable, especially to images with prominent bright regions.
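A minimal scikit-image sketch of the bright-region branch of such a pipeline: segment with SLIC, flag superpixels whose mean brightness exceeds a threshold, enhance only those, and merge back. The thresholds and the gamma-style enhancement are placeholders, and the neural-network enhancement of the full image is not shown.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2gray
from skimage import img_as_float

def enhance_bright_regions(image, n_segments=300, brightness_thr=0.8, gamma=0.7):
    """Segment with SLIC, then apply a gamma-style enhancement only inside
    superpixels whose mean brightness exceeds the threshold."""
    image = img_as_float(image)
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    gray = rgb2gray(image)
    out = image.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        if gray[mask].mean() > brightness_thr:           # "bright" superpixel
            out[mask] = np.clip(out[mask] ** gamma, 0, 1)
    return out

# Usage on a placeholder image (in practice, a rendered interior scene).
rng = np.random.default_rng(0)
demo = rng.random((120, 160, 3))
result = enhance_bright_regions(demo)
print(result.shape, result.min(), result.max())
```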
New color-based tracking algorithm for joints of the upper extremities
NASA Astrophysics Data System (ADS)
Wu, Xiangping; Chow, Daniel H. K.; Zheng, Xiaoxiang
2007-11-01
To track the joints of the upper limb of stroke sufferers for rehabilitation assessment, a new tracking algorithm is proposed in this paper that utilizes a color-based particle filter and a novel strategy for handling occlusions. Objects are represented by their color histogram models, and a particle filter is introduced to track the objects within a probabilistic framework. A Kalman filter, acting as a local optimizer, is integrated into the sampling stage of the particle filter so that samples are steered to a region of high likelihood and fewer samples are therefore required. A color clustering method and anatomic constraints are used to deal with the occlusion problem. Compared with the basic particle filtering method, the experimental results show that the new algorithm reduces the number of samples, and hence the computational cost, and better handles complete occlusion over a few frames.
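A minimal sketch of the color-histogram likelihood step in such a particle filter: each particle's candidate region is scored by the Bhattacharyya similarity between its hue histogram and a reference model. The window size, histogram bins, and likelihood sharpness λ are placeholders, and the Kalman-steered sampling and occlusion-handling logic are not shown.

```python
import numpy as np

def hue_histogram(patch, bins=16):
    """Normalized histogram of a (H, W) hue-channel patch with values in [0, 1)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def bhattacharyya(p, q):
    return float(np.sum(np.sqrt(p * q)))

def particle_weights(hue_image, particles, ref_hist, half=10, lam=20.0):
    """Weight each particle (x, y) by exp(-lam * (1 - Bhattacharyya coefficient))
    between its local hue histogram and the reference color model."""
    H, W = hue_image.shape
    w = np.empty(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        x0, x1 = max(x - half, 0), min(x + half, W)
        y0, y1 = max(y - half, 0), min(y + half, H)
        hist = hue_histogram(hue_image[y0:y1, x0:x1])
        w[i] = np.exp(-lam * (1.0 - bhattacharyya(ref_hist, hist)))
    return w / w.sum()

# Toy usage: a distinct-hue blob at (60, 40) in an otherwise uniform-hue image.
rng = np.random.default_rng(0)
hue = rng.uniform(0.0, 0.2, (80, 120))
hue[30:50, 50:70] = rng.uniform(0.6, 0.7, (20, 20))
ref = hue_histogram(hue[30:50, 50:70])
particles = rng.uniform(0, [120, 80], size=(100, 2))     # (x, y) hypotheses
weights = particle_weights(hue, particles, ref)
print("most likely particle:", particles[np.argmax(weights)])
```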
Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar
2017-12-07
Pathological brain detection has made notable strides in the past years, and as a consequence many pathological brain detection systems (PBDSs) have been proposed. However, the accuracy of these systems still needs significant improvement in order to meet the needs of real-world diagnostic situations. In this paper, an efficient PBDS based on MR images is proposed that markedly improves on recent results. The proposed system makes use of contrast limited adaptive histogram equalization (CLAHE) to enhance the quality of the input MR images. Thereafter, a two-dimensional PCA (2DPCA) strategy is employed to extract the features and, subsequently, a PCA+LDA approach is used to generate a compact and discriminative feature set. Finally, a new learning algorithm called MDE-ELM is suggested that combines modified differential evolution (MDE) and the extreme learning machine (ELM) for classifying MR images as pathological or healthy. The MDE is utilized to optimize the input weights and hidden biases of single-hidden-layer feed-forward neural networks (SLFNs), whereas an analytical method is used for determining the output weights. The proposed algorithm performs optimization based on both the root mean squared error (RMSE) and the norm of the output weights of the SLFNs. The suggested scheme is benchmarked on three standard datasets and the results are compared against other competent schemes. The experimental outcomes show that the proposed scheme offers superior results compared to its counterparts. Further, the proposed MDE-ELM classifier obtains better accuracy with a more compact network architecture than conventional algorithms.
Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.
2017-12-01
SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high-resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved mega-thrust and a system of splay faults, as well as the seismic wave field and seafloor displacement, with frequency content up to 2.2 Hz. We validate the scenario against geodetic, seismological and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.
Ultra low density biodegradable shape memory polymer foams with tunable physical properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singhal, Pooja; Wilson, Thomas S.; Cosgriff-Hernandez, Elizabeth
Compositions and/or structures of degradable shape memory polymers (SMPs) ranging in form from neat/unfoamed materials to ultra low density materials of down to 0.005 g/cc density. These materials show controllable degradation rate, actuation temperature and breadth of transitions along with high modulus and excellent shape memory behavior. A method of making extremely low density foams (down to 0.005 g/cc) via use of combined chemical and physical blowing agents, where the physical blowing agents may be a single compound or mixtures of two or more compounds, and other related methods, including the use of multiple co-blowing agents of successively higher boiling points in order to achieve a large range of densities for a fixed net chemical composition. Methods of optimization of the physical properties of the foams, such as porosity, cell size and distribution, and cell openness, to further expand their uses and improve their performance.
Qian, Wenjuan; Lu, Ying; Meng, Youqing; Ye, Zunzhong; Wang, Liu; Wang, Rui; Zheng, Qiqi; Wu, Hui; Wu, Jian
2018-06-06
' Candidatus Liberibacter asiaticus' (Las) is the most prevalent bacterium associated with huanglongbing, which is one of the most destructive diseases of citrus. In this paper, an extremely rapid and simple method for field detection of Las from leaf samples, based on recombinase polymerase amplification (RPA), is described. Three RPA primer pairs were designed and evaluated. RPA amplification was optimized so that it could be accomplished within 10 min. In combination with DNA crude extraction by a 50-fold dilution after 1 min of grinding in 0.5 M sodium hydroxide and visual detection via fluorescent DNA dye (positive samples display obvious green fluorescence while negative samples remain colorless), the whole detection process can be accomplished within 15 min. The sensitivity and specificity of this RPA-based method were evaluated and were proven to be equal to those of real-time PCR. The reliability of this method was also verified by analyzing field samples.
A Large-Telescope Natural Guide Star AO System
NASA Technical Reports Server (NTRS)
Redding, David; Milman, Mark; Needels, Laura
1994-01-01
None given. From the overview and conclusion: Keck Telescope case study. Objectives: low cost, good sky coverage. Approach: natural guide star at 0.8 um, correcting at 2.2 um. Conclusions: good performance is possible for Keck with a natural guide star AO system (SR > 0.2 to mag 17+). An AO-optimized CCD should be very effective. Optimizing td is very effective. Spatial coadding is not effective except perhaps at extremely low light levels.
Analysis of the dependence of extreme rainfalls
NASA Astrophysics Data System (ADS)
Padoan, Simone; Ancey, Christophe; Parlange, Marc
2010-05-01
The aim of spatial analysis is to quantitatively describe the behavior of environmental phenomena such as precipitation levels, wind speed or daily temperatures. A number of generic approaches to spatial modeling have been developed [1], but these are not necessarily ideal for handling extremal aspects given their focus on mean process levels. The areal modelling of the extremes of a natural process observed at points in space is important in environmental statistics; for example, understanding extremal spatial rainfall is crucial in flood protection. In light of recent concerns over climate change, the use of robust mathematical and statistical methods for such analyses has grown in importance. Multivariate extreme value models and the class of max-stable processes [2] have a similar asymptotic motivation to the univariate Generalized Extreme Value (GEV) distribution, but provide a general approach to modeling extreme processes that incorporates temporal or spatial dependence. Statistical methods for max-stable processes and data analyses of practical problems are discussed by [3] and [4]. This work illustrates methods for the statistical modelling of spatial extremes and gives examples of their use by means of an extremal data analysis of Swiss precipitation levels. [1] Cressie, N. A. C. (1993). Statistics for Spatial Data. Wiley, New York. [2] de Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction. Springer, USA. [3] Padoan, S. A., Ribatet, M. and Sisson, S. A. (2009). Likelihood-Based Inference for Max-Stable Processes. Journal of the American Statistical Association, Theory & Methods. In press. [4] Davison, A. C. and Gholamrezaee, M. (2009). Geostatistics of extremes. Journal of the Royal Statistical Society, Series B. To appear.
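As a small, self-contained illustration of the univariate building block mentioned above, the sketch below fits a GEV distribution to simulated annual maxima and computes a return level with scipy; the data are synthetic, not the Swiss precipitation records, and the max-stable spatial dependence structure is not modelled here.

```python
# Illustrative sketch only: univariate GEV fit to simulated annual rainfall maxima.
# Note that scipy's shape parameter c is the negative of the usual GEV shape xi.
import numpy as np
from scipy.stats import genextreme

annual_maxima = genextreme.rvs(c=-0.1, loc=40.0, scale=10.0, size=80, random_state=1)

shape, loc, scale = genextreme.fit(annual_maxima)
r100 = genextreme.ppf(1 - 1.0 / 100, shape, loc=loc, scale=scale)  # 100-year return level
print(f"shape={shape:.2f} loc={loc:.1f} scale={scale:.1f}  100-yr level={r100:.1f} mm")
```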
Optimal security investments and extreme risk.
Mohtadi, Hamid; Agiwal, Swati
2012-08-01
In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk mitigating decision is an optimal strategy or not. Accordingly, we develop and estimate both a one-period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance. © 2012 Society for Risk Analysis.
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
Evaluation of antimicrobial properties of cork.
Gonçalves, Filipa; Correia, Patrícia; Silva, Susana P; Almeida-Aguiar, Cristina
2016-02-01
Cork presents a range of diverse and versatile properties making this material suitable for several and extremely diverse industrial applications. Despite the wide uses of cork, its antimicrobial properties and potential applications have deserved little attention from industry and the scientific community. Thus, the main purpose of this work was the evaluation of the antibacterial properties of cork, by comparison with commercially available antimicrobial materials (Ethylene-Vinyl Acetate copolymer and a currently used antimicrobial commercial additive (ACA)), following the previous development and optimization of a method for such antimicrobial assay. The AATCC 100-2004 standard method, a quantitative procedure developed for the assessment of antimicrobial properties in textile materials, was used as reference and optimized to assess cork antibacterial activity. Cork displayed high antibacterial activity against Staphylococcus aureus, with a bacterial reduction of almost 100% (96.93%) after 90 minutes of incubation, similar to the one obtained with ACA. A more reduced but time-constant antibacterial action was observed against Escherichia coli (36% reduction of the initial number of bacterial colonies). To complement this study, antibacterial activity was further evaluated for a water extract of cork and an MIC of 6 mg mL(-1) was obtained against the reference strain S. aureus. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Microwave-assisted extraction of cyclotides from Viola ignobilis.
Farhadpour, Mohsen; Hashempour, Hossein; Talebpour, Zahra; A-Bagheri, Nazanin; Shushtarian, Mozhgan Sadat; Gruber, Christian W; Ghassempour, Alireza
2016-03-15
Cyclotides are an interesting family of circular plant peptides. Their unique three-dimensional structure, comprising a head-to-tail circular backbone chain and three disulfide bonds, confers them stability against thermal, chemical, and enzymatic degradation. Their unique stability under extreme conditions creates an idea about the possibility of using harsh extraction methods such as microwave-assisted extraction (MAE) without affecting their structures. MAE has been introduced as a potent extraction method for extraction of natural compounds, but it is seldom used for peptide and protein extraction. In this work, microwave irradiation was applied to the extraction of cyclotides. The procedure was performed in various steps using a microwave instrument under different conditions. High-performance liquid chromatography (HPLC) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) results show stability of cyclotide structures on microwave radiation. The influential parameters, including time, temperature, and the ratio of solvents that are affecting the MAE potency, were optimized. Optimal conditions were obtained at 20 min of irradiation time, 1200 W of system power in 60 °C, and methanol/water at the ratio of 90:10 (v/v) as solvent. The comparison of MAE results with maceration extraction shows that there are similarities between cyclotide sequences and extraction yields. Copyright © 2015 Elsevier Inc. All rights reserved.
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine a highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits adjoint sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem set up. To parameterize an inverse domain a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
Automated Identification of Coronal Holes from Synoptic EUV Maps
NASA Astrophysics Data System (ADS)
Hamada, Amr; Asikainen, Timo; Virtanen, Ilpo; Mursula, Kalevi
2018-04-01
Coronal holes (CHs) are regions of open magnetic field lines in the solar corona and the source of the fast solar wind. Understanding the evolution of coronal holes is critical for solar magnetism as well as for accurate space weather forecasts. We study the extreme ultraviolet (EUV) synoptic maps at three wavelengths (195 Å/193 Å, 171 Å and 304 Å) measured by the Solar and Heliospheric Observatory/Extreme Ultraviolet Imaging Telescope (SOHO/EIT) and the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) instruments. The two datasets are first homogenized by scaling the SDO/AIA data to the SOHO/EIT level by means of histogram equalization. We then develop a novel automated method to identify CHs from these homogenized maps by determining the intensity threshold of CH regions separately for each synoptic map. This is done by identifying the best location and size of an image segment, which optimally contains portions of coronal holes and the surrounding quiet Sun allowing us to detect the momentary intensity threshold. Our method is thus able to adjust itself to the changing scale size of coronal holes and to temporally varying intensities. To make full use of the information in the three wavelengths we construct a composite CH distribution, which is more robust than distributions based on one wavelength. Using the composite CH dataset we discuss the temporal evolution of CHs during the Solar Cycles 23 and 24.
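The two preprocessing ideas named above, bringing the two instruments onto a common intensity scale by histogram matching and then thresholding dark pixels, can be sketched in a few lines; the maps below are synthetic, the matching is a simple quantile mapping rather than the paper's exact homogenization, and the per-map threshold search over image segments is not reproduced.

```python
# Simplified sketch: quantile-based histogram matching of a synthetic "AIA" map onto
# a synthetic "EIT" intensity scale, followed by a per-map low-intensity threshold.
import numpy as np

rng = np.random.default_rng(2)
eit = rng.gamma(shape=2.0, scale=50.0, size=(180, 360))    # synthetic reference map
aia = rng.gamma(shape=2.0, scale=80.0, size=(180, 360))    # synthetic map on another scale

# Histogram matching: map each AIA quantile to the EIT value at the same quantile.
quantiles = np.linspace(0.0, 1.0, 256)
aia_matched = np.interp(aia, np.quantile(aia, quantiles), np.quantile(eit, quantiles))

# Per-map threshold: flag dark (coronal-hole-like) pixels below a low percentile.
threshold = np.percentile(aia_matched, 10)
ch_mask = aia_matched < threshold
print(f"threshold={threshold:.1f}, CH pixel fraction={ch_mask.mean():.3f}")
```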
Application of field dependent polynomial model
NASA Astrophysics Data System (ADS)
Janout, Petr; Páta, Petr; Skala, Petr; Fliegel, Karel; Vítek, Stanislav; Bednář, Jan
2016-09-01
Extremely wide-field imaging systems have many advantages for imaging large scenes, whether in microscopy, all-sky cameras, or security technologies. The price of the large viewing angle is the amount of aberration introduced by these imaging systems. Modeling wavefront aberrations using Zernike polynomials has been known for a long time and is widely used. Our method does not model system aberrations by modeling the wavefront, but directly models the aberrated Point Spread Function of the imaging system. This is a very complicated task, and with conventional methods it was difficult to achieve the desired accuracy. Our optimization technique, which searches for the coefficients of space-variant Zernike polynomials, can be described as a comprehensive model for ultra-wide-field imaging systems. The advantage of this model is that it describes the whole space-variant system, unlike the majority of models, which treat the system as partly invariant. One challenge is that the modeled Point Spread Function is comparable in size to a pixel, so issues associated with sampling, pixel size, and the pixel sensitivity profile must be taken into account in the design. The model was verified on a series of laboratory test patterns, test images of laboratory light sources and, finally, on real images obtained by the extremely wide-field imaging system WILLIAM. Results of modeling this system are presented in this article.
NASA Astrophysics Data System (ADS)
Hou, Zeyu; Lu, Wenxi
2018-05-01
Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
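For concreteness, a kernel extreme learning machine surrogate of the kind discussed above can be written in closed form as a kernel ridge-type solution; the sketch below assumes an RBF kernel, a single regularisation constant, and random stand-in data in place of the simulation-model samples used in the study.

```python
# Hedged sketch of a KELM surrogate: closed-form weights beta = (I/C + K)^-1 y with
# an RBF kernel; the data here are random stand-ins for simulation-model samples.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X_train = rng.uniform(size=(100, 4))                 # surrogate training inputs
y_train = np.sin(X_train).sum(axis=1)                # stand-in simulator response
X_test = rng.uniform(size=(20, 4))

C = 100.0                                            # regularisation parameter
K = rbf_kernel(X_train, X_train)
beta = np.linalg.solve(np.eye(len(K)) / C + K, y_train)   # closed-form KELM weights
y_pred = rbf_kernel(X_test, X_train) @ beta
print("surrogate RMSE:", np.sqrt(np.mean((y_pred - np.sin(X_test).sum(axis=1)) ** 2)))
```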
A Unified Framework for Street-View Panorama Stitching
Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei
2016-01-01
In this paper, we propose a unified framework to generate a pleasant and high-quality street-view panorama by stitching multiple panoramic images captured from the cameras mounted on the mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Since the input images are captured without a precisely common projection center from scenes whose depths differ with respect to the cameras to different extents, such images cannot be precisely aligned in geometry. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignment. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm that matches extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate the stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
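A minimal sketch of just the color-correction step is given below: each channel of the source image is mapped linearly so that the extreme points (here, low/high percentiles) of its histogram match those of a reference image. The percentile choice and the synthetic images are assumptions; warping, seam detection and blending are not shown.

```python
# Sketch of color correction by matching extreme points of per-channel histograms;
# percentile levels and the synthetic images are illustrative assumptions.
import numpy as np

def match_histogram_extremes(src, ref, low=1.0, high=99.0):
    """Linearly map src so its per-channel low/high percentiles match ref's."""
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[2]):
        s_lo, s_hi = np.percentile(src[..., c], [low, high])
        r_lo, r_hi = np.percentile(ref[..., c], [low, high])
        gain = (r_hi - r_lo) / max(s_hi - s_lo, 1e-6)
        out[..., c] = (src[..., c] - s_lo) * gain + r_lo
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(4)
reference = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
overexposed = np.clip(reference.astype(np.int32) + 40, 0, 255).astype(np.uint8)
corrected = match_histogram_extremes(overexposed, reference)
print("mean abs difference after correction:",
      np.abs(corrected.astype(int) - reference.astype(int)).mean())
```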
Evolutionary-Optimized Photonic Network Structure in White Beetle Wing Scales.
Wilts, Bodo D; Sheng, Xiaoyuan; Holler, Mirko; Diaz, Ana; Guizar-Sicairos, Manuel; Raabe, Jörg; Hoppe, Robert; Liu, Shu-Hao; Langford, Richard; Onelli, Olimpia D; Chen, Duyu; Torquato, Salvatore; Steiner, Ullrich; Schroer, Christian G; Vignolini, Silvia; Sepe, Alessandro
2018-05-01
Most studies of structural color in nature concern periodic arrays, which create color through the interference of light. The "color" white, however, relies on the multiple scattering of light within a randomly structured medium, which randomizes the direction and phase of incident light. Opaque white materials therefore must be much thicker than periodic structures. It is known that flying insects create "white" in extremely thin layers. This raises the question of whether evolution has optimized the wing scale morphology for white reflection at minimum material use. This hypothesis is difficult to prove, since it requires detailed knowledge of the scattering morphology combined with a suitable theoretical model. Here, a cryo-ptychographic X-ray tomography method is employed to obtain a full 3D structural dataset of the network morphology within a white beetle wing scale. By digitally manipulating this 3D representation, this study demonstrates that this morphology indeed provides the highest white retroreflection at the minimum use of material, and hence weight, for the organism. Changing any of the network parameters (within the parameter space accessible to biological materials) either increases the weight, increases the thickness, or reduces reflectivity, providing clear evidence for the evolutionary optimization of this morphology. © 2017 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xuanlu M.; Louie, Alexander V.; Ashman, Jonathan
Purpose: Surgery combined with radiation therapy (RT) is the cornerstone of multidisciplinary management of extremity soft tissue sarcoma (STS). Although RT can be given in either the preoperative or the postoperative setting with similar local recurrence and survival outcomes, the side effect profiles, costs, and long-term functional outcomes are different. The aim of this study was to use decision analysis to determine optimal sequencing of RT with surgery in patients with extremity STS. Methods and Materials: A cost-effectiveness analysis was conducted using a state transition Markov model, with quality-adjusted life years (QALYs) as the primary outcome. A time horizon of 5 years, a cycle length of 3 months, and a willingness-to-pay threshold of $50,000/QALY was used. One-way deterministic sensitivity analyses were performed to determine the thresholds at which each strategy would be preferred. The robustness of the model was assessed by probabilistic sensitivity analysis. Results: Preoperative RT is a more cost-effective strategy ($26,633/3.00 QALYs) than postoperative RT ($28,028/2.86 QALYs) in our base case scenario. Preoperative RT is the superior strategy with either 3-dimensional conformal RT or intensity-modulated RT. One-way sensitivity analyses identified the relative risk of chronic adverse events as having the greatest influence on the preferred timing of RT. The likelihood of preoperative RT being the preferred strategy was 82% on probabilistic sensitivity analysis. Conclusions: Preoperative RT is more cost effective than postoperative RT in the management of resectable extremity STS, primarily because of the higher incidence of chronic adverse events with RT in the postoperative setting.
Improving the Predictability of Severe Water Levels along the Coasts of Marginal Seas
NASA Astrophysics Data System (ADS)
Ridder, N. N.; de Vries, H.; van den Brink, H.; De Vries, H.
2016-12-01
Extreme water levels can lead to catastrophic consequences with severe societal and economic repercussions. Particularly vulnerable are countries that are largely situated below sea level. To support and optimize forecast models, as well as future adaptation efforts, this study assesses the modeled contribution of storm surges and astronomical tides to total water levels under different air-sea momentum transfer parameterizations in a numerical surge model (WAQUA/DCSMv5) of the North Sea. It particularly focuses on the implications for the representation of extreme and rapidly recurring severe water levels over the past decades based on the example of the Netherlands. For this, WAQUA/DCSMv5, which is currently used to forecast coastal water levels in the Netherlands, is forced with ERA Interim reanalysis data. Model results are obtained from two different methodologies to parameterize air-sea momentum transfer. The first calculates the governing wind stress forcing using a drag coefficient derived from the conventional approach of wind speed dependent Charnock constants. The other uses instantaneous wind stress from the parameterization of the quasi-linear theory applied within the ECMWF wave model which is expected to deliver a more realistic forcing. The performance of both methods is tested by validating the model output with observations, paying particular attention to their ability to reproduce rapidly succeeding high water levels and extreme events. In a second step, the common features of and connections between these events are analyzed. The results of this study will allow recommendations for the improvement of water level forecasts within marginal seas and support decisions by policy makers. Furthermore, they will strengthen the general understanding of severe and extreme water levels as a whole and help to extend the currently limited knowledge about clustering events.
Identification of Extremely Premature Infants at High Risk of Rehospitalization
Carlo, Waldemar A.; McDonald, Scott A.; Yao, Qing; Das, Abhik; Higgins, Rosemary D.
2011-01-01
OBJECTIVE: Extremely low birth weight infants often require rehospitalization during infancy. Our objective was to identify at the time of discharge which extremely low birth weight infants are at higher risk for rehospitalization. METHODS: Data from extremely low birth weight infants in Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network centers from 2002–2005 were analyzed. The primary outcome was rehospitalization by the 18- to 22-month follow-up, and secondary outcome was rehospitalization for respiratory causes in the first year. Using variables and odds ratios identified by stepwise logistic regression, scoring systems were developed with scores proportional to odds ratios. Classification and regression-tree analysis was performed by recursive partitioning and automatic selection of optimal cutoff points of variables. RESULTS: A total of 3787 infants were evaluated (mean ± SD birth weight: 787 ± 136 g; gestational age: 26 ± 2 weeks; 48% male, 42% black). Forty-five percent of the infants were rehospitalized by 18 to 22 months; 14.7% were rehospitalized for respiratory causes in the first year. Both regression models (area under the curve: 0.63) and classification and regression-tree models (mean misclassification rate: 40%–42%) were moderately accurate. Predictors for the primary outcome by regression were shunt surgery for hydrocephalus, hospital stay of >120 days for pulmonary reasons, necrotizing enterocolitis stage II or higher or spontaneous gastrointestinal perforation, higher fraction of inspired oxygen at 36 weeks, and male gender. By classification and regression-tree analysis, infants with hospital stays of >120 days for pulmonary reasons had a 66% rehospitalization rate compared with 42% without such a stay. CONCLUSIONS: The scoring systems and classification and regression-tree analysis models identified infants at higher risk of rehospitalization and might assist planning for care after discharge. PMID:22007016
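The two modelling strategies named above can be illustrated with a short sketch on a synthetic cohort: a logistic regression whose odds ratios could seed a points-style score, and a shallow classification tree that partitions on the same predictors. The features, coefficients and tree depth below are illustrative, not the study's.

```python
# Illustrative sketch on synthetic data: logistic-regression odds ratios as the basis
# for a scoring system, plus a shallow CART-style tree on the same predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),          # shunt surgery for hydrocephalus (assumed binary flag)
    rng.integers(0, 2, n),          # pulmonary hospital stay > 120 days
    rng.integers(0, 2, n),          # male sex
    rng.normal(0.25, 0.05, n),      # FiO2 at 36 weeks
])
logit = -1.0 + 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.3 * X[:, 2] + 2.0 * (X[:, 3] - 0.25)
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic rehospitalization outcome

lr = LogisticRegression().fit(X, y)
odds_ratios = np.exp(lr.coef_[0])              # basis for a points-style score
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
print("odds ratios:", np.round(odds_ratios, 2))
print("tree training accuracy:", tree.score(X, y))
```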
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chen; Wang, Jianhui; Ton, Dan
Recent severe power outages caused by extreme weather hazards have highlighted the importance and urgency of improving the resilience of the electric power grid. As the distribution grids still remain vulnerable to natural disasters, the power industry has focused on methods of restoring distribution systems after disasters in an effective and quick manner. The current distribution system restoration practice for utilities is mainly based on predetermined priorities and tends to be inefficient and suboptimal, and the lack of situational awareness after the hazard significantly delays the restoration process. As a result, customers may experience an extended blackout, which causes large economic loss. On the other hand, the emerging advanced devices and technologies enabled through grid modernization efforts have the potential to improve the distribution system restoration strategy. However, utilizing these resources to aid the utilities in better distribution system restoration decision-making in response to extreme weather events is a challenging task. Therefore, this paper proposes an integrated solution: a distribution system restoration decision support tool designed by leveraging resources developed for grid modernization. We first review the current distribution restoration practice and discuss why it is inadequate in response to extreme weather events. Then we describe how the grid modernization efforts could benefit distribution system restoration, and we propose an integrated solution in the form of a decision support tool to achieve the goal. The advantages of the solution include improving situational awareness of the system damage status and facilitating survivability for customers. The paper provides a comprehensive review of how the existing methodologies in the literature could be leveraged to achieve the key advantages. The benefits of the developed system restoration decision support tool include the optimal and efficient allocation of repair crews and resources, the expediting of the restoration process, and the reduction of outage durations for customers, in response to severe blackouts due to extreme weather hazards.
Remote Sensing Decision Support System for Optimal Access Restoration in Post Disaster Environments
DOT National Transportation Integrated Search
2017-01-01
Access restoration is an extremely important part of disaster response. Without access to the site, critically important emergency functions like search and rescue, emergency evacuation, and relief distribution, cannot commence. Frequently, roads are...
EPA's Safe and Sustainable Water Resources Research Program: Water Systems Research
Water systems challenged by limited resources, aging infrastructure, shifting demographics, climate change, and extreme weather events need transformative approaches to meet public health and environmental goals, while optimizing water treatment and maximizing resource recovery a...
Popović, Dejan B; Popović, Mirjana B
2006-01-01
This paper suggests that the optimal method for promoting the recovery of upper extremity function in hemiplegic individuals is the use of hybrid assistive systems (HAS). The suggested HAS is a combination of stimulation of paralyzed distal segments (hand) in synchrony with robot-controlled movements of proximal segments (upper arm and forearm). The use of HAS is envisioned as part of voluntary activation of preserved sensory-motor systems during task-related exercise. This HAS design follows our results from functional electrical therapy, constraint-induced movement therapy, intensive exercise therapy, and the use of robots for rehabilitation. The suggestion is also based on strong evidence that cortical plasticity is best promoted by task-related exercise and patterned electrical stimulation.
New control concepts for uncertain water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming
1993-06-01
A major complicating factor in water resources systems management is handling unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach where input variables are only expected to belong in certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
Formulating Spatially Varying Performance in the Statistical Fusion Framework
Landman, Bennett A.
2012-01-01
To date, label fusion methods have primarily relied either on global (e.g. STAPLE, globally weighted vote) or voxelwise (e.g. locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach to seamlessly account for spatially varying performance by extending the performance level parameters to account for a smooth, voxelwise performance level field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets. PMID:22438513
Kong, Wenwen; Liu, Fei; Zhang, Chu; Zhang, Jianfeng; Feng, Hailin
2016-01-01
The feasibility of hyperspectral imaging in the 400–1000 nm range was investigated for detecting malondialdehyde (MDA) content in oilseed rape leaves under herbicide stress. After comparing the performance of different preprocessing methods and of linear and nonlinear calibration models, the optimal prediction performance was achieved by an extreme learning machine (ELM) model with only 23 wavelengths selected by competitive adaptive reweighted sampling (CARS), with RP = 0.929 and RMSEP = 2.951. Furthermore, an MDA distribution map was successfully obtained by a partial least squares (PLS) model with CARS. This study indicated that hyperspectral imaging technology provides a fast and nondestructive solution for MDA content detection in plant leaves. PMID:27739491
The multi-purpose three-axis spectrometer (TAS) MIRA at FRM II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georgii, Robert; Weber, Tobias; Brandl, Georg
The cold-neutron three-axis spectrometer MIRA is an instrument optimized for low-energy excitations. Its excellent intrinsic $Q$-resolution makes it ideal for studying incommensurate magnetic systems (elastic and inelastic). MIRA is at the forefront of using advanced neutron focusing optics such as elliptic guides, which enable the investigation of small samples under extreme conditions. Another advantage of MIRA is the modular assembly allowing for instrumental adaptation to the needs of the experiment within a few hours. The development of new methods such as the spin-echo technique MIEZE is another important application at MIRA. Finally, scientific topics include the investigation of complex inter-metallic alloys and spectroscopy on incommensurate magnetic structures.
Gaythorpe, Katy; Adams, Ben
2016-05-21
Epidemics of water-borne infections often follow natural disasters and extreme weather events that disrupt water management processes. The impact of such epidemics may be reduced by deployment of transmission control facilities such as clinics or decontamination plants. Here we use a relatively simple mathematical model to examine how demographic and environmental heterogeneities, population behaviour, and behavioural change in response to the provision of facilities, combine to determine the optimal configurations of limited numbers of facilities to reduce epidemic size, and endemic prevalence. We show that, if the presence of control facilities does not affect behaviour, a good general rule for responsive deployment to minimise epidemic size is to place them in exactly the locations where they will directly benefit the most people. However, if infected people change their behaviour to seek out treatment then the deployment of facilities offering treatment can lead to complex effects that are difficult to foresee. So careful mathematical analysis is the only way to get a handle on the optimal deployment. Behavioural changes in response to control facilities can also lead to critical facility numbers at which there is a radical change in the optimal configuration. So sequential improvement of a control strategy by adding facilities to an existing optimal configuration does not always produce another optimal configuration. We also show that the pre-emptive deployment of control facilities has conflicting effects. The configurations that minimise endemic prevalence are very different to those that minimise epidemic size. So cost-benefit analysis of strategies to manage endemic prevalence must factor in the frequency of extreme weather events and natural disasters. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hinze, J. F.; Klein, S. A.; Nellis, G. F.
2015-12-01
Mixed refrigerant (MR) working fluids can significantly increase the cooling capacity of a Joule-Thomson (JT) cycle. The optimization of MRJT systems has been the subject of substantial research. However, most optimization techniques do not model the recuperator in sufficient detail. For example, the recuperator is usually assumed to have a heat transfer coefficient that does not vary with the mixture. Ongoing work at the University of Wisconsin-Madison has shown that the heat transfer coefficients for two-phase flow are approximately three times greater than for a single phase mixture when the mixture quality is between 15% and 85%. As a result, a system that optimizes a MR without also requiring that the flow be in this quality range may require an extremely large recuperator or not achieve the performance predicted by the model. To ensure optimal performance of the JT cycle, the MR should be selected such that it is entirely two-phase within the recuperator. To determine the optimal MR composition, a parametric study was conducted assuming a thermodynamically ideal cycle. The results of the parametric study are graphically presented on a contour plot in the parameter space consisting of the extremes of the qualities that exist within the recuperator. The contours show constant values of the normalized refrigeration power. This ‘map’ shows the effect of MR composition on the cycle performance and it can be used to select the MR that provides a high cooling load while also constraining the recuperator to be two phase. The predicted best MR composition can be used as a starting point for experimentally determining the best MR.
2017-03-21
Energy and Water Projects, March 21, 2017. ... included reduced system energy use and cost as well as improved performance driven by autonomous commissioning and optimized system control. In the end ... improve system performance and reduce energy use and cost. However, implementing these solutions into the extremely heterogeneous and often
Marfeo, Elizabeth E; Ni, Pengsheng; Chan, Leighton; Rasch, Elizabeth K; Jette, Alan M
2014-07-01
The goal of this article was to investigate the optimal functioning of frequency vs. agreement rating scales in two subdomains of the newly developed Work Disability Functional Assessment Battery: the Mood & Emotions and Behavioral Control scales. A psychometric study comparing rating scale performance, embedded in a cross-sectional survey used to develop a new instrument measuring behavioral health functioning among adults applying for disability benefits in the United States, was performed. Within the sample of 1,017 respondents, the range of response category endorsement was similar for both frequency and agreement item types for both scales. There were fewer missing values in the frequency items than in the agreement items. Both frequency and agreement items showed acceptable reliability. The frequency items demonstrated optimal effectiveness around the mean ± 1-2 standard deviation score range; the agreement items performed better at the extreme score ranges. Findings suggest an optimal response format requires a mix of both agreement-based and frequency-based items. Frequency items perform better in the normal range of responses, capturing specific behaviors, reactions, or situations that may elicit a specific response. Agreement items do better for those whose scores are more extreme and capture subjective content related to general attitudes, behaviors, or feelings of work-related behavioral health functioning. Copyright © 2014 Elsevier Inc. All rights reserved.
Predicting protein amidation sites by orchestrating amino acid sequence features
NASA Astrophysics Data System (ADS)
Zhao, Shuqiu; Yu, Hua; Gong, Xiujun
2017-08-01
Amidation is the fourth major category of post-translational modifications and plays an important role in physiological and pathological processes. Identifying amidation sites can help us understand amidation and recognize the underlying causes of many diseases. However, the traditional experimental methods for identifying amidation sites are often time-consuming and expensive. In this study, we propose a computational method for predicting amidation sites by orchestrating amino acid sequence features. Three kinds of feature extraction methods are used to build a feature vector able to capture not only the physicochemical properties but also position-related information of the amino acids. An extremely randomized trees algorithm is applied in a supervised fashion to choose the optimal features and remove redundancy and dependence among components of the feature vector. Finally, a support vector machine classifier is used to label the amidation sites. When tested on an independent data set, the proposed method performs better than all previous ones, with a prediction accuracy of 0.962, a Matthews correlation coefficient of 0.89 and an area under the curve of 0.964.
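A plausible rendering of this pipeline with standard tooling is sketched below: extremely randomized trees provide supervised feature selection, and an RBF-kernel SVM does the final labelling. The random feature matrix stands in for the encoded peptide windows, and the pipeline settings are assumptions rather than the authors' configuration.

```python
# Sketch, under assumed settings: extremely randomized trees for feature selection
# followed by an SVM classifier; features are random stand-ins for sequence encodings.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 120))                 # encoded peptide windows (toy data)
y = (X[:, :5].sum(axis=1) > 0).astype(int)      # toy amidation-site labels

pipeline = make_pipeline(
    SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```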
Snyder-Mackler, Noah; Majoros, William H.; Yuan, Michael L.; Shaver, Amanda O.; Gordon, Jacob B.; Kopp, Gisela H.; Schlebusch, Stephen A.; Wall, Jeffrey D.; Alberts, Susan C.; Mukherjee, Sayan; Zhou, Xiang; Tung, Jenny
2016-01-01
Research on the genetics of natural populations was revolutionized in the 1990s by methods for genotyping noninvasively collected samples. However, these methods have remained largely unchanged for the past 20 years and lag far behind the genomics era. To close this gap, here we report an optimized laboratory protocol for genome-wide capture of endogenous DNA from noninvasively collected samples, coupled with a novel computational approach to reconstruct pedigree links from the resulting low-coverage data. We validated both methods using fecal samples from 62 wild baboons, including 48 from an independently constructed extended pedigree. We enriched fecal-derived DNA samples up to 40-fold for endogenous baboon DNA and reconstructed near-perfect pedigree relationships even with extremely low-coverage sequencing. We anticipate that these methods will be broadly applicable to the many research systems for which only noninvasive samples are available. The lab protocol and software (“WHODAD”) are freely available at www.tung-lab.org/protocols-and-software.html and www.xzlab.org/software.html, respectively. PMID:27098910
NASA Astrophysics Data System (ADS)
Turner, D.
2014-12-01
Understanding the potential economic and physical impacts of climate change on coastal resources involves evaluating a number of distinct adaptive responses. This paper presents a tool for such analysis, a spatially-disaggregated optimization model for adaptation to sea level rise (SLR) and storm surge, the Coastal Impact and Adaptation Model (CIAM). This decision-making framework fills a gap between very detailed studies of specific locations and overly aggregate global analyses. While CIAM is global in scope, the optimal adaptation strategy is determined at the local level, evaluating over 12,000 coastal segments as described in the DIVA database (Vafeidis et al. 2006). The decision to pursue a given adaptation measure depends on local socioeconomic factors like income, population, and land values and how they develop over time, relative to the magnitude of potential coastal impacts, based on geophysical attributes like inundation zones and storm surge. For example, the model's decision to protect or retreat considers the costs of constructing and maintaining coastal defenses versus those of relocating people and capital to minimize damages from land inundation and coastal storms. Uncertain storm surge events are modeled with a generalized extreme value distribution calibrated to data on local surge extremes. Adaptation is optimized for the near-term outlook, in an "act then learn then act" framework that is repeated over the model time horizon. This framework allows the adaptation strategy to be flexibly updated, reflecting the process of iterative risk management. CIAM provides new estimates of the economic costs of SLR; moreover, these detailed results can be compactly represented in a set of adaptation and damage functions for use in integrated assessment models. Alongside the optimal result, CIAM evaluates suboptimal cases and finds that global costs could increase by an order of magnitude, illustrating the importance of adaptive capacity and coastal policy.
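To make the protect-versus-retreat logic concrete, the following heavily simplified single-segment sketch compares the annualised cost of a dike against expected storm-surge damage computed from a GEV exceedance probability. All parameter values are hypothetical, and CIAM's sea-level-rise dynamics, relocation costs and act-then-learn updating are omitted.

```python
# Highly simplified single-segment sketch: expected annual surge damage from a GEV
# exceedance probability versus the annualised cost of protection. All numbers are
# hypothetical and do not reproduce CIAM's full decision structure.
import numpy as np
from scipy.stats import genextreme

surge = genextreme(c=-0.1, loc=1.0, scale=0.4)       # local surge-height distribution (m)
dike_height = 2.5                                    # candidate protection level (m)
exposed_value = 5e8                                  # capital at risk in the segment ($)
damage_if_flooded = 0.2 * exposed_value              # assumed damage per exceedance

expected_damage_unprotected = surge.sf(0.5) * damage_if_flooded   # low 0.5 m threshold
expected_damage_protected = surge.sf(dike_height) * damage_if_flooded
annual_dike_cost = 2e6

protect_net = annual_dike_cost + expected_damage_protected
print("protect" if protect_net < expected_damage_unprotected else "do not protect",
      f"(protect: {protect_net:.2e}, no action: {expected_damage_unprotected:.2e})")
```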
Optimality of affine control system of several species in competition on a sequential batch reactor
NASA Astrophysics Data System (ADS)
Rodríguez, J. C.; Ramírez, H.; Gajardo, P.; Rapaport, A.
2014-09-01
In this paper, we analyse the optimality of an affine control system of several species in competition for a single substrate in a sequential batch reactor, with the objective being to reach a given (low) level of the substrate. We allow controls to be bounded measurable functions of time plus possible impulses. A suitable modification of the dynamics leads to a slightly different optimal control problem, without impulsive controls, for which we apply different optimality conditions derived from the Pontryagin principle and the Hamilton-Jacobi-Bellman equation. We thus characterise the singular trajectories of our problem as the extremal trajectories keeping the substrate at a constant level. We also establish conditions under which an immediate one impulse (IOI) strategy is optimal. Some numerical experiments are then included in order to illustrate our study and show that those conditions are also necessary to ensure the optimality of the IOI strategy.
Minimum-fuel turning climbout and descent guidance of transport jets
NASA Technical Reports Server (NTRS)
Neuman, F.; Kreindler, E.
1983-01-01
The complete flightpath optimization problem for minimum fuel consumption from takeoff to landing including the initial and final turns from and to the runway heading is solved. However, only the initial and final segments which contain the turns are treated, since the straight-line climbout, cruise, and descent problems have already been solved. The paths are derived by generating fields of extremals, using the necessary conditions of optimal control together with singular arcs and state constraints. Results show that the speed profiles for straight flight and turning flight are essentially identical except for the final horizontal accelerating or decelerating turns. The optimal turns require no abrupt maneuvers, and an approximation of the optimal turns could be easily integrated with present straight-line climb-cruise-descent fuel-optimization algorithms. Climbout at the optimal IAS rather than the 250-knot terminal-area speed limit would save 36 lb of fuel for the 727-100 aircraft.
Kimura, Atsuomi; Narazaki, Michiko; Kanazawa, Yoko; Fujiwara, Hideaki
2004-07-01
The tissue distribution of perfluorooctanoic acid (PFOA), which is known to show unique biological responses, has been visualized in female mice by (19)F magnetic resonance imaging (MRI), incorporating recent advances in microimaging technique. The chemical-shift-selected fast spin-echo method was applied to acquire in vivo (19)F MR images of PFOA. The in vivo T(1) and T(2) relaxation times of PFOA proved to be extremely short, at 140 (+/- 20) ms and 6.3 (+/- 2.2) ms, respectively. To acquire the in vivo (19)F MR images of PFOA, it was necessary to optimize the parameters of signal selection and echo train length. The chemical shift selection was effectively performed using the (19)F NMR signal of the CF(3) group of PFOA without signal overlap, because the chemical shift difference between the CF(3) and neighboring signals reaches 14 kHz. The optimal echo train length for obtaining (19)F images efficiently was determined so that the maximum echo time (TE) value in the fast spin-echo sequence was comparable to the in vivo T(2) value. By optimizing these parameters, the in vivo (19)F MR image of PFOA could be obtained efficiently in 12 minutes. As a result, the time course of the accumulation of PFOA in the mouse liver was clearly followed in the (19)F MR images. Thus, it was concluded that (19)F MRI will become an effective method for future pharmacological and toxicological studies of perfluorocarboxylic acids.
A nonparametric multiple imputation approach for missing categorical data.
Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh
2017-06-06
Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value with other non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
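A rough sketch of the two-working-model construction is shown below: a multinomial outcome model and a binary missingness model each yield a predictive score, a weighted combination defines distances, and each missing value is imputed by drawing from its nearest donors. The weighting, neighbourhood size and synthetic data are assumptions, not the paper's exact specification, and only a single imputation is drawn.

```python
# Sketch of nearest-neighbour imputation driven by two working models; the weight,
# donor-set size and data are illustrative assumptions, and only one imputation is drawn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 3))
y = rng.integers(0, 3, size=n)                       # categorical outcome, 3 levels
missing = rng.random(n) < 1 / (1 + np.exp(-X[:, 0])) # MAR missingness indicator

outcome_model = LogisticRegression(max_iter=1000).fit(X[~missing], y[~missing])
missing_model = LogisticRegression(max_iter=1000).fit(X, missing)

# Predictive scores: expected category from the outcome model, plus missingness prob.
score_outcome = outcome_model.predict_proba(X) @ np.arange(3)
score_missing = missing_model.predict_proba(X)[:, 1]
w = 0.5                                              # weight balancing the two models
score = w * score_outcome + (1 - w) * score_missing

donors = np.flatnonzero(~missing)
y_imputed = y.copy()
for i in np.flatnonzero(missing):
    nearest = donors[np.argsort(np.abs(score[donors] - score[i]))[:10]]
    y_imputed[i] = y[rng.choice(nearest)]            # one imputation; repeat for MI
print("imputed category proportions:", np.bincount(y_imputed) / n)
```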
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal-phase estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
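A minimal real-coded GA on a multi-hill model problem of this flavour is sketched below; the number of hills, genes and the operator choices (tournament selection, blend crossover, Gaussian mutation) are illustrative assumptions rather than the paper's test configuration.

```python
# Minimal GA sketch on a multi-modal "hills" model problem; operators and settings
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(8)
n_genes, pop_size, n_gen = 4, 60, 80
peaks = rng.uniform(0, 1, size=(5, n_genes))           # centres of 5 hills

def fitness(x):                                        # height of the nearest hill
    return np.exp(-20 * ((x - peaks) ** 2).sum(axis=1)).max()

pop = rng.uniform(0, 1, size=(pop_size, n_genes))
for _ in range(n_gen):
    fit = np.array([fitness(ind) for ind in pop])
    # Tournament selection between random pairs.
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Blend crossover with a shuffled partner, then Gaussian mutation.
    partners = parents[rng.permutation(pop_size)]
    alpha = rng.uniform(size=(pop_size, 1))
    pop = alpha * parents + (1 - alpha) * partners
    pop = np.clip(pop + rng.normal(0, 0.02, pop.shape), 0, 1)

best = max(pop, key=fitness)
print("best fitness:", round(fitness(best), 3))
```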
3D surface voxel tracing corrector for accurate bone segmentation.
Guo, Haoyan; Song, Sicong; Wang, Jinke; Guo, Maozu; Cheng, Yuanzhi; Wang, Yadong; Tamura, Shinichi
2018-06-18
For extremely close bones, the boundaries are weak and diffused due to strong interaction between adjacent surfaces. These factors prevent accurate segmentation of bone structure. To alleviate these difficulties, we propose an automatic method for accurate bone segmentation. The method is based on the 3D surface normal direction, which is used to detect the bone boundary in 3D CT images. Our segmentation method is divided into three main stages. Firstly, we use a surface tracing corrector combined with the Gaussian standard deviation [Formula: see text] to improve the estimation of the normal direction. Secondly, we determine an optimal value of [Formula: see text] for each surface point during this normal direction correction. Thirdly, we construct a 1D signal and refine the rough boundary along the corrected normal direction. The value of [Formula: see text] is used in the first directional derivative of the Gaussian to refine the location of the edge point along the accurate normal direction. Because the normal direction is corrected and the value of [Formula: see text] is optimized, our method is robust to noisy images and to narrow joint spaces caused by joint degeneration. We applied our method to 15 wrists and 50 hip joints for evaluation. In the wrist segmentation, a Dice overlap coefficient (DOC) of [Formula: see text]% was obtained by our method. In the hip segmentation, fivefold cross-validations were performed for two state-of-the-art methods. Forty hip joints were used for training in the two state-of-the-art methods, and 10 hip joints were used for testing and performing comparisons. DOCs of [Formula: see text], [Formula: see text]%, and [Formula: see text]% were achieved by our method for the pelvis, the left femoral head and the right femoral head, respectively. Our method was shown to improve segmentation accuracy for several specific challenging cases. The results demonstrate that our approach achieves superior accuracy over two state-of-the-art methods.
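The final refinement step can be illustrated in one dimension: along a profile sampled in the (assumed already corrected) normal direction, the edge is located at the extremum of the first derivative of a Gaussian applied to the signal. The synthetic profile and fixed sigma below are assumptions; the per-point sigma optimisation is not shown.

```python
# 1D sketch of edge refinement via the first derivative of a Gaussian; the profile is
# synthetic and sigma is fixed rather than optimised per surface point.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(100, dtype=float)
profile = 1.0 / (1.0 + np.exp(-(x - 42.3)))              # soft boundary near index 42.3
profile += np.random.default_rng(9).normal(0, 0.03, x.size)

sigma = 3.0                                              # would be optimised per point
d1 = gaussian_filter1d(profile, sigma=sigma, order=1)    # first derivative of Gaussian
edge_index = int(np.argmax(np.abs(d1)))
print("refined edge location:", edge_index)
```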
Upper Extremity Regional Anesthesia
Neal, Joseph M.; Gerancher, J.C.; Hebl, James R.; Ilfeld, Brian M.; McCartney, Colin J.L.; Franco, Carlo D.; Hogan, Quinn H.
2009-01-01
Brachial plexus blockade is the cornerstone of the peripheral nerve regional anesthesia practice of most anesthesiologists. As part of the American Society of Regional Anesthesia and Pain Medicine’s commitment to providing intensive evidence-based education related to regional anesthesia and analgesia, this article is a complete update of our 2002 comprehensive review of upper extremity anesthesia. The text of the review focuses on (1) pertinent anatomy, (2) approaches to the brachial plexus and techniques that optimize block quality, (3) local anesthetic and adjuvant pharmacology, (4) complications, (5) perioperative issues, and (6) challenges for future research. PMID:19282714
Zwaan, Eva M; IJsselmuiden, Alexander J J; van Rosmalen, Joost; van Geuns, Robert-Jan M; Amoroso, Giovanni; Moerman, Esther; Ritt, Marco J P F; Schreuders, Ton A R; Kofflard, Marcel J M; Holtzer, Carlo A J
2016-12-01
The aim of this study is to provide a complete insight into the access-site morbidity and upper extremity function after Transradial Percutaneous Coronary Intervention (TR-PCI). In percutaneous coronary intervention, the Transradial Approach (TRA) is gaining popularity as a default technique. It is a very promising technique with respect to post-procedure complications, but the exact effects of TRA on upper extremity function are unknown. The effects of trAnsRadial perCUtaneouS coronary intervention on upper extremity function (ARCUS) trial is a multicenter prospective cohort study that will be conducted in all patients admitted for TR-PCI. Clinical outcomes will be monitored during a follow-up of 6 months, with the primary endpoint at two weeks of follow-up. To investigate the complete upper extremity function, a combination of physical examinations and validated questionnaires will be used to provide information on anatomical integrity, strength, range of motion (ROM), coordination, sensibility, pain, and functioning in everyday life. Procedural and material specifications will be registered in order to include all possible aspects influencing upper extremity function. Results from this study will elucidate the effect of TR-PCI on upper extremity function. This creates the opportunity to further optimize TR-PCI, to make improvements in functional outcome and to prevent morbidity regarding full upper extremity function. © 2016 Wiley Periodicals, Inc.
Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.
Zhu, Zhiwei; To, Suet; Zhang, Shaojian
2015-08-10
Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens arrays (MLA). However, it is still a challenge to process MLA at large scale due to certain inherent limitations of this technique. In the present study, a novel ultra-precision diamond cutting method, known as the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLA. After a detailed discussion of the characteristic advantages for processing MLA, the optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography and machining efficiency are critically investigated. The results indicate that an MLA with homogeneous quality over the whole area is obtained. Besides, high machining efficiency, an extremely small volume of control points for the toolpath, and optimal usage of the system dynamics of the machine tool during the whole cutting process can be achieved simultaneously.
Li, Hongyu; Walker, David; Yu, Guoyu; Sayle, Andrew; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony
2013-01-14
Edge mis-figure is regarded as one of the most difficult technical issues for manufacturing the segments of extremely large telescopes, which can dominate key aspects of performance. A novel edge-control technique has been developed, based on 'Precessions' polishing technique and for which accurate and stable edge tool influence functions (TIFs) are crucial. In the first paper in this series [D. Walker Opt. Express 20, 19787-19798 (2012)], multiple parameters were experimentally optimized using an extended set of experiments. The first purpose of this new work is to 'short circuit' this procedure through modeling. This also gives the prospect of optimizing local (as distinct from global) polishing for edge mis-figure, now under separate development. This paper presents a model that can predict edge TIFs based on surface-speed profiles and pressure distributions over the polishing spot at the edge of the part, the latter calculated by finite element analysis and verified by direct force measurement. This paper also presents a hybrid-measurement method for edge TIFs to verify the simulation results. Experimental and simulation results show good agreement.
A Robust Kalman Framework with Resampling and Optimal Smoothing
Kautz, Thomas; Eskofier, Bjoern M.
2015-01-01
The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have been mostly treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely-used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of application for the presented analysis procedure range from movement analysis and medical imaging to brain-computer interfaces, robot navigation and meteorological studies. PMID:25734647
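A minimal sketch of two of the ingredients described above, innovation gating for outlier robustness and prediction onto a uniform output grid from non-uniform samples, is shown below with toy data and parameter values; it is not the authors' procedure and omits the smoothing and parameter-calculation steps:

```python
# Scalar random-walk Kalman filter with innovation gating and constant-rate output.
import numpy as np

def robust_kf(times, values, q=1e-3, r=1.0, gate=3.0, out_dt=0.1):
    x, p = values[0], 1.0                    # state estimate and its variance
    t_prev = times[0]
    out_t = np.arange(times[0], times[-1], out_dt)
    out_x, k = [], 0
    for t, z in zip(times, values):
        p += q * (t - t_prev)                # predict (random-walk process noise)
        s = p + r                            # innovation variance
        if abs(z - x) <= gate * np.sqrt(s):  # gate: skip updates from outliers
            g = p / s                        # Kalman gain
            x += g * (z - x)
            p *= (1.0 - g)
        t_prev = t
        while k < len(out_t) and out_t[k] <= t:
            out_x.append(x)                  # resample at a constant output rate
            k += 1
    return out_t[:len(out_x)], np.array(out_x)

t = np.sort(np.random.uniform(0, 10, 200))   # non-uniform sampling times
z = np.sin(t) + np.random.normal(0, 0.3, t.size)
z[50] += 10.0                                 # inject an outlier
print(robust_kf(t, z)[1][:5])
```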
Foltz, Ian N; Gunasekaran, Kannan; King, Chadwick T
2016-03-01
Since the late 1990s, the use of transgenic animal platforms has transformed the discovery of fully human therapeutic monoclonal antibodies. The first approved therapy derived from a transgenic platform--the epidermal growth factor receptor antagonist panitumumab to treat advanced colorectal cancer--was developed using XenoMouse(®) technology. Since its approval in 2006, the science of discovering and developing therapeutic monoclonal antibodies derived from the XenoMouse(®) platform has advanced considerably. The emerging array of antibody therapeutics developed using transgenic technologies is expected to include antibodies and antibody fragments with novel mechanisms of action and extreme potencies. In addition to these impressive functional properties, these antibodies will be designed to have superior biophysical properties that enable highly efficient large-scale manufacturing methods. Achieving these new heights in antibody drug discovery will ultimately bring better medicines to patients. Here, we review best practices for the discovery and bio-optimization of monoclonal antibodies that fit functional design goals and meet high manufacturing standards. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Liu, Yu; Holmstrom, Erik; Yu, Ping; Tan, Kemin; Zuo, Xiaobing; Nesbitt, David J; Sousa, Rui; Stagno, Jason R; Wang, Yun-Xing
2018-05-01
Site-specific incorporation of labeled nucleotides is an extremely useful synthetic tool for many structural studies (e.g., NMR, electron paramagnetic resonance (EPR), fluorescence resonance energy transfer (FRET), and X-ray crystallography) of RNA. However, specific-position-labeled RNAs >60 nt are not commercially available on a milligram scale. Position-selective labeling of RNA (PLOR) has been applied to prepare large RNAs labeled at desired positions, and all the required reagents are commercially available. Here, we present a step-by-step protocol for the solid-liquid hybrid phase method PLOR to synthesize 71-nt RNA samples with three different modification applications, containing (i) a 13C/15N-labeled segment; (ii) discrete residues modified with Cy3, Cy5, or biotin; or (iii) two iodo-U residues. The flexible procedure enables a wide range of downstream biophysical analyses using precisely localized functionalized nucleotides. All three RNAs were obtained in <2 d, excluding time for preparing reagents and optimizing experimental conditions. With optimization, the protocol can be applied to other RNAs with various labeling schemes, such as ligation of segmentally labeled fragments.
Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing
2012-01-01
In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations had to originate from the same satellite sensor as the images used in model development, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
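The general idea behind percentile-based index scaling can be sketched as follows; the percentile cut-offs and the stand-in index image are assumptions for illustration, and the authors' exact scaling may differ:

```python
# Stretch a spectral-index image between its 0.1% and 99.9% pixel percentiles so
# that classification-tree thresholds become comparable across sensors and dates.
import numpy as np

def index_scale(si_image, lower_pct=0.1, upper_pct=99.9):
    lo, hi = np.nanpercentile(si_image, [lower_pct, upper_pct])
    scaled = (si_image - lo) / (hi - lo)
    return np.clip(scaled, 0.0, 1.0)

ndvi = np.random.normal(0.3, 0.2, (100, 100))   # stand-in spectral-index image
print(index_scale(ndvi).min(), index_scale(ndvi).max())
```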
Podlewska, Sabina; Czarnecki, Wojciech M; Kafel, Rafał; Bojarski, Andrzej J
2017-02-27
The growing computational abilities of various tools that are applied in the broadly understood field of computer-aided drug design have led to the extreme popularity of virtual screening in the search for new biologically active compounds. Most often, the source of such molecules consists of commercially available compound databases, but they can also be searched for within the libraries of structures generated in silico from existing ligands. Various computational combinatorial approaches are based solely on the chemical structure of compounds, using different types of substitutions for the formation of new molecules. In this study, the starting point for combinatorial library generation was a fingerprint describing the optimal substructural composition with respect to activity toward a considered target, obtained using a machine learning-based optimization procedure. The systematic enumeration of all possible connections between preferred substructures resulted in the formation of target-focused libraries of new potential ligands. The compounds were initially assessed by machine learning methods using a hashed fingerprint to represent molecules; the distribution of their physicochemical properties was also investigated, as well as their synthetic accessibility. The examination of various fingerprints and machine learning algorithms indicated that the Klekota-Roth fingerprint and support vector machine were an optimal combination for such experiments. This study was performed for 8 protein targets, and the obtained compound sets and their characterization are publicly available at http://skandal.if-pan.krakow.pl/comb_lib/.
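An illustrative sketch of the final scoring step, ranking an enumerated library with an SVM trained on fingerprint vectors, is shown below; the random binary vectors stand in for real Klekota-Roth or hashed fingerprints (which would come from a cheminformatics toolkit), and this is not the authors' pipeline:

```python
# Score candidate molecules with an SVM trained on binary fingerprint vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 1024))   # toy binary fingerprints
y_train = rng.integers(0, 2, size=200)           # 1 = active, 0 = inactive
X_lib = rng.integers(0, 2, size=(50, 1024))      # enumerated combinatorial library

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
scores = clf.predict_proba(X_lib)[:, 1]          # predicted probability of activity
top_hits = np.argsort(scores)[::-1][:10]         # rank the library for follow-up
print(top_hits)
```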
FastChem: An ultra-fast equilibrium chemistry
NASA Astrophysics Data System (ADS)
Kitzmann, Daniel; Stock, Joachim
2018-04-01
FastChem is an equilibrium chemistry code that calculates the chemical composition of the gas phase for given temperatures and pressures. Written in C++, it is based on a semi-analytic approach, and is optimized for extremely fast and accurate calculations.
Optimal bridge retrofit strategy to enhance disaster resilience of highway transportation systems.
DOT National Transportation Integrated Search
2014-07-01
This study evaluated the resilience of highway bridges under the multihazard scenario of earthquake in the presence of flood-induced scour. To mitigate losses incurred from bridge damage during extreme events, bridge retrofit strategies are selec...
Wind data for wind driven plant. [site selection for optimal performance
NASA Technical Reports Server (NTRS)
Stodhart, A. H.
1973-01-01
Simple, averaged wind velocity data provide information on energy availability, facilitate generator site selection and enable appropriate operating ranges to be established for windpowered plants. They also provide a basis for the prediction of extreme wind speeds.
NASA Astrophysics Data System (ADS)
Montereale Gavazzi, G.; Madricardo, F.; Janowski, L.; Kruss, A.; Blondel, P.; Sigovini, M.; Foglini, F.
2016-03-01
Recent technological developments of multibeam echosounder systems (MBES) allow mapping of benthic habitats with unprecedented detail. MBES can now be employed in extremely shallow waters, challenging data acquisition (as these instruments were often designed for deeper waters) and data interpretation (honed on datasets with resolution sometimes orders of magnitude lower). With extremely high-resolution bathymetry and co-located backscatter data, it is now possible to map the spatial distribution of fine scale benthic habitats, even identifying the acoustic signatures of single sponges. In this context, it is necessary to understand which of the commonly used segmentation methods is best suited to account for such level of detail. At the same time, new sampling protocols for precisely geo-referenced ground truth data need to be developed to validate the benthic environmental classification. This study focuses on a dataset collected in a shallow (2-10 m deep) tidal channel of the Lagoon of Venice, Italy. Using 0.05-m and 0.2-m raster grids, we compared a range of classifications, both pixel-based and object-based approaches, including manual, Maximum Likelihood Classifier, Jenks Optimization clustering, textural analysis and Object Based Image Analysis. Through a comprehensive and accurately geo-referenced ground truth dataset, we were able to identify five different classes of the substrate composition, including sponges, mixed submerged aquatic vegetation, mixed detritic bottom (fine and coarse) and unconsolidated bare sediment. We computed estimates of accuracy (namely Overall, User, Producer Accuracies and the Kappa statistic) by cross tabulating predicted and reference instances. Overall, pixel based segmentations produced the highest accuracies and the accuracy assessment is strongly dependent on the number of classes chosen for the thematic output. Tidal channels in the Venice Lagoon are extremely important in terms of habitats and sediment distribution, particularly within the context of the new tidal barrier being built. However, they had remained largely unexplored until now, because of the surveying challenges. The application of this remote sensing approach, combined with targeted sampling, opens a new perspective in the monitoring of benthic habitats in view of a knowledge-based management of natural resources in shallow coastal areas.
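A minimal sketch of the accuracy assessment described above, deriving overall, user's and producer's accuracies and Cohen's kappa from a cross-tabulation of predicted and reference classes, is given below with a toy confusion matrix (the row/column convention is an assumption):

```python
import numpy as np

def accuracy_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    overall = np.trace(cm) / n
    users = np.diag(cm) / cm.sum(axis=1)       # rows assumed = predicted (map) classes
    producers = np.diag(cm) / cm.sum(axis=0)   # columns assumed = reference classes
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2   # chance agreement
    kappa = (overall - pe) / (1 - pe)
    return overall, users, producers, kappa

cm = [[50, 3, 2],   # toy 3-class confusion matrix
      [4, 45, 6],
      [1, 5, 40]]
print(accuracy_metrics(cm))
```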
Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazards-ratio, the log-rank test or the Nelson--Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings) is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730
NASA Astrophysics Data System (ADS)
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2014-11-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, from both a hydrological and a meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error, and the radar estimated only 30% of the precipitation observed by rain gauges. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in the Netherlands. In general, weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z - R) and radar reflectivity-specific attenuation (Z - k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained when no differentiation between precipitation types (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations when one differentiates between precipitation types. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable whether single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious when using locally obtained disdrometer measurements.
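For context, the reflectivity-to-rain-rate step referred to above amounts to inverting a power law Z = a R^b; the sketch below uses the generic Marshall-Palmer coefficients (a = 200, b = 1.6) purely as a default, whereas the study derives (a, b) from the normalized DSD:

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    z_linear = 10.0 ** (dbz / 10.0)        # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)     # rain rate R in mm h^-1

print(rain_rate_from_dbz(np.array([20.0, 35.0, 50.0])))
```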
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
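As a loose illustration of the genetic-algorithm idea described above (not the NASA tool itself), the toy sketch below evolves a vector of hypothetical wall-layer thicknesses against a made-up weighted penalty combining mass, heat loss and radiation shielding:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                      # x = hypothetical layer thicknesses in cm
    mass = x.sum()                   # heavier walls are penalized
    heat_loss = 1.0 / (1.0 + x[0])   # thicker insulation -> less heat loss
    radiation = 1.0 / (1.0 + x[1])   # thicker shielding -> lower dose
    return -(0.1 * mass + heat_loss + radiation)   # higher is better

pop = rng.uniform(0.5, 10.0, size=(40, 3))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]                 # keep the fittest half
    kids = (parents[rng.integers(0, 20, 20)] +
            parents[rng.integers(0, 20, 20)]) / 2.0          # blend crossover
    kids += rng.normal(0, 0.2, kids.shape)                   # mutation
    pop = np.vstack([parents, np.clip(kids, 0.5, 10.0)])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best)
```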
Lee, Seung-Heon; Lu, Jian; Lee, Seung-Jun; Han, Jae-Hyun; Jeong, Chan-Uk; Lee, Seung-Chul; Li, Xian; Jazbinšek, Mojca; Yoon, Woojin; Yun, Hoseop; Kang, Bong Joo; Rotermund, Fabian; Nelson, Keith A; Kwon, O-Pil
2017-08-01
Highly efficient nonlinear optical organic crystals are very attractive for various photonic applications including terahertz (THz) wave generation. Up to now, only two classes of ionic crystals based on either pyridinium or quinolinium with extremely large macroscopic optical nonlinearity have been developed. This study reports on a new class of organic nonlinear optical crystals introducing electron-accepting benzothiazolium, which exhibit higher electron-withdrawing strength than pyridinium and quinolinium in benchmark crystals. The benzothiazolium crystals consisting of new acentric core HMB (2-(4-hydroxy-3-methoxystyryl)-3-methylbenzo[d]thiazol-3-ium) exhibit extremely large macroscopic optical nonlinearity with optimal molecular ordering for maximizing the diagonal second-order nonlinearity. HMB-based single crystals prepared by simple cleaving method satisfy all required crystal characteristics for intense THz wave generation such as large crystal size with parallel surfaces, moderate thickness and high optical quality with large optical transparency range (580-1620 nm). Optical rectification of 35 fs pulses at the technologically very important wavelength of 800 nm in 0.26 mm thick HMB crystal leads to one order of magnitude higher THz wave generation efficiency with remarkably broader bandwidth compared to standard inorganic 0.5 mm thick ZnTe crystal. Therefore, newly developed HMB crystals introducing benzothiazolium with extremely large macroscopic optical nonlinearity are very promising materials for intense broadband THz wave generation and other nonlinear optical applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Design of materials with prescribed nonlinear properties
NASA Astrophysics Data System (ADS)
Wang, F.; Sigmund, O.; Jensen, J. S.
2014-09-01
We systematically design materials using topology optimization to achieve prescribed nonlinear properties under finite deformation. Instead of a formal homogenization procedure, a numerical experiment is proposed to evaluate the material performance in longitudinal and transverse tensile tests under finite deformation, i.e. stress-strain relations and Poisson's ratio. By minimizing errors between actual and prescribed properties, materials are tailored to achieve the target. Both two-dimensional (2D) truss-based and continuum materials are designed with various prescribed nonlinear properties. The numerical examples illustrate optimized materials with rubber-like behavior, as well as optimized materials with an extreme, strain-independent Poisson's ratio for axial strain intervals of ε_i ∈ [0.00, 0.30].
Optimization of optical systems.
Champagne, E B
1966-11-01
The power signal-to-noise ratios for coherent and noncoherent optical detection are presented, with the expression for noncoherent detection being examined in detail. It is found that for a long-range optical system to compete with its microwave counterpart, it is necessary to optimize the optical system. The optical system may be optimized by using coherent detection, or noncoherent detection if the signal is the dominant noise factor. A design procedure is presented which, in principle, always allows one to obtain signal shot-noise-limited operation with noncoherent detection if pulsed operation is used. The technique should make extremely long-range, high-data-rate systems of relatively simple design practical.
Xi, Jinxiang; Zhang, Ze; Si, Xiuhua A
2015-01-01
Background: Although direct nose-to-brain drug delivery has multiple advantages, its application is limited by the extremely low delivery efficiency (<1%) to the olfactory region where drugs can enter the brain. It is crucial to develop new methods that can deliver drug particles more effectively to the olfactory region. Materials and methods: We introduced a delivery method that used magnetophoresis to improve olfactory delivery efficiency. The performance of the proposed method was assessed numerically in an image-based human nose model. Influences of the magnet layout, magnet strength, drug-release position, and particle diameter on the olfactory dosage were examined. Results and discussion: Results showed that particle diameter was a critical factor in controlling the motion of nasally inhaled ferromagnetic drug particles. The optimal particle size was found to be approximately 15 μm for effective magnetophoretic guidance while avoiding loss of particles to the walls in the anterior nose. Olfactory delivery efficiency was shown to be sensitive to the position and strength of magnets and the release position of drug particles. The results of this study showed that clinically significant olfactory doses (up to 45%) were feasible using the optimal combination of magnet layout, selective drug release, and microsphere-carrier diameter. A 64-fold higher delivered dosage was predicted in the magnetized nose compared to the control case, which did not have a magnetic field. However, the sensitivity of olfactory dosage to operating conditions and the unstable nature of magnetophoresis make controlled guidance of nasally inhaled aerosols still highly challenging. PMID:25709443
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Caroline; Lischeske, James J.; Sievers, David A.
2015-11-03
One viable treatment method for conversion of lignocellulosic biomass to biofuels begins with saccharification (thermochemical pretreatment and enzymatic hydrolysis), followed by fermentation or catalytic upgrading to fuels such as ethanol, butanol, or other hydrocarbons. The post-hydrolysis slurry is typically 4-8 percent insoluble solids, predominantly consisting of lignin. Suspended solids are known to inhibit fermentation as well as poison catalysts and obstruct flow in catalyst beds. Thus a solid-liquid separation following enzymatic hydrolysis would be highly favorable for process economics; however, the material is not easily separated by filtration or gravimetric methods. Use of a polyacrylamide flocculant to bind the suspended particles in a corn stover hydrolyzate slurry into larger flocs (1-2 mm diameter) has been found to be extremely helpful in improving separation. Recent and ongoing research on novel pretreatment methods yields hydrolyzate material with diverse characteristics. Therefore, we need a thorough understanding of rapid and successful flocculation design in order to quickly achieve process design goals. In this study, potential indicators of flocculation performance were investigated in order to develop a rapid analysis method for the flocculation procedure in the context of a novel hydrolyzate material. Flocculation conditions were optimized on flocculant type and loading, pH, and mixing time. Filtration flux of the hydrolyzate slurry was improved 170-fold using a cationic polyacrylamide flocculant with a dosing of approximately 22 mg flocculant/g insoluble solids at an approximate pH of 3. With cake washing, sugar recovery exceeded 90 percent with asymptotic yield at 15 L wash water/kg insoluble solids.
SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.
Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf
2015-08-01
RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of [Formula: see text]. Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics, have been limited to high complexity ([Formula: see text] quartic time). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurate than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.
Localized Cell and Drug Delivery for Auditory Prostheses
Hendricks, Jeffrey L.; Chikar, Jennifer A.; Crumling, Mark A.; Raphael, Yehoash; Martin, David C.
2011-01-01
Localized cell and drug delivery to the cochlea and central auditory pathway can improve the safety and performance of implanted auditory prostheses (APs). While generally successful, these devices have a number of limitations and adverse effects including limited tonal and dynamic ranges, channel interactions, unwanted stimulation of non-auditory nerves, immune rejection, and infections including meningitis. Many of these limitations are associated with the tissue reactions to implanted auditory prosthetic devices and the gradual degeneration of the auditory system following deafness. Strategies to reduce the insertion trauma, degeneration of target neurons, fibrous and bony tissue encapsulation, and immune activation can improve the viability of tissue required for AP function as well as improve the resolution of stimulation for reduced channel interaction and improved place-pitch and level discrimination. Many pharmaceutical compounds have been identified that promote the viability of auditory tissue and prevent inflammation and infection. Cell delivery and gene therapy have provided promising results for treating hearing loss and reversing degeneration. Currently, many clinical and experimental methods can produce extremely localized and sustained drug delivery to address AP limitations. These methods provide better control over drug concentrations while eliminating the adverse effects of systemic delivery. Many of these drug delivery techniques can be integrated into modern auditory prosthetic devices to optimize the tissue response to the implanted device and reduce the risk of infection or rejection. Together, these methods and pharmaceutical agents can be used to optimize the tissue-device interface for improved AP safety and effectiveness. PMID:18573323
[Prevention of venous thromboembolic disease in general surgery].
Arcelus, Juan Ignacio; Lozano, Francisco S; Ramos, José L; Alós, Rafael; Espín, Eloy; Rico, Pedro; Ros, Eduardo
2009-06-01
Postoperative venous thromboembolic disease (VTED) affects approximately one in four general surgery patients who do not receive preventive measures. In addition to the risk of pulmonary embolism, which is often fatal, patients with VTED may develop long-term complications such as post-thrombotic syndrome or chronic pulmonary hypertension. In addition, postoperative VTED is usually asymptomatic or produces clinical manifestations that are attributed to other processes and consequently this complication is often unnoticed by the surgeon who performed the procedure. Thus, the most effective strategy consists of effective prevention of VTED using the most appropriate prophylactic measures against the patient's thromboembolic risk. There is sufficient evidence that VTED can be prevented by pharmacological methods, especially heparin and its derivatives and with mechanical methods such as support tights or intermittent pneumatic compression of the lower extremities. To reduce the incidence of VTED as far as possible, strategies have been proposed that include a combination of drugs and mechanical methods, new antithrombotic drugs, or prolonging the duration of prophylaxis in patients at very high risk, such as those who have undergone surgery for cancer. Another important aspect is the optimal moment to initiate prophylaxis with anticoagulant drugs with the aim of achieving an adequate equilibrium between antithrombotic efficacy and the risk of hemorrhagic complications. The present article reviews the available evidence to attempt to optimize prevention of VTED in general surgery and in some special groups, such as laparoscopic surgery, short-stay surgery and obesity.
Ding, Zhen; Xia, Weiwen; Zheng, Hao; Xia, Yuting; Chen, Xiaodong
2013-01-01
Geosmin and 2-MIB are responsible for the majority of earthy and musty events related to drinking water. These two odorants have extremely low odor threshold concentrations, at the ng L−1 level in water, so a simple and sensitive method for the analysis of such trace levels was developed by headspace solid-phase microextraction coupled to gas chromatography/mass spectrometry. In this study, the orthogonal experiment design L32 (4^9) was applied to arrange and optimize experimental conditions. The optimum conditions were the following: temperatures of extraction and desorption, 65°C and 260°C, respectively; times of extraction and desorption, 30 min and 5 min, respectively; ionic strength, 25% (w/v); rotation speed, 600 rpm; solution pH, 5.0. Under the optimized conditions, limits of detection (S/N = 3) were 0.04 and 0.13 ng L−1 for geosmin and 2-MIB, respectively. Calculated calibration curves showed high linearity, with a correlation coefficient of 0.9999 for both. Finally, the proposed method was applied to water samples, which were previously analyzed and confirmed to be free of target analytes. Besides, the proposed method was applied to test environmental water samples. The RSDs were 2.75%~3.80% and 4.35%~7.6% for geosmin and 2-MIB, respectively, and the recoveries were 91%~107% and 91%~104% for geosmin and 2-MIB, respectively. PMID:24000317
NASA Astrophysics Data System (ADS)
Yang, Xiaoqing; Li, Chengfei; Fu, Ruowen
2016-07-01
As one of the most promising electrode materials for supercapacitors, nitrogen-enriched nanocarbons still face the challenge of constructing well-developed mesoporosity for rapid mass transport, tailoring their pore size for performance optimization, and expanding their application scope. Herein we develop a series of nitrogen-enriched mesoporous carbons (NMC) with extremely high mesoporosity and tunable mesopore size by a two-step method using silica gel as a template. In our approach, the mesopore size can be easily tailored from 4.7 to 35 nm by increasing the HF/TEOS volume ratio from 1/100 to 1/4. The NMC with 6.2 nm mesopores presents the largest mesopore volume, surface area and mesopore ratio of 2.56 cm3 g-1, 1003 m2 g-1 and 97.7%, respectively. As a result, the highest specific capacitance of 325 F g-1 is obtained at a current density of 0.1 A g-1, and it remains over 88% (286 F g-1) as the current density increases 100-fold (10 A g-1). This approach may open the door to the preparation of nitrogen-enriched nanocarbons with the desired nanostructure for numerous applications.
Enabling Resiliency Operations across Multiple Microgrids with Grid Friendly Appliance Controllers
Schneider, Kevin P.; Tuffner, Frank K.; Elizondo, Marcelo A.; ...
2017-02-16
Changes in economic, technological, and environmental policies are resulting in a re-evaluation of the dependence on large central generation facilities and their associated transmission networks. Emerging concepts of smart communities/cities are examining the potential to leverage cleaner sources of generation, as well as integrating electricity generation with other municipal functions. When grid connected, these generation assets can supplement the existing interconnections with the bulk transmission system, and in the event of an extreme event, they can provide power via a collection of microgrids. To achieve the highest level of resiliency, it may be necessary to conduct switching operations to interconnect individual microgrids. While the interconnection of multiple microgrids can increase the resiliency of the system, the associated switching operations can cause large transients in low inertia microgrids. The combination of low system inertia and IEEE 1547 and 1547a-compliant inverters can prevent multiple microgrids from being interconnected during extreme weather events. This study will present a method of using end-use loads equipped with Grid Friendly™ Appliance controllers to facilitate the switching operations between multiple microgrids; operations that are necessary for optimal operations when islanded for resiliency.
NASA Astrophysics Data System (ADS)
O'Connor, Thomas; Robbins, Mark
Glassy polymers are a ubiquitous part of modern life, but much about their mechanical properties remains poorly understood. Since chains in glassy states are hindered from exploring their conformational entropy, they cannot be described with common entropic network models. Additionally, glassy states are highly sensitive to material history, and nonequilibrium distributions of chain alignment and entanglement can be produced during material processing. Understanding how these far-from-equilibrium states impact mechanical properties is analytically challenging but essential to optimizing processing methods. We use molecular dynamics simulations to study the yield and strain hardening of glassy polymers as separate functions of the degree of molecular alignment and inter-chain entanglement. We vary chain alignment and entanglement with three different preparation protocols that mimic common processing conditions in and out of solution. We compare our results to common mechanical models of amorphous polymers and assess their applicability to different experimental processing conditions. This research was performed within the Center for Materials in Extreme Dynamic Environments (CMEDE) under the Hopkins Extreme Materials Institute at Johns Hopkins University. Financial support was provided by Grant W911NF-12-2-0022.
Toda, Haruki; Nagano, Akinori; Luo, Zhiwei
2016-01-01
[Purpose] This study examined age-related differences in muscle control for support and propulsion during walking in both males and females in order to develop optimal exercise regimens for muscle control. [Subjects and Methods] Twenty elderly people and 20 young people participated in this study. Coordinates of anatomical landmarks and ground reaction force during walking were obtained using a 3D motion analysis system and force plates. Muscle forces during walking were estimated using OpenSim. Muscle modules were obtained by using non-negative matrix factorization analysis. A two-way analysis of covariance was performed to examine the difference between the elderly and the young in muscle weightings using walking speed as a covariate. The similarities in activation timing profiles between the elderly and the young were analyzed by cross-correlation analysis in males and females. [Results] In the elderly, there was a change in the coordination of muscles around the ankle, and muscles of the lower extremity exhibited co-contraction in late stance. Timing and shape of these modules were similar between elderly and young people. [Conclusion] Our results suggested that age-related alteration of muscle control was associated with support and propulsion during walking. PMID:27134360
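A hedged sketch of the module-extraction step described above is given below: a (muscles x time) activation matrix is factored by non-negative matrix factorization into muscle weightings and activation timing profiles. The toy random data stand in for the OpenSim-estimated muscle forces, and the number of modules is an assumption:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
activations = np.abs(rng.normal(size=(8, 200)))   # 8 muscles x 200 time samples

model = NMF(n_components=4, init="nndsvda", max_iter=500)
W = model.fit_transform(activations)   # muscle weightings per module (8 x 4)
H = model.components_                  # module activation timing profiles (4 x 200)
print(W.shape, H.shape)
```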
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, S.
Testing the behavior of metals in extreme environments is not always feasible, so materials scientists use models to predict that behavior. To achieve accurate results, it is necessary to use the appropriate model and material-specific parameters. This research evaluated the performance of six material models available in the MIDAS database [1] to determine at which temperatures and strain rates they perform best, and which experimental data their parameters were optimized against. Additionally, parameters were optimized for the Johnson-Cook model using experimental data from Lassila et al. [2].
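For reference, the Johnson-Cook flow-stress model mentioned above is commonly written in the following standard form (the report's exact parameterization and reference conditions may differ); A, B, n, C and m are the material-specific parameters fitted to data:

```latex
\sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
         \left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
         \left(1 - {T^{*}}^{\,m}\right),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}
```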
Application of short-data methods on extreme surge levels
NASA Astrophysics Data System (ADS)
Feng, X.
2014-12-01
Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets shorter than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. The verified water levels from water gauges spread along the Southwest and Southeast Florida Coast, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict the extreme surge levels with warming sea surface temperatures. The non-stationary GEV distribution indicates that with 1 degree Celsius of warming in sea surface temperature from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The considered statistical approaches for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and the hurricane and storm surge forecasting and warning system.
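The 'classical' block-maxima step mentioned above can be sketched as follows: fit a GEV distribution to annual maximum surge levels and read off a return level. The synthetic maxima and 18-year record length are assumptions standing in for the Florida gauge data:

```python
from scipy.stats import genextreme

# synthetic annual-maximum surge levels (meters), short record of 18 years
annual_maxima = genextreme.rvs(c=-0.1, loc=1.0, scale=0.3, size=18, random_state=0)

shape, loc, scale = genextreme.fit(annual_maxima)
return_period = 100.0
surge_100yr = genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
print(surge_100yr)   # estimated 100-year return surge level
```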
Binary optimization for source localization in the inverse problem of ECG.
Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf
2014-09-01
The goal of ECG-imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to the heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. This affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application needs. Two methods, a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution that is not susceptible to the errors, while the analytical DC scheme can be efficiently applied to higher dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of binary values, ensuring robust performance.
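For context, the generic Tikhonov formulation that the abstract contrasts with binary optimization can be written as follows; the symbols are generic (A: transfer matrix relating transmembrane voltages x to body-surface potentials b, L: regularization operator, λ: regularization parameter) and are not the authors' notation:

```latex
x_\lambda \;=\; \arg\min_x \; \|A x - b\|_2^2 + \lambda^2 \|L x\|_2^2
          \;=\; \left(A^{\mathsf T} A + \lambda^2 L^{\mathsf T} L\right)^{-1} A^{\mathsf T} b
```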
Taguchi Approach to Design Optimization for Quality and Cost: An Overview
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.
1990-01-01
Calibrations to the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars with the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. In order for SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high quality space systems at low cost. In recognition of the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. Taguchi methods have been used successfully in Japan and the United States in designing reliable, high quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, describe the current state of applications, and discuss their role in identifying cost-sensitive design parameters.
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
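A hedged sketch of the kernel-density idea is given below: estimate the density of daily wind-adjusted temperatures with a Gaussian KDE and take the temperature whose lower-tail probability is 1/(N*365) as a one-in-N-years threshold. The synthetic data and the exact tail-probability convention are assumptions and may differ from the thesis:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
temps = rng.normal(-5.0, 8.0, 30 * 365)        # synthetic daily wind-adjusted temperatures

kde = gaussian_kde(temps)
grid = np.linspace(temps.min() - 20, temps.max(), 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]                                  # numerical CDF on the grid

N = 10
threshold = grid[np.searchsorted(cdf, 1.0 / (N * 365))]
print(threshold)                                # estimated one-in-10 low temperature
```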
NASA Astrophysics Data System (ADS)
Yin, Yixing; Chen, Haishan; Xu, Chong-Yu; Xu, Wucheng; Chen, Changchun; Sun, Shanlei
2016-05-01
The regionalization methods, which "trade space for time" by pooling information from different locations in the frequency analysis, are efficient tools to enhance the reliability of extreme quantile estimates. This paper aims at improving the understanding of the regional frequency of extreme precipitation by using regionalization methods, and providing scientific background and practical assistance in formulating the regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region. To achieve the main goals, the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, is used in the regional frequency analysis of extreme precipitation, with special attention paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered by most of the studies using the LMIF method. Extensive data screening of stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on a goodness-of-fit statistic and L-moment ratio diagrams, generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best fitted distributions for most of the sub-regions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root-mean-square errors (RMSEs) were bigger and the 90 % error bounds were wider with inter-site dependence than those without inter-site dependence for both the regional growth curve and quantile curve. The spatial patterns of extreme precipitation with a return period of 100 years were finally obtained, which indicated that there are two regions with the highest precipitation extremes and a large region with low precipitation extremes. However, the regions with low precipitation extremes are the most developed and densely populated regions of the country, and floods will cause great loss of human life and property damage due to the high vulnerability. The study methods and procedure demonstrated in this paper will provide a useful reference for frequency analysis of precipitation extremes in large regions, and the findings of the paper will be beneficial in flood control and management in the study area.
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.
2016-10-01
Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, in a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on the modeling of nonlinear physics processes have been proposed and applied with success. Non-linear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, that can outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from specific modeling of real phenomena, and also their novelty in terms of comparison with alternative existing algorithms for optimization. We first review important concepts of optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for facing hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to review in detail the most important meta-heuristics based on them. A discussion on the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation will be carried out to complete the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can be used to make the implementation of these algorithms easier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Bo, E-mail: luboufl@gmail.com; Park, Justin C.; Fan, Qiyong
Purpose: Accurate lung tumor localization is essential for high-precision radiation therapy techniques such as stereotactic body radiation therapy (SBRT). Since direct monitoring of tumor motion is not always achievable due to the limitation of imaging modalities for treatment guidance, placement of fiducial markers on the patient’s body surface to act as a surrogate for tumor position prediction is a practical alternative for tracking lung tumor motion during SBRT treatments. In this work, the authors propose an innovative and robust model to solve the multimarker position optimization problem. The model is able to overcome the major drawbacks of the sparse optimization approach (SOA) model. Methods: The principal-component-analysis (PCA) method was employed as the framework to build the authors’ statistical prediction model. The method can be divided into two stages. The first stage is to build the surrogate tumor matrix and calculate its eigenvalues and associated eigenvectors. The second stage is to determine the “best represented” columns of the eigenvector matrix obtained from stage one and subsequently acquire the optimal marker positions as well as numbers. Using 4-dimensional CT (4DCT) and breath-hold CT imaging data, the PCA method was compared to the SOA method with respect to calculation time, average prediction accuracy, prediction stability, noise resistance, marker position consistency, and marker distribution. Results: The PCA and SOA methods were both tested on all 11 patients for a total of 130 cases including 4DCT and breath-hold CT scenarios. The maximum calculation time for the PCA method was less than 1 s with 64 752 surface points, whereas the average calculation time for the SOA method was over 12 min with 400 surface points. Overall, the tumor center position prediction errors were comparable between the two methods, and all were less than 1.5 mm. However, for the extreme scenarios (breath hold), the prediction errors for the PCA method were not only smaller, but were also more stable than for the SOA method. Results obtained by imposing a series of random noises on the surrogates indicated that the PCA method was much more noise resistant than the SOA method. The marker position consistency tests using various combinations of 4DCT phases to construct the surrogates suggested that the marker position predictions of the PCA method were more consistent than those of the SOA method, regardless of surrogate construction. Marker distribution tests indicated that greater than 80% of the calculated marker positions fell into the high cross correlation and high motion magnitude regions for both of the algorithms. Conclusions: The PCA model is an accurate, efficient, robust, and practical model for solving the multimarker position optimization problem to predict lung tumor motion during SBRT treatments. Due to its generality, the PCA model can also be applied to other image guidance systems that use surface motion as the surrogate.
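A hedged sketch of the two-stage idea described above (not the authors' code) is given below: stage one builds a surrogate matrix of candidate surface-point motions and eigen-decomposes it; stage two picks, for each leading eigenvector, the surface point with the largest absolute loading as a marker site. The random motion data, the number of markers and the selection rule are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_phases = 400, 10
surrogates = rng.normal(size=(n_points, n_phases))    # surface-point motion over 4DCT phases

centered = surrogates - surrogates.mean(axis=1, keepdims=True)
cov = centered @ centered.T / (n_phases - 1)
eigvals, eigvecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order

n_markers = 3
leading = eigvecs[:, -n_markers:]                      # dominant principal directions
marker_ids = [int(np.argmax(np.abs(leading[:, k]))) for k in range(n_markers)]
print(marker_ids)                                      # indices of chosen surface points
```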
Multiwalled carbon nanotubes for stray light suppression in space flight instruments
NASA Astrophysics Data System (ADS)
Hagopian, John G.; Getty, Stephanie A.; Quijada, Manuel; Tveekrem, June; Shiri, Ron; Roman, Patrick; Butler, James; Georgiev, Georgi; Livas, Jeff; Hunt, Cleophus; Maldonado, Alejandro; Talapatra, Saikat; Zhang, Xianfeng; Papadakis, Stergios J.; Monica, Andrew H.; Deglau, David
2010-08-01
Observations of the Earth are extremely challenging; its large angular extent floods scientific instruments with high flux within and adjacent to the desired field of view. This bright light diffracts from instrument structures, rattles around and invariably contaminates measurements. Astrophysical observations also are impacted by stray light that obscures very dim objects and degrades signal to noise in spectroscopic measurements. Stray light is controlled by utilizing low reflectance structural surface treatments and by using baffles and stops to limit this background noise. In 2007 GSFC researchers discovered that Multiwalled Carbon Nanotubes (MWCNTs) are exceptionally good absorbers, with potential to provide order-of-magnitude improvement over current surface treatments and a resulting factor of 10,000 reduction in stray light when applied to an entire optical train. Development of this technology will provide numerous benefits including: a.) simplification of instrument stray light controls to achieve equivalent performance, b.) increasing observational efficiencies by recovering currently unusable scenes in high contrast regions, and c.) enabling low-noise observations that are beyond current capabilities. Our objective was to develop and apply MWCNTs to instrument components to realize these benefits. We have addressed the technical challenges to advance the technology by tuning the MWCNT geometry using a variety of methods to provide a factor of 10 improvement over current surface treatments used in space flight hardware. Techniques are being developed to apply the optimized geometry to typical instrument components such as spiders, baffles and tubes. Application of the nanostructures to alternate materials (or by contact transfer) is also being investigated. In addition, candidate geometries have been tested and optimized for robustness to survive integration, testing, launch and operations associated with space flight hardware. The benefits of this technology extend to space science where observations of extremely dim objects require suppression of stray light.
Increased coronary heart disease and stroke hospitalisations from ambient temperatures in Ontario
Bai, Li; Li, Qiongsi; Wang, Jun; Lavigne, Eric; Gasparrini, Antonio; Copes, Ray; Yagouti, Abderrahmane; Burnett, Richard T; Goldberg, Mark S; Cakmak, Sabit; Chen, Hong
2018-01-01
Objective To assess the associations between ambient temperatures and hospitalisations for coronary heart disease (CHD) and stroke. Methods Our study comprised all residents living in Ontario, Canada, 1996–2013. For each of 14 health regions, we fitted a distributed lag non-linear model to estimate the cold and heat effects on hospitalisations from CHD, acute myocardial infarction (AMI), stroke and ischaemic stroke, respectively. These effects were pooled using a multivariate meta-analysis. We computed attributable hospitalisations for cold and heat, defined as temperatures below and above the optimum temperature (corresponding to the temperature of minimum morbidity), and for moderate and extreme temperatures, defined using cut-offs at the 2.5th and 97.5th temperature percentiles. Results Between 1996 and 2013, we identified 1.4 million hospitalisations from CHD and 355 837 from stroke across Ontario. On cold days with temperature corresponding to the 1st percentile of the temperature distribution, we found a 9% increase in daily hospitalisations for CHD (95% CI 1% to 16%), a 29% increase for AMI (95% CI 15% to 45%) and an 11% increase for stroke (95% CI 1% to 22%) relative to days with the optimal temperature. High temperatures (the 99th percentile) also increased CHD hospitalisations by 6% (95% CI 1% to 11%) relative to the optimal temperature. These estimates translate into 2.49% of CHD hospitalisations attributable to cold and 1.20% attributable to heat. Additionally, 1.71% of stroke hospitalisations were attributable to cold. Importantly, moderate rather than extreme temperatures accounted for most of the temperature-related cardiovascular burden. Conclusions Ambient temperatures, especially in moderate ranges, may be an important risk factor for cardiovascular-related hospitalisations. PMID:29101264
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the regions of interest. Next, palmprint features are extracted by the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.
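The sketch below illustrates two building blocks of the pipeline described above, HOG feature extraction and a regularized extreme learning machine classifier, in Python; the steerable Gaussian filter and the auto-encoder stage are omitted, and all parameter values and data are invented, so this is only a schematic stand-in for the authors' method (it assumes scikit-image and NumPy are available).

    import numpy as np
    from skimage.feature import hog      # assumes scikit-image is installed

    def hog_features(images):
        """HOG descriptors for a stack of grayscale palm images (illustrative settings)."""
        return np.array([hog(im, orientations=9, pixels_per_cell=(16, 16),
                             cells_per_block=(2, 2)) for im in images])

    class RELM:
        """Regularized extreme learning machine: random hidden layer, ridge output weights."""
        def __init__(self, n_hidden=500, C=1.0, seed=0):
            self.n_hidden, self.C = n_hidden, C
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            T = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                 # random nonlinear feature map
            A = H.T @ H + np.eye(self.n_hidden) / self.C     # ridge-regularized normal equations
            self.beta = np.linalg.solve(A, H.T @ T)
            return self

        def predict(self, X):
            return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        images = rng.random((20, 128, 128))                  # stand-in palmprint images
        labels = np.arange(20) % 4
        X = hog_features(images)
        print(RELM().fit(X, labels).predict(X))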
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millis, Andrew
Understanding the behavior of interacting electrons in molecules and solids so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first principles treatment of superconducting and magnetic properties of strongly correlated materials, new techniques for existing methods including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method, a self-energy embedding theory and a new memory-function based approach to the calculations of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials and to characterize and improve the properties of nanoscale devices.
Zhu, Qingxia; Cao, Yongbing; Cao, Yingying; Chai, Yifeng; Lu, Feng
2014-03-01
A novel facile method has been established for rapid on-site detection of antidiabetes chemicals used to adulterate botanical dietary supplements (BDS) for diabetes. Analytes and components of pharmaceutical matrices were separated by thin-layer chromatography (TLC), and surface-enhanced Raman spectroscopy (SERS) was then used for qualitative identification of trace substances on the HPTLC plate. Optimization and standardization of the experimental conditions, for example, the method used for preparation of silver colloids, the mobile phase, and the concentration of colloidal silver, resulted in a very robust and highly sensitive method which enabled successful detection when the amount of adulteration was as low as 0.001 % (w/w). The method was also highly selective, enabling successful identification of some chemicals in extremely complex herbal matrices. The established TLC-SERS method was used for analysis of real BDS used to treat diabetes, and the results obtained were verified by liquid chromatography-triple quadrupole mass spectrometry (LC-MS-MS). The study showed that TLC-SERS could be used for effective separation and detection of four chemicals used to adulterate BDS, and would have good prospects for on-site qualitative screening of BDS for adulterants.
A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code
NASA Technical Reports Server (NTRS)
Sandlin, Doral R.; Davis, Paul Christopher
1992-01-01
A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times, but very limited and sporadic results. The full potential code provides results which compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses, because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.
Tunable, Flexible, and Efficient Optimization of Control Pulses for Practical Qubits
NASA Astrophysics Data System (ADS)
Machnes, Shai; Assémat, Elie; Tannor, David; Wilhelm, Frank K.
2018-04-01
Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific constraints. Superconducting qubits present the additional requirement that pulses must have simple parameterizations, so they can be further calibrated in the experiment to compensate for uncertainties in system parameters. Other quantum technologies, such as sensing, require extremely high fidelities. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control technique named gradient optimization of analytic controls (GOAT), which satisfies all the above requirements, unlike previous approaches. To demonstrate GOAT's capabilities, with emphasis on flexibility and ease of subsequent calibration, we optimize fast coherence-limited pulses for two leading superconducting qubit architectures—flux-tunable transmons and fixed-frequency transmons with tunable couplers.
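As a rough illustration of optimizing an analytically parameterized pulse, the toy Python sketch below tunes the amplitude and width of a Gaussian drive on a single qubit to approximate an X gate; the Hamiltonian, parameter ranges, and the finite-difference L-BFGS-B optimizer are illustrative stand-ins for GOAT's coupled-propagation gradients and are not the authors' implementation.

    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import minimize

    SX = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    U_TARGET = SX                                  # target gate: X (bit flip)
    T_GATE, N_STEPS = 10.0, 200                    # gate time and time slices (arbitrary units)
    ts = np.linspace(0.0, T_GATE, N_STEPS)
    dt = ts[1] - ts[0]

    def propagate(params):
        """Piecewise-constant propagation under H(t) = 0.5 * Omega(t) * sigma_x."""
        amp, width = params
        omega = amp * np.exp(-((ts - T_GATE / 2) ** 2) / (2 * width ** 2))   # analytic Gaussian pulse
        U = np.eye(2, dtype=complex)
        for w in omega:
            U = expm(-1j * 0.5 * w * SX * dt) @ U
        return U

    def infidelity(params):
        U = propagate(params)
        return 1.0 - abs(np.trace(U_TARGET.conj().T @ U)) ** 2 / 4.0

    res = minimize(infidelity, x0=[0.5, 2.0], method="L-BFGS-B",
                   bounds=[(0.01, 3.0), (0.2, 5.0)])
    print("optimized (amplitude, width):", res.x, " fidelity:", 1.0 - res.fun)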
Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.
Higginson, J S; Neptune, R R; Anderson, F C
2005-09-01
Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
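The following Python sketch conveys the basic structure of annealing with a neighborhood of trial moves evaluated in parallel; the toy objective, the acceptance rule applied to the best trial, and the synchronization scheme are illustrative simplifications and do not reproduce SPAN's actual interprocessor communication design.

    import math
    import random
    from concurrent.futures import ProcessPoolExecutor

    def cost(x):
        """Toy objective (sum of squares); a forward dynamic simulation would go here."""
        return sum(v * v for v in x)

    def trial(args):
        """One worker: perturb the shared solution within a neighborhood and evaluate it."""
        x, step, seed = args
        rng = random.Random(seed)
        candidate = [v + rng.uniform(-step, step) for v in x]
        return cost(candidate), candidate

    def parallel_anneal(dim=10, workers=4, t0=10.0, alpha=0.95, outer=200, step=0.5):
        rng = random.Random(0)
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        c, t = cost(x), t0
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for it in range(outer):
                jobs = [(x, step, it * workers + k) for k in range(workers)]
                best_c, best_x = min(pool.map(trial, jobs))   # evaluate the neighborhood in parallel
                if best_c <= c or rng.random() < math.exp(-(best_c - c) / t):
                    x, c = best_x, best_c                     # Metropolis acceptance of the best trial
                t *= alpha                                    # geometric cooling schedule
        return x, c

    if __name__ == "__main__":
        print(parallel_anneal())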
Prediction of pilot-aircraft stability boundaries and performance contours
NASA Technical Reports Server (NTRS)
Stengel, R. F.; Broussard, J. R.
1977-01-01
Control-theoretic pilot models can provide important new insights regarding the stability and performance characteristics of the pilot-aircraft system. Optimal-control pilot models can be formed for a wide range of flight conditions, suggesting that the human pilot can maintain stability if he adapts his control strategy to the aircraft's changing dynamics. Of particular concern is the effect of sub-optimal pilot adaptation as an aircraft transitions from low to high angle-of-attack during rapid maneuvering, as the changes in aircraft stability and control response can be extreme. This paper examines the effects of optimal and sub-optimal effort during a typical 'high-g' maneuver, and it introduces the concept of minimum-control effort (MCE) adaptation. Limited experimental results tend to support the MCE adaptation concept.
A stable and accurate partitioned algorithm for conjugate heat transfer
NASA Astrophysics Data System (ADS)
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
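A schematic form of the generalized Robin (mixed) interface coupling described above can be written as follows, with n the unit normal pointing from domain 1 into domain 2, k_i the thermal conductivities, and alpha_i the adjustable weights chosen by the stability analysis; the exact expressions used in the paper may differ:

    \begin{aligned}
    k_1\,\partial_n T_1 + \alpha_1\,T_1 &= k_2\,\partial_n T_2 + \alpha_1\,T_2
        && \text{on } \Gamma \quad \text{(update for domain 1)},\\
    k_2\,\partial_n T_2 - \alpha_2\,T_2 &= k_1\,\partial_n T_1 - \alpha_2\,T_1
        && \text{on } \Gamma \quad \text{(update for domain 2)}.
    \end{aligned}

When both relations hold simultaneously (and alpha_1 + alpha_2 is nonzero), they reduce to continuity of temperature and of heat flux across the interface, which are the primitive conjugate heat transfer matching conditions.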
Harris, Wendy; Zhang, You; Yin, Fang-Fang; Ren, Lei
2017-01-01
Purpose To investigate the feasibility of using structural-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. Methods A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion-model extracted by global PCA and free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural-PCA method was developed to build a structural motion-model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from the planning 4DCT was divided into two structures: tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. An XCAT (computerized patient model) phantom with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from planning 4D-CT to the on-board volume to evaluate the method. The estimation accuracy was evaluated by the Volume-Percent-Difference (VPD)/Center-of-Mass-Shift (COMS) between lesions in the estimated and “ground-truth” on-board 4D-CBCT. Different on-board projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. The method was also evaluated against 3 lung patients. Results The SMM-WFD method achieved substantially better accuracy than the GMM-FD method for CBCT estimation using extremely small scan angles or few projections. Using orthogonal 15° scanning angles, the VPD/COMS were 3.47±2.94% and 0.23±0.22mm for SMM-WFD and 25.23±19.01% and 2.58±2.54mm for GMM-FD among all 8 XCAT scenarios. Compared to GMM-FD, SMM-WFD was more robust against reduction of the scanning angles down to orthogonal 10°, with VPD/COMS of 6.21±5.61% and 0.39±0.49mm, and more robust against reduction of the number of projections down to only 8 projections in total for both orthogonal-view 30° and orthogonal-view 15° scan angles. The SMM-WFD method was also more robust than the GMM-FD method against increasing levels of noise in the projection images. Additionally, the SMM-WFD technique provided better tumor estimation for all three lung patients compared to the GMM-FD technique. Conclusion Compared to the GMM-FD technique, the SMM-WFD technique can substantially improve 4D-CBCT estimation accuracy using extremely small scan angles and a low number of projections to provide fast, low-dose 4D target verification. PMID:28079267
Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT).
Men, Chunhua; Romeijn, H Edwin; Jia, Xun; Jiang, Steve B
2010-11-01
To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with consideration of MLC mechanical constraints. A subsequent master problem is then solved to determine the dose rates at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.
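The Python sketch below shows the skeleton of such a column generation loop for a toy fluence problem: the pricing step opens, for each MLC leaf pair, the contiguous run of beamlets with the most negative price, and the master step refits nonnegative aperture weights by nonnegative least squares. The smoothness term between gantry angles, the full set of MLC mechanical constraints, and all data here are omitted or invented, so this is only a structural illustration rather than the authors' algorithm.

    import numpy as np
    from scipy.optimize import nnls

    def best_row_opening(prices):
        """Most negative contiguous segment of a row of beamlet prices (one leaf-pair opening)."""
        best, cur, start, best_range = 0.0, 0.0, 0, (0, 0)
        for i, p in enumerate(prices):
            if cur > 0.0:
                cur, start = 0.0, i
            cur += p
            if cur < best:
                best, best_range = cur, (start, i + 1)
        return best, best_range

    def price_aperture(gradient, rows, cols):
        """Pricing subproblem: per MLC row, open the contiguous run with the most negative price."""
        g = gradient.reshape(rows, cols)
        mask = np.zeros((rows, cols))
        for r in range(rows):
            score, (a, b) = best_row_opening(g[r])
            if score < 0.0:
                mask[r, a:b] = 1.0
        return mask.ravel()

    def column_generation(D, target, rows, cols, n_apertures=20):
        """Add one aperture per iteration; refit nonnegative aperture weights each time."""
        apertures, columns = [], []
        w, dose = np.zeros(0), np.zeros(D.shape[0])
        for _ in range(n_apertures):
            grad = 2.0 * D.T @ (dose - target)              # beamlet prices from the quadratic objective
            a = price_aperture(grad, rows, cols)
            if not a.any():
                break                                       # no improving aperture exists
            apertures.append(a)
            columns.append(D @ a)
            w, _ = nnls(np.column_stack(columns), target)   # master problem: nonnegative weights
            dose = np.column_stack(columns) @ w
        return apertures, w, dose

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        rows, cols, voxels = 8, 10, 300
        D = 0.1 * rng.random((voxels, rows * cols))          # toy beamlet-to-voxel dose matrix
        target = rng.random(voxels)
        aps, w, dose = column_generation(D, target, rows, cols)
        print(len(aps), "apertures, residual", float(np.linalg.norm(dose - target)))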
Silva, Aleidy; Lee, Bai-Yu; Clemens, Daniel L; Kee, Theodore; Ding, Xianting; Ho, Chih-Ming; Horwitz, Marcus A
2016-04-12
Tuberculosis (TB) remains a major global public health problem, and improved treatments are needed to shorten duration of therapy, decrease disease burden, improve compliance, and combat emergence of drug resistance. Ideally, the most effective regimen would be identified by a systematic and comprehensive combinatorial search of large numbers of TB drugs. However, optimization of regimens by standard methods is challenging, especially as the number of drugs increases, because of the extremely large number of drug-dose combinations requiring testing. Herein, we used an optimization platform, feedback system control (FSC) methodology, to identify improved drug-dose combinations for TB treatment using a fluorescence-based human macrophage cell culture model of TB, in which macrophages are infected with isopropyl β-D-1-thiogalactopyranoside (IPTG)-inducible green fluorescent protein (GFP)-expressing Mycobacterium tuberculosis (Mtb). On the basis of only a single screening test and three iterations, we identified highly efficacious three- and four-drug combinations. To verify the efficacy of these combinations, we further evaluated them using a methodologically independent assay for intramacrophage killing of Mtb; the optimized combinations showed greater efficacy than the current standard TB drug regimen. Surprisingly, all top three- and four-drug optimized regimens included the third-line drug clofazimine, and none included the first-line drugs isoniazid and rifampin, which had insignificant or antagonistic impacts on efficacy. Because top regimens also did not include a fluoroquinolone or aminoglycoside, they are potentially of use for treating many cases of multidrug- and extensively drug-resistant TB. Our study shows the power of an FSC platform to identify promising previously unidentified drug-dose combinations for treatment of TB.
Jin, Cheng; Stein, Gregory J; Hong, Kyung-Han; Lin, C D
2015-07-24
We investigate the efficient generation of low-divergence high-order harmonics driven by waveform-optimized laser pulses in a gas-filled hollow waveguide. The drive waveform is obtained by synthesizing two-color laser pulses, optimized such that highest harmonic yields are emitted from each atom. Optimization of the gas pressure and waveguide configuration has enabled us to produce bright and spatially coherent harmonics extending from the extreme ultraviolet to soft x rays. Our study on the interplay among waveguide mode, atomic dispersion, and plasma effect uncovers how dynamic phase matching is accomplished and how an optimized waveform is maintained when optimal waveguide parameters (radius and length) and gas pressure are identified. Our analysis should help laboratory development in the generation of high-flux bright coherent soft x rays as tabletop light sources for applications.
Woo, Karen; Lok, Charmaine E
2016-08-08
Optimal vascular access planning begins when the patient is in the predialysis stages of CKD. The choice of optimal vascular access for an individual patient and determining timing of access creation are dependent on a multitude of factors that can vary widely with each patient, including demographics, comorbidities, anatomy, and personal preferences. It is important to consider every patient's ESRD life plan (hence, their overall dialysis access life plan for every vascular access creation or placement). Optimal access type and timing of access creation are also influenced by factors external to the patient, such as surgeon experience and processes of care. In this review, we will discuss the key determinants in optimal access type and timing of access creation for upper extremity arteriovenous fistulas and grafts. Copyright © 2016 by the American Society of Nephrology.
Extreme ultraviolet patterning of tin-oxo cages
NASA Astrophysics Data System (ADS)
Haitjema, Jarich; Zhang, Yu; Vockenhuber, Michaela; Kazazis, Dimitrios; Ekinci, Yasin; Brouwer, Albert M.
2017-07-01
We report on the extreme ultraviolet (EUV) patterning performance of tin-oxo cages. These cage molecules were already known to function as a negative tone photoresist for EUV radiation, but in this work, we significantly optimized their performance. Our results show that sensitivity and resolution are only meaningful photoresist parameters if the process conditions are optimized. We focus on contrast curves of the materials using large area EUV exposures and patterning of the cages using EUV interference lithography. It is shown that baking steps, such as postexposure baking, can significantly affect both the sensitivity and contrast in the open-frame experiments as well as the patterning experiments. A layer thickness increase reduced the necessary dose to induce a solubility change but decreased the patterning quality. The patterning experiments were affected by minor changes in processing conditions such as an increased rinsing time. In addition, we show that the anions of the cage can influence the sensitivity and quality of the patterning, probably through their effect on physical properties of the materials.
Kinetic turbulence simulations at extreme scale on leadership-class systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bei; Ethier, Stephane; Tang, William
2013-01-01
Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically-important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q on 786,432 cores of Mira at ALCF and recently of the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low memory per core systems by enabling routine simulations at unprecedented size (130 million grid points ITER-scale) and resolution (65 billion particles).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peth, Christian; Kranzusch, Sebastian; Mann, Klaus
2004-10-01
A table top extreme ultraviolet (EUV)-source was developed at Laser-Laboratorium Goettingen for the characterization of optical components and sensoric devices in the wavelength region from 11 to 13 nm. EUV radiation is generated by focusing the beam of a Q-switched Nd:YAG laser into a pulsed xenon gas jet. Since a directed gas jet with a high number density is needed for an optimal performance of the source, conical nozzles with different cone angles were drilled with an excimer laser to produce a supersonic gas jet. The influence of the nozzle geometry on the gas jet was characterized with a Hartmann-Shack wave front sensor. The deformation of a planar wave front after passing the gas jet was analyzed with this sensor, allowing a reconstruction of the gas density distribution. Thus, the gas jet was optimized resulting in an increase of EUV emission by a factor of two and a decrease of the plasma size at the same time.
Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide
Tang, William; Wang, Bei; Ethier, Stephane; ...
2016-11-01
The goal of the extreme scale plasma turbulence studies described in this paper is to expedite the delivery of reliable predictions on confinement physics in large magnetic fusion systems by using world-class supercomputers to carry out simulations with unprecedented resolution and temporal duration. This has involved architecture-dependent optimizations of performance scaling and addressing code portability and energy issues, with the metrics for multi-platform comparisons being 'time-to-solution' and 'energy-to-solution'. Realistic results addressing how confinement losses caused by plasma turbulence scale from present-day devices to the much larger $25 billion international ITER fusion facility have been enabled by innovative advances in the GTC-P code including (i) implementation of one-sided communication from MPI 3.0 standard; (ii) creative optimization techniques on Xeon Phi processors; and (iii) development of a novel performance model for the key kernels of the PIC code. Our results show that modeling data movement is sufficient to predict performance on modern supercomputer platforms.
Extremal optimization for Sherrington-Kirkpatrick spin glasses
NASA Astrophysics Data System (ADS)
Boettcher, S.
2005-08-01
Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduces the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support to less than 1% accuracy rational values of ω=2/3 for the finite-size correction exponent, and of ρ=3/4 for the fluctuation exponent of the ground state energies, neither one of which has been obtained analytically yet. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. But comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.
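A compact Python sketch of tau-EO for the SK model is given below: each spin's fitness is its alignment with its local field, spins are ranked from worst to best, and the spin at a power-law-distributed rank is flipped unconditionally. The system size, tau value, coupling normalization, and number of updates are illustrative choices, not those used in the study.

    import numpy as np

    def tau_eo_sk(n=200, tau=1.3, sweeps=200, seed=0):
        """tau-EO ground-state search for the SK spin glass with +/-J bonds (illustrative settings)."""
        rng = np.random.default_rng(seed)
        J = rng.choice([-1.0, 1.0], size=(n, n))
        J = np.triu(J, 1)
        J = (J + J.T) / np.sqrt(n)                     # symmetric couplings, scaled so E/N is O(1)
        s = rng.choice([-1.0, 1.0], size=n)

        energy = lambda spins: -0.5 * spins @ J @ spins

        ranks = np.arange(1, n + 1, dtype=float)       # rank 1 = worst-adapted spin
        probs = ranks ** (-tau)
        probs /= probs.sum()                           # power-law rank selection P(k) ~ k^(-tau)

        best_s, best_e = s.copy(), energy(s)
        for _ in range(sweeps * n):
            fitness = s * (J @ s)                      # alignment with the local field
            order = np.argsort(fitness)                # worst spins first
            k = rng.choice(n, p=probs)                 # draw a rank from the power law
            s[order[k]] *= -1.0                        # flip that spin unconditionally
            e = energy(s)
            if e < best_e:
                best_s, best_e = s.copy(), e
        return best_e / n, best_s

    if __name__ == "__main__":
        e_per_spin, _ = tau_eo_sk()
        print("approximate ground-state energy per spin:", e_per_spin)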
Lieder, Falk; Griffiths, Thomas L; Hsu, Ming
2018-01-01
People's decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that to make optimal use of their finite time decision makers should overrepresent the most important potential consequences relative to less important, but potentially more probable, outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people's availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
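The toy Python sketch below illustrates the core idea: outcomes are simulated with probability proportional to their probability times the magnitude of their utility, and the expected utility is recovered with a self-normalized importance-sampling correction. The outcome set and utilities are invented, and weighting by |u| is only an approximation to the paper's extremity-based weighting scheme.

    import numpy as np

    def utility_weighted_estimate(p, u, n_samples=10, seed=0):
        """Estimate E[u] from outcomes sampled with probability ~ p * |u| (self-normalized IS)."""
        rng = np.random.default_rng(seed)
        q = p * np.abs(u)
        q = q / q.sum()                      # utility-weighted simulation distribution
        idx = rng.choice(len(p), size=n_samples, p=q)
        w = p[idx] / q[idx]                  # importance weights correct the oversampling
        return float(np.sum(w * u[idx]) / np.sum(w))

    if __name__ == "__main__":
        # three mundane outcomes and one rare but extreme one
        p = np.array([0.497, 0.300, 0.200, 0.003])
        u = np.array([1.0, 0.5, -0.5, -100.0])
        print("true expected utility:    ", float(p @ u))
        print("utility-weighted estimate:", utility_weighted_estimate(p, u))
        # with few samples the extreme outcome appears far more often than its true
        # frequency, which is the overrepresentation the paper links to availability biases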
Integrated modeling for assessment of energy-water system resilience under changing climate
NASA Astrophysics Data System (ADS)
Yan, E.; Veselka, T.; Zhou, Z.; Koritarov, V.; Mahalik, M.; Qiu, F.; Mahat, V.; Betrie, G.; Clark, C.
2016-12-01
Energy and water systems are intrinsically interconnected. Due to an increase in climate variability and extreme weather events, the interdependency between these two systems has recently intensified, resulting in significant impacts on both systems and on energy output. To address this challenge, an Integrated Water-Energy Systems Assessment Framework (IWESAF) is being developed to integrate multiple existing or newly developed models from various sectors. The IWESAF currently includes an extreme climate event generator to predict future extreme weather events, hydrologic and reservoir models, a riverine temperature model, a power plant water use simulator, and a power grid operation and cost optimization model. The IWESAF can facilitate the interaction among the modeling systems and provide insights into the sustainability and resilience of the energy-water system under extreme climate events and their economic consequences. A case demonstration for the Midwest region will be presented. Detailed information on some of the individual modeling components will also be presented in several other abstracts submitted to AGU this year.
Kim, I Jong; Pae, Ki Hong; Kim, Chul Min; Kim, Hyung Taek; Yun, Hyeok; Yun, Sang Jae; Sung, Jae Hee; Lee, Seong Ku; Yoon, Jin Woo; Yu, Tae Jun; Jeong, Tae Moon; Nam, Chang Hee; Lee, Jongmin
2012-01-01
Coherent short-wavelength radiation from laser–plasma interactions is of increasing interest in disciplines including ultrafast biomolecular imaging and attosecond physics. Using solid targets instead of atomic gases could enable the generation of coherent extreme ultraviolet radiation with higher energy and more energetic photons. Here we present the generation of extreme ultraviolet radiation through coherent high-harmonic generation from self-induced oscillatory flying mirrors—a new-generation mechanism established in a long underdense plasma on a solid target. Using a 30-fs, 100-TW Ti:sapphire laser, we obtain wavelengths as short as 4.9 nm for an optimized level of amplified spontaneous emission. Particle-in-cell simulations show that oscillatory flying electron nanosheets form in a long underdense plasma, and suggest that the high-harmonic generation is caused by reflection of the laser pulse from electron nanosheets. We expect this extreme ultraviolet radiation to be valuable in realizing a compact X-ray instrument for research in biomolecular imaging and attosecond physics. PMID:23187631
A Method for Aircraft Concept Selection Using Multicriteria Interactive Genetic Algorithms
NASA Technical Reports Server (NTRS)
Buonanno, Michael; Mavris, Dimitri
2005-01-01
The problem of aircraft concept selection has become increasingly difficult in recent years as a result of a change from performance as the primary evaluation criterion of aircraft concepts to the current situation in which environmental effects, economics, and aesthetics must also be evaluated and considered in the earliest stages of the decision-making process. This has prompted a shift from design using historical data regression techniques for metric prediction to the use of physics-based analysis tools that are capable of analyzing designs outside of the historical database. The use of optimization methods with these physics-based tools, however, has proven difficult because of the tendency of optimizers to exploit assumptions present in the models and drive the design towards a solution which, while promising to the computer, may be infeasible due to factors not considered by the computer codes. In addition to this difficulty, the number of discrete options available at this stage may be unmanageable due to the combinatorial nature of the concept selection problem, leading the analyst to arbitrarily choose a sub-optimum baseline vehicle. These concept decisions, such as the type of control surface scheme to use, though extremely important, are frequently made without sufficient understanding of their impact on the important system metrics because of a lack of computational resources or analysis tools. This paper describes a hybrid subjective/quantitative optimization method and its application to the concept selection of a Small Supersonic Transport. The method uses Genetic Algorithms to operate on a population of designs and promote improvement by varying more than sixty parameters governing the vehicle geometry, mission, and requirements. In addition to using computer codes for evaluation of quantitative criteria such as gross weight, expert input is also considered to account for criteria such as aeroelasticity or manufacturability which may be impossible or too computationally expensive to consider explicitly in the analysis. Results indicate that concepts resulting from the use of this method represent designs which are promising to both the computer and the analyst, and that a mapping between concepts and requirements that would not otherwise be apparent is revealed.
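For illustration of the hybrid selection idea, the minimal Python genetic algorithm below evolves designs with mixed continuous and discrete variables under a fitness that adds a computed metric to a placeholder "expert score"; the encoding, operators, and both scoring functions are invented stand-ins, not the method or analysis codes used in the paper.

    import random

    def computed_metric(design):
        """Stand-in for a physics-based analysis output (e.g. gross weight); lower is better."""
        cont, option = design
        return sum((c - 0.3) ** 2 for c in cont) + 0.1 * option

    def expert_score(design):
        """Stand-in for subjective expert input (e.g. manufacturability); lower is better."""
        return 0.05 * design[1]

    def fitness(design):
        return computed_metric(design) + expert_score(design)   # hybrid quantitative/subjective criterion

    def random_design(rng, n_cont=5, n_options=3):
        return ([rng.random() for _ in range(n_cont)], rng.randrange(n_options))

    def crossover(a, b, rng):
        cont = [x if rng.random() < 0.5 else y for x, y in zip(a[0], b[0])]
        return (cont, a[1] if rng.random() < 0.5 else b[1])

    def mutate(design, rng, rate=0.1, n_options=3):
        cont = [min(1.0, max(0.0, c + rng.gauss(0.0, 0.1))) if rng.random() < rate else c
                for c in design[0]]
        option = rng.randrange(n_options) if rng.random() < rate else design[1]
        return (cont, option)

    def genetic_algorithm(pop_size=40, generations=60, seed=0):
        rng = random.Random(seed)
        pop = [random_design(rng) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]                       # truncation selection
            children = [mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=fitness)

    if __name__ == "__main__":
        best = genetic_algorithm()
        print("best design:", best, " fitness:", fitness(best))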
De Kleijn, P; Fischer, K; Vogely, H Ch; Hendriks, C; Lindeman, E
2011-11-01
This project aimed to develop guidelines for use during in-hospital rehabilitation after combinations of multiple joint procedures (MJP) of the lower extremities in persons with haemophilia (PWH). MJP are defined as surgical procedures on the ankles, knees and hips, performed in any combination, staged, or during a single session. MJP that we studied included total knee arthroplasty, total hip arthroplasty and ankle arthrodesis. Literature on rheumatoid arthritis demonstrated promising functional results, fewer hospitalization days and fewer days lost from work. However, the complication rate is higher and rehabilitation needs optimal conditions. Since 1995, at the Van Creveldkliniek, 54 PWH have undergone MJP. During the rehabilitation in our hospital, performed by experienced physical therapists, regular guidelines proved to be of little use. Guidelines will guarantee an optimal physical recovery and maximum benefit from this enormous investment. This will lead to an optimal functional capability and optimal quality of life for this elderly group of PWH. A review of the literature revealed no existing guidelines for MJP in haemophilia. Therefore, a working group was formed to develop and implement such guidelines, and the procedure is explained. The total group of PWH who underwent MJP is described, subdivided by combinations of joints. For these subgroups, the number of days in hospital, complications and profile at discharge, as well as a guideline on the clinical rehabilitation, are given. It contains a general part and a part for each specific subgroup. © 2011 Blackwell Publishing Ltd.
How to deal with climate change uncertainty in the planning of engineering systems
NASA Astrophysics Data System (ADS)
Spackova, Olga; Dittes, Beatrice; Straub, Daniel
2016-04-01
The effect of extreme events such as floods on infrastructure and the built environment is associated with significant uncertainties: these include the uncertain effect of climate change, uncertainty in extreme event frequency estimation due to limited historic data and imperfect models, and, not least, uncertainty about future socio-economic developments, which determine the damage potential. One option for dealing with these uncertainties is the use of adaptable (flexible) infrastructure that can easily be adjusted in the future without excessive costs. The challenge is in quantifying the value of adaptability and in finding the optimal sequence of decisions. Is it worthwhile to build a (potentially more expensive) adaptable system that can be adjusted in the future depending on the future conditions? Or is it more cost-effective to make a conservative design without accounting for possible future changes to the system? What is the optimal timing of the decision to build/adjust the system? We develop a quantitative decision-support framework for evaluation of alternative infrastructure designs under uncertainties, which:
• probabilistically models the uncertain future (through a Bayesian approach)
• includes the adaptability of the systems (the costs of future changes)
• takes into account the fact that future decisions will be made under uncertainty as well (using pre-posterior decision analysis)
• allows identification of the optimal capacity and optimal timing to build/adjust the infrastructure.
Application of the decision framework will be demonstrated on an example of flood mitigation planning in Bavaria.
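A toy expected-cost comparison in the spirit of this framework is sketched below in Python: two climate scenarios with prior probabilities, an adaptable design that is upgraded only if the severe scenario materializes, and a conservative design sized for the worst case. All costs and probabilities are invented, and residual flood damages, discounting, and the value of future information are omitted.

    # Toy pre-posterior comparison of an adaptable vs. a conservative flood-protection design.
    # All numbers below are illustrative assumptions, not values from the study.

    scenarios = {"mild climate change": 0.6, "severe climate change": 0.4}   # prior probabilities

    conservative_cost = 140.0          # built once, sized for the severe scenario

    adaptable_initial = 100.0          # cheaper now, but may need a later upgrade
    upgrade_cost = {"mild climate change": 0.0, "severe climate change": 55.0}

    expected_adaptable = adaptable_initial + sum(
        p * upgrade_cost[s] for s, p in scenarios.items()
    )

    print("conservative design expected cost:", conservative_cost)
    print("adaptable design expected cost:   ", expected_adaptable)
    print("adaptable design preferred:", expected_adaptable < conservative_cost)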
Xu, Kui; Ma, Chao; Lian, Jijian; Bin, Lingling
2014-01-01
Catastrophic flooding resulting from extreme meteorological events has occurred more frequently and drawn great attention in recent years in China. In coastal areas, extreme precipitation and storm tide are both inducing factors of flooding and therefore their joint probability would be critical to determine the flooding risk. The impact of storm tide or a changing environment on flooding is ignored or underestimated in the design of today’s drainage systems in coastal areas of China. This paper investigates the joint probability of extreme precipitation and storm tide and its change using copula-based models in Fuzhou City. The change point at the year of 1984 detected by Mann-Kendall and Pettitt’s tests divides the extreme precipitation series into two subsequences. For each subsequence the probability of the joint behavior of extreme precipitation and storm tide is estimated by the optimal copula. Results show that the joint probability has increased by more than 300% on average after 1984 (α = 0.05). The design joint return period (RP) of extreme precipitation and storm tide is estimated to propose a design standard for future flooding preparedness. For a combination of extreme precipitation and storm tide, the design joint RP has become smaller than before. It implies that flooding would happen more often after 1984, which corresponds with the observation. The study would facilitate understanding the change of flood risk and proposing adaptation measures for coastal areas under a changing environment. PMID:25310006
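The short Python sketch below shows how a joint return period can be read off a fitted copula, here a Gumbel copula evaluated at illustrative marginal non-exceedance probabilities; the copula family, parameter values, and the "OR"/"AND" return-period definitions are common textbook choices and are not taken from the study.

    import math

    def gumbel_copula(u, v, theta):
        """Gumbel copula C(u, v) = P(U <= u, V <= v) for dependence parameter theta >= 1."""
        return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

    def joint_return_periods(u, v, theta, events_per_year=1.0):
        """Return periods for the OR case (either variable exceeds) and the AND case (both exceed)."""
        C = gumbel_copula(u, v, theta)
        p_or = 1.0 - C                        # P(X > x or Y > y)
        p_and = 1.0 - u - v + C               # P(X > x and Y > y)
        return 1.0 / (events_per_year * p_or), 1.0 / (events_per_year * p_and)

    if __name__ == "__main__":
        u = v = 0.99                          # marginal non-exceedance probabilities (100-year levels)
        for theta in (1.0, 1.5, 3.0):         # theta = 1 is independence; larger means stronger dependence
            t_or, t_and = joint_return_periods(u, v, theta)
            print(f"theta={theta}: OR return period {t_or:7.1f} yr, AND return period {t_and:8.1f} yr")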
NASA Astrophysics Data System (ADS)
Zhang, J.; Yang, J.; Pan, S.; Tian, H.
2016-12-01
China is not only one of the major agricultural production countries with the largest population in the world, but it is also the most susceptible to climate change and extreme events. Much concern has been raised about how extreme climate has affected crop yield, which is crucial for China's food supply security. However, the quantitative assessment of extreme heat and drought impacts on crop yield in China has rarely been investigated. By using the Dynamic Land Ecosystem Model (DLEM-AG2), a highly integrated process-based ecosystem model with crop-specific simulation, here we quantified spatial and temporal patterns of extreme climatic heat and drought stress and their impacts on the yields of major food crops (rice, wheat, maize, and soybean) across China during 1981-2015, and further investigated the underlying mechanisms. Simulated results showed that extreme heat and drought stress significantly reduced national cereal production and increased the yield gaps between potential yield and rain-fed yield. The drought stress was the primary factor to reduce crop yields in the semi-arid and arid regions, and extreme heat stress slightly aggravated the yield loss. The yield gap between potential yield and rain-fed yield was larger at locations with lower precipitation. Our results suggest that a large exploitable yield gap in response to extreme climatic heat-drought stress offers an opportunity to increase productivity in China by optimizing agronomic practices, such as irrigation, fertilizer use, sowing density, and sowing date.
The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-01-01
Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
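A toy circular-orbit version of the prior-sampling idea is sketched below in Python: trial periods are drawn from a log-uniform prior, the linear amplitudes and velocity offset are fit by weighted least squares for each draw, and draws are kept by rejection sampling on the resulting likelihood. The Joker itself treats full Keplerian orbits and marginalizes the linear parameters analytically, so this is only a schematic analogue with invented data and prior ranges.

    import numpy as np

    def joker_like_sampler(t, rv, rv_err, n_prior=20000, n_keep=256, seed=0):
        """Rejection-sample orbital periods for a circular-orbit radial-velocity model (toy sketch)."""
        rng = np.random.default_rng(seed)
        periods = np.exp(rng.uniform(np.log(2.0), np.log(500.0), n_prior))   # log-uniform prior (days)
        log_like = np.empty(n_prior)
        for i, period in enumerate(periods):
            phase = 2.0 * np.pi * t / period
            A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(t)])
            wts = 1.0 / rv_err
            coef, *_ = np.linalg.lstsq(A * wts[:, None], rv * wts, rcond=None)  # linear fit per draw
            log_like[i] = -0.5 * np.sum(((rv - A @ coef) / rv_err) ** 2)
        accept = rng.random(n_prior) < np.exp(log_like - log_like.max())        # rejection step
        return periods[accept][:n_keep]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 200.0, 5))              # only five sparse epochs
        true_period, amplitude = 37.0, 40.0
        rv = amplitude * np.sin(2.0 * np.pi * t / true_period) + rng.normal(0.0, 2.0, t.size)
        rv_err = np.full(t.size, 2.0)
        samples = joker_like_sampler(t, rv, rv_err)
        print(len(samples), "surviving period samples, e.g.:", np.round(samples[:5], 2))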
NASA Astrophysics Data System (ADS)
Ciofani, Gianni; Danti, Serena; D'Alessandro, Delfo; Moscato, Stefania; Petrini, Mario; Menciassi, Arianna
2010-07-01
In recent years, innovative nanomaterials have attracted a dramatic and exponentially increasing interest, in particular for their potential applications in the biomedical field. In this paper, we reported our findings on the cytocompatibility of barium titanate nanoparticles (BTNPs), an extremely interesting ceramic material. A rational and systematic study of BTNP cytocompatibility was performed, using a dispersion method based on a non-covalent binding to glycol-chitosan, which demonstrated the optimal cytocompatibility of this nanomaterial even at high concentration (100 μg/ml). Moreover, we showed that the efficiency of doxorubicin, a widely used chemotherapy drug, is highly enhanced following complexation with BTNPs. Our results suggest that innovative ceramic nanomaterials such as BTNPs can be realistically exploited as alternative cellular nanovectors.
GOES-R SUVI EUV Flatfields Generated Using Boustrophedon Scans
NASA Astrophysics Data System (ADS)
Shing, L.; Edwards, C.; Mathur, D.; Vasudevan, G.; Shaw, M.; Nwachuku, C.
2017-12-01
The Solar Ultraviolet Imager (SUVI) is mounted on the Solar Pointing Platform (SPP) of the Geostationary Operational Environmental Satellite, GOES-R. SUVI is a Generalized Cassegrain telescope with a large field of view that employs multilayer coatings optimized to operate in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. The SUVI CCD flatfield response was determined using two different techniques: the Kuhn-Lin-Lorentz (KLL) Raster and a new technique called Dynamic Boustrophedon Scans. The new technique requires less time to collect the data and is also less sensitive to solar features compared with the KLL method. This paper presents the flatfield results of SUVI using this technique during Post Launch Testing (PLT).