Size principle and information theory.
Senn, W; Wyler, K; Clamann, H P; Kleinle, J; Lüscher, H R; Müller, L
1997-01-01
The motor units of a skeletal muscle may be recruited according to different strategies. From all possible recruitment strategies, nature selected the simplest one: in most actions of vertebrate skeletal muscles, motor units are recruited by increasing size. This so-called size principle permits high precision in muscle force generation, since small muscle forces are produced exclusively by small motor units. Larger motor units are activated only if the total muscle force has already reached certain critical levels. We show that this recruitment by size is not only optimal in precision but also optimal in an information theoretical sense. We consider the motoneuron pool as an encoder generating a parallel binary code from a common input to that pool. The generated motoneuron code is sent down through the motoneuron axons to the muscle. We establish that an optimization of this motoneuron code with respect to its information content is equivalent to the recruitment of motor units by size. Moreover, maximal information content of the motoneuron code is equivalent to a minimal expected error in muscle force generation.
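A minimal sketch of the precision argument, under assumed (illustrative) unit forces: with size-ordered recruitment, the force step added by the next unit stays small relative to the force already produced, so low forces are finely graded. The force values and thresholds below are not taken from the paper.

```python
import numpy as np

# Toy motoneuron pool: unit twitch forces span a wide range, sorted by size.
# All numbers are illustrative assumptions, not values from the paper.
forces = np.array([1, 2, 4, 8, 16, 32], dtype=float)  # arbitrary force units
thresholds = np.cumsum(forces) - forces               # size-ordered recruitment

def muscle_force(drive):
    """Total force when all units with threshold below the drive are active."""
    return forces[thresholds < drive].sum()

# Force resolution near a target: the step added by the next recruited unit.
for drive in (1.5, 10.0, 40.0):
    recruited = thresholds < drive
    step = forces[~recruited][0] if (~recruited).any() else 0.0
    print(f"drive={drive:5.1f}  force={muscle_force(drive):5.1f}  next step={step:4.1f}")
```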
Motor unit recruitment by size does not provide functional advantages for motor performance
Dideriksen, Jakob L; Farina, Dario
2013-01-01
It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers. PMID:24144879
Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data
NASA Astrophysics Data System (ADS)
Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad
2018-01-01
The most productive scale size (MPSS) is a measurement that states how resources should be organized and utilized to achieve optimal results, and it can be used as a benchmark for the success of an industry or company in producing goods or services. To estimate MPSS, each decision making unit (DMU) should pay attention to the level of input-output efficiency; with the data envelopment analysis (DEA) method, a DMU can identify the units used as references, which helps to find the causes of and solutions to inefficiency and to optimize productivity, the main advantage in managerial applications. Therefore, DEA is chosen for estimating MPSS, focusing on integer-valued input data with the CCR model and the BCC model. The purpose of this research is to find the best solution for estimating MPSS with integer-valued input data in the DEA method.
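A sketch of the input-oriented CCR envelopment LP the abstract builds on, assuming tiny made-up data: minimize theta subject to X·lambda <= theta·x_k and Y·lambda >= y_k with lambda >= 0. Adding the convexity constraint sum(lambda) = 1 gives the BCC model; a DMU operating at MPSS is efficient under both.

```python
import numpy as np
from scipy.optimize import linprog

# Toy integer-valued data (illustrative only): m inputs x n DMUs, s outputs x n DMUs.
X = np.array([[2, 4, 3, 6],
              [3, 1, 5, 2]], dtype=float)
Y = np.array([[1, 2, 2, 3]], dtype=float)

def ccr_efficiency(k):
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [theta, lam_1..lam_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # X lam - theta x_k <= 0  and  -Y lam <= -y_k
    A_ub = np.block([[-X[:, [k]], X], [np.zeros((s, 1)), -Y]])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(X.shape[1]):
    print(f"DMU {k}: CCR efficiency = {ccr_efficiency(k):.3f}")
```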
Ranked set sampling: cost and optimal set size.
Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying
2002-12-01
McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
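A Monte Carlo sketch of the trade-off the article analyzes, under an assumed cost model (ranking cheap relative to measurement) and perfect rankings; the unit costs and the variance-times-cost criterion are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_mean(k, cycles, draw):
    """One balanced-RSS estimate: in each cycle, for each rank i, draw a set
    of k units, rank them (perfect ranking assumed), measure the i-th smallest."""
    vals = [np.sort(draw(k))[i] for _ in range(cycles) for i in range(k)]
    return np.mean(vals)

def mc_variance(k, cycles, reps=2000):
    draw = lambda n: rng.normal(size=n)
    return np.var([rss_mean(k, cycles, draw) for _ in range(reps)])

# Compare set sizes at (roughly) equal measurement effort: k * cycles ~ 12.
for k, cycles in [(2, 6), (3, 4), (4, 3), (6, 2)]:
    c_rank, c_meas = 0.1, 1.0                    # illustrative unit costs
    cost = k * cycles * (k * c_rank + c_meas)    # rank k units per measurement
    v = mc_variance(k, cycles)
    print(f"k={k}: var={v:.4f}, cost={cost:.1f}, var*cost={v*cost:.4f}")
```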
NASA Astrophysics Data System (ADS)
Sundaramoorthy, Kumaravel
2017-02-01
Electricity generation based on hybrid energy systems (HESs) has become a more attractive solution for rural electrification nowadays. Economically feasible and technically reliable HESs are solidly based on an optimization stage. This article discusses an optimal unit sizing model with the objective of minimizing the total cost of the HES. Three typical rural sites from the southern part of India have been selected for the application of the developed optimization methodology. Feasibility studies and sensitivity analysis of the optimal HES are discussed elaborately in this article. A comparison has been carried out with the Hybrid Optimization Model for Electric Renewable (HOMER) optimization model for the three sites. The optimal HES is found to have a lower total net present cost and cost of energy compared with the existing method.
[Calculating the optimum size of a hemodialysis unit based on infrastructure potential].
Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis
2010-01-01
To estimate the optimum size for hemodialysis units to maximize production given capital constraints, a national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimal-size unit should have 16 dialyzers (15 active and one backup) and a purifier system able to supply all of them. It also requires one nephrologist and five nurses per shift, considering four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production due to not fully utilizing equipment and personnel, particularly their water purifier potential, which happens to be the most expensive asset for these units.
Mathematical model of parking space unit for triangular parking area
NASA Astrophysics Data System (ADS)
Syahrini, Intan; Sundari, Teti; Iskandar, Taufiq; Halfiani, Vera; Munzir, Said; Ramli, Marwan
2018-01-01
Parking space unit (PSU) is an effective measure for the area size of a vehicle, including the free space and the width of the door opening of the vehicle (car). This article discusses a mathematical model for parking spaces of vehicles in a triangular area. An optimization model for a triangular parking lot is developed, and Integer Linear Programming (ILP) is used to determine the maximum number of PSUs. The triangular parking lots considered are isosceles and equilateral triangles, with four possible rows and five possible angles for each field. The vehicles considered are cars and motorcycles. The results show that the isosceles triangular parking area has 218 units of optimal PSU, comprising 84 PSUs for cars and 134 PSUs for motorcycles. The equilateral triangular parking area has 688 units of optimal PSU, comprising 175 PSUs for cars and 513 PSUs for motorcycles.
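A heavily simplified ILP sketch of the kind of model described: maximize the number of PSUs subject to an area budget. The PSU areas, lot area, and the minimum-cars rule are all assumptions for illustration; the paper's actual model also encodes rows and angles of the triangular layout.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

a_car, a_moto = 12.5, 1.5     # assumed PSU areas in m^2 (illustrative)
lot_area = 400.0              # assumed usable lot area after aisles

c = np.array([-1.0, -1.0])                                  # milp minimizes, so negate
area = LinearConstraint(np.array([[a_car, a_moto]]), ub=lot_area)
min_cars = LinearConstraint(np.array([[1.0, 0.0]]), lb=10)  # assumed demand-mix rule
res = milp(c, constraints=[area, min_cars],
           integrality=np.ones(2), bounds=Bounds(0, np.inf))
cars, motos = res.x.round().astype(int)
print(f"optimal PSUs: {cars} cars + {motos} motorcycles")
```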
Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R
2015-03-01
Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls for engineering scaffolds. This work investigates how the design-to-manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit cells, so scale-size effects on the mechanical response under finite periodicity are investigated and compared with the predictions of the homogenization method, which assumes infinitely repeated unit cells in the limit. Results show that a limited number of unit cells (3-5 repeated on a side) introduces some scale effects, but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations, due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However, good regression correlations (R² > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs.
Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.
Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei
2017-09-01
Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.
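A numeric sketch of the Smith-Fretwell logic the article builds on: with resources R, a seed of size m costs m and yields juvenile survival s(m), so parental fitness W(m) = (R/m)·s(m) and the optimum maximizes s(m)/m, i.e., fitness per unit invested, found where s'(m) = s(m)/m. The survival function below is an assumed sigmoid, not the paper's fitted curve.

```python
import numpy as np

def s(m, m_min=1.0, a=2.0):
    """Assumed sigmoidal survival: zero below m_min, saturating above."""
    return np.where(m > m_min, 1.0 - np.exp(-(m - m_min) / a), 0.0)

m = np.linspace(0.01, 15, 5000)
w = s(m) / m                       # fitness per unit of reproductive investment
m_opt = m[np.argmax(w)]
print(f"optimal seed size ~ {m_opt:.2f} (survival there: {float(s(m_opt)):.2f})")
# Pollen limitation in the paper reduces ovule fertilization, raising the
# effective cost of producing a seed and shifting this optimum toward larger m.
```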
Design optimization of large-size format edge-lit light guide units
NASA Astrophysics Data System (ADS)
Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.
2016-04-01
In this paper, we present an original method of dot pattern generation dedicated to the design optimization of large-size format light guide plates (LGPs), such as photo-bioreactors, for which the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical resolution of ink dot printing. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the two-dimensional distribution of the total integrated scatter (TIS) over the grid of equivalent cells using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into a dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It significantly reduces the total time needed for dot pattern optimization.
Xu, Yangli; Zhang, Dongyun; Zhou, Yan; Wang, Weidong; Cao, Xuanyang
2017-01-01
The combination of topology optimization (TOP) and selective laser melting (SLM) provides the possibility of fabricating complex, lightweight and high-performance geometries, overcoming the traditional manufacturing “bottleneck”. This paper evaluates the biomechanical properties of porous structures with porosity from 40% to 80% and unit cell size from 2 to 8 mm, designed by TOP and manufactured by SLM. During the manufacturability exploration, three typical structures, including a spiral structure, an arched bridge structure, and structures with thin walls and small holes, are abstracted and investigated, analyzing their manufacturing limits and formation mechanisms. The property tests show that the dynamic elastic modulus and compressive strength of the porous structures decrease with increasing porosity (at constant unit cell size) or unit cell size (at constant porosity). Based on the Gibson-Ashby model, three failure models are proposed to describe their compressive behavior, and the structural parameter λ is used to evaluate the stability of the porous structure. Finally, a numerical model for the correlation between porous structural parameters (unit cell size and porosity) and elastic modulus is established, which provides a theoretical reference for matching the elastic modulus of human bones of different age, gender and skeletal sites during innovative medical implant design and manufacturing. PMID:28880229
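A worked sketch of the Gibson-Ashby scaling the abstract cites: relative modulus E/E_s = C·(rho/rho_s)^n with relative density equal to one minus porosity. The constants C, n, and the solid modulus below are assumed placeholder values (open-cell foams typically have n near 2); the paper fits its own model to the SLM structures.

```python
import numpy as np

E_s = 110e3          # MPa, roughly a Ti-6Al-4V solid modulus (assumption)
C, n = 1.0, 2.0      # placeholder Gibson-Ashby constants

for porosity in (0.4, 0.5, 0.6, 0.7, 0.8):
    rel_density = 1.0 - porosity
    E = E_s * C * rel_density ** n
    print(f"porosity {porosity:.0%}: predicted E ~ {E / 1e3:.1f} GPa")
```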
Fetisova, Z G
2004-01-01
In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues purposeful search in natural photosynthetic units (PSU) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of light-harvesting antenna of variable size controlled in vivo by the light intensity during the growth of organisms, which accentuates the problem of antenna structure optimization because optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling for the functioning of natural PSUs, we have shown that the aggregation of pigments of model light-harvesting antenna, being one of universal optimizing factors, furthermore allows controlling the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of light-harvesting antenna is biologically expedient.
Optimal placement and sizing of wind / solar based DG sources in distribution system
NASA Astrophysics Data System (ADS)
Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng
2017-06-01
Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of distribution systems. Performance modeling of the wind and solar generation systems is described, and the models are classified into PQ, PQ(V) and PI types in power flow. Considering that WTGU and PV based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the setting area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate its performance and effectiveness.
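A compact sketch of the swarm-based sizing idea. The abstract's method is QPSO with a power-flow objective; the code below is plain PSO with a stand-in quadratic objective, so both the loss function and every parameter value are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(dg_sizes):
    """Stand-in for the power-flow loss evaluation at fixed DG buses; the
    real objective would run a load flow on the IEEE 33-bus feeder."""
    target = np.array([1.2, 0.8])  # MW; fictitious loss-minimizing sizes
    return np.sum((dg_sizes - target) ** 2)

n_part, n_dim, iters = 20, 2, 100
w_in, c1, c2 = 0.7, 1.5, 1.5
x = rng.uniform(0, 2, (n_part, n_dim))          # DG sizes within capacity limits
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, n_dim))
    v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 2)                    # enforce DG capacity bounds
    f = np.array([loss(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best DG sizes (MW):", gbest.round(3))
```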
The Community Line Source (C-LINE) modeling system estimates emissions and dispersion of toxic air pollutants for roadways within the continental United States. It accesses publicly available traffic and meteorological datasets, and is optimized for use on community-sized areas (...
AN EXPERIMENTAL ASSESSMENT OF MINIMUM MAPPING UNIT SIZE
Land-cover (LC) maps derived from remotely sensed data are often presented using a minimum mapping unit (MMU). The choice of a MMU that is appropriate for the projected use of a classification is important. The objective of this experiment was to determine the optimal MMU of a L...
Unit bias. A new heuristic that helps explain the effect of portion size on food intake.
Geier, Andrew B; Rozin, Paul; Doros, Gheorghe
2006-06-01
People seem to think that a unit of some entity (with certain constraints) is the appropriate and optimal amount. We refer to this heuristic as unit bias. We illustrate unit bias by demonstrating large effects of unit segmentation, a form of portion control, on food intake. Thus, people choose, and presumably eat, much greater weights of Tootsie Rolls and pretzels when offered a large as opposed to a small unit size (and given the option of taking as many units as they choose at no monetary cost). Additionally, they consume substantially more M&M's when the candies are offered with a large as opposed to a small spoon (again with no limits as to the number of spoonfuls to be taken). We propose that unit bias explains why small portion sizes are effective in controlling consumption; in some cases, people served small portions would simply eat additional portions if it were not for unit bias. We argue that unit bias is a general feature in human choice and discuss possible origins of this bias, including consumption norms.
Large-area landslide susceptibility with optimized slope-units
NASA Astrophysics Data System (ADS)
Alvioli, Massimiliano; Marchesini, Ivan; Reichenbach, Paola; Rossi, Mauro; Ardizzone, Francesca; Fiorucci, Federica; Guzzetti, Fausto
2017-04-01
A Slope-Unit (SU) is a type of morphological terrain unit bounded by drainage and divide lines that maximizes within-unit homogeneity and between-unit heterogeneity across distinct physical and geographical boundaries [1]. Compared to other terrain subdivisions, SU are morphological terrain units well related to the natural (i.e., geological, geomorphological, hydrological) processes that shape and characterize natural slopes. This makes SU easily recognizable in the field or on topographic base maps, and well suited for environmental and geomorphological analysis, in particular for landslide susceptibility (LS) modelling. An optimal subdivision of an area into a set of SU depends on multiple factors: size and complexity of the study area, quality and resolution of the available terrain elevation data, purpose of the terrain subdivision, and scale and resolution of the phenomena for which SU are delineated. We use the recently developed r.slopeunits software [2,3] for the automatic, parametric delineation of SU within the open source GRASS GIS, based on terrain elevation data and a small number of user-defined parameters. The software provides subdivisions consisting of SU with different shapes and sizes, as a function of the input parameters. In this work, we describe a procedure for the optimal selection of the user parameters through the production of a large number of realizations of the LS model. We tested the software and the optimization procedure in a 2,000 km² area in Umbria, Central Italy. For LS zonation we adopt a logistic regression model (LRM) implemented in a well-known software package [4,5], using about 50 independent variables. To select the optimal SU partition for LS zonation, we define a metric that quantifies simultaneously: (i) slope-unit internal homogeneity, (ii) slope-unit external heterogeneity, and (iii) landslide susceptibility model performance. To this end, we define a comprehensive objective function S as the product of three normalized objective functions dealing with points (i)-(iii) independently. We use an intra-segment variance function V, Moran's autocorrelation index I, and the AUCROC function R arising from the application of the logistic regression model. Maximization of the objective function S = f(I,V,R) as a function of the r.slopeunits input parameters provides an objective and reproducible way to select the optimal parameter combination for a proper SU subdivision for LS modelling. We further perform an analysis of the statistical significance of the LS models as a function of the r.slopeunits input parameters, focusing on the degree of coarseness of each subdivision. We find that the LRM, when applied to subdivisions with large average SU size, has very poor statistical significance, with only a few (5%, typically lithological) variables being used in the regression due to the large heterogeneity of all variables within each unit, while up to 35% of the variables are used when SU are very small. This behavior was largely expected and provides further evidence that an objective method to select SU size is highly desirable. [1] Guzzetti, F. et al., Geomorphology 31 (1999) 181-216. [2] Alvioli, M. et al., Geoscientific Model Development 9 (2016), 3975-3991. [3] http://geomorphology.irpi.cnr.it/tools/slope-units [4] Rossi, M. et al., Geomorphology 114 (2010), 129-142. [5] Rossi, M. and Reichenbach, P., Geoscientific Model Development 9 (2016), 3533-3543.
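A minimal sketch of combining the three criteria into S = f(I,V,R) for a handful of candidate partitions. The normalization to [0, 1] and the sign conventions (lower V and I better, higher R better) are assumptions about the form of the three normalized objective functions; all values are illustrative.

```python
import numpy as np

V = np.array([0.9, 0.6, 0.4, 0.5])   # intra-unit variance per partition (lower = better)
I = np.array([0.5, 0.3, 0.2, 0.4])   # Moran's I autocorrelation (lower = better)
R = np.array([0.70, 0.78, 0.82, 0.80])  # susceptibility-model AUCROC (higher = better)

norm = lambda z: (z - z.min()) / (z.max() - z.min())
S = (1 - norm(V)) * (1 - norm(I)) * norm(R)   # product of normalized scores
print("best partition:", S.argmax(), "with S =", S.round(3))
```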
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Li; Kok, Jasper F.; Henze, Daven
2013-06-28
To improve estimates of remote contributions of dust to fine particulate matter (PM2.5) in the western United States, new dust particle size distributions (PSDs) based upon scale-invariant fragmentation theory (Kok_PSD) with constraints from in situ measurements (IMP_PSD) are implemented in a chemical transport model (GEOS-Chem). Compared to initial simulations, this leads to reductions in the mass of emitted dust particles with radii <1.8 μm by 40%-60%. Consequently, the root-mean-square error in simulated fine dust concentrations compared to springtime surface observations in the western United States is reduced by 67%-81%. The ratio of simulated fine to coarse PM mass is also improved, which is not achievable by reductions in total dust emissions. The IMP_PSD best represents the PSD of dust transported from remote sources and reduces modeled PM2.5 concentrations up to 5 μg/m³ over the western United States, which is important when considering sources contributing to nonattainment of air quality standards. Citation: Zhang, L., J. F. Kok, D. K. Henze, Q. Li, and C. Zhao (2013), Improving simulations of fine dust surface concentrations over the western United States by optimizing the particle size distribution, Geophys. Res. Lett., 40, 3270-3275, doi:10.1002/grl.50591.
Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao
This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity for behind-the-meter BSS. In the proposed method, a linear program is first formulated using only typical load profiles, energy/demand charge rates, and a set of battery parameters to determine the maximum saving in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of its power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizing have been performed for a few commercial buildings and utility rate structures that are representative of those found in the various regions of the Continental United States. The key factors that affect the economic benefits and optimal size have been identified. The proposed methods and case study results can not only help commercial and industrial customers or battery vendors to evaluate and size the storage system for behind-the-meter applications, but can also assist utilities and policy makers in designing electricity rates or subsidies to promote the development of energy storage.
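A toy sizing LP in the spirit of the abstract, not the authors' formulation: co-optimize hourly charge/discharge with energy capacity, power capacity, and peak demand over a tiny horizon. A lossless battery, no grid export, and all prices, loads, and amortized capacity costs are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

T = 4
load = np.array([50.0, 80.0, 120.0, 60.0])     # kW (illustrative)
price = np.array([0.08, 0.10, 0.25, 0.10])     # $/kWh (illustrative)
demand_rate, cE, cP = 15.0, 0.02, 0.05         # $/kW, amortized $/kWh, $/kW

# Variables z = [charge_1..T, discharge_1..T, E_cap, P_cap, peak].
nv = 2 * T + 3
iE, iP, iPk = 2 * T, 2 * T + 1, 2 * T + 2
c = np.zeros(nv)
c[:T], c[T:2 * T] = price, -price              # grid energy = load + c - d
c[iE], c[iP], c[iPk] = cE, cP, demand_rate

A, b = [], []
for t in range(T):
    row = np.zeros(nv); row[t] = 1; row[T + t] = -1; row[iPk] = -1
    A.append(row); b.append(-load[t])          # peak >= load_t + c_t - d_t
    for j in (t, T + t):                       # power limits c_t, d_t <= P_cap
        row = np.zeros(nv); row[j] = 1; row[iP] = -1; A.append(row); b.append(0.0)
    row = np.zeros(nv); row[T + t] = 1; row[t] = -1
    A.append(row); b.append(load[t])           # no export: d_t - c_t <= load_t
    soc = np.zeros(nv); soc[:t + 1] = 1; soc[T:T + t + 1] = -1
    A.append(soc - np.eye(nv)[iE]); b.append(0.0)   # state of charge <= E_cap
    A.append(-soc); b.append(0.0)                   # state of charge >= 0

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, None)] * nv)
z = res.x
print(f"E_cap={z[iE]:.1f} kWh, P_cap={z[iP]:.1f} kW, peak={z[iPk]:.1f} kW")
```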
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
NASA Astrophysics Data System (ADS)
Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei
2018-05-01
A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
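A minimal sketch of Tikhonov retrieval with the second-order difference regularizer the abstract identifies as suitable: solve x = argmin ||Ax - b||² + λ²||Lx||² by stacking. The kernel A here is a random stand-in for the multi-wavelength scattering kernel, and λ is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.random((60, n))                                     # stand-in kernel
x_true = np.exp(-0.5 * ((np.arange(n) - 25) / 5.0) ** 2)    # smooth PSD
b = A @ x_true + 1e-3 * rng.normal(size=60)

# Second-order difference matrix: each row encodes x[i] - 2 x[i+1] + x[i+2].
L = np.diff(np.eye(n), n=2, axis=0)                         # shape (n-2, n)

lam = 1e-2
A_aug = np.vstack([A, lam * L])
b_aug = np.r_[b, np.zeros(n - 2)]
x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```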
Hamdan, Sadeque; Cheaitou, Ali
2017-08-01
This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discount and a varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, with the problem size varied, and are then used by the optimization problem to obtain the corresponding optimal outputs as well as the corresponding computation time.
Querol, Sergio; Mufti, Ghulam J; Marsh, Steven G E; Pagliuca, Antonio; Little, Ann-Margaret; Shaw, Bronwen E; Jeffery, Robert; Garcia, Joan; Goldman, John M; Madrigal, J Alejandro
2009-04-01
A stored cord blood donation may be a valuable source of hemopoietic stem cells for allogeneic transplantation when a matched sibling donor is not available. We carried out a study to define the optimal size of a national cord blood bank for the UK. We calculated the actual numbers of possible donors and the chance of finding at least one donor for 2,000 unselected and for 722 non-North Western European patients for whom searches had been initiated as a function of three levels of HLA matching (4, 5 and 6 out of 6 alleles by HLA-A, -B low and -DRB1 high resolution HLA typing) according to various donor bank sizes. With a bank size of 50,000, 80% of patients will have at least one donor unit available at the 5 out of 6 HLA allele match level (median 9 donors per patient), and 98% will have at least one donor at the 4 out of 6 allele match level (median 261). Doubling the size of the bank yields at least one donor for only an additional 6% of patients at the 5 of 6 allele match level. Moreover, for non-North Western European patients a 50,000 unit bank provides a donor for 50% at the 5 allele match level, and for 96% at the 4 allele match level. A bank containing 50,000 units is optimal for the UK and larger banks would only marginally increase the chance of finding suitable units.
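A worked sketch of the underlying coverage arithmetic: if a random unit matches a given patient with probability q at some HLA level, a bank of N units gives P(at least one match) = 1 - (1-q)^N. Here q is back-calculated from the paper's 80% coverage at 50,000 units, as an illustrative assumption.

```python
N0, coverage = 50_000, 0.80
q = 1 - (1 - coverage) ** (1 / N0)        # solves 1 - (1-q)^N0 = coverage

for N in (25_000, 50_000, 100_000):
    p = 1 - (1 - q) ** N
    print(f"bank size {N:>7,}: P(>=1 five-of-six donor) = {p:.1%}")
```

Note that this homogeneous-q model predicts a larger gain from doubling the bank than the 6% the study reports; the discrepancy is expected, since match probability varies strongly across patients (e.g., by ethnic background), which flattens the real coverage curve.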
The effect of laser unit on photodynamic therapy spot size.
Ansari-Shahrezaei, Siamak; Binder, Susanne; Stur, Michael
2011-01-01
To determine the effect of the laser unit on photodynamic therapy (PDT) spot size, a calibrated Gullstrand-type model eye was used. The axial length of the model eye was set to different values ranging from 22.2 to 27.0 mm, and the actual spot size from the laser console was recorded for treating a spot of 4 mm in the center of the artificial fundus using two different laser units (Coherent Opal laser; Coherent Inc, Santa Clara, California, USA and Zeiss Visulas laser; Carl Zeiss Meditec Inc, Dublin, California, USA) and two indirect contact laser lenses (Volk PDT laser lens and Volk Area Centralis lens; Volk Optical Inc, Mentor, Ohio, USA). From myopia to hyperopia, the total deviation from the intended spot size was -22.5% to -7.5% (Opal laser and PDT laser lens), -17.5% to +2.5% (Visulas laser and PDT laser lens), -12.5% to +7.5% (Opal laser and Area Centralis lens), and -7.5% to +10% (Visulas laser and Area Centralis lens). The laser unit used has a significant effect on PDT spot size in this model. These findings may be important for optimizing PDT of choroidal neovascular lesions.
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
NASA Astrophysics Data System (ADS)
Szczepura, Katy; Tomkinson, David; Manning, David
2017-03-01
Tube current modulation is a method employed in CT in an attempt to optimize radiation dose to the patient. The acceptable noise (noise index) can be varied based on the level of optimization required; accepting higher noise reduces the patient dose. Recent research [1] suggests that measuring the conspicuity index (C.I.) of focal lesions within an image reflects a clinical reader's ability to perceive focal lesions better than traditional physical measures such as contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). Software has been developed and validated to calculate the C.I. in DICOM images. The aim of this work is to assess the impact of tube current modulation on conspicuity index and CTDIvol, to indicate the benefits and limitations of tube current modulation on lesion detectability. Method: An anthropomorphic chest phantom ("Lungman") was used with inserted lesions of varying size and Hounsfield unit (HU) value; the range of HU values and sizes was chosen to represent the variation found in clinical lesions, which meant some lesions had negative HU values.
NASA Astrophysics Data System (ADS)
Roy, P. C.; Majumder, A.; Chakraborty, N.
2010-10-01
An estimation of a stand-alone solar PV and wind hybrid system for distributed power generation has been made based on the resources available at Sagar Island, a remote area distant from grid operation. Optimization and sensitivity analyses have been made to evaluate the feasibility and size of the power generation unit. A comparison of the different modes of the hybrid system has been studied. It has been estimated that the Solar PV-Wind-DG hybrid system provides a lower per-unit electricity cost. Capital investment is observed to be lower when the system runs with Wind-DG compared to Solar PV-DG.
Toward a theory of energetically optimal body size in growing animals.
Hannon, B M; Murphy, M R
2016-06-01
Our objective was to formulate a general and useful model of the energy economy of the growing animal. We developed a theory that the respiratory energy per unit of size reaches a minimum at a particular point, when the marginal respiratory heat production rate is equal to the average rate. This occurs at what we defined as the energetically optimal size for the animal. The relationship between heat production rate and size was found to be well described by a cubic function in which heat production rate accelerates as the animal approaches and then exceeds its optimal size. Reanalysis of energetics data from the literature often detected cubic curvature in the relationship between heat production rate and body size of fish, rats, chickens, goats, sheep, swine, cattle, and horses. This finding was consistent with the theory for 13 of 17 data sets. The bias-corrected Akaike information criterion indicated that the cubic equation modeled the influence of the size of a growing animal on its heat production rate better than a power function for 11 of 17 data sets. Changes in the sizes and specific heat production rates of metabolically active internal organs, and body composition and tissue turnover rates were found to explain notable portions of the expected increase in heat production rate as animals approached and then exceeded their energetically optimum size. Accelerating maintenance costs in this region decrease net energy available for productive functions. Energetically and economically optimum size criteria were also compared.
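A worked sketch of the optimality condition in the abstract: the energetically optimal size minimizes H(x)/x, which occurs where the marginal rate equals the average rate, H'(x) = H(x)/x. For a cubic H(x) = a + bx + cx² + dx³ this reduces to 2dx³ + cx² - a = 0. The coefficients below are made up for illustration.

```python
import numpy as np

a, b_, c_, d_ = 5.0, 1.0, -0.04, 0.0008      # assumed cubic coefficients
H = lambda x: a + b_ * x + c_ * x ** 2 + d_ * x ** 3

# H'(x) = H(x)/x  <=>  x H'(x) - H(x) = 0  <=>  2 d x^3 + c x^2 - a = 0
roots = np.roots([2 * d_, c_, 0.0, -a])
x_opt = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
print("energetically optimal size(s):", np.round(x_opt, 2))
```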
Optimal Inspection of Imports to Prevent Invasive Pest Introduction.
Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G
2018-03-01
The United States imports more than 1 billion live plants annually-an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
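A Monte Carlo sketch of the slippage quantity the paper derives in closed form: infested units that ship in lots that pass inspection. The accept-on-zero-detections rule and all parameter values are assumptions for illustration; the paper's analytical formula covers the same ingredients (lot size, sample size, infestation rate, detection rate).

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_slippage(N, n, p, d, reps=20_000):
    """Average infested units accepted per lot under accept-on-zero-detections."""
    slipped = 0
    for _ in range(reps):
        infested = rng.random(N) < p                 # which units are infested
        sample = rng.choice(N, size=n, replace=False)
        detected = rng.random(n) < d                 # per-unit detection draw
        if not np.any(infested[sample] & detected):  # lot accepted
            slipped += infested.sum()                # every infested unit ships
    return slipped / reps

print(expected_slippage(N=500, n=30, p=0.05, d=0.9))
```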
NASA Astrophysics Data System (ADS)
Lu, H. R.; Su, L. C.; Ruan, H. D.
2016-08-01
This study attempts to determine and optimize the removal efficiency of heavy metals in a water purification unit using a low-cost waste material and modified mineral waste materials (MMWM) accompanied by activated carbon (AC) derived from waste materials. The factors of the inner diameter of the purification unit (2.6-5 cm), the height of the packing materials (5-20 cm), the size of the AC (200-20 mesh), the size of the MMWM (1-0.045 mm), and the ratio between AC and MMWM in the packing materials (1:0 - 0:1) were examined based on an L18 (5) 3 orthogonal array design. In order to achieve the maximum removal efficiency, the factors of the inner diameter of the purification unit (2.6-7.5 cm), the height of the packing materials (10-30 cm), and the ratio between AC and MMWM in the packing materials (1:4-4:1) were examined based on an L16 (4) 3 orthogonal array design. A height of 25 cm, an inner diameter of 5 cm, and an AC-to-MMWM ratio of 3:2, with sizes of 60-40 mesh and 0.075-0.045 mm, respectively, were the best conditions determined by ICP-OES analysis for the adsorption of heavy metals in this study.
Structural Optimization of Triboelectric Nanogenerator for Harvesting Water Wave Energy.
Jiang, Tao; Zhang, Li Min; Chen, Xiangyu; Han, Chang Bao; Tang, Wei; Zhang, Chi; Xu, Liang; Wang, Zhong Lin
2015-12-22
Ocean waves are one of the most abundant energy sources on earth, but harvesting such energy is rather challenging due to various limitations of current technologies. Recently, networks formed by triboelectric nanogenerators (TENGs) have been proposed as a promising technology for harvesting water wave energy. In this work, a basic unit for the TENG network was studied and optimized; it has a box structure whose walls are TENGs, each composed of a wavy-structured Cu-Kapton-Cu film and two FEP thin films, with a metal ball enclosed inside. By combining theoretical calculations and experimental studies, the output performance of the TENG unit was investigated for various structural parameters, such as the size, mass, or number of the metal balls. From the viewpoint of theory, the output characteristics of the TENG during its collision with the ball were numerically calculated by the finite element method and an interpolation method, and there exists an optimum ball size or mass that maximizes output power and electric energy. Moreover, the theoretical results were well verified by the experimental tests. The present work could provide guidance for the structural optimization of wavy-structured TENGs for effectively harvesting water wave energy toward the dream of large-scale blue energy.
Ramamoorthy, Ambika; Ramachandran, Rajeswari
2016-01-01
Power grids are becoming smarter along with technological development. The benefits of the smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies have been made to reconfigure a conventional network into a smart grid. Among all the renewable sources, solar power takes the prominent position due to its availability in abundance. The methodology proposed in this paper is aimed at minimizing network power losses and at improving voltage stability within the framework of system operation and security constraints in a transmission system. Locations and capacities of DGs have a significant impact on system losses in a transmission system. In this paper, combined nature-inspired algorithms are presented for optimal location and sizing of DGs. This paper proposes a two-step optimization technique in order to integrate DG. In the first step, the best size of DG is determined through PSO metaheuristics, and the results obtained through PSO are tested for reverse power flow by a negative load approach to find possible bus locations. Then, the optimal location is found by Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In the second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios with the number of DGs (3, 4, and 5) and PQ capacities of DGs (P alone, Q alone, and both P and Q) are also analyzed, and the results are discussed in this paper. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
NASA Astrophysics Data System (ADS)
Hsieh, Tsu-Pang; Cheng, Mei-Chuan; Dye, Chung-Yuan; Ouyang, Liang-Yuh
2011-01-01
In this article, we extend the classical economic production quantity (EPQ) model by proposing imperfect production processes and quality-dependent unit production cost. The demand rate is described by any convex decreasing function of the selling price. In addition, we allow for shortages and a time-proportional backlogging rate. For any given selling price, we first prove that the optimal production schedule not only exists but also is unique. Next, we show that the total profit per unit time is a concave function of price when the production schedule is given. We then provide a simple algorithm to find the optimal selling price and production schedule for the proposed model. Finally, we use a couple of numerical examples to illustrate the algorithm and conclude this article with suggestions for possible future research.
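For orientation, the classical EPQ baseline that the article extends: with demand rate D, production rate P > D, setup cost K, and holding cost h, the optimal lot is Q* = sqrt(2KD / (h(1 - D/P))). The numbers below are illustrative; the paper's model adds pricing, imperfect production, and time-proportional backlogging on top of this.

```python
import math

D, P, K, h = 4000.0, 10000.0, 200.0, 2.0     # assumed demand, production, setup, holding
Q_star = math.sqrt(2 * K * D / (h * (1 - D / P)))
cycles_per_year = D / Q_star
print(f"Q* = {Q_star:.0f} units per run, {cycles_per_year:.2f} runs per year")
```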
Wang, Jiaxi; Gronalt, Manfred; Sun, Yan
2017-01-01
Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.
What is the optimal architecture for visual information routing?
Wolfrum, Philipp; von der Malsburg, Christoph
2007-12-01
Analyzing the design of networks for visual information routing is an underconstrained problem due to insufficient anatomical and physiological data. We propose here optimality criteria for the design of routing networks. For a very general architecture, we derive the number of routing layers and the fanout that minimize the required neural circuitry. The optimal fanout l is independent of network size, while the number k of layers scales logarithmically (with a prefactor below 1), with the number n of visual resolution units to be routed independently. The results are found to agree with data of the primate visual system.
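A simplified reconstruction of the scaling argument (not the paper's full derivation): routing n resolution units through k layers of fanout-l switches requires l^k >= n, while the circuitry grows like k·l, which yields a constant optimal fanout and logarithmic depth.

```latex
\begin{align*}
  \text{minimize } C(k,l) &\propto k\,l
  \quad\text{subject to}\quad l^{k} = n \\
  \Rightarrow\; C(l) &\propto \frac{\ln n}{\ln l}\, l,
  \qquad \frac{dC}{dl} = 0 \;\Rightarrow\; \ln l = 1 \;\Rightarrow\; l = e .
\end{align*}
% The optimal fanout is independent of n; with the nearest integer l = 3,
% the depth is k = ln(n)/ln(3) ~ 0.91 ln(n): logarithmic with a prefactor
% below 1, consistent with the abstract.
```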
DOE R&D Accomplishments Database
Wigner, E. P.; Weinberg, A. M.; Stephenson, J.
1944-02-11
The multiplication constant and optimal concentration of a slurry pile is recalculated on the basis of Mitchell's experiments on resonance absorption. The smallest chain reacting unit contains 45 to 55 m³ of D₂O. (auth)
Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T
2016-06-01
Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
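A generic grid-search skeleton in the spirit of IOTA's per-image parameter scan; it is not IOTA's API. `try_index`, its parameters, and the candidate grids are hypothetical stand-ins for running spot finding plus indexing and returning a quality score.

```python
import itertools

def try_index(image, spot_area_min, spot_height_min):
    """Hypothetical hook: run spot finding + indexing with these parameters
    and return a quality score, or None if indexing fails."""
    ...  # the actual processing pipeline would be called here

def best_params(image, areas=(3, 6, 9, 12), heights=(2, 3, 4)):
    best = None
    for area, height in itertools.product(areas, heights):
        score = try_index(image, area, height)
        if score is not None and (best is None or score > best[0]):
            best = (score, area, height)
    return best  # (score, spot_area_min, spot_height_min) or None
```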
NASA Astrophysics Data System (ADS)
Ding, J.; Johnson, E. A.; Martin, Y. E.
2017-12-01
The leaf is the basic production unit of plants, and water is their most critical resource. Water availability controls the primary productivity of plants by affecting the leaf carbon budget. To avoid cavitation damage from the lowering of vein water potential caused by evapotranspiration, the leaf must increase stomatal resistance to reduce the evapotranspiration rate. This comes at the cost of a reduced carbon fixation rate, as increasing stomatal resistance also slows the carbon intake rate. Studies suggest that stomata operate at an optimal resistance that maximizes carbon gain with respect to water. Different plant species have different leaf shapes, a genetically determined trait. Furthermore, on the same plant, leaf size can vary many times over in relation to soil moisture, an indicator of water availability. According to metabolic scaling theory, increasing leaf size increases the total xylem resistance of the vein network, which may also constrain the leaf carbon budget. We present a Constrained Maximization Model of the leaf (leaf CMM) that incorporates metabolic theory into the coupling of evapotranspiration and carbon fixation to examine how leaf size, stomatal resistance and maximum net leaf primary productivity change with petiole xylem water potential. The model connects vein network structure to leaf shape and uses the difference between the petiole xylem water potential and the critical minor-vein cavitation-forming water potential as the budget. The CMM shows that both maximum net leaf primary production and optimal leaf size increase with petiole xylem water potential, while optimal stomatal resistance decreases. Narrow leaves have overall smaller optimal size, lower maximum net leaf carbon gain, and higher optimal stomatal resistance than broad leaves. This is because, with a small width-to-length ratio, total xylem resistance increases faster with leaf size, causing a higher average and marginal cost of xylem water potential with respect to net leaf carbon gain. For the same leaf area, the total xylem resistance of a narrow leaf is higher than that of a broad leaf; given the same stomatal resistance and petiole water potential, a narrow leaf will therefore lose more xylem water potential than a broad leaf. Consequently, narrow leaves have smaller optimal size and higher stomatal resistance at the optimum.
Meng, Dan; Falconer, James; Krauel-Goellner, Karen; Chen, John J J J; Farid, Mohammed; Alany, Raid G
2008-01-01
The purpose of this study was to design and build a supercritical CO(2) anti-solvent (SAS) unit and use it to produce microparticles of the class II drug carbamazepine. The operating conditions of the constructed unit affected the carbamazepine yield. Optimal conditions were: an organic solution flow rate of 0.15 mL/min, a CO(2) flow rate of 7.5 mL/min, a pressure of 4,200 psi, over 3,000 s and at 33 degrees C. The drug solid-state characteristics, morphology and size distribution were examined before and after processing using X-ray powder diffraction and differential scanning calorimetry, scanning electron microscopy, and laser diffraction particle size analysis, respectively. The in vitro dissolution of the treated particles was investigated and compared to that of untreated particles. Results revealed a change in the crystalline structure of carbamazepine, with different polymorphs co-existing under various operating conditions. Scanning electron micrographs showed a change in the crystal habit from prismatic into bundled whiskers, fibers and filaments. The volume-weighted diameter was reduced from 209 to 29 µm. Furthermore, the SAS CO(2) process yielded particles with significantly improved in vitro dissolution. Further research is needed to optimize the operating conditions of the self-built unit to maximize the production yield and produce a uniform polymorphic form of carbamazepine.
Design and optimization of membrane-type acoustic metamaterials
NASA Astrophysics Data System (ADS)
Blevins, Matthew Grant
One of the most common problems in noise control is the attenuation of low frequency noise. Typical solutions require barriers with high density and/or thickness. Membrane-type acoustic metamaterials are a novel type of engineered material capable of high low-frequency transmission loss despite their small thickness and light weight. These materials are ideally suited to applications with strict size and weight limitations such as aircraft, automobiles, and buildings. The transmission loss profile can be manipulated by changing the micro-level substructure, stacking multiple unit cells, or by creating multi-celled arrays. To date, analysis has focused primarily on experimental studies in plane-wave tubes and numerical modeling using finite element methods. These methods are inefficient when used for applications that require iterative changes to the structure of the material. To facilitate design and optimization of membrane-type acoustic metamaterials, computationally efficient dynamic models based on the impedance-mobility approach are proposed. Models of a single unit cell in a waveguide and in a baffle, a double layer of unit cells in a waveguide, and an array of unit cells in a baffle are studied. The accuracy of the models and the validity of assumptions used are verified using a finite element method. The remarkable computational efficiency of the impedance-mobility models compared to finite element methods enables implementation in design tools based on a graphical user interface and in optimization schemes. Genetic algorithms are used to optimize the unit cell design for a variety of noise reduction goals, including maximizing transmission loss for broadband, narrow-band, and tonal noise sources. The tools for design and optimization created in this work will enable rapid implementation of membrane-type acoustic metamaterials to solve real-world noise control problems.
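The GA-based unit cell tuning described above can be sketched with a toy fitness. The impedance expression and all constants below are illustrative placeholders, not the impedance-mobility model of this work; the sketch only shows a GA maximizing worst-case transmission loss over a band:

import numpy as np

rng = np.random.default_rng(0)
RHO_C = 420.0                       # air characteristic impedance (Pa*s/m)
BAND = np.linspace(100, 500, 41)    # target band (Hz)

def transmission_loss(m, k, f):
    w = 2 * np.pi * f
    z = np.abs(w * m - k / w)       # toy surface impedance magnitude
    return 20 * np.log10(1 + z / (2 * RHO_C))

def fitness(ind):
    m, k = ind
    return transmission_loss(m, k, BAND).min()    # worst-case TL in band

def evolve(pop_size=60, gens=100, bounds=((0.1, 5.0), (1e3, 1e7))):
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    pop = lo + rng.random((pop_size, 2)) * (hi - lo)
    for _ in range(gens):
        fit = np.array([fitness(p) for p in pop])
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])  # tournament
        mates = parents[rng.permutation(pop_size)]
        a = rng.random((pop_size, 1))
        children = a * parents + (1 - a) * mates          # blend crossover
        children += rng.normal(0, 0.02, children.shape) * (hi - lo)  # mutation
        pop = np.clip(children, lo, hi)
    best = max(pop, key=fitness)
    return best, fitness(best)

print(evolve())   # (mass, stiffness) maximizing worst-case TL in the band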
Optimizing the passenger air bag of an adaptive restraint system for multiple size occupants.
Bai, Zhonghao; Jiang, Binhui; Zhu, Feng; Cao, Libo
2014-01-01
The development of the adaptive occupant restraint system (AORS) has led to an innovative way to optimize such systems for multiple occupant sizes. An AORS consists of multiple units such as adaptive air bags, seat belts, etc. During a collision, as a supplemental protective device, air bags provide restraining force and play a role in dissipating the crash energy of the occupant's head and thorax. This article presents an investigation into an adaptive passenger air bag (PAB). The purpose of this study is to develop a base shape of a PAB for different occupant sizes using an optimization method. Four typical base shapes of a PAB were designed based on geometric data on the passenger side. Then 4 PAB finite element (FE) models and a validated sled with different-sized dummy models were developed in MADYMO (TNO, Rijswijk, The Netherlands) to conduct the optimization and obtain the best baseline PAB for use in the AORS. The objective functions, that is, the minimum total probability of injuries (∑Pcomb) of the 5th percentile female and 50th and 95th percentile male dummies, were adopted to evaluate the optimal configurations. The injury probability (Pcomb) for each dummy was adopted from the U.S. New Car Assessment Program (US-NCAP). The parameters of the AORS were first optimized for the different types of PAB base shapes in a frontal impact. Then, the contact time duration and force between the PAB and the dummy head/chest were optimized by adjusting the parameters of the PAB, such as the number and position of tethers, to lower the Pcomb of the 95th percentile male dummy. According to the optimization results, the 4 typical PABs could provide effective protection to the 5th and 50th percentile dummies. However, due to the heavy and large torsos of 95th percentile occupants, the current occupant restraint system does not demonstrate a satisfactory protective function, particularly for the thorax.
Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module
NASA Astrophysics Data System (ADS)
Deepak, SHARMA; Paritosh, CHAUDHURI
2018-04-01
The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D activities in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER-relevant and DEMO). The Indian Lead–Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs, to be tested in ITER as part of the TBM program. Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as the ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attention is given to the optimization of the first wall coolant channel design and the size of the breeder unit module, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on ITER-relevant TBM and loading conditions. These analyses will help in proceeding further with designing blankets for loads relevant to future fusion devices.
Oral controlled release optimization of pellets prepared by extrusion-spheronization processing.
Bianchini, R; Vecchio, C
1989-06-01
Controlled-release high-dosage forms of a typical drug, Indobufen, were prepared as multiple-unit doses by employing extrusion-spheronization processing followed by film coating operations. The effects of drug particle size, drug/binder ratio, extruder screen size and preparation reproducibility on the physical properties of the spherical granules were evaluated. Controlled-release optimization was carried out on the same granules by coating them with polymeric membranes of different thicknesses consisting of water-soluble and insoluble substances. The film coating was applied from an organic solution using a pan coating technique. Drug diffusion is enabled by dissolution of part of the membrane, which leaves small channels in the polymer coat. Further preparations were conducted to evaluate coatings applied from an aqueous dispersion (pseudolatex) using an air suspension coating technique. In this system the drug diffusion is governed by the intrinsic pore network of the membrane. The most promising preparations, having the desired in vitro release, were metered into hard capsules to obtain the drug unit dosage. Accelerated stability tests were carried out to assess the influence of time and other storage parameters on the drug release profile.
Performance, optimization, and latest development of the SRI family of rotary cryocoolers
NASA Astrophysics Data System (ADS)
Dovrtel, Klemen; Megušar, Franc
2017-05-01
In this paper the SRI family of Le-tehnika rotary cryocoolers is presented (SRI401, SRI423/SRI421 and SRI474). The cooling power of these Stirling coolers ranges from 0.25 W to 0.75 W at 77 K, with an available temperature range from 60 K to 150 K, and the units are fitted to typical dewar detector sizes and power supply voltages. The DDCA performance optimization procedure is presented; it includes cooler steady-state performance mapping and optimization as well as cooldown optimization. The current cryogenic performance status and the reliability evaluation method and figures are presented for the existing and new units. The latest improved SRI401 demonstrated an MTTF close to 25,000 hours, and the test is still ongoing.
Stand-alone hybrid wind-photovoltaic power generation systems optimal sizing
NASA Astrophysics Data System (ADS)
Crǎciunescu, Aurelian; Popescu, Claudia; Popescu, Mihai; Florea, Leonard Marin
2013-10-01
Wind and photovoltaic energy resources have attracted energy sectors to generate power on a large scale. A drawback common to these options is their unpredictable nature and dependence on time of day and meteorological conditions. Fortunately, the problems caused by the variable nature of these resources can be partially overcome by integrating the two resources in a proper combination, using the strengths of one source to overcome the weakness of the other. Hybrid systems that combine wind and solar generating units with battery backup can attenuate their individual fluctuations and match the power requirements of the beneficiaries. In order to utilize a hybrid energy system efficiently and economically, an optimal sizing method based on a proper match between the components is necessary. To this end, the literature offers a variety of methods for multi-objective optimal design of hybrid wind/photovoltaic (WG/PV) generating systems, among the most recent being genetic algorithms (GA) and particle swarm optimization (PSO). In this paper, mathematical models of the hybrid WG/PV components and a short description of recently proposed multi-objective optimization algorithms are given.
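A minimal PSO sizing sketch in the spirit of the methods surveyed here; the load profile, resource profiles, costs and penalty weight are invented placeholders:

import numpy as np

rng = np.random.default_rng(1)
HOURS = np.arange(24)
LOAD = 50 + 20 * np.sin(2 * np.pi * (HOURS - 18) / 24)      # kW, toy profile
WIND = np.clip(rng.normal(0.4, 0.2, 24), 0, 1)               # per-unit output
SUN = np.clip(np.sin(np.pi * (HOURS - 6) / 12), 0, None)     # per-unit output
COST = np.array([800.0, 600.0])                              # $/kW for WG, PV

def penalty(x):                      # unmet energy over the day
    wg, pv = x
    deficit = np.clip(LOAD - wg * WIND - pv * SUN, 0, None)
    return deficit.sum()

def objective(x):                    # capital cost + heavy unmet-load penalty
    return COST @ x + 1e3 * penalty(x)

def pso(n=40, iters=200, lo=0.0, hi=500.0):
    x = rng.uniform(lo, hi, (n, 2)); v = np.zeros((n, 2))
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, objective(g)

print(pso())   # (wind kW, PV kW) sizing that covers the toy load cheaply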
Exchange interaction in hexagonal MnRhP from first-principles studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X. B., E-mail: liuxubo@uta.edu; Zhang, Qiming; Ping Liu, J., E-mail: pliu@uta.edu
2014-05-07
Electronic structure and magnetic properties of MnRhP have been studied from a first-principles density functional calculation. The calculated lattice constants, a = 6.228 Å and c = 3.571 Å, are in good agreement with the experimental values of a = 6.223 Å and c = 3.585 Å. The calculated moment of Mn is 3.1 μ{sub B}/atom, resulting in a total moment of 3.0 μ{sub B}/atom due to small moments induced at the Rh and P sites. The magnetic moment of Mn decreases with unit cell size. The exchange interactions are dominated by positive Mn-Mn exchange coupling (J{sub Mn−Mn}), implying a stable ferromagnetic ordering in the Mn sublattice. In particular, J{sub Mn−Mn} shows a maximum value (1.5 mRy) at the optimized unit cell size. Structural distortion or a change of unit cell size will affect J{sub Mn−Mn}, which is intimately related to the magneto-elastic and magneto-caloric effects.
A proactive transfer policy for critical patient flow management.
González, Jaime; Ferrer, Juan-Carlos; Cataldo, Alejandro; Rojas, Luis
2018-02-17
Hospital emergency departments are often overcrowded, resulting in long wait times and a public perception of poor attention. Delays in transferring patients needing further treatment increase emergency department congestion, have negative impacts on the patients' health and may increase their mortality rates. A model built around a Markov decision process is proposed to improve the efficiency of patient flows between the emergency department and other hospital units. With each day divided into time periods, the formulation estimates bed demand for the next period as the basis for determining a proactive rather than reactive transfer decision policy. Due to the high dimensionality of the optimization problem involved, an approximate dynamic programming approach is used to derive an approximation of the optimal decision policy, which indicates that a certain number of beds should be kept free in the different units as a function of the next-period demand estimate. Testing the model on two instances of different sizes demonstrates that the optimal number of patient transfers between units changes when the emergency patient arrival rate for transfer to other units changes at a single unit, but remains stable if the change is proportionally the same for all units. In a simulation using real data for a hospital in Chile, significant improvements are achieved by the model in key emergency department performance indicators such as patient wait times (a reduction of more than 50%), patient capacity (a 21% increase) and queue abandonment (from 7% down to less than 1%).
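The flavor of the resulting policy (keep beds free to cover forecast demand) can be sketched in a few lines; the Poisson demand model and service-level target are illustrative assumptions, not the paper's calibrated MDP:

from scipy.stats import poisson

def beds_to_free(occupied, capacity, forecast_rate, service_level=0.9):
    """Proactive transfers out of a unit for the next period: free enough
    beds to cover the service_level quantile of forecast arrivals."""
    needed = int(poisson.ppf(service_level, forecast_rate))  # demand quantile
    free_now = capacity - occupied
    return max(0, needed - free_now)

# Example: 38 of 40 beds occupied, 5 expected arrivals next period
print(beds_to_free(occupied=38, capacity=40, forecast_rate=5))  # -> 6 transfers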
COLA: Optimizing Stream Processing Applications via Graph Partitioning
NASA Astrophysics Data System (ADS)
Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra
In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
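The fusion step can be sketched as recursive bisection of the operator graph. Below, networkx's Kernighan-Lin bisection stands in for the minimum-ratio-cut subroutine, and the "cost" and "traffic" attributes stand in for operator processing cost and inter-operator stream rates; none of this is the System S interface:

import networkx as nx

def fuse(graph, max_pe_cost):
    """Recursively bisect the operator graph until each part's total
    processing cost fits in one PE; returns a list of operator sets."""
    cost = sum(graph.nodes[n].get("cost", 1.0) for n in graph)
    if cost <= max_pe_cost or graph.number_of_nodes() <= 1:
        return [set(graph.nodes)]
    a, b = nx.algorithms.community.kernighan_lin_bisection(graph, weight="traffic")
    return (fuse(graph.subgraph(a).copy(), max_pe_cost) +
            fuse(graph.subgraph(b).copy(), max_pe_cost))

# Toy streaming job: a chain of operators with stream traffic on edges.
G = nx.path_graph(8)
nx.set_edge_attributes(G, 10.0, "traffic")
nx.set_node_attributes(G, 1.0, "cost")
print(fuse(G, max_pe_cost=3.0))   # operators fused into reasonably sized PEs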
NASA Astrophysics Data System (ADS)
Sankar Sana, Shib
2016-01-01
The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study the production lot size/order quantity, the reorder point and the sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that their optimum profits come close to their target profits. This study suggests how the management of firms can determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to provide more insights into the model.
Effect of heliostat size on the levelized cost of electricity for power towers
NASA Astrophysics Data System (ADS)
Pidaparthi, Arvind; Hoffmann, Jaap
2017-06-01
The objective of this study is to investigate the effects of heliostat size on the levelized cost of electricity (LCOE) for power tower plants. These effects are analyzed for a power tower with a net capacity of 100 MWe, 8 hours of thermal energy storage and a solar multiple of 1.8 in Upington, South Africa. A large, a medium and a small heliostat, with total areas of 115.56 m2, 43.3 m2 and 15.67 m2, respectively, are considered for comparison. A radial-staggered pattern and an external cylindrical receiver are considered for the heliostat field layouts. The optical performance of the optimized heliostat field layouts has been evaluated by the Hermite (analytical) method using SolarPILOT, a tool for the generation and optimization of heliostat field layouts. The heliostat cost per unit is calculated separately for the three heliostat sizes, including the effects of size scaling, learning curve benefits and the price index. The annual operation and maintenance (O&M) costs are estimated separately for the three heliostat fields, where the number of personnel required in the field is determined by the number of heliostats. The LCOE values are used as a figure of merit to compare the different heliostat sizes. The results, which include the economic and optical performance along with the annual O&M costs, indicate that the lowest LCOE values are achieved by the medium-size heliostat with an area of 43.3 m2 for this configuration. This study will help power tower developers determine the optimal heliostat size for power tower plants currently in the development stage.
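A back-of-envelope version of the figure of merit: LCOE from a capital recovery factor, with hypothetical field totals for the three heliostat sizes (the numbers are placeholders, not the study's data):

def lcoe(capex, om_per_year, annual_energy_mwh, rate=0.08, years=25):
    """Levelized cost of electricity, $/MWh, via a capital recovery factor."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + om_per_year) / annual_energy_mwh

# Hypothetical totals for small / medium / large heliostat fields:
for name, capex, om, energy in [("small", 2.6e8, 7.0e6, 4.4e5),
                                ("medium", 2.4e8, 6.0e6, 4.5e5),
                                ("large", 2.5e8, 5.5e6, 4.3e5)]:
    print(name, round(lcoe(capex, om, energy), 1), "$/MWh")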
NASA Astrophysics Data System (ADS)
Pando, V.; García-Laguna, J.; San-José, L. A.
2012-11-01
In this article, we integrate a non-linear holding cost with a stock-dependent demand rate in a model maximising profit per unit time, extending several inventory models studied by other authors. After giving the mathematical formulation of the inventory system, we prove the existence and uniqueness of the optimal policy. Relying on this result, we can obtain the optimal solution using different numerical algorithms. Moreover, we provide a necessary and sufficient condition to determine whether a system is profitable, and we establish a rule to check whether a given order quantity is the optimal lot size of the inventory model. The results are illustrated through numerical examples, and the sensitivity of the optimal solution with respect to changes in some parameter values is assessed.
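A numerical illustration of such a lot-size search; the demand and holding-cost forms below are invented toy functions, not the authors' model:

from scipy.optimize import minimize_scalar

p, c, K = 10.0, 4.0, 50.0        # unit price, unit cost, ordering cost

def demand_rate(stock):          # stock-dependent demand, toy form
    return 20.0 + 2.0 * stock ** 0.5

def profit_per_unit_time(q):
    # approximate a cycle with average stock q/2; holding cost non-linear in q
    d = demand_rate(q / 2.0)
    cycle = q / d                               # cycle length
    holding = 0.5 * (q / 2.0) ** 1.3            # non-linear holding cost rate
    return (p - c) * d - K / cycle - holding

res = minimize_scalar(lambda q: -profit_per_unit_time(q),
                      bounds=(1, 500), method="bounded")
print(res.x, -res.fun)           # optimal lot size and profit rate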
Jan, Show-Li; Shieh, Gwowen
2016-08-31
The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for potential heterogeneity of the variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization techniques and a screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
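The proposed screening-search idea can be sketched directly: compute Welch-Satterthwaite power and scan allocations for the cheapest one meeting a power target. The effect size, standard deviations and unit costs below are illustrative:

import numpy as np
from scipy import stats

def welch_power(n1, n2, delta, s1, s2, alpha=0.05):
    se2 = s1**2 / n1 + s2**2 / n2
    df = se2**2 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))
    nc = delta / np.sqrt(se2)                   # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(tcrit, df, nc)) + stats.nct.cdf(-tcrit, df, nc)

def cheapest(delta, s1, s2, c1, c2, target=0.8, nmax=200):
    best = None
    for n1 in range(2, nmax):
        for n2 in range(2, nmax):
            if welch_power(n1, n2, delta, s1, s2) >= target:
                cost = c1 * n1 + c2 * n2
                if best is None or cost < best[0]:
                    best = (cost, n1, n2)
                break    # larger n2 only costs more at this n1
    return best

print(cheapest(delta=0.5, s1=1.0, s2=2.0, c1=1.0, c2=4.0))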
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of central processing units (CPUs). In this paper, an accelerating approach with graphics processing units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased by 62% with respect to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
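The operator-splitting structure that exposes the parallelism can be shown on the CPU with NumPy; a two-variable FitzHugh-Nagumo cell stands in for the detailed SANC/atrial models, and the constants are illustrative:

import numpy as np

N, DT, D = 530, 0.05, 1.0            # cells (SAN + atrium), time step, coupling
v = np.full(N, -1.2)                 # resting membrane variable
w = np.full(N, -0.625)               # resting recovery variable
v[:30] = 1.0                         # stimulate one end

def reaction(v, w, dt):              # local kinetics: one GPU thread per cell
    dv = v - v**3 / 3 - w
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return v + dt * dv, w + dt * dw

def diffusion(v, dt):                # explicit 1-D Laplacian coupling
    lap = np.roll(v, 1) - 2 * v + np.roll(v, -1)
    lap[0] = v[1] - v[0]
    lap[-1] = v[-2] - v[-1]          # no-flux boundaries
    return v + dt * D * lap

for step in range(20000):            # split each step: reaction, then diffusion
    v, w = reaction(v, w, DT)
    v = diffusion(v, DT)
print(v[-5:])                        # excitation front reaches the far end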
Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E.; Patel, Bhargav A.; Ambudkar, Suresh V.; Talele, Tanaji T.
2014-01-01
Multidrug resistance (MDR) caused by ATP-binding cassette (ABC) transporter P-glycoprotein (P-gp) through extrusion of anticancer drugs from the cells is a major cause of failure of cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors, and these were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate 2-3 moderately sized molecules at the drug-binding pocket. Our in silico analysis based on the homology model of human P-gp spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug-binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affect its activity, previously an unexplored concept. Among these oligomers, the lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most, and equally, potent inhibitors of human P-gp (IC50 = 1.5 μM). The cyclic trimer and linear trimer being equipotent, future studies can focus on non-cyclic counterparts of cyclic peptides maintaining the linear trimer length. A binding model of the linear trimer (13) within the drug-binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing the valine and thiazole groups in the non-cyclic form. PMID:24288265
Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E; Patel, Bhargav A; Ambudkar, Suresh V; Talele, Tanaji T
2014-01-03
Multidrug resistance caused by ATP binding cassette transporter P-glycoprotein (P-gp) through extrusion of anticancer drugs from the cells is a major cause of failure in cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors and were also used for co-crystallization with mouse P-gp, which has 87 % homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate two to three moderately sized molecules at the drug binding pocket. Our in silico analysis, based on the homology model of human P-gp, spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity, and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affects its activity, previously an unexplored concept. Among these oligomers, lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most and equally potent inhibitors of human P-gp (IC50 =1.5 μM). As the cyclic trimer and linear trimer compounds are equipotent, future studies should focus on noncyclic counterparts of cyclic peptides maintaining linear trimer length. A binding model of the linear trimer 13 within the drug binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the noncyclic form.
OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.
Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui
2017-08-07
We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively modulated OFDM-PON systems. By using the proposed scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between the OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of the ONUs. First, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme, including phase rotation modulation (PRM) and a length-adaptive OFDM frame, is experimentally demonstrated in the downlink transmission of adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that an SFO of up to ±300 ppm can be successfully compensated without introducing any receiver performance penalties.
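The first-order SFO effect and its centralized de-rotation can be sketched as follows; the linear-phase model is the standard textbook SFO approximation, not the paper's full PRM/length-adaptive scheme:

import numpy as np

N, NCP = 64, 16                  # FFT size and cyclic prefix length
XI = 200e-6                      # sampling frequency offset (200 ppm)

def sfo_phase(symbol_idx, subcarriers):
    # accumulated linear phase: grows with symbol index and subcarrier index
    return 2 * np.pi * XI * (N + NCP) / N * symbol_idx * subcarriers

k = np.arange(-N // 2, N // 2)               # subcarrier indices
rng = np.random.default_rng(7)
tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK symbols

m = 40                                        # 40th OFDM symbol
rx = tx * np.exp(1j * sfo_phase(m, k))        # received: rotated by SFO
rx_comp = rx * np.exp(-1j * sfo_phase(m, k))  # OLT-side de-rotation
print(np.abs(rx - tx).max(), np.abs(rx_comp - tx).max())  # error before/after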
NASA Astrophysics Data System (ADS)
Shankar Kumar, Ravi; Goswami, A.
2015-06-01
The article scrutinises the learning effect of the unit production time on the optimal lot size for an uncertain and imprecise imperfect production process, wherein shortages are permissible and partially backlogged. Contextually, we contemplate the fuzzy chance of the production process shifting from an 'in-control' state to an 'out-of-control' state and a re-work facility for the imperfect-quality items produced. The elapsed time until the process shifts is considered a fuzzy random variable, and consequently, the fuzzy random total cost per unit time is derived. Fuzzy expectation and the signed distance method are used to transform the fuzzy random cost function into an equivalent crisp function. The results are illustrated with the help of a numerical example. Finally, a sensitivity analysis of the optimal solution with respect to the major parameters is carried out.
NASA Technical Reports Server (NTRS)
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
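A scalar red-black (two-color) sweep on a 1-D model problem shows the color-by-color update order whose parallelism the GPU kernels exploit; in the paper the points are small dense blocks rather than scalars:

import numpy as np

n = 100
b = np.ones(n)                               # right-hand side
x = np.zeros(n)                              # initial guess
diag = 2.0                                   # A = tridiag(-1, 2, -1)

def sweep(x, color):
    # update every point of one color at once: this is the parallel work
    idx = np.arange(color, n, 2)
    left = np.where(idx > 0, x[np.clip(idx - 1, 0, n - 1)], 0.0)
    right = np.where(idx < n - 1, x[np.clip(idx + 1, 0, n - 1)], 0.0)
    x[idx] = (b[idx] + left + right) / diag  # point-implicit solve per point
    return x

for it in range(5000):
    x = sweep(x, 0)                          # all "red" points in parallel
    x = sweep(x, 1)                          # all "black" points in parallel

Ax = 2 * x - np.concatenate(([0.0], x[:-1])) - np.concatenate((x[1:], [0.0]))
print(np.linalg.norm(b - Ax))                # residual shrinks as sweeps repeat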
Zhang, Dongjing; Zhang, Meichun; Wu, Yu; Gilles, Jeremie R L; Yamada, Hanano; Wu, Zhongdao; Xi, Zhiyong; Zheng, Xiaoying
2017-11-13
Standardized larval rearing units for mosquito production are essential for the establishment of a mass-rearing facility. Two larval rearing units, developed respectively by the Guangzhou Wolbaki Biotech Co. Ltd. (Wolbaki) and the Insect Pest Control Laboratory, Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture (FAO/IAEA-IPCL), were tested to assess their potential use for mass-rearing the larval stages of Aedes albopictus in support of the establishment of a medium-scale mosquito facility for the application of mosquito genetic control strategies. The triple Wolbachia-infected Ae. albopictus strain (HC strain) was used in this study. The effects of larval density in the two larval rearing trays (corresponding to 2.4, 3.0 and 3.6 larvae/cm²) and of tray size/position (top, middle and bottom layers) on pupae production and larval survival were assessed when trays were stacked within the larval rearing units. Male pupae production, female pupae contamination after sex separation, and male mating competitiveness were also studied using both larval rearing units in their entirety. The optimal larval rearing density was 6,600 larvae (equal to 3.0 larvae/cm²) for the Wolbaki tray (Wol-tray) and 18,000 larvae (3.6 larvae/cm²) for the FAO/IAEA-IPCL tray (IAEA-tray). No significant difference in pupae production was observed when trays were stacked within the top, middle or bottom layers for either unit. Thirty-four hours after the first pupation, the average male pupae production was 0.89 × 10⁵ for the Wol-unit and 3.16 × 10⁵ for the IAEA-unit. No significant difference was observed in female pupae contamination between the two units. The HC males showed mating competitiveness equal to that of wild-type males for mating with wild-type females in large cages, regardless of whether they were reared in the Wol-unit or the IAEA-unit. The current study indicates that both the Wol-unit and the IAEA-unit are suitable for larval mass-rearing of Ae. albopictus. However, the IAEA-unit, with higher male production and less space required compared to the Wol-unit, is recommended for use in support of the establishment of a medium-sized mosquito facility.
Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool
NASA Astrophysics Data System (ADS)
Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.
2015-03-01
Matching soil grid unit resolution with polygon unit map scale is important to minimize the uncertainty of regional soil organic carbon (SOC) pool simulation, given their strong influence on that uncertainty. A series of soil grid units at varying cell sizes was derived from soil polygon units at the six map scales of 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), respectively, in the Tai lake region of China. Soil units in both formats were used for regional SOC pool simulation with the DeNitrification-DeComposition (DNDC) process-based model, with runs spanning the period 1982 to 2000 at each of the six map scales. Four indices, namely the soil type number (STN) and area (AREA), and the average SOC density (ASOCD) and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were attributed from all these soil polygon and grid units, respectively. Relative to the four index values (IV) from the parent polygon units, the variation of an index value (VIV, %) from the grid units was used to assess the accuracy and redundancy of the grid dataset, which reflects the uncertainty in the simulation of SOC. Optimal soil grid unit resolutions were generated and suggested for DNDC simulation of the regional SOC pool, matching the soil polygon unit map scales, respectively. With the optimal raster resolution, the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without any redundancy, when VIV < 1% for all four indices is assumed as the criterion. A quadratic regression model y = −8.0 × 10⁻⁶x² + 0.228x + 0.211 (R² = 0.9994, p < 0.05) was revealed, which describes the relationship between the optimal soil grid unit resolution (y, km) and the soil polygon unit map scale (1:x). This knowledge may serve for grid partitioning of regions in investigations and simulations of SOC pool dynamics at a given map scale.
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters like the window size often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon fluxes, and the method needs to be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear response coefficient, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
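The underlying window arithmetic is plain center of gravity; the stream estimator floats this window with the incoming pixels, but a static sketch conveys the computation (synthetic spot, illustrative threshold):

import numpy as np

def cog(window, threshold=0.0):
    """Center of gravity of a 2-D window after background thresholding."""
    w = np.clip(window - threshold, 0, None)
    total = w.sum()
    ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
    return (xs * w).sum() / total, (ys * w).sum() / total

# Synthetic Gaussian spot centered at (4.3, 5.6) in a 10x10 window, plus noise
ys, xs = np.mgrid[0:10, 0:10]
spot = np.exp(-((xs - 4.3)**2 + (ys - 5.6)**2) / 2.0)
spot += np.random.default_rng(3).normal(0, 0.01, spot.shape)
print(cog(spot, threshold=0.02))   # close to (4.3, 5.6)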
Estimation method for serial dilution experiments.
Ben-David, Avishai; Davidson, Charles E
2014-12-01
Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, both of which contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 on data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10⁴ and 10¹² colony-forming units, dilution ratios from 2 to 100, and plate-size to colony-size ratios between 6.25 and 200.
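A simplified version of the plate-selection idea: predict expected counts per dilution and score how countable each plate is, penalizing colony crowding. The scoring rule is a toy, not the authors' likelihood model:

import numpy as np

def expected_counts(c0, dilution_factor, n_plates, plated_volume_ml):
    """Expected colonies per plate for each dilution in the series."""
    return np.array([c0 * plated_volume_ml / dilution_factor**k
                     for k in range(1, n_plates + 1)])

def best_plate(counts, plate_area_cm2, colony_area_cm2, target_fill=0.1):
    # prefer plates whose total colony area is near a countable fraction
    fill = counts * colony_area_cm2 / plate_area_cm2
    return int(np.argmin(np.abs(fill - target_fill)))

counts = expected_counts(c0=1e8, dilution_factor=10, n_plates=8,
                         plated_volume_ml=0.1)
k = best_plate(counts, plate_area_cm2=57, colony_area_cm2=0.2)
print(k + 1, counts[k])   # dilution number to count and its expected colonies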
Conceptual design of the 6 MW Mod-5A wind turbine generator
NASA Technical Reports Server (NTRS)
Barton, R. S.; Lucas, W. C.
1982-01-01
The General Electric Company, Advanced Energy Programs Department, is designing under DOE/NASA sponsorship the MOD-5A wind turbine system, which must generate electricity for 3.75 cents/kWh (1980) or less. During the Conceptual Design Phase, completed in March 1981, the MOD-5A WTG system size and features were established as a result of tradeoff and optimization studies driven by minimizing the system cost of energy (COE). This led to a 400-ft rotor diameter. The resulting MOD-5A system is defined in this paper along with the operational and environmental factors that drive various portions of the design. The development of weight and cost estimating relationships (WCERs) and their use in optimizing the MOD-5A are discussed. The results of major tradeoff studies are also presented. Subsystem COE contributions for the 100th unit are shown along with the method of computation. Detailed descriptions of the major subsystems are given, so that the results of the various trade and optimization studies can be more readily visualized.
3D Biomimetic Magnetic Structures for Static Magnetic Field Stimulation of Osteogenesis.
Paun, Irina Alexandra; Popescu, Roxana Cristina; Calin, Bogdan Stefanita; Mustaciosu, Cosmin Catalin; Dinescu, Maria; Luculescu, Catalin Romeo
2018-02-07
We designed, fabricated and optimized 3D biomimetic magnetic structures that stimulate osteogenesis in static magnetic fields. The structures were fabricated by direct laser writing via two-photon polymerization of IP-L780 photopolymer and were based on ellipsoidal, hexagonal units organized in a multilayered architecture. The magnetic activity of the structures was assured by coating with a thin layer of a collagen-chitosan-hydroxyapatite-magnetic nanoparticle composite. In vitro experiments using MG-63 osteoblast-like cells on 3D structures with gradients of pore size helped us to find an optimum pore size between 20 and 40 µm. Starting from the optimized 3D structures, we evaluated both qualitatively and quantitatively the effects of static magnetic fields of up to 250 mT on cell proliferation and differentiation, by ALP (alkaline phosphatase) production, Alizarin Red and osteocalcin secretion measurements. We demonstrated that the synergistic effect of 3D structure optimization and static magnetic stimulation enhances bone regeneration by a factor greater than 2 as compared with the same structure in the absence of a magnetic field.
3D Biomimetic Magnetic Structures for Static Magnetic Field Stimulation of Osteogenesis
Paun, Irina Alexandra; Popescu, Roxana Cristina; Calin, Bogdan Stefanita; Mustaciosu, Cosmin Catalin; Dinescu, Maria; Luculescu, Catalin Romeo
2018-01-01
We designed, fabricated and optimized 3D biomimetic magnetic structures that stimulate osteogenesis in static magnetic fields. The structures were fabricated by direct laser writing via two-photon polymerization of IP-L780 photopolymer and were based on ellipsoidal, hexagonal units organized in a multilayered architecture. The magnetic activity of the structures was assured by coating with a thin layer of a collagen-chitosan-hydroxyapatite-magnetic nanoparticle composite. In vitro experiments using MG-63 osteoblast-like cells on 3D structures with gradients of pore size helped us to find an optimum pore size between 20 and 40 µm. Starting from the optimized 3D structures, we evaluated both qualitatively and quantitatively the effects of static magnetic fields of up to 250 mT on cell proliferation and differentiation, by ALP (alkaline phosphatase) production, Alizarin Red and osteocalcin secretion measurements. We demonstrated that the synergistic effect of 3D structure optimization and static magnetic stimulation enhances bone regeneration by a factor greater than 2 as compared with the same structure in the absence of a magnetic field. PMID:29414875
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Ding, X; Syh, J
2015-06-15
Purpose: An approved proton pencil beam scanning (PBS) treatment plan might not be deliverable because of extremely low monitor units (MU) per beam spot. A hybrid plan combining the efficiency of higher spot MU with the efficacy of fewer energy layers was searched for and optimized. The range of MU threshold settings was investigated, and the plan quality was evaluated by target dose conformity. Methods: Certain limitations and requirements need to be checked and tested before a nominal proton PBS treatment plan can be delivered. The plan needs to meet the machine characterization and the specifications in the record-and-verification system to deliver the beams. A minimal MU threshold per spot, e.g. 0.02, was set to filter out the low counts, and the plan was re-computed. Further MU threshold increments were tested in sequence without sacrificing plan quality. The number of energy layers was also altered due to the elimination of low-count layer(s). Results: The minimal MU/spot threshold, the spot spacing in each energy layer, the total number of energy layers and the MU weighting of the beam spots of each beam were evaluated. The plan optimization traded off increases of the spot MU (efficiency) against fewer energy layers to deliver (efficacy). A 5% weighting limit of the total monitor units per beam was feasible. Sparse spreading of beam spots was not discouraging as long as target dose conformity stayed within the 3% criterion. Conclusion: Each spot size is equivalent to the relative dose in the beam delivery system. The energy layer is associated with the depth of the targeted tumor. Our work is crucial to maintaining the best possible plan quality. Keeping the integrity of all intrinsic elements, such as spot size, spot number, layer number and the weighting of spots in each layer, is important in this study.
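The filtering step is easy to sketch: zero out spots under the deliverable minimum, renormalize each beam to preserve its total MU, and drop energy layers left empty. The array layout and threshold below are illustrative:

import numpy as np

def filter_spots(mu, threshold=0.02):
    """mu: (layers, spots) array of spot monitor units for one beam."""
    total = mu.sum()
    kept = np.where(mu >= threshold, mu, 0.0)
    kept *= total / kept.sum()        # preserve the beam's total MU
    # the scale factor is > 1, so no surviving spot falls back below threshold
    nonempty = kept.sum(axis=1) > 0   # drop now-empty energy layers
    return kept[nonempty]

rng = np.random.default_rng(5)
mu = rng.exponential(0.05, size=(20, 30))    # many low-MU spots
plan = filter_spots(mu)
print(mu.shape[0], "->", plan.shape[0], "layers; min spot MU",
      plan[plan > 0].min().round(3))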
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest.
State Models to Incentivize and Streamline Small Hydropower Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Taylor; Levine, Aaron; Johnson, Kurt
In 2016, the hydropower fleet in the United States produced more than 6 percent (approximately 265,829 gigawatt-hours [GWh]) of the total net electricity generation. The median-size hydroelectric facility in the United States is 1.6 MW, and 75 percent of all facilities have a nameplate capacity of 10 MW or less. Moreover, the U.S. Department of Energy's Hydropower Vision study identified approximately 79 GW of hydroelectric potential beyond what is already developed. Much of the potential identified is at low-impact new stream-reaches, existing conduits, and non-powered dams with a median project size of 10 MW or less. To optimize the potential and value of small hydropower development, state governments are crafting policies that provide financial assistance and expedite state and federal review processes for small hydroelectric projects. This report analyzes state-led initiatives and programs that incentivize and streamline small hydroelectric development.
NASA Astrophysics Data System (ADS)
Yoo, S.; Zeng, X. C.
2006-05-01
We performed a constrained search for the geometries of low-lying neutral germanium clusters GeN in the size range 21⩽N⩽29. The basin-hopping global optimization method is employed for the search. The potential-energy surface is computed based on plane-wave pseudopotential density functional theory. A new series of low-lying clusters is found on the basis of several generic structural motifs identified previously for silicon clusters [S. Yoo and X. C. Zeng, J. Chem. Phys. 124, 054304 (2006)] as well as for smaller germanium clusters [S. Bulusu et al., J. Chem. Phys. 122, 164305 (2005)]. Among the generic motifs examined, we found that two motifs stand out in producing most low-lying clusters, namely, the six/nine motif, a puckered-hexagonal-ring Ge6 unit attached to a tricapped trigonal prism Ge9, and the six/ten motif, a puckered-hexagonal-ring Ge6 unit attached to a bicapped antiprism Ge10. The low-lying clusters obtained are all prolate in shape and their energies are appreciably lower than those of near-spherical low-energy clusters. This result is consistent with ion-mobility measurements in that the medium-sized germanium clusters detected are all prolate in shape up to size N ∼ 65.
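Basin hopping itself is readily demonstrated with SciPy; a Lennard-Jones cluster stands in for the plane-wave DFT potential-energy surface, which cannot be reproduced in a few lines:

import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat):
    """Lennard-Jones energy of an N-atom cluster (flat 3N coordinates)."""
    pos = flat.reshape(-1, 3)
    e = 0.0
    for i in range(len(pos)):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        e += np.sum(4.0 * (r**-12 - r**-6))
    return e

rng = np.random.default_rng(2)
x0 = rng.random(7 * 3) * 2.0               # 7 atoms, random start
res = basinhopping(lj_energy, x0, niter=200,
                   minimizer_kwargs={"method": "L-BFGS-B"}, seed=2)
print(res.fun)   # often reaches the known LJ7 global minimum, about -16.505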
NASA Astrophysics Data System (ADS)
Armstrong, Michael James
Increases in power demands and changes in the design practices of overall equipment manufacturers have led to a new paradigm in vehicle systems definition. The development of unique power systems architectures is of increasing importance to overall platform feasibility and must be pursued early in the aircraft design process. Many vehicle systems architecture trades must be conducted concurrent to platform definition. With the increased complexity introduced during conceptual design, accurate predictions of unit-level sizing requirements must be made. Architecture-specific emergent requirements must be identified which arise due to the complex integrated effect of unit behaviors. Off-nominal operating scenarios present sizing-critical requirements to the aircraft vehicle systems. These requirements are architecture specific and emergent. Standard heuristically defined failure mitigation is sufficient for sizing traditional and evolutionary architectures. However, architecture concepts which vary significantly in terms of structure and composition require that unique failure mitigation strategies be defined for accurate estimation of unit-level requirements. Identifying these off-nominal emergent operational requirements requires extensions to traditional safety and reliability tools and the systematic identification of optimal performance degradation strategies. The discrete operational constraints posed by traditional Functional Hazard Assessment (FHA) are replaced by continuous relationships between function loss and operational hazard. These relationships pose the objective function for hazard minimization. Load shedding optimization is performed for all statistically significant failures by varying the allocation of functional capability throughout the vehicle systems architecture. Expressing hazards, and thereby reliability requirements, as continuous relationships with the magnitude and duration of functional failure requires augmentations to the traditional means of system safety assessment (SSA). The traditional two-state, discrete system reliability assessment proves insufficient. Reliability is, therefore, handled in an analog fashion: as a function of magnitude of failure and failure duration. A series of metrics is introduced which characterize system performance in terms of analog hazard probabilities. These include analog and cumulative system and functional risk, hazard correlation, and extensions to the traditional component importance metrics. Continuous FHA, load shedding optimization, and analog SSA constitute the SONOMA process (Systematic Off-Nominal Requirements Analysis). Analog system safety metrics inform both architecture optimization (changes in unit-level capability and reliability) and architecture augmentation (changes in architecture structure and composition). This process was applied to two vehicle systems concepts (conventional and 'more-electric') in terms of loss/hazard relationships with varying degrees of fidelity. Application of this process shows that the traditional assumptions regarding the structure of the function loss vs. hazard relationship apply undue design bias to functions and components during exploratory design. This bias is illustrated in terms of inaccurate estimations of the system- and function-level risk and unit-level importance. It was also shown that off-nominal emergent requirements must be defined specific to each architecture concept.
Quantitative comparisons of architecture-specific off-nominal performance were obtained, which provide evidence of the need for accurate definition of load shedding strategies during architecture exploratory design. Formally expressing performance degradation strategies in terms of the minimization of a continuous hazard space enhances the system architect's ability to accurately predict sizing-critical emergent requirements concurrent to architecture definition. Furthermore, the methods and frameworks generated here provide a structured and flexible means for eliciting these architecture-specific requirements during the performance of architecture trades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosard, D.D.; Steltz, W.G.
1986-10-01
Properly sized turbine and boiler bypass systems permit two-shift cycling operation of units, shorten start-up time, and reduce life expenditure of plant components. With bypasses installed, faster startups can reduce fuel costs by $100,000 per year for a typical 500-MW fossil-fired unit. This report discusses the technical characteristics of existing bypass systems and provides guidelines for sizing bypass systems to achieve economical and reliable two-shift operation. The collection and analysis of startup data from several generating units were used in conjunction with computer simulations to illustrate the effects of adding various arrangements and sizes of steam bypass systems. The report, which indicates that shutdown procedures have a significant impact on subsequent startup and loading time, describes operating practices to optimize the effectiveness of bypass systems. To determine the effectiveness of large turbine bypass systems of less than 100% capacity in preventing boiler trips following load rejection, transient field data were compared to a load rejection simulation using the modular modeling system (MMS). The MMS was then used to predict system response to other levels of load rejection. 7 refs., 87 figs., 8 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using proton pencil beam scanning. Methods: Intensity modulated radiation therapy using proton pencil beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single-objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (graphics processing unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using proton pencil beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)
NASA Astrophysics Data System (ADS)
Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.
2016-05-01
This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (graphics processing units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
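One representative stage of such a chain is FFT-based pulse compression. The NumPy version below is a CPU reference; an equivalent GPU array library could run the same arithmetic for verification against the CPU, as the abstract describes (all parameters are illustrative):

import numpy as np

fs, T, B = 1e6, 100e-6, 200e3                    # sample rate, pulse, bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)      # LFM pulse

rx = np.zeros(4096, dtype=complex)
rx[1200:1200 + chirp.size] += 0.5 * chirp        # echo buried at sample 1200
rx += np.random.default_rng(4).normal(0, 0.1, rx.size)

n = rx.size
H = np.conj(np.fft.fft(chirp, n))                # matched filter in frequency
y = np.fft.ifft(np.fft.fft(rx) * H)              # circular cross-correlation
print(np.abs(y).argmax())                        # ~1200: compressed peak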
Phase transitions in restricted Boltzmann machines with generic priors
NASA Astrophysics Data System (ADS)
Barra, Adriano; Genovese, Giuseppe; Sollich, Peter; Tantari, Daniele
2017-10-01
We study generalized restricted Boltzmann machines with generic priors for units and weights, interpolating between Boolean and Gaussian variables. We present a complete analysis of the replica symmetric phase diagram of these systems, which can be regarded as generalized Hopfield models. We underline the role of the retrieval phase for both inference and learning processes and we show that retrieval is robust for a large class of weight and unit priors, beyond the standard Hopfield scenario. Furthermore, we show how the paramagnetic phase boundary is directly related to the optimal size of the training set necessary for good generalization in a teacher-student scenario of unsupervised learning.
Power Converters Maximize Outputs Of Solar Cell Strings
NASA Technical Reports Server (NTRS)
Frederick, Martin E.; Jermakian, Joel B.
1993-01-01
Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. Main points of invention are: single controller used to control and optimize any number of "dumb" tracker units and strings independently; power maximized out of converters; and controller in system is microprocessor.
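The tracking logic such a controller performs is typically a perturb-and-observe loop; the PV curve below is a toy polynomial, not a real cell model, and the abstract does not specify the exact algorithm used:

def pv_power(v):
    """Toy P-V curve: current sags as voltage rises past the knee."""
    return max(0.0, v * (8.0 - 0.4 * v**1.6))

def track(v=5.0, step=0.1, iters=200):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step        # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:               # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

print(track())   # settles near the maximum of the toy P-V curve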
Francese, Joseph A; Rietz, Michael L; Mastro, Victor C
2013-12-01
Field assays were conducted in southeastern and south-central Michigan in 2011 and 2012 to optimize green and purple multifunnel (Lindgren funnel) traps for use as a survey tool for the emerald ash borer, Agrilus planipennis Fairmaire. Larger-sized (12- and 16-unit) multifunnel traps caught more beetles than their smaller-sized (4- and 8-unit) counterparts. Green traps coated with untinted (white) fluon caught almost four times as many adult A. planipennis as Rain-X and tinted (green) fluon-coated traps and almost 33 times more beetles than untreated control traps. Purple multifunnel traps generally caught much lower numbers of A. planipennis adults than green traps, and trap catch on them was not affected by differences in the type of coating applied. However, trap coating was necessary, as untreated control purple traps caught significantly fewer beetles than traps treated with Rain-X and untinted or tinted (purple) fluon. Proportions of male beetles captured were generally much higher on green traps than on purple traps, but sex ratios were not affected by trap coating. In 2012, a new shade of purple plastic, based on a better color match to an attractive purple paint than the previously used purple, was used for trapping assays. When multifunnel traps were treated with fluon, green traps caught more A. planipennis adults than both shades of purple and a prism trap that was manufactured based on the same color match. Trap catch was not affected by diluting the fluon concentration applied to traps to 50% (1:1 mixture in water). At 10%, trap catch was significantly lowered.
NASA Astrophysics Data System (ADS)
Barnawi, Abdulwasa Bakr
Hybrid power generation systems and distributed generation technology are attracting more investment due to the growing demand for energy and the increasing awareness of emissions and their environmental impacts, such as global warming and pollution. The price fluctuation of crude oil is an additional reason for the leading oil-producing countries to consider renewable resources as an alternative. Saudi Arabia, the top oil-exporting country in the world, announced the "Saudi Arabia Vision 2030", which targets generating 9.5 GW of electricity from renewable resources. Two of the most promising renewable technologies are wind turbines (WT) and photovoltaic cells (PV). The integration or hybridization of photovoltaics and wind turbines with battery storage leads to higher adequacy and redundancy for both autonomous and grid-connected systems. This study presents a method for optimal generation unit planning by installing a proper number of solar cells, wind turbines, and batteries in such a way that the net present value (NPV) is minimized while the overall system redundancy and adequacy are maximized. A new renewable fraction technique (RFT) is used to perform the generation unit planning. RFT was tested and validated against particle swarm optimization and HOMER Pro under the same conditions and environment. Renewable resource and load randomness and uncertainties are considered. Both autonomous and grid-connected system designs were adopted in the optimal generation unit planning process. An uncertainty factor was designed and incorporated in both autonomous and grid-connected system designs. In the autonomous hybrid system design model, a strategy including an additional amount of operating reserve as a percentage of the hourly load was considered to deal with resource uncertainty, since the battery storage system is the only backup. In the grid-connected hybrid system design model, demand response was incorporated to overcome the impact of uncertainty and to perform energy trading between the hybrid grid utility and the main grid utility, in addition to the designed uncertainty factor. After the generation unit planning was carried out and component sizing was determined, adequacy evaluation was conducted by calculating the loss-of-load-expectation adequacy index for different contingency criteria, considering the probability of equipment failure. Finally, microgrid planning was conducted by finding the proper size and location to install distributed generation units in a radial distribution network.
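To make the generation-unit-planning idea concrete, here is a deliberately tiny enumeration over candidate PV/wind/battery counts that minimizes net present cost subject to a crude adequacy rule. Every number and the adequacy rule are invented placeholders; this is not the RFT method itself.

```python
# Toy sizing sketch: enumerate unit counts, keep the cheapest adequate mix.
# All yields, costs and the battery rule below are assumptions.
import itertools

LOAD_KWH = 50_000                                # annual load to cover
YIELD = {"pv": 1_600, "wt": 4_000}               # kWh/yr per unit (assumed)
COST = {"pv": 1_200, "wt": 9_000, "batt": 400}   # net present cost per unit

def npv_cost(n_pv, n_wt, n_batt):
    return n_pv * COST["pv"] + n_wt * COST["wt"] + n_batt * COST["batt"]

best = None
for n_pv, n_wt, n_batt in itertools.product(range(40), range(15), range(60)):
    energy = n_pv * YIELD["pv"] + n_wt * YIELD["wt"]
    if energy >= LOAD_KWH and n_batt >= 0.2 * n_pv:   # crude adequacy rule
        cand = (npv_cost(n_pv, n_wt, n_batt), n_pv, n_wt, n_batt)
        best = min(best, cand) if best else cand
print(best)   # (cost, n_pv, n_wt, n_batt) of the cheapest adequate design
```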
Dynamic resource allocation in conservation planning
Golovin, D.; Krause, A.; Gardner, B.; Converse, S.J.; Morey, S.
2011-01-01
Consider the problem of protecting endangered species by selecting patches of land to be used for conservation purposes. Typically, the availability of patches changes over time, and recommendations must be made dynamically. This is a challenging prototypical example of a sequential optimization problem under uncertainty in computational sustainability. Existing techniques do not scale to problems of realistic size. In this paper, we develop an efficient algorithm for adaptively making recommendations for dynamic conservation planning, and prove that it obtains near-optimal performance. We further evaluate our approach on a detailed reserve design case study of conservation planning for three rare species in the Pacific Northwest of the United States. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
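A static, simplified cousin of this problem illustrates why greedy methods come with near-optimality guarantees: species coverage is monotone submodular, so greedy patch selection achieves at least a (1 - 1/e) fraction of the optimum. The sketch below uses invented data and is not the paper's dynamic algorithm.

```python
# Greedy reserve selection on a toy coverage instance (data invented).
patches = {                # patch -> set of species it protects
    "A": {1, 2}, "B": {2, 3, 4}, "C": {4, 5}, "D": {1, 5, 6}, "E": {6},
}

def greedy_select(budget):
    chosen, covered = [], set()
    for _ in range(budget):
        # pick the patch with the largest marginal species gain
        best = max(patches.keys() - set(chosen),
                   key=lambda p: len(patches[p] - covered))
        chosen.append(best)
        covered |= patches[best]
    return chosen, covered

print(greedy_select(2))    # picks B and D (either order), covering all 6 species
```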
The X-IFU end-to-end simulations performed for the TES array optimization exercise
NASA Astrophysics Data System (ADS)
Peille, Philippe; Wilms, J.; Brand, T.; Cobo, B.; Ceballos, M. T.; Dauser, T.; Smith, S. J.; Barret, D.; den Herder, J. W.; Piro, L.; Barcons, X.; Pointecouteau, E.; Bandler, S.; den Hartog, R.; de Plaa, J.
2015-09-01
The focal plane assembly of the Athena X-ray Integral Field Unit (X-IFU) includes as the baseline an array of ~4000 single size calorimeters based on Transition Edge Sensors (TES). Other sensor array configurations could however be considered, combining TES of different properties (e.g. size). In attempting to improve the X-IFU performance in terms of field of view, count rate performance, and even spectral resolution, two alternative TES array configurations to the baseline have been simulated, each combining a small and a large pixel array. With the X-IFU end-to-end simulator, a sub-sample of the Athena core science goals, selected by the X-IFU science team as potentially driving the optimal TES array configuration, has been simulated for the results to be scientifically assessed and compared. In this contribution, we will describe the simulation set-up for the various array configurations, and highlight some of the results of the test cases simulated.
Berthaume, Michael A.; Dumont, Elizabeth R.; Godfrey, Laurie R.; Grosse, Ian R.
2014-01-01
Teeth are often assumed to be optimal for their function, which allows researchers to derive dietary signatures from tooth shape. Most tooth shape analyses normalize for tooth size, potentially masking the relationship between relative food item size and tooth shape. Here, we model how relative food item size may affect optimal tooth cusp radius of curvature (RoC) during the fracture of brittle food items using a parametric finite-element (FE) model of a four-cusped molar. Morphospaces were created for four different food item sizes by altering cusp RoCs to determine whether optimal tooth shape changed as food item size changed. The morphospaces were also used to investigate whether variation in efficiency metrics (i.e. stresses, energy and optimality) changed as food item size changed. We found that optimal tooth shape changed as food item size changed, but that all optimal morphologies were similar, with one dull cusp that promoted high stresses in the food item and three cusps that acted to stabilize the food item. There were also positive relationships between food item size and the coefficients of variation for stresses in food item and optimality, and negative relationships between food item size and the coefficients of variation for stresses in the enamel and strain energy absorbed by the food item. These results suggest that relative food item size may play a role in selecting for optimal tooth shape, and the magnitude of these selective forces may change depending on food item size and which efficiency metric is being selected. PMID:25320068
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumuluru, Jaya Shankar; McCulloch, Richard Chet James
In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of greatest benefit is exactly opposite to the previous direction of greatest benefit, then the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
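A minimal sketch of the adaptive steepest-ascent step described above, applied to a toy concave objective; names, tolerances, and the objective are ours, and the pelleting objective functions are not reproduced.

```python
# Adaptive steepest-ascent hill climbing: perturb each variable, take the best
# move, and halve the step when the best direction reverses (as described).
import numpy as np

def adaptive_ascent(f, x, step=0.5, tol=1e-6, max_iter=500):
    prev_dir = None
    for _ in range(max_iter):
        gains = []
        for i in range(len(x)):                 # perturb each variable
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                gains.append((f(trial) - f(x), i, sign))
        gain, i, sign = max(gains)
        if gain <= 0:
            step /= 2                           # no uphill move: shrink step
            if step < tol:
                break
            continue
        if prev_dir == (i, -sign):
            step /= 2                           # direction reversed: halve step
        x[i] += sign * step
        prev_dir = (i, sign)
    return x

peak = adaptive_ascent(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2,
                       np.array([5.0, 5.0]))
print(peak)                                     # approaches the maximum at (1, -2)
```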
Damage modeling of small-scale experiments on dental enamel with hierarchical microstructure.
Scheider, I; Xiao, T; Yilmaz, E; Schneider, G A; Huber, N; Bargmann, S
2015-03-01
Dental enamel is a highly anisotropic and heterogeneous material, which exhibits optimal reliability with respect to the various loads occurring over years. In this work, enamel's microstructure of parallel aligned rods of mineral fibers is modeled and mechanical properties are evaluated in terms of strength and toughness with the help of a multiscale modeling method. The established model is validated by comparison with stress-strain curves from microcantilever beam experiments on specimens extracted from these rods. Moreover, in order to gain further insight into the damage-tolerant behavior of enamel, the size of crystallites below which the structure becomes insensitive to flaws is studied with a microstructural finite element model. The assumption regarding the fiber strength is verified by a numerical study showing agreement between the fiber size and the flaw-tolerance size, and the debonding strength is estimated by optimizing the failure behavior of the microstructure at the hierarchical level above the individual fibers. Based on these well-grounded properties, the material behavior is predicted well by homogenization of a representative unit cell including damage, taking imperfections (like microcracks in the present case) into account. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Optimal Operation of Energy Storage in Power Transmission and Distribution
NASA Astrophysics Data System (ADS)
Akhavan Hejazi, Seyed Hossein
In this thesis, we investigate optimal operation of energy storage units in power transmission and distribution grids. At the transmission level, we investigate the problem where an investor-owned, independently operated energy storage system seeks to offer energy and ancillary services in the day-ahead and real-time markets. We specifically consider the case where a significant portion of the power generated in the grid is from renewable energy resources and there exists significant uncertainty in system operation. In this regard, we formulate a stochastic programming framework to choose optimal energy and reserve bids for the storage units that takes into account the fluctuating nature of the market prices due to the randomness in the renewable power generation availability. At the distribution level, we develop a comprehensive data set to model various stochastic factors on power distribution networks, with focus on networks that have high penetration of electric vehicle charging load and distributed renewable generation. Furthermore, we develop a data-driven stochastic model for energy storage operation at the distribution level, where the distributions of nodal voltage and line power flow are modeled as stochastic functions of the energy storage unit's charge and discharge schedules. In particular, we develop new closed-form stochastic models for such key operational parameters in the system. Our approach is analytical and allows formulating tractable optimization problems. Yet, it does not involve any restricting assumption on the distribution of random parameters; hence, it results in accurate modeling of uncertainties. By considering the specific characteristics of random variables, such as their statistical dependencies and often irregularly-shaped probability distributions, we propose a non-parametric chance-constrained optimization approach to operate and plan energy storage units in power distribution grids. In the proposed stochastic optimization, we consider uncertainty from various elements, such as solar photovoltaics, electric vehicle chargers, and residential baseloads, in the form of discrete probability functions. In the last part of this thesis we address some other resources and concepts for enhancing the operation of power distribution and transmission systems. In particular, we propose a new framework to determine the best sites, sizes, and optimal payment incentives under special contracts for committed-type DG projects to offset distribution network investment costs. In this framework, the aim is to allocate DGs such that the profit gained by the distribution company is maximized while each DG unit's individual profit is also taken into account to assure that private DG investment remains economical.
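The non-parametric flavor of such chance constraints can be illustrated directly on samples: instead of assuming a distribution, one checks the empirical violation frequency. A toy sketch follows, with synthetic load samples and an assumed feeder limit; it is not the thesis' formulation.

```python
# Hedged illustration of a non-parametric chance constraint: choose the largest
# storage charging power whose empirical violation probability stays below 5%.
import numpy as np

rng = np.random.default_rng(1)
base_load = rng.gamma(shape=9.0, scale=30.0, size=10_000)  # kW samples (synthetic)
FEEDER_LIMIT = 500.0                                       # kW (assumed)

def admissible(p_charge, eps=0.05):
    violations = np.mean(base_load + p_charge > FEEDER_LIMIT)
    return violations <= eps

p = max(p for p in np.arange(0.0, 200.0, 1.0) if admissible(p))
print(f"max charging power with 95% feeder-limit confidence: {p:.0f} kW")
```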
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
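To convey the optimization step, here is a toy genetic-algorithm loop that evolves a radial tungsten-thickness profile so the transmitted beam approaches a rectangular profile. The beam model, attenuation coefficient, and GA settings are all invented; the Monte Carlo and film-derived kernels of the actual work are not modeled.

```python
# Toy GA: evolve thickness-vs-radius so the attenuated profile is flat.
import numpy as np

rng = np.random.default_rng(2)
r = np.linspace(0, 1, 32)                  # normalized radius across the cone
open_beam = np.exp(-2.0 * r ** 2)          # assumed unmodulated beam profile
target = np.ones_like(r)                   # ideal rectangular profile
MU = 3.0                                   # assumed effective attenuation (1/cm)

def fitness(t):                            # t: tungsten thickness vs. radius
    prof = open_beam * np.exp(-MU * t)
    prof = prof / prof.max()
    return -np.sum((prof - target) ** 2)   # closer to flat = higher fitness

pop = rng.uniform(0, 1, size=(40, r.size))
for _ in range(200):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-20:]]              # select the fittest
    cut = rng.integers(1, r.size, size=20)
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 20][c:]]
                     for i, c in enumerate(cut)])        # one-point crossover
    kids += rng.normal(0, 0.02, kids.shape)              # mutation
    pop = np.vstack([parents, np.clip(kids, 0, None)])
print(f"best fitness: {max(fitness(t) for t in pop):.4f}")
```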
Optimal Design of Wireless Power Transmission Links for Millimeter-Sized Biomedical Implants.
Ahn, Dukju; Ghovanloo, Maysam
2016-02-01
This paper presents a design methodology for RF power transmission to millimeter-sized implantable biomedical devices. The optimal operating frequency and coil geometries are found such that power transfer efficiency (PTE) and tissue-loss-constrained allowed power are maximized. We define receiver power reception susceptibility (Rx-PRS) and transmitter figure of merit (Tx-FoM) such that their multiplication yields the PTE. Rx-PRS and Tx-FoM define the roles of the Rx and Tx in the PTE, respectively. First, the optimal Rx coil geometry and operating frequency range are identified such that the Rx-PRS is maximized for given implant constraints. Since the Rx is very small and has less design freedom than the Tx, the overall operating frequency is restricted mainly by the Rx. Rx-PRS identifies such an operating frequency constraint imposed by the Rx. Second, the Tx coil geometry is selected such that the Tx-FoM is maximized under the frequency constraint at which the Rx-PRS was saturated. This aligns the target frequency range of Tx optimization with the frequency range at which Rx performance is high, resulting in the maximum PTE. Finally, we have found that even in the frequency range at which the PTE is relatively flat, the tissue loss per unit delivered power can be significantly different for each frequency. The Rx-PRS can predict the frequency range at which the tissue loss per unit delivered power is minimized while the PTE is maintained high. In this way, frequency adjustment for the PTE and tissue-loss-constrained allowed power is realized by characterizing the Rx-PRS. The design procedure was verified through full-wave electromagnetic field simulations and measurements using a de-embedding method. A prototype implant, 1 mm in diameter, achieved a PTE of 0.56% (−22.5 dB) and power delivered to the load (PDL) of 224 μW at 200 MHz with 12 mm Tx-to-Rx separation in the tissue environment.
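For reference, the maximum link efficiency of a two-coil inductive link at the optimal load is the standard expression below, written in generic textbook notation rather than the paper's Rx-PRS/Tx-FoM decomposition, where k is the coil coupling coefficient and Q1, Q2 are the Tx and Rx quality factors:

```latex
\eta_{\max} \;=\; \frac{k^{2} Q_{1} Q_{2}}{\left(1 + \sqrt{1 + k^{2} Q_{1} Q_{2}}\right)^{2}} .
```

The paper's factorization can be read as splitting such a figure of merit into an Rx-controlled part and a Tx-controlled part so each can be optimized in turn.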
Augusto, Elisabeth F P; Moraes, Angela M; Piccoli, Rosane A M; Barral, Manuel F; Suazo, Cláudio A T; Tonso, Aldo; Pereira, Carlos A
2010-01-01
Studies of bioprocess optimization and monitoring for protein synthesis in animal cells face the challenge of expressing system performance in quantitative terms. A panel of calculated variables can fit the intended goal more or less appropriately, and each mathematical expression captures different quantitative aspects. These variables fall into two basic categories: those used to evaluate cell physiology in terms of product synthesis, serving bioprocess improvement or optimization, and those used for production unit sizing and bioprocess operation. With these perspectives, and based on our own kinetic data on S2 cell growth and metabolism, as well as on their synthesis of the transmembrane recombinant rabies virus glycoprotein (here indicated as P), we show and discuss the main characteristics of the calculated variables and their recommended uses: mainly for bioprocess improvement/optimization, or mainly for defining operation and designing the production unit. We expect these definitions and recommendations to improve the quality of data produced in this field and lead to more standardized procedures, in turn allowing better and easier comprehension of scientific and technological communications for specialized readers. Copyright 2009 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.
Development of a prototype regeneration carbon dioxide absorber. [for use in EVA conditions
NASA Technical Reports Server (NTRS)
Patel, P. S.; Baker, B. S.
1977-01-01
A prototype regenerable carbon dioxide absorber was developed to maintain the environmental quality of the portable life support system. The absorber works on the alkali metal carbonate-bicarbonate solid-gas reaction to remove carbon dioxide from the atmosphere. The prototype sorber module was designed, fabricated, and tested at simulated extravehicular activity conditions to arrive at optimum design. The unit maintains sorber outlet concentration below 5 mm Hg. An optimization study was made with respect to heat transfer, temperature control, sorbent utilization, sorber life and regenerability, and final size of the module. Important parameters influencing the capacity of the final absorber unit were identified and recommendations for improvement were made.
Holmes, W J M; Timmons, M J; Kauser, S
2015-10-01
Techniques used to estimate implant size for primary breast augmentation have evolved since the 1970s. Currently no consensus exists on the optimal method to select implant size for primary breast augmentation. In 2013 we asked United Kingdom consultant plastic surgeons who were full members of BAPRAS or BAAPS what was their technique for implant size selection for primary aesthetic breast augmentation. We also asked what was the range of implant sizes they commonly used. The answers to question one were grouped into four categories: experience, measurements, pre-operative external sizers and intra-operative sizers. The response rate was 46% (164/358). Overall, 95% (153/159) of all respondents performed some form of pre-operative assessment, the others relied on "experience" only. The most common technique for pre-operative assessment was by external sizers (74%). Measurements were used by 57% of respondents and 3% used intra-operative sizers only. A combination of measurements and sizers was used by 34% of respondents. The most common measurements were breast base (68%), breast tissue compliance (19%), breast height (15%), and chest diameter (9%). The median implant size commonly used in primary breast augmentation was 300cc. Pre-operative external sizers are the most common technique used by UK consultant plastic surgeons to select implant size for primary breast augmentation. We discuss the above findings in relation to the evolution of pre-operative planning techniques for breast augmentation. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife
NASA Astrophysics Data System (ADS)
Kearney, Vasant; Cheung, Joey P.; McGuinness, Christopher; Solberg, Timothy D.
2017-07-01
The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a 4 step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated, for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans are evaluated using conformity index, heterogeneity index, local confined normalized-mutual-information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.
CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife.
Kearney, Vasant; Cheung, Joey P; McGuinness, Christopher; Solberg, Timothy D
2017-06-26
The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a 4 step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated, for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans are evaluated using conformity index, heterogeneity index, local confined normalized-mutual-information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning
Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron
2014-01-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474
Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.
Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron
2014-05-01
We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.
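The cache-locality intuition behind this approach can be miniaturized as follows: bucket configurations into grid cells so that a nearest-neighbor query touches only a small, contiguous working set. This toy sketch is illustrative only and is not the CARRT* subdivision or rewiring machinery.

```python
# Grid-bucketed approximate nearest neighbor: queries scan only the 3x3 cell
# neighborhood, keeping the working set small (and cache-friendly in spirit).
import numpy as np
from collections import defaultdict

CELL = 0.1
buckets = defaultdict(list)              # cell index -> list of points

def insert(p):
    buckets[tuple((p // CELL).astype(int))].append(p)

def nearest(q):
    cq = (q // CELL).astype(int)
    best, best_d = None, np.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for p in buckets.get((cq[0] + dx, cq[1] + dy), ()):
                d = np.sum((p - q) ** 2)
                if d < best_d:
                    best, best_d = p, d
    return best

rng = np.random.default_rng(3)
for pt in rng.uniform(0, 1, size=(5000, 2)):
    insert(pt)
print(nearest(np.array([0.5, 0.5])))
```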
Continued Water-Based Phase Change Material Heat Exchanger Development
NASA Technical Reports Server (NTRS)
Hansen, Scott W.; Sheth, Rubik B.; Poynot, Joe; Giglio, Tony; Ungar, Gene K.
2015-01-01
In a cyclical heat load environment such as low Lunar orbit, a spacecraft's radiators are not sized to meet the full heat rejection demands. Traditionally, a supplemental heat rejection device (SHReD) such as an evaporator or sublimator is used as a "topper" to meet the additional heat rejection demands. Utilizing a Phase Change Material (PCM) heat exchanger (HX) as a SHReD provides an attractive alternative to evaporators and sublimators, as PCM HX's do not use a consumable, thereby leading to reduced launch mass and volume requirements. In continued pursuit of water PCM HX development, two full-scale, Orion-sized water-based PCM HX's were constructed by Mezzo Technologies. These HX's were designed by applying prior research on freeze front propagation to a full-scale design. Design options considered included bladder restraint and clamping mechanisms, bladder manufacturing, tube patterns, fill/drain methods, manifold dimensions, weight optimization, and midplate designs. Two units, Units A and B, were constructed and differed only in their midplate design. Both units failed multiple times during testing. This report highlights learning outcomes from these tests, which are applied to a final sub-scale PCM HX slated to be tested on the ISS in early 2017.
NASA Astrophysics Data System (ADS)
Mosby, Matthew; Matouš, Karel
2015-12-01
Three-dimensional simulations capable of resolving the large range of spatial scales, from the failure-zone thickness up to the size of the representative unit cell, in damage mechanics problems of particle reinforced adhesives are presented. We show that resolving this wide range of scales in complex three-dimensional heterogeneous morphologies is essential in order to apprehend fracture characteristics, such as strength, fracture toughness and shape of the softening profile. Moreover, we show that computations that resolve essential physical length scales capture the particle size-effect in fracture toughness, for example. In the vein of image-based computational materials science, we construct statistically optimal unit cells containing hundreds to thousands of particles. We show that these statistically representative unit cells are capable of capturing the first- and second-order probability functions of a given data-source with better accuracy than traditional inclusion packing techniques. In order to accomplish these large computations, we use a parallel multiscale cohesive formulation and extend it to finite strains including damage mechanics. The high-performance parallel computational framework is executed on up to 1024 processing cores. A mesh convergence and a representative unit cell study are performed. Quantifying the complex damage patterns in simulations consisting of tens of millions of computational cells and millions of highly nonlinear equations requires data-mining the parallel simulations, and we propose two damage metrics to quantify the damage patterns. A detailed study of volume fraction and filler size on the macroscopic traction-separation response of heterogeneous adhesives is presented.
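One of the second-order descriptors mentioned above, the two-point probability function S2, can be checked cheaply by FFT autocorrelation of the phase indicator. The sketch below does this on a synthetic disc microstructure; the authors' unit-cell reconstruction algorithm itself is not shown.

```python
# Two-point probability function S2 of a binary image via FFT autocorrelation
# (periodic boundary conditions; microstructure is synthetic).
import numpy as np

rng = np.random.default_rng(4)
img = np.zeros((256, 256))
y, x = np.ogrid[:256, :256]
for cx, cy in rng.integers(0, 256, size=(80, 2)):        # drop random discs
    img[(x - cx) ** 2 + (y - cy) ** 2 <= 6 ** 2] = 1.0

spec = np.fft.fft2(img)
s2 = np.fft.ifft2(spec * np.conj(spec)).real / img.size  # periodic S2 map
print(f"volume fraction S2(0) = {s2[0, 0]:.3f}")          # equals phase fraction
```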
Petrini, Carlo
2014-01-01
The procedures for collecting voluntarily and freely donated umbilical cord blood (UCB) units and processing them for use in transplants are extremely costly, and the capital flows thus generated form part of an increasingly pervasive global bioeconomy. To place the issue in perspective, this article first examines the different types of UCB biobank, the organization of international registries of public UCB biobanks, the optimal size of national inventories, and the possibility of obtaining commercial products from donated units. The fees generally applied for the acquisition of UCB units for transplantation are then discussed, and some considerations are proposed regarding the social and ethical implications raised by the international network for the importation and exportation of UCB, with a particular emphasis on the globalized bioeconomy of UCB and its commerciality or lack thereof. PMID:24971040
[Advances in research on automatic exposure control of mammography system].
Wang, Guoyi; Ye, Chengfu; Wu, Haiming; Wang, Tainfu; Zhang, Hong
2014-12-01
Mammography imaging is one of the most demanding imaging modalities from the point of view of the balance between image quality (the visibility of small-size and/or low-contrast structures) and dose (screening of many asymptomatic people). Therefore, since the introduction of the first dedicated mammographic units, many efforts have been directed at seeking the best possible image quality while minimizing patient dose. The performance of automatic exposure control (AEC) is the manifestation of this demand. The theory of AEC covers exposure detection and optimization and involves established methodology. This review presents the development and present situation of spectrum optimization, detector evolution, and the ways to implement and evaluate AEC methods.
Energetic constraints, size gradients, and size limits in benthic marine invertebrates.
Sebens, Kenneth P
2002-08-01
Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
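A worked toy version of the net-energy argument: let intake scale with surface area and cost with mass; the net energy then peaks at a finite optimal size. All exponents and coefficients below are illustrative assumptions, not values from the paper.

```python
# Energetic optimal size: intake ~ a*S^0.67 (surface-limited feeding) minus
# cost ~ b*S (mass-proportional metabolism) is maximized at a finite S.
import numpy as np

a, b = 10.0, 1.0
S = np.linspace(0.01, 1000.0, 100_000)
net = a * S ** 0.67 - b * S                  # energy intake minus cost
S_opt = S[np.argmax(net)]
S_analytic = (0.67 * a / b) ** (1 / 0.33)    # from d(net)/dS = 0
print(S_opt, S_analytic)                      # both near 319
```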
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorman, A; Seabrook, G; Brakken, A
Purpose: Small surgical devices and needles are used in many surgical procedures. Conventionally, an x-ray film is taken to identify missing devices/needles if the post-procedure count is incorrect. There are no data to indicate the smallest surgical devices/needles that can be identified with digital radiography (DR), or its optimized acquisition technique. Methods: In this study, the DR equipment used is a Canon RadPro mobile with a CXDI-70c wireless DR plate, and the same DR plate on a fixed Siemens Multix unit. Small surgical devices and needles tested include Rubber Shod, Bulldog, Fogarty Hydrogrip, and needles with sizes 3-0 C-T1 through 8-0 BV175-6. They are imaged with PMMA block phantoms with thickness of 2–8 inch, and an abdomen phantom. Various DR techniques are used. Images are reviewed on the portable x-ray acquisition display, a clinical workstation, and a diagnostic workstation. Results: All small surgical devices and needles are visible in portable DR images with 2–8 inch of PMMA. However, when they are imaged with the abdomen phantom plus 2 inch of PMMA, needles smaller than 9.3 mm length cannot be visualized at the optimized technique of 81 kV and 16 mAs. There is no significant difference in visualization with various techniques, or between the mobile and fixed radiography unit. However, there is a noticeable difference in visualizing the smallest needle on a diagnostic reading workstation compared to the acquisition display on a portable x-ray unit. Conclusion: DR images should be reviewed on a diagnostic reading workstation. Using optimized DR techniques, the smallest needle that can be identified in all phantom studies is 9.3 mm. Sample DR images of various small surgical devices/needles available on a diagnostic workstation for comparison may improve their identification. Further in vivo study is needed to confirm the optimized digital radiography technique for identification of lost small surgical devices and needles.
NASA Astrophysics Data System (ADS)
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-16
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
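For readers unfamiliar with the optimizer, a generic particle swarm loop on a toy objective looks like the sketch below (2-D sphere function; the clinical fluence-map objective, Eclipse coupling, and GPU parallelization are not represented).

```python
# Minimal particle swarm optimization on a toy objective (all settings assumed).
import numpy as np

rng = np.random.default_rng(5)

def objective(x):                    # sphere function: minimum at the origin
    return np.sum(x ** 2, axis=-1)

n, dim, w, c1, c2 = 50, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(25):                  # 25 iterations, mirroring the paper's count
    r1, r2 = rng.uniform(size=(2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = objective(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest, objective(gbest))       # converges toward [0, 0]
```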
GPU-based ultra-fast dose calculation using a finite size pencil beam model.
Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B
2009-10-21
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
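The data-parallel core being accelerated here is, schematically, a large beamlet-to-voxel superposition: dose = D·w with a precomputed dose-deposition matrix D. Below is a NumPy stand-in with invented sizes and random sparsity; the FSPB kernel itself is not modeled.

```python
# Schematic dose superposition: one big matrix-vector product, which is what
# maps so well onto a GPU (matrix entries here are random placeholders).
import numpy as np

n_voxels, n_beamlets = 20_000, 200
rng = np.random.default_rng(6)
# D[i, j]: dose to voxel i per unit weight of beamlet j (sparse in practice)
D = rng.random((n_voxels, n_beamlets)) * (rng.random((n_voxels, n_beamlets)) > 0.99)
w = rng.random(n_beamlets)            # beamlet weights from the optimizer
dose = D @ w                          # the parallel superposition step
print(dose.shape, dose.max())
```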
NASA Astrophysics Data System (ADS)
Lu, Siqi; Wang, Xiaorong; Wu, Junyong
2018-01-01
The paper presents a method to generate planning scenarios, based on a data-driven K-means clustering algorithm, for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV and the voltage offset as objectives, and the locations and sizes of distributed PV units as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected according to different planning emphases after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
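A minimal TOPSIS ranking, as named above, on three invented candidate plans scored against three benefit-type criteria (weights assumed; cost-type criteria would be sign-flipped beforehand):

```python
# TOPSIS: normalize, weight, then rank by closeness to the ideal solution.
import numpy as np

M = np.array([[0.8, 0.6, 0.9],         # plan 1 (criteria values invented)
              [0.7, 0.9, 0.5],         # plan 2
              [0.9, 0.5, 0.7]])        # plan 3
w = np.array([0.5, 0.3, 0.2])          # criteria weights (assumed)

V = M / np.linalg.norm(M, axis=0) * w  # vector-normalize, then weight
ideal, worst = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - worst, axis=1)
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(closeness)[::-1])     # plans ranked best to worst
```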
Bio-inspired Murray materials for mass transfer and activity
NASA Astrophysics Data System (ADS)
Zheng, Xianfeng; Shen, Guofang; Wang, Chao; Li, Yu; Dunphy, Darren; Hasan, Tawfique; Brinker, C. Jeffrey; Su, Bao-Lian
2017-04-01
Both plants and animals possess analogous tissues containing hierarchical networks of pores, with pore size ratios that have evolved to maximize mass transport and rates of reactions. The underlying physical principles of this optimized hierarchical design are embodied in Murray's law. However, we are yet to realize the benefit of mimicking nature's Murray networks in synthetic materials due to the challenges in fabricating vascularized structures. Here we emulate optimum natural systems following Murray's law using a bottom-up approach. Such bio-inspired materials, whose pore sizes decrease across multiple scales and finally terminate in size-invariant units like plant stems, leaf veins and vascular and respiratory systems provide hierarchical branching and precise diameter ratios for connecting multi-scale pores from macro to micro levels. Our Murray material mimics enable highly enhanced mass exchange and transfer in liquid-solid, gas-solid and electrochemical reactions and exhibit enhanced performance in photocatalysis, gas sensing and as Li-ion battery electrodes.
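Murray's law itself is the cubic conservation rule below: for a parent channel of radius r0 branching into N daughter channels, the cubes of the radii are conserved, which minimizes transport cost at fixed volume.

```latex
r_{0}^{3} \;=\; \sum_{i=1}^{N} r_{i}^{3}
```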
NASA Astrophysics Data System (ADS)
Chen, Wei; Xu, Yue; Zhang, Huaxin; Liu, Peng; Jiao, Guohua
2016-09-01
Laser scanners are critical components in material processing systems, such as welding, cutting, and drilling. To achieve high-accuracy processing, the laser spot size should be small and uniform over the entire objective flat field. However, the traditional static focusing method using an F-theta objective lens is limited by a narrow flat field. To overcome these limitations, a dynamic focusing unit consisting of two lenses is presented in this paper. The dual-lens system has a movable plano-concave lens and a fixed convex lens. As the location of the movable optical element is changed, the focal length is shifted to keep a small focus spot over a broad flat processing field. The optical parameters of the two elements are theoretically analyzed. The spot size is calculated to obtain the relationship between the travel of the first lens and the shifted focal length of the system. Also, a Zemax model of the optical system is built to verify the theoretical design and optimize the optical parameters. The proposed lenses were manufactured and a test system was built to investigate their performance. The experimental results show the spot size is smaller than 450 μm over the entire 500×500 mm² field with a CO2 laser. Compared with other dynamic focusing units, this design has fewer lenses and no intermediate focal spot in the optical path. In addition, the focal length changes minimally with the shift of the incident laser beam.
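The thin-lens combination rule underlying such a unit: two lenses of focal lengths f1 (movable) and f2 (fixed) separated by a distance d have the effective focal length below, so translating the first lens (changing d) shifts the system focus. This is the generic paraxial relation, not the authors' full Zemax design.

```latex
\frac{1}{f_{\mathrm{eff}}} \;=\; \frac{1}{f_{1}} + \frac{1}{f_{2}} - \frac{d}{f_{1} f_{2}}
```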
Taisova, A S; Yakovlev, A G; Fetisova, Z G
2014-03-01
This work continues a series of studies devoted to discovering principles of organization of natural antennas in photosynthetic microorganisms that generate in vivo large and highly effective light-harvesting structures. The largest antenna is observed in green photosynthesizing bacteria, which are able to grow over a wide range of light intensities and adapt to low intensities by increasing the size of the peripheral BChl c/d/e antenna. However, increasing antenna size must inevitably cause structural changes needed to maintain high efficiency of its functioning. Our model calculations have demonstrated that aggregation of the light-harvesting antenna pigments represents one of the universal structural factors that optimize functioning of any antenna and manage antenna efficiency. If the degree of aggregation of antenna pigments is a variable parameter, then efficiency of the antenna increases with increasing size of a single aggregate of the antenna. This means that a change in the degree of pigment aggregation controlled by light-harvesting antenna size is biologically expedient. We showed in our previous work on the oligomeric chlorosomal BChl c superantenna of green bacteria of the Chloroflexaceae family that this principle of optimization of variable antenna structure, whose size is controlled by light intensity during growth of bacteria, is actually realized in vivo. Studies of this phenomenon are continued in the present work, expanding the number of studied biological materials and investigating optical linear and nonlinear spectra of chlorosomes having different structures. We show for oligomeric chlorosomal superantennas of green bacteria (from two different families, Chloroflexaceae and Oscillochloridaceae) that a single BChl c aggregate is of small size, and the degree of BChl c aggregation is a variable parameter, which is controlled by the size of the entire BChl c superantenna, and the latter, in turn, is controlled by light intensity in the course of cell culture growth.
2011-01-01
Background Controlling airborne contamination is of major importance in burn units because of the high susceptibility of burned patients to infections and the unique environmental conditions that can accentuate the infection risk. In particular the required elevated temperatures in the patient room can create thermal convection flows which can transport airborne contaminants throughout the unit. In order to estimate this risk and optimize the design of an intensive care room intended to host severely burned patients, we have relied on a computational fluid dynamic methodology (CFD). Methods The study was carried out in 4 steps: i) patient room design, ii) CFD simulations of patient room design to model air flows throughout the patient room, adjacent anterooms and the corridor, iii) construction of a prototype room and subsequent experimental studies to characterize its performance, iv) qualitative comparison of the tendencies between CFD prediction and experimental results. The Electricité De France (EDF) open-source software Code_Saturne® (http://www.code-saturne.org) was used and CFD simulations were conducted with a hexahedral mesh containing about 300 000 computational cells. The computational domain included the treatment room and two anterooms including equipment, staff and patient. Experiments with inert aerosol particles followed by time-resolved particle counting were conducted in the prototype room for comparison with the CFD observations. Results We found that thermal convection can create contaminated zones near the ceiling of the room, which can subsequently lead to contaminant transfer in adjacent rooms. Experimental confirmation of these phenomena agreed well with CFD predictions and showed that particles greater than one micron (i.e. bacterial or fungal spore sizes) can be influenced by these thermally induced flows. When the temperature difference between rooms was 7°C, a significant contamination transfer was observed to enter into the positive pressure room when the access door was opened, while 2°C had little effect. Based on these findings the constructed burn unit was outfitted with supplemental air exhaust ducts over the doors to compensate for the thermal convective flows. Conclusions CFD simulations proved to be a particularly useful tool for the design and optimization of a burn unit treatment room. Our results, which have been confirmed qualitatively by experimental investigation, stressed that airborne transfer of microbial size particles via thermal convection flows is able to bypass the protective overpressure in the patient room, which can represent a potential risk of cross contamination between rooms in protected environments. PMID:21371304
Beauchêne, Christian; Laudinet, Nicolas; Choukri, Firas; Rousset, Jean-Luc; Benhamadouche, Sofiane; Larbre, Juliette; Chaouat, Marc; Benbunan, Marc; Mimoun, Maurice; Lajonchère, Jean-Patrick; Bergeron, Vance; Derouin, Francis
2011-03-03
Controlling airborne contamination is of major importance in burn units because of the high susceptibility of burned patients to infections and the unique environmental conditions that can accentuate the infection risk. In particular the required elevated temperatures in the patient room can create thermal convection flows which can transport airborne contaminants throughout the unit. In order to estimate this risk and optimize the design of an intensive care room intended to host severely burned patients, we have relied on a computational fluid dynamic methodology (CFD). The study was carried out in 4 steps: i) patient room design, ii) CFD simulations of patient room design to model air flows throughout the patient room, adjacent anterooms and the corridor, iii) construction of a prototype room and subsequent experimental studies to characterize its performance, iv) qualitative comparison of the tendencies between CFD prediction and experimental results. The Electricité De France (EDF) open-source software Code_Saturne® (http://www.code-saturne.org) was used and CFD simulations were conducted with a hexahedral mesh containing about 300 000 computational cells. The computational domain included the treatment room and two anterooms including equipment, staff and patient. Experiments with inert aerosol particles followed by time-resolved particle counting were conducted in the prototype room for comparison with the CFD observations. We found that thermal convection can create contaminated zones near the ceiling of the room, which can subsequently lead to contaminant transfer in adjacent rooms. Experimental confirmation of these phenomena agreed well with CFD predictions and showed that particles greater than one micron (i.e. bacterial or fungal spore sizes) can be influenced by these thermally induced flows. When the temperature difference between rooms was 7°C, a significant contamination transfer was observed to enter into the positive pressure room when the access door was opened, while 2°C had little effect. Based on these findings the constructed burn unit was outfitted with supplemental air exhaust ducts over the doors to compensate for the thermal convective flows. CFD simulations proved to be a particularly useful tool for the design and optimization of a burn unit treatment room. Our results, which have been confirmed qualitatively by experimental investigation, stressed that airborne transfer of microbial size particles via thermal convection flows is able to bypass the protective overpressure in the patient room, which can represent a potential risk of cross contamination between rooms in protected environments.
NASA Astrophysics Data System (ADS)
Chen, Linghua; Jiang, Yingjie; Xing, Li; Yao, Jun
2017-10-01
We have proposed an all-dielectric (silicon) nanocube array polarizer on a silicon dioxide substrate. Each polarization unit column includes a plurality of equally spaced polarization units. By optimizing the length, width and height of the polarization units and the center-to-center distance of adjacent units (in the x and y directions), an extinction ratio (ER) higher than 25 dB is obtained theoretically at an incident wavelength of 1550 nm, while for most polarization optical elements an ER above 10 dB is sufficient. Under this condition, the designed polarizer can work in a wide wavelength range from 1509.31 nm to 1611.51 nm. Compared with previous polarizers, ours is an all-dielectric device, which avoids the low efficiency caused by Ohmic loss and weak coupling. Furthermore, compared with existing optical polarizers, it has the advantages of thin thickness, small size, light weight, and low processing difficulty, in line with the future development trend of optical elements.
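For reference, the extinction ratio quoted above is the usual decibel ratio of transmitted powers in the pass and blocked polarizations (a generic definition; the pass/block polarization labels are our assumption):

```latex
\mathrm{ER} \;=\; 10 \log_{10}\!\left(\frac{P_{\mathrm{pass}}}{P_{\mathrm{block}}}\right)
```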
NASA Astrophysics Data System (ADS)
Kubasco, A. J.
1991-07-01
The objective of the Gas Engine Heat Recovery Unit project was to design, fabricate, and test an efficient, compact, and corrosion-resistant heat recovery unit (HRU) for use on the exhaust of natural gas-fired reciprocating engine-generator sets in the 50-500 kW range. The HRU would be a core component of a factory pre-packaged cogeneration system designed around component optimization, reliability, and efficiency. The HRU uses finned, high-alloy stainless steel tubing wound into a compact helical coil heat exchanger. The corrosion resistance of the tubing allows more heat to be taken from the exhaust gas without fear of the effects of acid condensation. One HRU is currently installed in a cogeneration system at the Henry Ford Hospital Complex in Dearborn, Michigan. A second unit underwent successful endurance testing for 850 hours. The plan was to commercialize the HRU through its incorporation into a Caterpillar pre-packaged cogeneration system. Caterpillar is not proceeding with the concept at this time because of a downturn in the small-size cogeneration market.
NASA Astrophysics Data System (ADS)
Ozkaya, Efe; Yilmaz, Cetin
2017-02-01
The effect of eddy current damping on a novel locally resonant periodic structure is investigated. The frequency response characteristics are obtained by using a lumped parameter and a finite element model. In order to obtain wide band gaps at low frequencies, the periodic structure is optimized subject to constraints such as the mass distribution in the unit cell, the lower limit of the band gap, the stiffness between the components in the unit cell, the size of the magnets used for eddy current damping, and the number of unit cells in the periodic structure. Then, the locally resonant periodic structure with eddy current damping is manufactured and its experimental frequency response is obtained. The frequency response results obtained analytically, numerically and experimentally match quite well. Adding eddy current damping to the periodic structure decreases the amplitudes of the resonance peaks without narrowing the stop band.
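To make the lumped-parameter side concrete, the sketch below computes the Bloch dispersion of a generic mass-in-mass chain (outer masses coupled by springs k1, an internal resonator m2 attached by k2) and flags the frequencies where no real wavenumber exists; all parameter values are hypothetical, not the paper's optimized design.

```python
import numpy as np

# Hypothetical unit-cell parameters (not the paper's optimized values)
m1, m2 = 1.0, 0.5          # kg: outer mass and internal resonator mass
k1, k2 = 1.0e5, 2.0e4      # N/m: inter-cell spring and resonator spring

omega = np.linspace(1.0, 800.0, 20000)           # rad/s sweep
m_eff = m1 + m2 * k2 / (k2 - m2 * omega**2)      # frequency-dependent effective mass
cos_qa = 1.0 - m_eff * omega**2 / (2.0 * k1)     # 1D Bloch dispersion relation
in_gap = np.abs(cos_qa) > 1.0                    # no real Bloch wavenumber -> band gap

# print the contiguous band-gap intervals found within the sweep
edges = np.flatnonzero(np.diff(in_gap.astype(int)))
bounds = np.r_[0, edges + 1, omega.size]
for lo, hi in zip(bounds[:-1], bounds[1:]):
    if in_gap[lo]:
        print(f"gap: {omega[lo]/2/np.pi:.1f} - {omega[hi-1]/2/np.pi:.1f} Hz")
```

The first interval is the local-resonance gap that opens near the resonator frequency sqrt(k2/m2); consistent with the paper's observation, damping of this kind lowers the resonance peak amplitudes rather than moving the gap edges.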
Tschauner, Sebastian; Marterer, Robert; Gübitz, Michael; Kalmar, Peter I; Talakic, Emina; Weissensteiner, Sabine; Sorantin, Erich
2016-02-01
Accurate collimation helps to reduce unnecessary irradiation and improves radiographic image quality, which is especially important in the radiosensitive paediatric population. For AP/PA chest radiographs in children, a minimal field size (MinFS) from "just above the lung apices" to "T12/L1" with age-dependent tolerance is suggested by the 1996 European Commission (EC) guidelines, which were examined qualitatively and quantitatively at a paediatric radiology division. Five hundred ninety-eight unprocessed chest X-rays (45% boys, 55% girls; mean age 3.9 years, range 0-18 years) were analysed with a self-developed tool. Qualitative standards were assessed based on the EC guidelines, as well as the overexposed field size and needlessly irradiated tissue compared to the MinFS. While qualitative guideline recommendations were satisfied, mean overexposure of +45.1 ± 18.9% (range +10.2% to +107.9%) and tissue overexposure of +33.3 ± 13.3% were found. Only 4% (26/598) of the examined X-rays completely fulfilled the EC guidelines. This study presents a new chest radiography quality control tool which allows assessment of field sizes, distances, overexposures and quality parameters based on the EC guidelines. Utilising this tool, we detected inadequate field sizes, inspiration depths, and patient positioning. Furthermore, some debatable EC guideline aspects were revealed. • European Guidelines on X-ray quality recommend exposed field sizes for common examinations. • The major failing in paediatric radiographic imaging techniques is inappropriate field size. • Optimal handling of radiographic units can reduce radiation exposure to paediatric patients. • Constant quality control helps ensure optimal chest radiographic image acquisition in children.
24 CFR 983.259 - Overcrowded, under-occupied, and accessible units.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-occupied, and accessible units. (a) Family occupancy of wrong-size or accessible unit. The PHA subsidy standards determine the appropriate unit size for the family size and composition. If the PHA determines that a family is occupying a: (1) Wrong-size unit, or (2) Unit with accessibility features that the...
Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford
2018-04-01
Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
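The core of the submodular approach can be illustrated with the classic greedy algorithm, which carries the (1 - 1/e) approximation guarantee for monotone submodular objectives; the facility-location objective below is a common choice for representative-set selection and stands in for, rather than reproduces, the authors' mixture objective.

```python
import numpy as np

def greedy_representatives(sim, k):
    """Greedy maximization of the facility-location function
    F(S) = sum_i max_{j in S} sim[i, j] -- monotone submodular, so the
    greedy solution is within (1 - 1/e) of optimal.  sim is an n-by-n
    similarity matrix, e.g. derived from pairwise sequence identities."""
    n = sim.shape[0]
    chosen, covered = [], np.zeros(n)
    for _ in range(k):
        # marginal gain of each candidate column given current coverage
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        j = int(np.argmax(gains))
        chosen.append(j)
        covered = np.maximum(covered, sim[:, j])
    return chosen

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 8))                     # toy "sequences" as feature vectors
sim = np.exp(-np.linalg.norm(x[:, None] - x[None, :], axis=-1))
print(greedy_representatives(sim, 5))
```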
Pair 2-electron reduced density matrix theory using localized orbitals
NASA Astrophysics Data System (ADS)
Head-Marsden, Kade; Mazziotti, David A.
2017-08-01
Full configuration interaction (FCI) restricted to a pairing space yields size-extensive correlation energies but its cost scales exponentially with molecular size. Restricting the variational two-electron reduced-density-matrix (2-RDM) method to represent the same pairing space yields an accurate lower bound to the pair FCI energy at a mean-field-like computational scaling of O(r³), where r is the number of orbitals. In this paper, we show that localized molecular orbitals can be employed to generate an efficient, approximately size-extensive pair 2-RDM method. The use of localized orbitals eliminates the substantial cost of optimizing iteratively the orbitals defining the pairing space without compromising accuracy. In contrast to the localized orbitals, the use of canonical Hartree-Fock molecular orbitals is shown to be both inaccurate and non-size-extensive. The pair 2-RDM has the flexibility to describe the spectra of one-electron RDM occupation numbers from all quantum states that are invariant to time-reversal symmetry. Applications are made to hydrogen chains and their dissociation, n-acene from naphthalene through octacene, and cadmium telluride 2-, 3-, and 4-unit polymers. For the hydrogen chains, the pair 2-RDM method recovers the majority of the energy obtained from similar calculations that iteratively optimize the orbitals. The localized-orbital pair 2-RDM method with its mean-field-like computational scaling and its ability to describe multi-reference correlation has important applications to a range of strongly correlated phenomena in chemistry and physics.
NASA Astrophysics Data System (ADS)
Agrawal, Navik; Davis, Christopher C.
2008-08-01
Omnidirectional free space optical communication receivers can employ multiple non-imaging collectors, such as compound parabolic concentrators (CPCs), in an array-like fashion to increase the amount of possible light collection. CPCs can effectively channel light collected over a large aperture to a small area photodiode. The aperture to length ratio of such devices can increase the overall size of the transceiver unit, which may limit the practicality of such systems, especially when small size is desired. New non-imaging collector designs with smaller sizes, larger field of view (FOV), and comparable transmission curves to CPCs, offer alternative transceiver designs. This paper examines how transceiver performance is affected by the use of different non-imaging collector shapes that are designed for wide FOV with reduced efficiency compared with shapes such as the CPC that are designed for small FOV with optimal efficiency. Theoretical results provide evidence indicating that array-like transceiver designs using various non-imaging collector shapes with less efficient transmission curves, but a larger FOV will be an effective means for the design of omnidirectional optical transceiver units. The results also incorporate the effects of Fresnel loss at the collector exit aperture-photodiode interface, which is an important consideration for indoor omnidirectional FSO systems.
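The size argument can be made quantitative with the textbook CPC relations: for acceptance half-angle θ, an ideal 3D CPC with exit radius a' has entrance radius a = a'/sin θ, untruncated length L = (a + a')/tan θ, and concentration 1/sin²θ. The sketch below uses these standard non-imaging-optics formulas with arbitrary example values to show why widening the FOV shrinks the collector dramatically at the cost of concentration.

```python
import math

def cpc_geometry(exit_radius, half_angle_deg):
    """Ideal 3D compound parabolic concentrator: entrance aperture radius,
    untruncated length and concentration ratio for a given exit aperture
    and acceptance half-angle (textbook relations)."""
    theta = math.radians(half_angle_deg)
    a_in = exit_radius / math.sin(theta)             # entrance aperture radius
    length = (a_in + exit_radius) / math.tan(theta)  # full CPC length
    concentration = 1.0 / math.sin(theta) ** 2       # ideal 3D limit
    return a_in, length, concentration

for fov in (10, 30, 60):  # wider acceptance -> much shorter, less concentrating CPC
    a, L, C = cpc_geometry(exit_radius=1.0, half_angle_deg=fov)
    print(f"±{fov}°: aperture {a:.2f}, length {L:.2f}, concentration {C:.1f}x")
```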
"Optimal" Size and Schooling: A Relative Concept.
ERIC Educational Resources Information Center
Swanson, Austin D.
Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…
On the complexity of turbulence near a wall
NASA Technical Reports Server (NTRS)
Moin, Parviz
1992-01-01
Some measures of the intrinsic complexity of the near wall turbulence are reviewed. The number of modes required in an 'optimal' eigenfunction expansion is compared with the dimension obtained from the calculation of Liapunov exponents. These measures are of the same order, but they are very large. It is argued that the basic building block element of the near wall turbulence can be isolated in a small region of space (minimal flow unit). When the size of the domain is taken into account, the dimension becomes more manageable.
Small-angle scattering from 3D Sierpinski tetrahedron generated using chaos game
NASA Astrophysics Data System (ADS)
Slyamov, Azat
2017-12-01
We approximate a three-dimensional version of the deterministic Sierpinski gasket (SG), also known as the Sierpinski tetrahedron (ST), using the chaos game representation (CGR). Structural properties of the fractal, generated by both the deterministic and CGR algorithms, are determined using the small-angle scattering (SAS) technique. We calculate the corresponding monodisperse structure factor of the ST using an optimized Debye formula. We show that scattering from the CGR of the ST recovers basic fractal properties, such as the fractal dimension, iteration number, scaling factor, overall size of the system and the number of units composing the fractal.
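The CGR generation step itself is only a few lines: starting from an arbitrary point, repeatedly jump halfway toward a randomly chosen vertex of a tetrahedron (scaling factor 1/2, four attractors). The sketch below generates such a point cloud and box-counts its fractal dimension, which for the ST should approach log 4 / log 2 = 2; the paper goes further and feeds point sets like this into an optimized Debye formula to obtain the SAS intensity.

```python
import numpy as np

rng = np.random.default_rng(42)
vertices = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)

n_points = 100_000
points = np.empty((n_points, 3))
p = rng.random(3)
for i in range(n_points):
    p = 0.5 * (p + vertices[rng.integers(4)])  # chaos game: halfway to a random vertex
    points[i] = p

# crude box-counting estimate of the fractal dimension (overall extent is 2)
eps = 2.0 ** -5
occupied = len({tuple(np.floor(q / eps).astype(int)) for q in points})
print(f"occupied boxes: {occupied}, D ≈ {np.log(occupied) / np.log(2.0 / eps):.2f}")
```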
Optimal energy-utilization ratio for long-distance cruising of a model fish
NASA Astrophysics Data System (ADS)
Liu, Geng; Yu, Yong-Liang; Tong, Bing-Gang
2012-07-01
The efficiency of total energy utilization and its optimization for the long-distance migration of fish have attracted much attention in the past. This paper presents theoretical and computational research clarifying these well-known classic questions. Here, we specify the energy-utilization ratio (fη) as a measure of cruising efficiency, defined as the swimming speed divided by the sum of the standard metabolic rate and the energy consumption rate of muscle activities per unit mass. A theoretical formulation of the function fη is made, and it is shown that, based on a basic dimensional analysis, the main dimensionless parameters for our simplified model are the Reynolds number (Re) and the dimensionless standard metabolic rate per unit mass (Rpm). The swimming speed and the hydrodynamic power output in various conditions can be computed by solving the coupled Navier-Stokes equations and the fish locomotion dynamic equations. In turn, the energy consumption rate of muscle activities can be estimated by dividing the hydrodynamic power by the muscle efficiency studied by previous researchers. The present results show the following: (1) When the value of fη attains a maximum, the dimensionless parameter Rpm remains almost constant for the same fish species across different sizes. (2) In these cases, the tail beat period is a power function of the fish body length when cruising is optimal; e.g., the optimal tail beat period of Sockeye salmon is approximately proportional to the body length to the power of 0.78. Moreover, larger fish are better suited to long-distance cruising than smaller fish. (3) The optimal swimming speed we obtained is consistent with previous researchers' estimates.
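In the abstract's wording, one plausible transcription of the definition (notation reconstructed here, not quoted from the paper) is

```latex
\[
  f_\eta \;=\; \frac{U}{\,R_{pm} + P_m/m\,}
\]
% U: cruising speed; R_{pm}: standard metabolic rate per unit mass;
% P_m: energy consumption rate of muscle activity; m: body mass.
```

so that maximizing f_η means extracting the most speed per unit rate of total metabolic expenditure.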
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the medical researcher's need to spend the monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of, or the entire, optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for the informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the medical researcher's need to spend the monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of, or the entire, optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for the informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
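For intuition, the linear-cost, two-stage special case has a closed form: with between-subject variance σ²_b, within-subject variance σ²_w, subject cost c_s and occasion cost c_o, the variance-minimizing number of occasions per subject is √((c_s σ²_w)/(c_o σ²_b)). The sketch below applies that classic result with hypothetical inputs (the study's three-stage, non-linear cost models must be optimized numerically) and shows why one occasion per subject is so often optimal.

```python
import math

def optimal_occasions(c_subject, c_occasion, var_between, var_within):
    """Occasions per subject minimizing the variance of the exposure mean
    for a fixed budget, in the linear-cost two-stage nested model."""
    return math.sqrt((c_subject * var_within) / (c_occasion * var_between))

# Hypothetical inputs: a subject costs 10x an extra occasion, and
# between-subject variance is 4x the within-subject variance.
n = optimal_occasions(c_subject=100, c_occasion=10, var_between=4, var_within=1)
print(f"optimal occasions per subject ≈ {n:.2f}")  # ~1.6 -> in practice 1 or 2
```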
NASA Astrophysics Data System (ADS)
Tai, Wei; Abbasi, Mortez; Ricketts, David S.
2018-01-01
We present the analysis and design of high-power millimetre-wave power amplifier (PA) systems using zero-degree combiners (ZDCs). The methodology presented optimises the PA device sizing and the number of combined unit PAs based on device load pull simulations, driver power consumption analysis and loss analysis of the ZDC. Our analysis shows that an optimal number of N-way combined unit PAs leads to the highest power-added efficiency (PAE) for a given output power. To illustrate our design methodology, we designed a 1-W PA system at 45 GHz using a 45 nm silicon-on-insulator process and showed that an 8-way combined PA has the highest PAE that yields simulated output power of 30.6 dBm and 31% peak PAE.
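The existence of an optimal N can be seen with a toy trade-off model: combining more unit PAs relaxes the per-unit output power, keeping each device near its efficiency sweet spot, but each doubling adds combiner loss. Everything below is illustrative; the efficiency-versus-unit-power curve and the loss figure are assumptions, not the paper's load-pull data.

```python
import math

def system_pae(n, target_dbm=30.0, comb_loss_db_per_doubling=0.4):
    """Toy system PAE for an N-way binary combining tree delivering a fixed
    total output power.  The unit-PA efficiency curve is hypothetical."""
    loss_db = math.log2(n) * comb_loss_db_per_doubling
    p_unit_dbm = target_dbm - 10 * math.log10(n) + loss_db  # required unit power
    # assumed efficiency peaks for mid-sized devices near 20 dBm output
    pae_unit = 0.40 * math.exp(-((p_unit_dbm - 20.0) / 6.0) ** 2)
    return pae_unit * 10 ** (-loss_db / 10.0)

for n in (2, 4, 8, 16, 32):
    print(f"N={n:2d}: system PAE ≈ {100 * system_pae(n):.1f}%")
```

With these made-up numbers the sweep peaks at an intermediate N, mirroring the paper's conclusion that a particular N-way combination (8-way in their 45 GHz design) maximizes PAE.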
Positivity in healthcare: relation of optimism to performance.
Luthans, Kyle W; Lebsack, Sandra A; Lebsack, Richard R
2008-01-01
The purpose of this paper is to explore the linkage between nurses' levels of optimism and performance outcomes. The study sample consisted of 78 nurses in all areas of a large healthcare facility (hospital) in the Midwestern United States. The participants completed surveys to determine their current state of optimism. Supervisory performance appraisal data were gathered in order to measure performance outcomes. Spearman correlations and a one-way ANOVA were used to analyze the data. The results indicated a highly significant positive relationship between the nurses' measured state of optimism and their supervisors' ratings of their commitment to the mission of the hospital, a measure of contribution to increasing customer satisfaction, and an overall measure of work performance. This was an exploratory study. Larger sample sizes and longitudinal data would be beneficial because it is probable that state optimism levels will vary and that it might be more accurate to measure state optimism at several points over time in order to better predict performance outcomes. Finally, the study design does not imply causation. Suggestions for effectively developing and managing nurses' optimism to positively impact their performance are provided. To date, there has been very little empirical evidence assessing the impact that positive psychological capacities such as optimism of key healthcare professionals may have on performance. This paper was designed to help begin to fill this void by examining the relationship between nurses' self-reported optimism and their supervisors' evaluations of their performance.
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
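The flavour of these optimal designs is captured by the classic univariate special case: with cluster-level cost c_c, person-level cost c_p and intra-cluster correlation ρ, the efficiency-maximizing cluster size is √((c_c/c_p)·(1-ρ)/ρ). The sketch below uses that simplified formula with hypothetical costs; the paper's bivariate cost-effectiveness setting adds the cost-effect correlations and the variance ratio on top.

```python
import math

def optimal_cluster_size(cost_cluster, cost_person, icc):
    """Persons per cluster minimizing the variance of the treatment effect
    for a fixed budget (classic univariate cluster-randomized-trial result)."""
    return math.sqrt((cost_cluster / cost_person) * (1 - icc) / icc)

# Hypothetical costs: recruiting a cluster costs 50x adding one person.
print(f"≈ {optimal_cluster_size(500, 10, icc=0.05):.0f} persons per cluster")
```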
Global optimization of cholic acid aggregates
NASA Astrophysics Data System (ADS)
Jójárt, Balázs; Viskolcz, Béla; Poša, Mihalj; Fejer, Szilard N.
2014-04-01
In spite of recent investigations into the potential pharmaceutical importance of bile acids as drug carriers, the structure of bile acid aggregates is largely unknown. Here, we used global optimization techniques to find the lowest energy configurations for clusters composed of between 2 and 10 cholate molecules, and evaluated the relative stabilities of the global minima. We found that the energetically most preferred geometries for small aggregates are in fact reverse micellar arrangements, and the classical micellar behaviour (efficient burial of hydrophobic parts) is achieved only in systems containing more than five cholate units. Hydrogen bonding plays a very important part in keeping together the monomers, and among the size range considered, the most stable structure was found to be the decamer, having 17 hydrogen bonds. Molecular dynamics simulations showed that the decamer has the lowest dissociation propensity among the studied aggregation numbers.
Engineering two-wire optical antennas for near field enhancement
NASA Astrophysics Data System (ADS)
Yang, Zhong-Jian; Zhao, Qian; Xiao, Si; He, Jun
2017-07-01
We study the optimization of near field enhancement in the two-wire optical antenna system. By varying the nanowire sizes we obtain the optimized side-length (width and height) for the maximum field enhancement with a given gap size. The optimized side-length applies to a broadband range (λ = 650-1000 nm). The ratio of the extinction cross section to the field concentration size is found to be closely related to the field enhancement behavior. We also investigate two experimentally feasible cases, namely antennas on a glass substrate and on a mirror, and find that the optimized side-length also applies to these systems. It is also found that the optimized side-length shows a tendency to increase with the gap size. Our results could find applications in field-enhanced spectroscopies.
24 CFR 982.505 - Voucher tenancy: How to calculate housing assistance payment.
Code of Federal Regulations, 2010 CFR
2010-04-01
... unit size; or (ii) The payment standard amount for the size of the dwelling unit rented by the family... effective date of the increase in the payment standard amount. (5) Change in family unit size during the HAP... size increases or decreases during the HAP contract term, the new family unit size must be used to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matlin, R. W.
1979-07-10
Tens of millions of the world's poorest farmers currently subsist on small farms below two hectares in size. The increasing cost of animal-powered irrigation, coupled with decreasing farm size and the lack of a utility grid or acceptable alternative power sources, is generating interest in the use of solar photovoltaics for these very small (sub-kilowatt) water pumping systems. The attractive combinations of system components (array, pump, motor, storage and controls) have been identified and their interactions characterized in order to optimize overall system efficiency. Computer simulations as well as component tests were made of systems utilizing flat-plate and low-concentration arrays, direct-coupled and electronic-impedance-matching controls, fixed and incremental (once or twice a day) tracking, dc and ac motors, and positive-displacement, centrifugal and vertical turbine pumps. The results of these analyses and tests are presented, including water volume pumped as a function of time of day and year, for the locations of Orissa, India and Cairo, Egypt. Finally, a description and operational data are given for a prototype unit that was developed as a result of the previous analyses and tests.
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data
Kim, Sehwi
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674
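The Gini coefficient itself is straightforward to compute from sorted values; the sketch below shows the generic calculation and, in the loop, a simplified caricature of the selection idea (our reading of the approach, with made-up numbers): scan under several maximum reported cluster sizes and prefer the setting whose reported clusters concentrate the signal most sharply.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values (0 = perfectly even,
    1 = maximally concentrated), via the sorted-value identity."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * v) / (n * v.sum()) - (n + 1.0) / n

# Hypothetical cluster "contributions" under three maximum-size settings
for max_size, contributions in {10: [5, 4, 4, 4], 30: [14, 2, 1], 50: [17]}.items():
    print(f"max reported size {max_size}: Gini = {gini(contributions):.3f}")
```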
Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.
2015-01-01
Purpose Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increasing particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281
Adesina, Simeon K; Wight, Scott A; Akala, Emmanuel O
2014-11-01
Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increasing particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize cross-linked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the cross-linking agent and stabilizer indicate the important factors for minimizing particle size.
Kern, Maximilian M.; Guzy, Jacquelyn C.; Lovich, Jeffrey E.; Gibbons, J. Whitfield; Dorcas, Michael E.
2016-01-01
Because resources are finite, female animals face trade-offs between the size and number of offspring they are able to produce during a single reproductive event. Optimal egg size (OES) theory predicts that any increase in resources allocated to reproduction should increase clutch size with minimal effects on egg size. Variations of OES predict that egg size should be optimized, although not necessarily constant across a population, because optimality is contingent on maternal phenotypes, such as body size and morphology, and recent environmental conditions. We examined the relationships among body size variables (pelvic aperture width, caudal gap height, and plastron length), clutch size, and egg width of diamondback terrapins from separate but proximate populations at Kiawah Island and Edisto Island, South Carolina. We found that terrapins do not meet some of the predictions of OES theory. Both populations exhibited greater variation in egg size among clutches than within, suggesting an absence of optimization except as it may relate to phenotype/habitat matching. We found that egg size appeared to be constrained by more than just pelvic aperture width in Kiawah terrapins but not in the Edisto population. Terrapins at Edisto appeared to exhibit osteokinesis in the caudal region of their shells, which may aid in the oviposition of large eggs.
Kubo, Takuya; Nishimura, Naoki; Furuta, Hayato; Kubota, Kei; Naito, Toyohiro; Otsuka, Koji
2017-11-10
We report a novel capillary gel electrophoresis (CGE) method with poly(ethylene glycol) (PEG)-based hydrogels for the effective separation of biomolecules, including sugars and DNAs, based on a molecular size effect. The gel capillaries were prepared in a fused silica capillary modified with 3-(trimethoxysilyl)propyl methacrylate using a variety of the PEG-based hydrogels. After fundamental evaluations of size-based separation in CGE as a function of crosslinking density, the optimized capillary provided efficient separation of a glucose ladder (G1 to G20). In addition, another capillary showed successful separation of a DNA ladder in the range of 10-1100 base pairs, superior to an authentic acrylamide-based gel capillary. For both the glucose and DNA ladders, the separation range against molecular size was simply controllable by altering the concentration and/or the number of ethylene oxide units in the PEG-based crosslinker. Finally, we demonstrated separations of real samples, including sugars cleaved from monoclonal antibodies (mAbs), and efficient separations based on the molecular size effect were achieved. Copyright © 2017 Elsevier B.V. All rights reserved.
A microfluidic distribution system for an array of hollow microneedles
NASA Astrophysics Data System (ADS)
Hoel, Antonin; Baron, Nolwenn; Cabodevila, Gonzalo; Jullien, Marie-Caroline
2008-06-01
We report a microfluidic device able to control the ejection of fluid through a matrix of out-of-plane microneedles. The device comprises a matrix of open dispensing units connected to needles and filled by a common filling system. A deformable membrane (e.g. in PDMS) is brought into contact with the dispensing units. Pressure exerted on the deformable membrane closes (and thus individualizes) each dispensing unit and causes the ejection of the dispensing unit content through the outlets. Sufficient pressure on the deformable membrane ensures that all dispensing units deliver a fixed volume (their content) irrespective of the hydrodynamic pressure outside the dispensing unit outlet. The size of the dispensing-unit matrix, the number of liquid reservoirs, as well as the material can vary depending on the intended application of the device or on the conditions of use. In the present paper, the liquid reservoirs are geometrically identical. The geometrical parameters of the device are optimized to minimize dead volume, as the device is intended to handle plasmid DNA solutions, which are very expensive. The design, the fabrication and the experimental results are described in this paper. Our prototype is designed to inject 10 µl of drug in a uniform way through 100 microneedles distributed over 1 cm2.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization provides an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing the flaw sizes in the point estimate demonstration flaw set.
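The binomial quantities involved are easy to reproduce. The sketch below assumes a hypothetical log-logistic POD curve (not from the paper) to compute the PPD for a 29-flaw set, and verifies the coverage logic behind the 29-of-29 criterion: if the true POD at the demonstrated size were only 0.90, the demonstration would pass less than 5% of the time.

```python
import numpy as np

def pod(a, a50=1.0, slope=6.0):
    """Assumed log-logistic probability of detection vs flaw size (hypothetical)."""
    return 1.0 / (1.0 + (a50 / a) ** slope)

flaws = np.full(29, 2.0)          # 29 equal-size flaws, in units of a50
ppd = np.prod(pod(flaws))         # passing requires detecting all 29
print(f"PPD = {ppd:.3f}")

print(f"0.90**29 = {0.90 ** 29:.4f}")  # ≈ 0.047 < 0.05 -> 90% POD / 95% confidence
```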
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of a search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size, however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1% (10%) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. Left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol. Right panel shows the docking accuracy using an optimized box size.
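The headline recipe is compact enough to restate in code: compute the docking compound's radius of gyration and scale it by 2.9 to get the box edge. The sketch below implements that relation (the coordinates in the usage line are a made-up toy ligand; the authors' own script is available at the URL above).

```python
import numpy as np

def optimal_box_size(coords, factor=2.9):
    """Docking-box edge from the paper's finding: 2.9x the ligand's radius
    of gyration (unweighted atom coordinates assumed, in angstroms)."""
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)
    rg = np.sqrt(np.mean(np.sum((coords - center) ** 2, axis=1)))
    return factor * rg

# Toy coordinates, for illustration only
ligand = [[0, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [3.0, 1.5, 0], [3.0, 3.0, 0]]
print(f"box edge ≈ {optimal_box_size(ligand):.1f} Å per side")
```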
NASA Astrophysics Data System (ADS)
Dar, Aasif Bashir; Jha, Rakesh Kumar
2017-03-01
Various dispersion compensation units are presented and evaluated in this paper. These include dispersion compensation fiber (DCF), DCF merged with a fiber Bragg grating (FBG) (the joint technique), and linear, square root, and cube root chirped tanh-apodized FBGs. For the performance evaluation, a 10 Gb/s NRZ transmission system over 100-km-long single-mode fiber is used. The three chirped FBGs are optimized individually to yield pulse width reduction percentages (PWRP) of 86.66%, 79.96%, and 62.42% for linear, square root, and cube root chirp, respectively. The DCF and the joint technique both provide remarkable PWRPs of 94.45% and 96.96%, respectively. The performance of the optimized linear chirped tanh-apodized FBG and the DCF is compared for a long-haul transmission system on the basis of the quality factor of the received signal. For both systems, the maximum transmission distance is calculated such that the quality factor at the receiver is ≥ 6; the results show that the performance of the FBG is comparable to that of the DCF, with the advantages of very low cost, small size and reduced nonlinear effects.
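The DCF design rule is simple accumulated-dispersion bookkeeping: the compensating fiber's (negative) dispersion-length product should cancel the span's. A minimal check with typical catalogue coefficients (assumed values, not the paper's):

```python
# Typical coefficients, assumed for illustration
D_smf, L_smf = 17.0, 100.0   # ps/(nm km) and km for standard single-mode fiber
D_dcf = -85.0                # ps/(nm km) for dispersion compensating fiber

L_dcf = -D_smf * L_smf / D_dcf            # DCF length that nulls net dispersion
residual = D_smf * L_smf + D_dcf * L_dcf  # should be ~0 ps/nm
print(f"DCF length = {L_dcf:.1f} km, residual = {residual:.1f} ps/nm")
```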
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, (1) multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well posed for implementation in parallel on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture) using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of the GPU hardware resources. We make comparisons between the GPU and the serial CPU Monte Carlo implementations to assess speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
Bio-inspired Murray materials for mass transfer and activity
Zheng, Xianfeng; Shen, Guofang; Wang, Chao; Li, Yu; Dunphy, Darren; Hasan, Tawfique; Brinker, C. Jeffrey; Su, Bao-Lian
2017-01-01
Both plants and animals possess analogous tissues containing hierarchical networks of pores, with pore size ratios that have evolved to maximize mass transport and rates of reactions. The underlying physical principles of this optimized hierarchical design are embodied in Murray's law. However, we are yet to realize the benefit of mimicking nature's Murray networks in synthetic materials due to the challenges in fabricating vascularized structures. Here we emulate optimum natural systems following Murray's law using a bottom-up approach. Such bio-inspired materials, whose pore sizes decrease across multiple scales and finally terminate in size-invariant units like plant stems, leaf veins and vascular and respiratory systems provide hierarchical branching and precise diameter ratios for connecting multi-scale pores from macro to micro levels. Our Murray material mimics enable highly enhanced mass exchange and transfer in liquid–solid, gas–solid and electrochemical reactions and exhibit enhanced performance in photocatalysis, gas sensing and as Li-ion battery electrodes. PMID:28382972
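Murray's law itself is a cubed-radius conservation rule: at each branching, the cube of the parent pore radius equals the sum of the cubes of the daughter radii. A minimal sketch of cascading this ratio through a hierarchy (the generic rule, not the paper's synthesis parameters):

```python
def murray_daughter_radius(parent_radius, n_daughters):
    """Radius of each of n equal daughter pores under Murray's law:
    r_parent**3 = n * r_daughter**3."""
    return parent_radius * n_daughters ** (-1.0 / 3.0)

r0 = 1.0                                # macropore radius, arbitrary units
r1 = murray_daughter_radius(r0, 4)      # first branching into 4 pores
r2 = murray_daughter_radius(r1, 4)      # second branching
print(f"radii across levels: {r0:.3f} -> {r1:.3f} -> {r2:.3f}")
print(f"cubed-sum check: {4 * r1 ** 3:.3f} == {r0 ** 3:.3f}")
```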
Toropova, Alla P; Toropov, Andrey A; Benfenati, Emilio; Puzyn, Tomasz; Leszczynska, Danuta; Leszczynski, Jerzy
2014-10-01
The development of quantitative structure-activity relationships for nanomaterials requires representing the molecular structure of extremely complex molecular systems. Obviously, various characteristics of a nanomaterial can impact associated biochemical endpoints. The following features of TiO2 and ZnO nanoparticles (n=42) are considered here: (i) engineered size (nm); (ii) size in water suspension (nm); (iii) size in phosphate buffered saline (PBS, nm); (iv) concentration (mg/L); and (v) zeta potential (mV). The damage to cellular membranes (units/L) is selected as the endpoint. Quantitative features-activity relationships (QFARs) are calculated by the Monte Carlo technique for three splits of the membrane-damage data into training and validation sets. The obtained models are characterized by the following average statistics: 0.78
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.
2004-01-01
A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.
Code of Federal Regulations, 2010 CFR
2010-04-01
... definitions-family size and income known (owner-occupied units, actual tenants, and prospective tenants). 81...—Income level definitions—family size and income known (owner-occupied units, actual tenants, and...-income families, where the unit is owner-occupied or, for rental housing, family size and income...
Constituents of Quality of Life and Urban Size
ERIC Educational Resources Information Center
Royuela, Vicente; Surinach, Jordi
2005-01-01
Do cities have an optimal size? In seeking to answer this question, various theories, including Optimal City Size Theory, the supply-oriented dynamic approach and the city network paradigm, have been forwarded that considered a city's population size as a determinant of location costs and benefits. However, the generalised growth in wealth that…
Topology-Scaling Identification of Layered Solids and Stable Exfoliated 2D Materials.
Ashton, Michael; Paul, Joshua; Sinnott, Susan B; Hennig, Richard G
2017-03-10
The Materials Project crystal structure database has been searched for materials possessing layered motifs in their crystal structures using a topology-scaling algorithm. The algorithm identifies and measures the sizes of bonded atomic clusters in a structure's unit cell, and determines their scaling with cell size. The search yielded 826 stable layered materials that are considered as candidates for the formation of two-dimensional monolayers via exfoliation. Density-functional theory was used to calculate the exfoliation energy of each material and 680 monolayers emerge with exfoliation energies below those of already-existent two-dimensional materials. The crystal structures of these two-dimensional materials provide templates for future theoretical searches of stable two-dimensional materials. The optimized structures and other calculated data for all 826 monolayers are provided at our database (https://materialsweb.org).
Guaranteed Discrete Energy Optimization on Large Protein Design Problems.
Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas
2015-12-08
In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum-energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to designed sequences that could differ from the optimal sequence by more than 30% of their amino acids.
The evolution of island gigantism and body size variation in tortoises and turtles
Jaffe, Alexander L.; Slater, Graham J.; Alfaro, Michael E.
2011-01-01
Extant chelonians (turtles and tortoises) span almost four orders of magnitude of body size, including the startling examples of gigantism seen in the tortoises of the Galapagos and Seychelles islands. However, the evolutionary determinants of size diversity in chelonians are poorly understood. We present a comparative analysis of body size evolution in turtles and tortoises within a phylogenetic framework. Our results reveal a pronounced relationship between habitat and optimal body size in chelonians. We found strong evidence for separate, larger optimal body sizes for sea turtles and island tortoises, the latter showing support for the rule of island gigantism in non-mammalian amniotes. Optimal sizes for freshwater and mainland terrestrial turtles are similar and smaller, although the range of body size variation in these forms is qualitatively greater. The greater number of potential niches in freshwater and terrestrial environments may mean that body size relationships are more complicated in these habitats. PMID:21270022
Magnetic bearing design and control optimization for a four-stage centrifugal compressor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinckney, F.D.; Keesee, J.M.
1992-07-01
A four-stage centrifugal pipeline compressor with a flexible rotor was equipped with magnetic bearings. Magnetic bearing sizing, shaft rotor dynamics, and controller/bearing design are discussed. Controller changes during shop and field tuning and the resulting rotor dynamic effects are also presented. Results of the field operation of this compressor indicate no vibration-related problems, despite the shaft second and third undamped modes being within the operating speed range. During the first 14 months after field commissioning, 9900 operating hours had been accumulated, indicating a 97 percent unit availability.
Testing the Birth Unit Design Spatial Evaluation Tool (BUDSET) in Australia: a pilot study.
Foureur, Maralyn J; Leap, Nicky; Davis, Deborah L; Forbes, Ian F; Homer, Caroline E S
2011-01-01
To pilot test the Birth Unit Design Spatial Evaluation Tool (BUDSET) in an Australian maternity care setting to determine whether such an instrument can measure the optimality of different birth settings. Optimally designed spaces to give birth are likely to influence a woman's ability to experience physiologically normal labor and birth. This is important in the current industrialized environment, where increased caesarean section rates are causing concerns. The measurement of an optimal birth space is currently impossible, because there are limited tools available. A quantitative study was undertaken to pilot test the discriminant ability of the BUDSET in eight maternity units in New South Wales, Australia. Five auditors trained in the use of the BUDSET assessed the birth units using the BUDSET, which is based on 18 design principles and is divided into four domains (Fear Cascade, Facility, Aesthetics, and Support) with three to eight assessable items in each. Data were independently collected in eight birth units. Values for each of the domains were aggregated to provide an overall Optimality Score for each birth unit. A range of Optimality Scores was derived for each of the birth units (from 51 to 77 out of a possible 100 points). The BUDSET identified units with low-scoring domains. Essentially these were older units and conventional labor ward settings. The BUDSET provides a way to assess the optimality of birth units and determine which domain areas may need improvement. There is potential for improvements to existing birth spaces, and considerable improvement can be made with simple low-cost modifications. Further research is needed to validate the tool.
Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units
Song, Chenchen; Martinez, Todd J.
2017-08-29
Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. Furthermore, the resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units
NASA Astrophysics Data System (ADS)
Song, Chenchen; Martínez, Todd J.
2017-10-01
Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. The resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
Analytical gradients for tensor hyper-contracted MP2 and SOS-MP2 on graphical processing units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Chenchen; Martinez, Todd J.
Analytic energy gradients for tensor hyper-contraction (THC) are derived and implemented for second-order Møller-Plesset perturbation theory (MP2), with and without the scaled-opposite-spin (SOS)-MP2 approximation. By exploiting the THC factorization, the formal scaling of MP2 and SOS-MP2 gradient calculations with respect to system size is reduced to quartic and cubic, respectively. An efficient implementation has been developed that utilizes both graphics processing units and sparse tensor techniques exploiting spatial sparsity of the atomic orbitals. THC-MP2 has been applied to both geometry optimization and ab initio molecular dynamics (AIMD) simulations. Furthermore, the resulting energy conservation in micro-canonical AIMD demonstrates that the implementation provides accurate nuclear gradients with respect to the THC-MP2 potential energy surfaces.
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme of our method is developed, which exploits shared memory in the GPU instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
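For reference, a minimal NumPy version of the per-pixel stencil that the paper parallelizes is sketched below; the 4-neighbour Laplacian kernel and unit sharpening weight are assumptions, since the abstract does not specify the kernel.

import numpy as np

def laplacian_sharpen(img, k=1.0):
    # Sharpened = original - k * Laplacian (4-neighbour stencil, wrap boundary).
    f = img.astype(np.float64)
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
           + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    return np.clip(f - k * lap, 0, 255).astype(np.uint8)

image = (np.random.rand(512, 512) * 255).astype(np.uint8)
sharpened = laplacian_sharpen(image)  # the CUDA version maps one thread per pixel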
Automating Structural Analysis of Spacecraft Vehicles
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.
2004-01-01
A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time-consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and planetary exploration spacecraft. The program couples with FEA to enable system-level performance assessments and weight predictions, including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability, and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits will be illustrated by examining two conceptual spacecraft designed using the software: a hypersonic air-breathing, single-stage-to-orbit (SSTO), reusable launch vehicle (RLV) and an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing two different types of vehicles, the software's flexibility will be demonstrated, with an emphasis on reducing aeroshell structural weight. Member sizes, concepts, and material selections will be discussed, as well as the analysis methods used in optimizing the structure. Design trades required to optimize structural weight will also be presented.
NASA Astrophysics Data System (ADS)
Bhattacharjya, Rajib Kumar
2018-05-01
The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.
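A toy version of the single-stage idea can be sketched in a few lines of Python: one decision vector carries both the infiltration parameter and the unit hydrograph ordinates, constraint violations are penalized, and the penalty is adjusted by a reduction factor across generations. The stand-in objective, constants, and the simple evolutionary loop below are illustrative assumptions, not the paper's rainfall-runoff model.

import random

def fitness(x, penalty):
    phi, ords = x[0], x[1:]
    # Stand-in data-misfit term (plays the role of the rainfall-runoff error).
    target = [0.2, 0.5, 0.3]
    misfit = (phi - 0.3) ** 2 + sum((o - t) ** 2 for o, t in zip(ords, target))
    # Constraints: non-negative ordinates whose sum is 1 (unit hydrograph).
    violation = sum(max(0.0, -o) for o in ords) + abs(sum(ords) - 1.0)
    return misfit + penalty * violation

random.seed(0)
pop = [[random.uniform(0, 1) for _ in range(4)] for _ in range(40)]
penalty, reduction = 100.0, 0.98
for gen in range(300):
    pop.sort(key=lambda x: fitness(x, penalty))
    parents = pop[:10]
    pop = parents + [[g + random.gauss(0, 0.02) for g in random.choice(parents)]
                     for _ in range(30)]
    penalty *= reduction  # reduction factor so earlier-found parameters survive
best = min(pop, key=lambda x: fitness(x, penalty))
print("phi =", round(best[0], 3), "ordinates =", [round(o, 3) for o in best[1:]])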
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Barnoy, Eran; Motiei, Menachem; Popovtzer, Rachela
2018-02-01
Theranostics is an emerging field, defined as the combination of therapeutic and diagnostic capabilities in the same material. Nanoparticles are considered an efficient platform for theranostics, particularly in cancer treatment, as they offer substantial advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of theranostic nanoplatforms raises an important question: Is the optimal particle for imaging also optimal for therapy? Are the specific parameters required for maximal drug delivery similar to those required for imaging applications? Herein, we examined this issue by investigating the effect of nanoparticle size on tumor uptake and imaging. Anti-epidermal growth factor receptor (EGFR)-conjugated gold nanoparticles (GNPs) of different sizes (diameter range: 20-120 nm) were injected into tumor-bearing mice, and their uptake by tumors was measured, as well as their tumor visualization capabilities as a tumor-targeted CT contrast agent. Interestingly, the results showed that different particle sizes yielded the highest tumor uptake and the highest contrast enhancement, meaning that the optimal particle size for drug delivery is not necessarily optimal for tumor imaging. These results have important implications for the design of theranostic nanoplatforms.
Hawkins, Robert C; Badrick, Tony
2015-08-01
In this study we aimed to compare the reporting unit size used by Australian laboratories for routine chemistry and haematology tests to the unit size used by learned authorities and in standard laboratory textbooks, and to the justified unit size based on measurement uncertainty (MU) estimates from quality assurance program data. MU was determined from Royal College of Pathologists of Australasia (RCPA) - Australasian Association of Clinical Biochemists (AACB) and RCPA Haematology Quality Assurance Program survey reports. The reporting unit size implicitly suggested in authoritative textbooks, the RCPA Manual, and the General Serum Chemistry program itself was noted. We also used published data on Australian laboratory practices. The best performing laboratories could justify their chemistry unit size for 55% of analytes, while the comparable figures for the median (50%) and 90th percentile laboratories were 14% and 8%, respectively. Reporting unit size was justifiable for all laboratories for red cell count, for >50% of laboratories for haemoglobin, but only for the top 10% for haematocrit. Few, if any, could justify their mean cell volume (MCV) and mean cell haemoglobin concentration (MCHC) reporting unit sizes. The reporting unit size used by many laboratories is not justified by present analytical performance. Using MU estimates to determine the reporting interval for quantitative laboratory results ensures reporting practices match local analytical performance and recognises the inherent error of the measurement process.
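The justification logic can be sketched directly: a reporting unit is defensible only if it is not finer than what the measurement uncertainty supports. The Python sketch below assumes a simple decision rule (unit size at least equal to the expanded MU derived from the QA-program SD); the study's exact criterion is not stated in the abstract, so this rule is illustrative.

def justified(unit_size, qa_sd, coverage_k=1.96):
    # Expanded MU from the quality assurance program between-run SD (assumed rule).
    expanded_mu = coverage_k * qa_sd
    return unit_size >= expanded_mu

# Hypothetical example: sodium reported in 1 mmol/L steps, QA SD of 0.8 mmol/L.
print(justified(unit_size=1.0, qa_sd=0.8))  # False: 1 mmol/L steps are too fine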
Optimal deployment of thermal energy storage under diverse economic and climate conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeForest, Nicholas; Mendes, Gonçalo; Stadler, Michael
2014-04-01
This paper presents an investigation of the economic benefit of thermal energy storage (TES) for cooling across a range of economic and climate conditions. Chilled water TES systems are simulated for a large office building in four distinct locations: Miami in the U.S.; Lisbon, Portugal; Shanghai, China; and Mumbai, India. Optimal system size and operating schedules are determined using the optimization model DER-CAM, such that total cost, including electricity and amortized capital costs, is minimized. The economic impact of each optimized TES system is then compared to systems sized using a simple heuristic method, which sets system size as a fraction (50% and 100%) of total on-peak summer cooling load. Results indicate that TES systems of all sizes can be effective in reducing annual electricity costs (5%-15%) and peak electricity consumption (13%-33%). The investigation also identifies a number of criteria which drive TES investment, including low capital costs, electricity tariffs with high power demand charges, and prolonged cooling seasons. In locations where these drivers clearly exist, the heuristically sized systems capture much of the value of optimally sized systems, between 60% and 100% in terms of net present value. However, in instances where these drivers are less pronounced, the heuristic tends to oversize systems, and optimization becomes crucial to ensure economically beneficial deployment of TES, increasing the net present value of heuristically sized systems by as much as 10 times in some instances.
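The heuristic-versus-optimal comparison can be reproduced in miniature: grid-search the TES size that minimizes energy cost plus a demand charge plus amortized capital for one design day, and compare it with sizing to the full on-peak cooling load. Everything below (tariff, load shape, cost coefficients, and the naive dispatch rule) is an invented stand-in for DER-CAM.

# Hourly cooling load (kWh thermal) and a two-tier tariff; all values invented.
load = [40] * 8 + [100] * 10 + [40] * 6
on_peak = range(8, 18)
price = [0.20 if h in on_peak else 0.08 for h in range(24)]  # $/kWh
demand_charge = 15.0  # $/kW-month, charged here on the daily peak
capital = 0.10        # amortized capital, $/kWh-thermal per day

def daily_cost(size):
    # Naive dispatch: discharge evenly on-peak, recharge evenly off-peak.
    discharge = {h: min(load[h], size / 10) for h in on_peak}
    grid = [load[h] - discharge.get(h, 0) + (0 if h in on_peak else size / 14)
            for h in range(24)]
    energy = sum(g * price[h] for h, g in enumerate(grid))
    return energy + demand_charge * max(grid) / 30 + capital * size

optimal = min(range(0, 1001, 10), key=daily_cost)
heuristic = sum(load[h] for h in on_peak)  # the "100% of on-peak cooling" rule
print(f"optimal size {optimal} kWh: ${daily_cost(optimal):.2f}/day")
print(f"heuristic size {heuristic} kWh: ${daily_cost(heuristic):.2f}/day")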
Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation
NASA Technical Reports Server (NTRS)
1972-01-01
The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.
Optimization of solar cell contacts by system cost-per-watt minimization
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.
Assessment of solar-assisted gas-fired heat pump systems
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
As a possible application for the Goldstone Energy Project, the performance of a 10-ton heat pump unit using a hybrid solar-gas energy source was evaluated in an effort to optimize the solar collector size. The heat pump system is designed to provide all the cooling and/or heating requirements of a selected office building. The system performance is to be augmented in the heating mode by utilizing the waste heat from the power cycle. A simplified system analysis is described to assess and compute the interrelationships of the engine, heat pump, solar, and building performance parameters, and to optimize the solar concentrator/building area ratio for a minimum total system cost. In addition, four alternative heating/cooling systems commonly used for building comfort are described; their costs are compared and found to be less competitive than that of the gas-solar heat pump system at the projected solar equipment costs.
Instructional versus schedule control of humans' choices in situations of diminishing returns
Hackenberg, Timothy D.; Joker, Veronica R.
1994-01-01
Four adult humans chose repeatedly between a fixed-time schedule (of points later exchangeable for money) and a progressive-time schedule that began at 0 s and increased by a fixed number of seconds with each point delivered by that schedule. Each point delivered by the fixed-time schedule reset the requirements of the progressive-time schedule to its minimum value. Subjects were provided with instructions that specified a particular sequence of choices. Under the initial conditions, the instructions accurately specified the optimal choice sequence. Thus, control by instructions and optimal control by the programmed contingencies both supported the same performance. To distinguish the effects of instructions from schedule sensitivity, the correspondence between the instructed and optimal choice patterns was gradually altered across conditions by varying the step size of the progressive-time schedule while maintaining the same instructions. Step size was manipulated, typically in 1-s units, first in an ascending and then in a descending sequence of conditions. Instructions quickly established control in all 4 subjects but, by narrowing the range of choice patterns, they reduced subsequent sensitivity to schedule changes. Instructional control was maintained across the ascending sequence of progressive-time values for each subject, but eventually diminished, giving way to more schedule-appropriate patterns. The transition from instruction-appropriate to schedule-appropriate behavior was characterized by an increase in the variability of choice patterns and local increases in point density. On the descending sequence of progressive-time values, behavior appeared to be schedule sensitive, sometimes even optimally sensitive, but it did not always change systematically with the contingencies, suggesting the involvement of other factors. PMID:16812747
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that tree-to-tree variations in Fd differ among plots, and that the plot size needed to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
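The Monte Carlo step generalizes well beyond sap flux, and a minimal Python version is easy to state: repeatedly subsample n trees from the full census and track how the error of the plot mean shrinks with n. The synthetic lognormal Fd values below are placeholders for the measured data.

import random, statistics

random.seed(1)
fd_all = [random.lognormvariate(0, 0.4) for _ in range(58)]  # 58 trees, as in the plot
true_mean = statistics.mean(fd_all)

for n in (5, 10, 15, 20, 30):
    rel_errors = sorted(
        abs(statistics.mean(random.sample(fd_all, n)) - true_mean) / true_mean
        for _ in range(2000))
    # Report the 95th-percentile relative error for this sample size.
    print(f"n={n:2d}  95th-percentile relative error = "
          f"{rel_errors[int(0.95 * len(rel_errors))]:.3f}")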
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
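The square-root law is easy to verify numerically with a stylized utility of the kind the paper describes: each trial patient costs c, and each of the remaining N - n patients yields gain g degraded by an estimation loss that decays as 1/n. The utility below and all constants are illustrative assumptions, not the paper's exponential-family derivation.

import math

def optimal_n(N, c=1.0, g=1.0, k=50.0):
    # Utility U(n) = -c*n + (N - n) * g * (1 - k/n); brute-force maximizer.
    return max(range(2, N), key=lambda n: -c * n + (N - n) * g * (1 - k / n))

for N in (1_000, 10_000, 100_000):
    n_star = optimal_n(N)
    print(f"N={N:6d}  n*={n_star:4d}  n*/sqrt(N)={n_star / math.sqrt(N):.2f}")
# The ratio n*/sqrt(N) stabilizes near sqrt(g*k/(c+g)) = 5, i.e. n* = O(N^(1/2)).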
Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
2010-01-01
Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.
Chua, Michael E; Gatchalian, Glenn T; Corsino, Michael Vincent; Reyes, Buenaventura B
2012-10-01
(1) To determine the best cut-off level of Hounsfield units (HU) on CT stonogram that predicts whether a urinary calculus will be visible on plain KUB X-ray; (2) to estimate the sensitivity and specificity of the best cut-off HU; and (3) to determine whether stone size and location affect the in vivo predictability. A prospective cross-sectional study of patients aged 18-85 diagnosed with urolithiasis on CT stonogram with a concurrent plain KUB radiograph was conducted. The appearance of stones was recorded, and the significance of the difference between radiolucent and radio-opaque CT attenuation levels was determined using ANOVA. A receiver operating characteristic (ROC) curve determined the best HU cut-off value. Stone size and location were used for factor variability analysis. A total of 184 cases were included in this study, and the average urolithiasis size on CT stonogram was 0.84 cm (0.3-4.9 cm). On KUB X-ray, 34.2% of the urolithiases were radiolucent and 65.8% were radio-opaque. The mean CT Hounsfield unit value for radiolucent stones was 358.25 (±156), and that for radio-opaque stones was 816.51 (±274). The ROC curve determined the best cut-off value of HU at 498.5, with a sensitivity of 89.3% and a specificity of 87.3%. For >4 mm stones, the sensitivity was 91.3% and the specificity was 81.8%; for ≤4 mm stones, the sensitivity was 60% and the specificity was 89.5%. Based on the constructed ROC curve, a threshold value of 498.5 HU on CT stonogram was established as the cut-off for determining whether a calculus is radio-opaque or radiolucent. The determined overall sensitivity and specificity of the set cut-off HU value are optimal. Stone size, but not location, affects the sensitivity and specificity.
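The cut-off selection step is a standard ROC sweep, sketched below in Python by maximizing Youden's J = sensitivity + specificity - 1 over candidate thresholds. The synthetic HU samples only mimic the reported group means and SDs; the study's actual per-patient data are not available here.

import random

random.seed(0)
lucent = [random.gauss(358.25, 156) for _ in range(63)]   # radiolucent on KUB
opaque = [random.gauss(816.51, 274) for _ in range(121)]  # radio-opaque on KUB

def youden(cut):
    sensitivity = sum(x >= cut for x in opaque) / len(opaque)
    specificity = sum(x < cut for x in lucent) / len(lucent)
    return sensitivity + specificity - 1

best = max(range(100, 1500), key=youden)
print(f"best cut-off ~ {best} HU, Youden J = {youden(best):.2f}")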
Smith, Rebecca L.; Schukken, Ynte H.; Lu, Zhao; Mitchell, Rebecca M.; Grohn, Yrjo T.
2013-01-01
Objective To develop a mathematical model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States and predict efficacy of the current national control strategy for tuberculosis in cattle. Design Stochastic simulation model. Sample Theoretical cattle herds in the United States. Procedures A model of within-herd M bovis transmission dynamics following introduction of 1 latently infected cow was developed. Frequency- and density-dependent transmission modes and 3 tuberculin-test based culling strategies (no test-based culling, constant (annual) testing with test-based culling, and the current strategy of slaughterhouse detection-based testing and culling) were investigated. Results were evaluated for 3 herd sizes over a 10-year period and validated via simulation of known outbreaks of M bovis infection. Results On the basis of 1,000 simulations (1000 herds each) at replacement rates typical for dairy cattle (0.33/y), median time to detection of M bovis infection in medium-sized herds (276 adult cattle) via slaughterhouse surveillance was 27 months after introduction, and 58% of these herds would spontaneously clear the infection prior to that time. Sixty-two percent of medium-sized herds without intervention and 99% of those managed with constant test-based culling were predicted to clear infection < 10 years after introduction. The model predicted observed outbreaks best for frequency-dependent transmission, and probability of clearance was most sensitive to replacement rate. Conclusions and Clinical Relevance Although modeling indicated the current national control strategy was sufficient for elimination of M bovis infection from dairy herds after detection, slaughterhouse surveillance was not sufficient to detect M bovis infection in all herds and resulted in subjectively delayed detection, compared with the constant testing method. Further research is required to economically optimize this strategy. PMID:23865885
NASA Astrophysics Data System (ADS)
Kumar, Ashwani; Vijay Babu, P.; Murty, V. V. S. N.
2017-06-01
Rapidly increasing electricity demands and the capacity shortage of transmission and distribution facilities are the main driving forces for the growth of distributed generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selection of effective DG characteristics and DG parameters is a significant concern of distribution system planners seeking maximum potential benefits from the DG unit. The objective of the paper is to reduce the power losses and improve the voltage profile of the radial distribution system through optimal allocation of multiple DG units. The main contributions of this paper are (i) a combined power loss sensitivity (CPLS) based method for selecting multiple DG locations, (ii) determination of optimal sizes for multiple DG units at unity and lagging power factor, (iii) the impact of DG installed at the optimal (i.e., combined load) power factor on system performance, (iv) the impact of load growth on optimal DG planning, (v) the impact of DG integration on the voltage stability index, and (vi) the economic and technical impact of DG integration in distribution systems. The load growth factor has been considered in the study, which is essential for planning and expansion of existing systems. The technical and economic aspects are investigated in terms of improvement in voltage profile, reduction in total power losses, cost of energy loss, cost of power obtained from DG, cost of power intake from the substation, and savings in cost of energy loss. The results are obtained on IEEE 69-bus radial distribution systems and compared with other existing methods.
Stability of discrete memory states to stochastic fluctuations in neuronal systems
Miller, Paul; Wang, Xiao-Jing
2014-01-01
Noise can degrade memories by causing transitions from one memory state to another. For any biological memory system to be useful, the time scale of such noise-induced transitions must be much longer than the required duration for memory retention. Using biophysically-realistic modeling, we consider two types of memory in the brain: short-term memories maintained by reverberating neuronal activity for a few seconds, and long-term memories maintained by a molecular switch for years. Both systems require persistence of (neuronal or molecular) activity self-sustained by an autocatalytic process and, we argue, that both have limited memory lifetimes because of significant fluctuations. We will first discuss a strongly recurrent cortical network model endowed with feedback loops, for short-term memory. Fluctuations are due to highly irregular spike firing, a salient characteristic of cortical neurons. Then, we will analyze a model for long-term memory, based on an autophosphorylation mechanism of calcium/calmodulin-dependent protein kinase II (CaMKII) molecules. There, fluctuations arise from the fact that there are only a small number of CaMKII molecules at each postsynaptic density (putative synaptic memory unit). Our results are twofold. First, we demonstrate analytically and computationally the exponential dependence of stability on the number of neurons in a self-excitatory network, and on the number of CaMKII proteins in a molecular switch. Second, for each of the two systems, we implement graded memory consisting of a group of bistable switches. For the neuronal network we report interesting ramping temporal dynamics as a result of sequentially switching an increasing number of discrete, bistable, units. The general observation of an exponential increase in memory stability with the system size leads to a trade-off between the robustness of memories (which increases with the size of each bistable unit) and the total amount of information storage (which decreases with increasing unit size), which may be optimized in the brain through biological evolution. PMID:16822041
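The paper's central claim, that memory-state stability grows roughly exponentially with the number of units, can be illustrated with a toy positive-feedback model: each of N binary units switches on at a rate proportional to the active fraction and off at a fixed rate, and we time how long the "on" state survives. All rates and the discrete-time update are illustrative assumptions (times are censored at t_max).

import random, statistics

random.seed(2)

def lifetime(N, p_off=0.3, gain=0.9, t_max=5_000):
    state = [1] * N
    for t in range(1, t_max):
        frac = sum(state) / N  # positive feedback: drive grows with activity
        state = [(0 if random.random() < p_off else 1) if s else
                 (1 if random.random() < gain * frac else 0) for s in state]
        if not any(state):     # memory lost: absorbing all-off state
            return t
    return t_max

for N in (2, 4, 8, 12):
    times = [lifetime(N) for _ in range(100)]
    print(f"N={N:2d}  mean lifetime ~ {statistics.mean(times):7.1f} steps")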
Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Hug, Gabriela; Li, Xin
Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumptions that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is determined simultaneously with the optimal generation outputs and the size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows the stochastic optimization problem to be solved directly, without using sampling-based approaches, and the storage to be sized to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.
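The analytic treatment of probabilistic constraints has a compact Gaussian special case: a constraint of the form P(shortfall <= size) >= 1 - eps reduces to the deterministic inequality size >= mu + z_(1-eps) * sigma, so no scenario sampling is needed. The sketch below assumes normally distributed forecast errors and invented numbers; the paper's derivation covers its own MPC formulation.

from statistics import NormalDist

def required_size(mu, sigma, eps):
    # Deterministic equivalent of the chance constraint under Gaussian errors.
    z = NormalDist().inv_cdf(1 - eps)
    return mu + z * sigma

# Hypothetical horizon shortfall: mean 20 MWh, std 8 MWh from wind forecast error.
for eps in (0.10, 0.05, 0.01):
    print(f"eps = {eps:4.2f} -> storage size >= {required_size(20, 8, eps):5.1f} MWh")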
A chaos wolf optimization algorithm with self-adaptive variable step-size
NASA Astrophysics Data System (ADS)
Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun
2017-10-01
To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size is proposed. The algorithm is based on the swarm intelligence of the wolf pack, fully simulating the predation behavior and prey distribution of wolves. It incorporates three intelligent behaviors: migration, summoning, and siege. Its other characteristics are a "winner-take-all" competition rule and a "survival of the fittest" update mechanism. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA is applied to parameter optimization of twelve typical complex nonlinear functions, and the obtained results are compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The results indicate that the CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.
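A stripped-down sketch of the algorithm's main ingredients is given below: logistic-map (chaotic) initialization, a winner-take-all leader, and a step size that decays across iterations (the self-adaptive element is reduced here to a simple schedule). This is a simplification under stated assumptions, not the published CWOA.

import random

def sphere(x):  # placeholder test function to minimize
    return sum(v * v for v in x)

dim, n_wolves, lo, hi = 5, 20, -5.0, 5.0
random.seed(3)
c = random.random()
pack = []
for _ in range(n_wolves):  # chaotic initialization via the logistic map
    pos = []
    for _ in range(dim):
        c = 4.0 * c * (1.0 - c)
        pos.append(lo + (hi - lo) * c)
    pack.append(pos)

for it in range(300):
    pack.sort(key=sphere)
    leader = pack[0]                # winner-take-all: the best wolf leads
    step = (hi - lo) * 0.95 ** it   # decaying (self-adaptive) step size
    for i in range(1, n_wolves):    # siege: wolves close in on the leader
        pack[i] = [w + 0.5 * (l - w) + random.uniform(-step, step)
                   for w, l in zip(pack[i], leader)]
print("best value found:", round(sphere(min(pack, key=sphere)), 6))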
Comparing kinetic curves in liquid chromatography
NASA Astrophysics Data System (ADS)
Kurganov, A. A.; Kanat'eva, A. Yu.; Yakubenko, E. E.; Popova, T. P.; Shiryaeva, V. E.
2017-01-01
Five equations for kinetic curves, which connect the number of theoretical plates N and the analysis time t0 for five different versions of optimization depending on the parameters being varied (e.g., mobile phase flow rate, pressure drop, sorbent grain size), are obtained by means of mathematical modeling. It is found that a method based on the optimization of sorbent grain size at fixed pressure is most suitable for the optimization of rapid separations. It is noted that the advantages of this method are limited to the region of relatively low efficiency; in the region of high efficiency the advantage passes to a method based on optimizing both the sorbent grain size and the pressure drop across the column.
Directional Convexity and Finite Optimality Conditions.
1984-03-01
Necessary conditions for optimality. Work Unit Number 5 (Optimization and Large Scale Systems). Istituto di Matematica Applicata, Università di Padova, 35100 Italy.
Shi, Wendong; Wang, Jizeng; Fan, Xiaojun; Gao, Huajian
2008-12-01
A mechanics model describing how a cell membrane with diffusive mobile receptors wraps around a ligand-coated cylindrical or spherical particle has been recently developed to model the role of particle size in receptor-mediated endocytosis. The results show that particles in the size range of tens to hundreds of nanometers can enter cells even in the absence of clathrin or caveolin coats. Here we report further progress on modeling the effects of size and shape in diffusion, interaction, and absorption of finite-sized colloidal particles near a partially absorbing sphere. Our analysis indicates that, from the diffusion and interaction point of view, there exists an optimal hydrodynamic size of particles, typically in the nanometer regime, for the maximum rate of particle absorption. Such optimal size arises as a result of balance between the diffusion constant of the particles and the interaction energy between the particles and the absorbing sphere relative to the thermal energy. Particles with a smaller hydrodynamic radius have larger diffusion constant but weaker interaction with the sphere while larger particles have smaller diffusion constant but stronger interaction with the sphere. Since the hydrodynamic radius is also determined by the particle shape, an optimal hydrodynamic radius implies an optimal size as well as an optimal aspect ratio for a nonspherical particle. These results show broad agreement with experimental observations and may have general implications on interaction between nanoparticles and animal cells.
NASA Astrophysics Data System (ADS)
Shi, Wendong; Wang, Jizeng; Fan, Xiaojun; Gao, Huajian
2008-12-01
A mechanics model describing how a cell membrane with diffusive mobile receptors wraps around a ligand-coated cylindrical or spherical particle has been recently developed to model the role of particle size in receptor-mediated endocytosis. The results show that particles in the size range of tens to hundreds of nanometers can enter cells even in the absence of clathrin or caveolin coats. Here we report further progress on modeling the effects of size and shape in diffusion, interaction, and absorption of finite-sized colloidal particles near a partially absorbing sphere. Our analysis indicates that, from the diffusion and interaction point of view, there exists an optimal hydrodynamic size of particles, typically in the nanometer regime, for the maximum rate of particle absorption. Such optimal size arises as a result of balance between the diffusion constant of the particles and the interaction energy between the particles and the absorbing sphere relative to the thermal energy. Particles with a smaller hydrodynamic radius have larger diffusion constant but weaker interaction with the sphere while larger particles have smaller diffusion constant but stronger interaction with the sphere. Since the hydrodynamic radius is also determined by the particle shape, an optimal hydrodynamic radius implies an optimal size as well as an optimal aspect ratio for a nonspherical particle. These results show broad agreement with experimental observations and may have general implications on interaction between nanoparticles and animal cells.
The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations
NASA Astrophysics Data System (ADS)
Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.
2017-09-01
We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.
Suppressing epidemics with a limited amount of immunization units.
Schneider, Christian M; Mihaljev, Tamara; Havlin, Shlomo; Herrmann, Hans J
2011-12-01
The way diseases spread through schools, epidemics through countries, and viruses through the internet is crucial in determining their risk. Although each of these threats has its own characteristics, its underlying network determines the spreading. To restrain the spreading, a widely used approach is the fragmentation of these networks through immunization, so that epidemics cannot spread. Here we develop an immunization approach based on optimizing the susceptible size, which outperforms the best known strategy based on immunizing the highest-betweenness links or nodes. We find that the network's vulnerability can be significantly reduced, demonstrating this on three different real networks: the global flight network, a school friendship network, and the internet. In all cases, we find that not only is the average infection probability significantly suppressed, but also for the most relevant case of a small and limited number of immunization units the infection probability can be reduced by up to 55%.
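A greedy version of "optimizing the susceptible size" is straightforward to sketch with networkx: spend each immunization unit on the node whose removal most reduces the largest connected component, which is the susceptible cluster. The random test graph and the node-based (rather than link-based) variant are assumptions for illustration; the paper's actual optimization differs.

import networkx as nx

G = nx.erdos_renyi_graph(200, 0.03, seed=1)  # toy stand-in for a real network

def largest_component(H):
    return max((len(c) for c in nx.connected_components(H)), default=0)

budget = 10  # limited number of immunization units
for _ in range(budget):
    # Pick the node whose removal shrinks the susceptible cluster the most.
    best = min(G.nodes,
               key=lambda v: largest_component(nx.restricted_view(G, [v], [])))
    G.remove_node(best)
print("largest susceptible cluster after immunization:", largest_component(G))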
Near-Field Phase-Change Optical Recording of 1.36 Numerical Aperture
NASA Astrophysics Data System (ADS)
Ichimura, Isao; Kishima, Koichiro; Osato, Kiyoshi; Yamamoto, Kenji; Kuroda, Yuji; Saito, Kimihiro
2000-02-01
A bit length of 125 nm was demonstrated through near-field phase-change (PC) optical recording at a wavelength of 657 nm by using a supersphere solid immersion lens (SIL). The lens unit consists of a standard objective and a φ2.5 mm SIL. Since this lens size still prevents the unit from being mounted on an air-bearing slider, we developed a one-axis positioning actuator and an active capacitance servo for precise gap control to thoroughly investigate near-field recording. An electrode was fabricated on the bottom of the SIL, and a capacitor was formed facing a disk material. This setup realized a stable air gap below 50 nm, and a new method of simulating the modulation transfer function (MTF) optimized the PC disk structure at this gap height. An obtained jitter of 8.8% and a clear eye pattern prove that our system successfully attained the designed numerical aperture (NA) of 1.36.
Optimal input sizes for neural network de-interlacing
NASA Astrophysics Data System (ADS)
Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee
2009-02-01
Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.
Harvey, R.W.; Kinner, N.E.; Bunn, A.; MacDonald, D.; Metge, D.
1995-01-01
Transport behaviors of unidentified flagellated protozoa (flagellates) and flagellate-sized carboxylated microspheres in sandy, organically contaminated aquifer sediments were investigated in a small-scale (1- to 4-m travel distance) natural-gradient tracer test on Cape Cod and in flow-through columns packed with sieved (0.5- to 1.0-mm grain size) aquifer sediments. The minute (average in situ cell size, 2 to 3 μm) flagellates, which are relatively abundant in the Cape Cod aquifer, were isolated from core samples, grown in a grass extract medium, labeled with hydroethidine (a vital eukaryotic stain), and coinjected into aquifer sediments along with bromide, a conservative tracer. The 2-μm flagellates appeared to be near the optimal size for transport, judging from flow-through column experiments involving a polydispersed (0.7 to 6.2 μm in diameter) suspension of carboxylated microspheres. However, immobilization within the aquifer sediments accounted for a log unit reduction over the first meter of travel, compared with a log unit reduction over the first 10 m of travel for indigenous, free-living groundwater bacteria in earlier tests. High rates of flagellate immobilization in the presence of aquifer sediments were also observed in the laboratory. However, immobilization rates for the laboratory-grown flagellates (initially 4 to 5 μm) injected into the aquifer were not constant and decreased noticeably with increasing time and distance of travel. The decrease in propensity for grain surfaces was accompanied by a decrease in cell size, as the flagellates presumably readapted to aquifer conditions. Retardation and apparent dispersion were generally at least twofold greater than those observed earlier for indigenous groundwater bacteria but were much closer to those observed for highly surface-active carboxylated latex microspheres. Field and laboratory results suggest that 2-μm carboxylated microspheres may be useful as analogs in investigating several abiotic aspects of flagellate transport behavior in groundwater.
NASA Astrophysics Data System (ADS)
Kazemzadeh Azad, Saeid
2018-01-01
In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
[Diagnosis and the technology for optimizing the medical support of a troop unit].
Korshever, N G; Polkovov, S V; Lavrinenko, O V; Krupnov, P A; Anastasov, K N
2000-05-01
The work investigates the system of military unit medical support using the principles and methods of organizational diagnosis, develops a method for assessing its functional activity, and determines directions for optimization. Based on the organizational diagnosis and an expert inquiry, informative criteria were identified that characterize the stages of functioning of the military unit medical support system. To evaluate the success of military unit medical support, a comprehensive multi-criteria model was developed and an algorithm for optimizing this process was substantiated. Using the results obtained, in particular a software implementation of the principles and methods of decision theory, it is possible to solve the more complex problem of comparing any number of military units: ranking them in order of decreasing priority, selecting a given number of the best and worst, and determining directions for optimizing the activity of the corresponding medical service personnel.
NASA Technical Reports Server (NTRS)
Parker, T. J.; Pieri, D. C.
1985-01-01
In assessing the relative ages of the geomorphic/geologic units, crater counts covering each entire unit (or nearly so) were made and summed in order to obtain a more accurate value than is obtainable from counts of isolated sections of each unit. Cumulative size-frequency counts show some interesting relationships. Most of the units show two distinct crater populations, with a flattening of the distribution curve at and below 10-km-diameter craters. Above this crater size the curves for the different units diverge most notably. In general, the variance may reflect the relative ages of these units. At times, however, in the larger crater size range, these curves can overlap and cross one another. Also, the error bars at these larger sizes are broader (and thus more suspect), since counts of larger craters show more scatter, whereas the unit areas remain constant. Occasional clusters of relatively large craters within a given unit, particularly one of limited areal extent, can affect the curve so that the unit might seem to be older than units which it overlies or cuts.
Lin, Chenxi; Martínez, Luis Javier; Povinelli, Michelle L
2013-09-09
We design silicon membranes patterned with nanohole structures whose optimized complex unit cells maximize broadband absorption. We fabricate the optimized design and measure the optical absorption, demonstrating experimental broadband absorption about 3.5 times higher than that of an equally thick thin film.
Conditional Optimal Design in Three- and Four-Level Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.; Borenstein, Michael
2014-01-01
The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
Optimal web investment in sub-optimal foraging conditions.
Harmer, Aaron M T; Kokko, Hanna; Herberstein, Marie E; Madin, Joshua S
2012-01-01
Orb web spiders sit at the centre of their approximately circular webs when waiting for prey and so face many of the same challenges as central-place foragers. Prey value decreases with distance from the hub as a function of prey escape time. The further from the hub that prey are intercepted, the longer it takes a spider to reach them and the greater chance they have of escaping. Several species of orb web spiders build vertically elongated ladder-like orb webs against tree trunks, rather than circular orb webs in the open. As ladder web spiders invest disproportionately more web area further from the hub, it is expected they will experience reduced prey gain per unit area of web investment compared to spiders that build circular webs. We developed a model to investigate how building webs in the space-limited microhabitat on tree trunks influences the optimal size, shape and net prey gain of arboricolous ladder webs. The model suggests that as horizontal space becomes more limited, optimal web shape becomes more elongated, and optimal web area decreases. This change in web geometry results in decreased net prey gain compared to webs built without space constraints. However, when space is limited, spiders can achieve higher net prey gain compared to building typical circular webs in the same limited space. Our model shows how spiders optimise web investment in sub-optimal conditions and can be used to understand foraging investment trade-offs in other central-place foragers faced with constrained foraging arenas.
Optimal web investment in sub-optimal foraging conditions
NASA Astrophysics Data System (ADS)
Harmer, Aaron M. T.; Kokko, Hanna; Herberstein, Marie E.; Madin, Joshua S.
2012-01-01
Orb web spiders sit at the centre of their approximately circular webs when waiting for prey and so face many of the same challenges as central-place foragers. Prey value decreases with distance from the hub as a function of prey escape time. The further from the hub that prey are intercepted, the longer it takes a spider to reach them and the greater chance they have of escaping. Several species of orb web spiders build vertically elongated ladder-like orb webs against tree trunks, rather than circular orb webs in the open. As ladder web spiders invest disproportionately more web area further from the hub, it is expected they will experience reduced prey gain per unit area of web investment compared to spiders that build circular webs. We developed a model to investigate how building webs in the space-limited microhabitat on tree trunks influences the optimal size, shape and net prey gain of arboricolous ladder webs. The model suggests that as horizontal space becomes more limited, optimal web shape becomes more elongated, and optimal web area decreases. This change in web geometry results in decreased net prey gain compared to webs built without space constraints. However, when space is limited, spiders can achieve higher net prey gain compared to building typical circular webs in the same limited space. Our model shows how spiders optimise web investment in sub-optimal conditions and can be used to understand foraging investment trade-offs in other central-place foragers faced with constrained foraging arenas.
Optimal visual-haptic integration with articulated tools.
Takahashi, Chie; Watt, Simon J
2017-05-01
When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and, therefore, in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial by trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
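The maximum-likelihood benchmark used here has a closed form worth stating: with independent Gaussian visual and haptic estimates, the optimal combined estimate is the reliability-weighted average, and its variance is smaller than either cue's alone. A minimal Python sketch with invented numbers:

def integrate(mu_v, var_v, mu_h, var_h):
    # Reliability (inverse-variance) weighting of the two cues.
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
    mu = w_v * mu_v + (1 - w_v) * mu_h
    var = (var_v * var_h) / (var_v + var_h)  # always <= min(var_v, var_h)
    return mu, var

mu, var = integrate(mu_v=50.0, var_v=4.0, mu_h=54.0, var_h=9.0)
print(f"combined size estimate: {mu:.1f} mm, variance {var:.2f}")  # 51.2 mm, 2.77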
Optimal load scheduling in commercial and residential microgrids
NASA Astrophysics Data System (ADS)
Ganji Tanha, Mohammad Mahdi
Residential and commercial electricity customers use more than two-thirds of the total energy consumed in the United States, representing a significant resource of demand response. Price-based demand response, in which load adjusts in response to changes in electricity prices, is realized through optimal load scheduling (OLS). In this study, an efficient model for OLS is developed for residential and commercial microgrids which include aggregated loads in single units and communal loads. Single-unit loads, which include fixed, adjustable, and shiftable loads, are controllable by the unit occupants. Communal loads, which include pool pumps, elevators, and central heating/cooling systems, are shared among the units. In order to optimally schedule residential and commercial loads, a community-based optimal load scheduling (CBOLS) approach is proposed in this thesis. The CBOLS schedule considers hourly market prices, occupants' comfort level, and microgrid operation constraints. The CBOLS objective in residential and commercial microgrids is the constrained minimization of the total cost of supplying the aggregator load, defined as the microgrid load minus the microgrid generation. This problem is represented by a large-scale mixed-integer optimization for supplying single-unit and communal loads. The Lagrangian relaxation methodology is used to relax the linking communal-load constraint and decompose the problem into independent single-unit subproblems which can be solved in parallel. The optimal solution is acceptable if the aggregator load limit and the duality gap are within the bounds. If either criterion is not satisfied, the Lagrangian multiplier is updated and a new optimal load schedule is generated until both constraints are satisfied. The proposed method is applied to several case studies and the results are presented for the Galvin Center load on the 16th floor of the IIT Tower in Chicago.
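The price-responsive piece of the problem can be illustrated without the full mixed-integer machinery: given hourly prices, each shiftable appliance run is placed in its cheapest allowed hour. The appliances, windows, and tariff below are invented, and the communal-load coupling that motivates the Lagrangian decomposition is omitted.

# Hourly prices in $/kWh (invented two-tier tariff for hours 0..23).
prices = [0.10] * 7 + [0.22] * 12 + [0.12] * 5

def schedule(shiftable):
    # Place each appliance's run in the cheapest hour of its allowed window.
    plan = {}
    for name, (kwh, window) in shiftable.items():
        hour = min(window, key=lambda h: prices[h])
        plan[name] = (hour, kwh * prices[hour])
    return plan

appliances = {"dishwasher": (1.2, range(20, 24)),
              "washer": (0.8, range(6, 22)),
              "ev_charge": (7.0, range(0, 24))}
for name, (hour, cost) in schedule(appliances).items():
    print(f"{name:10s} -> hour {hour:2d}, cost ${cost:.2f}")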
NASA Astrophysics Data System (ADS)
Panda, S.; Saha, S.; Basu, M.
2013-01-01
Product perishability is an important aspect of inventory control. To minimise the effect of deterioration, retailers in supermarkets, department store managers, etc., always want a higher inventory depletion rate. In this article, we propose a dynamic pre- and post-deterioration cumulative discount policy to enhance the inventory depletion rate, resulting in a low volume of deterioration cost and holding cost and hence higher profit. It is assumed that demand is a price- and time-dependent ramp-type function and that the product starts to deteriorate after a certain amount of time. Unlike conventional inventory models with pricing strategies, which are restricted to a fixed number of price changes and a fixed cycle length, we allow the number of price changes before as well as after the start of deterioration and the replenishment cycle length to be decision variables. Before the start of deterioration, discounts on the unit selling price are provided cumulatively in successive pricing cycles. After the start of deterioration, discounts on the reduced unit selling price are also provided in a cumulative way. A mathematical model is developed and the existence of the optimal solution is verified. A numerical example is presented, which indicates that under the cumulative effect of price discounting, the dynamic pricing policy outperforms the static pricing strategy. Sensitivity analysis of the model is carried out.
24 CFR 886.325 - Overcrowded and underoccupied units.
Code of Federal Regulations, 2010 CFR
2010-04-01
... of a change in family composition and shall transfer to an appropriate size dwelling unit, based on.... Such a family shall have priority over a family on the owner's waiting list seeking the same size unit... notification by the family of a change in the family size, the owner agrees to offer the family a suitable unit...
Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; Tiedje, James M.
2016-01-01
We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits, and from 10 and 100 g samples using a bead-beating method (SARDI), was used as template for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion, and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness, and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities, retrieving optimal diversity while still capturing rarer taxa and decreasing replicate variation. PMID:27313569
NASA Technical Reports Server (NTRS)
1972-01-01
The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.
Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul
2014-01-01
This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem. PMID:25054184
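The two enhancements named above can be illustrated in a few lines. The sketch below, in Python with NumPy, shows opposition-based initialization (keeping the better of each point/opposite pair) and a single firefly move with an inertia weight on the randomization term; the objective and all parameter values are placeholders, not the authors' BESS sizing model.

    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):                 # stand-in for the BESS sizing cost
        return np.sum(x**2)

    def opposition_init(n, dim, lb, ub):
        pop = rng.uniform(lb, ub, (n, dim))
        opp = lb + ub - pop           # opposite points
        both = np.vstack([pop, opp])
        fit = np.array([objective(x) for x in both])
        return both[np.argsort(fit)[:n]]   # keep the n best of the 2n

    def firefly_step(pop, alpha=0.2, beta0=1.0, gamma=1.0, w=0.9):
        fit = np.array([objective(x) for x in pop])
        new = pop.copy()
        for i in range(len(pop)):
            for j in range(len(pop)):
                if fit[j] < fit[i]:   # move i toward the brighter firefly j
                    beta = beta0 * np.exp(-gamma * np.sum((pop[i] - pop[j])**2))
                    new[i] += beta * (pop[j] - new[i]) \
                              + w * alpha * (rng.random(pop.shape[1]) - 0.5)
        return new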
Structure of n-alkyltrichlorosilane monolayers on Si(100)/SiO2
H. -G. Steinruck; Ocko, B.; Will, J.; ...
2015-10-05
The structure of n-alkyltrichlorosilane self-assembled monolayers (SAMs) of alkyl chain lengths n = 12, 14, 18, and 22 formed on the amorphous native oxide of silicon (100) has been investigated via angstrom-resolution surface X-ray scattering techniques, with particular focus on the proliferation of lateral order along the molecules’ long axis. Grazing incidence diffraction shows that the monolayer is composed of hexagonally packed crystalline-like domains for n = 14, 18, and 22 with a lateral size of about 60 Å. However, Bragg rod analysis shows that ~12 of the CH2 units are not included in the crystalline-like domains. We assign this, and the limited lateral crystallites’ size, to strain induced by the size mismatch between the optimal chain–chain and headgroup–headgroup spacings. Lastly, analysis of X-ray reflectivity profiles for n = 12, 14, and 22 shows that the density profile used to successfully model n = 18 provides an excellent fit where the analysis-derived parameters provide complementary structural information to the grazing incidence results.
NASA Astrophysics Data System (ADS)
Mihn, Byeong-Hee; Lee, Ki-Won; Ahn, Young Sook; Lee, Yong Sam
2015-03-01
During the reign of King Sejong (世宗, 1418-1450) in the Joseon Dynasty, many astronomical instruments were built, including miniaturized ones. Those instruments utilized the technical know-how acquired through building contemporary astronomical instruments previously developed in the Song (宋), Jin (金), and Yuan (元) dynasties of China. In those days, many astronomical instruments had circles, rings, and spheres carved with scales of 365.25, 100, and 24 parts, respectively, on their circumference. These were called the celestial-circumference degree, the hundred-interval (Baekgak), and the 24 directions, respectively. These scales are marked by angular distance, not by angle. Therefore, these circles, rings, and spheres had to be optimized in size to accommodate the proper scales. Assuming that the scale system is composed of integer multiples of a unit length, we studied the sizes of the circles by referring to old articles and investigating existing artifacts. We discovered that the star chart of Cheonsang yeolcha bunyajido was drawn with a royal standard ruler (周尺) based on a unit length of 207 mm. Interestingly, its circumference was marked by a unit scale of 3 puns per 1 du (or degree), like Honsang (a celestial globe). We also found that Hyeonju ilgu (an equatorial sundial) has a Baekgak disk on a scale of 1 pun per 1 gak (a time interval similar to a quarter of an hour). This study contributes to the analysis of the specifications of numerous circular elements of old Korean astronomical instruments.
Nursing Unit Design, Nursing Staff Communication Networks, and Patient Falls: Are They Related?
Brewer, Barbara B; Carley, Kathleen M; Benham-Hutchins, Marge; Effken, Judith A; Reminga, Jeffrey
2018-01-01
The purpose of this research is to (1) investigate the impact of nursing unit design on nursing staff communication patterns and, ultimately, on patient falls in acute care nursing units; and (2) evaluate whether differences in fall rates, if found, were associated with the nursing unit physical structure (shape) or size. Nursing staff communication and nursing unit design are frequently linked to patient safety outcomes, yet little is known about the impact of specific nursing unit designs on nursing communication patterns that might affect patient falls. An exploratory longitudinal correlational design was used to measure nursing unit communication structures using social network analysis techniques. Data were collected 4 times over a 7-month period. Floor plans were used to determine nursing unit design. Fall rates were provided by hospital coordinators. An analysis of covariance controlling for hospitals resulted in a statistically significant interaction of unit shape and size (number of beds). The interaction occurred when medium- and large-sized racetrack-shaped units intersected with medium- and large-sized cross-shaped units. The results suggest that nursing unit design shape impacts nursing communication patterns, and the interaction of shape and size may impact patient falls. How those communication patterns affect patient falls should be considered when planning hospital construction of nursing care units.
Validation of the Gatortail method for accurate sizing of pulmonary vessels from 3D medical images.
O'Dell, Walter G; Gormaley, Anne K; Prida, David A
2017-12-01
Detailed characterization of changes in vessel size is crucial for the diagnosis and management of a variety of vascular diseases. Because clinical measurement of vessel size is typically dependent on the radiologist's subjective interpretation of the vessel borders, it is often prone to high inter- and intra-user variability. Automatic methods of vessel sizing have been developed for two-dimensional images but a fully three-dimensional (3D) method suitable for vessel sizing from volumetric X-ray computed tomography (CT) or magnetic resonance imaging has heretofore not been demonstrated and validated robustly. In this paper, we refined and objectively validated Gatortail, a method that creates a mathematical geometric 3D model of each branch in a vascular tree, simulates the appearance of the virtual vascular tree in a 3D CT image, and uses the similarity of the simulated image to a patient's CT scan to drive the optimization of the model parameters, including vessel size, to match that of the patient. The method was validated with a 2-dimensional virtual tree structure under deformation, and with a realistic 3D-printed vascular phantom in which the diameters of 64 branches were manually measured 3 times each. The phantom was then scanned on a conventional clinical CT imaging system and the images processed with the in-house software to automatically segment and mathematically model the vascular tree, label each branch, and perform the Gatortail optimization of branch size and trajectory. Previously proposed methods of vessel sizing using matched Gaussian filters and tubularity metrics were also tested. The Gatortail method was then demonstrated on the pulmonary arterial tree segmented from a human volunteer's CT scan. The standard deviation of the difference between the manually measured and Gatortail-based radii in the 3D physical phantom was 0.074 mm (0.087 in-plane pixel units for image voxels of dimension 0.85 × 0.85 × 1.0 mm) over the 64 branches, representing vessel diameters ranging from 1.2 to 7 mm. The linear regression fit gave a slope of 1.056 and an R² value of 0.989. These three metrics reflect superior agreement of the radii estimates relative to previously published results over all sizes tested. Sizing via matched Gaussian filters resulted in size underestimates of >33% over all three test vessels, while the tubularity-metric matching exhibited a sizing uncertainty of >50%. In the human chest CT data set, the vessel voxel intensity profiles with and without branch model optimization showed excellent agreement and improvement in the objective measure of image similarity. Gatortail has been demonstrated to be an automated, objective, accurate and robust method for sizing of vessels in 3D non-invasively from chest CT scans. We anticipate that Gatortail, an image-based approach to automatically compute estimates of blood vessel radii and trajectories from 3D medical images, will facilitate future quantitative evaluation of vascular response to disease and environmental insult and improve understanding of the biological mechanisms underlying vascular disease processes. © 2017 American Association of Physicists in Medicine.
Kinship-based politics and the optimal size of kin groups
Hammel, E. A.
2005-01-01
Kin form important political groups, which change in size and relative inequality with demographic shifts. Increases in the rate of population growth increase the size of kin groups but decrease their inequality and vice versa. The optimal size of kin groups may be evaluated from the marginal political product (MPP) of their members. Culture and institutions affect levels and shapes of MPP. Different optimal group sizes, from different perspectives, can be suggested for any MPP schedule. The relative dominance of competing groups is determined by their MPP schedules. Groups driven to extremes of sustainability may react in Malthusian fashion, including fission and fusion, or in Boserupian fashion, altering social technology to accommodate changes in size. The spectrum of alternatives for actors and groups, shaped by existing institutions and natural and cultural selection, is very broad. Nevertheless, selection may result in survival of particular kinds of political structures. PMID:16091466
Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.
Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo
2011-06-01
This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.
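For readers unfamiliar with the tile counting method, the following Python sketch computes a fractal dimension by counting the tiles that contain trabecular pattern and regressing log N(s) on log(1/s). The tile sides in pixels are an assumption: at roughly 0.044 mm per pixel, sides of 3-9 pixels would span about the 0.132-0.396 mm optimum reported above.

    import numpy as np

    def fractal_dimension(binary_img, tile_sides=(3, 4, 5, 6, 7, 8, 9)):
        counts = []
        for s in tile_sides:
            h, w = binary_img.shape
            n = 0
            for r in range(0, h - s + 1, s):        # tile the image
                for c in range(0, w - s + 1, s):
                    if binary_img[r:r+s, c:c+s].any():
                        n += 1                      # tile contains pattern
            counts.append(n)
        # Fractal dimension = slope of log N(s) versus log(1/s).
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(tile_sides, float)),
                              np.log(counts), 1)
        return slope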
Boccaccio, Antonio; Fiorentino, Michele; Uva, Antonio E; Laghetti, Luca N; Monno, Giuseppe
2018-02-01
In a context increasingly oriented towards customized medical solutions, we propose a mechanobiology-driven algorithm to determine the scaffold geometry for bone regeneration best suited to specific boundary and loading conditions. In spite of the huge number of articles investigating different unit cells for porous biomaterials, no studies reported in the literature optimize the geometric parameters of such unit cells based on mechanobiological criteria. Parametric finite element models of scaffolds with a rhombicuboctahedron unit cell were developed and incorporated into an optimization algorithm that combines them with a computational mechanobiological model. The algorithm iteratively perturbs the geometry of the unit cell until the best scaffold geometry is identified, i.e. the geometry that maximizes the formation of bone. The performance of scaffolds with the rhombicuboctahedron unit cell was compared with that of other scaffolds with hexahedron unit cells. We found that scaffolds with the rhombicuboctahedron unit cell are particularly suited for supporting medium-low loads, while, for higher loads, scaffolds with hexahedron unit cells are preferable. The proposed algorithm can guide the orthopaedic surgeon in the choice of the best scaffold to be implanted in a patient-specific anatomic region. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hu, K. M.; Li, Hua
2018-07-01
A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and a multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply-supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position and size optimization decreased by 14.0% compared to position-only optimization; those of position and tilt angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.
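A geometric non-overlap constraint of this kind can be folded into a genetic algorithm as a penalty on infeasible layouts. The sketch below approximates each tilted patch by its circumscribed circle on the unrolled shell surface and penalizes intersecting circles; this is a deliberate simplification of the paper's tilted-rectangle constraint, and all names are illustrative.

    import numpy as np

    def circumscribed_radius(length, width):
        return 0.5 * np.hypot(length, width)

    def overlap_penalty(patches, big=1e6):
        # patches: list of (x, y, length, width, tilt) on the unrolled surface;
        # tilt is ignored here because the circle bound already covers any angle.
        pen = 0.0
        for i in range(len(patches)):
            for j in range(i + 1, len(patches)):
                xi, yi, li, wi, _ = patches[i]
                xj, yj, lj, wj, _ = patches[j]
                gap = (np.hypot(xi - xj, yi - yj)
                       - circumscribed_radius(li, wi)
                       - circumscribed_radius(lj, wj))
                if gap < 0:                 # circles intersect: infeasible
                    pen += big * (-gap)     # added to the GA fitness
        return pen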
Salis, Michele; Del Giudice, Liliana; Arca, Bachisio; Ager, Alan A; Alcasena-Urdiroz, Fermin; Lozano, Olga; Bacciu, Valentina; Spano, Donatella; Duce, Pierpaolo
2018-04-15
Wildfire spread and behavior can be limited by fuel treatments, even if their effects vary according to a number of factors including type, intensity, extension, and spatial arrangement. In this work, we simulated the response of key wildfire exposure metrics to variations in the percentage of treated area, treatment unit size, and spatial arrangement of fuel treatments under different wind intensities. The study was carried out in a fire-prone 625 km² agro-pastoral area mostly covered by herbaceous fuels and located in Northern Sardinia, Italy. We constrained the selection of fuel treatment units to areas covered by specific herbaceous land use classes and low terrain slope (<10%). We treated 2%, 5% and 8% of the landscape area, and identified priority sites to locate the fuel treatment units for all treatment alternatives. The fuel treatment alternatives were designed to create diverse mosaics of disconnected treatment units with different sizes (0.5-10 ha, LOW strategy; 10-25 ha, MED strategy; 25-50 ha, LAR strategy); in addition, treatment units in a 100-m buffer around the road network (ROAD strategy) were tested. We assessed pre- and post-treatment wildfire behavior with the Minimum Travel Time (MTT) fire spread algorithm. The simulations replicated a set of southwestern wind speed scenarios (16, 24 and 32 km h⁻¹) and the driest fuel moisture conditions observed in the study area. Our results showed that fuel treatments implemented near the existing road network were significantly more efficient than the other alternatives, and this difference was amplified at the highest wind speed. Moreover, the largest treatment unit sizes were the most effective in containing wildfire growth. As expected, increasing the percentage of the landscape treated and reducing wind speed lowered fire exposure profiles for all fuel treatment alternatives, and this was observed both at the landscape scale and for highly valued resources. The methodology presented in this study can support the design and optimization of fuel management programs and policies in agro-pastoral areas of the Mediterranean Basin and herbaceous landscapes elsewhere, where recurrent grassland fires pose a threat to rural communities, farms and infrastructures. Copyright © 2018 Elsevier Ltd. All rights reserved.
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal with respect to two competing aircraft performance objectives. The second area, titled "morphing as an independent variable," formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal in settings where current evidence would be deemed sufficient under the assumption of no between-study variation. However, despite the increase in expected net gain, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Probability distribution functions for unit hydrographs with optimization using genetic algorithm
NASA Astrophysics Data System (ADS)
Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh
2017-05-01
A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability to predict both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
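As a concrete example of transmuting a UH into a pdf, the fragment below fits a two-parameter gamma distribution to UH ordinates by nonlinear least squares with SciPy; the ordinates are invented for illustration, not the Lighvan data.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import gamma

    t = np.arange(1.0, 13.0)                       # time, hours
    uh = np.array([0.02, 0.10, 0.22, 0.30, 0.28,   # illustrative UH ordinates
                   0.22, 0.15, 0.10, 0.06, 0.04, 0.02, 0.01])

    def gamma_uh(t, shape, scale):
        # Candidate pdf whose shape and scale are the decision variables.
        return gamma.pdf(t, a=shape, scale=scale)

    (shape, scale), _ = curve_fit(gamma_uh, t, uh, p0=(2.0, 2.0))
    print(f"fitted gamma: shape={shape:.2f}, scale={scale:.2f}")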
Mullen, Lewis; Stamp, Robin C; Brooks, Wesley K; Jones, Eric; Sutcliffe, Christopher J
2009-05-01
In this study, a novel porous titanium structure for the purpose of bone in-growth has been designed, manufactured and evaluated. The structure was produced by Selective Laser Melting (SLM), a rapid manufacturing process capable of producing highly intricate, functionally graded parts. The technique described utilizes an approach based on a defined regular unit cell to design and produce structures with a large range of both physical and mechanical properties. These properties can be tailored to suit specific requirements; in particular, functionally graded structures with bone in-growth surfaces exhibiting properties comparable to those of human bone have been manufactured. The structures were manufactured and characterized by unit cell size, strand diameter, porosity, and compression strength. They exhibited a porosity-dependent (10-95%) compression strength (0.5-350 MPa) comparable to the typical naturally occurring range. It is also demonstrated that optimized structures have been produced that possess ideal qualities for bone in-growth applications and that these structures can be applied in the production of orthopedic devices. (c) 2008 Wiley Periodicals, Inc.
Effect of Data Assimilation Parameters on The Optimized Surface CO2 Flux in Asia
NASA Astrophysics Data System (ADS)
Kim, Hyunjung; Kim, Hyun Mee; Kim, Jinwoong; Cho, Chun-Ho
2018-02-01
In this study, CarbonTracker, an inverse modeling system based on the ensemble Kalman filter, was used to evaluate the effects of data assimilation parameters (assimilation window length and ensemble size) on the estimation of surface CO2 fluxes in Asia. Several experiments with different parameters were conducted, and the results were verified using CO2 concentration observations. The assimilation window lengths tested were 3, 5, 7, and 10 weeks, and the ensemble sizes were 100, 150, and 300; a total of 12 experiments using combinations of these parameters were therefore conducted. The experimental period was from January 2006 to December 2009. Differences between the optimized surface CO2 fluxes of the experiments were largest in the Eurasian Boreal (EB) area, followed by Eurasian Temperate (ET) and Tropical Asia (TA), and were larger in boreal summer than in boreal winter. The effect of ensemble size on the optimized biosphere flux is larger than that of the assimilation window length in Asia as a whole, but their relative importance varies across specific regions. The optimized biosphere flux was more sensitive to the assimilation window length in EB, whereas it was sensitive to the ensemble size as well as the assimilation window length in ET. The larger the ensemble size and the shorter the assimilation window length, the larger the uncertainty (i.e., ensemble spread) of the optimized surface CO2 fluxes. The 10-week assimilation window and the ensemble size of 300 formed the optimal configuration for CarbonTracker in the Asian region, based on several verifications using CO2 concentration measurements.
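A generic stochastic ensemble-Kalman-filter update, of the kind underlying CarbonTracker, shows where the two parameters enter: the ensemble size fixes the number of columns of X, and the assimilation window fixes which observations y are assimilated together. The dimensions and operators below are illustrative, not CarbonTracker's actual configuration.

    import numpy as np

    rng = np.random.default_rng(1)

    def enkf_update(X, y, H, r):
        # X: (n_state, n_ens) ensemble of flux scaling factors
        # y: (n_obs,) CO2 observations within the assimilation window
        # H: (n_obs, n_state) linearized observation (transport) operator
        # r: observation-error variance
        n_obs, n_ens = len(y), X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
        HA = H @ A
        P_yy = HA @ HA.T / (n_ens - 1) + r * np.eye(n_obs)
        P_xy = A @ HA.T / (n_ens - 1)
        K = P_xy @ np.linalg.inv(P_yy)               # Kalman gain
        Y = y[:, None] + rng.normal(0.0, np.sqrt(r), (n_obs, n_ens))
        return X + K @ (Y - H @ X)                   # updated ensemble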
Helgeson, Melvin D; Kang, Daniel G; Lehman, Ronald A; Dmitriev, Anton E; Luhmann, Scott J
2013-08-01
There is currently no reliable technique for intraoperative assessment of pedicle screw fixation strength and optimal screw size. Several studies have evaluated pedicle screw insertional torque (IT) and its direct correlation with pullout strength. However, there is limited clinical application with pedicle screw IT as it must be measured during screw placement and rarely causes the spine surgeon to change screw size. To date, no study has evaluated tapping IT, which precedes screw insertion, and its ability to predict pedicle screw pullout strength. The objective of this study was to investigate tapping IT and its ability to predict pedicle screw pullout strength and optimal screw size. In vitro human cadaveric biomechanical analysis. Twenty fresh-frozen human cadaveric thoracic vertebral levels were prepared and dual-energy radiographic absorptiometry scanned for bone mineral density (BMD). All specimens were osteoporotic with a mean BMD of 0.60 ± 0.07 g/cm(2). Five specimens (n=10) were used to perform a pilot study, as there were no previously established values for optimal tapping IT. Each pedicle during the pilot study was measured using a digital caliper as well as computed tomography measurements, and the optimal screw size was determined to be equal to or the first size smaller than the pedicle diameter. The optimal tap size was then selected as the tap diameter 1 mm smaller than the optimal screw size. During optimal tap size insertion, all peak tapping IT values were found to be between 2 in-lbs and 3 in-lbs. Therefore, the threshold tapping IT value for optimal pedicle screw and tap size was determined to be 2.5 in-lbs, and a comparison tapping IT value of 1.5 in-lbs was selected. Next, 15 test specimens (n=30) were measured with digital calipers, probed, tapped, and instrumented using a paired comparison between the two threshold tapping IT values (Group 1: 1.5 in-lbs; Group 2: 2.5 in-lbs), randomly assigned to the left or right pedicle on each specimen. Each pedicle was incrementally tapped to increasing size (3.75, 4.00, 4.50, and 5.50 mm) until the threshold value was reached based on the assigned group. Pedicle screw size was determined by adding 1 mm to the tap size that crossed the threshold torque value. Torque measurements were recorded with each revolution during tap and pedicle screw insertion. Each specimen was then individually potted and pedicle screws pulled out "in-line" with the screw axis at a rate of 0.25 mm/sec. Peak pullout strength (POS) was measured in Newtons (N). The peak tapping IT was significantly increased (50%) in Group 2 (3.23 ± 0.65 in-lbs) compared with Group 1 (2.15 ± 0.56 in-lbs) (p=.0005). The peak screw IT was also significantly increased (19%) in Group 2 (8.99 ± 2.27 in-lbs) compared with Group 1 (7.52 ± 2.96 in-lbs) (p=.02). The pedicle screw pullout strength was also significantly increased (23%) in Group 2 (877.9 ± 235.2 N) compared with Group 1 (712.3 ± 223.1 N) (p=.017). The mean pedicle screw diameter was significantly increased in Group 2 (5.70 ± 1.05 mm) compared with Group 1 (5.00 ± 0.80 mm) (p=.0002). There was also an increased rate of optimal pedicle screw size selection in Group 2 with 9 of 15 (60%) pedicle screws compared with Group 1 with 4 of 15 (26.7%) pedicle screws within 1 mm of the measured pedicle width. There was a moderate correlation for tapping IT with both screw IT (r=0.54; p=.002) and pedicle screw POS (r=0.55; p=.002). 
Our findings suggest that tapping IT directly correlates with pedicle screw IT, pedicle screw pullout strength, and optimal pedicle screw size. Therefore, tapping IT may be used during thoracic pedicle screw instrumentation as an adjunct to preoperative imaging and clinical experience to maximize fixation strength and optimize pedicle "fit and fill" with the largest screw possible. However, further prospective, in vivo studies are necessary to evaluate the intraoperative use of tapping IT to predict screw loosening/complications. Published by Elsevier Inc.
A segmentation approach for a delineation of terrestrial ecoregions
NASA Astrophysics Data System (ADS)
Nowosad, J.; Stepinski, T.
2017-12-01
Terrestrial ecoregions are the result of regionalizing land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250-meter cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square blocks of cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes the pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, climate, and, by inference, a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database each ecoregion/segment is described by numerous attributes, which makes it a valuable GIS resource for global ecological and conservation studies.
Does size matter? Animal units and animal unit months
Lamar Smith; Joe Hicks; Scott Lusk; Mike Hemmovich; Shane Green; Sarah McCord; Mike Pellant; John Mitchell; Judith Dyess; Jim Sprinkle; Amanda Gearhart; Sherm Karl; Mike Hannemann; Ken Spaeth; Jason Karl; Matt Reeves; Dave Pyke; Jordan Spaak; Andrew Brischke; Del Despain; Matt Phillippi; Dave Weixelmann; Alan Bass; Jessie Page; Lori Metz; David Toledo; Emily Kachergis
2017-01-01
The concepts of animal units, animal unit months, and animal unit equivalents have long been used as standards for range management planning, estimating stocking rates, reporting actual use, assessing grazing fees, ranch appraisal, and other purposes. Increasing size of cattle on rangelands has led some to suggest that the definition of animal units and animal unit...
Design of shared unit-dose drug distribution network using multi-level particle swarm optimization.
Chen, Linjie; Monteiro, Thibaud; Wang, Tao; Marcon, Eric
2018-03-01
Unit-dose drug distribution systems provide optimal choices in terms of medication security and efficiency for organizing the drug-use process in large hospitals. As small hospitals have to share such automatic systems for economic reasons, the structure of their logistic organization becomes a very sensitive issue. In the research reported here, we develop a generalized multi-level optimization method - multi-level particle swarm optimization (MLPSO) - to design a shared unit-dose drug distribution network. Structurally, the problem studied can be considered as a type of capacitated location-routing problem (CLRP) with new constraints related to specific production planning. This kind of problem implies that a multi-level optimization should be performed in order to minimize logistic operating costs. Our results show that with the proposed algorithm, a more suitable modeling framework, as well as computational time savings and better optimization performance are obtained than that reported in the literature on this subject.
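The PSO kernel nested at each level of such a scheme is compact. The sketch below is a plain single-level PSO over a placeholder quadratic objective; in an MLPSO-style design, one instance would search facility locations while inner instances evaluate routing and planning costs (all names here are illustrative, not the authors' implementation).

    import numpy as np

    rng = np.random.default_rng(2)

    def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lb=-5.0, ub=5.0):
        x = rng.uniform(lb, ub, (n, dim))      # particle positions
        v = np.zeros((n, dim))                 # particle velocities
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        g = pbest[pval.argmin()].copy()        # global best
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lb, ub)
            fx = np.array([f(p) for p in x])
            better = fx < pval
            pbest[better], pval[better] = x[better], fx[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    best, cost = pso(lambda p: np.sum(p**2), dim=4)   # toy usage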
Power smart in-door optical wireless link design
NASA Astrophysics Data System (ADS)
Marraccini, P. J.; Riza, N. A.
2011-12-01
Presented for the first time, to the best of the authors' knowledge, is the design of a power smart in-door optical wireless link that provides lossless beam propagation between Transmitter (T) and Receiver (R) for changing link distances. Each T/R unit uses a combination of fixed and variable focal length optics to smartly adjust the laser beam propagation parameters of minimum beam waist size and its location to produce the optimal zero-propagation-loss coupling condition at the R for that link distance. An Electronically Controlled Variable Focus Lens (ECVFL) is used to form the wide field-of-view search beam and to change the beam size at the R to form a low loss beam. The T/R unit can also deploy camera optics and thermal energy harvesting electronics to improve link operational smartness and efficiency. To demonstrate the principles of the beam-conditioned low loss indoor link, a visible 633 nm laser link using an electro-wetting technology liquid ECVFL is demonstrated for a variable 1 to 4 m link range. Measurements indicate a 53% improvement over an unconditioned laser link at 4 m. Applications for this power-efficient wireless link include mobile computer platform communications and agile server rack interconnections in data centres.
NASA Astrophysics Data System (ADS)
Howlader, Harun Or Rashid; Matayoshi, Hidehito; Noorzad, Ahmad Samim; Muarapaz, Cirio Celestino; Senjyu, Tomonobu
2018-05-01
This paper presents a smart house-based power system for a thermal unit commitment programme. The proposed power system consists of smart houses, renewable energy plants and conventional thermal units. Transmission constraints are considered for the proposed system: the power generated by a large-capacity renewable energy plant can violate transmission constraints in the thermal unit commitment programme, so these constraints must be taken into account. This paper focuses on the optimal operation of the thermal units incorporating controllable loads such as the electric vehicles and heat pump water heaters of the smart houses. The proposed method is compared with thermal unit operation without controllable loads and with optimal operation without the transmission constraints. Simulation results validate the proposed method.
Deeper and sparser nets are optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1998-03-01
The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (2) another one for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors extend a result from Horne and Hush (1994) valid for fan-in {Delta} = 2 to arbitrary fan-in. Second, they prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins ({Delta} = 6...9) there exist VLSI-optimal (i.e., minimizing AT{sup 2}) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).
NASA Technical Reports Server (NTRS)
Skillen, Michael D.; Crossley, William A.
2008-01-01
This report presents an approach for sizing of a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep 30 or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - these are common aircraft sizing variables, along with a set of "morphing limit" variables - these describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes fuel consumed during each mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.
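The two-level structure can be mimicked with a toy nested optimization: the top level chooses a baseline wing area and a morphing span, and each mission-segment subproblem picks the best area within those limits. The algebraic models below are placeholders invented for the sketch, not the report's sizing equations.

    from scipy.optimize import minimize, minimize_scalar

    SEGMENTS = [("climb", 1.8), ("cruise", 1.0), ("loiter", 0.6)]  # demand factors

    def segment_fuel(area, demand):
        # Convex placeholder: each segment has its own preferred wing area.
        return 0.002 * (area - 40.0 * demand) ** 2 + demand

    def mission_fuel(base_area, morph_span):
        # Sub-level problems: best area per segment within the morphing limits.
        lo, hi = base_area - morph_span, base_area + morph_span
        return sum(minimize_scalar(segment_fuel, bounds=(lo, hi),
                                   args=(d,), method="bounded").fun
                   for _, d in SEGMENTS)

    def gross_weight(x):
        # Top level: structure grows with area and with morphing capability.
        base_area, morph_span = x
        structure = 50.0 + 0.8 * base_area + 2.5 * morph_span
        return structure + 10.0 * mission_fuel(base_area, morph_span)

    res = minimize(gross_weight, x0=[50.0, 5.0],
                   bounds=[(20.0, 100.0), (0.0, 30.0)])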
A Study on Optimal Sizing of Pipeline Transporting Equi-sized Particulate Solid-Liquid Mixture
NASA Astrophysics Data System (ADS)
Asim, Taimoor; Mishra, Rakesh; Pradhan, Suman; Ubbi, Kuldip
2012-05-01
Pipelines transporting solid-liquid mixtures are of practical interest to the oil and pipe industry throughout the world. Such pipelines are known as slurry pipelines, where the solid medium of the flow is commonly known as slurry. The optimal design of such pipelines is of commercial interest for their widespread acceptance. A methodology has been evolved for the optimal sizing of a pipeline transporting a solid-liquid mixture. The least-cost principle has been used in sizing such pipelines, which involves determining the pipe diameter corresponding to the minimum cost for a given solid throughput. A detailed analysis of the transportation of slurry with solids of uniformly graded particle size has been included. The proposed methodology can be used to design a pipeline transporting any solid material at different solid throughputs.
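The least-cost principle reduces to a one-dimensional sweep once cost models are chosen. In the sketch below, capital cost rises with diameter while friction (pumping) cost falls with it, so their sum has an interior minimum; all coefficients are invented for the illustration and are not taken from the paper.

    import numpy as np

    def annual_cost(D, Q=0.2, L=10_000.0, f=0.02, rho=1200.0,
                    capex_k=8.0e4, energy_k=3.0e-3):
        # D: pipe diameter (m); Q: slurry flow rate (m^3/s); L: length (m);
        # f: friction factor; rho: slurry density (kg/m^3). All illustrative.
        v = Q / (np.pi * D**2 / 4.0)                 # mean velocity, m/s
        head_loss = f * (L / D) * v**2 / (2 * 9.81)  # Darcy-Weisbach, m
        pumping = energy_k * rho * 9.81 * Q * head_loss   # energy cost proxy
        capital = capex_k * D**1.5 * (L / 1000.0)         # capex proxy
        return capital + pumping

    diameters = np.linspace(0.15, 0.60, 46)
    costs = [annual_cost(D) for D in diameters]
    print(f"least-cost diameter ~ {diameters[int(np.argmin(costs))]:.2f} m")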
2015-04-13
To cope with dynamic, online optimisation problems with uncertainty, we developed some powerful and sophisticated techniques for learning heuristics... (Performing organization: National ICT Australia, NICTA, Locked Bag 6016, Kensington.) Optimization solvers should learn to improve their performance over time. By learning both during the course of solving an optimization...
NASA Astrophysics Data System (ADS)
Wang, H. B.; Li, J. W.; Zhou, B.; Yuan, Z. Q.; Chen, Y. P.
2013-03-01
In the last few decades, the development of Geographical Information Systems (GIS) technology has provided a method for the evaluation of landslide susceptibility and hazard. Slope units were found to be appropriate fundamental morphological elements in landslide susceptibility evaluation. Following the DEM construction in a loess area susceptible to landslides, the direct-reverse DEM technology was employed to generate 216 slope units in the studied area. After a detailed investigation, the landslide inventory was mapped, in which 39 landslides, including paleo-landslides, old landslides and recent landslides, were present. Of the 216 slope units, 123 involved landslides. To analyze the mechanism of these landslides, six environmental factors were selected to evaluate landslide occurrence: slope angle, aspect, the height and shape of the slope, distance to river and human activities. These factors were extracted in terms of the slope unit within the ArcGIS software. The spatial analysis demonstrates that most of the landslides are located on convex slopes at an elevation of 100-150 m, with aspects from 135° to 225° and slope angles of 40°-60°. Landslide occurrence was then checked against these environmental factors using an artificial neural network with back propagation, optimized by genetic algorithms. A dataset of 120 slope units was chosen for training the neural network model, i.e., 80 units with landslide presence and 40 units without landslide presence. The parameters of the genetic algorithms and neural networks were then set: population size of 100, crossover probability of 0.65, mutation probability of 0.01, momentum factor of 0.60, learning rate of 0.7, maximum learning number of 10 000, and target error of 0.000001. After training on the datasets, the susceptibility of landslides was mapped for land-use planning and hazard mitigation. Comparing the susceptibility map with the landslide inventory, the prediction accuracy for units with landslide occurrence is 93.02%, whereas units without landslide occurrence are predicted with an accuracy of 81.13%. To sum up, the verification shows satisfactory agreement, with an overall accuracy of 86.46% between the susceptibility map and the landslide locations. In the landslide susceptibility assessment, ten new slopes were predicted to show potential for failure, which can be confirmed by the engineering geological conditions of these slopes. It was also observed that some disadvantages of neural networks with back propagation, for example the low convergence rate and local minima, could be overcome after the network was optimized using genetic algorithms. To conclude, neural networks with back propagation that are optimized by genetic algorithms are an effective method to predict landslide susceptibility with high accuracy.
Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji
2016-01-01
The present study investigated, whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants’ performance exceeded 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable for any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than shorter words and the word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length in scrolling Japanese words. PMID:26909052
Deeper sparsely nets are size-optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1997-12-01
The starting points of this paper are two size-optimal solutions: (i) one for implementing arbitrary Boolean functions (Horne, 1994); and (ii) another one for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality for the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in {Delta} = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins ({Delta} = 6...9) there exist VLSI-optimal (i.e. minimizing AT{sup 2}) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).
Determination of a temperature sensor location for monitoring weld pool size in GMAW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boo, K.S.; Cho, H.S.
1994-11-01
This paper describes a method of determining the optimal sensor location for measuring weldment surface temperature, which has a close correlation with weld pool size in the gas metal arc (GMA) welding process. Due to the inherent complexity and nonlinearity of the GMA welding process, the relationship between the weldment surface temperature and the weld pool size varies with the point of measurement. This necessitates an optimal selection of the measurement point to minimize the effect of process nonlinearity in estimating the weld pool size from the measured temperature. To determine the optimal sensor location on the top surface of the weldment, the correlation between the measured temperature and the weld pool size is analyzed. The analysis is done by calculating the correlation function, which is based upon an analytical temperature distribution model. To validate the optimal sensor location, a series of GMA bead-on-plate welds were performed on a medium-carbon steel under various welding conditions. A comparison study is given in detail based upon the simulation and experimental results.
Li, Meng; Zhang, Lu; Davé, Rajesh N; Bilgili, Ecevit
2016-04-01
As a drug-sparing approach in early development, vibratory milling has been used for the preparation of nanosuspensions of poorly water-soluble drugs. The aim of this study was to intensify this process, through a systematic increase in vibration intensity and bead loading at the optimal bead size, for faster production. Griseofulvin, a poorly water-soluble drug, was wet-milled using yttrium-stabilized zirconia beads with sizes ranging from 50 to 1500 μm at low power density (0.87 W/g). The process was then intensified at the optimal bead size by sequentially increasing vibration intensity and bead loading. Additional experiments with several bead sizes were performed at high power density (16 W/g), and the results were compared to those from wet stirred media milling. Laser diffraction, scanning electron microscopy, X-ray diffraction, differential scanning calorimetry, and dissolution tests were used for characterization. Results at the low power density indicated 800 μm as the optimal bead size, which led to a median size of 545 nm with more than 10% of the drug particles greater than 1.8 μm, albeit with the fastest breakage. An increase in either vibration intensity or bead loading resulted in faster breakage. The most intensified process led to 90% of the particles being smaller than 300 nm. At the high power density, 400 μm beads were optimal, which enhanced griseofulvin dissolution significantly and signified the importance of bead size in view of the power density. Only the optimally intensified vibratory milling led to a nanosuspension comparable to that prepared by stirred media milling.
Formation of free round jets with long laminar regions at large Reynolds numbers
NASA Astrophysics Data System (ADS)
Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander
2018-04-01
The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device only ~1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12 560 are experimentally studied. It is shown that for the optimal regime, the laminar region length reaches 5.5 diameters at a Reynolds number of ~10 000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used for organising air curtains for the protection of objects in medicine and technology, by creating an air field with the desired properties that does not mix with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.
Kassem, Mohamed A A; ElMeshad, Aliaa N; Fares, Ahmed R
2017-05-01
Lacidipine (LCDP) is a highly lipophilic calcium channel blocker with poor aqueous solubility, leading to poor oral absorption. This study aims to prepare and optimize LCDP nanosuspensions using an antisolvent sonoprecipitation technique to enhance the solubility and dissolution of LCDP. A three-factor, three-level Box-Behnken design was employed to optimize the formulation variables to obtain an LCDP nanosuspension of small and uniform particle size. The formulation variables were: stabilizer-to-drug ratio (A), sodium deoxycholate percentage (B), and sonication time (C). LCDP nanosuspensions were assessed for particle size, zeta potential, and polydispersity index. The formula with the highest desirability (0.969) was chosen as the optimized formula; the values of the formulation variables (A, B, and C) in the optimized nanosuspension were 1.5, 100%, and 8 min, respectively. The optimal LCDP nanosuspension had a particle size (PS) of 273.21 nm, a zeta potential (ZP) of -32.68 mV and a polydispersity index (PDI) of 0.098. The LCDP nanosuspension was characterized using X-ray powder diffraction, differential scanning calorimetry, and transmission electron microscopy. The LCDP nanosuspension showed a saturation solubility 70 times that of raw LCDP, in addition to a significantly enhanced dissolution rate due to particle size reduction and decreased crystallinity. These results suggest that the optimized LCDP nanosuspension is promising for improving the oral absorption of LCDP.
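The "highest desirability" criterion combines the three responses into one score. Below is a minimal sketch of a Derringer-style desirability function of the kind used with such designs: particle size and PDI are minimized, the magnitude of the zeta potential is maximized, and the geometric mean gives the overall score. The target ranges are assumptions for the illustration, not values from the study.

    import numpy as np

    def d_minimize(y, lo, hi):        # desirability 1 at lo, 0 at hi
        return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

    def d_maximize(y, lo, hi):        # desirability 0 at lo, 1 at hi
        return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

    def overall_desirability(ps_nm, zp_mv, pdi):
        d_ps = d_minimize(ps_nm, 200.0, 600.0)     # smaller particles better
        d_zp = d_maximize(abs(zp_mv), 20.0, 40.0)  # larger |ZP| more stable
        d_pdi = d_minimize(pdi, 0.05, 0.5)         # narrower distribution
        return (d_ps * d_zp * d_pdi) ** (1.0 / 3.0)

    print(overall_desirability(273.21, -32.68, 0.098))  # the optimized formula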
Integrated topology and shape optimization in structural design
NASA Technical Reports Server (NTRS)
Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.
1990-01-01
Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.
Dušek, Adam; Bartoš, Luděk; Sedláček, František
2017-01-01
Litter size is one of the most reliable state-dependent life-history traits that indicate parental investment in polytocous (litter-bearing) mammals. The tendency to optimize litter size typically increases with decreasing availability of resources during the period of parental investment. To determine whether this tactic is also influenced by resource limitations prior to reproduction, we examined the effect of experimental, pre-breeding food restriction on the optimization of parental investment in lactating mice. First, we investigated the optimization of litter size in 65 experimental and 72 control families (mothers and their dependent offspring). Further, we evaluated pre-weaning offspring mortality, and the relationships between maternal and offspring condition (body weight), as well as offspring mortality, in 24 experimental and 19 control families with litter reduction (the death of one or more offspring). Assuming that pre-breeding food restriction would signal unpredictable food availability, we hypothesized that the optimization of parental investment would be more effective in the experimental rather than in the control mice. In comparison to the controls, the experimental mice produced larger litters and had a more selective (size-dependent) offspring mortality and thus lower litter reduction (the proportion of offspring deaths). Selective litter reduction helped the experimental mothers to maintain their own optimum condition, thereby improving the condition and, indirectly, the survival of their remaining offspring. Hence, pre-breeding resource limitations may have helped the mice optimize their inclusive fitness. On the other hand, in the control females, the absence of environmental cues indicating a risky environment led to "maternal optimism" (overemphasizing good conditions at the time of breeding), which resulted in the production of litters of super-optimal size and consequently higher reproductive costs during lactation, including higher offspring mortality. Our study therefore provides the first evidence that pre-breeding food restriction promotes the optimization of parental investment, including offspring number and developmental success.
Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.
Guo, Tianjiao; Englehardt, James D; Fallon, Howard J
While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.
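The underlying trade-off (treatment economies of scale versus conveyance costs that grow with service area) can be illustrated with a toy cost scan. All constants below are assumptions chosen for illustration, not the calibrated model from the study.

```python
import numpy as np

# Toy scale trade-off: per-unit treatment cost rises as plants get smaller,
# while per-unit conveyance cost falls as plants multiply and service areas
# shrink. Constants are illustrative assumptions only.
n_plants = np.arange(1, 301)
treat = 7.0 * (n_plants / 49) ** 0.15    # $/1000 gal, diseconomies of small plants
convey = 14.7 / np.sqrt(n_plants)        # $/1000 gal, falls with more plants
total = treat + convey
best = n_plants[np.argmin(total)]
print(f"minimum unit cost ${total.min():.2f}/1000 gal at {best} plants")
```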
Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.
Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik
2016-12-01
We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Offspring fitness and individual optimization of clutch size
Both, C.; Tinbergen, J. M.; Noordwijk, A. J. van
1998-01-01
Within-year variation in clutch size has been claimed to be an adaptation to variation in the individual capacity to raise offspring. We tested this hypothesis by manipulating brood size to one common size, and predicted that if clutch size is individually optimized, then birds with originally large clutches have a higher fitness than birds with originally small clutches. No evidence was found that fitness was related to the original clutch size, and in this population clutch size is thus not related to the parental capacity to raise offspring. However, offspring from larger original clutches recruited better than their nest mates that came from smaller original clutches. This suggests that early maternal or genetic variation in viability is related to clutch size.
Optimizing abdominal CT dose and image quality with respect to x-ray tube voltage
NASA Astrophysics Data System (ADS)
Huda, Walter; Ogden, Kent M.
2004-05-01
The objective of this study was to identify the x-ray tube voltage that results in optimum performance for abdominal CT imaging across a range of imaging tasks and patient sizes. Theoretical calculations of the contrast-to-noise ratio (CNR) were performed for disk-shaped lesions of muscle, fat, bone and iodine embedded in a uniform water background. Lesion contrast was the mean Hounsfield Unit value at the effective photon energy, and image noise was determined from the total radiation intensity incident on the CT x-ray detector. Patient sizes ranged from young infants (10 kg) to oversized adults (120 kg), with CNR values obtained for x-ray tube voltages ranging from 80 to 140 kV. Patients of varying sizes were modeled as an equivalent cylinder of water, and the mean section dose (D) was determined for each selected x-ray tube kV value at a constant mAs. For each patient size and lesion type, we identified an optimal kV as the x-ray tube voltage that yields a maximum value of the figure of merit (CNR²/D). Increasing the x-ray tube voltage from 80 to 140 kV reduced lesion contrast by 11% for muscle, 21% for fat, 35% for bone and 52% for iodine, and these reductions were approximately independent of patient size. Increasing the x-ray tube voltage from 80 to 140 kV increased a muscle lesion CNR relative to a uniform water background by a factor of 2.6, with similar trends observed for fat (2.3), bone (1.9) and iodine (1.4). The improvement in lesion CNR with increasing x-ray tube voltage was highest for the largest patients. Increasing the x-ray tube voltage from 80 to 140 kV increased the patient dose by a factor of between 5.0 and 6.2, depending on patient size. For small patients (10 and 30 kg) and muscle lesions, best performance is obtained at 80 kV; however, for adults (70 kg) and oversized adults (120 kg), the best performance would be obtained at 140 kV. Imaging fat lesions was best performed at 80 kV for all patients except oversized adults, where 140 kV offers the best imaging performance. For high-Z lesions of bone and iodine, imaging performance generally degrades with increasing kV for all patient sizes, with the degree of degradation largest for the smallest patients. We conclude that 80 kV is optimal with respect to radiation dose in abdominal CT for all pediatric patients. For adults, 80 kV is the voltage of choice for high-Z lesions, whereas 140 kV would generally be the voltage of choice for lesions that have an atomic number similar to that of water.
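The selection rule in this study reduces to maximizing the figure of merit CNR²/D over tube voltage. A minimal sketch, with placeholder CNR and dose values rather than the paper's data:

```python
import numpy as np

# Pick the kV that maximizes CNR^2 / dose. Values are illustrative
# placeholders (relative to 80 kV), not the paper's tabulated results.
kv = np.array([80, 100, 120, 140])
cnr = np.array([1.0, 1.6, 2.1, 2.6])    # lesion CNR, relative (assumed)
dose = np.array([1.0, 2.2, 3.8, 6.0])   # mean section dose, relative (assumed)
fom = cnr ** 2 / dose
print("optimal kV:", kv[np.argmax(fom)])
```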
This project focuses on the integration and optimization of distributed energy resources, specifically the cost-optimal sizing and dispatch of distributed energy resources and the integration of building and utility control systems, in collaboration with the Campus team, which focuses on NREL's own control system integration and energy informatics.
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm³, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm³, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm³ and 4 mm respectively, which resulted in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, outperforming the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates the use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
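For concreteness, here is a minimal Bayesian-optimization loop of the kind described: a Gaussian-process surrogate with an expected-improvement acquisition over voxel size and smoothing kernel. The objective is a synthetic stand-in for "train the model and return the MAE", and all constants are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Stand-in objective: in the real pipeline this would resample, smooth,
# train the SVM, and return the validation MAE (assumed smooth surface).
def objective(params):
    voxel, fwhm = params
    return 5.0 + 0.05 * (voxel - 3.7) ** 2 + 0.08 * (fwhm - 3.7) ** 2

bounds = np.array([[1.0, 12.0], [0.0, 8.0]])  # voxel size (mm), smoothing FWHM (mm)
rng = np.random.default_rng(0)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))  # initial random design
y = np.array([objective(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(25):
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best MAE:", y.min(), "at (voxel, fwhm):", X[np.argmin(y)])
```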
Anderson, D.R.
1974-01-01
Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
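The optimization machinery described, dynamic programming over population states, can be sketched in its simplest deterministic form as value iteration over a population grid. The dynamics, reward, and all constants below are assumptions for illustration, not the mallard model.

```python
import numpy as np

# Value iteration for a harvest policy: choose a harvest rate h given the
# breeding-population state to maximize discounted cumulative harvest.
# Logistic growth with additive harvest mortality is an assumed stand-in.
N = np.linspace(0.1, 2.0, 40)    # population grid (millions, assumed)
H = np.linspace(0.0, 0.5, 26)    # candidate harvest rates
r, K, beta = 0.6, 1.5, 0.95      # assumed growth rate, capacity, discount
V = np.zeros_like(N)

def step(n, h):
    n_post = n * (1 - h)                           # additive harvest mortality
    return np.clip(n_post + r * n_post * (1 - n_post / K), N[0], N[-1])

for _ in range(500):  # iterate the Bellman update to convergence
    V_new = np.array([max(h * n + beta * np.interp(step(n, h), N, V) for h in H)
                      for n in N])
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# greedy policy: a direct feedback rule mapping observed state to harvest rate
policy = [H[int(np.argmax([h * n + beta * np.interp(step(n, h), N, V) for h in H]))]
          for n in N]
print(list(zip(N.round(2)[::8], np.round(policy, 2)[::8])))
```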
A challenge for theranostics: is the optimal particle for therapy also optimal for diagnostics?
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Shilo, Malka; Popovtzer, Aron; Motiei, Menachem; Popovtzer, Rachela
2015-09-01
Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions: is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study by investigating the effect of gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities, were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03119b
Development of an adaptive harvest management program for Taiga bean geese
Johnson, Fred A.; Alhainen, Mikko; Fox, Anthony D.; Madsen, Jesper
2016-01-01
This report describes recent progress in specifying the elements of an adaptive harvest program for taiga bean goose. It describes harvest levels appropriate for first rebuilding the population of the Central Management Unit and then maintaining it near the goal specified in the AEWA International Single Species Action Plan (ISSAP). This report also provides estimates of the length of time it would take under ideal conditions (no density dependence and no harvest) to rebuild depleted populations in the Western and Eastern Management Units. We emphasize that our estimates are a first approximation because detailed demographic information is lacking for taiga bean geese. Using allometric relationships, we estimated parameters of a theta-logistic matrix population model. The mean intrinsic rate of growth was estimated as r = 0.150 (90% credible interval: 0.120 – 0.182). We estimated the mean form of density dependence as 2.361 (90% credible interval: 0.473 – 11.778), suggesting the strongest density dependence occurs when the population is near its carrying capacity. Based on expert opinion, carrying capacity (i.e., population size expected in the absence of hunting) for the Central Management Unit was estimated as K = 87,900 (90% credible interval: 82,000 – 94,100). The ISSAP specifies a population goal for the Central Management Unit of 60,000 – 80,000 individuals in winter; thus, we specified a preliminary objective function as one which would minimize the difference between this goal and population size. Using the concept of stochastic dominance to explicitly account for uncertainty in demography, we determined that optimal harvest rates for 5, 10, 15, and 20-year time horizons were h = 0.00, 0.02, 0.05, and 0.06, respectively. These optima represent a tradeoff between the harvest rate and the time required to achieve and maintain a population size within desired bounds. We recognize, however, that regulation of absolute harvest rather than harvest rate is more practical, but our matrix model does not permit one to calculate an exact harvest associated with a specific harvest rate. Approximate harvests for the current population size in the Central Management Unit are 0, 1,200, 2,300, and 3,500 for the 5, 10, 15, and 20-year time horizons, respectively. Populations of taiga bean geese in the Western and Eastern Units would require at least 10 and 13 years, respectively, to reach their minimum goals under the most optimistic of scenarios. The presence of harvest, density dependence, or environmental variation could extend these time frames considerably. Finally, we stress that development and implementation of internationally coordinated monitoring programs will be essential to further development and implementation of an adaptive harvest management program.
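The harvest projections above rest on a theta-logistic update. A minimal sketch using the reported estimates (r = 0.150, theta = 2.361, K = 87,900); the starting population and the discrete-time form are assumptions for illustration.

```python
import numpy as np

# Theta-logistic projection with a constant proportional harvest rate h:
# N_{t+1} = N_t + r*N_t*(1 - (N_t/K)^theta) - h*N_t
r, theta, K = 0.150, 2.361, 87_900   # reported estimates

def project(n0, h, years):
    n = float(n0)
    path = [n]
    for _ in range(years):
        n = n + r * n * (1 - (n / K) ** theta) - h * n
        path.append(n)
    return np.array(path)

for h in (0.00, 0.02, 0.05, 0.06):   # the reported optimal rates
    path = project(50_000, h, 20)    # assumed starting population
    print(f"h = {h:.2f}: population after 20 years = {int(path[-1]):,}")
```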
Value recovery from two mechanized bucking operations in the southeastern United States
Kevin Boston; Glen. Murphy
2003-01-01
The value recovered from two mechanized bucking operations in the southeastern United States was compared with the optimal value computed using an individual-stem log optimization program, AVIS. The first operation recovered 94% of the optimal value. The main cause for the value loss was a failure to capture potential sawlog volume; logs were bucked to a larger average...
Rashid, Mahbub
2014-01-01
In 2006, Critical Care Nursing Quarterly published a study of the physical design features of a set of best-practice example adult intensive care units (ICUs). These adult ICUs were awarded between 1993 and 2003 by the Society of Critical Care Medicine (SCCM), the American Association of Critical-Care Nurses, and the American Institute of Architects/Academy of Architecture for Health for their efforts to promote the critical care unit environment through design. Since 2003, several more adult ICUs were awarded by the same organizations for similar efforts. This study includes these newer ICUs along with those of the previous study to cover a period of 2 decades, from 1993 to 2012. Like the 2006 study, this study conducts a systematic content analysis of the materials submitted by the award-winning adult ICUs. On the basis of the analysis, the study compares the 1993-2002 and 2003-2012 adult ICUs in relation to construction type, unit specialty, unit layout, unit size, patient room size and design, support and service area layout, and family space design. The study also compares its findings with the 2010 Guidelines for Design and Construction of Health Care Facilities of the Facility Guidelines Institute and the 2012 Guidelines for Intensive Care Unit Design of the SCCM. The study indicates that the award-winning ICUs of both decades used several design features that were associated with positive outcomes in research studies. The study also indicates that the award-winning ICUs of the second decade used more evidence-based design features than those of the first decade. In most cases, these ICUs exceeded the requirements of the Facility Guidelines Institute Guidelines to meet those of the SCCM Guidelines. Yet the award-winning ICUs of both decades also used several features that had very little or no supporting research evidence. Since all of these units were recognized for creating an optimal critical care environment, knowledge of the physical design of these award-winning ICUs may help design better ICUs.
Filin, I
2009-06-01
Using diffusion processes, I model stochastic individual growth, given exogenous hazards and starvation risk. By maximizing survival to final size, optimal life histories (e.g. switching size for habitat/dietary shift) are determined by two ratios: mean growth rate over growth variance (diffusion coefficient) and mortality rate over mean growth rate; all are size dependent. For example, switching size decreases with either ratio, if both are positive. I provide examples and compare with previous work on risk-sensitive foraging and the energy-predation trade-off. I then decompose individual size into reversibly and irreversibly growing components, e.g. reserves and structure. I provide a general expression for optimal structural growth, when reserves grow stochastically. I conclude that increased growth variance of reserves delays structural growth (raises threshold size for its commencement) but may eventually lead to larger structures. The effect depends on whether the structural trait is related to foraging or defence. Implications for population dynamics are discussed.
NASA Astrophysics Data System (ADS)
Frank, T. D.; Patanarapeelert, K.; Beek, P. J.
2008-05-01
We derive a fundamental relationship between the mean and the variability of isometric force. The relationship arises from an optimal collection of active motor units such that the force variability assumes a minimum (optimal isometric force). The relationship is shown to be independent of the explicit motor unit properties and of the dynamical features of isometric force production. A constant coefficient of variation in the asymptotic regime and a nonequilibrium fluctuation-dissipation theorem for optimal isometric force are predicted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1997-09-01
The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions; and (2) another one for implementing certain subclasses of Boolean functions. Because VLSI implementations do not cope well with highly interconnected nets -- the area of a chip grows with the cube of the fan-in -- this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors extend a result from Horne and Hush valid for fan-in Δ = 2 to arbitrary fan-in. Second, they prove that size-optimal solutions are obtained for small constant fan-ins for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e., minimizing AT²) solutions, while there are similar small constants relating to the capacity of processing information.
Ryu, Se-Ah; Kim, Chang Sup; Kim, Hye-Jung; Baek, Dae Heoun; Oh, Deok-Kun
2003-01-01
D-Tagatose was continuously produced using thermostable L-arabinose isomerase immobilized in alginate with D-galactose solution in a packed-bed bioreactor. Bead size, L/D (length/diameter) of the reactor, dilution rate, total loaded enzyme amount, and substrate concentration were found to be optimal at 0.8 mm, 520/7 mm, 0.375 h⁻¹, 5.65 units, and 300 g/L, respectively. Under these conditions, the bioreactor produced about 145 g/L tagatose with an average productivity of 54 g tagatose/(L·h) and an average conversion yield of 48% (w/w). Operational stability of the immobilized enzyme was demonstrated, with a tagatose production half-life of 24 days.
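The reported steady-state numbers can be cross-checked with the standard continuous-bioreactor relations (volumetric productivity = titer × dilution rate; conversion = titer / substrate fed):

```python
# Consistency check of the steady-state figures reported above.
titer = 145            # g/L tagatose at the outlet
dilution_rate = 0.375  # h^-1
substrate_in = 300     # g/L galactose fed

productivity = titer * dilution_rate   # 54.4 g/(L*h), matches the ~54 reported
conversion = titer / substrate_in      # 0.483, matches the ~48% (w/w) reported
print(productivity, conversion)
```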
A high performance hardware implementation image encryption with AES algorithm
NASA Astrophysics Data System (ADS)
Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab
2011-06-01
This paper describes the implementation of a high-speed encryption algorithm with high throughput for encrypting images. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline, a control unit based on logic gates, an optimal design of the multiplier blocks in the MixColumns phase, and simultaneous generation of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES implementation on an Altera FPGA achieved a throughput of 6 Gbps at 471 MHz. The encryption time for a 32x32 test image is 1.15 ms.
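As a point of comparison with the hardware figures, a software baseline can be measured with a standard AES library. The snippet below assumes the third-party pycryptodome package; ECB mode is used only because it mirrors independent block processing and should not be used for real image confidentiality.

```python
import time
from Crypto.Cipher import AES  # pycryptodome (assumed installed)

# Software AES-128 throughput on image-sized buffers, as a rough baseline
# against the 6 Gbps hardware figure quoted above.
key = bytes(range(16))               # demo 128-bit key, not secure
cipher = AES.new(key, AES.MODE_ECB)
frame = bytes(32 * 32)               # one 32x32 8-bit test image (1024 bytes)

n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    cipher.encrypt(frame)
dt = time.perf_counter() - t0
print(f"software throughput: {n * len(frame) * 8 / dt / 1e9:.2f} Gbps")
```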
The design of photovoltaic plants - An optimization procedure
NASA Astrophysics Data System (ADS)
Bartoli, B.; Cuomo, V.; Fontana, F.; Serio, C.; Silvestrini, V.
An analytical model is developed to match the components and overall size of a solar power facility (comprising a photovoltaic array, maximum-power tracker, battery storage system, and inverter) to the load requirements and climatic conditions of a proposed site at the smallest possible cost. Input parameters are the efficiencies and unit costs of the components, the load fraction to be covered (for stand-alone systems), the statistically analyzed meteorological data, and the cost and efficiency data of the support system (for fuel-generator-assisted plants). Numerical results are presented in graphs and tables for sites in Italy, and it is found that the explicit form of the model equation is independent of locality, at least for this region.
Evidence-based design in an intensive care unit: end-user perceptions.
Ferri, Mauricio; Zygun, David A; Harrison, Alexandra; Stelfox, Henry T
2015-04-25
The objective of this study was to describe end-user impressions and experiences in a new intensive care unit built using evidence-based design. This qualitative study was comprised of early (2-3 months after opening) and late (12-15 months after opening) phase individual interviews with end-users (healthcare providers, support staff, and patient family members) of the newly constructed Foothills Medical Centre intensive care unit in Calgary, Canada. The study unit was the recipient of the Society of Critical Care Medicine Design Citation award in 2012. We conducted interviews with thirty-nine ICU end-users, twenty-four in the early phase and fifteen in the late phase. We identified four themes (eleven sub-themes): atmosphere (abundant natural light and low noise levels), physical spaces (single occupancy rooms, rooms clustered into clinical pods, medication rooms, and tradeoffs of larger spaces), family participation in care (family support areas and social networks), and equipment (usability, storage, and providers connectivity). Abundant natural light was the design feature most frequently associated with a pleasant atmosphere. Participants emphasized the tradeoffs of size and space, and reported that the benefits of additional space (e.g., fewer interruptions due to less noise) out-weighed the disadvantages (e.g., greater distances between patients, families and providers). End-users advised that local patient care policies (e.g., number of visitors allowed at a time) and staffing needed to be updated to reflect the characteristics of the new facility design. End-users identified design elements for creating a pleasant atmosphere, attention to the tradeoffs of space and size, designing family support areas to encourage family participation in care, and updating patient care policies and staffing to reflect the new physical space as important aspects to consider when building intensive care units. Evidence-based design may optimize ICU structure for patients, patient families and providers.
Beirowski, Jakob; Inghelbrecht, Sabine; Arien, Albertina; Gieseler, Henning
2011-05-01
It has been recently reported in the literature that using a fast freezing rate during freeze-drying of drug nanosuspensions is beneficial for preserving the original particle size distribution. All freezing rates studied were obtained using a custom-made apparatus and were then indirectly related to conventional vial freeze-drying. However, a standard freeze-dryer is only capable of achieving moderate freezing rates in the shelf fluid circulation system. Therefore, the purpose of the present study was to evaluate the possibility of establishing a typical freezing protocol applicable to a standard freeze-drying unit, in combination with an adequate choice of cryoprotective excipients and steric stabilizers, to preserve the original particle size distribution. Six different drug nanosuspensions containing itraconazole as a model drug were studied using freeze-thaw experiments and a full factorial design to reveal the major factors for the stabilization of drug nanosuspensions and the corresponding interactions. In contrast to previous reports, the freezing regime showed no significant influence on preserving the original particle size distribution, provided that the concentrations of both the steric stabilizer and the cryoprotective agent are optimized. Moreover, it could be pinpointed that the combined effect of steric stabilizer and cryoprotectant clearly contributes to nanoparticle stability. Copyright © 2010 Wiley-Liss, Inc.
Fling, Brett W; Knight, Christopher A; Kamen, Gary
2009-08-01
As a part of the aging process, motor unit reorganization occurs in which small motoneurons reinnervate predominantly fast-twitch muscle fibers that have lost their innervation. We examined the relationship between motor unit size and the threshold force for recruitment in two muscles to determine whether older individuals might develop an alternative pattern of motor unit activation. Young and older adults performed isometric contractions ranging from 0 to 50% of maximal voluntary contraction in both the first dorsal interosseous (FDI) and tibialis anterior (TA) muscles. Muscle fiber action potentials were recorded with an intramuscular needle electrode and motor unit size was computed using spike-triggered averaging of the global EMG signal (macro EMG), which was also obtained from the intramuscular needle electrode. As expected, older individuals exhibited larger motor units than young subjects in both the FDI and the TA. However, moderately strong correlations were obtained for the macro EMG amplitude versus recruitment threshold relationship in both the young and older adults within both muscles, suggesting that the size principle of motor unit recruitment seems to be preserved in older adults.
Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; ...
2016-06-02
We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.
Unpredictable food supply modifies costs of reproduction and hampers individual optimization.
Török, János; Hegyi, Gergely; Tóth, László; Könczey, Réka
2004-11-01
Investment into the current reproductive attempt is thought to be at the expense of survival and/or future reproduction. Individuals are therefore expected to adjust their decisions to their physiological state and predictable aspects of environmental quality. The main predictions of the individual optimization hypothesis for bird clutch sizes are: (1) an increase in the number of recruits with an increasing number of eggs in natural broods, with no corresponding impairment of parental survival or future reproduction, and (2) a decrease in the fitness of parents in response to both negative and positive brood size manipulation, as a result of a low number of recruits, poor future reproduction of parents, or both. We analysed environmental influences on costs and optimization of reproduction on 6 years of natural and experimentally manipulated broods in a Central European population of the collared flycatcher. Based on dramatic differences in caterpillar availability, we classified breeding seasons as average and rich food years. The categorization was substantiated by the majority of present and future fitness components of adults and offspring. Neither observational nor experimental data supported the individual optimization hypothesis, in contrast to a Scandinavian population of the species. The quality of fledglings deteriorated, and the number of recruits did not increase with natural clutch size. Manipulation revealed significant costs of reproduction to female parents in terms of future reproductive potential. However, the influence of manipulation on recruitment was linear, with no significant polynomial effect. The number of recruits increased with manipulation in rich food years and tended to decrease in average years, so control broods did not recruit more young than manipulated broods in any of the year types. This indicates that females did not optimize their clutch size, and that they generally laid fewer eggs than optimal in rich food years. Mean yearly clutch size did not follow food availability, which suggests that females cannot predict food supply of the brood-rearing period at the beginning of the season. This lack of information on future food conditions seems to prevent them from accurately estimating their optimal clutch size for each season. Our results suggest that individual optimization may not be a general pattern even within a species, and alternative mechanisms are needed to explain clutch size variation.
The effect of code expanding optimizations on instruction cache design
NASA Technical Reports Server (NTRS)
Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.
1991-01-01
It is shown that code expanding optimizations have strong and non-intuitive implications on instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.
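The kind of experiment described, measuring miss ratio as code footprint and cache size vary, can be sketched with a toy direct-mapped instruction-cache simulator over a synthetic address trace; the trace and parameters are illustrative, not the study's benchmarks.

```python
# Toy direct-mapped instruction-cache simulator: miss ratio vs. cache size
# for a synthetic instruction-address trace.
def miss_ratio(trace, cache_bytes, line_bytes=64):
    n_lines = cache_bytes // line_bytes
    tags = [None] * n_lines          # one tag per cache line (direct-mapped)
    misses = 0
    for addr in trace:
        line = addr // line_bytes
        idx = line % n_lines
        if tags[idx] != line:        # miss: fill the line
            tags[idx] = line
            misses += 1
    return misses / len(trace)

# An 8 KB straight-line loop body, e.g. after aggressive inline expansion;
# the miss ratio drops sharply once the cache holds the whole footprint.
loop = list(range(0, 8192, 4))
trace = loop * 100
for kb in (1, 2, 4, 8, 16):
    print(f"{kb:2d} KB cache: miss ratio {miss_ratio(trace, kb * 1024):.4f}")
```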
Use of cost-effectiveness analysis to determine inventory size for a national cord blood bank.
Howard, David H; Meltzer, David; Kollman, Craig; Maiers, Martin; Logan, Brent; Gragert, Loren; Setterholm, Michelle; Horowitz, Mary M
2008-01-01
Transplantation with stem cells from stored umbilical cord blood units is an alternative to living unrelated bone marrow transplantation. The larger the inventory of stored cord units, the greater the likelihood that transplant candidates will match to a unit, but storing units is costly. The authors present the results of a study, commissioned by the Institute of Medicine, as part of a report on the establishment of a national cord blood bank, examining the optimal inventory level. They emphasize the unique challenges of undertaking cost-effectiveness analysis in this field and the contribution of the analysis to policy. The authors estimate the likelihood that transplant candidates will match to a living unrelated marrow donor or a cord blood unit as a function of cord blood inventory and then calculate the life-years gained for each transplant type by match level using historical data. They develop a model of the cord blood inventory level to estimate total costs as a function of the number of stored units. The cost per life-year gained associated with increasing inventory from 50,000 to 100,000 units is $44,000 to $86,000 and from 100,000 to 150,000 units is $64,000 to $153,000, depending on the assumption about the degree to which survival rates for cord transplants vary by match quality. Expanding the cord blood inventory above current levels is cost-effective by conventional standards. The analysis helped shape the Institute of Medicine's report, but it is difficult to determine the extent to which the analysis influenced subsequent congressional legislation.
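The analysis above turns on incremental cost-effectiveness ratios between adjacent inventory levels; a minimal sketch of that calculation, with hypothetical placeholder totals rather than the study's modeled outputs:

```python
# Incremental cost-effectiveness ratio (ICER) between two inventory levels.
def icer(cost_hi, cost_lo, life_years_hi, life_years_lo):
    """Incremental cost per life-year gained when moving to the larger inventory."""
    return (cost_hi - cost_lo) / (life_years_hi - life_years_lo)

# Hypothetical totals for expanding from 100,000 to 150,000 stored units:
print(f"${icer(8.2e8, 5.0e8, 55_000, 50_000):,.0f} per life-year gained")
```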
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements rather than duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol values for achieving the optimal image quality index of 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to 120-kVp protocols at the same dose level. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy requirements, and radiation treatment planning tasks.
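The protocol-selection output of such a strategy can be reduced to a lookup: given a measured patient lateral size, return the tube potential and minimum CTDIvol that reach the target image quality index. The sketch below uses the phantom values quoted in the abstract; the nearest-size rule is an assumption for illustration.

```python
# Patient lateral size (cm) -> (kVp, minimum CTDIvol in mGy) for image
# quality index 4.4, using the phantom results quoted above.
PROTOCOLS = {
    38: (120, 9.8),
    43: (140, 32.2),
    48: (140, 100.9),
    53: (140, 241.4),
    58: (140, 274.1),
}

def select_protocol(patient_size_cm):
    # nearest tabulated size (assumed interpolation rule, for illustration)
    size = min(PROTOCOLS, key=lambda s: abs(s - patient_size_cm))
    return PROTOCOLS[size]

print(select_protocol(50))  # -> (140, 100.9)
```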
Ads' click-through rates predicting based on gated recurrent unit neural networks
NASA Astrophysics Data System (ADS)
Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi
2018-05-01
In order to improve the effect of online advertising and to increase advertising revenue, a gated recurrent unit neural network (GRU) model is used to predict ads' click-through rates (CTR). Combining the characteristics of the gated unit structure with the time-sequence structure of the data, the model is trained with the BPTT algorithm. Furthermore, by optimizing the step-length algorithm of the gated recurrent unit neural network, the model reaches the optimal point better and faster in fewer iterations. The experimental results show that the model based on gated recurrent unit neural networks and its optimized step-length algorithm improves ads' CTR prediction, which helps advertisers, media, and audience achieve a win-win and mutually beneficial situation in the three-side game.
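A minimal GRU-based CTR predictor in this spirit can be sketched with PyTorch; the architecture, feature sizes, and training data below are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

# Sketch: a GRU encodes a sequence of impression feature vectors; the last
# hidden state is mapped to P(click). Backprop through the unrolled GRU is
# PyTorch's form of BPTT.
class GruCtr(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, seq_len, n_features)
        _, h = self.gru(x)           # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

model = GruCtr(n_features=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # stand-in optimizer
loss_fn = nn.BCELoss()

x = torch.randn(64, 10, 16)                  # synthetic impression sequences
y = torch.randint(0, 2, (64,)).float()       # synthetic click labels
for _ in range(5):                           # a few training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```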
Multi-GPU implementation of a VMAT treatment plan optimization algorithm.
Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B
2015-06-01
Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present the detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ~1 min for the H&N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ~4 and ~6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality.
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
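The data-layout step described (a host-side COO dose-deposition matrix split into per-GPU CSR blocks by beam angle) can be sketched with SciPy sparse matrices; the sizes, density, and contiguous angle grouping below are assumptions, and the blocks stay on the host in this sketch.

```python
import numpy as np
import scipy.sparse as sp

# Build a random sparse "DDC" matrix in COO format, then split it into four
# beam-angle column blocks, each converted to CSR (one block per GPU in the
# actual scheme; here they remain host-side for illustration).
n_vox, n_beamlets, n_gpus = 10_000, 4_000, 4
rng = np.random.default_rng(1)
nnz = 200_000
ddc = sp.coo_matrix(
    (rng.random(nnz), (rng.integers(0, n_vox, nnz), rng.integers(0, n_beamlets, nnz))),
    shape=(n_vox, n_beamlets),
)

edges = np.linspace(0, n_beamlets, n_gpus + 1, dtype=int)  # beamlet ranges per angle group
blocks = [ddc.tocsc()[:, edges[g]:edges[g + 1]].tocsr() for g in range(n_gpus)]
print([b.shape for b in blocks], [b.nnz for b in blocks])
```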
Dwivedi, Mohit; Sharma, Vijay; Pathak, Kamla
2017-02-01
Eosinophilic pustular folliculitis is a secondary symptom associated with HIV infection that appears as CD4 and T4 lymphocyte levels decline. Isotretinoin, an analog of vitamin A (a retinoid), alters the DNA transcription mechanism and interferes with the process of DNA formation. It also inhibits the eosinophilic chemotactic factors present in sebaceous lipids and in the stratum corneum of patients suffering from this ailment. The present research aimed to formulate an isotretinoin-loaded invasomal gel to deliver and target the drug to the pilosebaceous follicular unit. Nine invasomal formulations (F1-F9) were prepared applying a 3² factorial design and characterized. Formulation F9 was selected as the optimized formulation due to its optimum results and the highest %CDP of 85.94 ± 1.86% in 8 h. Transmission electron microscopy (TEM) indicated uniformity in vesicle shape and size in F9, which was developed into an invasomal gel (IG). Clinical phase-I, phase-II, and phase-III studies will be required before use in human patients. Confocal laser scanning microscopy (CLSM) validated that the IG successfully reaches the pilosebaceous follicular unit, and further study on a cell line (SZ-95) exhibited an IC50 of ≤8 (25 μM of isotretinoin). Cell cycle analysis confirmed that the IG arrested cell growth by up to 82%, with an insignificant difference from pure isotretinoin.
Coding of time-dependent stimuli in homogeneous and heterogeneous neural populations.
Beiran, Manuel; Kruscha, Alexandra; Benda, Jan; Lindner, Benjamin
2018-04-01
We compare the information transmission of a time-dependent signal by two types of uncoupled neuron populations that differ in their sources of variability: i) a homogeneous population whose units receive independent noise and ii) a deterministic heterogeneous population, where each unit exhibits a different baseline firing rate ('disorder'). Our criterion for making both sources of variability quantitatively comparable is that the interspike-interval distributions are identical for both systems. Numerical simulations using leaky integrate-and-fire neurons unveil that a non-zero amount of both noise or disorder maximizes the encoding efficiency of the homogeneous and heterogeneous system, respectively, as a particular case of suprathreshold stochastic resonance. Our findings thus illustrate that heterogeneity can render similarly profitable effects for neuronal populations as dynamic noise. The optimal noise/disorder depends on the system size and the properties of the stimulus such as its intensity or cutoff frequency. We find that weak stimuli are better encoded by a noiseless heterogeneous population, whereas for strong stimuli a homogeneous population outperforms an equivalent heterogeneous system up to a moderate noise level. Furthermore, we derive analytical expressions of the coherence function for the cases of very strong noise and of vanishing intrinsic noise or heterogeneity, which predict the existence of an optimal noise intensity. Our results show that, depending on the type of signal, noise as well as heterogeneity can enhance the encoding performance of neuronal populations.
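The noise-versus-disorder comparison can be caricatured with simple threshold units in place of leaky integrate-and-fire neurons, scoring each population by the spectral coherence of its summed output with the stimulus. Everything below is an illustrative simplification of the models in the abstract, with all parameters assumed.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, n_units = 1000, 50
t = np.arange(0, 20.0, 1 / fs)
stim = np.convolve(rng.standard_normal(t.size), np.ones(25) / 25, "same")
stim /= stim.std()                                  # unit-variance, band-limited stimulus

def population_output(noise_sd, offsets):
    out = np.zeros_like(t)
    for k in range(n_units):
        drive = stim + offsets[k] + noise_sd * rng.standard_normal(t.size)
        out += (drive > 1.0).astype(float)          # unit is "active" above threshold
    return out

homog = population_output(noise_sd=0.5, offsets=np.zeros(n_units))                     # private noise
heterog = population_output(noise_sd=0.0, offsets=rng.uniform(-0.75, 0.75, n_units))   # disorder
for name, out in (("noise", homog), ("disorder", heterog)):
    f, C = coherence(stim, out, fs=fs, nperseg=1024)
    print(name, "mean low-frequency coherence:", round(float(C[f < 50].mean()), 3))
```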
Pi, Chao; Feng, Ting; Liang, Jing; Liu, Hao; Huang, Dongmei; Zhan, Chenglin; Yuan, Jiyuan; Lee, Robert J; Zhao, Ling; Wei, Yumeng
2018-06-01
Felodipine (FD) has been widely used in anti-hypertensive treatment. However, it has extremely low aqueous solubility and poor bioavailability. To address these problems, FD hollow microspheres, a multiple-unit dosage form, were prepared by a solvent diffusion evaporation method. The particle size of the hollow microspheres, the type of ethylcellulose (EC), and the amounts of EC, polyvinyl pyrrolidone (PVP), and FD were investigated based on an orthogonal experiment with three factors and three levels. In addition, the in vitro release kinetics and the pharmacokinetics in beagle dogs of the optimized FD hollow microspheres were investigated and compared with Plendil (commercial FD sustained-release tablets), a single-unit dosage form. Results showed that the optimal formulation was composed of EC 10 cP:PVP:FD (0.9:0.16:0.36, w/w). The FD hollow microspheres were globular with a hollow structure and had high drug loading (17.69±0.44%) and floating rate (93.82±4.05%) in simulated human gastric fluid after 24 h. Pharmacokinetic data showed that the FD hollow microspheres exhibited sustained-release behavior and significantly improved the relative bioavailability of FD compared with the control. A pharmacodynamic study showed that the FD hollow microspheres could effectively lower blood pressure. Therefore, these findings demonstrate that the hollow microspheres are an effective sustained-release delivery system for FD. Copyright © 2018 Elsevier B.V. All rights reserved.
Diagnostics and Optimization of a Miniature High Frequency Pulse Tube Cryocooler
NASA Astrophysics Data System (ADS)
Garaway, I.; Veprik, A.; Radebaugh, R.
2010-04-01
A miniature, high energy density, pulse tube cryocooler with an inertance tube and reservoir has been developed, tested, diagnosed and optimized to provide appropriate cooling for size-limited cryogenic applications demanding fast cool down. This cryocooler, originally designed using REGEN 3.2 for 80 K, an operating frequency of 150 Hz and an average pressure of 5.0 MPa, has regenerator dimensions of 4.4 mm inside diameter and 27 mm length and is filled with #635 mesh stainless steel screen. Various design features, such as the use of compact heat exchangers and a miniature linear compressor, resulted in a remarkably compact pulse tube cryocooler. In this report, we present the preliminary test results and the subsequent diagnostic and optimization sequence performed to improve the overall design and operation of the complete cryocooler. These experimentally determined optimal parameters, though slightly different from those proposed in the initial numerical model, yielded 530 mW of gross cooling power at 120 K with an input electrical power of only 25 W. This study highlights the need to further establish our understanding of miniature, high frequency, regenerative cryocoolers, not only as a collection of independent subcomponents, but as one single working unit. It has also led to a list of additional improvements that may yet be made to even further improve the operating characteristics of such a complete miniature cryocooler.
Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations
Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.
2016-01-01
In recent years, the clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor has increased in an attempt to compensate for slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large-sized datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application to a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, GPU performance is only superior beyond a certain problem size due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23×23×5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations.
Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W
2016-01-01
In recent years, the clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing piles of raw data with increasing speed. The number of cores per processor has increased in an attempt to compensate for slight increments of clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large-sized datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration of two use cases to show typical problems and gains of transforming a serial application to a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases is discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, GPU performance is only superior beyond a certain problem size due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures.
Modeling & processing of ceramic and polymer precursor ceramic matrix composite materials
NASA Astrophysics Data System (ADS)
Wang, Xiaolin
Synthesis and processing of novel materials with various advanced approaches has attracted much attention from engineers and scientists over the past thirty years. Many advanced materials display a number of exceptional properties and can be produced with different novel processing techniques. For example, AlN is a promising candidate for electronic, optical and opto-electronic applications due to its high thermal conductivity, high electrical resistivity, high acoustic wave velocity and large band gap. Large bulk AlN crystals can be produced by sublimation of AlN powder. Novel nanostructured multicomponent refractory metal-based ceramics (carbides, borides and nitrides) exhibit exceptional mechanical, thermal and chemical properties, and can be easily produced by pyrolysis of suitable preceramic precursors mixed with metal particles. The objective of this work is to study sublimation and synthesis of AlN powder, and synthesis of SiC-based metal ceramics. For AlN sublimation crystal growth, we will focus on modeling the processes in the powder source that significantly affect the sublimation growth as a whole. To understand the powder porosity evolution and vapor transport during powder sublimation, the interplay between vapor transport and powder sublimation will be studied. A physics-based computational model will be developed considering powder sublimation and porosity evolution. Based on the proposed model, the effect of a central hole in the powder on the sublimation rate is studied and the result is compared to the case of powder without a hole. The effect of hole size on the sublimation rate will be studied. The effects of initial porosity, particle size and driving force on the sublimation rate are also studied. Moreover, the optimal growth conditions for large-diameter crystal quality and high growth rate will be determined. For synthesis of SiC-based metal ceramics, we will focus on developing a multi-scale process model to describe the dynamic behavior of filler particle reaction and microstructure evolution at the microscale, as well as transient fluid flow, heat transfer, and species transport at the macroscale. The model comprises (i) a microscale model and (ii) a macroscale transport model, and aims to provide optimal conditions for the fabrication process of the ceramics. The porous-media macroscale model for SiC-based metal-ceramic materials processing will be developed to understand the thermal polymer pyrolysis, chemical reaction of active fillers and transport phenomena in the porous media. The macroscale model will include heat and mass transfer, curing, pyrolysis, chemical reaction and crystallization in a mixture of preceramic polymers and submicron/nano-sized metal particles of uranium, zirconium, niobium, or hafnium. The effects of heating rate, sample size, and the size and volume ratio of the metal particles on the reaction rate and product uniformity will be studied. The microscale model will be developed for modeling the synthesis of the SiC matrix and metal particles. The macroscale model provides thermal boundary conditions to the microscale model. The microscale model applies to repetitive units in the porous structure and describes mass transport, composition changes and motion of metal particles. The unit cell is the representative unit of the source material, and it consists of several metal particles, the SiC matrix and other components produced from the synthesis process.
The reactions between different components and the microstructure evolution of the product will be considered. The effects of heating rate and metal particle size on species uniformity and microstructure are investigated.
Modified dwell time optimization model and its applications in subaperture polishing.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-05-20
The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified optimization model of dwell time using an iterative, numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model has essential compatibility with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. The emulational fabrication of a Φ200 mm workpiece by the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization costs much less time. With the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are optimized, respectively, for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinearly inversely proportional to convergence rate and polishing time; a TIF size of ~1/7 of the workpiece size is preferred; (2) the polishing time is less sensitive to path interval, but increasing the interval markedly reduces the convergence rate; a path interval of ~1/8-1/10 of the TIF size is deemed appropriate. The proposed model was deployed on JR-1800 and MRF-180 machines. Figuring results for a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur plane yield RMS errors of 0.016λ and 0.013λ (λ=632.8 nm), respectively, and thereby validate the feasibility of the proposed dwell time model for subaperture polishing.
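At its core, the dwell-time problem above is a non-negative deconvolution: material removal is the convolution of the TIF with the dwell-time map. The following 1-D sketch illustrates that structure only; it assumes a Gaussian TIF and uses a generic non-negative least-squares solver, and is not the authors' modified model with arbitrary paths and multiple TIFs.

```python
import numpy as np
from scipy.optimize import nnls

def gaussian_tif(x, peak=0.05, sigma=1.5):
    """Assumed Gaussian tool influence function sampled along the path."""
    return peak * np.exp(-0.5 * (x / sigma) ** 2)

# Toy 1-D problem: error map at m points, dwell positions at k path points.
m, k = 200, 100
x_err = np.linspace(-10.0, 10.0, m)
x_dwell = np.linspace(-10.0, 10.0, k)
error = 0.5 + 0.3 * np.cos(0.6 * x_err)     # synthetic low-frequency error

# Removal matrix: A[i, j] = removal at x_err[i] per unit dwell at x_dwell[j].
A = gaussian_tif(x_err[:, None] - x_dwell[None, :])

# Non-negative dwell times that best fit the error map.
dwell, _ = nnls(A, error)
rms_before = np.sqrt(np.mean(error ** 2))
rms_after = np.sqrt(np.mean((error - A @ dwell) ** 2))
print(f"RMS convergence rate: {100 * (1 - rms_after / rms_before):.1f}%")
```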
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs. rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
Sample size considerations when groups are the appropriate unit of analyses
Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith
2007-01-01
This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
Optimal Sizing Tool for Battery Storage in Grid Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-24
The battery storage sizing tool developed at Pacific Northwest National Laboratory can be used to evaluate economic performance and determine the optimal size of battery storage in different use cases considering multiple power system applications. The considered use cases include i) utility-owned battery storage, and ii) battery storage behind the customer meter. The power system applications of energy storage include energy arbitrage, balancing services, T&D deferral, outage mitigation, demand charge reduction, etc. Most existing solutions consider only one or two grid services simultaneously, such as balancing service and energy arbitrage. ES-Select, developed by Sandia and KEMA, is able to consider multiple grid services, but it stacks the grid services based on priorities instead of co-optimizing them. This tool is the first to provide co-optimization for systematic and local grid services.
Optimal synthesis and characterization of Ag nanofluids by electrical explosion of wires in liquids
2011-01-01
Silver nanoparticles were produced by electrical explosion of wires in liquids with no additive. In this study, we optimized the fabrication method and examined the effects of manufacturing process parameters. Morphology and size of the Ag nanoparticles were determined using transmission electron microscopy and field-emission scanning electron microscopy. Size and zeta potential were analyzed using dynamic light scattering. A response optimization technique showed that optimal conditions were achieved when capacitance was 30 μF, wire length was 38 mm, liquid volume was 500 mL, and the liquid type was deionized water. The average Ag nanoparticle size in water was 118.9 nm and the zeta potential was -42.5 mV. The critical heat flux of the 0.001-vol.% Ag nanofluid was higher than that of pure water. PMID:21711757
Patel, Lara A; Kindt, James T
2017-03-14
We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.
1981-12-01
[Fragment of a Texas Instruments Ada Optimizing Compiler document: the code generator writes per-unit map files named library-file.library-unit[.subunit] with the extensions .SYMAP (Symbol Map), .SMAP (Statement Map), and .TMAP (Type Map); the PUNIT command selects the program unit to be processed.]
Symplectic multi-particle tracking on GPUs
NASA Astrophysics Data System (ADS)
Liu, Zhicong; Qiang, Ji
2018-05-01
A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model preserves phase space structure and reduces non-physical effects in long-term simulation, which is important for beam property evaluation in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both a single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time in comparison to the CPU implementation.
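A second-order kick-drift splitting is the usual way such a tracking model stays symplectic, and its per-particle arithmetic is what maps so well onto one GPU thread per particle. The sketch below is a plain-CPU illustration with an invented FODO-like cell; it is not the paper's code, and all strengths and counts are placeholders.

```python
import numpy as np

def drift(x, px, y, py, length):
    """Drift map: exact symplectic update of the transverse coordinates."""
    return x + length * px, px, y + length * py, py

def quad_kick(x, px, y, py, k1l):
    """Thin-lens quadrupole kick: symplectic momentum update."""
    return x, px - k1l * x, y, py + k1l * y

def track(particles, n_turns, cell):
    """Track an (N, 4) array of particles through n_turns of a lattice cell.

    The loop body is identical and independent for every particle, which is
    what makes a one-thread-per-particle GPU port straightforward.
    """
    x, px, y, py = particles.T.copy()
    for _ in range(n_turns):
        for kind, arg in cell:
            if kind == "drift":
                x, px, y, py = drift(x, px, y, py, arg)
            else:
                x, px, y, py = quad_kick(x, px, y, py, arg)
    return np.stack([x, px, y, py], axis=1)

# Invented FODO-like toy cell: focus, drift, defocus, drift.
cell = [("quad", +0.3), ("drift", 1.0), ("quad", -0.3), ("drift", 1.0)]
rng = np.random.default_rng(1)
particles = 1e-3 * rng.standard_normal((10_000, 4))
final = track(particles, n_turns=1000, cell=cell)
```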
Improved CPAS Photogrammetric Capabilities for Engineering Development Unit (EDU) Testing
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Bretz, David R.
2013-01-01
This paper focuses on two key improvements to the photogrammetric analysis capabilities of the Capsule Parachute Assembly System (CPAS) for the Orion vehicle. The Engineering Development Unit (EDU) system deploys Drogue and Pilot parachutes via mortar, where an important metric is the muzzle velocity. This can be estimated using a high speed camera pointed along the mortar trajectory. The distance to the camera is computed from the apparent size of features of known dimension. This method was validated with a ground test and compares favorably with simulations. The second major photogrammetric product is measuring the geometry of the Main parachute cluster during steady-state descent using onboard cameras. This is challenging as the current test vehicles are suspended by a single-point attachment unlike earlier stable platforms suspended under a confluence fitting. The mathematical modeling of fly-out angles and projected areas has undergone significant revision. As the test program continues, several lessons were learned about optimizing the camera usage, installation, and settings to obtain the highest quality imagery possible.
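The muzzle-velocity estimate rests on the pinhole-camera relation between apparent size and distance. A minimal worked sketch follows; the focal length, feature dimension, frame rate, and pixel measurements are all assumed placeholders, not CPAS data.

```python
import numpy as np

# Pinhole model: size_px = f_px * L / Z  =>  Z = f_px * L / size_px,
# where L is the true feature dimension (m), f_px the focal length in
# pixels, and size_px the measured apparent size in a frame. Differencing
# Z across frames gives the along-axis velocity.
f_px = 2500.0                                     # assumed calibration
L = 0.20                                          # assumed feature size, m
size_px = np.array([410.0, 250.0, 180.0, 140.0])  # measured per frame
fps = 1000.0                                      # high-speed frame rate

Z = f_px * L / size_px                            # distance per frame, m
v = np.diff(Z) * fps                              # frame-to-frame velocity, m/s
print(Z.round(3), v.round(1))
```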
Numerical characterization of micro-cell UO2–Mo pellet for enhanced thermal performance
NASA Astrophysics Data System (ADS)
Lee, Heung Soo; Kim, Dong-Joo; Kim, Sun Woo; Yang, Jae Ho; Koo, Yang-Hyun; Kim, Dong Rip
2016-08-01
Metallic micro-cell UO2 pellets with high thermal conductivity have received attention as a promising accident-tolerant fuel. Although experimental demonstrations have been successful, studies on the potential of current metallic micro-cell UO2 fuels for further enhancement of thermal performance are lacking. Here, we numerically investigated the thermal conductivities of micro-cell UO2–Mo pellets in terms of the amount of Mo content, the unit cell size, and the aspect ratio of the micro-cells. The results showed good agreement with experimental measurements and, more importantly, indicated the importance of optimizing the unit cell geometries of the micro-cell pellets for greater increases in thermal conductivity. Consequently, the micro-cell UO2–Mo pellets (5 vol% Mo) with modified geometries increased the thermal conductivity of the current UO2 pellets by about 2.5 times, and lowered the temperature gradient within the pellets by 62.9% under a linear heat generation rate of 200 W/cm.
Design for pressure regulating components
NASA Technical Reports Server (NTRS)
Wichmann, H.
1973-01-01
The design development for Pressure Regulating Components included a regulator component trade-off study with analog computer performance verification to arrive at a final optimized regulator configuration for the Space Storable Propulsion Module, under development for a Jupiter Orbiter mission. This application requires the pressure regulator to be capable of withstanding long-term fluorine exposure. In addition, individual but basically identical (for purposes of commonality) units are required for separate oxidizer and fuel pressurization. The need for dual units requires improvement in regulation accuracy over present designs. An advanced regulator concept was prepared featuring redundant bellows, all-metallic/ceramic construction, friction-free guidance of moving parts, gas damping, and the elimination of the coil springs normally used for reference forces. The activities included testing of actual-size seat/poppet components to determine actual discharge coefficients and flow forces. The resulting data were inserted into the computer model of the regulator. Computer simulation of the propulsion module performance over two mission profiles indicated satisfactory minimization of the propellant residual requirements imposed by regulator performance uncertainties.
Quantum supercharger library: hyper-parallelism of the Hartree-Fock method.
Fernandes, Kyle D; Renison, C Alicia; Naidoo, Kevin J
2015-07-05
We present here a set of algorithms that completely rewrites the Hartree-Fock (HF) computations common to many legacy electronic structure packages (such as GAMESS-US, GAMESS-UK, and NWChem) into a massively parallel compute scheme that takes advantage of hardware accelerators such as Graphical Processing Units (GPUs). The HF compute algorithm is core to a library of routines that we name the Quantum Supercharger Library (QSL). We briefly evaluate the QSL's performance and report that it accelerates a HF 6-31G Self-Consistent Field (SCF) computation by up to 20 times for medium-sized molecules (such as a buckyball) when compared with mature Central Processing Unit algorithms available in the legacy codes in regular use by researchers. It achieves this acceleration by massive parallelization of the one- and two-electron integrals and optimization of the SCF and Direct Inversion in the Iterative Subspace routines through the use of GPU linear algebra libraries. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.
2013-12-01
Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined based on limited studies from the literature without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for the stomatal conductance formulation. These parameters are optimized by assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74-year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome this problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period or window size is then determined by analyzing the magnitude of the minimized cost function, and the coefficient of determination (R2) and root-mean-square deviation (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. We found that the optimized fw correlates best, linearly, with soil water content at 5 to 10 cm depth. We also found that both the temporal scale or window size and the a priori uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to a reasonable estimate of Vcmax. In the summer, the fluctuation of optimized Vcmax is mainly caused by the uncertainties in Vcmax and not the window size. Our study suggests that a smooth Vcmax curve optimized from an optimal time window size is close to reality even though the RMSE of GPP at this window is not the minimum. It also suggests that for accurate optimization of Vcmax, it is necessary to set appropriate levels of uncertainty of Vcmax in the spring and summer because the rate of change of leaf nitrogen concentration differs over the season. Parameter optimizations for more sites and multiple years are in progress.
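A perturbed-observation EnKF analysis step for a small parameter ensemble can be written compactly; the sketch below uses a toy one-line forward model standing in for BEPS, and the priors, observation value, and error variance are all illustrative assumptions.

```python
import numpy as np

def enkf_parameter_update(params, predictions, observation, obs_var, rng):
    """One EnKF analysis step for a parameter ensemble (e.g. Vcmax, fw).

    params      : (n_ens, n_par) prior parameter ensemble
    predictions : (n_ens,) model-predicted observable (e.g. daily GPP)
    observation : scalar measurement (e.g. flux-tower GPP)
    """
    n_ens = len(predictions)
    p_anom = params - params.mean(axis=0)
    h_anom = predictions - predictions.mean()
    cov_ph = p_anom.T @ h_anom / (n_ens - 1)     # parameter-obs covariance
    var_hh = h_anom @ h_anom / (n_ens - 1)       # predicted-obs variance
    gain = cov_ph / (var_hh + obs_var)           # Kalman gain, (n_par,)
    # Each member assimilates its own noisy copy of the observation.
    obs_pert = observation + np.sqrt(obs_var) * rng.standard_normal(n_ens)
    return params + np.outer(obs_pert - predictions, gain)

rng = np.random.default_rng(0)
ens = np.column_stack([rng.normal(60.0, 10.0, 50),    # Vcmax prior
                       rng.normal(0.8, 0.1, 50)])     # fw prior
pred = ens[:, 0] * ens[:, 1] * 0.12                   # toy GPP forward model
ens_post = enkf_parameter_update(ens, pred, observation=6.0,
                                 obs_var=0.5, rng=rng)
```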
NASA Astrophysics Data System (ADS)
Gentry, D.; Whinnery, J. T.; Ly, V. T.; Travers, S. V.; Sagaga, J.; Dahlgren, R. P.
2017-12-01
Microorganisms play a major role in our biosphere due to their ability to alter water, carbon and other geochemical cycles. Fog and low-level cloud water can play a major role in dispersing and supporting such microbial diversity. An ideal region to gather these microorganisms for characterization is the central coast of California, where dense fog is common. Fog captured from an unmanned aerial vehicle (UAV) at different altitudes will be analyzed to better understand the nature of microorganisms in the lower atmosphere and their potential geochemical impacts. The capture design consists of a square-meter hydrophobic mesh that hangs from a carbon fiber rod attached to a UAV. The DJI M600, a hexacopter, will be utilized as the transport for the payload, the passive impactor collection unit (PICU). The M600 will hover in a fog bank at altitudes between 10 and 100 m, collecting water samples via the PICU. A computational fluid dynamics (CFD) model will optimize the PICU's size, shape and placement for maximum capture efficiency and to avoid contamination from the UAV downwash. On board, there will also be an altitude, temperature and barometric pressure sensor whose output is logged to an SD card. A scale model of the PICU has been tested with several different types of hydrophobic meshes in a fog chamber at 90-95% humidity; polypropylene was found to capture the fog droplets most efficiently, at a rate of 0.0042 g/cm²/h. If the amount collected is proportional to the area of mesh, the estimated amount of water collected under optimal fog and flight conditions by the impactor is 21.3 g. If successful, this work will help identify the organisms living in the lower atmosphere as well as their potential geochemical impacts.
Bansal, Sanjay; Beg, Sarwar; Asthana, Abhay; Garg, Babita; Asthana, Gyati Shilakari; Kapil, Rishi; Singh, Bhupinder
2016-01-01
The objectives of the present studies were to develop systematically optimized multiple-unit gastroretentive microballoons, i.e. hollow microspheres of itopride hydrochloride (ITH), employing a quality by design (QbD)-based approach. Initially, the patient-centric QTPP and CQAs were earmarked, and preliminary studies were conducted to screen the suitable polymer, solvent, solvent ratio, pH and temperature conditions. Microspheres were prepared by a non-aqueous solvent evaporation method employing Eudragit S-100. Risk assessment studies were carried out by constructing an Ishikawa cause-and-effect (fishbone) diagram, and techniques like the risk estimation matrix (REM) and failure mode effect analysis (FMEA) facilitated the selection of plausible factors affecting the drug product CQAs, i.e. percent yield, entrapment efficiency (EE) and percent buoyancy. A 3^3 Box-Behnken design (BBD) was employed for optimizing the CMAs and CPPs selected during factor screening studies employing a Taguchi design, i.e. drug-polymer ratio (X1), stirring temperature (X2) and stirring speed (X3). The hollow microspheres, prepared as per the BBD, were evaluated for EE, particle size and drug release characteristics. The optimum formulation was selected using a numerical desirability function, yielding excellent floatation characteristics along with adequate drug release control. Drug-excipient compatibility studies employing FT-IR, DSC and powder XRD revealed the absence of significant interactions among the formulation excipients. SEM studies on the optimized formulation showed the hollow and spherical nature of the prepared microspheres. In vivo X-ray imaging studies in rabbits confirmed the buoyant nature of the hollow microspheres for 8 h in the upper GI tract. In a nutshell, the current investigations report the successful development of gastroretentive floating microspheres for once-a-day administration of ITH.
The preliminary SOL (Sizing and Optimization Language) reference manual
NASA Technical Reports Server (NTRS)
Lucas, Stephen H.; Scotti, Stephen J.
1989-01-01
The Sizing and Optimization Language, SOL, a high-level special-purpose computer language has been developed to expedite application of numerical optimization to design problems and to make the process less error-prone. This document is a reference manual for those wishing to write SOL programs. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler and runtime library routines. An overview of SOL appears in NASA TM 100565.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon the simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
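ESM is conventionally expressed as a weighted sum in which infrastructure equivalency factors convert each resource demand into mass. The sketch below shows that sum in code; the default factor values are placeholders, not published ALS figures.

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_h_per_d, duration_d,
                           v_eq=66.7, p_eq=237.0, c_eq=60.0, ct_eq=0.465):
    """ESM = M + V*V_eq + P*P_eq + C*C_eq + CT*D*CT_eq

    v_eq (kg/m^3), p_eq and c_eq (kg/kW), and ct_eq (kg per crew-hour)
    are mission-specific equivalency factors; the defaults here are
    illustrative placeholders only.
    """
    return (mass_kg
            + volume_m3 * v_eq
            + power_kw * p_eq
            + cooling_kw * c_eq
            + crewtime_h_per_d * duration_d * ct_eq)

# Example: a hypothetical subsystem on a 600-day mission.
print(equivalent_system_mass(1500.0, 10.0, 3.0, 3.0, 0.5, 600))
```

A dynamic simulation replaces the nominal inputs above with the peak (or time-resolved) demands it computes, which is exactly the gap the static calculation leaves open.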
A thermal desorption mass spectrometer for freshly nucleated secondary aerosol particles
NASA Astrophysics Data System (ADS)
Held, A.; Gonser, S. G.
2012-04-01
Secondary aerosol formation in the atmosphere is observed in a large variety of locations worldwide, introducing new particles to the atmosphere which can grow to sizes relevant for health and climate effects of aerosols. The chemical reactions leading to atmospheric secondary aerosol formation are not yet fully understood. At the same time, analyzing the chemical composition of freshly nucleated particles is still a challenging task. We are currently finishing the development of a field portable aerosol mass spectrometer for nucleation particles with diameters smaller than 30 nm. This instrument consists of a custom-built aerosol sizing and collection unit coupled to a time-of-flight mass spectrometer (TOF-MS). The aerosol sizing and collection unit is composed of three major parts: (1) a unipolar corona aerosol charger, (2) a radial differential mobility analyzer (rDMA) for aerosol size separation, and (3) an electrostatic precipitator for aerosol collection. After collection, the aerosol sample is thermally desorbed, and the resulting gas sample is transferred to the TOF-MS for chemical analysis. The unipolar charger is based on corona discharge from carbon fibres (e.g. Han et al., 2008). This design allows efficient charging at voltages below 2 kV, thus eliminating the potential for ozone production which would interfere with the collected aerosol. With the current configuration the extrinsic charging efficiency for 20 nm particles is 32 %. The compact radial DMA similar to the design of Zhang et al. (1995) is optimized for a diameter range from 1 nm to 100 nm. Preliminary tests show that monodisperse aerosol samples (geometric standard deviation of 1.09) at 10 nm, 20 nm, and 30 nm can easily be separated from the ambient polydisperse aerosol population. Finally, the size-segregated aerosol sample is collected on a high-voltage biased metal filament. The collected sample is protected from contamination using a He sheath counterflow. Resistive heating of the filament allows temperature-controlled desorption of compounds of different volatility. We will present preliminary characterization experiments of the aerosol sizing and collection unit coupled to the mass spectrometer. Funding by the German Research Foundation (DFG) under grant DFG HE5214/3-1 is gratefully acknowledged. Han, B., Kim, H.J., Kim, Y.J., and Sioutas, C. (2008) Unipolar charging of ultrafine particles using carbon fiber ionizers. Aerosol Sci. Technol, 42, 793-800. Zhang, S.-H., Akutsu, Y., Russell, L.M., Flagan, R.C., and Seinfeld, J.H. (1995) Radial Differential Mobility Analyzer. Aerosol Sci. Technol, 23, 357-372.
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2 × 10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
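The fixed-versus-adaptive contrast can be made concrete with a Gaussian-kernel rate model in which the adaptive variant ties each earthquake's bandwidth to its n-th nearest-neighbor distance. The sketch below illustrates that idea under assumed kernels and a synthetic catalog; it is not the study's exact smoothing or its stabilizing modifications.

```python
import numpy as np

def smoothed_rate(quakes, grid, d_fixed=50.0, n_neighbor=None):
    """Gaussian-kernel seismicity rate on grid cells (coordinates in km).

    Fixed smoothing uses one bandwidth for every earthquake; adaptive
    smoothing sets each earthquake's bandwidth to the distance to its
    n-th nearest neighbor, narrowing kernels inside clusters and
    widening them in sparse regions.
    """
    dq = np.linalg.norm(quakes[:, None, :] - quakes[None, :, :], axis=2)
    if n_neighbor is None:
        d = np.full(len(quakes), d_fixed)
    else:
        d = np.sort(dq, axis=1)[:, n_neighbor]   # index 0 is self (0 km)
    r = np.linalg.norm(grid[:, None, :] - quakes[None, :, :], axis=2)
    kernels = np.exp(-0.5 * (r / d) ** 2) / (2.0 * np.pi * d ** 2)
    return kernels.sum(axis=1)                   # events per unit area

rng = np.random.default_rng(3)
quakes = rng.normal(0.0, 30.0, size=(200, 2))    # synthetic clustered catalog
gx, gy = np.meshgrid(np.linspace(-100, 100, 41), np.linspace(-100, 100, 41))
grid = np.column_stack([gx.ravel(), gy.ravel()])
rate_fixed = smoothed_rate(quakes, grid, d_fixed=25.0)
rate_adaptive = smoothed_rate(quakes, grid, n_neighbor=5)
```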
Role of step size and max dwell time in anatomy based inverse optimization for prostate implants
Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha
2013-01-01
In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size was 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports the use of a 0.5 cm step size for prostate implants. PMID:24049323
Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An
2005-11-15
Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is threefold. Firstly, we compare the performance of several word-based and alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity β between any pair of DNA sequences. Our study shows (1) for whole-sequence similarity/dissimilarity identification the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice, (2) for each measure the optimal word size increases with window size, (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis, (4) the estimate β̂ of β based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database searches for similar sequences and (5) β̂ is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The algorithm SK-LD, the estimator β̂ and the simulation software are implemented in MATLAB code, and are available at http://www.stat.ncku.edu.tw/tjwu
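A minimal sketch of a symmetric Kullback-Leibler discrepancy over k-word frequencies of the kind the abstract names follows; the pseudocount handling of unseen words is an assumption here, and the published SK-LD may treat zero counts differently.

```python
from collections import Counter
from math import log

def word_freqs(seq, k):
    """Relative frequencies of overlapping k-words in a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sk_ld(seq1, seq2, k=4, eps=1e-9):
    """Symmetric KL discrepancy between the two k-word distributions."""
    p, q = word_freqs(seq1, k), word_freqs(seq2, k)
    words = set(p) | set(q)
    kl_pq = sum(p.get(w, eps) * log(p.get(w, eps) / q.get(w, eps))
                for w in words)
    kl_qp = sum(q.get(w, eps) * log(q.get(w, eps) / p.get(w, eps))
                for w in words)
    return 0.5 * (kl_pq + kl_qp)

print(sk_ld("ACGTACGTGGTACCAGT" * 20, "ACGTTCGTGGAACCTGT" * 20, k=3))
```

Consistent with the guideline above, k would be chosen as a function of the window size, with larger windows supporting larger optimal word sizes.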
Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structural element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.
Optimizing Mississippi aggregates for concrete bridge decks.
DOT National Transportation Integrated Search
2012-12-01
AASHTO M 43 Standard Specification for Sizes of Aggregate for Road and Bridge Construction addresses particle size distribution of material included in various maximum nominal size aggregates. This particle size distribution requires additi...
ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yipeng
In this paper, online minimization of vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for the online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero-current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with the one from LOCO (Linear Optics from Closed Orbits) response matrix correction.
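A sketch of such an optimization loop follows, with a synthetic stand-in for the beam-size readback; in the real setup the objective would set the 59 skew-quadrupole supplies and read the ring diagnostics. The GA operators and hyperparameters here are generic choices, not the APS implementation.

```python
import numpy as np

def beam_size_metric(skew_currents):
    """Synthetic objective standing in for the measured vertical beam size."""
    response = np.sin(np.arange(skew_currents.size) + 1.0)
    return 1.0 + np.sum((skew_currents - 0.3 * response) ** 2)

def genetic_minimize(objective, n_var=59, pop=40, gens=200,
                     sigma=0.1, elite=8, seed=0):
    """Minimal real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation, starting from all-zero skew currents."""
    rng = np.random.default_rng(seed)
    population = np.zeros((pop, n_var))
    for _ in range(gens):
        fitness = np.array([objective(ind) for ind in population])
        parents = population[np.argsort(fitness)[:elite]]
        children = []
        for _ in range(pop):
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(n_var) < 0.5          # uniform crossover
            child = np.where(mask, a, b)
            children.append(child + sigma * rng.standard_normal(n_var))
        population = np.array(children)
    best = min(population, key=objective)
    return best, objective(best)

best_currents, best_size = genetic_minimize(beam_size_metric)
```

On a machine, the evaluation budget (population size times generations) is what sets the one-hour optimization time, since each fitness evaluation is a live measurement.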
Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E
2018-07-01
The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10 000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.
2008-05-01
High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses the application of an HSDPA receiver which has been optimized for power consumption, with the focus on the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant increase in performance for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm which achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models regarding their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation described both in SystemC and VHDL, targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts parameters of the receiver, like filter size and oversampling ratio, to minimize the power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is not affected. This work utilizes SystemC and ORINOCO for a first estimation of power consumption at an early step of the design flow. Thereby, algorithms can be compared in different operating modes, including the effects of control units. Here, an algorithm having higher peak complexity and power consumption but providing more flexibility showed lower consumption in normal operating modes compared to the algorithm optimized for peak performance.
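In that complexity comparison, the LMS update is the lightweight end: O(taps) per sample, against O(taps^2) for an RLS/Levinson-type update, which is the root of the power trade-off. A generic LMS equalizer sketch over a toy two-tap channel follows; all parameters are illustrative and this is not the Prefilter-Rake structure itself.

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=8, mu=0.01):
    """LMS adaptive FIR filter trained on known symbols: O(n_taps)/sample."""
    w = np.zeros(n_taps)
    out = np.zeros(len(received))
    for n in range(n_taps, len(received)):
        x = received[n - n_taps:n][::-1]   # tap delay line
        out[n] = w @ x
        err = desired[n] - out[n]
        w += mu * err * x                  # stochastic-gradient update
    return w, out

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)          # training symbols
rx = np.convolve(symbols, [1.0, 0.4])[:len(symbols)]  # toy multipath channel
rx += 0.05 * rng.standard_normal(len(symbols))        # receiver noise
w, y = lms_equalizer(rx, symbols)
```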
Anthun, Kjartan Sarheim; Kittelsen, Sverre Andreas Campbell; Magnussen, Jon
2017-04-01
This paper analyses productivity growth in the Norwegian hospital sector over a period of 16 years, 1999-2014. This period was characterized by a large ownership reform with subsequent hospital reorganizations and mergers. We describe how technological change, technical productivity, scale efficiency and the estimated optimal size of hospitals have evolved during this period. Hospital admissions were grouped into diagnosis-related groups using a fixed-grouper logic. Four composite outputs were defined and inputs were measured as operating costs. Productivity and efficiency were estimated with bootstrapped data envelopment analyses. Mean productivity increased by 24.6 percentage points from 1999 to 2014, an average annual change of 1.5%. There was substantial growth in productivity and hospital size following the ownership reform. After the reform (2003-2014), average annual growth was <0.5%. There was no evidence of technical change. Estimated optimal size was smaller than the actual size of most hospitals, yet scale efficiency was high even after hospital mergers. However, the later hospital mergers have not been followed by productivity growth similar to that around the time of the reform. This study addresses the issues of both cross-sectional and longitudinal comparability of case mix between hospitals, and thus provides a framework for future studies. The study adds to the discussion on optimal hospital size. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.
2016-09-01
The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity and time. However, the authors were successful in developing a PSO-based technique, namely the iterative improvement binary particle swarm optimization (IIBPSO) technique, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and an iterative improvement local search mechanism is then employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying a local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus performs best and shows excellent results.
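A sketch of the binary-PSO half of such a scheme on a toy single-item, single-level lot-sizing instance follows; the sigmoid position update is the standard binary-PSO rule, while the cost model and hyperparameters are illustrative stand-ins for the paper's CMIMLLS formulation.

```python
import numpy as np

def binary_pso(cost, n_bits, n_particles=30, iters=200, w=0.7,
               c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO: velocities squashed through a sigmoid and
    sampled into {0,1} positions (here, setup indicators per period)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, n_bits)).astype(float)
    v = np.zeros((n_particles, n_bits))
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_bits))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    -8.0, 8.0)
        x = (rng.random((n_particles, n_bits)) < 1.0 / (1.0 + np.exp(-v)))
        x = x.astype(float)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

demand = np.array([4, 2, 5, 1, 3, 6, 2, 4])    # toy 8-period demand
setup_cost, hold_cost = 20.0, 1.0

def cost(plan):
    """Setup cost per ordering period plus end-of-period holding cost,
    ordering in each setup period exactly enough to reach the next one."""
    if plan[0] == 0:
        return 1e9                              # period-1 demand uncovered
    periods = np.where(plan == 1)[0]
    total = setup_cost * len(periods)
    for i, p in enumerate(periods):
        nxt = periods[i + 1] if i + 1 < len(periods) else len(demand)
        qty = demand[p:nxt].sum()
        for t in range(p, nxt):
            qty -= demand[t]
            total += hold_cost * qty
    return total

plan, best_cost = binary_pso(cost, n_bits=len(demand))
```

An iterative-improvement pass would then flip individual setup bits around this plan, accepting any flip that lowers the cost, which is the local-search half of the hybrid.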
Effects of Planetary Gear Ratio on Mean Service Life
NASA Technical Reports Server (NTRS)
Savage, M.; Rubadeux, K. L.; Coe, H. H.
1996-01-01
Planetary gear transmissions are compact, high-power speed reductions which use parallel load paths. The range of possible reduction ratios is bounded from below and above by limits on the relative size of the planet gears. For a single-plane transmission, the planet gear has no size at a ratio of two. As the ratio increases, so does the size of the planets relative to the sizes of the sun and ring. Which ratio is best for a planetary reduction can be resolved by studying a series of optimal designs. In this series, each design is obtained by maximizing the service life for a planetary with a fixed size, gear ratio, input speed, power and materials. The planetary gear reduction service life is modeled as a function of the two-parameter Weibull distributed service lives of the bearings and gears in the reduction. Planet bearing life strongly influences the optimal reduction lives, which point to an optimal planetary reduction ratio in the neighborhood of four to five.
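For two-parameter Weibull component lives with a common shape parameter, a strict-series system-life combination is the standard modeling device (assumed here; the paper's model may weight gears and bearings differently):

```latex
S_i(t) = \exp\!\left[-\left(\tfrac{t}{\eta_i}\right)^{\beta}\right],
\qquad
S_{\mathrm{sys}}(t) = \prod_i S_i(t)
       = \exp\!\left[-\,t^{\beta}\sum_i \eta_i^{-\beta}\right],
\qquad
L_{\mathrm{sys}} = \Bigl(\sum_i L_i^{-\beta}\Bigr)^{-1/\beta},
```

where the last expression compares component lives L_i at a common reliability level. Because the planet-bearing terms enter this sum with the shortest lives, they dominate L_sys, which is consistent with the sensitivity noted above.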
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
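The hybrid construction above reduces to a one-dimensional search once the pieces are written down: average the frequentist power over the company's prior on the effect, multiply by the value of approval, and subtract the sampling cost. A sketch under a normal prior follows; all monetary figures and prior parameters are illustrative placeholders.

```python
import numpy as np
from scipy import stats

def expected_profit(n_per_arm, prior_mean=0.3, prior_sd=0.2, sd=1.0,
                    alpha=0.025, value=500e6, cost_per_pt=20e3,
                    n_draws=20000, seed=0):
    """Expected profit of a two-arm Phase III trial of size n per arm.

    The sponsor's prior on the true effect is Normal(prior_mean, prior_sd);
    the regulator applies a one-sided level-alpha test. Profit = value if
    the trial is significant, minus the sampling cost either way.
    """
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, n_draws)   # prior draws
    se = sd * np.sqrt(2.0 / n_per_arm)                  # SE of effect estimate
    z_crit = stats.norm.ppf(1.0 - alpha)
    power = stats.norm.cdf(delta / se - z_crit)         # P(success | delta)
    return power.mean() * value - 2 * n_per_arm * cost_per_pt

sizes = np.arange(50, 2001, 50)
profits = [expected_profit(n) for n in sizes]
print("optimal n per arm:", sizes[int(np.argmax(profits))])
```

The satisficing variant replaces the expectation with the probability that profit exceeds a management target, and the portfolio version optimizes several such curves jointly under a budget constraint.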
NASA Astrophysics Data System (ADS)
Li, Qifan; Chen, Yajie; Harris, Vincent G.
2018-05-01
This letter reports an extended effective medium theory (EMT) including particle-size distribution functions to maximize the magnetic properties of magneto-dielectric composites. It is experimentally verified with Co-Ti substituted barium ferrite (BaCoxTixFe12-2xO19)/wax composites with specifically designed particle-size distributions. In the form of an integral equation, the extended EMT formula essentially takes the size-dependent parameters of the magnetic particle fillers into account. It predicts the effective permeability of magneto-dielectric composites with various particle-size distributions, indicating an optimal distribution for a population of magnetic particles. The improvement in the optimized effective permeability is significant for magnetic particles whose properties are strongly size-dependent.
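One plausible way to fold a particle-size distribution into an EMT closure, assumed here purely for illustration since the letter's exact integral equation is not reproduced in the abstract, is a Bruggeman-type self-consistency condition integrated over the size distribution n(D):

```latex
\int n(D)\,
  \frac{\mu_p(D)-\mu_{\mathrm{eff}}}{\mu_p(D)+2\mu_{\mathrm{eff}}}\,
  \mathrm{d}D
\;+\;
(1-\phi)\,
  \frac{\mu_m-\mu_{\mathrm{eff}}}{\mu_m+2\mu_{\mathrm{eff}}} = 0,
\qquad
\phi=\int n(D)\,\mathrm{d}D,
```

where \mu_p(D) is the size-dependent permeability of the ferrite filler, \mu_m that of the wax matrix, and \phi the filler volume fraction; \mu_eff is the root of this equation, and the optimal n(D) is the distribution that maximizes it.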
24 CFR 886.125 - Overcrowded and underoccupied units.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Sanitary by reason of increase in Family size or that a Contract unit is larger than appropriate for the size of the Family in occupancy, housing assistance payments with respect to such unit will not be abated, unless the Owner fails to offer the Family a suitable unit as soon as one becomes vacant and...
NASA Astrophysics Data System (ADS)
Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama
2013-01-01
Development of an effective formulation involves careful optimization of a number of excipient and process variables. Sometimes the number of variables is so large that even the most efficient optimization designs require a very large number of trials, which puts stress on costs as well as time. A creative combination of several design methods leads to a smaller number of trials. This study was aimed at the development of nanostructured lipid carriers (NLCs) by using a combination of different optimization methods. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics like size and entrapment efficiency. Four of the 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Of the remaining seven variables, four (concentrations of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles, while the other three (phase ratio, drug-to-lipid ratio, and sonication time) had a higher influence on the entrapment efficiency. The first four variables were optimized for their effect on size using the Taguchi L9 orthogonal array. The optimized values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug:lipid ratio were varied using the Box-Behnken response surface design to optimize the entrapment efficiency. Finally, by performing only 38 trials, we optimized 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.
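The screening stage above can be illustrated with a standard 12-run Plackett-Burman matrix for 11 two-level factors; the design generator is the classical one, while the responses below are synthetic stand-ins for measured particle size.

```python
import numpy as np

# Classical 12-run Plackett-Burman design: cyclic shifts of a known
# generator row, plus one all-low run.
first_row = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
design = np.array([np.roll(first_row, i) for i in range(11)])
design = np.vstack([design, -np.ones(11, dtype=int)])   # 12 runs x 11 factors

# Synthetic responses: a few factors truly matter, the rest are noise.
rng = np.random.default_rng(4)
true_effects = np.array([20, 0, 0, 15, 0, 0, -10, 0, 0, 8, 0])
size_nm = 150 + design @ true_effects / 2 + rng.normal(0, 3, 12)

# Main effect of factor j = mean(response at +1) - mean(response at -1).
# Factors with small effects are screened out before the Taguchi and
# Box-Behnken stages.
effects = design.T @ size_nm / 6
for j, e in enumerate(effects):
    print(f"factor {j + 1:2d}: effect {e:+6.1f} nm")
```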
History of water-column anoxia in the Black Sea indicated by pyrite framboid size distributions
Wilkin, R.T.; Arthur, M.A.; Dean, W.E.
1997-01-01
A detailed study of size distributions of framboidal pyrite in Holocene Black Sea sediments establishes the timing of a change from deposition under an oxic water column to deposition under an anoxic and sulfidic water column. In the most recent carbonate-rich sediments (Unit I) and in the organic carbon-rich sapropel (Unit II), framboid size distributions are remarkably uniform (mean diameter = 5 μm); over 95% of the framboids in Unit I and Unit II are < 7 μm in diameter. These properties of framboidal pyrite are consistent with framboid nucleation and growth within an anoxic and sulfidic water column, followed by transport to the sediment-water interface, cessation of pyrite growth due to the exhaustion of reactive iron, and subsequent burial. In contrast, the organic carbon-poor sediments of lacustrine Unit III contain pyrite framboids that are generally much larger in size (mean diameter = 10 μm). In Unit III, over 95% of the framboids are < 25 μm in diameter, 40% of framboids are between 7 μm and 25 μm, and framboids up to 50 μm in diameter are present. This distribution of sizes suggests framboid nucleation and growth within anoxic sediment porewaters. These new data on size distributions of framboidal pyrite confirm that the development of water-column anoxia in the Black Sea coincided with the initiation of deposition of laminated Unit II sapropels.
Man in Balance with the Environment: Pollution and the Optimal Population Size
ERIC Educational Resources Information Center
Ultsch, Gordon R.
1973-01-01
Discusses the relationship between population size and pollution, and suggests that the optimal population level toward which we should strive would be that level at which man is in balance with the biosphere in terms of pollution production and degradation, coupled with a harmless steady-state background pollution level. (JR)
Geometrical optimization of a local ballistic magnetic sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanda, Yuhsuke; Hara, Masahiro; Nomura, Tatsuya
2014-04-07
We have developed a highly sensitive local magnetic sensor by using a ballistic transport property in a two-dimensional conductor. A semiclassical simulation reveals that the sensitivity increases when the geometry of the sensor and the spatial distribution of the local field are optimized. We have also experimentally demonstrated a clear observation of a magnetization process in a permalloy dot whose size is much smaller than the size of an optimized ballistic magnetic sensor fabricated from a GaAs/AlGaAs two-dimensional electron gas.
Performance Optimization of Irreversible Air Heat Pumps Considering Size Effect
NASA Astrophysics Data System (ADS)
Bi, Yuehong; Chen, Lingen; Ding, Zemin; Sun, Fengrui
2018-06-01
Considering the size of an irreversible air heat pump (AHP), heating load density (HLD) is taken as the thermodynamic optimization objective using finite-time thermodynamics. Based on an irreversible AHP model with infinite reservoir thermal-capacitance rate, an expression for the HLD of the AHP is derived. The HLD optimization is studied analytically and numerically in two respects: (1) choosing the pressure ratio, and (2) distributing the heat-exchanger inventory. Heat reservoir temperatures, the heat transfer performance of the heat exchangers, and the irreversibility of the compression and expansion processes are important factors influencing the performance of an irreversible AHP; they are characterized by the temperature ratio, the heat exchanger inventory, and the isentropic efficiencies, respectively. The impacts of these parameters on the maximum HLD are thoroughly studied. The results show that HLD optimization can make the AHP system smaller and improve its compactness.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to optimization of the drying step, large-size spinel samples were obtained.
Topology synthesis and size optimization of morphing wing structures
NASA Astrophysics Data System (ADS)
Inoyama, Daisaku
This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff" elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues of external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through a multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Hany F; El Hariri, Mohamad; Elsayed, Ahmed
Microgrids' adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays' settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain stable system operation and also regulate the protection scheme's cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controllers' parameters. The latter leads to a reduction of the supercapacitor fault current contribution and an increase in that of other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the supercapacitor size and optimized controller parameters resulting from the proposed two-level optimization scheme fed enough fault current for different types of faults while minimizing the cost of the protection scheme.
Finite-size effect on optimal efficiency of heat engines.
Tajima, Hiroyasu; Hayashi, Masahito
2017-07-01
The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths into the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.
Testing of Strategies for the Acceleration of the Cost Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, Roberto; Vilim, Richard B.
The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimum economic dispatch (ED) and capacity planning solutions for hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e., the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximize the unit profit while simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated by using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each of the unit components. The preconditioner samples a set of stochastic power scenarios for each of the N-R HES unit components, and then for each of them the corresponding value of a suitably defined cost function is evaluated. After a sufficient number of power histories have been simulated, the configuration which ensures the highest profit is selected as the optimal one. The component physical dynamics are represented through suitable ramp constraints, which considerably simplify the numerical solution. In order to test the capabilities of the proposed approach, only the dispatch problem is tackled in the present report, i.e., a reference unit configuration is assumed, and each of the N-R HES unit components is assumed to have a fixed installed capacity. As for the next steps, the main improvement will concern the operation strategy of the ES facility. In particular, in order to describe a more realistic battery commitment strategy, the ES operation will be regulated according to electricity price forecasts.
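A toy version of the Monte Carlo preconditioner can make the idea concrete. The sketch below is not the RAVEN implementation: it samples random 24-hour power histories that respect simple ramp and capacity limits, scores each with an invented surrogate profit function, and keeps the best scenario as an initial guess. All capacities, prices, and costs are placeholders.

```python
# Toy Monte Carlo preconditioner: sample random dispatch scenarios subject to
# simple ramp constraints, score each with a surrogate profit function, and
# keep the most profitable one as an initial guess for a full optimizer.
import numpy as np

rng = np.random.default_rng(0)
HOURS = 24
price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, HOURS))  # $/MWh, synthetic
H2_PRICE = 25.0                                   # $/MWh-equivalent to hydrogen
CAP = {"BOP": 100.0, "SES": 40.0, "ES": 20.0}     # MW, placeholders
RAMP = {"BOP": 10.0, "SES": 20.0, "ES": 20.0}     # MW/h, placeholders
FUEL = {"BOP": 8.0, "SES": 15.0, "ES": 2.0}       # $/MWh, placeholders

def sample_profile(cap, ramp):
    """Random feasible power history honoring capacity and ramp limits."""
    p = np.empty(HOURS)
    p[0] = rng.uniform(0, cap)
    for t in range(1, HOURS):
        lo, hi = max(0.0, p[t - 1] - ramp), min(cap, p[t - 1] + ramp)
        p[t] = rng.uniform(lo, hi)
    return p

def profit(profiles, h2_fraction):
    total = sum(profiles.values())
    revenue = np.sum(price * total * (1 - h2_fraction)) \
        + np.sum(H2_PRICE * total * h2_fraction)
    cost = sum(np.sum(FUEL[k] * profiles[k]) for k in profiles)
    return revenue - cost

best_val, best_cfg = -np.inf, None
for _ in range(5000):
    profiles = {k: sample_profile(CAP[k], RAMP[k]) for k in CAP}
    frac = rng.uniform(0, 0.5)       # share of output routed to hydrogen
    val = profit(profiles, frac)
    if val > best_val:
        best_val, best_cfg = val, (profiles, frac)

print(f"best surrogate profit found: ${best_val:,.0f}")
```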
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
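The ENBS calculation itself is easy to sketch for the single-study case. The following is a minimal Monte Carlo version under a normal-conjugate model with invented numbers, not the zanamivir model from the paper: it estimates the expected value of sample information as the expected gain from acting on the posterior mean, scales by the affected population, subtracts sampling costs, and searches a grid of sample sizes.

```python
# Toy ENBS for a single two-arm comparison under a normal-conjugate model.
# ENBS(n) = population-scaled EVSI minus sampling cost; all numbers invented.
import numpy as np

rng = np.random.default_rng(1)
mu0, sd0 = 500.0, 2000.0        # prior mean/sd of incremental net benefit ($)
sd_i = 8000.0                   # per-patient sd of observed net benefit
POP = 100_000                   # patients affected by the decision
COST_PER_PATIENT = 1500.0       # marginal cost of enrolling one patient
M = 200_000                     # Monte Carlo draws

def enbs(n):
    theta = rng.normal(mu0, sd0, M)                # true INB draws
    xbar = rng.normal(theta, sd_i / np.sqrt(n))    # trial sample means
    w = sd0**2 / (sd0**2 + sd_i**2 / n)            # posterior weight on data
    post_mean = w * xbar + (1 - w) * mu0
    evsi = np.mean(np.maximum(post_mean, 0)) - max(mu0, 0)
    return POP * evsi - COST_PER_PATIENT * n

grid = [50, 100, 200, 400, 800, 1600]
vals = {n: enbs(n) for n in grid}
n_star = max(vals, key=vals.get)
print({n: round(v / 1e6, 2) for n, v in vals.items()}, "-> optimal n:", n_star)
```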
Optimization benefits analysis in production process of fabrication components
NASA Astrophysics Data System (ADS)
Prasetyani, R.; Rafsanjani, A. Y.; Rimantho, D.
2017-12-01
The determination of an optimal number of product combinations is important. The main problem at the part and service department in PT. United Tractors Pandu Engineering (shortened to PT. UTPE) is the optimization of the combination of fabrication component products (known as Liner Plate), which influences the profit obtained by the company. The Liner Plate is a fabrication component that serves as a protector of the core structure for heavy-duty attachments, such as the HD Vessel, HD Bucket, HD Shovel, and HD Blade. The graph of liner plate sales from January to December 2016 fluctuated, and there is no direct conclusion about the optimal production of such fabrication components. The optimal product combination can be achieved by calculating and plotting the amounts of production output and input appropriately. The method used in this study is linear programming with primal, dual, and sensitivity analysis, using QM software for Windows to obtain the optimal combination of fabrication components. At the optimal combination of components, PT. UTPE gains a profit increase of Rp. 105,285,000.00, for a total of Rp. 3,046,525,000.00 per month, with a total production of 71 units per product variant per month.
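For readers who want to reproduce the style of analysis, a minimal product-mix linear program is sketched below with scipy standing in for QM for Windows. The profit coefficients and resource constraints are invented placeholders, not PT. UTPE data; the dual values reported by the solver correspond to the sensitivity analysis mentioned above.

```python
# Minimal product-mix LP in the spirit of the study; coefficients are
# invented placeholders, not PT. UTPE data.
from scipy.optimize import linprog

# Decision variables: monthly production of four liner plate variants.
profit = [4.2e6, 3.1e6, 2.5e6, 1.8e6]      # Rp per unit (hypothetical)
c = [-p for p in profit]                    # linprog minimizes, so negate

A_ub = [
    [3.0, 2.0, 1.5, 1.0],       # machining hours per unit
    [2.0, 2.5, 1.0, 0.5],       # welding hours per unit
    [40.0, 30.0, 25.0, 15.0],   # kg of steel plate per unit
]
b_ub = [400.0, 320.0, 5000.0]   # monthly capacity of each resource

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("optimal mix:", res.x.round(1), " profit: Rp", f"{-res.fun:,.0f}")
```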
NASA Astrophysics Data System (ADS)
L'Heureux, Zara E.
This thesis proposes that internal combustion piston engines can help clear the way for a transformation in the energy, chemical, and refining industries that is akin to the transition computer technology experienced with the shift from large mainframes to small personal computers and large farms of individually small, modular processing units. This thesis provides a mathematical foundation, multi-dimensional optimizations, experimental results, an engine model, and a techno-economic assessment, all working towards quantifying the value of repurposing internal combustion piston engines for new applications in modular, small-scale technologies, particularly for energy and chemical engineering systems. Many chemical engineering and power generation industries have focused on increasing individual unit sizes and centralizing production. This "bigger is better" concept makes it difficult to evolve and incorporate change. Large systems are often designed with long lifetimes, incorporate innovation slowly, and necessitate high upfront investment costs. Breaking away from this cycle is essential for promoting change, especially change happening quickly in the energy and chemical engineering industries. The ability to evolve during a system's lifetime provides a competitive advantage in a field dominated by large and often very old equipment that cannot respond to technology change. This thesis specifically highlights the value of small, mass-manufactured internal combustion piston engines retrofitted to participate in non-automotive system designs. The applications are unconventional and stem first from the observation that, when normalized by power output, internal combustion engines are one hundred times less expensive than conventional, large power plants. This cost disparity motivated a look at scaling laws to determine if scaling across both individual unit size and number of units produced would predict the two-order-of-magnitude difference seen here. For the first time, this thesis provides a mathematical analysis of scaling with a combination of both changing individual unit size and varying the total number of units produced. Different paths to meet a particular cumulative capacity are analyzed and show that total costs are path dependent and vary as a function of the unit size and number of units produced. The path dependence identified is fairly weak, however, and for all practical applications, the underlying scaling laws seem unaffected. This analysis continues to support the interest in pursuing designs built around small, modular infrastructure. Building on the observation that internal combustion engines are an inexpensive power-producing unit, the first optimization in this thesis focuses on quantifying the value of engine capacity committing to deliver power in the day-ahead electricity and reserve markets, specifically based on pricing from the New York Independent System Operator (NYISO). An optimization was written in Python to determine, based on engine cost, fuel cost, engine wear, engine lifetime, and electricity prices, when and how much of an engine's power should be committed to a particular energy market. The optimization aimed to maximize profit for the engine and generator (engine genset) system acting as a price-taker. The result is an annual profit on the order of $30 per kilowatt. The most value in the engine genset is in its commitments to the spinning reserve market, where power is often committed but not always called on to deliver.
This analysis highlights the benefits of modularity in energy generation and provides one example where the system is so inexpensive and short-lived that the optimization views the engine replacement cost as a consumable operating expense rather than a capital cost. Having the opportunity to incorporate incremental technological improvements in a system's infrastructure throughout its lifetime allows introduction of new technology with higher efficiencies and better designs. An alternative to traditionally large infrastructure that locks in a design and today's state-of-the-art technology for the next 50-70 years is a system designed to incorporate new technology in a modular fashion. The modular engine genset system used for power generation is one example of how this works in practice. The largest single component of this thesis is modeling, designing, retrofitting, and testing a reciprocating piston engine used as a compressor. Motivated again by the low cost of an internal combustion engine, this work looks at how an engine (which is, in its conventional form, essentially a reciprocating compressor) can be cost-effectively retrofitted to perform as a small-scale gas compressor. In the laboratory, an engine compressor was built by retrofitting a one-cylinder, 79 cc engine. Various retrofitting techniques were incorporated into the system design, and the engine compressor performance was quantified in each iteration. Because the retrofitted engine is now a power consumer rather than a power-producing unit, the engine compressor is driven in the laboratory with an electric motor. Experimentally, compressed air engine exhaust (starting at elevated inlet pressures) surpassed 650 psia (about 45 bar), which makes this system very attractive for many applications in chemical engineering and refining industries. A model of the engine compressor system was written in Python and incorporates experimentally derived parameters to quantify gas leakage, engine friction, and flow (including backflow) through valves. The model as a whole was calibrated and verified with experimental data and is used to explore engine retrofits beyond what was tested in the laboratory. Along with the experimental and modeling work, a techno-economic assessment is included to compare the engine compressor system with state-of-the-art, commercially available compressors. Included in the financial analysis is a case study where an engine compressor system is modeled to achieve specific compression needs. The result of the assessment is that, indeed, the low engine cost, even with the necessary retrofits, provides a cost advantage over incumbent compression technologies. Lastly, this thesis provides an algorithm and case study for another application of small-scale units in energy infrastructure, specifically in energy storage. This study focuses on quantifying the value of small-scale, onsite energy storage in shaving peak power demands. This case study focuses on university-level power demands. The analysis finds that, because peak power is so costly, even small amounts of energy storage, when dispatched optimally, can provide significant cost reductions. This provides another example of the value of small-scale implementations, particularly in energy infrastructure. While the study focuses on flywheels and batteries as the energy storage medium, engine gensets could also be used to deliver power and shave peak power demands.
The overarching goal of this thesis is to introduce small-scale, modular infrastructure, with a particular focus on the opportunity to retrofit and repurpose inexpensive, mass-manufactured internal combustion engines in new and unconventional applications. The modeling and experimental work presented in this dissertation show very compelling results for engines incorporated into both energy generation infrastructure and chemical engineering industries via compression technologies. The low engine cost provides an opportunity to add retrofits whilst remaining cost competitive with the incumbent technology. This work supports the claim that modular infrastructure, built on the indivisible unit of an internal combustion engine, can revolutionize many industries by providing a low-cost mechanism for rapid change and promoting small-scale designs.
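The day-ahead commitment logic described in this abstract reduces, for a price-taking genset in the energy market alone, to a simple hourly threshold rule. The sketch below is a toy version with synthetic prices and invented engine parameters (it omits the reserve market, where the thesis finds most of the value); note how engine replacement enters as a per-MWh wear cost, i.e. a consumable operating expense.

```python
# Toy price-taker commitment rule for a 1 MW engine genset in a day-ahead
# energy market: run in any hour whose price beats fuel plus wear cost.
# All prices and engine parameters are invented placeholders, not NYISO data.
import numpy as np

rng = np.random.default_rng(2)
HOURS = 8760
price = np.clip(rng.normal(40.0, 15.0, HOURS), 0.0, None)  # $/MWh, synthetic

FUEL = 30.0                       # $ per MWh of electricity out
ENGINE_COST = 50_000.0            # $ per MW installed (~$50/kW)
LIFE_HOURS = 4_000.0              # running hours before replacement
WEAR = ENGINE_COST / LIFE_HOURS   # $/MWh: replacement as an operating expense

mc = FUEL + WEAR                  # marginal cost of producing one MWh
run = price > mc                  # hour-by-hour commitment decision
profit = np.sum(price[run] - mc)  # $ per MW-year, energy market only
print(f"hours run: {run.sum()},  profit: ${profit / 1000:,.0f} per kW-year")
```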
Empty tracks optimization based on Z-Map model
NASA Astrophysics Data System (ADS)
Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao
2017-12-01
For parts with many features, there are many empty (non-cutting) tracks during machining. If these tracks are not optimized, the machining efficiency is seriously affected. In this paper, the characteristics of the empty tracks are studied in detail. Combining existing optimization algorithms, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and the Z-Map model simulation technique is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, and a genetic algorithm then solves the resulting TSP. This optimization method can handle not only simple structural parts but also complex structural parts, so as to effectively plan the empty tracks and greatly improve processing efficiency.
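The core of the method is ordering path segments under precedence constraints. As a simple stand-in for the paper's genetic algorithm, the sketch below uses a nearest-feasible-neighbor heuristic on invented segment coordinates and an invented precedence relation; a GA would refine such an ordering further.

```python
# Greedy stand-in for the paper's GA: order tool-path segments to minimize
# empty (rapid) travel while honoring machining-order precedence constraints.
# Segment coordinates and the precedence sets are invented placeholders.
import numpy as np

pts = np.array([[0, 0], [5, 1], [2, 4], [7, 6], [1, 8], [6, 3]], float)
# prec[i] = set of segments that must be machined before segment i
prec = {0: set(), 1: {0}, 2: set(), 3: {1, 2}, 4: {2}, 5: {0}}

def tour_length(order):
    return sum(np.linalg.norm(pts[a] - pts[b])
               for a, b in zip(order, order[1:]))

def nearest_feasible(start=0):
    done, order, cur = {start}, [start], start
    while len(done) < len(pts):
        ready = [i for i in range(len(pts))
                 if i not in done and prec[i] <= done]   # precedence satisfied
        nxt = min(ready, key=lambda i: np.linalg.norm(pts[cur] - pts[i]))
        done.add(nxt); order.append(nxt); cur = nxt
    return order

order = nearest_feasible()
print("order:", order, " empty travel:", round(tour_length(order), 2))
```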
Optical systems integrated modeling
NASA Technical Reports Server (NTRS)
Shannon, Robert R.; Laskin, Robert A.; Brewer, SI; Burrows, Chris; Epps, Harlan; Illingworth, Garth; Korsch, Dietrich; Levine, B. Martin; Mahajan, Vini; Rimmer, Chuck
1992-01-01
An integrated modeling capability that provides the tools by which entire optical systems and instruments can be simulated and optimized is a key technology development, applicable to all mission classes, especially astrophysics. Many of the future missions require optical systems that are physically much larger than anything flown before and yet must retain the characteristic sub-micron diffraction limited wavefront accuracy of their smaller precursors. It is no longer feasible to follow the path of 'cut and test' development; the sheer scale of these systems precludes many of the older techniques that rely upon ground evaluation of full size engineering units. The ability to accurately model (by computer) and optimize the entire flight system's integrated structural, thermal, and dynamic characteristics is essential. Two distinct integrated modeling capabilities are required. These are an initial design capability and a detailed design and optimization system. The content of an initial design package is shown. It would be a modular, workstation based code which allows preliminary integrated system analysis and trade studies to be carried out quickly by a single engineer or a small design team. A simple concept for a detailed design and optimization system is shown. This is a linkage of interface architecture that allows efficient interchange of information between existing large specialized optical, control, thermal, and structural design codes. The computing environment would be a network of large mainframe machines and its users would be project level design teams. More advanced concepts for detailed design systems would support interaction between modules and automated optimization of the entire system. Technology assessment and development plans for integrated package for initial design, interface development for detailed optimization, validation, and modeling research are presented.
Optimal Control Surface Layout for an Aeroservoelastic Wingbox
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2017-01-01
This paper demonstrates a technique for locating the optimal control surface layout of an aeroservoelastic Common Research Model wingbox, in the context of maneuver load alleviation and active flutter suppression. The combinatorial actuator layout design is solved using ideas borrowed from topology optimization, where the effectiveness of a given control surface is tied to a layout design variable, which varies from zero (the actuator is removed) to one (the actuator is retained). These layout design variables are optimized concurrently with a large number of structural wingbox sizing variables and control surface actuation variables, in order to minimize the sum of structural weight and actuator weight. Results are presented that demonstrate interdependencies between structural sizing patterns and optimal control surface layouts, for both static and dynamic aeroelastic physics.
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho
2008-03-01
To find the optimal binning method and ROI size of an automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular regions of interest (ROI) with 10, 20, and 30 pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find optimal binning, variable-binning-size linear binning (LB; bin size Q: 4~30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4~30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess cross-validation of the system, a five-fold method was used. Each test was repeated twenty times. Overall accuracies with every combination of ROI and binning sizes were statistically compared. For small binning sizes (Q <= 10), NLB shows significantly better accuracy than LB, and K-means NLB (Q = 26) is statistically significantly better than every LB. For the 30-pixel ROI size and most binning sizes, the K-means method performed better than the other NLB and LB methods. When optimal binning and other parameters were set, the overall sensitivity of the classifier was 92.85%. The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO, 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We determined the optimal binning method and ROI size of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
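The LB/NLB distinction is simply uniform versus data-adaptive quantization of gray levels before texture features are computed. Below is a minimal illustration on synthetic pixel data, using scikit-learn's KMeans for the non-linear bins (the study's Fuzzy C-means variant is not shown).

```python
# Linear binning (LB) vs K-means non-linear binning (NLB) of gray levels,
# the preprocessing step compared in the study; data here are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Synthetic ROI: bimodal gray-level distribution (e.g., GGO vs normal lung).
pixels = np.concatenate([rng.normal(-800, 40, 600),
                         rng.normal(-650, 60, 300)]).reshape(-1, 1)

Q = 10  # number of bins

# LB: equal-width bins over the observed range.
edges = np.linspace(pixels.min(), pixels.max(), Q + 1)
lb_codes = np.digitize(pixels.ravel(), edges[1:-1])

# NLB: K-means cluster labels act as adaptive bin assignments.
km = KMeans(n_clusters=Q, n_init=10, random_state=0).fit(pixels)
nlb_codes = km.labels_

# Bin occupancies: NLB tracks the density, LB leaves near-empty bins.
print("LB  occupancy:", np.bincount(lb_codes, minlength=Q))
print("NLB occupancy:", np.bincount(nlb_codes, minlength=Q))
```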
NASA Astrophysics Data System (ADS)
Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang
2010-06-01
According to different processes of tides and peak-valley electricity prices, this paper determines the optimal start-up time for a pumping station's 24-hour operation in the rated state and the blade-angle-adjusting state, based on the optimization objective function and optimization model for a single pump unit's 24-hour operation, taking JiangDu No. 4 Pumping Station as an example. The paper proposes the following regularities between the optimal start-up time of the pumping station and the daily processes of tides and peak-valley electricity prices within a month: (1) In the rated and blade-angle-adjusting states, the optimal start-up time for the pumping station's 24-hour operation depends on the tide generation of the same day and varies with the process of the tides. There are mainly two kinds of optimal start-up times: the time of tide generation and 12 hours after it. (2) In the rated state, the optimal start-up time on each day of a month exhibits a rule of symmetry from the 29th of one lunar month to the 28th of the next. The time of tide generation usually falls in the period of peak or valley electricity price. A higher electricity price corresponds to a higher minimum unit cost of water pumping, which means that the minimum unit cost of water pumping depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the blade-angle-adjusting state, the minimum unit cost of water pumping in the pumping station's 24-hour operation depends on the process of peak-valley electricity prices, and 4.85%~5.37% of the minimum unit cost of water pumping is saved compared with the rated state.
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and to achieve an acceptable value for the probability of false (POF) calls, while keeping the flaw sizes in the set as small as possible.
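The binomial arithmetic behind the 29-flaw point-estimate demonstration is compact enough to show directly. The sketch below computes the probability of passing the demonstration (PPD) as a function of the true POD using scipy; the numbers illustrate why PPD is worth optimizing, since a system whose true POD is exactly 0.90 passes a clean 29/29 demonstration only about 5% of the time.

```python
# Binomial arithmetic of the 29-flaw demonstration: if all 29 flaws must be
# detected, a true POD of 0.90 yields that outcome with probability
# 0.90**29 ~ 0.047, i.e. 90% POD is demonstrated at ~95% confidence.
from scipy.stats import binom

n, allowed_misses = 29, 0

def ppd(true_pod):
    """Probability of passing: at most `allowed_misses` failures in n trials."""
    return binom.cdf(allowed_misses, n, 1.0 - true_pod)

for pod in (0.90, 0.95, 0.98, 0.995):
    print(f"true POD {pod:0.3f}: PPD = {ppd(pod):0.3f}")

# Confidence demonstrated by a clean 29/29 run against the 0.90 POD floor:
print("confidence =", 1 - 0.90**29)  # ~ 0.953
```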
Fuzzy control based engine sizing optimization for a fuel cell/battery hybrid mini-bus
NASA Astrophysics Data System (ADS)
Kim, Minjin; Sohn, Young-Jun; Lee, Won-Yong; Kim, Chang-Soo
The fuel cell/battery hybrid vehicle has attracted attention as an alternative to the existing internal-combustion engine due to the following advantages of the fuel cell and the battery: firstly, the fuel cell is highly efficient and eco-friendly; secondly, the battery has a fast response to changing power demand. However, competitive efficiency is necessary for the hybrid fuel cell vehicle to successfully replace conventional vehicles. The most relevant factor affecting the overall efficiency of the hybrid fuel cell vehicle is the relative engine sizing between the fuel cell and the battery. Therefore, a design method to optimize the engine sizing of the fuel cell hybrid vehicle is proposed. The target system is a fuel cell/battery hybrid mini-bus whose power distribution is controlled based on fuzzy logic. The optimal engine sizes are determined based on the simulator developed in this paper, which includes models for the fuel cell, the battery, and the major balance of plant. After the engine sizing, the system efficiency and the stability of the power distribution are verified on a well-known driving schedule. Consequently, the optimally designed mini-bus shows good performance.
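To make the fuzzy power-distribution idea concrete, here is a minimal two-input, one-output fuzzy rule sketch in plain numpy. The membership functions, rules, and consequents are invented placeholders, not the authors' controller; it maps battery state of charge and normalized power demand to the fuel cell's share of the load.

```python
# Minimal fuzzy power-split sketch: inputs are battery SOC and normalized
# power demand; output is the fuel cell share of demand. Membership
# functions and rules are invented placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fc_share(soc, demand_frac):
    # Fuzzify the two inputs (both normalized to [0, 1]); shoulders are
    # handled by letting the triangles extend past the input range.
    soc_low  = tri(soc, -0.5, 0.0, 0.5)
    soc_high = tri(soc,  0.5, 1.0, 1.5)
    dem_low  = tri(demand_frac, -0.6, 0.0, 0.6)
    dem_high = tri(demand_frac,  0.4, 1.0, 1.6)

    # Rule base (firing strength -> crisp consequent):
    #  R1: SOC low                 -> fuel cell carries most load (0.9)
    #  R2: SOC high & demand high  -> battery assists             (0.4)
    #  R3: SOC high & demand low   -> fuel cell alone             (1.0)
    w = np.array([soc_low, min(soc_high, dem_high), min(soc_high, dem_low)])
    z = np.array([0.9, 0.4, 1.0])
    return float(w @ z / (w.sum() + 1e-9))  # weighted-average defuzzification

print(fc_share(soc=0.3, demand_frac=0.8))  # low SOC -> share ~0.9
print(fc_share(soc=0.9, demand_frac=0.9))  # high SOC at peak -> share ~0.4
```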
Adoption and supply of a distributed energy technology
NASA Astrophysics Data System (ADS)
Strachan, Neil Douglas
2000-12-01
Technical and economic developments in distributed generation (DG) represent an opportunity for a radically different energy market paradigm, and potentially significant cuts in global carbon emissions. This thesis investigates DG along two interrelated themes: (1) early adoption and supply of the DG technology of internal combustion (IC) engine cogeneration; (2) private and social cost implications of DG for private investors and within an energy system. IC engine cogeneration of both power and heat has been a remarkable success in the Netherlands, with over 5,000 installations and 1,500 MWe of installed capacity by 1997. However, the technology has struggled in the UK, with an installed capacity of 110 MWe, fulfilling only 10% of its large estimated potential. An investment simulation model of DG investments in the UK and Netherlands was used, together with analysis of site-level data on all DG adoptions from 1985 through 1997. In the UK over 60% of the early installations were sized too small (<140 kWe) to be economically attractive (suppliers made their money with maintenance contracts). In the Netherlands, most facilities were sized well above the economic size threshold of 100 kWe (lower due to reduced operating and grid connection costs). Institutional players were key in improved sizing of DG. Aided by energy market and CO2 reduction regulatory policy, Dutch distribution utilities played a proactive role in DG. This involved joint ventures with engine cogen suppliers and users, offering improved electricity buy-back tariffs and lower connection costs. This has allowed flexible operation of distributed generation, especially in electricity sales to the grid. Larger units can be sized for on-site heat requirements, with electricity export providing revenue and aiding in management of energy networks. A comparison of internal and external costs of three distributed and three centralized generation technologies over a range of heat-to-power ratios (HPR) was made. Micro-turbines were found to be the lowest-cost technology, especially at higher heat loads. Engines are also very competitive provided their NOx and CO emissions are controlled. A cost optimization program was used to develop an optimal green-field supply mix for Florida and New York. (Abstract shortened by UMI.)
Kollipara, Sivacharan; Bende, Girish; Movva, Snehalatha; Saha, Ranendra
2010-11-01
Polymeric carrier systems of paclitaxel (PCT) offer advantages over the only available formulation, Taxol®, in terms of enhancing therapeutic efficacy and eliminating adverse effects. The objective of the present study was to prepare poly(lactic-co-glycolic acid) nanoparticles containing PCT using an emulsion solvent evaporation technique. Critical factors involved in the processing method were identified and optimized by an efficient rotatable central composite design aiming at low mean particle size and high entrapment efficiency. Twenty different experiments were designed and each formulation was evaluated for mean particle size and entrapment efficiency. The optimized formulation was evaluated for in vitro drug release, and absorption characteristics were studied using an in situ rat intestinal permeability study. The amount of polymer and the duration of ultrasonication were found to have significant effects on mean particle size and entrapment efficiency. First-order interactions of the amount of miglyol with the amount of polymer were significant in the case of mean particle size, whereas second-order interactions of polymer were significant for both mean particle size and entrapment efficiency. The developed quadratic model showed high correlation (R² > 0.85) between predicted response and studied factors. The optimized formulation had a low mean particle size (231.68 nm) and high entrapment efficiency (95.18%) with 4.88% drug content. The optimized formulation showed controlled release of PCT for more than 72 hours. The in situ absorption study showed faster and more extensive absorption of PCT from nanoparticles compared to the pure drug. The poly(lactic-co-glycolic acid) nanoparticles containing PCT may be of clinical importance in enhancing its oral bioavailability.
Optimal Padding for the Two-Dimensional Fast Fourier Transform
NASA Technical Reports Server (NTRS)
Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.
2011-01-01
One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is divisible by a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that struck a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations), and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times, and is not fine-tuned for any specific application. It increases the number of times that processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying sized grids. This is because various computer architectures process commands differently. The test grid was 512×512. Using a 540×540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256×256 grid worked best. A Core2Duo computer preferred either a 1040×1040 (15 percent faster) or a 1008×1008 (30 percent faster) grid. There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
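The pad-size search can be illustrated with a small smoothness filter in the spirit of the algorithm described (scipy's fft module exposes a similar helper, next_fast_len). The sketch below lists pad candidates at or above a given grid size whose prime factors are all small; note that 540, the size the abstract reports as 30 percent faster than 512 on one platform, is among the first smooth candidates above 512.

```python
# List candidate pad sizes >= n whose prime factorizations contain only small
# primes (here 2, 3, 5, 7), the property that makes 1-D FFT stages fast; the
# cache-behavior trade-offs described above are not modeled here.
def smooth(n, primes=(2, 3, 5, 7)):
    """True if n has no prime factor larger than max(primes)."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def next_fast_sizes(n, count=5):
    """First `count` sizes >= n that are smooth, as pad candidates."""
    out, k = [], n
    while len(out) < count:
        if smooth(k):
            out.append(k)
        k += 1
    return out

print(next_fast_sizes(512))  # [512, 525, 540, 560, 567]: 512 = 2**9 itself
print(next_fast_sizes(513))  # candidates strictly above the power of two
```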
Hlushak, Stepan
2018-01-03
Temperature, pressure and pore-size dependences of the heat of adsorption, adsorption stress, and adsorption capacity of methane in simple models of slit and cylindrical carbon pores are studied using classical density functional theory (CDFT) and grand-canonical Monte-Carlo (MC) simulation. Studied properties depend nontrivially on the bulk pressure and the size of the pores. Heat of adsorption increases with loading, but only for sufficiently narrow pores. While the increase is advantageous for gas storage applications, it is less significant for cylindrical pores than for slits. Adsorption stress and the average adsorbed fluid density show oscillatory dependence on the pore size and increase with bulk pressure. Slit pores exhibit larger amplitude of oscillations of the normal adsorption stress with pore size increase than cylindrical pores. However, the increase of the magnitude of the adsorption stress with bulk pressure increase is more significant for cylindrical than for slit pores. Adsorption stress appears to be negative for a wide range of pore sizes and external conditions. The pore size dependence of the average delivered density of the gas is analyzed and the optimal pore sizes for storage applications are estimated. The optimal width of slit pore appears to be almost independent of storage pressure at room temperature and pressures above 10 bar. Similarly to the case of slit pores, the optimal radius of cylindrical pores does not exhibit much dependence on the storage pressure above 15 bar. Both optimal width and optimal radii of slit and cylindrical pores increase as the temperature decreases. A comparison of the results of CDFT theory and MC simulations reveals subtle but important differences in the underlying fluid models employed by the approaches. The differences in the high-pressure behaviour between the hard-sphere 2-Yukawa and Lennard-Jones models of methane, employed by the CDFT and MC approaches, respectively, result in an overestimation of the heat of adsorption by the CDFT theory at higher loadings. However, both adsorption stress and adsorption capacity appear to be much less sensitive to the differences between the models and demonstrate excellent agreement between the theory and the computer experiment.
24 CFR 882.509 - Overcrowded and under occupied units.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Family size, or that a Contract unit is larger than appropriate for the size of the Family in occupancy... Family a suitable alternative unit should one be available and the Family will be required to move. If the Owner does not have a suitable available unit, the PHA must assist the Family in locating other...
NASA Astrophysics Data System (ADS)
San-José, Luis A.; Sicilia, Joaquín; González-de-la-Rosa, Manuel; Febles-Acosta, Jaime
2018-07-01
In this article, a deterministic inventory model with a ramp-type demand depending on price and time is developed. The cumulative holding cost is assumed to be a nonlinear function of time. Shortages are allowed and are partially backlogged. Thus, the fraction of backlogged demand depends on the waiting time and on the stock-out period. The aim is to maximize the total profit per unit time. To do this, a procedure that determines the economic lot size, the optimal inventory cycle and the maximum profit is presented. The inventory system studied here extends diverse inventory models proposed in the literature. Finally, some numerical examples are provided to illustrate the theoretical results previously propounded.
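A stripped-down version of the cycle-length optimization conveys the structure of such models. The sketch below maximizes profit per unit time over the inventory cycle T for a constant-demand, no-shortage special case with a power-law (nonlinear-in-time) holding cost; all parameters are invented, and the paper's ramp-type, price- and time-dependent demand with partial backlogging would replace the closed-form pieces.

```python
# Toy profit-per-unit-time maximization over the inventory cycle length T.
# Constant demand and no shortages stand in for the richer model above;
# holding cost h * t**beta per unit held at age t is nonlinear in time.
from scipy.optimize import minimize_scalar

D = 100.0            # demand rate (units/time)
p, c = 12.0, 7.0     # unit selling price and purchase cost
K = 50.0             # fixed ordering cost per cycle
h, beta = 0.6, 1.3   # holding cost parameters

def profit_rate(T):
    # Holding cost per cycle: integral of h*t**beta times on-hand inventory
    # D*(T - t) over [0, T], which has the closed form below.
    hold = h * D * T**(beta + 2) * (1 / (beta + 1) - 1 / (beta + 2))
    return ((p - c) * D * T - K - hold) / T

res = minimize_scalar(lambda T: -profit_rate(T), bounds=(0.01, 10.0),
                      method="bounded")
print(f"optimal cycle T* = {res.x:.3f}, max profit rate = {-res.fun:.2f}")
```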
Bryson, Ethan O; Aloysi, Amy S; Farber, Kate G; Kellner, Charles H
2017-06-01
Electroconvulsive therapy (ECT) remains an indispensable treatment for severe psychiatric illness. It is practiced extensively in the United States and around the world, yet there is little guidance for anesthesiologists involved with this common practice. Communication between the anesthesiologist and the proceduralist is particularly important for ECT, because the choice of anesthetic and management of physiologic sequelae of the therapeutic seizure can directly impact both the efficacy and safety of the treatment. In this review, we examine the literature on anesthetic management for ECT. A casual or "one-size-fits-all" approach may lead to less-than-optimal outcomes; customizing the anesthetic management for each patient is essential and can significantly increase treatment success rate and patient satisfaction.
Programmable DNA-Mediated Multitasking Processor.
Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin
2015-04-30
Because of DNA's appealing features as a perfect material, including minuscule size, defined structural repeat, and rigidity, programmable DNA-mediated processing is a promising computing paradigm that employs DNA as an information-storing and -processing substrate to tackle computational problems. The massive parallelism of DNA hybridization exhibits transcendent potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with their stepwise signal cascade. As an example of this multitasking capability, we present an in vitro programmable DNA-mediated optimal route planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as conducting massive data storage and simultaneous processing via much less material than conventional silicon devices.
Modeling snail breeding in a bioregenerative life support system
NASA Astrophysics Data System (ADS)
Kovalev, V. S.; Manukovsky, N. S.; Tikhomirov, A. A.; Kolmakova, A. A.
2015-07-01
The discrete-time model of snail breeding consists of two sequentially linked submodels: "Stoichiometry" and "Population". In both submodels, a snail population is split up into twelve age groups within one year of age. The first submodel is used to simulate the metabolism of a single snail in each age group via the stoichiometric equation; the second submodel is used to optimize the age structure and the size of the snail population. Daily intake of snail meat by crewmen is a guideline which specifies the population productivity. The mass exchange of the snail unit inhabited by land snails of Achatina fulica is given as an outcome of step-by-step modeling. All simulations are performed using Solver Add-In of Excel 2007.
Tchernev, Georgi; Pidakev, Ivan
2018-05-31
We report on a 70-year-old cachectic patient (165 cm, 45 kg) who was admitted for the first time to the dermatosurgical unit for surgical removal of a tumor formation that had been localized in the region of the back for more than 15 years (fig. 1a). In the last few months the lesion had increased significantly in size, causing a burning sensation, mild pain, and abundant secretion of a bloody-yellow discharge (fig. 1a). During the dermatological examination, a tumor-like formation in the left scapular region was visualized.
An Optimization Framework for Driver Feedback Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malikopoulos, Andreas; Aguilar, Juan P.
2013-01-01
Modern vehicles have sophisticated electronic control units that can control engine operation with discretion to balance fuel economy, emissions, and power. These control units are designed for specific driving conditions (e.g., different speed profiles for highway and city driving). However, individual driving styles are different and rarely match the specific driving conditions for which the units were designed. In the research reported here, we investigate driving-style factors that have a major impact on fuel economy and construct an optimization framework to optimize individual driving styles with respect to these driving factors. In this context, we construct a set of polynomial metamodels to reflect the responses produced in fuel economy by changing the driving factors. Then, we compare the optimized driving styles to the original driving styles and evaluate the effectiveness of the optimization framework. Finally, we use this proposed framework to develop a real-time feedback system, including visual instructions, to enable drivers to alter their driving styles in response to actual driving conditions to improve fuel efficiency.
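The metamodel-plus-optimization loop is easy to prototype. Below is a minimal sketch assuming two hypothetical driving-style factors and a quadratic response surface fitted by least squares, with a coarse grid search standing in for the framework's optimizer; none of this is the authors' code or data.

```python
# Toy polynomial metamodel of fuel economy as a function of two driving-style
# factors (mean acceleration, cruise speed); data and quadratic form are
# invented placeholders for the framework described above.
import numpy as np

rng = np.random.default_rng(5)
n = 200
accel = rng.uniform(0.5, 3.0, n)   # m/s^2, aggressiveness proxy
speed = rng.uniform(50, 120, n)    # km/h, cruise speed
mpg = 45 - 3.0 * accel - 0.004 * (speed - 80) ** 2 + rng.normal(0, 0.8, n)

# Quadratic response surface: [1, a, s, a^2, s^2, a*s] -> least-squares fit.
X = np.column_stack([np.ones(n), accel, speed,
                     accel**2, speed**2, accel * speed])
coef, *_ = np.linalg.lstsq(X, mpg, rcond=None)

# Optimize driving style over the metamodel with a coarse grid search.
A, S = np.meshgrid(np.linspace(0.5, 3.0, 60), np.linspace(50, 120, 60))
Xg = np.column_stack([np.ones(A.size), A.ravel(), S.ravel(),
                      A.ravel()**2, S.ravel()**2, (A * S).ravel()])
pred = Xg @ coef
i = np.argmax(pred)
print(f"best style: accel={A.ravel()[i]:.2f} m/s^2, "
      f"speed={S.ravel()[i]:.0f} km/h, predicted {pred[i]:.1f} mpg")
```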
Optimization of hydraulic turbine governor parameters based on WPA
NASA Astrophysics Data System (ADS)
Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao
2018-01-01
The parameters of the hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, and thus the regulation capacity and power quality of the grid. The governor of a conventional hydropower unit is mainly a PID governor with three adjustable parameters, which are difficult to tune. In order to optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, given the good global optimization capability of WPA. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and suppresses overshoot.
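Since the wolf pack algorithm is not available in common scientific libraries, the sketch below substitutes scipy's differential evolution as the global optimizer and a toy two-lag plant for the hydraulic turbine model; the structure (simulate a step response, score it with ITAE, search the PID gain box) is the same kind of tuning loop the paper describes.

```python
# PID tuning by a global optimizer: differential evolution stands in for WPA,
# and two cascaded first-order lags stand in for the hydraulic turbine model.
# Cost = ITAE (integral of time times absolute error) of the step response.
import numpy as np
from scipy.optimize import differential_evolution

DT, STEPS = 0.01, 2000            # 20 s of simulated step response

def itae(gains):
    kp, ki, kd = gains
    x1 = x2 = 0.0                 # plant states (tau = 0.5 s and 0.2 s lags)
    integ, prev_err, cost = 0.0, 0.0, 0.0
    for k in range(STEPS):
        err = 1.0 - x2
        if not np.isfinite(err):  # penalize unstable gain sets
            return 1e9
        integ += err * DT
        deriv = (err - prev_err) / DT
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        x1 += DT * (u - x1) / 0.5
        x2 += DT * (x1 - x2) / 0.2
        cost += (k * DT) * abs(err) * DT
    return cost

res = differential_evolution(itae, bounds=[(0, 20), (0, 20), (0, 2)],
                             seed=0, maxiter=50, tol=1e-6)
print("kp, ki, kd =", res.x.round(3), " ITAE =", round(res.fun, 4))
```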
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-10
... time the species was listed. The occurrence at Watkins Savannah (O'Berry Tract C) (EO 5.19) was found... are: (1) Unit 1: Watkins Savanna, (2) Unit 2: Haws Run Mitigation Site, (3) Unit 3: Maple Hill School... Type Size of Unit Acres Size of Unit Hectares 1 A Watkins Savanna, NCDPR 1.2 0.5 O'Berry, Tract A 1 B...
Packing Optimization of Sorbent Bed Containing Dissimilar and Irregular Shaped Media
NASA Technical Reports Server (NTRS)
Holland, Nathan; Guttromson, Jayleen; Piowaty, Hailey
2011-01-01
The Fire Cartridge is a packed bed air filter with two different and separate layers of media designed to provide respiratory protection from combustion products after a fire event on the International Space Station (ISS). The first layer of media is a carbon monoxide catalyst and the second layer of media is universal carbon. During development of Fire Cartridge prototypes, the two media beds were noticed to have shifted inside the cartridge. The movement of media within the cartridge can cause mixing of the bed layers, air voids, and channeling, which could cause preferential air flow and allow contaminants to pass through without removal. An optimally packed bed mitigates these risks and ensures effective removal of contaminants from the air. In order to optimally pack each layer, vertical, horizontal, and orbital agitations were investigated and a packed bulk density was calculated for each method. Packed bulk density must be calculated for each media type to accommodate variations in particle size, shape, and density. Additionally, the optimal vibration parameters must be re-evaluated for each batch of media due to variations in particle size distribution between batches. For this application it was determined that orbital vibrations achieve an optimal pack density and the two media layers can be packed by the same method. Another finding was media with a larger size distribution of particles achieve an optimal bed pack easier than media with a smaller size distribution of particles.
Tailored Magnetic Nanoparticles for Optimizing Magnetic Fluid Hyperthermia
Khandhar, Amit; Ferguson, R. Matthew; Simon, Julian A.; Krishnan, Kannan M.
2011-01-01
Magnetic Fluid Hyperthermia (MFH) is a promising approach towards adjuvant cancer therapy that is based on the localized heating of tumors using the relaxation losses of iron oxide magnetic nanoparticles (MNPs) in alternating magnetic fields (AMF). In this study, we demonstrate optimization of MFH by tailoring MNP size to an applied AMF frequency. Unlike conventional aqueous synthesis routes, we use organic synthesis routes that offer precise control over MNP size (diameter ~ 10–25 nm), size distribution and phase purity. Furthermore, the particles are successfully transferred to the aqueous phase using a biocompatible amphiphilic polymer, and demonstrate long-term shelf life. A rigorous characterization protocol ensures that the water-stable MNPs meet all the critical requirements: (1) uniform shape and monodispersity, (2) phase purity, (3) stable magnetic properties approaching that of the bulk, (4) colloidal stability, (5) substantial shelf life and (6) pose no significant in vitro toxicity. Using a dedicated hyperthermia system, we then identified that 16 nm monodisperse MNPs (σ ~ 0.175) respond optimally to our chosen AMF conditions (f = 373 kHz, Ho = 14 kA/m); however, with a broader size distribution (σ ~ 0.284) the Specific Loss Power (SLP) decreases by 30%. Finally, we show that these tailored MNPs demonstrate maximum hyperthermia efficiency by reducing viability of Jurkat cells in vitro, suggesting our optimization translates truthfully to cell populations. In summary, we present a way to intrinsically optimize MFH by tailoring the MNPs to any applied AMF, a required precursor to optimize dose and time of treatment. PMID:22213652
The Sizing and Optimization Language (SOL): Computer language for design problems
NASA Technical Reports Server (NTRS)
Lucas, Stephen H.; Scotti, Stephen J.
1988-01-01
The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.
All plastic ultra-small size imaging lens unit fabrication and evaluation for endoscope
NASA Astrophysics Data System (ADS)
Ishii, Kenta; Okamoto, Dai; Ushio, Makoto; Tai, Hidetoshi; Nishihara, Atsuhiko; Tokuda, Kimio; Kawai, Shinsuke; Kitagawa, Seiichiro
2017-02-01
There is demand for small-size lens units for endoscope and industrial applications. Polished glass lenses with a diameter of 1-2 mm exist; however, plastic lenses of similar size are not commonplace. For low cost, light weight, and mass production, plastic lens fabrication is extremely beneficial. Especially in the medical field, there is strong demand for disposable lens units for endoscopes, which prevent contamination due to reuse of the lens; therefore, high mass-producibility and low cost become increasingly important. This paper reports our findings on injection-molded, ultra-small plastic lens units with a diameter of 1.3 mm and total thickness of 1.4 mm. We performed optical design, injection molding, and lens unit assembly for injection-moldable, high-imaging-performance, ultra-small lens units. We prioritized a robust product design, considering injection molding properties and lens unit assembly, with feedback from molding simulations reflected in the optical design. A mold capable of high-precision lens positioning was used to fabricate the lenses and decrease the variability of the assembly. The geometric dimensions of the resulting lenses were measured and used in an optical simulation to validate the optical performance, and high agreement is reported. The injection molding of the lens and the assembly of the lens unit were performed with high precision, resulting in high optical performance.
Marseille, Elliot; Dandona, Lalit; Marshall, Nell; Gaist, Paul; Bautista-Arredondo, Sergio; Rollins, Brandi; Bertozzi, Stefano M; Coovadia, Jerry; Saba, Joseph; Lioznov, Dmitry; Du Plessis, Jo-Ann; Krupitsky, Evgeny; Stanley, Nicci; Over, Mead; Peryshkina, Alena; Kumar, S G Prem; Muyingo, Sowedi; Pitter, Christian; Lundberg, Mattias; Kahn, James G
2007-07-12
Economic theory and limited empirical data suggest that costs per unit of HIV prevention program output (unit costs) will initially decrease as small programs expand. Unit costs may then reach a nadir and start to increase if expansion continues beyond the economically optimal size. Information on the relationship between scale and unit costs is critical to project the cost of global HIV prevention efforts and to allocate prevention resources efficiently. The "Prevent AIDS: Network for Cost-Effectiveness Analysis" (PANCEA) project collected 2003 and 2004 cost and output data from 206 HIV prevention programs of six types in five countries. The association between scale and efficiency for each intervention type was examined for each country. Our team characterized the direction, shape, and strength of this association by fitting bivariate regression lines to scatter plots of output levels and unit costs. We chose the regression forms with the highest explanatory power (R²). Efficiency increased with scale, across all countries and interventions. This association varied within intervention and within country, in terms of the range in scale and efficiency, the best-fitting regression form, and the slope of the regression. The fraction of variation in efficiency explained by scale ranged from 26% to 96%. A doubling of scale resulted in reductions in unit costs averaging 34.2% (ranging from 2.4% to 58.0%). Two regression trends, in India, suggested an inflection point beyond which unit costs increased. Unit costs decrease with scale across a wide range of service types and volumes. These country- and intervention-specific findings can inform projections of the global cost of scaling up HIV prevention efforts.
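The paper's regression approach can be illustrated with a small, hedged sketch: fit a power law unit_cost = a·scale^b on log-log axes and convert the fitted elasticity b into the unit-cost change implied by a doubling of scale, 2^b − 1. The data points below are invented for illustration, not PANCEA data.

```python
# Illustrative sketch (made-up data): fit unit_cost = a * scale^b on log-log axes,
# as in PANCEA's bivariate regressions, then express the effect of doubling scale.
import numpy as np

scale = np.array([100, 250, 600, 1500, 4000, 9000])        # clients served (hypothetical)
unit_cost = np.array([42.0, 30.5, 22.0, 16.8, 12.1, 9.4])  # USD per client (hypothetical)

b, log_a = np.polyfit(np.log(scale), np.log(unit_cost), 1)
pct_change_on_doubling = (2.0 ** b - 1.0) * 100.0
print(f"scale elasticity b = {b:.2f}; "
      f"doubling scale changes unit cost by {pct_change_on_doubling:.1f}%")
```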
Importance of Unit Cells in Accurate Evaluation of the Characteristics of Graphene
NASA Astrophysics Data System (ADS)
Sabzyan, Hassan; Sadeghpour, Narges
2016-04-01
Effects of the size of the unit cell on the energy, atomic charges, and phonon frequencies of graphene at the Γ point of the Brillouin zone are studied in the absence and presence of an electric field using density functional theory (DFT) methods (LDA and DFT-PBE functionals with Goedecker-Teter-Hutter (GTH) and Troullier-Martins (TM) norm-conserving pseudopotentials). Two types of unit cells containing nC = 4-28 carbon atoms are considered. Results show that the stability of graphene increases with increasing size of the unit cell. Energy, atomic charges, and phonon frequencies all converge above nC = 24 for all functional-pseudopotential combinations used. Except for the LDA-GTH calculations, application of electric fields of 0.4 and 0.9 V/nm does not change the trends with the size of the unit cell but slightly decreases the binding energy of graphene. Results of this study show that the choice of unit cell size and type is critical for the calculation of reliable characteristics of graphene.
Design of a modulated orthovoltage stereotactic radiosurgery system.
Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S
2017-07-01
To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
Achieving optimal growth: lessons from simple metabolic modules
NASA Astrophysics Data System (ADS)
Goyal, Sidhartha; Chen, Thomas; Wingreen, Ned
2009-03-01
Metabolism is a universal property of living organisms. While the metabolic network itself has been well characterized, the logic of its regulation remains largely mysterious. Recent work has shown that growth rates of microorganisms, including the bacterium Escherichia coli, correlate well with optimal growth rates predicted by flux-balance analysis (FBA), a constraint-based computational method. How difficult is it for cells to achieve optimal growth? Our analysis of representative metabolic modules drawn from real metabolism shows that, in all cases, simple feedback inhibition allows nearly optimal growth. Indeed, product-feedback inhibition is found in every biosynthetic pathway and constitutes about 80% of metabolic regulation. However, we find that product-feedback systems designed to approach optimal growth necessarily produce large pool sizes of metabolites, with potentially detrimental effects on cells via toxicity and osmotic imbalance. Interestingly, the sizes of metabolite pools can be strongly restricted if the feedback inhibition is ultrasensitive (i.e. with high Hill coefficient). The need for ultrasensitive mechanisms to limit pool sizes may therefore explain some of the ubiquitous, puzzling complexity found in metabolic feedback regulation at both the transcriptional and post-transcriptional levels.
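A toy calculation makes the pool-size point concrete. If synthesis follows Hill-type product-feedback inhibition, v(P) = Vmax / (1 + (P/K)^n), and is balanced against a fixed demand d, the steady-state pool is P* = K((Vmax/d) − 1)^(1/n), which approaches the inhibition constant K as the Hill coefficient n grows. The sketch below, with assumed parameter values, shows this numerically.

```python
# Minimal sketch (assumed parameter values): steady-state metabolite pool under
# product-feedback inhibition v(P) = Vmax / (1 + (P/K)^n), balanced against a
# fixed demand d. Larger Hill coefficients n pin the pool close to K.
Vmax, K, d = 10.0, 1.0, 2.0   # hypothetical rates and inhibition constant

for n in (1, 2, 4, 8):
    p_star = K * ((Vmax / d) - 1.0) ** (1.0 / n)   # solves Vmax/(1+(P/K)^n) = d
    print(f"n = {n}: steady-state pool P* = {p_star:.2f} (in units of K)")
```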
NASA Astrophysics Data System (ADS)
Sohn, Jung Woo; Jeon, Juncheol; Nguyen, Quoc Hung; Choi, Seung-Bok
2015-08-01
In this paper, a disc-type magneto-rheological (MR) brake is designed for a mid-sized motorcycle and its performance is experimentally evaluated. The proposed MR brake consists of an outer housing, a rotating disc immersed in MR fluid, and a copper wire coiled around a bobbin to generate a magnetic field. The structural configuration of the MR brake is first presented with consideration of the installation space for the conventional hydraulic brake of a mid-sized motorcycle. The design parameters of the proposed MR brake are optimized to satisfy design requirements such as the braking torque, total mass of the MR brake, and cruising temperature caused by the magnetic-field friction of the MR fluid. In the optimization procedure, the braking torque is calculated based on the Herschel-Bulkley rheological model, which predicts MR fluid behavior well at high shear rates. An optimization tool based on finite element analysis is used to obtain the optimized dimensions of the MR brake. After manufacturing the MR brake, its mechanical performance in terms of response time, braking torque and cruising temperature is experimentally evaluated.
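To make the torque calculation concrete, the sketch below integrates the Herschel-Bulkley shear stress τ(r) = τy(H) + K(rω/g)^n over an annular disc face, T = 2π∫ r²τ(r) dr per face. All geometry and fluid constants are assumed for illustration, not taken from the paper.

```python
# Rough sketch (assumed geometry and fluid constants): braking torque of a disc
# MR brake from the Herschel-Bulkley model tau = tau_y(H) + K * gamma_dot**n,
# with shear rate gamma_dot = r*omega/gap, integrated over the disc face.
import numpy as np

tau_y, K, n = 40e3, 1.0, 0.8      # field-on yield stress [Pa], consistency, HB index
gap, omega = 1e-3, 100.0          # fluid gap [m], disc speed [rad/s]
r_i, r_o = 0.02, 0.08             # inner/outer radii of the active annulus [m]

r = np.linspace(r_i, r_o, 1000)
tau = tau_y + K * (r * omega / gap) ** n
torque_one_face = 2.0 * np.pi * np.trapz(r**2 * tau, r)
print(f"braking torque (two faces): {2 * torque_one_face:.1f} N*m")
```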
Code of Federal Regulations, 2010 CFR
2010-01-01
... GOALS AND MISSION Housing Goals § 1282.17 Affordability—Income level definitions—family size and income..., for rental housing, family size and income information for the dwelling unit is known to the... sizes:

Number of persons in family    Percentage of area median income
1                              70
2                              80
3                              90
4                              100
5 or more                      ...
NASA Astrophysics Data System (ADS)
Kumar, Amit; Dorodnikov, Maxim; Splettstößer, Thomas; Kuzyakov, Yakov; Pausch, Johanna
2017-04-01
Soil aggregation and microbial activities within the aggregates are important factors regulating soil carbon (C) turnover. A reliable and sensitive proxy for microbial activity is the activity of extracellular enzymes (EEA). In the present study, the effects of soil aggregate size on EEA were investigated under three maize plant densities (Low, Normal, and High). Bulk soil was fractionated into three aggregate size classes (large macroaggregates, >2000 µm; small macroaggregates, 2000-250 µm; microaggregates, <250 µm) by optimal-moisture sieving. Microbial biomass and the EEA of enzymes catalyzing soil organic matter (SOM) decomposition (β-1,4-glucosidase (BG), β-1,4-N-acetylglucosaminidase (NAG), L-leucine aminopeptidase (LAP) and acid phosphatase (acP)) were measured in rooted soil of maize and in soil from bare fallow. Microbial biomass C (Cmic) decreased with decreasing aggregate size class. Potential and specific EEA (per unit of Cmic) increased from macro- to microaggregates. In comparison with bare fallow soil, specific EEA of microaggregates in rooted soil was higher by up to 73%, 31%, 26%, and 92% for BG, NAG, acP and LAP, respectively. Moreover, high plant density decreased macroaggregates by 9% compared to bare fallow. Enhanced EEA in all three aggregate size classes demonstrated activation of microorganisms by roots. Strong EEA in microaggregates can be explained by the microaggregates' localization within the soil. Originally adhering to surfaces of macroaggregates, microaggregates were preferentially exposed to C substrates and nutrients, thereby promoting microbial activity.
Li, An; Guo, Shuai; Wazir, Nasrullah; Chai, Ke; Liang, Liang; Zhang, Min; Hao, Yan; Nan, Pengfei; Liu, Ruibin
2017-10-30
Two unavoidable problems in laser-induced breakdown spectroscopy are the matrix effect and the statistical fluctuation of the spectral signal, which can be partly mitigated by a suitable confinement unit. The dependence of spectral signal enhancement on relative permittivity was studied by varying the materials used to confine the plasma, including polytetrafluoroethylene (PTFE), nylon, dacron, silica gel, and nitrile-butadiene rubber (NBR), with relative permittivities of 2.2, ~3.3, 3.6, 8-13, and 15-22, respectively. We found that rings of higher relative permittivity induce stronger enhancement: they better restrict the energy dissipation of the plasma, and the electromagnetic wave reflected from the wall confines the electromagnetic field of the plasma and makes its distribution more orderly. The spectral intensities of the characteristic lines Si I 243.5 nm and Si I 263.1 nm increased approximately 2 times as the relative permittivity rose from 2.2 to ~20. The size-dependent enhancement of PTFE was further checked; the maximum gain was realized with a confinement ring 5 mm in diameter and 3 mm in height (D5mmH3mm), and the D2mmH1mm and D3mmH2mm rings also showed higher enhancement factors. Once peak shift, peak loss, and spurious peaks in the obtained spectra were properly treated in data processing, the spectral fluctuation decreased drastically for the various confinement materials of different relative permittivities, meaning that the core of the plasma is stabilized by the confinement effect. Furthermore, quantitative analysis in coal shows excellent results: the prediction fitting coefficient R² reaches 0.98 for ash and 0.99 for both volatile matter and carbon.
[History of pharmaceutical packaging in modern Japan. II--Package size of pharmaceuticals].
Hattori, Akira
2014-01-01
When planning pharmaceutical packaging, the package size of the product is important for determining the basic package concept. Initially, the sales unit for herbal medicines was weight; however, around 1868, in the early part of the Meiji era, both Japanese and Western units were in use and the sales unit was confusing. Since the Edo era, package sizes for OTC medicines had been based on weight, count, dosage or treatment period. These were devised in various ways in consideration of convenience for the consumer, but the concept was not simple. In 1887, when the first edition of the Japanese Pharmacopoeia came out, use of the metric system began to spread in Japan, and it gradually came into use for the package sizes of pharmaceutical products. In time, the number of pharmaceutical units (i.e., tablets) became the sales unit, which is easy for the purchaser to understand.
Framework for computationally efficient optimal irrigation scheduling using ant colony optimization
USDA-ARS?s Scientific Manuscript database
A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...
Zhang, Xuan; Yao, Jiahao; Liu, Bin; Yan, Jun; Lu, Lei; Li, Yi; Gao, Huajian; Li, Xiaoyan
2018-06-14
Mechanical metamaterials with three-dimensional micro- and nano-architectures exhibit unique mechanical properties, such as high specific modulus, specific strength and energy absorption. However, a conflict exists between strength and recoverability in nearly all the mechanical metamaterials reported recently, in particular the architected micro-/nanolattices, which restricts the applications of these materials in energy storage/absorption and mechanical actuation. Here, we demonstrate the fabrication of three-dimensional architected composite nanolattices that overcome the strength-recoverability trade-off. The nanolattices under study consist of polymer struts (approximately 260 nm in characteristic size) coated with a high-entropy alloy (14.2-126.1 nm in thickness), fabricated via two-photon lithography and magnetron sputtering deposition. In situ uniaxial compression inside a scanning electron microscope showed that these composite nanolattices exhibit a high specific strength of 0.027 MPa/(kg m⁻³), an ultra-high energy absorption per unit volume of 4.0 MJ/m³, and nearly complete recovery after compression at strains exceeding 50%, thus overcoming the traditional strength-recoverability trade-off. During multiple compression cycles, the composite nanolattices exhibit a high energy loss coefficient (converged value after multiple cycles) of 0.5-0.6 at compressive strains beyond 50%, surpassing the coefficients of all the micro-/nanolattices fabricated recently. Our experiments also revealed that, for a given unit cell size, composite nanolattices coated with a high-entropy alloy 14-50 nm thick have the optimal specific modulus, specific strength and energy absorption per unit volume, which is related to a transition of the dominant deformation mechanism from local buckling to brittle fracture of the struts.
DEEM, a versatile platform of FRD measurement for highly multiplexed fibre systems in astronomy
NASA Astrophysics Data System (ADS)
Yan, Yunxiang; Yan, Qi; Wang, Gang; Sun, Weimin; Luo, A.-Li; Ma, Zhenyu; Zhang, Qiong; Li, Jian; Wang, Shuqing
2018-06-01
We present DEEM, the direct energy encircling method, a new approach for characterizing the performance of fibres in most astronomical spectroscopic applications. It is a versatile platform to measure focal ratio degradation (FRD), throughput, and the point spread function. The principle of DEEM and the relation between the encircled energy and the spot size were derived and simulated based on the power distribution model (PDM). We analysed the errors of DEEM and identified the major error source, for better understanding and optimization. The validity of DEEM was confirmed by comparing its results with a conventional method, showing that DEEM is robust and accurate in both stable and complex experimental environments. Application to the integral field unit (IFU) shows that the FRD of the 50 μm core fibre falls short of the requirement that the output focal ratio be slower than 4.5. The homogeneity of throughput is acceptable, at higher than 85 per cent. The prototype first-generation IFU helped identify the imperfections and optimize the design of the next generation, based on a staggered structure with 35 μm core fibres of N.A. = 0.12, which can improve the FRD performance. The FRD dependence on wavelength and core size reveals that higher output focal ratios occur at shorter wavelengths for large-core fibres, in agreement with the prediction of the PDM; however, the dependence in the observed data is weaker than predicted.
NASA Astrophysics Data System (ADS)
Pavan Kumar Naik, S.; Bai, V. Seshu
2017-02-01
In the present work, with the aim of improving local flux pinning at the unit cell level in YBa2Cu3O7-δ (YBCO) bulk superconductors, 20 wt% of nanoscale Sm2O3 and micron-sized (Nd, Sm, Gd)2BaCuO5 secondary-phase particles were added to YBCO, which was then processed by an oxygen-controlled, preform-optimized infiltration growth process. A nano-dispersive sol casting method was employed to distribute the 30-50 nm Sm2O3 nanoparticles homogeneously, without agglomeration, in the precursor powder. Microstructural investigations on doped samples show chemical fluctuations as annular cores in the 211-phase particles. The introduction of mixed rare earth elements at the Y site resulted in compositional fluctuations in the superconducting matrix. The associated lattice-mismatch defects provided flux pinning up to large magnetic fields. The magnetic field dependence of the critical current density, Jc(H), at different temperatures revealed that the dominant pinning mechanism is caused by spatial variations of the critical temperature, due to spatial fluctuations in the matrix composition. As the number of rare earth elements in the YBCO increases, the peak field position in the scaling of the normalized pinning force density (Fp/Fp,max) shifts significantly towards higher fields. The curves of Jc(H) and Fp/Fp,max at different temperatures clearly indicate δTc pinning arising from LRE substitution at LRE′ or Ba sites.
Karam, Amanda L; McMillan, Catherine C; Lai, Yi-Chun; de Los Reyes, Francis L; Sederoff, Heike W; Grunden, Amy M; Ranjithan, Ranji S; Levis, James W; Ducoste, Joel J
2017-06-14
The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software.
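The acquisition step the protocol describes might look like the following loop in outline: sample pH, light, and temperature at a fixed interval and append timestamped rows to a CSV log. The sensor-read functions below are placeholders standing in for whatever analog-input API the chosen data acquisition unit exposes.

```python
# Schematic acquisition loop only; read_ph/read_light/read_temp stand in for
# whatever analog-input API the chosen data acquisition unit exposes.
import csv, time, random

def read_ph():    return 7.0 + random.gauss(0, 0.05)     # placeholder sensor reads
def read_light(): return 200.0 + random.gauss(0, 5.0)    # umol photons/m^2/s
def read_temp():  return 25.0 + random.gauss(0, 0.2)     # deg C

with open("pbr_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s", "pH", "light", "temp_C"])
    t0 = time.time()
    for _ in range(5):                    # shortened; a real run logs for days
        writer.writerow([round(time.time() - t0, 1),
                         round(read_ph(), 3), round(read_light(), 1),
                         round(read_temp(), 2)])
        time.sleep(1.0)                   # sampling interval
```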
Denholm, Paul; Sioshansi, Ramteen
2009-05-05
In this paper, we examine the potential advantages of co-locating wind and energy storage to increase transmission utilization and decrease transmission costs. Co-location of wind and storage decreases transmission requirements, but also decreases the economic value of energy storage compared to locating energy storage at the load. This represents a tradeoff which we examine to estimate the transmission costs required to justify moving storage from load-sited to wind-sited in three different locations in the United States. We examined compressed air energy storage (CAES) in three “wind by wire” scenarios with a variety of transmission and CAES sizes relative to a given amount of wind. In the sites and years evaluated, the optimal amount of transmission ranges from 60% to 100% of the wind farm rating, with the optimal amount of CAES equal to 0–35% of the wind farm rating, depending heavily on wind resource, value of electricity in the local market, and the cost of natural gas.
High-Efficiency Hall Thruster Discharge Power Converter
NASA Technical Reports Server (NTRS)
Jaquish, Thomas
2015-01-01
Busek Company, Inc., is designing, building, and testing a new printed circuit board converter. The new converter consists of two series or parallel boards (slices) intended to power a high-voltage Hall accelerator (HiVHAC) thruster or other similarly sized electric propulsion devices. The converter accepts 80- to 160-V input and generates 200- to 700-V isolated output while delivering continually adjustable 300-W to 3.5-kW power. Busek built and demonstrated one board that achieved nearly 94 percent efficiency the first time it was turned on, with projected efficiency exceeding 97 percent following timing software optimization. The board has a projected specific mass of 1.2 kg/kW, achieved through high-frequency switching. In Phase II, Busek optimized the converter to exceed 97 percent efficiency and built a second prototype in a form factor more appropriate for flight. This converter was then integrated with a set of upgraded existing boards for powering the magnets and the cathode. The program culminated with integration of the entire power processing unit and testing on a Busek thruster and on NASA's HiVHAC thruster.
Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste
2013-03-01
Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension-sizes reducing thread block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate speedup over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system using hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
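The core transformation such generators apply can be shown in a few lines of numpy (illustrative only, not the tool's generated CUDA): permute each operand so the contracted indices are leading, collapse them, and perform one matrix multiply, checked against einsum.

```python
# Illustration of the permute-then-GEMM pattern (not the tool's generated CUDA):
# contract C[a,b,i,j] = sum_{c,k} A[c,a,k,i] * B[c,b,k,j] by permuting both
# operands so the contracted indices (c,k) are leading, then one matrix multiply.
import numpy as np

a = b = i = j = 6; c = k = 5          # small dimension sizes, as in the paper
A = np.random.rand(c, a, k, i)
B = np.random.rand(c, b, k, j)

A2 = A.transpose(0, 2, 1, 3).reshape(c * k, a * i)   # (ck) x (ai)
B2 = B.transpose(0, 2, 1, 3).reshape(c * k, b * j)   # (ck) x (bj)
C = (A2.T @ B2).reshape(a, i, b, j).transpose(0, 2, 1, 3)

ref = np.einsum("caki,cbkj->abij", A, B)             # reference contraction
print(np.allclose(C, ref))                            # True
```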
Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction
Agulleiro, Jose-Ignacio; Fernández, José Jesús
2012-01-01
Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768
NASA Astrophysics Data System (ADS)
Roberson, Nicole; Denmark, Daniel; Witanachchi, Sarath
Hybrid drug delivery systems composed of thermoresponsive polymers and magnetic nanoparticles have been developed using chemical methods to deliver controlled amounts of a biotherapeutic to target tissue. These methods can be expensive, time intensive, and produce impure composites due to the use of surfactants during polymer synthesis. In this study, UV aerosol photopolymerization of N-isopropylacrylamide (NIPAM) monomer, N,N-methylenebisacrylamide (MBA) crosslinker, and Irgacure 2959 photoinitiator is used to synthesize the transporting microcapsule for drug delivery. UV aerosol photopolymerization allows the continuous, cost-effective, and time-efficient synthesis of a high concentration of pure polymers; toxic surfactants are not necessary. NIPAM monomer, MBA crosslinker, and Irgacure 2959 photoinitiator concentrations were tested and analyzed to determine optimal conditions for a controlled-drug-delivery microcapsule. Scanning electron microscope (SEM) imaging reveals that synthesis of polymer microcapsules of about 30 micrometers in size is effective through UV aerosol photopolymerization. These findings will contribute greatly to the field of emergency medicine. This work was supported by the United States Army (Grant No. W81XWH1020101/3349).
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (the minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (the maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to the numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km². On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
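RWR is simple enough to show in full. In the sketch below (invented presence/absence data), each species contributes 1/(number of sites it occupies) to every site holding it, and sites are ranked by the summed score.

```python
# Minimal sketch: rarity-weighted richness (RWR) from a sites-x-species 0/1
# matrix. Each species contributes 1/(number of sites it occupies) to every
# site holding it; sites are then ranked by their summed score.
import numpy as np

occ = np.array([[1, 0, 1, 0],      # hypothetical presence/absence data,
                [1, 1, 0, 0],      # rows = sites, columns = species
                [0, 1, 1, 1],
                [1, 0, 0, 0]])

rarity = 1.0 / occ.sum(axis=0)     # weight per species
rwr = occ @ rarity                 # per-site rarity-weighted richness
priority = np.argsort(-rwr)        # sites in descending RWR order
print(rwr.round(2), priority)
```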
A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Kottalam, Jey; Yang, Jiyan
We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB size dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
Wason, James M. S.; Mander, Adrian P.
2012-01-01
Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118
[Analysis of visible extinction spectrum of particle system and selection of optimal wavelength].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-09-01
In the total light scattering particle sizing technique, the extinction spectrum of a particle system contains information about the particle size and refractive index. The visible extinction spectra of common monomodal and bimodal R-R particle size distributions were computed, and the variation in the visible extinction spectrum with particle size and refractive index was analyzed. The wavelengths at which the second-order differential extinction spectrum was discontinuous were selected as measurement wavelengths; the minimum and maximum wavelengths in the visible region were also selected. The genetic algorithm was used as the inversion method under the dependent model. Computer simulation and experiments illustrate that it is feasible to analyze the extinction spectrum and use this method of selecting optimal wavelengths in total light scattering particle sizing. A rough contour of the particle size distribution can be determined from the analysis of the visible extinction spectrum, so the search range of the particle size parameter is reduced in the optimization algorithm, and a more accurate inversion result can then be obtained using the selection method. The inversion results for monomodal and bimodal distributions remain satisfactory when 1% stochastic noise is added to the transmission extinction measurement values.
Optimal Harvesting in a Periodic Food Chain Model with Size Structures in Predators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Feng-Qin, E-mail: zhafq@263.net; Liu, Rong; Chen, Yuming, E-mail: ychen@wlu.ca
In this paper, we investigate a periodic food chain model with harvesting, where the predators have size structures and are described by first-order partial differential equations. First, we establish the existence of a unique non-negative solution by using the Banach fixed point theorem. Then, we provide optimality conditions by means of normal cone and adjoint system. Finally, we derive the existence of an optimal strategy by means of Ekeland’s variational principle. Here the objective functional represents the net economic benefit yielded from harvesting.
Temperature Scaling Law for Quantum Annealing Optimizers.
Albash, Tameem; Martin-Mayor, Victor; Hen, Itay
2017-09-15
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these to practical annealers.
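In notation not taken from the paper, the constraint can be written schematically as follows, for problem size N and some constants c, α > 0:

```latex
% Schematic form of the derived constraint (notation assumed, not the paper's):
% to remain a competitive optimizer on problems of size N, an annealer's
% operating temperature must satisfy at least
\[
  T(N) \;\lesssim\; \frac{c}{\log N}
  \qquad \text{or, more stringently,} \qquad
  T(N) \;\lesssim\; c\, N^{-\alpha}.
\]
```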
Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual
NASA Technical Reports Server (NTRS)
1972-01-01
The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.
NASA Astrophysics Data System (ADS)
Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.
2018-01-01
We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution of the Ubaye Valley (South French Alps). In so doing, we applied a recently developed algorithm automating slope unit delineation, given a number of parameters, in order to optimize simultaneously the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as inputs. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables and landslide inventory, in order to obtain the most reliable slope unit-based landslide susceptibility assessment.
Hirst, Theodore C; Ribchester, Richard R
2013-01-01
Connectomic analysis of the nervous system aims to discover and establish principles that underpin normal and abnormal neural connectivity and function. Here we performed image analysis of motor unit connectivity in the fourth deep lumbrical muscle (4DL) of mice, using transgenic expression of fluorescent protein in motor neurones as a morphological reporter. We developed a method that accelerated segmentation of confocal image projections of 4DL motor units, by applying high resolution (63×, 1.4 NA objective) imaging or deconvolution only where either proved necessary, in order to resolve axon crossings that produced ambiguities in the correct assignment of axon terminals to identified motor units imaged at lower optical resolution (40×, 1.3 NA). The 4DL muscles contained between 4 and 9 motor units and motor unit sizes ranged in distribution from 3 to 111 motor nerve terminals per unit. Several structural properties of the motor units were consistent with those reported in other muscles, including suboptimal wiring length and distribution of motor unit size. Surprisingly, however, small motor units were confined to a region of the muscle near the nerve entry point, whereas their larger counterparts were progressively more widely dispersed, suggesting a previously unrecognised form of segregated motor innervation in this muscle. We also found small but significant differences in variance of motor endplate length in motor units, which correlated weakly with their motor unit size. Thus, our connectomic analysis has revealed a pattern of concentric innervation that may perhaps also exist in other, cylindrical muscles that have not previously been thought to show segregated motor unit organisation. This organisation may be the outcome of competition during postnatal development based on intrinsic neuronal differences in synaptic size or synaptic strength that generates a territorial hierarchy in motor unit size and disposition. PMID:23940381
Jin, Junchen
2016-01-01
The shunting schedule of electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements and the constraints include track occupation conflicts, shunting routes conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm on the aspect of optimality. PMID:27436998
Development of a biorefinery optimized biofuel supply curve for the western United States
Nathan Parker; Peter Tittmann; Quinn Hart; Richard Nelson; Ken Skog; Anneliese Schmidt; Edward Gray; Bryan Jenkins
2010-01-01
A resource assessment and biorefinery siting optimization model was developed and implemented to assess potential biofuel supply across the Western United States from agricultural, forest, urban, and energy crop biomass. Spatial information including feedstock resources, existing and potential refinery locations and a transportation network model is provided to a mixed...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-02
... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,554] International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA, San Jose, CA; Notice of... determination of the TAA petition filed on behalf of workers at International Business Machines (IBM), Software...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be
2014-10-28
A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The searching space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry-out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex ternary logic operations are directly implemented on an extremely simple device, characterized by small size and low energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.
Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stander, Nielen; Basudhar, Anirban; Basu, Ushnish
2015-06-15
Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.
Costa, Filippo; Monorchio, Agostino; Manara, Giuliano
2016-01-01
A methodology to obtain wideband scattering diffusion based on periodic artificial surfaces is presented. The proposed surfaces provide scattering towards multiple propagation directions across an extremely wide frequency band. They comprise unit cells with an optimized geometry, arranged in a periodic lattice whose repetition period, larger than one wavelength, induces the excitation of multiple Floquet harmonics. The geometry of the elementary unit cell is optimized to minimize the reflection coefficient of the fundamental Floquet harmonic over a wide frequency band. The optimization of the frequency selective surface (FSS) geometry is performed through a genetic algorithm in conjunction with a periodic Method of Moments. The design method is verified through full-wave simulations and measurements. The proposed solution guarantees very good performance in terms of bandwidth-thickness ratio and removes the need for a high-resolution printing process. PMID:27181841
Optimal Distinctiveness Signals Membership Trust.
Leonardelli, Geoffrey J; Loyd, Denise Lewin
2016-07-01
According to optimal distinctiveness theory, sufficiently small minority groups are associated with greater membership trust, even among members otherwise unknown, because the groups are seen as optimally distinctive. This article elaborates on the prediction's motivational and cognitive processes and tests whether sufficiently small minorities (defined by relative size; for example, 20%) are associated with greater membership trust relative to mere minorities (45%), and whether such trust is a function of optimal distinctiveness. Two experiments, examining observers' perceptions of minority and majority groups and using minimal groups and (in Experiment 2) a trust game, revealed greater membership trust in minorities than majorities. In Experiment 2, participants also preferred joining minorities over more powerful majorities. Both effects occurred only when minorities were 20% rather than 45%. In both studies, perceptions of optimal distinctiveness mediated effects. Discussion focuses on the value of relative size and optimal distinctiveness, and when membership trust manifests. © 2016 by the Society for Personality and Social Psychology, Inc.
Load management as a smart grid concept for sizing and designing of hybrid renewable energy systems
NASA Astrophysics Data System (ADS)
Eltamaly, Ali M.; Mohamed, Mohamed A.; Al-Saud, M. S.; Alolah, Abdulrahman I.
2017-10-01
Optimal sizing of hybrid renewable energy systems (HRES) to satisfy load requirements with the highest reliability and lowest cost is a crucial step in building HRESs to supply electricity to remote areas. Applying smart grid concepts such as load management can reduce the size of HRES components and reduce the cost of generated energy considerably. In this article, sizing of HRES is carried out by dividing the load into high- and low-priority parts. The proposed system is formed by a photovoltaic array, wind turbines, batteries, fuel cells and a diesel generator as a back-up energy source. A smart particle swarm optimization (PSO) algorithm using MATLAB is introduced to determine the optimal size of the HRES. The simulation was carried out with and without division of the load to compare these concepts. HOMER software was also used to simulate the proposed system without dividing the loads to verify the results obtained from the proposed PSO algorithm. The results show that the percentage of division of the load is inversely proportional to the cost of the generated energy.
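For orientation, a bare-bones PSO for a sizing problem of this general shape is sketched below. This is not the authors' smart PSO, and the cost-plus-reliability-penalty objective over two component sizes is invented for illustration.

```python
# Bare-bones PSO sketch (not the paper's "smart" variant): search for component
# sizes x = (PV area, battery capacity) minimizing an assumed cost-plus-penalty
# objective. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                         # hypothetical cost + reliability penalty
    pv, batt = x
    unmet = max(0.0, 100.0 - (0.8 * pv + 0.5 * batt))   # crude energy balance
    return 300.0 * pv + 150.0 * batt + 1e4 * unmet

n, dim, iters = 30, 2, 200
pos = rng.uniform(0, 200, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_val = np.array([cost(p) for p in pos])
g = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, 200)
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    g = pbest[pbest_val.argmin()].copy()

print(g.round(1), cost(g))
```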
Statistical Analysis of Solar PV Power Frequency Spectrum for Optimal Employment of Building Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Sharma, Isha; Kuruganti, Teja
In this paper, a statistical analysis of the frequency spectrum of solar photovoltaic (PV) power output is conducted. This analysis quantifies the frequency content that can be used for purposes such as developing optimal employment of building loads and distributed energy resources. One year of solar PV power output data was collected and analyzed using one-second resolution to find ideal bounds and levels for the different frequency components. The annual, seasonal, and monthly statistics of the PV frequency content are computed and illustrated in boxplot format. To examine the compatibility of building loads for PV consumption, a spectral analysis of building loads such as Heating, Ventilation and Air-Conditioning (HVAC) units and water heaters was performed. This defined the bandwidth over which these devices can operate. Results show that nearly all of the PV output (about 98%) is contained within frequencies lower than 1 mHz (equivalent to ~15 min), which is compatible for consumption with local building loads such as HVAC units and water heaters. Medium frequencies in the range of ~15 min to ~1 min are likely to be suitable for consumption by fan equipment of variable air volume HVAC systems that have time constants in the range of few seconds to few minutes. This study indicates that most of the PV generation can be consumed by building loads with the help of proper control strategies, thereby reducing impact on the grid and the size of storage systems.
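The frequency-content analysis can be prototyped with a Welch periodogram; the sketch below estimates the fraction of variance below 1 mHz on synthetic 1-second PV data (the diurnal-plus-noise signal is invented, not the study's measurements).

```python
# Illustrative check (synthetic data): estimate what fraction of PV output
# variance lies below 1 mHz using a Welch periodogram on 1-second samples.
import numpy as np
from scipy.signal import welch

fs = 1.0                                   # 1 Hz sampling, as in the study
t = np.arange(0, 24 * 3600)                # one synthetic day
slow = np.sin(2 * np.pi * t / (24 * 3600)) ** 2                  # diurnal envelope
fast = 0.05 * np.random.default_rng(1).standard_normal(t.size)  # cloud-like noise
pv = np.clip(slow + fast, 0, None)

f, psd = welch(pv - pv.mean(), fs=fs, nperseg=2 ** 14)
frac_below_1mHz = psd[f <= 1e-3].sum() / psd.sum()
print(f"fraction of variance below 1 mHz: {frac_below_1mHz:.1%}")
```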
Quantitative grading of a human blastocyst: optimal inner cell mass size and shape.
Richter, K S; Harris, D C; Daneshmand, S T; Shapiro, B S
2001-12-01
To investigate the predictive value of quantitative measurements of blastocyst morphology on subsequent implantation rates after transfer. Prospective observational study. Private assisted reproductive technology center. One hundred seventy-four IVF patients receiving transfers of expanded blastocyst-stage embryos on day 5 (n = 112) or day 6 (n = 62) after oocyte retrieval. None. Blastocyst diameter, number of trophectoderm cells, inner cell mass (ICM) size, ICM shape, and implantation and pregnancy rates. Blastocyst diameter and trophectoderm cell numbers were unrelated to implantation rates. Day 5 expanded blastocysts with ICMs of >4,500 µm² implanted at a higher rate than did those with smaller ICMs (55% vs. 31%). Day 5 expanded blastocysts with slightly oval ICMs implanted at a higher rate (58%) compared with those with either rounder ICMs (7%) or more elongated ICMs (33%). Implantation rates were highest (71%) for embryos with both optimal ICM size and shape. Pregnancy rates were higher for day 5 transfers of optimally shaped ICMs compared with day 5 transfers of optimally sized ICMs. Quantitative measurements of the inner cell mass are highly indicative of blastocyst implantation potential. Blastocysts with relatively large and/or slightly oval ICMs are more likely to implant than other blastocysts.
Scalable Heuristics for Planning, Placement and Sizing of Flexible AC Transmission System Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, Vladmir; Backhaus, Scott N.; Chertkov, Michael
Aiming to relieve transmission grid congestion and to improve or extend the feasibility domain of operations, we build optimization heuristics, generalizing standard AC Optimal Power Flow (OPF), for the placement and sizing of Flexible Alternating Current Transmission System (FACTS) devices of the Series Compensation (SC) and Static VAR Compensation (SVC) type. One use of these devices is in resolving the case when the AC OPF solution does not exist because of congestion. Another application is developing a long-term investment strategy for placement and sizing of the SC and SVC devices to reduce operational cost and improve power system operation. SC and SVC devices are represented by modification of the transmission line inductances and by reactive power nodal corrections, respectively. We find one placement and sizing of FACTS devices for multiple scenarios, and optimal settings for each scenario, simultaneously. Our solution of the nonlinear and nonconvex generalized AC-OPF consists of building a convergent sequence of convex optimizations containing only linear constraints and shows good computational scaling to larger systems. The approach is illustrated on single- and multi-scenario examples of the Matpower case-30 model.
Enhancing cancer therapeutics using size-optimized magnetic fluid hyperthermia
NASA Astrophysics Data System (ADS)
Khandhar, Amit P.; Ferguson, R. Matthew; Simon, Julian A.; Krishnan, Kannan M.
2012-04-01
Magnetic fluid hyperthermia (MFH) employs heat dissipation from magnetic nanoparticles to elicit a therapeutic outcome in tumor sites, which results in either cell death (>42 °C) or damage (<42 °C) depending on the localized rise in temperature. We investigated the therapeutic effect of MFH in immortalized T lymphocyte (Jurkat) cells using monodisperse magnetite (Fe3O4) nanoparticles (MNPs) synthesized in organic solvents and subsequently transferred to aqueous phase using a biocompatible amphiphilic polymer. Monodisperse MNPs, ˜16 nm diameter, show maximum heating efficiency, or specific loss power (watts/g Fe3O4) in a 373 kHz alternating magnetic field. Our in vitro results, for 15 min of heating, show that only 40% of cells survive for a relatively low dose (490 μg Fe/ml) of these size-optimized MNPs, compared to 80% and 90% survival fraction for 12 and 13 nm MNPs at 600 μg Fe/ml. The significant decrease in cell viability due to MNP-induced hyperthermia from only size-optimized nanoparticles demonstrates the central idea of tailoring size for a specific frequency in order to intrinsically improve the therapeutic potency of MFH by optimizing both dose and time of application.
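SLP in this setting is conventionally estimated calorimetrically from the initial slope of the heating curve, SLP = (C/m_Fe3O4)·dT/dt, with C the sample heat capacity per gram and m_Fe3O4 the magnetite mass per gram of sample. The sketch below uses invented measurement values to show the arithmetic.

```python
# Standard calorimetric estimate (assumed sample values): specific loss power
# from the initial slope of the heating curve, SLP = (C / m_magnetite) * dT/dt.
import numpy as np

t = np.array([0, 10, 20, 30, 40, 50])               # s (hypothetical measurement)
T = np.array([25.0, 25.9, 26.8, 27.6, 28.5, 29.3])  # deg C
C = 4.18                                            # J/(g*K), ~water heat capacity
m_fe3o4 = 0.5e-3                                    # g of Fe3O4 per g of sample

dTdt = np.polyfit(t, T, 1)[0]                       # initial-slope fit [K/s]
slp = C * dTdt / m_fe3o4                            # W per g Fe3O4
print(f"SLP ~ {slp:.0f} W/g")
```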
Design of piezoelectric transformer for DC/DC converter with stochastic optimization method
NASA Astrophysics Data System (ADS)
Vasic, Dejan; Vido, Lionel
2016-04-01
Piezoelectric transformers have been adopted in recent years due to their many inherent advantages, such as safety, freedom from EMI problems, low housing profile, and high power density. The characteristics of piezoelectric transformers are well known when the load impedance is a pure resistor. However, when piezoelectric transformers are used in AC/DC or DC/DC converters, non-linear electronic circuits are connected before and after the transformer. Consequently, the output load is variable, and, owing to the output capacitance of the transformer, the optimal working point changes. This paper starts from modeling a piezoelectric transformer connected to a full-wave rectifier in order to discuss the design constraints and configuration of the transformer. The optimization method adopted here uses the MOPSO algorithm (Multiple Objective Particle Swarm Optimization). We start with the formulation of the objective function and constraints; the results then give different sizes of the transformer and their characteristics. In other words, this method searches for the best transformer size for the optimal-efficiency condition that is suitable for a variable load. Furthermore, size and efficiency are found to be a trade-off. This paper proposes a complete design procedure to find the minimum size of the piezoelectric transformer needed, illustrated with a given specification. The transformer derived from the proposed design procedure can guarantee both good efficiency and sufficient range for load variation.
Moghddam, Seyedeh Marziyeh Mahdavi; Ahad, Abdul; Aqil, Mohd; Imam, Syed Sarim; Sultana, Yasmin
2017-05-01
The aim of the present study was to develop and optimize topically applied nimesulide-loaded nanostructured lipid carriers. Box-Behnken experimental design was applied for optimization of nanostructured lipid carriers. The independent variables were the ratio of stearic acid:oleic acid (X1), poloxamer 188 concentration (X2) and lecithin concentration (X3), while particle size (Y1) and entrapment efficiency (Y2) were the chosen responses. Further, skin penetration study, in vitro release, confocal laser scanning microscopy and stability study were also performed. The optimized nanostructured lipid carriers of nimesulide provide reasonable particle size, flux, and entrapment efficiency. Optimized formulation (F9) with mean particle size of 214.4 ± 11 nm showed 89.4 ± 3.40% entrapment efficiency and achieved mean flux 2.66 ± 0.09 μg/cm²/h. In vitro release study showed prolonged drug release from the optimized formulation following Higuchi release kinetics with R² value of 0.984. Confocal laser scanning microscopy revealed an enhanced penetration of Rhodamine B-loaded nanostructured lipid carriers to the deeper layers of the skin. The stability study confirmed that the optimized formulation was considerably stable at refrigerator temperature as compared to room temperature. Our results concluded that nanostructured lipid carriers are an efficient carrier for topical delivery of nimesulide.
NASA Astrophysics Data System (ADS)
Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David
2017-04-01
We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control of the evolution path, suppression of departures from the truncated model subspace, and minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
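For intuition, here is a minimal single-qubit sketch of the kind of gradient-based pulse optimization this abstract describes, with a finite-difference gradient standing in for automatic differentiation. The Hamiltonians, target gate (a NOT), pulse length, and learning rate are all assumed for illustration, not taken from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
H0, Hc = 0.5 * sz, sx                     # assumed drift and control terms

def propagate(u, dt=0.1):
    """Piecewise-constant evolution: U = prod_k exp(-i (H0 + u_k Hc) dt)."""
    U = np.eye(2, dtype=complex)
    for uk in u:
        E, V = np.linalg.eigh(H0 + uk * Hc)
        U = (V * np.exp(-1j * E * dt)) @ V.conj().T @ U
    return U

def infidelity(u, target):
    return 1 - abs(np.trace(target.conj().T @ propagate(u)))**2 / 4

def grad(u, target, eps=1e-6):
    """Finite differences as a stand-in for automatic differentiation."""
    g = np.zeros_like(u)
    for k in range(len(u)):
        up, um = u.copy(), u.copy()
        up[k] += eps; um[k] -= eps
        g[k] = (infidelity(up, target) - infidelity(um, target)) / (2 * eps)
    return g

target = sx                                # assumed target gate: a NOT
u = 0.1 * np.random.default_rng(0).standard_normal(40)
for _ in range(300):                       # plain gradient descent on the pulse
    u -= 2.0 * grad(u, target)
print("final infidelity:", infidelity(u, target))
```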
Method to optimize optical switch topology for photonic network-on-chip
NASA Astrophysics Data System (ADS)
Zhou, Ting; Jia, Hao
2018-04-01
In this paper, we propose a method to optimize optical switches by substituting optical waveguide crossings for optical switching units, together with an optimization algorithm that completes the optimization automatically. The functionality of the optical switch remains unchanged under optimization. With this method, we simplify the topology of the optical switch, which means the insertion loss and power consumption of the whole optical switch can be effectively minimized. Simulation results show that the number of switching units of an optical switch based on the Spanke-Benes structure can be reduced by 16.7%, 20%, 20%, 19% and 17.9% for scales from 4 × 4 to 8 × 8, respectively. As a proof of concept, an experimental demonstration of an optimized six-port optical switch based on the Spanke-Benes structure on a silicon photonics chip is reported.
Robust optimization in lung treatment plans accounting for geometric uncertainty.
Zhang, Xin; Rong, Yi; Morrill, Steven; Fang, Jian; Narayanasamy, Ganesh; Galhardo, Edvaldo; Maraboyina, Sanjay; Croft, Christopher; Xia, Fen; Penagaricano, Jose
2018-05-01
Robust optimization generates scenario-based plans by a minimax optimization method to find the optimal scenario for the trade-off between target-coverage robustness and organ-at-risk (OAR) sparing. In this study, 20 lung cancer patients with tumors located at various anatomical regions within the lungs were selected, and robustly optimized photon treatment plans, including intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) plans, were generated. Plan robustness was analyzed using perturbed doses with setup error boundaries of ±3 mm in the anterior/posterior (AP), ±3 mm in the left/right (LR), and ±5 mm in the inferior/superior (IS) directions from the isocenter. Perturbed doses for D99, D98, and D95 were computed from six shifted-isocenter plans to evaluate plan robustness. A dosimetric study was performed to compare the internal target volume-based robust optimization plans (ITV-IMRT and ITV-VMAT) and conventional PTV margin-based plans (PTV-IMRT and PTV-VMAT). The dosimetric comparison parameters were: ITV target mean dose (Dmean), R95 (D95/Dprescription), Paddick's conformity index (CI), homogeneity index (HI), monitor units (MU), and OAR doses including lung (Dmean, V20Gy and V15Gy), chest wall, heart, esophagus, and maximum cord doses. A comparison of optimization results showed the robustly optimized plans had better ITV dose coverage, better CI, worse HI, and lower OAR doses than conventional PTV margin-based plans. Plan robustness evaluation showed that the perturbed D99, D98, and D95 doses all satisfied the requirement that at least 99% of the ITV receive 95% of the prescription dose. It was also observed that PTV margin-based plans had higher MU than robustly optimized plans. The results also showed robust optimization can generate plans that offer increased OAR sparing, especially for normal lungs and OARs near or abutting the target. A weak correlation was found between normal lung dose and target size, and no other correlation was observed in this study. © 2018 University of Arkansas for Medical Sciences. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
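To make the robustness criterion concrete, the sketch below evaluates Dx metrics from a sample of voxel doses and applies the pass criterion quoted above (at least 99% of the ITV receiving 95% of the prescription). The dose values are simulated placeholders, not patient data.

```python
import numpy as np

def dose_metric(dose_voxels, x):
    """D_x: minimum dose received by the hottest x% of the volume,
    i.e. the (100 - x)th percentile of the voxel doses."""
    return np.percentile(dose_voxels, 100 - x)

rng = np.random.default_rng(1)
shifted_itv_dose = rng.normal(61.0, 1.2, 5000)   # Gy, simulated perturbed plan

prescription = 60.0
for x in (99, 98, 95):
    d = dose_metric(shifted_itv_dose, x)
    print(f"D{x} = {d:.1f} Gy, meets 95% of Rx: {d >= 0.95 * prescription}")
```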
Circuit-level optimisation of a:Si TFT-based AMOLED pixel circuits for maximum hold current
NASA Astrophysics Data System (ADS)
Foroughi, Aidin; Mehrpoo, Mohammadreza; Ashtiani, Shahin J.
2013-11-01
The design of AMOLED pixel circuits involves manifold constraints and trade-offs, which provides an incentive for circuit designers to seek optimal solutions for different objectives. In this article, we present a discussion of the viability of an optimal solution to achieve the maximum hold current. A compact formula for component sizing in a conventional 2T1C pixel is, therefore, derived. Compared to SPICE simulation results, for several pixel sizes, our predicted optimum sizing yields maximum currents with errors of less than 0.4%.
Polishing parameter optimization for end-surface of chalcogenide glass fiber connector
NASA Astrophysics Data System (ADS)
Guo, Fangxia; Dai, Shixun; Tang, Junzhou; Wang, Xunsi; Li, Xing; Xu, Yinsheng; Wu, Yuehao; Liu, Zijun
2017-11-01
We investigated the optimization of parameters for polishing the end surface of a chalcogenide glass fiber connector. SiC abrasive particles of six different sizes were used to polish the fiber in order of size from large to small. We analyzed the effects of polishing parameters such as particle size, grinding speed and polishing duration on the quality of the fiber end surface and determined the optimized polishing parameters. We found that a high-quality fiber end surface can be achieved using only three different SiC abrasives. The surface roughness of the final ChG fiber end surface is about 48 nm, without any scratches, spots or cracks. Such polishing processes could reduce the average insertion loss of the connector to about 3.4 dB.
Evaluating Suit Fit Using Performance Degradation
NASA Technical Reports Server (NTRS)
Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar
2012-01-01
The Mark III planetary technology demonstrator space suit can be tailored to an individual by swapping the modular components of the suit, such as the arms, legs, and gloves, as well as adding or removing sizing inserts in key areas. A method was sought to identify the transition from an ideal suit fit to a bad fit and how to quantify this breakdown using a metric of mobility-based human performance data. To this end, the degradation of the range of motion of the elbow and wrist of the suit as a function of suit sizing modifications was investigated to attempt to improve suit fit. The sizing range tested spanned optimal and poor fit and was adjusted incrementally in order to compare each joint angle across five different sizing configurations. Suited range of motion data were collected using a motion capture system for nine isolated and functional tasks utilizing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm by itself. Findings indicated that no single joint drives the performance of the arm as a function of suit size; instead it is based on the interaction of multiple joints along a limb. To determine a size adjustment range where an individual can operate the suit at an acceptable level, a performance detriment limit was set. This user-selected limit reveals the task-dependent tolerance of the suit fit around optimal size. For example, the isolated joint motion indicated that the suit can deviate from optimal by as little as -0.6 in to -2.6 in before experiencing a 10% performance drop in the wrist or elbow joint. The study identified a preliminary method to quantify the impact of size on performance and developed a new way to gauge tolerances around optimal size.
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-10-01
This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
Closed-loop optimization of chromatography column sizing strategies in biopharmaceutical manufacture
Allmendinger, Richard; Simaria, Ana S; Turner, Richard; Farid, Suzanne S
2014-01-01
BACKGROUND This paper considers a real-world optimization problem involving the identification of cost-effective equipment sizing strategies for the sequence of chromatography steps employed to purify biopharmaceuticals. Tackling this problem requires solving a combinatorial optimization problem subject to multiple constraints, uncertain parameters, and time-consuming fitness evaluations. RESULTS An industrially-relevant case study is used to illustrate that evolutionary algorithms can identify chromatography sizing strategies with significant improvements in performance criteria related to process cost, time and product waste over the base case. The results demonstrate also that evolutionary algorithms perform best when infeasible solutions are repaired intelligently, the population size is set appropriately, and elitism is combined with a low number of Monte Carlo trials (needed to account for uncertainty). Adopting this setup turns out to be more important for scenarios where less time is available for the purification process. Finally, a data-visualization tool is employed to illustrate how user preferences can be accounted for when it comes to selecting a sizing strategy to be implemented in a real industrial setting. CONCLUSION This work demonstrates that closed-loop evolutionary optimization, when tuned properly and combined with a detailed manufacturing cost model, acts as a powerful decisional tool for the identification of cost-effective purification strategies. © 2013 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:25506115
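The tuning recommendations above (intelligent repair of infeasible solutions, elitism, a low number of Monte Carlo trials) can be illustrated with a toy genetic algorithm. The column-diameter options, the diameter budget standing in for process constraints, and the cost model with an uncertain feed titre are all invented for this sketch, not taken from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)
OPTIONS = np.array([30, 45, 60, 80, 100])    # candidate column diameters (cm)
N_STEPS, POP, GENS = 3, 20, 60               # chromatography steps, GA settings

def repair(ind, budget=200):
    """Intelligent repair: shrink the largest column until an assumed
    total-diameter budget is met, instead of discarding the individual."""
    while OPTIONS[ind].sum() > budget and ind.max() > 0:
        ind[np.argmax(OPTIONS[ind])] -= 1
    return ind

def fitness(ind, mc_trials=3):
    """Average an invented cost model over a low number of Monte Carlo
    trials to account for an uncertain feed titre."""
    cost = 0.0
    for _ in range(mc_trials):
        titre = rng.normal(5.0, 0.5)                       # g/L, uncertain
        cost += OPTIONS[ind].sum() + 40 * titre / OPTIONS[ind].min()
    return -cost / mc_trials                               # maximize -cost

def tournament(pop, scores, k=2):
    idx = rng.integers(0, len(pop), k)
    return pop[idx[np.argmax(scores[idx])]].copy()

pop = rng.integers(0, len(OPTIONS), (POP, N_STEPS))
for gen in range(GENS):
    pop = np.array([repair(ind.copy()) for ind in pop])
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argmax(scores)].copy()                  # elitism
    new = [elite]
    while len(new) < POP:                                  # crossover + mutation
        p1, p2 = tournament(pop, scores), tournament(pop, scores)
        cut = rng.integers(1, N_STEPS)
        child = np.concatenate([p1[:cut], p2[cut:]])
        if rng.random() < 0.2:
            child[rng.integers(N_STEPS)] = rng.integers(len(OPTIONS))
        new.append(child)
    pop = np.array(new)
print("best sizing strategy (diameters, cm):", OPTIONS[elite])
```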
NASA Astrophysics Data System (ADS)
Sengupta, Avery; Gupta, Surashree Sen; Ghosh, Mahua
2013-03-01
The purpose of the present study was to obtain optimal processing conditions for the preparation of a uniform-sized nanoemulsion of conjugated linolenic acid (CLnA)-rich oil, to increase the oxidative stability of CLnA, using a high-speed disperser (HSD) and ultrasonication. The emulsifiers used were egg phospholipid and soya protein isolate. The effects of oil concentration [0.05 to 1.25 % (w/w)], emulsifier ratio [0.1:0.9 to 0.9:0.1 (phospholipid:protein)], HSD speed (2,000 to 12,000 rpm) and duration of the HSD and sonication treatments (10 to 50 min) were observed. Optimization was performed with and without response surface methodology (RSM). The optimum compositional variables were an oil concentration of 1 % and a phospholipid:protein molar ratio of 0.5:0.5. Maximum size reduction occurred at an HSD speed of 10,000 rpm. The HSD should be applied for 40 min, followed by 40 min of ultrasonication. The droplet sizes in the nanoemulsion ranged between 173 ± 1.20 and 183 ± 0.94 nm. Nanoemulsification is a size-reduction technique in which the oil present in the emulsion can be easily stabilized, which increases the shelf-life of the oil. In the present study, the derived reaction parameters were optimized using RSM to produce a nanoemulsion of CLnA-rich oil of minimum droplet size and maximum stability.
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems
Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.
Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.
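A bare-bones particle swarm optimizer for this kind of component-sizing problem is sketched below. The decision variables (PV area, turbine count, battery capacity), the crude daily energy yields, the unit costs, and the unmet-load penalty are all invented placeholders rather than values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
# Decision vector: [PV area (m2), wind turbines (count), battery (kWh)].
LO = np.array([0.0, 0.0, 0.0])
HI = np.array([500.0, 20.0, 400.0])

def cost(x):
    """Invented annualized cost plus a heavy penalty for unmet load."""
    pv, wt, bat = x
    supply = 0.9 * pv + 13.0 * wt + 0.1 * bat      # kWh/day, assumed yields
    unmet = max(0.0, 250.0 - supply)               # 250 kWh/day demand
    return 120 * pv + 900 * wt + 150 * bat + 1e4 * unmet

n, dims = 30, 3
x = rng.uniform(LO, HI, (n, dims))
v = np.zeros((n, dims))
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, LO, HI)                     # keep sizes within bounds
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("optimum [PV m2, turbines, battery kWh]:", np.round(gbest, 1))
```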
Application of Box-Behnken design to prepare gentamicin-loaded calcium carbonate nanoparticles.
Maleki Dizaj, Solmaz; Lotfipour, Farzaneh; Barzegar-Jalali, Mohammad; Zarrintan, Mohammad-Hossein; Adibkia, Khosro
2016-09-01
The aim of this research was to prepare and optimize calcium carbonate (CaCO3) nanoparticles as carriers for gentamicin sulfate. A chemical precipitation method was used to prepare the gentamicin sulfate-loaded CaCO3 nanoparticles. A 3-factor, 3-level Box-Behnken design was used for the optimization procedure, with the molar ratio of CaCl2:Na2CO3 (X1), the concentration of drug (X2), and the speed of homogenization (X3) as the independent variables. The particle size and entrapment efficiency were considered as response variables. Mathematical equations and response surface plots were used, along with the contour plots, to relate the dependent and independent variables. The results indicated that the speed of homogenization was the main variable contributing to particle size and entrapment efficiency. The combined effect of all three independent variables was also evaluated. Using the response optimization design, the optimized X1-X3 levels were predicted. An optimized formulation was then prepared according to these levels, resulting in a particle size of 80.23 nm and an entrapment efficiency of 30.80%.
Baek, Sang-Soo; Choi, Dong-Ho; Jung, Jae-Woon; Lee, Hyung-Jin; Lee, Hyuk; Yoon, Kwang-Sik; Cho, Kyung Hwa
2015-12-01
Continued urbanization and development result in an increase in impervious areas and in surface runoff, including pollutants. One of the greatest issues in pollutant emissions is the first flush effect (FFE), which denotes a greater discharge rate of pollutant mass in the early part of a storm. Low impact development (LID) practices have been proposed as a promising strategy to control urban stormwater runoff and pollution in the urban ecosystem. However, this requires many experimental and modeling efforts to test LID characteristics and propose an adequate guideline for optimizing LID management. In this study, we propose a novel methodology to optimize the sizes of different types of LID by conducting intensive stormwater monitoring and numerical modeling at a commercial site in Korea. The proposed methodology optimizes LID size in an attempt to moderate the FFE on a receiving waterbody. Accordingly, the main objective of the optimization is to minimize the mass first flush (MFF), an indicator for quantifying the FFE. The optimal sizes of 6 different LIDs ranged from 1.2 mm to 3.0 mm in terms of runoff depths, which significantly moderates the FFE. We expect that the proposed methodology can be instructive for establishing LID strategies to mitigate the FFE. Copyright © 2015 Elsevier Ltd. All rights reserved.
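For readers unfamiliar with the metric, a minimal computation of a mass first flush ratio is sketched below. The storm hydrograph and pollutant concentrations are invented, and MFF30 (the first 30% of runoff volume) is used as the assumed variant.

```python
import numpy as np

def mass_first_flush(flow, conc, frac=0.3):
    """MFF ratio: share of pollutant mass carried by the first `frac`
    of runoff volume, divided by `frac` (MFF > 1 indicates first flush)."""
    vol = np.cumsum(flow)
    mass = np.cumsum(flow * conc)
    i = np.searchsorted(vol, frac * vol[-1])
    return (mass[i] / mass[-1]) / frac

# Hypothetical storm: flow (m3 per interval) and TSS concentration (mg/L).
flow = np.array([2, 6, 10, 8, 5, 3, 2, 1], float)
conc = np.array([220, 180, 120, 70, 40, 30, 25, 20], float)
print("MFF30 =", round(mass_first_flush(flow, conc), 2))
```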
Tailored magnetic nanoparticles for optimizing magnetic fluid hyperthermia.
Khandhar, Amit P; Ferguson, R Matthew; Simon, Julian A; Krishnan, Kannan M
2012-03-01
Magnetic Fluid Hyperthermia (MFH) is a promising approach towards adjuvant cancer therapy that is based on the localized heating of tumors using the relaxation losses of iron oxide magnetic nanoparticles (MNPs) in alternating magnetic fields (AMF). In this study, we demonstrate optimization of MFH by tailoring MNP size to an applied AMF frequency. Unlike conventional aqueous synthesis routes, we use organic synthesis routes that offer precise control over MNP size (diameter ∼10 to 25 nm), size distribution, and phase purity. Furthermore, the particles are successfully transferred to the aqueous phase using a biocompatible amphiphilic polymer, and demonstrate long-term shelf life. A rigorous characterization protocol ensures that the water-stable MNPs meet all the critical requirements: (1) uniform shape and monodispersity, (2) phase purity, (3) stable magnetic properties approaching that of the bulk, (4) colloidal stability, (5) substantial shelf life, and (6) no significant in vitro toxicity. Using a dedicated hyperthermia system, we then identified that 16 nm monodisperse MNPs (σ = 0.175) respond optimally to our chosen AMF conditions (f = 373 kHz, H₀ = 14 kA/m); however, with a broader size distribution (σ = 0.284) the Specific Loss Power (SLP) decreases by 30%. Finally, we show that these tailored MNPs demonstrate maximum hyperthermia efficiency by reducing viability of Jurkat cells in vitro, suggesting our optimization translates truthfully to cell populations. In summary, we present a way to intrinsically optimize MFH by tailoring the MNPs to any applied AMF, a required precursor to optimizing dose and time of treatment. Copyright © 2011 Wiley Periodicals, Inc.
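As a quick illustration of how heating efficiency is quantified, the sketch below estimates the specific loss power from the initial slope of a heating curve under an adiabatic approximation. The heat capacity, masses, and temperature ramp are assumed values for illustration, not measurements from the study.

```python
import numpy as np

def specific_loss_power(t, T, c_p=4.186, m_sample=1.0, m_fe=5e-4):
    """SLP (W/g Fe) from the initial slope of an adiabatic heating curve:
    SLP = c_p * m_sample * (dT/dt) / m_Fe, with c_p in J/(g K) and masses
    in grams. All parameter values here are assumed for illustration."""
    dTdt = np.polyfit(t, T, 1)[0]          # K/s over the initial interval
    return c_p * m_sample * dTdt / m_fe

t = np.arange(0, 30, 5.0)                  # s
T = 37.0 + 0.03 * t                        # hypothetical initial heating ramp
print(f"SLP = {specific_loss_power(t, T):.0f} W/g Fe")
```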
Swanson, William H.; Horner, Douglas G.; Dul, Mitchell W.; Malinovsky, Victor E.
2014-01-01
Purpose To develop guidelines for engineering perimetric stimuli to reduce test-retest variability in glaucomatous defects. Methods Perimetric testing was performed on one eye for 62 patients with glaucoma and 41 age-similar controls on size III and frequency-doubling perimetry and three custom tests with Gaussian blob and Gabor sinusoid stimuli. Stimulus range was controlled by values for ceiling (maximum sensitivity) and floor (minimum sensitivity). Bland-Altman analysis was used to derive 95% limits of agreement on test and retest, and bootstrap analysis was used to test the hypotheses about peak variability. Results Limits of agreement for the three custom stimuli were similar in width (0.72 to 0.79 log units) and peak variability (0.22 to 0.29 log units) for a stimulus range of 1.7 log units. The width of the limits of agreement for size III decreased from 1.78 to 1.37 to 0.99 log units for stimulus ranges of 3.9, 2.7, and 1.7 log units, respectively (F = 3.23, P < 0.001); peak variability was 0.99, 0.54, and 0.34 log units, respectively (P < 0.01). For a stimulus range of 1.3 log units, limits of agreement were narrowest with Gabor and widest with size III stimuli, and peak variability was lower (P < 0.01) with Gabor (0.18 log units) and frequency-doubling perimetry (0.24 log units) than with size III stimuli (0.38 log units). Conclusions Test-retest variability in glaucomatous visual field defects was substantially reduced by engineering the stimuli. Translational Relevance The guidelines should allow developers to choose from a wide range of stimuli. PMID:25371855
Swanson, William H; Horner, Douglas G; Dul, Mitchell W; Malinovsky, Victor E
2014-09-01
To develop guidelines for engineering perimetric stimuli to reduce test-retest variability in glaucomatous defects. Perimetric testing was performed on one eye for 62 patients with glaucoma and 41 age-similar controls on size III and frequency-doubling perimetry and three custom tests with Gaussian blob and Gabor sinusoid stimuli. Stimulus range was controlled by values for ceiling (maximum sensitivity) and floor (minimum sensitivity). Bland-Altman analysis was used to derive 95% limits of agreement on test and retest, and bootstrap analysis was used to test the hypotheses about peak variability. Limits of agreement for the three custom stimuli were similar in width (0.72 to 0.79 log units) and peak variability (0.22 to 0.29 log units) for a stimulus range of 1.7 log units. The width of the limits of agreement for size III decreased from 1.78 to 1.37 to 0.99 log units for stimulus ranges of 3.9, 2.7, and 1.7 log units, respectively (F = 3.23, P < 0.001); peak variability was 0.99, 0.54, and 0.34 log units, respectively (P < 0.01). For a stimulus range of 1.3 log units, limits of agreement were narrowest with Gabor and widest with size III stimuli, and peak variability was lower (P < 0.01) with Gabor (0.18 log units) and frequency-doubling perimetry (0.24 log units) than with size III stimuli (0.38 log units). Test-retest variability in glaucomatous visual field defects was substantially reduced by engineering the stimuli. The guidelines should allow developers to choose from a wide range of stimuli.
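The Bland-Altman limits of agreement used in both records above are straightforward to compute; the sketch below derives them from simulated paired test-retest sensitivities (in log units), which stand in for the perimetry data.

```python
import numpy as np

def limits_of_agreement(test, retest):
    """Bland-Altman 95% limits of agreement: mean difference
    +/- 1.96 standard deviations of the paired differences."""
    d = np.asarray(test) - np.asarray(retest)
    half = 1.96 * d.std(ddof=1)
    return d.mean() - half, d.mean() + half

# Hypothetical paired sensitivities (log units) at the same test locations.
rng = np.random.default_rng(3)
test = rng.normal(1.2, 0.3, 60)
retest = test + rng.normal(0, 0.15, 60)
lo, hi = limits_of_agreement(test, retest)
print(f"limits of agreement width = {hi - lo:.2f} log units")
```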
Simulation optimization of spherical non-polar guest recognition by deep-cavity cavitands
Wanjari, Piyush P.; Gibb, Bruce C.; Ashbaugh, Henry S.
2013-01-01
Biomimetic deep-cavity cavitand hosts possess unique recognition and encapsulation properties that make them capable of selectively binding a range of non-polar guests within their hydrophobic pocket. Adamantane-based derivatives, which fit snugly within the pocket of octa-acid deep-cavity cavitands, exhibit some of the strongest host binding. Here we explore the roles of guest size and attractiveness in optimizing guest binding to form 1:1 complexes with octa-acid cavitands in water. Specifically, we simulate the water-mediated interactions of the cavitand with adamantane and a range of simple Lennard-Jones guests of varying diameter and attractive well-depth. Initial simulations performed with methane indicate hydrated methanes preferentially reside within the host pocket, although these guests frequently trade places with water and other methanes in bulk solution. The interaction strength of hydrophobic guests increases with increasing size, from sizes slightly smaller than methane to Lennard-Jones guests comparable in size to adamantane. Over this guest size range the preferential guest binding location migrates from the bottom of the host pocket upwards. For guests larger than adamantane, however, binding becomes less favorable as the minimum in the potential of mean force shifts to the cavitand face around the portal. For a fixed guest diameter, the Lennard-Jones well-depth is found to systematically shift the guest-host potential of mean force to lower free energies; however, the optimal guest size is found to be insensitive to increasing well-depth. Ultimately our simulations show that adamantane lies within the optimal range of guest sizes, with attractive interactions significant enough to match the most tightly bound Lennard-Jones guests studied. PMID:24359375
Optimal tree increment models for the Northeastern United States
Don C. Bragg
2003-01-01
I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Optimal Tree Increment Models for the Northeastern United States
Don C. Bragg
2005-01-01
I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...
Stochastic Price Models and Optimal Tree Cutting: Results for Loblolly Pine
Robert G. Haight; Thomas P. Holmes
1991-01-01
An empirical investigation of stumpage price models and optimal harvest policies is conducted for loblolly pine plantations in the southeastern United States. The stationarity of monthly and quarterly series of sawtimber prices is analyzed using a unit root test. The statistical evidence supports stationary autoregressive models for the monthly series and for the...
Factors Contributing to Optimal Human Functioning in People of Color in the United States
ERIC Educational Resources Information Center
Constantine, Madonna G.; Sue, Derald Wing
2006-01-01
Many conceptualizations of optimal human functioning are based on Western European notions of healthy and unhealthy development and daily living. When applied to people of color in the United States, however, these conceptualizations may prove inapplicable because of their Western culture-bound nature. The authors explore the role that cultural…
USDA-ARS?s Scientific Manuscript database
The instantaneous transpiration efficiency (ITE, the ratio of photosynthesis rate to transpiration) is an important variable for crops, because it ultimately affects dry mass production per unit of plant water lost to the atmosphere. The theory that stomata optimize carbon uptake per unit water used...
24 CFR 982.555 - Informal hearing for participant.
Code of Federal Regulations, 2010 CFR
2010-04-01
... allowance schedule. (iii) A determination of the family unit size under the PHA subsidy standards. (iv) A... appropriate for the family unit size under the PHA subsidy standards, or the PHA determination to deny the... with HQS because of the family size. (8) A determination by the PHA to exercise or not to exercise any...
A genetic algorithm solution to the unit commitment problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.
1996-02-01
This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions, but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.
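A toy version of this approach is sketched below: a binary commitment schedule evolves under a capacity-shortfall penalty whose weight grows with the generation count, loosely imitating the Varying Quality Function idea (my interpretation, not the paper's exact formulation). The four-unit system, costs, and load profile are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
CAP = np.array([300, 200, 150, 80])                 # unit capacities (MW)
COST = np.array([8.0, 10.0, 13.0, 18.0])            # marginal costs ($/MWh)
DEMAND = np.array([400, 450, 520, 610, 560, 480])   # hourly load (MW)
H, U, POP, GENS = len(DEMAND), len(CAP), 30, 80

def eval_cost(s, gen):
    """Dispatch committed units in merit order; penalize capacity shortfall
    with a weight that tightens as generations progress, letting early
    search cross infeasible regions."""
    total, shortfall = 0.0, 0.0
    for h in range(H):
        need = float(DEMAND[h])
        for u in np.argsort(COST):
            if s[h, u] and need > 0:
                p = min(CAP[u], need)
                total += COST[u] * p
                need -= p
        shortfall += max(need, 0.0)
    return total + (50.0 + 10.0 * gen) * shortfall

pop = rng.integers(0, 2, (POP, H, U))
for gen in range(GENS):
    costs = np.array([eval_cost(s, gen) for s in pop])
    survivors = pop[np.argsort(costs)[:POP // 2]]    # truncation selection
    children = survivors.copy()
    flip = rng.random(children.shape) < 0.05         # bit-flip mutation
    children[flip] ^= 1
    pop = np.concatenate([survivors, children])

costs = np.array([eval_cost(s, GENS) for s in pop])
print("best schedule cost: $%.0f" % costs.min())
```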
Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed
NASA Astrophysics Data System (ADS)
Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy
2015-09-01
Deriving a unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. The hourly streamflow measurements needed to derive a unit hydrograph are, however, not always available, so methods must be developed for deriving unit hydrographs for ungauged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics; these are usually referred to as synthetic unit hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
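The gamma-shaped hydrograph and its translation/scaling adjustment are easy to prototype; the sketch below fits its parameters to invented observed ordinates by coarse grid search (standing in for the paper's particle swarm optimization), with the scale factor assumed known for brevity.

```python
import numpy as np
from math import gamma

def gamma_uh(t, n, k, t0=0.0, scale=1.0):
    """Gamma-shaped synthetic unit hydrograph, translated by t0 and scaled
    to the watershed (a sketch of the general functional form only)."""
    tt = np.maximum(t - t0, 0.0)
    return scale * (tt / k) ** (n - 1) * np.exp(-tt / k) / (k * gamma(n))

# Hypothetical observed ordinates (discharge per unit rainfall) to fit.
t_obs = np.arange(0, 48, 3.0)                       # hours
q_obs = gamma_uh(t_obs, 3.2, 4.5, t0=2.0, scale=150) + \
        np.random.default_rng(5).normal(0, 1.0, t_obs.size)

best, best_err = None, np.inf
for n in np.linspace(2, 5, 16):
    for k in np.linspace(2, 8, 25):
        for t0 in np.linspace(0, 4, 9):
            q = gamma_uh(t_obs, n, k, t0, scale=150)
            err = ((q - q_obs) ** 2).sum()          # simple SSE objective
            if err < best_err:
                best, best_err = (n, k, t0), err
print("fitted (n, k, t0):", np.round(best, 2))
```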
NASA Astrophysics Data System (ADS)
Meng, Chao; Zhou, Hong; Cong, Dalong; Wang, Chuanwei; Zhang, Peng; Zhang, Zhihui; Ren, Luquan
2012-06-01
The thermal fatigue behavior of hot-work tool steel processed by a biomimetic coupled laser remelting process shows a remarkable improvement compared to the untreated sample. The 'dowel pin effect', the 'dam effect' and the 'fence effect' of non-smooth units are the main reasons for the conspicuous improvement in thermal fatigue behavior. In order to further enhance the 'dowel pin effect', the 'dam effect' and the 'fence effect', this study investigated the effect of different unit morphologies (including 'prolate', 'U' and 'V' morphologies) and of the same unit morphology in different sizes on the thermal fatigue behavior of H13 hot-work tool steel. The results showed that the 'U' morphology unit had the optimum thermal fatigue behavior, followed by the 'V' morphology, which was better than the 'prolate' morphology unit; when the unit morphology was identical, the thermal fatigue behavior of samples with large unit sizes was better than that of samples with small sizes.
24 CFR 884.219 - Overcrowded and underoccupied units.
Code of Federal Regulations, 2010 CFR
2010-04-01
... assisted under this part is not Decent, Safe, and Sanitary by reason of increase in Family size, or that a Contract unit is larger than appropriate for the size of the Family in occupancy, housing assistance payments with respect to such unit will not be abated, unless the Owner fails to offer the Family a...
Size of households and income disparities.
Kuznets, S
1981-01-01
The author examines "the relation between differentials in size of households, (preponderantly family households including one-person units) and disparities in income per household, per person, or per some version of consuming unit." The analysis is based on data for the United States, the Federal Republic of Germany, Israel, Taiwan, the Philippines, and Thailand. excerpt
Preparation and Characterization of a Lecithin Nanoemulsion as a Topical Delivery System
NASA Astrophysics Data System (ADS)
Zhou, Huafeng; Yue, Yang; Liu, Guanlan; Li, Yan; Zhang, Jing; Gong, Qiu; Yan, Zemin; Duan, Mingxing
2010-01-01
The purpose of this study was to establish a lecithin nanoemulsion (LNE) without any synthetic surfactant as a topical delivery vehicle and to evaluate its topical delivery potential through the following factors: particle size, morphology, viscosity, stability, skin hydration and skin penetration. Experimental results demonstrated that increasing concentrations of soybean lecithin and glycerol resulted in a smaller LNE droplet size and increasing viscosity, respectively. The droplet size of the optimized LNE, with a glycerol concentration above 75% (w/w), changed from 92 (F10) to 58 nm (F14). Additionally, LNE incorporated into an o/w cream improved the skin hydration capacity of the cream significantly, with about a 2.5-fold increase when the concentration of LNE reached 10%. LNE was also demonstrated to improve the penetration of Nile red (NR) dye into the dermis layer when an o/w cream incorporating NR-loaded LNE was applied to the abdominal skin of rats in vivo. Specifically, the arbitrary unit (ABU) of fluorescence in the dermis layer that had received the cream with NR-loaded LNE was about 9.9-fold higher than with the cream containing an NR-loaded general emulsion (GE). These observations suggest that LNE could be used as a promising topical delivery vehicle for lipophilic compounds.
Distributed shared memory for roaming large volumes.
Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno
2006-01-01
We present a cluster-based volume rendering system for roaming very large volumes. This system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing and rendering for optimal volume roaming.
Morel, O; Monceau, E; Tran, N; Malartic, C; Morel, F; Barranger, E; Côté, J F; Gayat, E; Chavatte-Palmer, P; Cabrol, D; Tsatsaris, V
2009-06-01
To evaluate radiofrequency (RF) efficiency and safety for the ablation of retained placenta in humans, using a pregnant sheep model. Experimental study. Laboratory of Surgery School, Nancy, France. Three pregnant ewes/ten human placentas. Various RF procedures were tested in pregnant ewes on 50 placentomes (individual placental units). Reproducibility of the best procedure was then evaluated in a further 20 placentomes and in ten human term placentas in vitro after delivery. Placental tissue destruction, lesion size, myometrial lesions. Low power (100 W) and low target temperatures (60°C) led to homogeneous tissue destruction, without myometrial lesions. No significant difference was observed in terms of lesion size or procedure duration between the placentomes of pregnant ewes in vivo and the human placentas in vitro. The diameter of the ablation could be correlated with the tine deployment. The placental tissue structure is very permissive to RF energy, which suggests that RF could be used for the ablation of retained placenta, provided there is optimal control of tissue destruction. These results call for further experimental evaluation.
NASA Astrophysics Data System (ADS)
Yang, Mei; Jiao, Fengjun; Li, Shulian; Li, Hengqiang; Chen, Guangwen
2015-08-01
A self-sustained, complete and miniaturized methanol fuel processor has been developed based on modular integration and microreactor technology. The fuel processor comprises a methanol oxidative reformer, a methanol combustor and a two-stage CO preferential oxidation unit. A microchannel heat exchanger is employed to recover heat from the hot stream, miniaturize system size and thus achieve high energy utilization efficiency. Through optimized thermal management and proper control of the operating parameters, the fuel processor can start up in 10 min at room temperature without external heating. A self-sustained state is achieved with an H2 production rate of 0.99 Nm³ h⁻¹ and an extremely low CO content below 25 ppm. This amount of H2 is sufficient to supply a 1 kWe proton exchange membrane fuel cell. The corresponding thermal efficiency of the whole processor is higher than 86%. The size and weight of the assembled reactors integrated with microchannel heat exchangers are 1.4 L and 5.3 kg, respectively, demonstrating a very compact construction of the fuel processor.
Larger cages with housing unit environment enrichment improve the welfare of marmosets.
Yoshimoto, Takuro; Takahashi, Eiki; Yamashita, Shunji; Ohara, Kiichi; Niimi, Kimie
2018-02-09
The provision of adequate space for laboratory animals is essential not only for good welfare but also for accurate studies. For example, housing conditions for primates used in biomedical research may negatively affect welfare and thus the reliability of findings. In common marmosets (Callithrix jacchus), an appropriate cage size enables a socially harmonious family environment and optimizes reproductive potential. In this study, we investigated the effects of cage size on body weight (BW), behavior, and nursing succession in the common marmoset. Large cages (LCs) with housing unit environment enrichment led to an increase in BW, while small cages (SCs) caused stereotypic behaviors that were not observed in LCs. In addition, the BW of infants increased with age in LCs. Our findings indicate that the welfare of marmosets was enhanced by living in LCs. Research on non-human primates is essential for understanding the human brain and developing knowledge-based strategies for the diagnosis and treatment of psychiatric and neurological disorders. Thus, the present findings are important because they indicate that different cages may influence emotional and behavioral phenotypes.
BlochSolver: A GPU-optimized fast 3D MRI simulator for experimentally compatible pulse sequences
NASA Astrophysics Data System (ADS)
Kose, Ryoichi; Kose, Katsumi
2017-08-01
A magnetic resonance imaging (MRI) simulator, which reproduces MRI experiments using computers, has been developed using two graphics processing unit (GPU) boards (GTX 1080). The MRI simulator was developed to run according to the pulse sequences used in experiments. Experiments and simulations were performed to demonstrate the usefulness of the MRI simulator for three types of pulse sequences, namely, three-dimensional (3D) gradient-echo, 3D radio-frequency-spoiled gradient-echo, and gradient-echo multislice, with practical matrix sizes. The results demonstrated that the calculation speed using two GPU boards was typically about 7 TFLOPS and about 14 times faster than the calculation speed using CPUs (two 18-core Xeons). We also found that MR images acquired by experiment could be reproduced using an appropriate number of subvoxels, and that 3D isotropic and two-dimensional multislice imaging experiments for practical matrix sizes could be simulated using the MRI simulator. Therefore, we concluded that such powerful MRI simulators are expected to become an indispensable tool for MRI research and development.
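At the heart of any such simulator is a per-(sub)voxel Bloch update; a minimal CPU, single-spin version is sketched below, with rotation about the effective field followed by relaxation. The field values, relaxation times, and precession sign convention are illustrative assumptions, not details of the BlochSolver implementation.

```python
import numpy as np

def bloch_step(M, dt, b1=(0.0, 0.0), dB=0.0, T1=1.0, T2=0.08):
    """One hard-pulse Bloch update: rotate M about the effective field
    (Rodrigues' formula), then apply T1/T2 relaxation toward M0 = +z."""
    gamma = 2 * np.pi * 42.58e6            # rad/s/T for protons
    B = np.array([b1[0], b1[1], dB])       # effective field (T)
    phi = gamma * np.linalg.norm(B) * dt   # rotation angle this step
    if phi > 0:
        axis = B / np.linalg.norm(B)
        M = (np.cos(phi) * M + np.sin(phi) * np.cross(axis, M)
             + (1 - np.cos(phi)) * axis * (axis @ M))
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
    return np.array([M[0] * E2, M[1] * E2, 1 + (M[2] - 1) * E1])

M = np.array([1.0, 0.0, 0.0])              # magnetization after an ideal 90° pulse
for _ in range(100):                       # free precession, 1 µT off-resonance
    M = bloch_step(M, dt=1e-6, dB=1e-6)
print("Mxy magnitude:", np.hypot(M[0], M[1]))
```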
GPU Accelerated Vector Median Filter
NASA Technical Reports Server (NTRS)
Aras, Rifat; Shen, Yuzhong
2011-01-01
Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n² vectors has to be compared with the other n² - 1 vectors in distances. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which has, to the best of our knowledge, never been done before. The performance of the GPU-accelerated vector median filter is compared to that of the CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x improvement in the performance of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
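To make the cost structure concrete, here is a straightforward, unaccelerated NumPy reference implementation of the vector median filter; the L2 distance metric and 3x3 window are assumed choices, and the per-window pairwise distance matrix is exactly the quadratic-per-window work a GPU version parallelizes.

```python
import numpy as np

def vector_median_filter(img, w=3):
    """Vector median filter for an HxWx3 image: each output pixel is the
    window vector minimizing the summed L2 distance to all other vectors
    in the same window."""
    r = w // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    H, W = img.shape[:2]
    for y in range(H):
        for x in range(W):
            win = pad[y:y + w, x:x + w].reshape(-1, 3).astype(float)
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=2)
            out[y, x] = win[d.sum(axis=1).argmin()]
    return out

noisy = np.random.default_rng(6).integers(0, 256, (32, 32, 3), dtype=np.uint8)
clean = vector_median_filter(noisy)
```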
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Much research has been done to obtain an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method due to its step size procedure is discussed. In order to test the performance of each step size, we ran a steepest descent procedure in a C++ program. We applied it to an unconstrained optimization test problem with two variables and compared the numerical results of each step size procedure. Based on the numerical experiments, we conclude the general computational features and weaknesses of each procedure in each case of the problem.
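For illustration (sketched in Python rather than the paper's C++), the snippet below runs steepest descent under two common step-size rules, a fixed step and Armijo backtracking, on an assumed two-variable quadratic test problem; the test function and constants are invented, not those of the paper.

```python
import numpy as np

def steepest_descent(f, grad, x0, step="backtracking", alpha=0.05,
                     tol=1e-8, max_iter=10000):
    """Steepest descent with two step-size rules: a fixed step and
    Armijo backtracking line search (sketch)."""
    x = np.asarray(x0, float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        a = alpha
        if step == "backtracking":          # Armijo condition with c = 1e-4
            a = 1.0
            while f(x - a * g) > f(x) - 1e-4 * a * (g @ g):
                a *= 0.5
        x = x - a * g
    return x, k

# Assumed two-variable quadratic test problem.
f = lambda x: (x[0] - 3) ** 2 + 10 * (x[1] + 1) ** 2
grad = lambda x: np.array([2 * (x[0] - 3), 20 * (x[1] + 1)])
for rule in ("fixed", "backtracking"):
    x, k = steepest_descent(f, grad, [0.0, 0.0], step=rule)
    print(rule, "->", np.round(x, 4), "in", k, "iterations")
```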
Shi, Ya-jun; Zhang, Xiao-feil; Guo, Qiu-ting
2015-12-01
To develop a procedure for preparing paclitaxel-encapsulated PEGylated liposomes. The membrane hydration followed by extraction method was used to prepare the PEGylated liposomes. The process and formulation variables were optimized by a Box-Behnken design (BBD) of response surface methodology (RSM): for the formulation variables, the amounts of soya phosphatidylcholine (SPC) and PEG2000-DSPE as well as the ratio of SPC to drug were the independent variables and entrapment efficiency was the dependent variable, while for the process variables, temperature, pressure and cycle times were the independent variables and particle size and polydispersity index were the dependent variables. The optimized liposomal formulation was characterized for particle size, zeta potential, morphology and in vitro drug release. The entrapment efficiency, particle size, polydispersity index, zeta potential, and in vitro drug release of the PEGylated liposomes were found to be 80.3%, (97.15 ± 14.9) nm, 0.117 ± 0.019, (-30.3 ± 3.7) mV, and 37.4% in 24 h, respectively. The liposomes were found to be small, unilamellar and spherical with a smooth surface, as seen by transmission electron microscopy. The Box-Behnken response surface methodology facilitates the formulation and optimization of paclitaxel PEGylated liposomes.
Monte Carlo Optimization of Crystal Configuration for Pixelated Molecular SPECT Scanners
NASA Astrophysics Data System (ADS)
Mahani, Hojjat; Raisali, Gholamreza; Kamali-Asl, Alireza; Ay, Mohammad Reza
2017-02-01
Resolution-sensitivity-PDA tradeoff is the most challenging problem in the design and optimization of pixelated preclinical SPECT scanners. In this work, we addressed such a challenge from a crystal point of view by looking for an optimal pixelated scintillator using GATE Monte Carlo simulation. Various crystal configurations were investigated, and the influence of different pixel sizes, pixel gaps, and three scintillators on the tomographic resolution, sensitivity, and PDA of the camera was evaluated. The crystal configuration was then optimized using two objective functions: the weighted-sum and the figure-of-merit methods. The CsI(Na) reveals the highest sensitivity, of the order of 43.47 cps/MBq, in comparison to the NaI(Tl) and the YAP(Ce), for a 1.5×1.5 mm² pixel size and 0.1 mm gap. The results show that the spatial resolution, in terms of FWHM, improves from 3.38 to 2.21 mm while the sensitivity simultaneously deteriorates from 42.39 cps/MBq to 27.81 cps/MBq when the pixel size varies from 2×2 mm² to 0.5×0.5 mm² for a 0.2 mm gap, respectively. The PDA worsens from 0.91 to 0.42 when the pixel size decreases from 0.5×0.5 mm² to 1×1 mm² for a 0.2 mm gap at a 15° incidence angle. The two objective functions agree that the 1.5×1.5 mm² pixel size and 0.1 mm epoxy gap CsI(Na) configuration provides the best compromise for small-animal imaging using the HiReSPECT scanner. Our study highlights that the crystal configuration can significantly affect the performance of the camera, and thereby Monte Carlo optimization of pixelated detectors is mandatory in order to achieve an optimal-quality tomogram.
Optimization of Driving Styles for Fuel Economy Improvement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malikopoulos, Andreas; Aguilar, Juan P.
2012-01-01
Modern vehicles have sophisticated electronic control units, particularly to control engine operation with respect to a balance between fuel economy, emissions, and power. These control units are designed for specific driving conditions and testing. However, each individual driving style is different and rarely meets those driving conditions. In the research reported here we investigate those driving style factors that have a major impact on fuel economy. An optimization framework is proposed with the aim of optimizing driving styles with respect to these driving factors. A set of polynomial metamodels is constructed to reflect the responses produced by changes of the driving factors. We then compare the optimized driving styles to the original ones and evaluate the efficiency and effectiveness of the optimization formulation.
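A toy version of the metamodel step: fit a low-order polynomial to logged samples of a single driving factor against fuel consumption, then read off the factor value the metamodel predicts to be optimal. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical logged samples: mean acceleration (m/s^2) vs. fuel (L/100 km).
accel = np.array([0.4, 0.7, 1.0, 1.3, 1.6, 1.9, 2.2])
fuel = np.array([5.9, 5.6, 5.7, 6.1, 6.9, 7.8, 9.1])

coef = np.polyfit(accel, fuel, 2)            # quadratic metamodel
grid = np.linspace(accel.min(), accel.max(), 200)
best = grid[np.polyval(coef, grid).argmin()] # factor value minimizing fuel use
print(f"metamodel-optimal mean acceleration: {best:.2f} m/s^2")
```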
Habitat Demonstration Unit (HDU) Vertical Cylinder Habitat
NASA Technical Reports Server (NTRS)
Howe, Alan; Kennedy, Kriss J.; Gill, Tracy R.; Tri, Terry O.; Toups, Larry; Howard, Robert I.; Spexarth, Gary R.; Cavanaugh, Stephen; Langford, William M.; Dorsey, John T.
2014-01-01
NASA's Constellation Architecture Team defined an outpost scenario optimized for intensive mobility that uses small, highly mobile pressurized rovers supported by portable habitat modules that can be carried between locations of interest on the lunar surface. A compact vertical cylinder characterizes the habitat concept, where the large diameter maximizes usable flat floor area optimized for a gravity environment and allows for efficient internal layout. The module was sized to fit into payload fairings for the Constellation Ares V launch vehicle, and optimized for surface transport carried by the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) mobility system. Launch and other loads are carried through the barrel to a top and bottom truss that interfaces with a structural support unit (SSU). The SSU contains self-leveling feet and docking interfaces for Tri-ATHLETE grasping and heavy lift. A pressurized module needed to be created that was appropriate for the lunar environment, could be easily relocated to new locations, and could be docked together in multiples for expanding pressurized volume in a lunar outpost. It was determined that horizontally oriented pressure vessels did not optimize floor area, which takes advantage of the gravity vector for full use. Hybrid hard-inflatable habitats added an unproven degree of complexity that may eventually be worked out. Other versions of vertically oriented pressure vessels were either too big, bulky, or did not optimize floor area. The purpose of the HDU vertical habitat module is to provide pressurized units that can be docked together in a modular way for lunar outpost pressurized volume expansion, and to allow other vehicles, rovers, and modules to be attached to the outpost to allow for IVA (intra-vehicular activity) transfer between them. The module is a vertically oriented cylinder with a large radius to allow for maximal floor area and use of volume. The modular, 5-m-diameter HDU vertical habitat module consists of a 2-m-high barrel with 0.6-m-high end domes forming the 56-cubic-meter pressure vessel, and a 19-square-meter floor area. The module has up to four docking ports located orthogonally from each other around the perimeter, and up to one docking port each on the top or bottom end domes. In addition, the module has mounting trusses top and bottom for equipment, and to allow docking with the ATHLETE mobility system. Novel or unique features of the HDU vertical habitat module include the node-like function with multiple pressure hatches for docking with other versions of itself and other modules and vehicles; the capacity to be carried by an ATHLETE mobility system; and the ability to attach inflatable 'attic' domes to the top for additional pressurized volume.
Simplified Numerical Analysis of ECT Probe - Eddy Current Benchmark Problem 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikora, R.; Chady, T.; Gratkowski, S.
2005-04-09
In this paper, a third eddy current benchmark problem is considered. The objective of the benchmark is to determine the optimal operating frequency and size of a pancake coil designed for testing tubes made of Inconel. This can be achieved by maximizing the change in impedance of the coil due to a flaw. Approximation functions of the probe (coil) characteristic were developed and used in order to reduce the number of required calculations. This results in a significant speed-up of the optimization process. An optimal testing frequency and size of the probe were obtained as the final result of the calculation.
Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size
Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa
2016-01-01
Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy, and several randomized control trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with median differences of 30.7-fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1-fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913
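The dependence of sample size on effect size and power can be reproduced with a textbook two-proportion calculation; the response rates below are invented, not taken from the study's simulations.

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Sample size per arm for comparing two proportions (e.g. seizure
    reduction rates) with a normal approximation."""
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    return ((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

# Hypothetical: control responds in 40% of neonates, trial AED in 60%.
print("n per arm:", round(n_per_arm(0.40, 0.60)))
```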
Integrated solar energy system optimization
NASA Astrophysics Data System (ADS)
Young, S. K.
1982-11-01
The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.
Optimizing Aspect-Oriented Mechanisms for Embedded Applications
NASA Astrophysics Data System (ADS)
Hundt, Christine; Stöhr, Daniel; Glesner, Sabine
As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.
Nondimensional Representations for Occulter Design and Performance Evaluation
NASA Technical Reports Server (NTRS)
Cady, Eric
2011-01-01
An occulter is a spacecraft with precisely shaped optical edges which flies in formation with a telescope, blocking light from a star while leaving light from nearby planets unaffected. Using linear optimization, occulters can be designed for use with telescopes over a wide range of telescope aperture sizes, science bands, and starlight suppression levels. It can be shown that this optimization depends primarily on a small number of independent nondimensional parameters, which correspond to Fresnel numbers and physical scales and enter the optimization only as constraints. We show how these can be used to span the parameter space of possible optimized occulters; this data set can then be mined to determine occulter sizes for various mission scenarios and sets of engineering constraints.
Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.
Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue
2015-08-20
Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.
NASA Astrophysics Data System (ADS)
Spanò, P.; Tosh, I.; Chemla, F.
2010-07-01
OPTIMOS-EVE is a fiber-fed, high-multiplex, high-efficiency, large-spectral-coverage spectrograph for the EELT, covering the visible and near-infrared simultaneously. More than 200 seeing-limited objects will be observed at the same time over the full 7 arcmin field of view of the telescope, feeding the spectrograph and demanding very high multiplexing on the spectrograph side. The spectrograph consists of two identical units. Each unit will have two optimized channels to observe both visible and near-infrared wavelengths at the same time, covering from 0.37 to 1.7 micron. To maximize the scientific return, a large simultaneous spectral coverage per exposure was required, up to 1/3 of the central wavelength. Moreover, different spectral resolution modes, spanning from 5'000 to 30'000, were defined to match very different sky targets. Many different optical solutions were generated during the initial study phase in order to select the one that maximizes performance within the given constraints (mass, space, cost). Here we present the results of this study, with special attention to the baseline design. Efforts were made to keep the sizes of the optical components well within present state-of-the-art technologies. For example, large glass blank sizes were limited to ~35 cm maximum diameter. VPH gratings were selected as dispersers to improve efficiency, following their superblaze curve; this led to scanning gratings and cameras. The optical design is described, together with the expected performance.
Parallel Computer System for 3D Visualization Stereo on GPU
NASA Astrophysics Data System (ADS)
Al-Oraiqat, Anas M.; Zori, Sergii A.
2018-03-01
This paper proposes the organization of a parallel computer system based on the Graphics Processing Unit (GPU) for 3D stereo image synthesis. The development is based on the modified ray tracing method developed by the authors for fast search of tracing-ray intersections with scene objects. The system allows a significant increase in productivity for 3D stereo synthesis of photorealistic quality. A generalized procedure of 3D stereo image synthesis on the Graphics Processing Unit/Graphics Processing Clusters (GPU/GPC) is proposed. The efficiency of the proposed solutions in the GPU implementation is compared with single-threaded and multithreaded implementations on the CPU. The achieved average acceleration of the multi-thread implementation on the test GPU and CPU is about 7.5 and 1.6 times, respectively. Studying the influence of the size and configuration of the computational Compute Unified Device Architecture (CUDA) network on the computational speed shows the importance of their correct selection. The obtained experimental estimations can be significantly improved by new GPUs with a large number of processing cores and multiprocessors, as well as an optimized configuration of the computing CUDA network.
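The core operation being accelerated, finding a ray's nearest intersection with a scene object, can be sketched for a sphere as follows; plain Python is shown for clarity, and the authors' modified search method and its GPU mapping are not reproduced here.

    import math

    def ray_sphere(origin, direction, center, radius):
        """Distance t to the nearest hit, or None (direction unit-length)."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None                        # ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2.0       # nearer of the two roots
        return t if t > 0.0 else None

    print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0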
Sae-Leaw, Thanasak; Benjakul, Soottawat
2018-02-01
Lipase from the liver of seabass (Lates calcarifer), with a molecular weight of 60 kDa, was purified to homogeneity using ammonium sulfate precipitation and a series of chromatographies, including diethylaminoethyl sepharose (DEAE) and Sephadex G-75 size exclusion columns. The optimal pH and temperature were 8.0 and 50°C, respectively. Purified lipase had a Michaelis-Menten constant (Km) and catalytic constant (kcat) of 0.30 mM and 2.16 s⁻¹, respectively, when p-nitrophenyl palmitate (p-NPP) was used as the substrate. When seabass skin was treated with crude lipase from seabass liver at various levels (0.15 and 0.30 units/g dry skin) for 1-3 h at 30°C, the skin treated with lipase at 0.30 units/g dry skin for 3 h had the highest lipid removal (84.57%) with lower lipid distribution in the skin. Efficacy in defatting was higher than when isopropanol was used. Thus, lipase from the liver of seabass could be used to remove fat from fish skin. Copyright © 2017 Elsevier Ltd. All rights reserved.
Structure-activity relationships of polyphenols to prevent lipid oxidation in pelagic fish muscle.
Pazos, Manuel; Iglesias, Jacobo; Maestre, Rodrigo; Medina, Isabel
2010-10-27
The influence of polymerization (number of monomers) and galloylation (content of esterified gallates) of oligomeric catechins (proanthocyanidins) on their effectiveness in preventing lipid oxidation in pelagic fish muscle was evaluated. Non-galloylated oligomers of catechin with diverse mean polymerization (1.9-3.4 monomeric units) were extracted from pine (Pinus pinaster) bark. Homologous fractions with galloylation ranging from 0.25 to <1 gallate group per molecule were obtained from grape (Vitis vinifera) and witch hazel (Hamamelis virginiana). The results showed that proanthocyanidins of medium size (2-3 monomeric units) and low galloylation degree (0.15-0.25 gallate group/molecule) were the most effective at inhibiting lipid oxidation in pelagic fish muscle. These optimal structural characteristics of proanthocyanidins were similar to those recently reported in fish oil-in-water emulsions using phosphatidylcholine as emulsifier. This finding suggests that the antioxidant behavior of polyphenols in muscle-based foods can be mimicked in emulsions prepared with phospholipids as emulsifying agents. The present data give relevant information for achieving an optimum use of polyphenols in pelagic fish muscle.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) problem is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generation units to meet the required load demand such that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO); its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, comprising 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
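For context, the classical ED problem with quadratic costs can be solved for a toy case by the textbook lambda-iteration (bisection on the system incremental cost); this sketch illustrates the problem MVMOS addresses, not the MVMOS algorithm itself, and the unit data are made up.

    # Lambda-iteration for economic dispatch with quadratic costs
    # C_i(P) = a_i + b_i*P + c_i*P^2 and limits Pmin_i <= P_i <= Pmax_i.
    units = [  # (b, c, Pmin, Pmax) -- illustrative data
        (2.0, 0.010, 50, 300),
        (1.8, 0.015, 40, 250),
        (2.2, 0.008, 60, 350),
    ]
    demand = 600.0

    def dispatch(lam):
        # At incremental cost lam each unit outputs P = (lam - b)/(2c), clipped.
        return [min(max((lam - b) / (2 * c), lo), hi) for b, c, lo, hi in units]

    lo, hi = 0.0, 50.0
    for _ in range(60):                  # bisect on lambda
        lam = 0.5 * (lo + hi)
        if sum(dispatch(lam)) > demand:
            hi = lam
        else:
            lo = lam
    print([round(p, 1) for p in dispatch(lam)])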
Awaad, Aziz; Nakamura, Michihiro; Ishimura, Kazunori
2012-07-01
We investigated size-dependent uptake of fluorescent thiol-organosilica particles by Peyer's patches (PPs). We performed oral single-particle administrations (95, 130, 200, 340, 695 and 1050 nm) and simultaneous dual-particle administrations using 2 kinds of particles. Histological imaging and quantitative analysis revealed that particle uptake by the PP subepithelial dome was size dependent, with an optimal size range for higher uptake. Quantitative analysis of simultaneous dual-particle administration revealed that the percentage of fluorescence areas for 95, 130, 200, 340, 695 and 1050 nm with respect to the 110 nm area was 124.0, 89.1, 73.8, 20.2, 9.2 and 0.5%, respectively. Additionally, imaging using fluorescent thiol-organosilica particles detected 2 novel pathways through the mouse PP epithelium: the transcellular pathway and the paracellular pathway. The uptake of nanoparticles within an optimal size range and the 2 novel pathways could indicate a new approach for vaccine delivery and nanomedicine development. Studying various sizes of fluorescent organosilica particles and their uptake in Peyer's patches, this team of authors determined the optimal size range of administration. Two novel pathways through mouse Peyer's patch epithelium were detected, i.e., the transcellular pathway and the paracellular pathway. This observation may have important applications in future vaccine delivery and nano-drug delivery. Copyright © 2012 Elsevier Inc. All rights reserved.
In-Situ Resource Utilization Experiment for the Asteroid Redirect Crewed Mission
NASA Astrophysics Data System (ADS)
Elliott, J.; Fries, M.; Love, S.; Sellar, R. G.; Voecks, G.; Wilson, D.
2015-10-01
The Asteroid Redirect Crewed Mission (ARCM) represents a unique opportunity to perform in-situ testing of concepts that could lead to full-scale exploitation of asteroids for their valuable resources [1]. This paper describes a concept for an astronaut-operated "suitcase" experiment that would demonstrate asteroid volatile extraction using a solar-heated oven and integral cold trap in a configuration scalable to full-size asteroids. Conversion of liberated water into H2 and O2 products would also be demonstrated through an integral processing and storage unit. The plan also includes development of a local prospecting system consisting of a suit-mounted multi-spectral imager to aid the crew in choosing optimal samples, both for In-Situ Resource Utilization (ISRU) and for potential return to Earth.
A new prize system for drug innovation.
Gandjour, Afschin; Chernyak, Nadja
2011-10-01
We propose a new prize (reward) system for drug innovation which pays a price based on the value of health benefits accrued over time. Willingness to pay for a unit of health benefit is determined based on the cost-effectiveness ratio of palliative/nursing care. We solve the problem of limited information on the value of health benefits by mathematically relating reward size to the uncertainty of that information, including information on potential drug overuse. The proposed prize system offers optimal incentives to invest in research and development because it rewards the innovator for the social value of drug innovation. The proposal is envisaged as a non-voluntary alternative to the current patent system and reduces excessive marketing by innovators and generic drug producers. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Design Optimization and Analysis of a Composite Honeycomb Intertank
NASA Technical Reports Server (NTRS)
Finckenor, Jeff; Spurrier, Mike
1999-01-01
Intertanks, the structure between tanks of launch vehicles, are prime candidates for weight reduction of rockets. This paper discusses the optimization and detailed follow up analysis and testing of a 96 in. diameter, 77 in. tall intertank. The structure has composite face sheets with an aluminum honeycomb core. The ends taper to a thick built up laminate for a double lap bolted splice joint interface. It is made in 8 full length panels joined with bonded double lap joints. The nominal load is 4000 lb/in. Optimization is by Genetic Algorithm and minimizes weight by varying core thickness, number and orientation of acreage and buildup plies, and the size, number and spacing of bolts. A variety of design cases were run with populations up to 2000 and chromosomes as long as 150 bits. Constraints were buckling; face stresses (normal, shear, wrinkling and dimpling); bolt stress; and bolt hole stresses (bearing, net tension, wedge splitting, shear out and tension/shear out). Analysis is by a combination of elasticity solutions and empirical data. After optimization, a series of coupon tests were performed in conjunction with a rigorous analysis involving a variety of finite element models. This analysis and testing resulted in several small changes to the optimized design. The equation used for hole bearing strength was found to be inadequate, resulting in thicker ends. The core thickness increased 0.05", and potting compound was added in the taper to strengthen the facesheet bond. The intertank has undergone a 250,000 lb limit load test and been mated with a composite liquid hydrogen tank. The tank/intertank unit is being installed in a test stand where it will see 200 thermal/load cycles. Afterwards the intertank will be demated and loaded in compression to failure.
Design Optimization and Analysis of a Composite Honeycomb Intertank
NASA Technical Reports Server (NTRS)
Finckenor, Jeffrey; Spurrier, Mike
1998-01-01
Intertanks, the structure between tanks of launch vehicles, are prime candidates for weight reduction of rockets. This paper discusses the optimization and detailed analysis of a 96 in (2.44 m) diameter, 77 in (1.85 m) tall intertank. The structure has composite face sheets and an aluminum honeycomb core. The ends taper to a thick built-up laminate for a double lap bolted shear joint. It is made in 8 full length panels joined with bonded double lap joints. The nominal load is 4000 lb/in (7 x 10(exp 5) N/m). Optimization is by Genetic Algorithm and minimizes weight by varying core thickness, number and orientation of acreage and buildup plies, and the size, number and spacing of bolts. A variety of cases were run with populations up to 2000 and chromosomes as long as 150 bits. Constraints were buckling; face stresses (normal, shear, wrinkling and dimpling); bolt stress; and bolt hole stresses (bearing, net tension, wedge splitting, shear out and tension/shear out). Analysis is by a combination of theoretical solutions and empirical data. After optimization, a series of coupon tests were performed in conjunction with a rigorous analysis involving a variety of finite element models. The analysis and tests resulted in several small changes to the optimized design. The intertank has undergone a 250,000 lb (1.1 x 10(exp 6) N) limit load test and been mated with a composite liquid hydrogen tank. The tank/intertank unit is being installed in a test stand where it will see 200 thermal/load cycles. Afterwards the intertank will be demated and loaded in compression to failure.
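As background for the Genetic Algorithm sizing used in both reports above, here is a minimal, hedged GA sketch that minimizes a stand-in weight function under a penalized constraint; the objective, the "buckling" surrogate and every parameter are invented and bear no relation to the actual intertank models.

    import random

    def weight(x):                 # x = (core thickness, ply count)
        return 2.0 * x[0] + 0.5 * x[1]

    def fitness(x):                # penalize designs violating a surrogate
        return weight(x) + (0.0 if x[0] * x[1] >= 12.0 else 1000.0)

    def ga(pop_size=50, gens=100, seed=0):
        rnd = random.Random(seed)
        pop = [(rnd.uniform(0.1, 5.0), rnd.randint(1, 40)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness)
            elite = pop[: pop_size // 2]           # keep the best half
            children = []
            while len(children) < pop_size - len(elite):
                a, b = rnd.sample(elite, 2)        # crossover
                child = (a[0], b[1])
                if rnd.random() < 0.3:             # mutation
                    child = (child[0] * rnd.uniform(0.8, 1.2),
                             max(1, child[1] + rnd.choice((-1, 1))))
                children.append(child)
            pop = elite + children
        return min(pop, key=fitness)

    print(ga())   # lightest design satisfying the surrogate constraint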
ILP-based co-optimization of cut mask layout, dummy fill, and timing for sub-14nm BEOL technology
NASA Astrophysics Data System (ADS)
Han, Kwangsoo; Kahng, Andrew B.; Lee, Hyein; Wang, Lutong
2015-10-01
Self-aligned multiple patterning (SAMP), due to its low overlay error, has emerged as the leading option for 1D gridded back-end-of-line (BEOL) in sub-14nm nodes. To form actual routing patterns from a uniform "sea of wires", a cut mask is needed for line-end cutting or realization of space between routing segments. Constraints on cut shapes and minimum cut spacing result in end-of-line (EOL) extensions and non-functional (i.e. dummy fill) patterns; the resulting capacitance and timing changes must be consistent with signoff performance analyses and their impacts should be minimized. In this work, we address the co-optimization of cut mask layout, dummy fill, and design timing for sub-14nm BEOL design. Our central contribution is an optimizer based on integer linear programming (ILP) to minimize the timing impact due to EOL extensions, considering (i) minimum cut spacing arising in sub-14nm nodes; (ii) cut assignment to different cut masks (color assignment); and (iii) the eligibility to merge two unit-size cuts into a bigger cut. We also propose a heuristic approach to remove dummy fills after the ILP-based optimization by extending the usage of cut masks. Our heuristic can improve critical path performance under minimum metal density and mask density constraints. In our experiments, we study the impact of number of cut masks, minimum cut spacing and metal density under various constraints. Our studies of optimized cut mask solutions in these varying contexts give new insight into the tradeoff of performance and cost that is afforded by cut mask patterning technology options.
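To make the flavor of such a formulation concrete, here is a drastically reduced, hypothetical sketch of only the mask (color) assignment with pairwise spacing conflicts, written with the open-source PuLP package (assumed installed); the conflict pairs and costs are invented, and the authors' actual ILP, with EOL extensions, cut merging and timing terms, is far richer.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    cuts = range(5)
    conflicts = [(0, 1), (1, 2), (3, 4)]    # pairs too close for one mask
    cost = {0: 1, 1: 3, 2: 2, 3: 1, 4: 2}   # stand-in cost of using mask B

    prob = LpProblem("cut_mask_assignment", LpMinimize)
    x = {i: LpVariable(f"x_{i}", cat=LpBinary) for i in cuts}   # 1 -> mask B
    prob += lpSum(cost[i] * x[i] for i in cuts)                 # objective
    for i, j in conflicts:
        prob += x[i] + x[j] == 1             # conflicting cuts must differ
    prob.solve()
    print({i: int(x[i].value()) for i in cuts})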
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies, comprising noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure were studied using various test conditions, combining different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial for creating clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, and algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
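For readers unfamiliar with this procedure class, the following is a minimal sketch of a pattern search with an adaptive (halving) step size; a numeric objective stands in for the paired-comparison listening judgments used in the actual study.

    # Coordinate-pattern search with adaptive step size (illustrative only).
    def pattern_search(f, x0, step=1.0, min_step=1e-3, max_iter=200):
        x, fx = list(x0), f(x0)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):           # poll each parameter axis
                for d in (+step, -step):
                    trial = list(x)
                    trial[i] += d
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= 0.5                   # shrink when no move helps
                if step < min_step:
                    break
        return x, fx

    # Toy 3-parameter problem (noise reduction, enhancement, lift settings):
    f = lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2 + (p[2] - 0.5) ** 2
    print(pattern_search(f, [0.0, 0.0, 0.0]))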
Is patient size important in dose determination and optimization in cardiology?
NASA Astrophysics Data System (ADS)
Reay, J.; Chapple, C. L.; Kotre, C. J.
2003-12-01
Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricular and/or coronary angiography, single vessel stent insertion and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization.
Evaluation of PCC long-term durability using intermediate sized gravels to optimize mix gradations.
DOT National Transportation Integrated Search
2010-04-01
With the implementation of the 2000 Q-MC specification, an incentive is provided to produce an optimized gradation to improve placement characteristics. Also, specifications for slip-formed barrier rail have changed to require an optimized gradation....
Analysis of Optimal Jitter Buffer Size for VoIP QoS under WiMAX Power-Saving Mode
NASA Astrophysics Data System (ADS)
Kim, Hyungsuk; Kim, Taehyoun
VoIP service is expected to be one of the key applications of Mobile WiMAX, but the speech quality of VoIP service often suffers deterioration due to the fluctuating transmission delay called jitter. This is commonly ameliorated by a de-jitter buffer, and we aim to find the optimal size of the de-jitter buffer to achieve speech quality comparable to PSTN. We developed a new model of the packet drops at the de-jitter buffer and the end-to-end packet delay which takes account of the additional delay introduced by the WiMAX power-saving mode. Using our model, we analyzed the optimal size of the de-jitter buffer for various network parameters, and showed that the results obtained by analysis accord with simulation results.
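The underlying sizing logic can be sketched as follows: given sampled one-way delays, choose the smallest playout delay whose late-packet rate stays under a target. The exponential delay model and all numbers below are invented; the paper's analytic model of the power-saving delay is not reproduced.

    import random

    rnd = random.Random(1)
    delays_ms = [40 + rnd.expovariate(1 / 15) for _ in range(10_000)]

    def late_rate(playout_delay_ms):
        # Packets arriving after the playout deadline are dropped as late.
        return sum(d > playout_delay_ms for d in delays_ms) / len(delays_ms)

    target = 0.01    # allow at most 1% of packets to arrive too late
    buf = next(b for b in range(40, 200) if late_rate(b) <= target)
    print(f"buffer: {buf} ms, late rate: {late_rate(buf):.3%}")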
Tolstikhin, Valery; Saeidi, Shayan; Dolgaleva, Ksenia
2018-05-01
We report on the design optimization and tolerance analysis of a multistep lateral-taper spot-size converter based on indium phosphide (InP), performed using the Monte Carlo method. Being a natural fit to (and a key building block of) the regrowth-free taper-assisted vertical integration platform, such a spot-size converter enables efficient and displacement-tolerant fiber coupling to InP-based photonic integrated circuits at a wavelength of 1.31 μm. An exemplary four-step lateral-taper design featuring 0.35 dB coupling loss at optimal alignment of a standard single-mode fiber; ≥7 μm 1 dB displacement tolerance in any direction in a facet plane; and great stability against manufacturing variances is demonstrated.
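As an illustration of Monte Carlo displacement-tolerance analysis of fiber coupling, the sketch below draws random lateral offsets and scores coupling loss with the standard Gaussian mode-overlap approximation η = exp(−d²/w²); the mode radius and alignment spread are assumed values, not taken from the paper.

    import math, random

    w = 5.2        # mode-field radius of the fiber, um (assumed)
    sigma = 2.0    # 1-sigma lateral alignment error per axis, um (assumed)
    rnd = random.Random(0)

    losses_db = []
    for _ in range(100_000):
        dx, dy = rnd.gauss(0, sigma), rnd.gauss(0, sigma)
        eta = math.exp(-(dx * dx + dy * dy) / (w * w))   # mode overlap
        losses_db.append(-10 * math.log10(eta))

    losses_db.sort()
    print(f"median loss {losses_db[50_000]:.2f} dB, "
          f"95th percentile {losses_db[95_000]:.2f} dB")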
Zieminski, Stephen; Khandekar, Melin; Wang, Yi
2018-03-01
This study compared the dosimetric performance of (a) volumetric modulated arc therapy (VMAT) with standard optimization (STD) and (b) multi-criteria optimization (MCO) to (c) intensity modulated radiation therapy (IMRT) with MCO for hippocampal avoidance whole brain radiation therapy (HA-WBRT) in the RayStation treatment planning system (TPS). Ten HA-WBRT patients previously treated with MCO-IMRT or MCO-VMAT on an Elekta Infinity accelerator with Agility multileaf collimators (5-mm leaves) were re-planned for the other two modalities. All patients received 30 Gy in 15 fractions to the planning target volume (PTV), namely PTV30, expanded with a 2-mm margin from the whole brain excluding the hippocampus with margin. The patients all had metastatic lesions (up to 12) of variable sizes and proximity to the hippocampus, treated with an additional 7.5 Gy from a simultaneous integrated boost (SIB) to PTV37.5. The IMRT plans used eight to eleven non-coplanar fields, whereas the VMAT plans used two coplanar full arcs and a vertex half arc. The averaged target coverage, dose to organs-at-risk (OARs) and monitor units provided by the three modalities were compared, and a Wilcoxon signed-rank test was performed. MCO-VMAT provided statistically significant reduction of D100 of the hippocampus compared to STD-VMAT, and of Dmax of the cochleas compared to MCO-IMRT. With statistical significance, MCO-VMAT improved V30 of PTV30 by 14.2% and 4.8%, respectively, compared to MCO-IMRT and STD-VMAT. It also raised D95 of PTV37.5 by 0.4 Gy compared to both MCO-IMRT and STD-VMAT. Improved plan quality parameters such as a decrease in overall plan Dmax and total monitor units (MU) were also observed for MCO-VMAT. MCO-VMAT is found to be the optimal modality for HA-WBRT in terms of PTV coverage, OAR sparing and delivery efficiency, compared to MCO-IMRT or STD-VMAT. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
García-Calvo, Raúl; Guisado, J L; Diaz-Del-Rio, Fernando; Córdoba, Antonio; Jiménez-Morales, Francisco
2018-01-01
Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for that purpose is the determination of the temporal dynamics between known initial and ending network states, by using simple acting rules. The huge amount of rule combinations and the nonlinear inherent nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. As this is a computationally intensive problem that needs long runtimes in conventional architectures for realistic network sizes, it is fundamental to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes-master-slave, island, cellular, and hybrid models, and various individual selection methods (roulette, elitist)-is carried out for this problem. Several procedures that optimize the use of the GPU's resources are presented. We conclude that the implementation that produces better results (both from the performance and the genetic algorithm fitness perspectives) is simulating a few thousands of individuals grouped in a few islands using elitist selection. This model comprises 2 mighty factors for discovering the best solutions: finding good individuals in a short number of generations, and introducing genetic diversity via a relatively frequent and numerous migration. As a result, we have even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on GPU versus a sequential application on CPU is carried out. In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on medium class GPU over an equivalent sequential single-core implementation running on a recent Intel i7 CPU. This work can provide useful guidance to researchers in biology, medicine, or bioinformatics in how to take advantage of the parallelization on massively parallel devices and GPUs to apply novel metaheuristic algorithms powered by nature for real-world applications (like the method to solve the temporal dynamics of GRNs).
Harish, Varun; Raymond, Andrew P; Issler, Andrea C; Lajevardi, Sepehr S; Chang, Ling-Yun; Maitz, Peter K M; Kennedy, Peter
2015-02-01
The purpose of this study was to compare burn size estimation between referring centres and Burn Units in adult patients transferred to Burn Units in Sydney, Australia. A review of all adults transferred to Burn Units in Sydney, Australia between January 2009 and August 2013 was performed. The TBSA estimated by the referring institution was compared with the TBSA measured at the Burns Unit. There were 698 adults transferred to a Burns Unit. Equivalent TBSA estimation between the referring hospital and Burns Unit occurred in 30% of patients. Overestimation occurred at a ratio exceeding 3:1 with respect to underestimation, with the difference between the referring institutions and Burns Unit estimation being statistically significant (P<0.001). Significant overestimation occurs in the early transfer of burn-injured patients as well as in patients transferred more than 48h after the burn (P<0.005). Underestimation occurs with less frequency but rises with increasing time after the burn (P<0.005) and with increasing TBSA. Throughout the temporal spectrum of transferred patients, severe burns (≥20% TBSA) were found to have more satisfactory burn size estimations compared with less severe injuries (<20% TBSA; P<0.005). There are significant inaccuracies in burn size assessment by referring centres. The systemic tendency for overestimation occurs throughout the entire TBSA spectrum, and persists with increasing time after the burn. Underestimation occurs less frequently but rises with increasing time after the burn and with increasing TBSA. Severe burns (≥20% TBSA) are more accurately estimated by the referring hospital. The inaccuracies in burn size assessment have the potential to result in suboptimal treatment and inappropriate referral to specialised Burn Units. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.
2011-08-01
This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
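The first of the three steps, identifying the shortest-distance tree, is classical Dijkstra; a compact sketch on a made-up pipe network (node names and lengths are illustrative):

    import heapq

    network = {   # node: [(neighbor, pipe length), ...] -- illustrative
        "src": [("a", 100), ("b", 250)],
        "a":   [("src", 100), ("b", 120), ("c", 200)],
        "b":   [("src", 250), ("a", 120), ("c", 90)],
        "c":   [("a", 200), ("b", 90)],
    }

    def shortest_distance_tree(graph, source):
        dist, parent = {source: 0.0}, {source: None}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                     # stale queue entry
            for v, length in graph[u]:
                nd = d + length
                if nd < dist.get(v, float("inf")):
                    dist[v], parent[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return dist, parent                  # tree edges: (parent[v], v)

    print(shortest_distance_tree(network, "src"))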
Significance of the model considering mixed grain-size for inverse analysis of turbidites
NASA Astrophysics Data System (ADS)
Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.
2016-12-01
A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimation of the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, inverse analyses have been used to estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the unsteady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the simplex method that optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the starting point of the optimization deviates from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within quite a narrow range near the solution and often converges to a local optimum significantly different from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
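A hedged sketch of the multi-start simplex idea: SciPy's Nelder-Mead is run from several starting points against a placeholder forward model, keeping the best fit. The toy exponential "deposit profile" below stands in for the actual unsteady shallow-water turbidity-current model and is deliberately simple.

    import numpy as np
    from scipy.optimize import minimize

    def forward(params):             # params = (h, U, C); placeholder model
        h, U, C = params
        x = np.linspace(0, 1, 20)
        return h * C * np.exp(-x / max(U, 1e-6))

    observed = forward(np.array([2.0, 5.0, 0.0001]))   # reference "deposit"

    def misfit(params):
        return float(np.sum((forward(params) - observed) ** 2))

    starts = [(1.0, 2.0, 0.001), (5.0, 8.0, 0.00001), (0.5, 1.0, 0.0005)]
    best = min((minimize(misfit, s, method="Nelder-Mead") for s in starts),
               key=lambda r: r.fun)
    print(best.x, best.fun)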
ERIC Educational Resources Information Center
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
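A standard result for this setting, stated here under the simplifying assumption of equal outcome variances, is that the power-optimal allocation ratio is the square root of the inverted cost ratio, n_T/n_C = sqrt(c_C/c_T); a one-line check with made-up unit costs:

    import math

    # Optimal treatment:control allocation under unequal per-unit costs,
    # assuming equal outcome variances (illustrative costs).
    c_treatment, c_control = 400.0, 100.0
    ratio = math.sqrt(c_control / c_treatment)
    print(f"n_T : n_C = {ratio:.2f} : 1")     # -> 0.50 : 1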
NASA Astrophysics Data System (ADS)
Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik
2015-05-01
In this article, an energy-efficiency mechanism for next-generation passive optical networks is investigated through heuristic particle swarm optimization. The next-generation networks considered combine 10-gigabit Ethernet, wavelength division multiplexing and optical code division multiplexing over a legacy 10-gigabit Ethernet passive optical network, with the advantage of using only one en/decoder pair of the optical code division multiplexing technology, thus eliminating the en/decoder at each optical network unit. The proposed joint mechanism is based on the sleep-mode power-saving scheme for a 10-gigabit Ethernet passive optical network, combined with a power control procedure that adjusts the transmitted power of the active optical network units while maximizing the overall network energy efficiency. The particle swarm optimization based power control algorithm establishes the optimal transmitted power of each optical network unit according to the network's pre-defined quality of service requirements. The objective is to control the power consumption of each optical network unit according to the traffic demand by adjusting its transmitter power, in an attempt to maximize the number of transmitted bits with minimum energy consumption and thereby achieve maximal system energy efficiency. Numerical results reveal that it is possible to save 75% of energy consumption with the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism, compared to 55% energy savings when just a sleep-mode-based mechanism is deployed.
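A generic PSO kernel of the kind used here can be sketched as follows; the cost function is a stand-in (total power plus a penalty when a unit's power falls under a QoS floor), and none of the network modeling from the article is reproduced.

    import random

    def cost(p):   # stand-in: total power plus a QoS-violation penalty
        return sum(x + (100.0 if x < 0.2 else 0.0) for x in p)

    def pso(dim=4, n=30, iters=200, lo=0.0, hi=1.0, seed=0):
        rnd = random.Random(seed)
        pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]                 # personal bests
        gbest = min(pos, key=cost)[:]               # global best
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                                 + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                if cost(pos[i]) < cost(pbest[i]):
                    pbest[i] = pos[i][:]
                    if cost(pbest[i]) < cost(gbest):
                        gbest = pbest[i][:]
        return gbest, cost(gbest)

    print(pso())   # per-unit powers settle near the 0.2 QoS floor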
Pneumatic System for Concentration of Micrometer-Size Lunar Soil
NASA Technical Reports Server (NTRS)
McKay, David; Cooper, Bonnie
2012-01-01
A report describes a size-sorting method to separate and concentrate micrometer- size dust from a broad size range of particles without using sieves, fluids, or other processes that may modify the composition or the surface properties of the dust. The system consists of four processing units connected in series by tubing. Samples of dry particulates such as lunar soil are introduced into the first unit, a fluidized bed. The flow of introduced nitrogen fluidizes the particulates and preferentially moves the finer grain sizes on to the next unit, a flat plate impactor, followed by a cyclone separator, followed by a Nuclepore polycarbonate filter to collect the dust. By varying the gas flow rate and the sizes of various orifices in the system, the size of the final and intermediate particles can be varied to provide the desired products. The dust can be collected from the filter. In addition, electron microscope grids can be placed on the Nuclepore filter for direct sampling followed by electron microscope characterization of the dust without further handling.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-01-01
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062
Factors affecting plant growth in membrane nutrient delivery
NASA Technical Reports Server (NTRS)
Dreschel, T. W.; Wheeler, R. M.; Sager, J. C.; Knott, W. M.
1990-01-01
The development of the tubular membrane plant growth unit for the delivery of water and nutrients to roots in microgravity has recently focused on measuring the effects of changes in physical variables controlling solution availability to the plants. Significant effects of membrane pore size and the negative pressure used to contain the solution were demonstrated. Generally, wheat grew better in units with a larger pore size but equal negative pressure and in units with the same pore size but less negative pressure. Lettuce also exhibited better plant growth at less negative pressure.
Influence of bubble size and thermal dissipation on compressive wave attenuation in liquid foams
NASA Astrophysics Data System (ADS)
Monloubou, M.; Saint-Jalmes, A.; Dollet, B.; Cantat, I.
2015-11-01
Acoustic or blast wave absorption by liquid foams is especially efficient, and optimizing bubble size and liquid fraction is an important challenge in this context. A resonant behavior of foams has recently been observed, but the main local dissipative process has remained unknown. In this paper, we demonstrate the thermal origin of the dissipation, with an optimal bubble size close to the thermal boundary layer thickness. Using a shock tube, we produce typical pressure variations at time scales of the order of a millisecond, which propagate in the foam in linear and slightly nonlinear regimes.
Multi-Positioning Mathematics Class Size: Teachers' Views
ERIC Educational Resources Information Center
Handal, Boris; Watson, Kevin; Maher, Marguerite
2015-01-01
This paper explores mathematics teachers' perceptions about class size and the impact class size has on teaching and learning in secondary mathematics classrooms. It seeks to understand teachers' views about optimal class sizes and their thoughts about the education variables that influence these views. The paper draws on questionnaire responses…
Optimization of scaffold design for bone tissue engineering: A computational and experimental study.
Dias, Marta R; Guedes, José M; Flanagan, Colleen L; Hollister, Scott J; Fernandes, Paulo R
2014-04-01
In bone tissue engineering, the scaffold has not only to allow the diffusion of cells, nutrients and oxygen but also provide adequate mechanical support. One way to ensure the scaffold has the right properties is to use computational tools to design such a scaffold coupled with additive manufacturing to build the scaffolds to the resulting optimized design specifications. In this study a topology optimization algorithm is proposed as a technique to design scaffolds that meet specific requirements for mass transport and mechanical load bearing. Several micro-structures obtained computationally are presented. Designed scaffolds were then built using selective laser sintering and the actual features of the fabricated scaffolds were measured and compared to the designed values. It was possible to obtain scaffolds with an internal geometry that reasonably matched the computational design (within 14% of porosity target, 40% for strut size and 55% for throat size in the building direction and 15% for strut size and 17% for throat size perpendicular to the building direction). These results support the use of these kind of computational algorithms to design optimized scaffolds with specific target properties and confirm the value of these techniques for bone tissue engineering. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana
2011-01-01
Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
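The optimal information size is computed like a conventional sample-size calculation for a single adequately powered trial; a hedged sketch for binary outcomes using the normal-approximation formula for two proportions (all inputs illustrative):

    from statistics import NormalDist

    def optimal_information_size(p_control, rrr, alpha=0.05, power=0.90):
        p_treat = p_control * (1 - rrr)            # risk under treatment
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
        n_per_group = (z_a + z_b) ** 2 * var / (p_control - p_treat) ** 2
        return 2 * n_per_group                     # total patients required

    # e.g., control risk 20%, RRR 10%, two-sided alpha 5%, power 90%:
    print(round(optimal_information_size(0.20, 0.10)))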
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter
2016-07-26
This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network, based on the size and placement of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. Using reactive power control alone to regulate voltage is not always an optimal solution, as R/X is large in distribution systems. In this paper the minimum size and the best placement of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI), based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
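Recommendation (3) above allocates samples across strata in proportion to stratum size and variability; under classical Neyman allocation, n_h ∝ N_h·S_h. A tiny sketch with made-up soil-moisture strata (rounding may shift the total by a sample or two):

    # Neyman allocation: n_h proportional to N_h * S_h, where N_h is the
    # stratum size and S_h the stratum standard deviation (values made up).
    strata = {"dry": (120, 2.0), "moist": (60, 5.5), "wet": (20, 8.0)}
    n_total = 30

    weights = {h: N * S for h, (N, S) in strata.items()}
    total_w = sum(weights.values())
    print({h: round(n_total * w / total_w) for h, w in weights.items()})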
Stochastic Optimization for Unit Commitment-A Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Qipeng P.; Wang, Jianhui; Liu, Andrew L.
2015-07-01
Optimization models have been widely used in the power industry to aid the decision-making process of scheduling and dispatching electric power generation resources, a process known as unit commitment (UC). Since UC's birth, there have been two major waves of revolution in UC research and real-life practice. The first wave made mixed integer programming stand out from the early solution and modeling approaches for deterministic UC, such as priority lists, dynamic programming, and Lagrangian relaxation. With the high penetration of renewable energy, increasing deregulation of the electricity industry, and growing demands on system reliability, the next wave is focused on transitioning from traditional deterministic approaches to stochastic optimization for unit commitment. Since the literature has grown rapidly in the past several years, this paper reviews the works that have contributed to the modeling and computational aspects of stochastic optimization (SO) based UC. Relevant lines of future research are also discussed to help transform research advances into real-world applications.
A Decision-making Model for a Two-stage Production-delivery System in SCM Environment
NASA Astrophysics Data System (ADS)
Feng, Ding-Zhong; Yamashiro, Mitsuo
A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and the buyer (of finished products). Then an optimal solution to the problem is derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transportations for semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw-materials ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computation approach for operational situations is proposed to obtain integer approximate solutions. Finally, we give some numerical examples.
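To illustrate the flavor of such a joint lot-size/shipment-count optimization, here is a hedged toy: a lot of n·q units is produced per setup and delivered in n equal shipments of q. The cost structure, its closed-form minimizer and all numbers are invented for illustration and are not the paper's model.

    import math

    #   TC(n, q) = (D/(n*q))*A + (D/q)*F + (q/2)*(hb + hv*(n - 1))
    # D demand/yr, A setup cost, F cost per delivery, hb/hv holding costs
    # at the buyer/supplier. All values are made up.
    D, A, F, hb, hv = 12_000, 400.0, 25.0, 1.2, 0.8

    def best_q(n):    # closed-form minimizer of TC for a fixed n
        return math.sqrt(2 * D * (A / n + F) / (hb + hv * (n - 1)))

    def tc(n, q):
        return D * A / (n * q) + D * F / q + q * (hb + hv * (n - 1)) / 2

    n_star = min(range(1, 21), key=lambda n: tc(n, best_q(n)))
    print(n_star, round(best_q(n_star), 1), round(tc(n_star, best_q(n_star)), 2))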
NASA Technical Reports Server (NTRS)
Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.
1976-01-01
Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
Cryogenic Tank Structure Sizing With Structural Optimization Method
NASA Technical Reports Server (NTRS)
Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.
2001-01-01
Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.
NASA Astrophysics Data System (ADS)
Jaggi, Chandra K.; Khanna, Aditi; Kishore, Aakanksha
2016-03-01
In order to meet the challenge of maintaining good quality when the screening process is imperfect, a rework process becomes a rescue that compensates for the imperfections present in the production system. The proposed model explores the real-life situation with a more practical approach by incorporating the concept of imperfect rework, since irreparable defects may remain even in reworked items, an unavoidable problem for the firm. Hence, a production inventory model is formulated here to study the combined effect of imperfect quality items, a faulty inspection process and an imperfect rework process on the optimal production quantity and optimal backorder level. An analytical method is employed to maximize the expected total profit per unit time. Moreover, the results of several previous research articles, namely Chiu et al (2006), Chiu et al (2005), Salameh and Hayek (2001), and the classical EPQ with shortages, are deduced as special cases. To demonstrate the applicability of the model, and to observe the effects of key parameters on the optimal replenishment policy, a numerical example along with a comprehensive sensitivity analysis is presented. The model is pertinent to many manufacturing industries, such as textiles, electronics, furniture, footwear and plastics. In summary, a production lot size model has been explored for defective items with inspection errors and an imperfect rework process.
Optomechanical study and optimization of cantilever plate dynamics
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1995-06-01
Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes located at arbitrary positions on the plate are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques in order to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show that good agreement between theory and test is obtained. The comparisons also show that the combined, or hybrid, use of experimental and computational techniques complements each other and proves to be a very efficient tool for performing optimization studies of mechanical components.
Multi-GPU implementation of a VMAT treatment plan optimization algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to use this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on the CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on the CPU or a single GPU due to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace-step scheme is adopted to solve the MP. A head and neck (H and N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. Results: The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. By contrast, to obtain clinically comparable or acceptable plans for all six VMAT cases tested in this paper, the optimization time needed in a commercial CPU-based treatment planning system was on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle large-scale VMAT optimization problems efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
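For readers unfamiliar with the Barzilai-Borwein scheme named above, the following is a minimal sketch of how a projected BB gradient method can solve a master-problem-like nonnegative least-squares step. The quadratic objective, the random data, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bb_minimize(grad, project, x0, iters=200):
    """Projected gradient descent with Barzilai-Borwein (BB1) step sizes."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = project(x_prev - 1e-3 * g_prev)   # small fixed first step
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if abs(denom) > 1e-12 else 1e-3  # BB1 step
        x_prev, g_prev = x, g
        x = project(x - alpha * g)
    return x

# Toy stand-in for the master problem: minimize 0.5*||A w - d||^2 over
# nonnegative aperture weights w (A and d are random, illustrative data).
rng = np.random.default_rng(0)
A, d = rng.random((50, 8)), rng.random(50)
w = bb_minimize(grad=lambda w: A.T @ (A @ w - d),
                project=lambda w: np.maximum(w, 0.0),
                x0=np.zeros(8))
print("objective:", 0.5 * np.sum((A @ w - d) ** 2))
```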
Hooda, Aashima; Nanda, Arun; Jain, Manish; Kumar, Vikash; Rathee, Permender
2012-12-01
The current study involves the development of a multiunit chitosan-based floating system containing Ranitidine HCl, prepared by the ionotropic gelation method for gastroretentive delivery, and the optimization of its drug entrapment and ex vivo bioadhesion. Chitosan, being cationic, non-toxic, biocompatible, biodegradable and bioadhesive, is frequently used as a material for drug delivery systems; it can transport a drug to an acidic environment, where it enhances the transport of polar drugs across epithelial surfaces. The effects of process variables such as the drug:polymer ratio, sodium tripolyphosphate concentration and stirring speed on physicochemical properties such as drug entrapment efficiency, particle size and bioadhesion were optimized using a central composite design and analyzed using response surface methodology. The observed responses coincided well with the values predicted by the optimization technique. The optimized microspheres showed a drug entrapment efficiency of 74.73%, a particle size of 707.26 μm and bioadhesion of 71.68% in simulated gastric fluid (pH 1.2) after 8 h, with a floating lag time of 40 s. The average size of the dried microspheres ranged from 608.24 to 720.80 μm. The drug entrapment efficiency of the microspheres ranged from 41.67% to 87.58% and bioadhesion ranged from 62% to 86%. An accelerated stability study was performed on the optimized formulation as per ICH guidelines and no significant change in drug content was found on storage. Copyright © 2012 Elsevier B.V. All rights reserved.
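As a companion to the abstract above, here is a small sketch of a face-centered central composite design with a quadratic response-surface fit. The two coded factors and all response values are invented for illustration and are not the study's data.

```python
import numpy as np
from itertools import product

# Hypothetical coded factors: x1 = drug:polymer ratio, x2 = tripolyphosphate conc.
# Face-centered central composite design (alpha = 1): 4 factorial points,
# 4 axial points, 3 center replicates.
factorial = np.array(list(product([-1.0, 1.0], repeat=2)))
axial = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
center = np.zeros((3, 2))
X = np.vstack([factorial, axial, center])

# Made-up entrapment-efficiency responses (%) for the 11 runs.
y = np.array([52, 70, 60, 83, 55, 78, 58, 75, 74, 73, 75], dtype=float)

# Quadratic RSM model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
M = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print("fitted response-surface coefficients:", np.round(coef, 2))
```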
The choice of sample size: a mixed Bayesian/frequentist approach.
Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John
2009-04-01
Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
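The net-benefit trade-off described above can be illustrated with a toy Monte Carlo computation: the expected benefit of licensing (granted only if a frequentist test succeeds) minus the trial cost, maximized over n. Every prior, cost, and threshold below is an invented assumption, not a figure from the paper.

```python
import numpy as np
from scipy import stats

def expected_net_benefit(n, prior_mean=0.3, prior_sd=0.2, sigma=1.0,
                         benefit_scale=1e5, cost_per_subject=50.0,
                         alpha=0.025, draws=20000):
    """Monte Carlo expected net benefit of a trial of size n.

    The regulator licenses the treatment only if a one-sided frequentist
    z-test rejects H0: delta <= 0 at level alpha; the sponsor's benefit is
    proportional to the true effect delta.  All numbers are illustrative.
    """
    rng = np.random.default_rng(1)
    delta = rng.normal(prior_mean, prior_sd, draws)   # prior draws of effect
    xbar = rng.normal(delta, sigma / np.sqrt(n))      # trial estimate
    licensed = xbar * np.sqrt(n) / sigma > stats.norm.ppf(1 - alpha)
    gain = np.where(licensed, benefit_scale * delta, 0.0)
    return gain.mean() - cost_per_subject * n

sizes = np.arange(10, 1000, 10)
best = max(sizes, key=expected_net_benefit)
print("sample size maximizing expected net benefit:", best)
```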
NASA Astrophysics Data System (ADS)
Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui
2018-05-01
A focused plenoptic camera can effectively trade angular sampling for spatial resolution to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. A method to obtain a suitable patch size using a spatial frequency response measurement is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that images rendered using the patch size obtained from the frequency response measurement are in accordance with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.
High dependency units in the UK: variable size, variable character, few in number.
Thompson, F. J.; Singer, M.
1995-01-01
An exploratory descriptive survey was conducted to determine the size and character of high dependency units (HDUs) in the UK. A telephone survey was conducted and a subsequent postal questionnaire was sent to the 39 general HDUs in the UK identified by a recent survey from the Royal College of Anaesthetists; replies were received from 28. Most HDUs (82%, n = 23) were geographically distinct from the intensive care unit and varied in size from three to 13 beds, although only 64% (n = 18) reported that all beds were currently open. Nurse:patient ratios were at least 1:3. Fifty per cent of units had one or more designated consultants in charge, although only 11% (n = 3) had specifically designated consultant sessions. Junior medical cover was provided mainly by the on-call speciality team. Twenty units acted as a step-down facility for discharged intensive care unit patients and 21 offered a step-up facility for patients from general wards. Provision of facilities and levels of monitoring varied between these units. Few HDUs exist in the UK and they are variable in size and in the facilities and monitoring procedures which they provide. Future studies are urgently required to determine the cost-effectiveness and outcome benefit of this intermediate care facility. PMID:7784281
Nakayama, Mariko; Kinoshita, Sachiko; Verdonschot, Rinus G.
2016-01-01
Recent research has revealed that the way phonology is constructed during word production differs across languages. Dutch and English native speakers are thought to incrementally insert phonemes into a metrical frame, whereas Mandarin Chinese speakers use syllables and Japanese speakers use a unit called the mora (often a CV cluster such as “ka” or “ki”). The present study is concerned with the question of how bilinguals construct phonology in their L2 when the phonological unit size differs from the unit in their L1. Japanese–English bilinguals of varying proficiency read aloud English words preceded by masked primes that overlapped in just the onset (e.g., bark-BENCH) or the onset plus vowel corresponding to the mora-sized unit (e.g., bell-BENCH). Low-proficiency Japanese–English bilinguals showed CV priming but did not show onset priming, indicating that they use their L1 phonological unit when reading L2 English words. In contrast, high-proficiency Japanese–English bilinguals showed significant onset priming. The size of the onset priming effect was correlated with the length of time spent in English-speaking countries, which suggests that extensive exposure to L2 phonology may play a key role in the emergence of a language-specific phonological unit in L2 word production. PMID:26941669
The relationship of motor unit size, firing rate and force.
Conwit, R A; Stashuk, D; Tracy, B; McHugh, M; Brown, W F; Metter, E J
1999-07-01
Using a clinical electromyographic (EMG) protocol, motor units were sampled from the quadriceps femoris during isometric contractions at fixed force levels to examine how average motor unit size and firing rate relate to force generation. Mean firing rates (mFRs) and sizes (mean surface-detected motor unit action potential, or mS-MUAP, area) of samples of active motor units were assessed at various force levels in 79 subjects. mS-MUAP area increased linearly with force generation, while the mFR remained relatively constant up to 30% of maximal force and increased appreciably only at higher force levels. A relationship was found between muscle force and mS-MUAP area (r² = 0.67), mFR (r² = 0.38), and the product of mS-MUAP area and mFR (mS-MUAP × mFR) (r² = 0.70). The results support the hypothesis that motor units are recruited in an orderly manner during forceful contractions, and that in large muscles mFRs increase appreciably only at higher levels of contraction (>30% MVC). mS-MUAP and mFR can be assessed using clinical EMG techniques, and they may provide a physiological basis for analyzing the role of motor units during muscle force generation.
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Scotti, S. J.
1989-01-01
The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. The design process begins with a design problem, such as the classic example of the two-bar truss designed for minimum weight. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. For the two-bar truss, the total truss weight is the objective function, the tube diameter and truss height are design variables, and stress and Euler buckling are treated as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
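The two-bar truss example described above can be posed directly as a small nonlinear program. A hedged sketch follows, using scipy in place of CONMIN/ADS/NPSOL; the load, material, and geometry values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Fixed data (illustrative values, not from the report).
P = 33e3             # apex load, N
B = 0.75             # half-span, m
t = 0.003            # tube wall thickness, m
E = 70e9             # Young's modulus, Pa
rho = 2700.0         # density, kg/m^3
sigma_allow = 100e6  # allowable stress, Pa

def weight(x):
    d, H = x
    L = np.hypot(B, H)
    return 2 * rho * (np.pi * d * t) * L        # objective: truss weight, kg

def constraints(x):
    d, H = x
    L = np.hypot(B, H)
    F = P * L / (2 * H)                         # member force (symmetric truss)
    A = np.pi * d * t                           # thin-tube cross-section area
    I = np.pi * d ** 3 * t / 8                  # thin-tube moment of inertia
    F_buckle = np.pi ** 2 * E * I / L ** 2      # Euler buckling load
    return [sigma_allow - F / A,                # stress constraint >= 0
            F_buckle - F]                       # buckling constraint >= 0

res = minimize(weight, x0=[0.05, 1.0],          # design variables: d, H
               constraints={"type": "ineq", "fun": constraints},
               bounds=[(0.01, 0.5), (0.1, 3.0)])
print("optimal diameter, height:", res.x, "weight:", res.fun)
```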
Combining analysis with optimization at Langley Research Center. An evolutionary process
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1982-01-01
The evolutionary process of combining analysis and optimization codes was traced with a view toward providing insight into the long-term goal of developing the methodology for an integrated, multidisciplinary software system for the concurrent analysis and optimization of aerospace structures. It was traced along the lines of strength sizing, concurrent strength and flutter sizing, and general optimization to define a near-term goal for combining analysis and optimization codes. Development of a modular software system combining general-purpose, state-of-the-art, production-level analysis computer programs for structures, aerodynamics, and aeroelasticity with a state-of-the-art optimization program is required. Incorporating a modular and flexible structural optimization software system into a state-of-the-art finite element analysis computer program facilitates this effort. The resulting software system is controlled with a special-purpose language, communicates with a data management system, and is easily modified for adding new programs and capabilities. A 337-degree-of-freedom finite element model is used in verifying the accuracy of this system.
Microeconomic principles explain an optimal genome size in bacteria.
Ranea, Juan A G; Grant, Alastair; Thornton, Janet M; Orengo, Christine A
2005-01-01
Bacteria can clearly enhance their survival by expanding their genetic repertoire. However, the tight packing of the bacterial genome and the fact that the most evolved species do not necessarily have the biggest genomes suggest there are other evolutionary factors limiting genome expansion. To clarify these restrictions on size, we studied those protein families contributing most significantly to bacterial-genome complexity. We found that all bacteria apply the same basic and ancestral 'molecular technology' to optimize their reproductive efficiency. The same microeconomic principles that define the optimum size of a factory can also explain the existence of a statistical optimum in bacterial genome size. This optimum is reached when the bacterial genome obtains the maximum metabolic complexity (revenue) for a minimal number of regulatory genes (logistic cost).
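The factory analogy can be made concrete with a toy marginal analysis: if metabolic "revenue" grows sublinearly with gene count while regulatory "cost" grows superlinearly, net benefit peaks at a finite genome size. The functional forms and constants below are invented for illustration, not fitted to the paper's data.

```python
import numpy as np

n = np.arange(500, 12000, 50)     # metabolic gene count (illustrative grid)
revenue = 120.0 * np.sqrt(n)      # diminishing metabolic returns (assumed)
cost = 0.008 * n ** 1.5           # regulatory overhead grows faster (assumed)
net = revenue - cost
# Marginal revenue equals marginal cost at the interior optimum (~5000 here).
print("toy optimal genome size:", n[net.argmax()], "genes")
```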
Piasecki, M; Ireland, A; Piasecki, J; Stashuk, D W; Swiecicka, A; Rutter, M K; Jones, D A; McPhee, J S
2018-05-01
The age-related loss of muscle mass is related to the loss of innervating motor neurons and the denervation of muscle fibres. Not all denervated muscle fibres are degraded; some may be reinnervated by an adjacent surviving neuron, which expands the innervating motor unit in proportion to the number of fibres rescued. Enlarged motor units have larger motor unit potentials when measured using electrophysiological techniques. We recorded much larger motor unit potentials in relatively healthy older men compared to young men, but the older men with the smallest muscles (sarcopenia) had smaller motor unit potentials than healthy older men. These findings suggest that healthy older men reinnervate large numbers of muscle fibres to compensate for declining motor neuron numbers, but that a failure to do so contributes to muscle loss in sarcopenic men. Sarcopenia results from the progressive loss of skeletal muscle mass and reduced function in older age. It is likely to be associated with the well-documented reduction of motor unit numbers innervating limb muscles and the increase in size of surviving motor units via reinnervation of denervated fibres. However, no evidence exists to confirm the extent of motor unit remodelling in sarcopenic individuals. The aim of the present study was to compare motor unit size and number between young (n = 48), non-sarcopenic old (n = 13), pre-sarcopenic (n = 53) and sarcopenic (n = 29) men. Motor unit potentials (MUPs) were isolated from intramuscular and surface EMG recordings. Motor unit numbers were reduced in all groups of old compared with young men (all P < 0.001). MUPs were larger in non-sarcopenic and pre-sarcopenic men compared with young men (P = 0.039 and 0.001, respectively), but not in the vastus lateralis of the sarcopenic old (P = 0.485). The results suggest that extensive motor unit remodelling occurs relatively early during ageing, exceeds the loss of muscle mass and precedes sarcopenia. Reinnervation of denervated muscle fibres probably expands the motor unit size in the non-sarcopenic and pre-sarcopenic old, but not in the sarcopenic old. These findings suggest that a failure to expand motor unit size distinguishes sarcopenic from pre-sarcopenic muscles. © 2018 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Elimination of Bimodal Size in InAs/GaAs Quantum Dots for Preparation of 1.3-μm Quantum Dot Lasers
NASA Astrophysics Data System (ADS)
Su, Xiang-Bin; Ding, Ying; Ma, Ben; Zhang, Ke-Lu; Chen, Ze-Sheng; Li, Jing-Lun; Cui, Xiao-Ran; Xu, Ying-Qiang; Ni, Hai-Qiao; Niu, Zhi-Chuan
2018-02-01
The device characteristics of semiconductor quantum dot lasers have improved with progress in active layer structures. Self-assembled InAs quantum dots grown on GaAs have been intensively studied in order to achieve quantum dot lasers with superior device performance. In the process of growing high-density InAs/GaAs quantum dots, a bimodal size distribution occurs due to the large lattice mismatch and other factors. Here, the bimodal size distribution in the InAs/GaAs quantum dot system was eliminated by high-temperature in situ annealing, with the annealing temperature taken as the key optimization parameter; an optimal annealing temperature of 680 °C was obtained. In this process, the quantum dot growth temperature, InAs deposition, and arsenic (As) pressure were also optimized to improve quantum dot quality and emission wavelength. A 1.3-μm high-performance F-P quantum dot laser with a threshold current density of 110 A/cm2 was demonstrated.
NASA Astrophysics Data System (ADS)
Brzęczek, Mateusz; Bartela, Łukasz
2013-12-01
This paper presents the parameters of a reference oxy-combustion block operating with supercritical steam parameters, equipped with an air separation unit and a carbon dioxide capture and compression installation. The possibility of recovering heat in the analyzed power plant is discussed. The decision variables and the thermodynamic objective functions for the optimization algorithm were identified. The principles of operation of the genetic algorithm and the methodology of the calculations are presented. A sensitivity analysis was performed for the best solutions to determine the effects of the selected variables on the power and efficiency of the unit. The genetic-algorithm optimization of heat recovery from the air separation unit, the flue gas conditioning installation, and the CO2 capture and compression installation was designed to replace the low-pressure section of the regenerative water heaters of the steam cycle in the analyzed unit. The result was an increase in the power and efficiency of the entire power plant.
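The optimization layer described above can be sketched as a plain real-coded genetic algorithm. In the sketch below the supercritical steam-cycle model is replaced by an invented surrogate fitness function, and the population size, rates, and bounds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    # Stand-in for the thermodynamic plant model: efficiency as a function
    # of normalized heat-recovery decision variables (invented surrogate).
    return -np.sum((x - 0.6) ** 2)

def genetic_algorithm(dim=5, pop_size=40, gens=100, p_mut=0.1):
    pop = rng.random((pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # One-point crossover between consecutive parent pairs.
        children = parents.copy()
        for i, c in enumerate(rng.integers(1, dim, pop_size // 2)):
            children[2 * i, c:] = parents[2 * i + 1, c:]
            children[2 * i + 1, c:] = parents[2 * i, c:]
        # Gaussian mutation, clipped to the variable bounds [0, 1].
        mask = rng.random(children.shape) < p_mut
        children[mask] += rng.normal(0, 0.1, mask.sum())
        pop = np.clip(children, 0.0, 1.0)
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("best decision variables:", genetic_algorithm().round(3))
```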
Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.
Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng
2013-01-01
Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-size fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory, and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
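For context, here is a compact sketch of the baseline MUR iteration for Euclidean-loss GNMF, the solver that L-FGD accelerates by replacing the fixed rescaled step with an approximated optimal step size. The toy data and affinity construction are invented.

```python
import numpy as np

def gnmf_mur(X, S, r, lam=0.1, iters=200, eps=1e-9):
    """Multiplicative update rule (MUR) for graph-regularized NMF.

    Minimizes ||X - WH||_F^2 + lam * tr(H L H^T), where L = D - S is the
    graph Laplacian of the affinity matrix S.
    """
    m, n = X.shape
    rng = np.random.default_rng(4)
    W, H = rng.random((m, r)), rng.random((r, n))
    D = np.diag(S.sum(axis=1))
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ S) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

# Toy usage: 100 samples in 20 dimensions with a sparse random affinity.
X = np.abs(np.random.default_rng(5).random((20, 100)))
S = (np.random.default_rng(6).random((100, 100)) > 0.95).astype(float)
S = np.maximum(S, S.T)            # symmetrize the affinity matrix
W, H = gnmf_mur(X, S, r=4)
print("reconstruction error:", np.linalg.norm(X - W @ H))
```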
Mutation Bias Favors Protein Folding Stability in the Evolution of Small Populations
Porto, Markus; Bastolla, Ugo
2010-01-01
Mutation bias in prokaryotes varies from extreme adenine and thymine (AT) bias in obligate endosymbiotic or parasitic bacteria to extreme guanine and cytosine (GC) bias, for instance in actinobacteria. GC mutation bias deeply influences the folding stability of proteins, making proteins on average less hydrophobic and therefore less stable with respect to unfolding but also less susceptible to misfolding and aggregation. We study a model where proteins evolve subject to selection for folding stability under given mutation bias, population size, and neutrality. We find a non-neutral regime where, for any given population size, there is an optimal mutation bias that maximizes fitness. Interestingly, this optimal GC usage is small for small populations, large for intermediate populations, and around 50% for large populations. This result is robust with respect to the definition of the fitness function and to the protein structures studied. Our model suggests that small populations evolving with small GC usage eventually accumulate a significant selective advantage over populations evolving without this bias. This provides a possible explanation for the observation that most species adopting obligate intracellular lifestyles, with a consequent reduction of effective population size, shifted their mutation spectrum towards AT. The model also predicts that large GC usage is optimal for intermediate population sizes. To test these predictions we estimated the effective population sizes of bacterial species using the optimal codon usage coefficients computed by dos Reis et al. and the synonymous to non-synonymous substitution ratio computed by Daubin and Moran. We found that the population sizes estimated in these ways are significantly smaller for species with small and large GC usage than for species with no bias, which supports our prediction. PMID:20463869
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
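One way to make the "optimal sample size" question concrete is a toy Monte Carlo stand-in: the expected payoff of committing to the option with the higher sample mean after n draws per option, minus an assumed exploration cost. This simplification is mine, not the paper's decision-theoretic treatment; the payoff means, noise, and cost are invented.

```python
import numpy as np

def expected_value_of_n(n, mu=(1.0, 1.2), sigma=1.0, draws=40000):
    """Expected payoff of sampling n outcomes from each option and then
    committing to the one with the higher sample mean (toy DFE model)."""
    rng = np.random.default_rng(10)
    m0 = rng.normal(mu[0], sigma / np.sqrt(n), draws)  # sample-mean of option 0
    m1 = rng.normal(mu[1], sigma / np.sqrt(n), draws)  # sample-mean of option 1
    return np.where(m1 > m0, mu[1], mu[0]).mean()

cost_per_sample = 0.002           # assumed implicit exploration cost
ns = range(1, 60)
net = [expected_value_of_n(n) - 2 * n * cost_per_sample for n in ns]
print("toy optimal sample size per option:", ns[int(np.argmax(net))])
```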
Constraint factor in optimization of truss structures via flower pollination algorithm
NASA Astrophysics Data System (ADS)
Bekdaş, Gebrail; Nigdeli, Sinan Melih; Sayin, Baris
2017-07-01
The aim of this paper is to investigate the optimum design of truss structures under different stress and displacement constraints. For that purpose, a methodology based on the flower pollination algorithm was applied to the sizing optimization of space truss structures. The flower pollination algorithm is a metaheuristic inspired by the pollination process of flowering plants. Imitating the cross-pollination and self-pollination processes, the random generation of truss member sizes is done in two ways, and the two types of moves are controlled with a switch probability. In the study, a 72-bar space truss structure was optimized using five different cases of constraint limits. According to the results, a linear relationship between the optimum structure weight and the constraint limits was observed.
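A compact sketch of the flower pollination algorithm as described (Levy-flight global pollination versus local pollination, gated by a switch probability) follows. The truss analysis is replaced by an invented penalized-weight function, and all parameter values are assumptions.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(7)

def levy(dim, beta=1.5):
    """Levy-flight step (Mantegna's algorithm) for global pollination."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def flower_pollination(obj, dim, lb, ub, n_flowers=25, iters=300, p_switch=0.8):
    pop = rng.uniform(lb, ub, (n_flowers, dim))
    fit = np.array([obj(x) for x in pop])
    best, best_f = pop[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p_switch:        # global (cross-) pollination
                cand = pop[i] + levy(dim) * (best - pop[i])
            else:                              # local (self-) pollination
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lb, ub)
            f = obj(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
    return best, best_f

# Toy stand-in for truss weight with a penalty for violated stress limits.
def penalized_weight(a):
    return a.sum() + 100.0 * np.maximum(1.0 - a, 0).sum()

print(flower_pollination(penalized_weight, dim=8, lb=0.1, ub=35.0))
```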
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for the design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems in discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for the design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as to other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
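For orientation, here is a sketch of the standard BB-BC loop the article refines: candidates explode around a center of mass, then crunch to a fitness-weighted center with a shrinking spread. The naive nearest-catalogue snap used for discrete sizing below is my simplification, not the article's reformulation, and the catalogue and penalty are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

def big_bang_big_crunch(obj, dim, lb, ub, pop_size=40, iters=200):
    pop = rng.uniform(lb, ub, (pop_size, dim))
    for k in range(1, iters + 1):
        fit = np.array([obj(x) for x in pop])
        w = 1.0 / (fit - fit.min() + 1e-9)             # lower cost -> heavier
        center = (w[:, None] * pop).sum(0) / w.sum()   # big crunch
        spread = (ub - lb) / k                         # shrinking big bang
        pop = np.clip(center + spread * rng.normal(size=(pop_size, dim)),
                      lb, ub)
        pop[0] = center                                # keep the crunch point
    fit = np.array([obj(x) for x in pop])
    return pop[fit.argmin()], fit.min()

# Toy discrete sizing: snap member areas to a catalogue of standard sections.
catalogue = np.linspace(1.0, 30.0, 30)
def discrete_cost(x):
    a = catalogue[np.abs(catalogue - x[:, None]).argmin(1)]  # nearest section
    return a.sum() + 50.0 * np.maximum(5.0 - a.min(), 0)     # weight + penalty

print(big_bang_big_crunch(discrete_cost, dim=6, lb=1.0, ub=30.0))
```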
NASA Astrophysics Data System (ADS)
Sudibyo, Aji, B. B.; Sumardi, S.; Mufakir, F. R.; Junaidi, A.; Nurjaman, F.; Karna, Aziza, Aulia
2017-01-01
The gold amalgamation process was widely used to treat gold ore. This process produces tailings, or amalgamation solid waste, which still contain gold at 8-9 ppm. Froth flotation is one of the promising methods to beneficiate gold from this tailing. However, the process requires optimal conditions, which depend on the type of raw material. In this study, the Taguchi method was used to determine the optimum conditions of the froth flotation process. The Taguchi optimization shows that gold recovery was most strongly influenced by particle size, with the best recovery at 150 mesh, followed by the potassium amyl xanthate concentration, pH and pine oil concentration, at 1133.98, 4535.92 and 68.04 g/ton of amalgamation tailing, respectively.
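The Taguchi workflow behind results like these (orthogonal-array runs, then per-factor signal-to-noise means) can be illustrated as follows. The L9 array is the standard one, but the recovery values are invented, not the study's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: four factors at three levels, nine runs.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Made-up gold recoveries (%) for the nine runs, for illustration only.
recovery = np.array([41.0, 55.2, 48.3, 60.1, 72.5, 58.7, 52.4, 63.0, 69.8])

# Larger-is-better signal-to-noise ratio: SN = -10*log10(mean(1/y^2)).
sn = -10 * np.log10(1.0 / recovery ** 2)

# Mean S/N per level for each factor; the best level maximizes it, and the
# spread of level means ranks the factors' influence.
for f, name in enumerate(["particle size", "PAX conc.", "pH", "pine oil"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, "-> best level", int(np.argmax(means)), np.round(means, 2))
```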
The role of interactions between accommodation and vergence in human visual development
NASA Astrophysics Data System (ADS)
Teel, Danielle F. W.
Even in early infancy, accommodation and vergence interact through neural coupling such that accommodation drives vergence (the AC/A ratio) and vergence drives accommodation (the CA/C ratio), to assist coordination and development of clear and single binocular vision. Infants have narrow inter-pupillary distances (IPDs), requiring less vergence in angular units (degrees or prism diopters), and are typically hyperopic, requiring larger accommodative responses (diopters) than adults. The relative demands also change with emmetropization (decreasing hyperopia) and head growth (increasing IPD) over time. Therefore, adult-like couplings may not be optimal during development, and the couplings may play a role in abnormalities such as esotropia. A range of cues can drive accommodation and vergence. In addition to blur and disparity, proximity in the form of looming, size and perceived distance has been shown to influence the interactions between accommodation and vergence in adults. The role of this cue in measures of coupling is poorly understood and may affect key clinical AC/A estimates in young children. Utilizing principles of eccentric photorefraction and Purkinje image eye tracking, this research examines the AC/A and CA/C ratios in infants, preschoolers and adults as a function of age, refractive error and inter-pupillary distance, plus the role proximity, specifically looming and size cues, plays in estimating the AC/A ratio in three-year-olds and adults. The AC/A (PD/D) was significantly higher in adults than in three-year-olds or infants but similar across age groups in MA/D units. The CA/C was higher in infants than in adults or three-year-olds (D/MA and D/PD). Although not fully reciprocally related, a significant negative relationship was found between the response AC/A and CA/C. Similarly, higher AC/As (PD/D) and lower CA/Cs (D/PD) were associated with larger IPDs and less hyperopia. Although not statistically significant, the absence of proximity resulted in a trend toward a lower AC/A than its presence for children. These results provide insight into methods of measuring the AC/A ratio in children and into determining whether the couplings are optimized to prevent over-convergence or under-accommodation during development and growth.
NASA Astrophysics Data System (ADS)
Zhang, Xianjun
Combined heat and power (CHP)-based distributed generation (DG), or distributed energy resources (DERs), are mature options available in the present energy market, considered to be an effective solution to promote energy efficiency. In the urban environment, the electricity, water and natural gas distribution networks are becoming increasingly interconnected with the growing penetration of CHP-based DG. This emerging interdependence leads to new topics meriting serious consideration: how much CHP-based DG can be accommodated and where to locate these DERs, and, given preexisting constraints, how to quantify the mutual impacts on operating performance between these urban energy distribution networks and the CHP-based DG. Early research work investigated the feasibility and design methods for one residential microgrid system based on the existing electricity, water and gas infrastructures of a residential community, focusing mainly on economic planning. However, that design method cannot determine the optimal DG sizing and siting for a larger test bed with given information on the energy infrastructures. In this context, a more systematic and generalized approach should be developed. In the later study, a model architecture that integrates urban electricity, water and gas distribution networks with the CHP-based DG system was developed. The proposed approach addresses the challenge of identifying the optimal sizing and siting of the CHP-based DG on these urban energy networks, and the mutual impacts on operating performance are also quantified. The overall objective is to maximize the electrical output and recovered thermal output of the CHP-based DG units. The electricity, gas, and water system models were developed individually and coupled through the CHP-based DG system model. The resulting integrated system model is used to constrain the DG's electrical output and recovered thermal output, which are affected by multiple factors and are therefore analyzed in different case studies. The results indicate that the designed typical gas system is capable of supplying sufficient natural gas for normal DG operation, while the present water system cannot support complete recovery of the exhaust heat from the DG units.
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component of the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If exploitation is assumed to be a largely additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
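A minimal sketch of such a stochastic dynamic program under the additive-mortality hypothesis is given below: value iteration over a population-by-environment state grid with a Markovian environment transition matrix. All grids, transition probabilities, growth factors, and the discount rate are invented, not the dissertation's estimates.

```python
import numpy as np

# Coarse state grids: breeding population (millions) and pond state (env).
pops = np.linspace(2.0, 12.0, 21)
envs = [0, 1, 2]                     # dry, average, wet years
P_env = np.array([[0.6, 0.3, 0.1],   # Markovian (serially correlated)
                  [0.2, 0.5, 0.3],   # environment transition matrix
                  [0.1, 0.3, 0.6]])  # (illustrative numbers)
harvest_rates = np.linspace(0.0, 0.4, 9)
growth = [0.9, 1.1, 1.3]             # env-dependent reproduction factor

def next_pop(n, h, e):
    """Additive-mortality hypothesis: harvest subtracts from survival."""
    return np.clip(n * growth[e] * (1 - h), pops[0], pops[-1])

V = np.zeros((len(pops), len(envs)))
policy = np.zeros_like(V)
for _ in range(300):                 # value iteration, discount factor 0.97
    V_new = np.empty_like(V)
    for i, n in enumerate(pops):
        for e in envs:
            best = -1e18
            for h in harvest_rates:
                j = np.abs(pops - next_pop(n, h, e)).argmin()
                q = h * n + 0.97 * P_env[e] @ V[j, :]
                if q > best:
                    best, policy[i, e] = q, h
            V_new[i, e] = best
    V = V_new
print(np.round(policy, 2))           # optimal harvest rate per state (feedback)
```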
Rotationally resolved colors of the targets of NASA's Lucy mission
NASA Astrophysics Data System (ADS)
Emery, Joshua; Mottola, Stefano; Brown, Mike; Noll, Keith; Binzel, Richard
2018-05-01
We propose rotationally resolved photometry at 3.6 and 4.5 μm of five Trojan asteroids and one Main Belt asteroid, the targets of NASA's Lucy mission. The proposed Spitzer observations are designed to meet a combination of science goals and mission support objectives. Science goals: (1) search for signatures of volatiles and/or organics on the surfaces, including resolving a discrepancy between previous WISE and Spitzer measurements of Trojans; (2) provide new constraints on the cause of the rotational spectral heterogeneity detected on 3548 Eurybates at shorter wavelengths, determining whether that heterogeneity extends to the 3-5 μm region; (3) assess the possibility of spectral heterogeneity on the other targets, which will help test the hypothesis of Wong and Brown (2015) that the near-surface interiors of Trojans differ from their surfaces; (4) use thermal data at 4.5 μm for the Main Belt target Donaldjohanson to refine estimates of size and albedo and to provide the first estimate of thermal inertia. Mission support objectives: (1) assess scientifically optimal encounter times (viewing geometries) for the fly-bys, since characterizing rotational spectral units now will enable the team to choose the most scientifically valuable part of each asteroid to view; (2) gather data to optimize observing parameters for the Lucy instruments, since measuring brightness in the 3-5 μm region and resolving the discrepancy between WISE and Spitzer will enable better planning of the Lucy spectral observations in this wavelength range; (3) obtain the size, albedo, and thermal inertia of Donaldjohanson, fundamental data for planning the encounter with that Main Belt asteroid.
Takeuchi, Hiroshi
2018-05-08
Since searching for the global minimum on the potential energy surface of a cluster is very difficult, many geometry optimization methods have been proposed in which initial geometries are randomly generated and subsequently improved with different algorithms. In this study, a size-guided multi-seed heuristic method is developed and applied to benzene clusters. It produces initial configurations of the cluster with n molecules from the lowest-energy configurations of the cluster with n - 1 molecules (seeds). The initial geometries are further optimized with the geometrical perturbations previously used for molecular clusters. These steps are repeated until n reaches a predefined size. The method locates putative global minima of benzene clusters with up to 65 molecules. The performance of the method is discussed in terms of computational cost, rates of locating the global minima, and energies of initial geometries. © 2018 Wiley Periodicals, Inc.
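A hedged sketch of the size-guided multi-seed idea follows, with a Lennard-Jones pair potential standing in for the benzene force field and rigid-molecule details omitted: seeds of size n - 1 spawn locally optimized candidates of size n, and the lowest-energy few become the next seeds. All parameters are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat):
    """Lennard-Jones energy as a stand-in for the benzene force field."""
    x = flat.reshape(-1, 3)
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    r = d[np.triu_indices(len(x), 1)]
    return np.sum(4 * (r ** -12 - r ** -6))

def grow(seeds, n_target, trials=20, keep=4):
    """Size-guided multi-seed growth: build size-n configurations from the
    best size-(n-1) ones, locally optimize, and keep the lowest few."""
    rng = np.random.default_rng(9)
    for n in range(len(seeds[0]) + 1, n_target + 1):
        pool = []
        for seed in seeds:
            for _ in range(trials):
                new = seed.mean(0) + rng.normal(0, 1.5, 3)  # add one particle
                res = minimize(lj_energy, np.vstack([seed, new]).ravel(),
                               method="L-BFGS-B")
                pool.append((res.fun, res.x.reshape(-1, 3)))
        pool.sort(key=lambda t: t[0])
        seeds = [p[1] for p in pool[:keep]]                 # next seeds
    return seeds[0], lj_energy(seeds[0].ravel())

dimer = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0]])      # near-optimal pair
best, e = grow([dimer], n_target=7)
print("size-7 putative minimum energy:", round(e, 3))      # LJ7 is ~ -16.505
```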
Evaluation of ultrasonics and optimized radiography for 2219-T87 aluminum weldments
NASA Technical Reports Server (NTRS)
Clotfelter, W. N.; Hoop, J. M.; Duren, P. C.
1975-01-01
Ultrasonic studies are described which are specifically directed toward the quantitative measurement of randomly located defects previously found in aluminum welds with radiography or with dye penetrants. Experimental radiographic studies were also made to optimize techniques for welds of the thickness range to be used in fabricating the External Tank of the Space Shuttle. Conventional and innovative ultrasonic techniques were applied to the flaw size measurement problem. Advantages and disadvantages of each method are discussed. Flaw size data obtained ultrasonically were compared to radiographic data and to real flaw sizes determined by destructive measurements. Considerable success was achieved with pulse echo techniques and with 'pitch and catch' techniques. The radiographic work described demonstrates that careful selection of film exposure parameters for a particular application must be made to obtain optimized flaw detectability. Thus, film exposure techniques can be improved even though radiography is an old weld inspection method.