Sample records for simple sizing optimization

  1. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used, based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained with simple random sampling procedures.
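
    A minimal sketch (not from the paper) of the two calculations this record compares: a simple-random-sample size from an observed SD, and Neyman (optimal) allocation of a fixed budget across depth strata. All field statistics below are invented placeholders.

    ```python
    # Hedged sketch: SRS sample size and Neyman allocation across depth strata.
    import math

    def srs_sample_size(sd, half_width, z=1.96):
        """Samples needed so the mean is within +/- half_width (moisture %)."""
        return math.ceil((z * sd / half_width) ** 2)

    def neyman_allocation(n_total, stratum_sizes, stratum_sds):
        """Split a fixed budget n_total across strata proportional to N_h * S_h."""
        weights = [n * s for n, s in zip(stratum_sizes, stratum_sds)]
        total = sum(weights)
        return [round(n_total * w / total) for w in weights]

    # Hypothetical per-depth standard deviations of soil moisture (%):
    sds = {"0-2 cm": 6.0, "2-5 cm": 4.5, "5-9 cm": 3.0}
    for depth, sd in sds.items():
        print(depth, srs_sample_size(sd, half_width=2.0))
    # Fixed budget of 30 samples over three equal-area depth strata:
    print(neyman_allocation(30, [1, 1, 1], list(sds.values())))
    ```

    With SD decreasing with depth, the required SRS sizes fall accordingly, mirroring the paper's first conclusion.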

  2. Achieving optimal growth: lessons from simple metabolic modules

    NASA Astrophysics Data System (ADS)

    Goyal, Sidhartha; Chen, Thomas; Wingreen, Ned

    2009-03-01

    Metabolism is a universal property of living organisms. While the metabolic network itself has been well characterized, the logic of its regulation remains largely mysterious. Recent work has shown that growth rates of microorganisms, including the bacterium Escherichia coli, correlate well with optimal growth rates predicted by flux-balance analysis (FBA), a constraint-based computational method. How difficult is it for cells to achieve optimal growth? Our analysis of representative metabolic modules drawn from real metabolism shows that, in all cases, simple feedback inhibition allows nearly optimal growth. Indeed, product-feedback inhibition is found in every biosynthetic pathway and constitutes about 80% of metabolic regulation. However, we find that product-feedback systems designed to approach optimal growth necessarily produce large pool sizes of metabolites, with potentially detrimental effects on cells via toxicity and osmotic imbalance. Interestingly, the sizes of metabolite pools can be strongly restricted if the feedback inhibition is ultrasensitive (i.e. with high Hill coefficient). The need for ultrasensitive mechanisms to limit pool sizes may therefore explain some of the ubiquitous, puzzling complexity found in metabolic feedback regulation at both the transcriptional and post-transcriptional levels.
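
    A minimal sketch (not the paper's model) of the mechanism described here: a one-step biosynthetic module with product-feedback inhibition, v = Vmax/(1 + (P/K)^h), integrated to steady state for several Hill coefficients h; raising h sharpens the feedback and shrinks the steady-state pool. All rate constants are invented placeholders.

    ```python
    # Hedged sketch: ultrasensitive product feedback restricts the metabolite pool.
    def steady_pool(h, vmax=10.0, K=1.0, demand=5.0):
        """Integrate dP/dt = vmax/(1+(P/K)^h) - demand*P/(P+0.1) to steady state."""
        P, dt = 0.0, 1e-3
        for _ in range(200_000):
            synthesis = vmax / (1.0 + (P / K) ** h)
            consumption = demand * P / (P + 0.1)   # saturable drain into growth
            P += dt * (synthesis - consumption)
        return P

    for h in (1, 2, 4, 8):
        print(f"Hill coefficient {h}: steady pool = {steady_pool(h):.3f}")
    ```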

  3. The effects of facilitation and competition on group foraging in patches

    PubMed Central

    Laguë, Marysa; Tania, Nessy; Heath, Joel; Edelstein-Keshet, Leah

    2012-01-01

    Significant progress has been made towards understanding the social behaviour of animal groups, but the patch model, a foundation of foraging theory, has received little attention in a social context. The effect of competition on the optimal time to leave a foraging patch was considered as early as the original formulation of the marginal value theorem, but surprisingly, the role of facilitation (where foraging in groups decreases the time to find food in patches), has not been incorporated. Here we adapt the classic patch model to consider how the trade-off between facilitation and competition influences optimal group size. Using simple assumptions about the effect of group size on the food-finding time and the sharing of resources, we find conditions for existence of optima in patch residence time and in group size. When patches are close together (low travel times), larger group sizes are optimal. Groups are predicted to exploit patches differently than individual foragers and the degree of patch depletion at departure depends on the details of the trade-off between competition and facilitation. A variety of currencies and group-size effects are also considered and compared. Using our simple formulation, we also study the effects of social foraging on patch exploitation which to date have received little empirical study. PMID:22743132
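
    A toy marginal-value-theorem calculation in the spirit of this model: facilitation shortens the food-finding time with group size while competition splits the patch gain, and the optimal residence time is read off a grid. The functional forms and constants are assumptions for illustration, not the paper's.

    ```python
    # Hedged sketch: per-capita intake rate for a group of size n in a patch.
    import numpy as np

    def per_capita_rate(n, t, tau=5.0, G=10.0, k=0.5):
        find_time = tau / np.sqrt(n)           # facilitation: groups find food faster
        gain = (G * (1 - np.exp(-k * t))) / n  # competition: equal shares of the patch
        return gain / (find_time + t)

    t = np.linspace(0.01, 30, 3000)
    for n in (1, 2, 4, 8):
        i = np.argmax(per_capita_rate(n, t))
        print(f"group size {n}: optimal residence time = {t[i]:.2f}")
    ```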

  4. The effects of facilitation and competition on group foraging in patches.

    PubMed

    Laguë, Marysa; Tania, Nessy; Heath, Joel; Edelstein-Keshet, Leah

    2012-10-07

    Significant progress has been made towards understanding the social behaviour of animal groups, but the patch model, a foundation of foraging theory, has received little attention in a social context. The effect of competition on the optimal time to leave a foraging patch was considered as early as the original formulation of the marginal value theorem, but surprisingly, the role of facilitation (where foraging in groups decreases the time to find food in patches), has not been incorporated. Here we adapt the classic patch model to consider how the trade-off between facilitation and competition influences optimal group size. Using simple assumptions about the effect of group size on the food-finding time and the sharing of resources, we find conditions for existence of optima in patch residence time and in group size. When patches are close together (low travel times), larger group sizes are optimal. Groups are predicted to exploit patches differently than individual foragers and the degree of patch depletion at departure depends on the details of the trade-off between competition and facilitation. A variety of currencies and group-size effects are also considered and compared. Using our simple formulation, we also study the effects of social foraging on patch exploitation which to date have received little empirical study. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. The effect of code expanding optimizations on instruction cache design

    NASA Technical Reports Server (NTRS)

    Chen, William Y.; Chang, Pohua P.; Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    It is shown that code expanding optimizations have strong and non-intuitive implications for instruction cache design. Three types of code expanding optimizations are studied: instruction placement, function inline expansion, and superscalar optimizations. Overall, instruction placement reduces the miss ratio of small caches. Function inline expansion improves the performance for small cache sizes, but degrades the performance of medium caches. Superscalar optimizations increase the cache size required for a given miss ratio. On the other hand, they also increase the sequentiality of instruction access, so that a simple load-forward scheme effectively cancels the negative effects. Overall, it is shown that with load forwarding, the three types of code expanding optimizations jointly improve the performance of small caches and have little effect on large caches.

  6. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.

  7. Optimization of long circulating mixed polymeric micelles containing vinpocetine using simple lattice mixture design, in vitro and in vivo characterization.

    PubMed

    El-Dahmy, Rania Moataz; Elsayed, Ibrahim; Elshafeey, Ahmed Hassen; Gawad, Nabaweya Abdelaziz Abd El; El-Gazayerly, Omaima Naim

    2014-12-30

    The aim of this study was to increase the in vivo mean residence time of vinpocetine after IV injection utilizing long circulating mixed micellar systems. Mixed micelles were prepared using Pluronics L121, P123 and F127. The systems were characterized by testing their entrapment efficiency, particle size, polydispersity index, zeta potential, transmission electron microscopy and in vitro drug release. Simple lattice mixture design was planned for the optimization using Design-Expert(®) software. The optimized formula was lyophilized, sterilized and imaged by scanning electron microscope. Moreover, the in vivo behavior of the optimized formula was evaluated after IV injection in rabbits. The optimized formula, containing 68% w/w Pluronic L121 and 32% w/w Pluronic F127, had the highest desirability value (0.621). Entrapment efficiency, particle size, polydispersity index and zeta potential of the optimized formula were 50.74 ± 3.26%, 161.50 ± 7.39 nm, 0.21 ± 0.03 and -22.42 ± 1.72 mV, respectively. Lyophilization and sterilization did not affect the characteristics of the optimized formula. Upon in vivo investigation in rabbits, the optimized formula showed a significantly higher elimination half-life and mean residence time than the market product. Finally, mixed micelles could be considered as a promising long circulating nanocarrier for lipophilic drugs. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
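
    A minimal Monte Carlo sketch (not the article's models) contrasting RSS under perfect ranking with simple random sampling at equal measurement effort, plus a crude cost tally; the population and all cost figures are invented placeholders.

    ```python
    # Hedged sketch: variance and cost of RSS vs SRS for a standard normal population.
    import numpy as np

    rng = np.random.default_rng(0)

    def rss_mean(set_size, cycles):
        """One RSS estimate: per cycle, rank m sets of m units and measure the
        i-th order statistic of the i-th set (perfect ranking assumed)."""
        obs = []
        for _ in range(cycles):
            for i in range(set_size):
                s = np.sort(rng.normal(size=set_size))
                obs.append(s[i])
        return np.mean(obs)

    m, cycles, reps = 4, 5, 20_000             # 20 measured units per estimate
    rss = [rss_mean(m, cycles) for _ in range(reps)]
    srs = rng.normal(size=(reps, m * cycles)).mean(axis=1)
    print("var SRS:", np.var(srs), "var RSS:", np.var(rss))

    # Cost per estimate: sampling and ranking per sampled unit, measuring per measured unit.
    c_sample, c_rank, c_meas = 0.1, 0.2, 5.0
    cost_rss = m * cycles * (m * (c_sample + c_rank) + c_meas)
    cost_srs = m * cycles * (c_sample + c_meas)
    print("cost RSS:", cost_rss, "cost SRS:", cost_srs)
    ```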

  9. Optimized random phase only holograms.

    PubMed

    Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto

    2018-02-15

    We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
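
    A sketch of the idea under simplifying assumptions: a plain Gerchberg-Saxton loop pre-optimizes a phase for a flat target window ("ORAP"-style), after which a hologram for any new target needs only one transform. Resolution and window geometry are invented placeholders, and this is not the authors' exact procedure.

    ```python
    # Hedged sketch: G-S pre-optimization of a random phase, then fast holograms.
    import numpy as np

    def gerchberg_saxton_phase(target_amp, n_iter=100, seed=0):
        """Image-plane phase making target_amp * exp(i*phase) reachable
        (approximately) from a phase-only hologram."""
        rng = np.random.default_rng(seed)
        phase = 2 * np.pi * rng.random(target_amp.shape)
        for _ in range(n_iter):
            holo = np.fft.ifft2(target_amp * np.exp(1j * phase))
            holo = np.exp(1j * np.angle(holo))       # enforce phase-only hologram
            phase = np.angle(np.fft.fft2(holo))      # propagate back, keep phase
        return phase

    N = 256
    window = np.zeros((N, N))
    window[96:160, 96:160] = 1.0                     # generic flat target window
    orap = gerchberg_saxton_phase(window)            # optimized once per system
    # Fast, non-iterative hologram for a new target inside the window:
    target = np.zeros((N, N))
    target[110:146, 110:146] = 1.0
    hologram_phase = np.angle(np.fft.ifft2(target * np.exp(1j * orap)))
    ```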

  10. The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials

    DOE PAGES

    Witman, Matthew; Ling, Sanliang; Jawahery, Sudi; ...

    2017-03-30

    For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how the flexibility impacts this type of separation, we develop a simple analytical model that predicts a material's Henry regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework's intrinsic flexibility whereby performance is either improved or reduced with increasing flexibility, depending on the material's pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material than is obtained when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility, even though other nonoptimal materials' selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.

  11. The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witman, Matthew; Ling, Sanliang; Jawahery, Sudi

    For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how the flexibility impacts this type of separation, we develop a simple analytical model that predicts a material's Henry regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework's intrinsic flexibility whereby performance is either improved or reduced with increasing flexibility, depending on the material's pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material than is obtained when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility, even though other nonoptimal materials' selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.

  12. Optimal deployment of thermal energy storage under diverse economic and climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, Nicholas; Mendes, Gonçalo; Stadler, Michael

    2014-04-01

    This paper presents an investigation of the economic benefit of thermal energy storage (TES) for cooling, across a range of economic and climate conditions. Chilled water TES systems are simulated for a large office building in four distinct locations: Miami in the U.S.; Lisbon, Portugal; Shanghai, China; and Mumbai, India. Optimal system size and operating schedules are determined using the optimization model DER-CAM, such that total cost, including electricity and amortized capital costs, is minimized. The economic impacts of each optimized TES system are then compared to systems sized using a simple heuristic method, which sets system size as a fraction (50% and 100%) of total on-peak summer cooling load. Results indicate that TES systems of all sizes can be effective in reducing annual electricity costs (5%-15%) and peak electricity consumption (13%-33%). The investigation also identifies a number of criteria which drive TES investment, including low capital costs, electricity tariffs with high power demand charges and prolonged cooling seasons. In locations where these drivers clearly exist, the heuristically sized systems capture much of the value of optimally sized systems; between 60% and 100% in terms of net present value. However, in instances where these drivers are less pronounced, the heuristic tends to oversize systems, and optimization becomes crucial to ensure economically beneficial deployment of TES, increasing the net present value of heuristically sized systems by as much as 10 times in some instances.
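
    A toy version of the sizing comparison (DER-CAM itself is not reproduced): grid-search a storage size that trades amortized capital cost against demand-charge savings, then compare with the 50% heuristic. The load profile, tariff and cost constants are invented placeholders.

    ```python
    # Hedged sketch: heuristic vs grid-searched chilled-water storage size.
    import numpy as np

    hours = np.arange(24)
    cooling_kw = 200 + 300 * np.exp(-((hours - 15) ** 2) / 8.0)  # afternoon peak
    on_peak = (hours >= 12) & (hours < 18)

    def annual_cost(storage_kwh, demand_charge=15.0, energy_price=0.10,
                    capital_per_kwh=30.0, lifetime_years=15):
        # Shift up to storage_kwh of on-peak cooling to off-peak hours each day.
        shiftable = min(storage_kwh, cooling_kw[on_peak].sum())
        shaved = cooling_kw.copy()
        shaved[on_peak] -= shiftable * cooling_kw[on_peak] / cooling_kw[on_peak].sum()
        demand_cost = 12 * demand_charge * shaved.max()
        energy_cost = 365 * energy_price * cooling_kw.sum()   # energy is only shifted
        capital = capital_per_kwh * storage_kwh / lifetime_years
        return demand_cost + energy_cost + capital

    sizes = np.linspace(0, cooling_kw[on_peak].sum(), 50)
    best = sizes[np.argmin([annual_cost(s) for s in sizes])]
    heuristic = 0.5 * cooling_kw[on_peak].sum()               # 50% of on-peak load
    print(f"optimal ~ {best:.0f} kWh, heuristic (50%) = {heuristic:.0f} kWh")
    ```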

  13. An optical fusion gate for W-states

    NASA Astrophysics Data System (ADS)

    Özdemir, Ş. K.; Matsunaga, E.; Tashima, T.; Yamamoto, T.; Koashi, M.; Imoto, N.

    2011-10-01

    We introduce a simple optical gate to fuse arbitrary-size polarization-entangled W-states to prepare larger W-states. The gate requires a polarizing beam splitter (PBS), a half-wave plate (HWP) and two photon detectors. We study, numerically and analytically, the resource consumption necessary for preparing larger W-states by fusing smaller ones with the proposed fusion gate. We show analytically that the resource requirement scales at most sub-exponentially with the size of the state to be prepared. We numerically determine the resource cost for fusion without recycling, where W-states of arbitrary size can be optimally prepared. Moreover, we introduce another strategy that is based on recycling and outperforms the optimal strategy for the non-recycling case.

  14. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that the maximum likelihood estimates of cell probabilities for simple random sampling and map-category-stratified sampling are identical permits a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for achieving a desired level of precision in the various estimators are irrelevant to estimator validity, since the estimators derived are valid irrespective of how sample sizes are chosen.

  15. Size-assortative mating and sexual size dimorphism are predictable from simple mechanics of mate-grasping behavior

    PubMed Central

    2010-01-01

    Background A major challenge in evolutionary biology is to understand the typically complex interactions between the diverse counter-balancing factors of Darwinian selection for size-assortative mating and sexual size dimorphism. It appears that rarely can a simple mechanism provide a major explanation of these phenomena. The mechanics of behaviors can predict animal morphology, such as adaptations to locomotion in animals from various taxa, but its potential to predict size-assortative mating and its evolutionary consequences has been less explored. Mate-grasping by males, using specialized adaptive morphologies of their forelegs, midlegs or even antennae wrapped around the female body at specific locations, is a general mating strategy of many animals, but the contribution of the mechanics of this widespread behavior to the evolution of mating behavior and sexual size dimorphism has been largely ignored. Results Here, we explore the consequences of a simple, and previously ignored, fact: in a grasping posture, the position of the male's grasping appendages relative to the female's body is often a function of the body size difference between the sexes. Using an approach taken from robot mechanics, we model coercive grasping of females by water strider Gerris gracilicornis males during mating initiation struggles. We determine that the optimal male size (relative to the female size), which gives the males the highest grasping force, properly predicts the experimentally measured highest mating success. Through field sampling and simulation modeling of a natural population, we determine that the simple mechanical model, which ignores most of the other hypothetical counter-balancing selection pressures on body size, is sufficient to account for the size-assortative mating pattern as well as the species-specific sexual dimorphism in body size of G. gracilicornis. Conclusion The results indicate how a simple and previously overlooked physical mechanism common in many taxa is sufficient to account for, or importantly contribute to, size-assortative mating and its consequences for the evolution of sexual size dimorphism. PMID:21092131

  16. Modeling the Effects of Beam Size and Flaw Morphology on Ultrasonic Pulse/Echo Sizing of Delaminations in Carbon Composites

    NASA Technical Reports Server (NTRS)

    Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan

    2012-01-01

    The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example, the -6dB contours of measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined where one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6dB rule for delamination sizing.

  17. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable and exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array to load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
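
    A minimal sketch of the LOLP-based sizing idea: simulate a daily battery balance for candidate array areas and count loss-of-load days. The insolation statistics and system constants are invented placeholders, not the paper's empirical formulae.

    ```python
    # Hedged sketch: loss-of-load probability for candidate PV array areas.
    import numpy as np

    rng = np.random.default_rng(5)
    days = 3650
    insolation = np.clip(rng.normal(5.0, 1.8, days), 0.5, None)  # kWh/m^2/day
    load_kwh, batt_kwh, eff = 8.0, 12.0, 0.8

    def lolp(array_m2):
        soc, failures = batt_kwh, 0
        for g in insolation:
            soc = min(batt_kwh, soc + eff * 0.15 * array_m2 * g)  # 15% module eff.
            if soc >= load_kwh:
                soc -= load_kwh
            else:
                failures += 1
                soc = 0.0
        return failures / days

    for a in (8, 10, 12, 14, 16):
        print(f"{a} m^2: LOLP = {lolp(a):.3f}")
    ```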

  18. Cryogenic Tank Structure Sizing With Structural Optimization Method

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.

    2001-01-01

    Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.

  19. Dimensions of design space: a decision-theoretic approach to optimal research design.

    PubMed

    Conti, Stefano; Claxton, Karl

    2009-01-01

    Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
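
    A toy ENBS grid search for a single two-arm study, assuming a normal prior on incremental net benefit and a normal sampling model; the prior, population and cost figures are invented placeholders (the zanamivir model itself is not reproduced).

    ```python
    # Hedged sketch: expected net benefit of sampling (ENBS) over sample size.
    import numpy as np

    rng = np.random.default_rng(1)
    mu0, sd0 = 500.0, 1500.0        # prior on incremental net benefit (per patient)
    sigma = 8000.0                  # per-patient sampling SD
    population = 100_000            # patients affected by the decision
    cost_fixed, cost_per_patient = 250_000.0, 2_000.0

    def enbs(n, sims=200_000):
        if n == 0:
            return 0.0
        # Preposterior analysis: the posterior mean is normal around mu0.
        var_post_mean = sd0**2 - 1.0 / (1.0 / sd0**2 + n / sigma**2)
        post_mean = rng.normal(mu0, np.sqrt(var_post_mean), sims)
        evsi = np.maximum(post_mean, 0).mean() - max(mu0, 0.0)
        return population * evsi - (cost_fixed + cost_per_patient * n)

    for n in (0, 50, 100, 200, 400, 800, 1600):
        print(f"n = {n:4d}: ENBS ~ {enbs(n):,.0f}")
    ```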

  20. Is patient size important in dose determination and optimization in cardiology?

    NASA Astrophysics Data System (ADS)

    Reay, J.; Chapple, C. L.; Kotre, C. J.

    2003-12-01

    Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricle and/or coronary angiography, single vessel stent insertion and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization.
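
    The regression step described here can be sketched as follows on synthetic data, with weight standing in for the study's size measure; the dose-area-product values are invented placeholders, not survey data.

    ```python
    # Hedged sketch: dose-size regression and size correction of survey doses.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    weight = rng.normal(78, 15, 300)                   # kg
    # Weak size dependence plus large procedural scatter, as the record reports:
    dap = np.exp(0.01 * (weight - 78) + rng.normal(3.5, 0.6, 300))  # Gy*cm^2

    res = stats.linregress(weight, np.log(dap))
    print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.3g}")
    # Size correction: normalize each dose to the survey-average weight.
    dap_corrected = dap * np.exp(res.slope * (78 - weight))
    ```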

  1. Parametric Study of Biconic Re-Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Steele, Bryan; Banks, Daniel W.; Whitmore, Stephen A.

    2007-01-01

    An optimization based on hypersonic aerodynamic performance and volumetric efficiency was accomplished for a range of biconic configurations. Both axisymmetric and quasi-axisymmetric geometries (bent and flattened) were analyzed. The aerodynamic optimization wag based on hypersonic simple Incidence angle analysis tools. The range of configurations included those suitable for r lunar return trajectory with a lifting aerocapture at Earth and an overall volume that could support a nominal crew. The results yielded five configurations that had acceptable aerodynamic performance and met overall geometry and size limitations

  2. Controlled and tunable polymer particles' production using a single microfluidic device

    NASA Astrophysics Data System (ADS)

    Amoyav, Benzion; Benny, Ofra

    2018-04-01

    Microfluidics technology offers a new platform to control liquids under flow in small volumes. The advantage of using small-scale reactions for droplet generation along with the capacity to control the preparation parameters, making microfluidic chips an attractive technology for optimizing encapsulation formulations. However, one of the drawback in this methodology is the ability to obtain a wide range of droplet sizes, from sub-micron to microns using a single chip design. In fact, typically, droplet chips are used for micron-dimension particles, while nanoparticles' synthesis requires complex chips design (i.e., microreactors and staggered herringbone micromixer). Here, we introduce the development of a highly tunable and controlled encapsulation technique, using two polymer compositions, for generating particles ranging from microns to nano-size using the same simple single microfluidic chip design. Poly(lactic-co-glycolic acid) (PLGA 50:50) or PLGA/polyethylene glycol polymeric particles were prepared with focused-flow chip, yielding monodisperse particle batches. We show that by varying flow rate, solvent, surfactant and polymer composition, we were able to optimize particles' size and decrease polydispersity index, using simple chip designs with no further related adjustments or costs. Utilizing this platform, which offers tight tuning of particle properties, could offer an important tool for formulation development and can potentially pave the way towards a better precision nanomedicine.

  3. Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.

    2000-01-01

    To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.
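
    A minimal sketch of the calibration idea: nonlinear regression of zone conductivities against observed heads, here for an invented one-dimensional two-zone column under steady Darcy flow rather than the experimental tank.

    ```python
    # Hedged sketch: least-squares calibration of in situ hydraulic conductivity.
    import numpy as np
    from scipy.optimize import least_squares

    q, L1, L2, h_out = 0.5, 4.0, 6.0, 10.0   # flux (m/d), zone lengths (m), outlet head

    def heads(logK, x):
        K1, K2 = np.exp(logK)
        # Head rises linearly upstream within each zone (Darcy's law).
        h_mid = h_out + q * L2 / K2
        return np.where(x <= L1,
                        h_mid + q * (L1 - x) / K1,
                        h_out + q * (L1 + L2 - x) / K2)

    x_obs = np.array([0.5, 2.0, 3.5, 5.0, 7.0, 9.5])
    h_obs = heads(np.log([2.0, 8.0]), x_obs) + np.random.default_rng(6).normal(0, 0.02, 6)

    fit = least_squares(lambda p: heads(p, x_obs) - h_obs, x0=np.log([1.0, 1.0]))
    print("optimized K:", np.exp(fit.x))     # compare against lab-measured K values
    ```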

  4. Online submicron particle sizing by dynamic light scattering using autodilution

    NASA Technical Reports Server (NTRS)

    Nicoli, David F.; Elings, V. B.

    1989-01-01

    Efficient production of a wide range of commercial products based on submicron colloidal dispersions would benefit from instrumentation for online particle sizing, permitting real-time monitoring and control of the particle size distribution. Recent advances in the technology of dynamic light scattering (DLS), especially improvements in algorithms for inversion of the intensity autocorrelation function, have made it ideally suited to the measurement of simple particle size distributions in the difficult submicron region. Crucial to the success of an online DLS-based instrument is a simple mechanism for automatically sampling and diluting the starting concentrated sample suspension, yielding a final concentration which is optimal for the light scattering measurement. A proprietary method and apparatus were developed for performing this function, designed to be used with a DLS-based particle sizing instrument. A PC/AT computer is used as a smart controller for the valves in the sampler-diluter, as well as an input-output communicator, video display and data storage device. Quantitative results are presented for a latex suspension and an oil-in-water emulsion.

  5. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  6. Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets

    PubMed Central

    Morvan, Camille; Maloney, Laurence T.

    2012-01-01

    Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428

  7. Analysis and optimization of cross-immunity epidemic model on complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Zhang, Hao; Wu, Yin-Hua; Feng, Wei-Qiang; Zhang, Jian

    2015-09-01

    There are various infectious diseases in the real world; these diseases often spread over a population network and compete for the limited pool of hosts. Cross-immunity is an important pattern of competition between diseases, which has attracted the attention of many researchers. In this paper, we report an important conclusion for two cross-immunity epidemics on a network. When the infectious ability of the second epidemic takes a fixed value, the infectious ability of the first epidemic has an optimal value which minimizes the sum of the infection sizes of the two epidemics. We also propose a simple mathematical analysis method for the infection size of the second epidemic using the cavity method. The proposed method and conclusion are verified by simulation results. Minor inaccuracies of the existing mathematical methods for the infection size of the second epidemic are also found and discussed in experiments; these had not been noticed in existing research.

  8. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
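
    A small simulation in the spirit of this evaluation, comparing simple, ratio and PPS (Hansen-Hurwitz) estimators of a population total on an invented clumped population (not the pronghorn data).

    ```python
    # Hedged sketch: bias and precision of three estimators on clumped counts.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 200
    area = rng.uniform(1, 5, N)                       # sampling-unit areas
    counts = rng.poisson(0.5 * area) * rng.binomial(1, 0.2, N) * 10  # clumped

    def simulate(n=40, reps=5000):
        total = counts.sum()
        est = {"simple": [], "ratio": [], "pps": []}
        p = area / area.sum()                         # PPS selection probabilities
        for _ in range(reps):
            idx = rng.choice(N, n, replace=False)
            est["simple"].append(N * counts[idx].mean())
            est["ratio"].append(area.sum() * counts[idx].sum() / area[idx].sum())
            idx_pps = rng.choice(N, n, replace=True, p=p)
            est["pps"].append(np.mean(counts[idx_pps] / (N * p[idx_pps])) * N)
        for k, v in est.items():
            v = np.asarray(v)
            print(f"{k:7s} bias {v.mean() - total:8.1f}  CV {v.std() / total:.2f}")

    simulate()
    ```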

  9. Optimal Shape in Electromagnetic Scattering by Small Aspherical Particles

    NASA Astrophysics Data System (ADS)

    Kostinski, A. B.; Mongkolsittisilp, A.

    2013-12-01

    We consider the question of optimal shape for scattering by randomly oriented particles, e.g., the shape causing minimal extinction among those of equal volume. Guided by the isoperimetric property of a sphere, relevant in the geometrical optics limit of scattering by large particles, we examine an analogous question in the low frequency (electrostatics) approximation, seeking to disentangle electric and geometric contributions. To that end, we survey the literature on shape functionals and focus on ellipsoids, giving a simple proof of spherical optimality for the coated ellipsoidal particle. Monotonic increase with asphericity in the low frequency regime for orientation-averaged induced dipole moments and scattering cross-sections is also established. Additional physical insight is obtained from the Rayleigh-Gans (transparent) limit and eccentricity expansions. We propose linking the low and high frequency regimes in a single minimum principle valid for all size parameters, provided that reasonable size distributions wash out the resonances for intermediate size parameters. This proposal is further supported by the sum rule for integrated extinction. Implications for spectro-polarimetric scattering are explicitly considered.

  10. Simulation optimization of spherical non-polar guest recognition by deep-cavity cavitands

    PubMed Central

    Wanjari, Piyush P.; Gibb, Bruce C.; Ashbaugh, Henry S.

    2013-01-01

    Biomimetic deep-cavity cavitand hosts possess unique recognition and encapsulation properties that make them capable of selectively binding a range of non-polar guests within their hydrophobic pocket. Adamantane-based derivatives, which fit snugly within the pocket of octa-acid deep-cavity cavitands, exhibit some of the strongest host binding. Here we explore the roles of guest size and attractiveness in optimizing guest binding to form 1:1 complexes with octa-acid cavitands in water. Specifically, we simulate the water-mediated interactions of the cavitand with adamantane and a range of simple Lennard-Jones guests of varying diameter and attractive well-depth. Initial simulations performed with methane indicate that hydrated methanes preferentially reside within the host pocket, although these guests frequently trade places with water and other methanes in bulk solution. The interaction strength of hydrophobic guests increases with increasing size, from sizes slightly smaller than methane up to Lennard-Jones guests comparable in size to adamantane. Over this guest size range the preferential guest binding location migrates from the bottom of the host pocket upwards. For guests larger than adamantane, however, binding becomes less favorable as the minimum in the potential-of-mean force shifts to the cavitand face around the portal. For a fixed guest diameter, the Lennard-Jones well-depth is found to systematically shift the guest-host potential-of-mean force to lower free energies; however, the optimal guest size is found to be insensitive to increasing well-depth. Ultimately our simulations show that adamantane lies within the optimal range of guest sizes, with significant attractive interactions to match the most tightly bound Lennard-Jones guests studied. PMID:24359375

  11. Analytical sizing methods for behind-the-meter battery storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael; Yang, Tao

    In behind-the-meter applications, a battery storage system (BSS) is used to reduce a commercial or industrial customer's payment for electricity use, including the energy charge and the demand charge. The potential value of a BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. Such a method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of the costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative, and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using a practical building load profile and utility rate. The obtained results are compared with the ones using mathematical programming based methods for validation.
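
    A minimal sketch of the analytical idea: compare the marginal demand-charge saving of each added kilowatt-hour with its amortized capital cost on a daily load profile. The profile, tariff and battery constants are invented placeholders, not the paper's method.

    ```python
    # Hedged sketch: demand-charge shaving value vs amortized cost of capacity.
    import numpy as np

    load = np.array([40, 38, 36, 35, 35, 38, 50, 70, 90, 105, 115, 120,
                     122, 120, 118, 112, 100, 85, 70, 60, 52, 48, 44, 42.0])  # kW

    def peak_after_shaving(capacity_kwh, power_kw):
        """Lowest feasible peak: find the shave threshold by bisection."""
        lo, hi = load.min(), load.max()
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            energy_above = np.clip(load - mid, 0, power_kw).sum()  # kWh per day
            lo, hi = (mid, hi) if energy_above > capacity_kwh else (lo, mid)
        return hi

    demand_charge, months = 18.0, 12          # $/kW-month
    capital_per_kwh, years = 300.0, 10
    for cap in (0, 20, 40, 60, 80, 100):
        saving = months * demand_charge * (load.max() - peak_after_shaving(cap, 50))
        cost = capital_per_kwh * cap / years
        print(f"{cap:3d} kWh: annual saving {saving:7.0f}  amortized cost {cost:6.0f}")
    ```

    The most cost-effective size in this toy model is simply the largest capacity whose marginal saving still exceeds its marginal amortized cost, which is the kind of closed-form reasoning the record describes.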

  12. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland schistosomiasis-endemic regions, and to increase the precision, efficiency and economy of snail surveys, a square experimental field of 50 m × 50 m was selected in Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.

  13. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10^9 to 1×10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  14. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10^9 to 1×10^11 M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  15. Cost effective campaigning in social networks

    NASA Astrophysics Data System (ADS)

    Kotnis, Bhushan; Kuri, Joy

    2016-05-01

    Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals for ensuring the highest possible outreach size. We aim to do the same by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals that are provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be non-trivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.

  16. On optimal infinite impulse response edge detection filters

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1991-01-01

    The authors outline the design of an optimal, computationally efficient, infinite impulse response edge detection filter. The optimal filter is computed based on Canny's high signal to noise ratio, good localization criteria, and a criterion on the spurious response of the filter to noise. An expression for the width of the filter, which is appropriate for infinite-length filters, is incorporated directly in the expression for spurious responses. The three criteria are maximized using the variational method and nonlinear constrained optimization. The optimal filter parameters are tabulated for various values of the filter performance criteria. A complete methodology for implementing the optimal filter using approximating recursive digital filtering is presented. The approximating recursive digital filter is separable into two linear filters operating in two orthogonal directions. The implementation is very simple and computationally efficient, has a constant time of execution for different sizes of the operator, and is readily amenable to real-time hardware implementation.
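
    A toy separable recursive filter in this spirit: first-order forward-backward exponential smoothing along one axis and a derivative across the other, with cost independent of the smoothing scale. This is not the authors' Canny-criterion-optimal filter; the decay constant is an invented placeholder.

    ```python
    # Hedged sketch: separable IIR smoothing plus gradient as an edge detector.
    import numpy as np

    def smooth_1d(x, a):
        """Forward-backward first-order recursive exponential smoothing."""
        y = np.empty_like(x)
        y[0] = x[0]
        for i in range(1, len(x)):                 # causal pass
            y[i] = a * x[i] + (1 - a) * y[i - 1]
        z = np.empty_like(y)
        z[-1] = y[-1]
        for i in range(len(x) - 2, -1, -1):        # anti-causal pass
            z[i] = a * y[i] + (1 - a) * z[i + 1]
        return z

    def edge_response(img, a=0.25):
        sm_cols = np.apply_along_axis(smooth_1d, 0, img, a)  # smooth vertically
        sm_rows = np.apply_along_axis(smooth_1d, 1, img, a)  # smooth horizontally
        gx = np.gradient(sm_cols, axis=1)          # horizontal derivative
        gy = np.gradient(sm_rows, axis=0)          # vertical derivative
        return np.hypot(gx, gy)

    img = np.zeros((64, 64)); img[:, 32:] = 1.0
    img += np.random.default_rng(4).normal(0, 0.1, img.shape)
    edges = edge_response(img)
    ```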

  17. Heat of adsorption, adsorption stress, and optimal storage of methane in slit and cylindrical carbon pores predicted by classical density functional theory.

    PubMed

    Hlushak, Stepan

    2018-01-03

    Temperature, pressure and pore-size dependences of the heat of adsorption, adsorption stress, and adsorption capacity of methane in simple models of slit and cylindrical carbon pores are studied using classical density functional theory (CDFT) and grand-canonical Monte-Carlo (MC) simulation. The studied properties depend nontrivially on the bulk pressure and the size of the pores. The heat of adsorption increases with loading, but only for sufficiently narrow pores. While the increase is advantageous for gas storage applications, it is less significant for cylindrical pores than for slits. The adsorption stress and the average adsorbed fluid density show oscillatory dependence on the pore size and increase with bulk pressure. Slit pores exhibit larger-amplitude oscillations of the normal adsorption stress with increasing pore size than cylindrical pores do. However, the increase in the magnitude of the adsorption stress with increasing bulk pressure is more significant for cylindrical than for slit pores. The adsorption stress appears to be negative for a wide range of pore sizes and external conditions. The pore size dependence of the average delivered density of the gas is analyzed and the optimal pore sizes for storage applications are estimated. The optimal width of slit pores appears to be almost independent of storage pressure at room temperature and pressures above 10 bar. Similarly, the optimal radius of cylindrical pores does not exhibit much dependence on the storage pressure above 15 bar. Both the optimal width and the optimal radius of slit and cylindrical pores increase as the temperature decreases. A comparison of the results of CDFT theory and MC simulations reveals subtle but important differences in the underlying fluid models employed by the two approaches. The differences in the high-pressure behaviour between the hard-sphere 2-Yukawa and Lennard-Jones models of methane, employed by the CDFT and MC approaches, respectively, result in an overestimation of the heat of adsorption by the CDFT theory at higher loadings. However, both the adsorption stress and the adsorption capacity appear to be much less sensitive to the differences between the models and demonstrate excellent agreement between the theory and the computer experiment.

  18. JPARSS: A Java Parallel Network Package for Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jie; Akers, Walter; Chen, Ying

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, due to the tuning of the TCP window size needed to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition, a simple architecture using Web services …
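
    A sketch of the parallel-partition idea, in Python rather than Java: one buffer is split into partitions sent concurrently over several TCP connections, each tagged so a cooperating receiver can reassemble them. The host, port, header format and receiver are assumptions, not the JPARSS API.

    ```python
    # Hedged sketch: sending one buffer as partitions over parallel TCP streams.
    import socket
    from concurrent.futures import ThreadPoolExecutor

    def send_partition(host, port, stream_id, offset, chunk):
        with socket.create_connection((host, port)) as s:
            # 12-byte header so the receiver can reassemble out-of-order parts.
            s.sendall(stream_id.to_bytes(4, "big") + offset.to_bytes(8, "big"))
            s.sendall(chunk)

    def parallel_send(host, port, data, n_streams=4):
        part = (len(data) + n_streams - 1) // n_streams
        with ThreadPoolExecutor(max_workers=n_streams) as pool:
            futures = [pool.submit(send_partition, host, port, i, i * part,
                                   data[i * part:(i + 1) * part])
                       for i in range(n_streams)]
            for f in futures:
                f.result()       # propagate any connection error

    # Example (hypothetical endpoint): parallel_send("data.example.org", 5000, payload)
    ```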

  19. Minimum principles in electromagnetic scattering by small aspherical particles

    NASA Astrophysics Data System (ADS)

    Kostinski, Alex B.; Mongkolsittisilp, Ajaree

    2013-12-01

    We consider the question of optimal shapes, e.g., those causing minimal extinction among all shapes of equal volume. Guided by the isoperimetric property of a sphere, relevant in the geometrical optics limit of scattering by large particles, we examine an analogous question in the low-frequency approximation, seeking to disentangle electric and geometric contributions. To that end, we survey the literature on shape functionals and focus on ellipsoids, giving a simple discussion of spherical optimality for the coated ellipsoidal particle. We also show that orientation-averaged induced dipole moments and scattering cross-sections increase monotonically with asphericity in the low-frequency regime. Additional physical insight is obtained from the Rayleigh-Gans (transparent) limit and eccentricity expansions. We propose connecting the low- and high-frequency regimes in a single minimum principle valid for all size parameters, provided that reasonable size distributions of randomly oriented aspherical particles wash out the resonances at intermediate size parameters. This proposal is further supported by the sum rule for integrated extinction.

  20. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although three general-purpose Monte Carlo (MC) simulation tools (Geant4, FLUKA, and PHITS) have been used extensively, differences in their calculated results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction, and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influence of the customizable parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the FLUKA results, and the optimal parameters were then determined. The PDD profile and proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical models, particle transport mechanics, and the different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.

  1. Plant genotyping using fluorescently tagged inter-simple sequence repeats (ISSRs): basic principles and methodology.

    PubMed

    Prince, Linda M

    2015-01-01

    Inter-simple sequence repeat PCR (ISSR-PCR) is a fast, inexpensive genotyping technique based on length variation in the regions between microsatellites. The method requires no species-specific prior knowledge of microsatellite location or composition. Very small amounts of DNA are required, making this method ideal for organisms of conservation concern, or where the quantity of DNA is extremely limited due to organism size. ISSR-PCR can be highly reproducible but requires careful attention to detail. Optimization of DNA extraction, fragment amplification, and normalization of fragment peak heights during fluorescent detection are critical steps in minimizing the downstream time spent verifying and scoring the data.

  2. Optimal size of pterygium excision for limbal conjunctival autograft using fibrin glue in primary pterygia.

    PubMed

    Hwang, Ho Sik; Cho, Kyong Jin; Rand, Gabriel; Chuck, Roy S; Kwon, Ji Won

    2018-06-07

    In our study we describe a method that optimizes the size of excision and autografting for primary pterygia, along with the use of intraoperative mitomycin C (MMC) and fibrin glue. Our objective is to propose a simple, optimized pterygium surgical technique with excellent aesthetic outcomes and low rates of recurrence and other adverse events. Retrospective chart review of 78 consecutive patients with stage III primary pterygia who underwent an optimal excision technique by three experienced surgeons. The technique consisted of removal of the pterygium head; excision of the pterygium body and Tenon's layer, limited in proportion to the length of the head; application of intraoperative mitomycin C to the defect; harvest of a superior bulbar limbal conjunctival graft; and adherence of the graft with fibrin glue. Outcomes included operative time, follow-up period, pterygium recurrence, occurrences of incorrectly sized grafts, and other complications. All patients were followed up for more than a year. Of the 78 patients, there were 2 cases of pterygium recurrence (2.6%). There was one case of wound dehiscence secondary to a small-sized donor conjunctiva and one case of an over-sized donor conjunctiva, neither of which required surgical correction. There were no toxic complications associated with the use of mitomycin C. Correlating the excision of the pterygium body and underlying Tenon's layer to the length of the pterygium head, along with the use of intraoperative mitomycin C, limbal conjunctival autografting, and fibrin adhesion, resulted in excellent outcomes with a low rate of recurrence for primary pterygia.

  3. Optimal performance of generalized heat engines with finite-size baths of arbitrary multiple conserved quantities beyond independent-and-identical-distribution scaling

    NASA Astrophysics Data System (ADS)

    Ito, Kosuke; Hayashi, Masahito

    2018-01-01

    In quantum thermodynamics, the effects of the finiteness of the baths have received relatively little consideration. In particular, there is no general theory that focuses on the finiteness of baths of multiple conserved quantities. We therefore investigate how the optimal performance of generalized heat engines with multiple conserved quantities changes in response to the size of the baths. In general theories of quantum thermodynamics, the size of the baths has been given in terms of the number of identical copies of a system, which does not cover even such a natural scaling as the volume. In consideration of asymptotic extensivity, we deal with a generic scaling of the baths that naturally includes volume scaling. Based on it, we derive a bound for the performance of generalized heat engines reflecting finite-size effects of the baths, which we call the fine-grained generalized Carnot bound. We also construct a protocol to achieve the optimal performance of the engine given by this bound. Finally, applying the obtained general theory, we treat simple examples of generalized heat engines. As an example of non-independent-and-identical-distribution scaling with multiple conserved quantities, we investigate a heat engine with two baths composed of an ideal gas exchanging particles, where the volume scaling is applied. The result implies that the mass of the particle explicitly affects the performance of this engine with finite-size baths.

  4. Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor

    NASA Astrophysics Data System (ADS)

    Alexander, J. P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M. P.; Flanagan, J. W.; Fontes, E.; Heltsley, B. K.; Lyndaker, A.; Peterson, D. P.; Rider, N. T.; Rubin, D. L.; Seeley, R.; Shanks, J.

    2014-12-01

    We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e+ and e- beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10-100 μm on a turn-by-turn, bunch-by-bunch basis at e± beam energies of 2-5 GeV. X-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and thickness of masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances.

  5. Particle Transport and Size Sorting in Bubble Microstreaming Flow

    NASA Astrophysics Data System (ADS)

    Thameem, Raqeeb; Rallabandi, Bhargav; Wang, Cheng; Hilgenfeldt, Sascha

    2014-11-01

    Ultrasonic driving of sessile semicylindrical bubbles results in powerful steady streaming flows that are robust over a wide range of driving frequencies. In a microchannel, this flow field pattern can be fine-tuned to achieve size-sensitive sorting and trapping of particles at scales much smaller than the bubble itself; the sorting mechanism has been successfully described based on simple geometrical considerations. We investigate the sorting process in more detail, both experimentally (using new parameter variations that allow greater control over the sorting) and theoretically (incorporating the device geometry as well as the superimposed channel flow into an asymptotic theory). This results in optimized criteria for size sorting and a theoretical description that closely matches the particle behavior close to the bubble, the crucial region for size sorting.

  6. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    NASA Astrophysics Data System (ADS)

    Deufel, Christopher L.; Furutani, Keith M.

    2014-02-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient-specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as Pareto methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.
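
    A minimal sketch of a least-squares dwell-time optimization of the kind described (the inverse-square kernel, geometry, and prescription below are invented stand-ins, not the authors' algorithm or TG-43 dosimetry):

        import numpy as np

        rng = np.random.default_rng(0)
        dwells = rng.uniform(-1.0, 1.0, size=(10, 3))   # candidate dwell positions (cm)
        points = rng.uniform(-2.0, 2.0, size=(40, 3))   # dose calculation points (cm)
        d = np.full(len(points), 5.0)                   # prescribed dose at each point (Gy)

        # Crude inverse-square kernel: dose to point i per unit dwell time at dwell j.
        r2 = ((points[:, None, :] - dwells[None, :, :]) ** 2).sum(axis=2)
        A = 1.0 / np.maximum(r2, 1e-3)

        t, *_ = np.linalg.lstsq(A, d, rcond=None)       # algebraic least-squares solve
        t = np.clip(t, 0.0, None)                       # dwell times cannot be negative
        print("dwell times:", np.round(t, 2))
        print("max dose error (Gy):", round(float(np.abs(A @ t - d).max()), 2))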

  7. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A key point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
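
    For orientation, a minimal numerical illustration of the conventional mean-variance baseline that the article extends (all numbers are invented; the article's point is precisely that variance alone misses large deviations):

        import numpy as np

        mu = np.array([0.06, 0.04, 0.08])        # expected annual returns (invented)
        Sigma = np.array([[0.04, 0.01, 0.00],
                          [0.01, 0.02, 0.00],
                          [0.00, 0.00, 0.09]])   # covariance of returns (invented)

        w = np.linalg.solve(Sigma, mu)           # unconstrained optimum: w ~ Sigma^-1 mu
        w /= w.sum()                             # normalize to a fully invested portfolio
        print("mean-variance weights:", np.round(w, 3))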

  8. Battery energy storage sizing when time of use pricing is applied.

    PubMed

    Carpinelli, Guido; Khormali, Shahab; Mottola, Fabio; Proto, Daniela

    2014-01-01

    Battery energy storage systems (BESSs) are considered a key device for actuating the smart grid paradigm. However, the most critical aspect of such a device is its economic feasibility, as it is still a developing technology characterized by high costs and limited life duration. In particular, the sizing of BESSs must be performed in an optimized way in order to maximize the benefits of their use. This paper presents a simple and quick closed-form procedure for the sizing of BESSs in residential and industrial applications when time-of-use tariff schemes are applied. A sensitivity analysis is also performed to consider different perspectives in terms of life span and future costs.
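
    A toy calculation in the spirit of the paper's closed-form sizing (the actual formula is in the paper; every number below is an assumption): under a two-rate time-of-use tariff, the battery charges off-peak and discharges on-peak, so savings scale with the shifted peak-period energy.

        price_peak, price_offpeak = 0.30, 0.10  # $/kWh time-of-use rates (assumed)
        eta = 0.90                              # battery round-trip efficiency
        peak_load_kwh = 20.0                    # daily energy drawn during peak hours
        cost_per_kwh = 300.0                    # installed battery cost, $/kWh (assumed)
        cycle_life = 4000                       # cycles at the chosen depth of discharge

        capacity = peak_load_kwh / eta          # size battery to shift the whole peak load
        daily_saving = peak_load_kwh * price_peak - capacity * price_offpeak
        lifetime_saving = daily_saving * cycle_life
        print(f"capacity = {capacity:.1f} kWh")
        print(f"viable if lifetime saving {lifetime_saving:.0f} $ exceeds "
              f"battery cost {capacity * cost_per_kwh:.0f} $")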

  9. Battery Energy Storage Sizing When Time of Use Pricing Is Applied

    PubMed Central

    Khormali, Shahab

    2014-01-01

    Battery energy storage systems (BESSs) are considered a key device for actuating the smart grid paradigm. However, the most critical aspect of such a device is its economic feasibility, as it is still a developing technology characterized by high costs and limited life duration. In particular, the sizing of BESSs must be performed in an optimized way in order to maximize the benefits of their use. This paper presents a simple and quick closed-form procedure for the sizing of BESSs in residential and industrial applications when time-of-use tariff schemes are applied. A sensitivity analysis is also performed to consider different perspectives in terms of life span and future costs. PMID:25295309

  10. Inferring Soil Moisture Memory from Streamflow Observations Using a Simple Water Balance Model

    NASA Technical Reports Server (NTRS)

    Orth, Rene; Koster, Randal Dean; Seneviratne, Sonia I.

    2013-01-01

    Soil moisture is known for its integrative behavior and resulting memory characteristics. Soil moisture anomalies can persist for weeks or even months into the future, making initial soil moisture a potentially important contributor to skill in weather forecasting. A major difficulty when investigating soil moisture and its memory using observations is the sparse availability of long-term measurements and their limited spatial representativeness. In contrast, there is an abundance of long-term streamflow measurements for catchments of various sizes across the world. We investigate in this study whether such streamflow measurements can be used to infer and characterize soil moisture memory in the respective catchments. Our approach uses a simple water balance model in which evapotranspiration and runoff ratios are expressed as simple functions of soil moisture; optimized functions for the model are determined using streamflow observations, and the optimized model in turn provides information on soil moisture memory on the catchment scale. The validity of the approach is demonstrated with data from three heavily monitored catchments. The approach is then applied to streamflow data in several small catchments across Switzerland to obtain a spatially distributed description of soil moisture memory and to show how memory varies, for example, with altitude and topography.
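
    A minimal sketch of the kind of water-balance model described, with evapotranspiration and runoff as simple functions of soil moisture (the functional forms and parameters here are assumptions, not the paper's calibrated functions):

        import numpy as np

        rng = np.random.default_rng(1)
        P = rng.exponential(2.0, size=365)     # synthetic daily precipitation (mm)
        wmax, pet = 100.0, 3.0                 # storage capacity (mm), potential ET (mm/day)
        alpha, gamma = 0.8, 2.0                # ET and runoff shape parameters (assumed)

        w, runoff = 50.0, []
        for p in P:
            et = alpha * pet * (w / wmax)      # ET rises with available soil moisture
            q = p * (w / wmax) ** gamma        # wetter soil converts more rain to runoff
            w = min(max(w + p - et - q, 0.0), wmax)
            runoff.append(q)

        # In the paper, functions like these are tuned so modeled runoff matches observed
        # streamflow; soil moisture memory then follows from the autocorrelation of w.
        print("mean daily runoff (mm):", round(float(np.mean(runoff)), 2))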

  11. Selective recovery of silver from waste low-temperature co-fired ceramic and valorization through silver nanoparticle synthesis.

    PubMed

    Swain, Basudev; Shin, Dongyoon; Joo, So Yeong; Ahn, Nak Kyoon; Lee, Chan Gi; Yoon, Jin-Ho

    2017-11-01

    Considering the value of silver metal and silver nanoparticles, the waste generated during manufacturing of low temperature co-fired ceramic (LTCC) were recycled through the simple yet cost effective process by chemical-metallurgy. Followed by leaching optimization, silver was selectively recovered through precipitation. The precipitated silver chloride was valorized though silver nanoparticle synthesis by a simple one-pot greener synthesis route. Through leaching-precipitation optimization, quantitative selective recovery of silver chloride was achieved, followed by homogeneous pure silver nanoparticle about 100nm size were synthesized. The reported recycling process is a simple process, versatile, easy to implement, requires minimum facilities and no specialty chemicals, through which semiconductor manufacturing industry can treat the waste generated during manufacturing of LTCC and reutilize the valorized silver nanoparticles in manufacturing in a close loop process. Our reported process can address issues like; (i) waste disposal, as well as value-added silver recovery, (ii) brings back the material to production stream and address the circular economy, and (iii) can be part of lower the futuristic carbon economy and cradle-to-cradle technology management, simultaneously. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Single element injector cold flow testing for STME swirl coaxial injector element design

    NASA Technical Reports Server (NTRS)

    Hulka, J.; Schneider, J. A.

    1993-01-01

    An oxidizer-swirled coaxial element injector is being investigated for application in the Space Transportation Main Engine (STME). Single element cold flow experiments were conducted to provide characterization of the STME injector element for future analysis, design, and optimization. All tests were conducted to quiescent, ambient backpressure conditions. Spray angle, circumferential spray uniformity, dropsize, and dropsize distribution were measured in water-only and water/nitrogen flows. Rupe mixing efficiency was measured using water/sucrose solution flows with a large grid patternator for simple comparative evaluation of mixing. Factorial designs of experiment were used for statistical evaluation of injector geometrical design features and propellant flow conditions on mixing and atomization. Increasing the free swirl angle of the liquid oxidizer had the greatest influence on increasing the mixing efficiency. The addition of gas assistance had the most significant effect on reducing oxidizer droplet size parameters and increasing droplet size distribution. Increasing the oxidizer injection velocity had the greatest influence for reducing oxidizer droplet size parameters and increasing size distribution for non-gas assisted flows. Single element and multi-element subscale hot fire testing are recommended to verify optimized designs before committing to the STME design.

  13. Optimizing the robustness of electrical power systems against cascading failures.

    PubMed

    Zhang, Yingrui; Yağan, Osman

    2016-06-21

    Electrical power systems are one of the most important infrastructures that support our society. However, their vulnerabilities have raised great concern recently due to several large-scale blackouts around the world. In this paper, we investigate the robustness of power systems against cascading failures initiated by a random attack. This is done under a simple yet useful model based on global and equal redistribution of load upon failures. We provide a comprehensive understanding of system robustness under this model by (i) deriving an expression for the final system size as a function of the size of initial attacks; (ii) deriving the critical attack size after which the system breaks down completely; (iii) showing that complete system breakdown takes place through a first-order (i.e., discontinuous) transition in terms of the attack size; and (iv) establishing the optimal load-capacity distribution that maximizes robustness. In particular, we show that robustness is maximized when the difference between the capacity and initial load is the same for all lines; i.e., when all lines have the same redundant space regardless of their initial load. This is in contrast with the intuitive and commonly used setting where the capacity of a line is a fixed factor of its initial load.
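
    A simulation sketch of the equal-redistribution model under the stated optimal rule (equal free space S on every line); the parameters are invented, but the all-or-nothing outcome illustrates the first-order transition the authors derive:

        import numpy as np

        def surviving_fraction(S, attack_frac=0.1, n=100_000, seed=2):
            rng = np.random.default_rng(seed)
            L = rng.uniform(0.5, 1.5, size=n)          # initial line loads
            C = L + S                                  # equal free space S on every line
            alive = np.ones(n, dtype=bool)
            alive[rng.choice(n, size=int(attack_frac * n), replace=False)] = False
            while alive.any():
                # Conservation: load of every failed line is shared equally by survivors.
                extra = (L.sum() - L[alive].sum()) / alive.sum()
                overloaded = alive & (L + extra > C)
                if not overloaded.any():
                    break
                alive &= ~overloaded
            return alive.mean()

        for S in (0.08, 0.15):   # free space below/above critical for a 10% attack
            print(f"S = {S}: surviving fraction = {surviving_fraction(S):.2f}")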

  14. Hydroxy propyl cellulose capped silver nanoparticles produced by simple dialysis process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francis, L.; Balakrishnan, A.; Sanosh, K.P.

    2010-08-15

    Silver (Ag) nanoparticles (~6 nm) were synthesized using a novel dialysis process. Silver nitrate was used as the starting precursor, ethylene glycol as the solvent, and hydroxy propyl cellulose (HPC) as the capping agent. Different batches of reaction mixtures were prepared with different concentrations of silver nitrate (AgNO3). After reduction and aging, these solutions were subjected to ultraviolet-visible spectroscopy (UVS). The optimized solution, containing 250 mg AgNO3, revealed a strong plasmon resonance peak at ~410 nm in the spectrum, indicating a good colloidal state of the Ag nanoparticles in the diluted solution. The optimized solution was subjected to dialysis to remove any unreacted solvent. UVS of the optimized solution after dialysis showed the plasmon resonance peak shifting to ~440 nm, indicating the reduction of Ag ions into zero-valent Ag. This solution was dried at 80 °C, and the resultant HPC-capped Ag (HPC/Ag) nanoparticles were studied using transmission electron microscopy (TEM) for their particle size and morphology. The particle size distribution (PSD) analysis of these nanoparticles showed a skewed distribution with particle sizes ranging from 3 to 18 nm. The nanoparticles were characterized for phase composition using X-ray diffractometry (XRD) and Fourier transform infrared spectroscopy (FT-IR).

  15. SERS of Individual Nanoparticles on a Mirror: Size Does Matter, but so Does Shape

    PubMed Central

    2016-01-01

    Coupling noble metal nanoparticles by a 1 nm gap to an underlying gold mirror confines light to extremely small volumes, useful for sensing on the nanoscale. Individual measurements of 10 000 such gold nanoparticles of increasing size dramatically show the different scaling of their optical scattering (far-field) and surface-enhanced Raman emission (SERS, near-field). Linear red-shifts of the coupled plasmon modes are seen with increasing size, matching theory. The total SERS from the few hundred molecules under each nanoparticle dramatically increases with increasing size. This scaling shows that maximum SERS emission is always produced from the largest nanoparticles, irrespective of tuning to any plasmonic resonances. Changes of particle facet with nanoparticle size result in vastly weaker scaling of the near-field SERS, without much modifying the far-field, and allow simple approaches for optimizing practical sensing. PMID:27223478

  16. SERS of Individual Nanoparticles on a Mirror: Size Does Matter, but so Does Shape.

    PubMed

    Benz, Felix; Chikkaraddy, Rohit; Salmon, Andrew; Ohadi, Hamid; de Nijs, Bart; Mertens, Jan; Carnegie, Cloudy; Bowman, Richard W; Baumberg, Jeremy J

    2016-06-16

    Coupling noble metal nanoparticles by a 1 nm gap to an underlying gold mirror confines light to extremely small volumes, useful for sensing on the nanoscale. Individual measurements of 10 000 such gold nanoparticles of increasing size dramatically show the different scaling of their optical scattering (far-field) and surface-enhanced Raman emission (SERS, near-field). Linear red-shifts of the coupled plasmon modes are seen with increasing size, matching theory. The total SERS from the few hundred molecules under each nanoparticle dramatically increases with increasing size. This scaling shows that maximum SERS emission is always produced from the largest nanoparticles, irrespective of tuning to any plasmonic resonances. Changes of particle facet with nanoparticle size result in vastly weaker scaling of the near-field SERS, without much modifying the far-field, and allow simple approaches for optimizing practical sensing.

  17. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, both of which contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. Published by Elsevier B.V.
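
    A simplified sketch of the plate-selection idea (the crowding term below is a crude stand-in for the paper's likelihood model; the plate and colony dimensions are assumptions):

        import math

        def best_plate(n0, r, steps, plate_d_mm=90.0, colony_d_mm=3.0):
            # Pick the dilution step whose expected colony count best balances
            # counting statistics against the chance of colonies merging.
            A = math.pi * (plate_d_mm / 2.0) ** 2
            a = math.pi * (colony_d_mm / 2.0) ** 2
            best = None
            for k in range(steps):
                n = n0 / r ** k                         # expected colonies at step k
                p_crowd = 1.0 - math.exp(-n * a / A)    # crude colony-overlap probability
                rel_err = 1.0 / math.sqrt(n) + p_crowd  # Poisson noise + miscount proxy
                if best is None or rel_err < best[1]:
                    best = (k, rel_err, n)
            return best

        k, err, n = best_plate(n0=1e8, r=10, steps=9)
        print(f"count the 10^-{k} plate: ~{n:.0f} colonies, relative error ~{err:.2f}")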

  18. Disease and disaster: Optimal deployment of epidemic control facilities in a spatially heterogeneous population with changing behaviour.

    PubMed

    Gaythorpe, Katy; Adams, Ben

    2016-05-21

    Epidemics of water-borne infections often follow natural disasters and extreme weather events that disrupt water management processes. The impact of such epidemics may be reduced by deployment of transmission control facilities such as clinics or decontamination plants. Here we use a relatively simple mathematical model to examine how demographic and environmental heterogeneities, population behaviour, and behavioural change in response to the provision of facilities combine to determine the optimal configurations of limited numbers of facilities to reduce epidemic size and endemic prevalence. We show that, if the presence of control facilities does not affect behaviour, a good general rule for responsive deployment to minimise epidemic size is to place them in exactly the locations where they will directly benefit the most people. However, if infected people change their behaviour to seek out treatment, then the deployment of facilities offering treatment can lead to complex effects that are difficult to foresee, so careful mathematical analysis is the only way to get a handle on the optimal deployment. Behavioural changes in response to control facilities can also lead to critical facility numbers at which there is a radical change in the optimal configuration, so sequential improvement of a control strategy by adding facilities to an existing optimal configuration does not always produce another optimal configuration. We also show that the pre-emptive deployment of control facilities has conflicting effects. The configurations that minimise endemic prevalence are very different to those that minimise epidemic size, so cost-benefit analysis of strategies to manage endemic prevalence must factor in the frequency of extreme weather events and natural disasters. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Scaling behavior of ground-state energy cluster expansion for linear polyenes

    NASA Astrophysics Data System (ADS)

    Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.

    Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
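
    A small sketch of how the two convergence behaviours can be distinguished in practice: fit per-unit energy increments to an exponential decay and to an inverse power law and compare residuals (synthetic data stand in for the polyene energies):

        import numpy as np
        from scipy.optimize import curve_fit

        n = np.arange(2, 12)                       # substructure (chain) sizes
        e = 0.35 * np.exp(-0.8 * n)                # synthetic per-unit energy increments
        e += 1e-6 * np.random.default_rng(3).normal(size=n.size)

        exp_model = lambda n, a, b: a * np.exp(-b * n)
        pow_model = lambda n, a, p: a * n ** (-p)

        for name, model, p0 in [("exponential", exp_model, (1.0, 1.0)),
                                ("inverse power", pow_model, (1.0, 2.0))]:
            popt, _ = curve_fit(model, n, e, p0=p0, maxfev=10000)
            rss = float(np.sum((model(n, *popt) - e) ** 2))
            print(f"{name}: parameters {np.round(popt, 3)}, residual {rss:.2e}")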

  20. Design of a Telescopic Linear Actuator Based on Hollow Shape Memory Springs

    NASA Astrophysics Data System (ADS)

    Spaggiari, Andrea; Spinella, Igor; Dragoni, Eugenio

    2011-07-01

    Shape memory alloys (SMAs) are smart materials exploited in many applications to build actuators with a high power-to-mass ratio. Typical SMA drawbacks are that wires show poor stroke and excessive length, while helical springs have limited mechanical bandwidth and high power consumption. This study is focused on the design of a large-scale linear SMA actuator conceived to maximize the stroke while limiting the overall size and the electric consumption. This result is achieved by adopting a telescopic multi-stage architecture for the actuator and using SMA helical springs with a hollow cross section to power the stages. The hollow geometry leads to reduced axial size and mass of the actuator and to an enhanced working frequency, while the telescopic design gives the actuator an indexable motion, with a number of different displacements achieved through simple on-off control strategies. An analytical thermo-electro-mechanical model is developed to optimize the device. Output stroke and force are maximized while total size and power consumption are simultaneously minimized. Finally, the optimized actuator, showing good performance from all these points of view, is designed in detail.

  1. Offspring size effects mediate competitive interactions in a colonial marine invertebrate.

    PubMed

    Marshall, Dustin J; Cook, Carly N; Emlet, Richard B

    2006-01-01

    Over the past 30 years, numerous attempts to understand the relationship between offspring size and fitness have been made, and it has become clear that this critical relationship is strongly affected by environmental heterogeneity. For marine invertebrates, there has been a long-standing interest in the evolution of offspring size, but there have been very few empirical and theoretical examinations of post-metamorphic offspring size effects, and almost none have considered the effect of environmental heterogeneity on the offspring size/fitness relationship. We investigated the post-metamorphic effects of offspring size in the field for the colonial marine invertebrate Botrylloides violaceus. We also examined how the relationship between offspring size and performance was affected by three different types of intraspecific competition. We found strong and persistent effects of offspring size on survival and growth, but these effects depended on the level and type of intraspecific competition. Generally, competition strengthened the advantages of increasing maternal investment. Interestingly, we found that offspring size determined the outcome of competitive interaction: juveniles that had more maternal investment were more likely to encroach on another juvenile's territory. This suggests that mothers have the previously unrecognized potential to influence the outcome of competitive interactions in benthic marine invertebrates. We created a simple optimality model, which utilized the data generated from our field experiments, and found that increasing intraspecific competition resulted in an increase in predicted optimal size. Our results suggest that the relationship between offspring size and fitness is highly variable in the marine environment and strongly dependent on the density of conspecifics.

  2. Accomplishing simple, solubility-based separations of rare earth elements with complexes bearing size-sensitive molecular apertures

    PubMed Central

    Bogart, Justin A.; Cole, Bren E.; Boreen, Michael A.; Lippincott, Connor A.; Manor, Brian C.; Carroll, Patrick J.; Schelter, Eric J.

    2016-01-01

    Rare earth (RE) metals are critical components of electronic materials and permanent magnets. Recycling of consumer materials is a promising new source of REs. To incentivize recycling, there is a clear need for the development of simple methods for targeted separations of mixtures of RE metal salts. Metal complexes of a tripodal hydroxylaminato ligand, TriNOx3–, featured a size-sensitive aperture formed of its three η2-(N,O) ligand arms. Exposure of cations in the aperture induced a self-associative equilibrium comprising RE(TriNOx)THF and [RE(TriNOx)]2 species. Differences in the equilibrium constants Kdimer for early and late metals enabled simple separations through leaching. Separations were performed on RE1/RE2 mixtures, where RE1 = La–Sm and RE2 = Gd–Lu, with emphasis on Eu/Y separations for potential applications in the recycling of phosphor waste from compact fluorescent light bulbs. Using the leaching method, separation factors approaching 2,000 were obtained for early–late RE combinations. Following solvent optimization, >95% pure samples of Eu were obtained with a 67% recovery for the technologically relevant Eu/Y separation. PMID:27956636

  3. Accomplishing simple, solubility-based separations of rare earth elements with complexes bearing size-sensitive molecular apertures.

    PubMed

    Bogart, Justin A; Cole, Bren E; Boreen, Michael A; Lippincott, Connor A; Manor, Brian C; Carroll, Patrick J; Schelter, Eric J

    2016-12-27

    Rare earth (RE) metals are critical components of electronic materials and permanent magnets. Recycling of consumer materials is a promising new source of REs. To incentivize recycling, there is a clear need for the development of simple methods for targeted separations of mixtures of RE metal salts. Metal complexes of a tripodal hydroxylaminato ligand, TriNOx3-, featured a size-sensitive aperture formed of its three η2-(N,O) ligand arms. Exposure of cations in the aperture induced a self-associative equilibrium comprising RE(TriNOx)THF and [RE(TriNOx)]2 species. Differences in the equilibrium constants Kdimer for early and late metals enabled simple separations through leaching. Separations were performed on RE1/RE2 mixtures, where RE1 = La-Sm and RE2 = Gd-Lu, with emphasis on Eu/Y separations for potential applications in the recycling of phosphor waste from compact fluorescent light bulbs. Using the leaching method, separation factors approaching 2,000 were obtained for early-late RE combinations. Following solvent optimization, >95% pure samples of Eu were obtained with a 67% recovery for the technologically relevant Eu/Y separation.

  4. Electrode Mass Balancing as an Inexpensive and Simple Method to Increase the Capacitance of Electric Double-Layer Capacitors

    PubMed Central

    Andres, Britta; Engström, Ann-Christine; Blomquist, Nicklas; Forsberg, Sven; Dahlström, Christina; Olin, Håkan

    2016-01-01

    Symmetric electric double-layer capacitors (EDLCs) have equal masses of the same active material in both electrodes. However, having equal electrode masses may prevent the EDLC from having the largest possible specific capacitance if the sizes of the hydrated anions and cations in the electrolyte differ, because the electrodes and the electrolyte may not be completely utilized. Here we demonstrate how this issue can be resolved by mass balancing. If the electrode masses are adjusted according to the size of the ions, one can easily increase an EDLC's specific capacitance. To that end, we performed galvanostatic cycling to measure the capacitances of symmetric EDLCs with different electrode mass ratios using four aqueous electrolytes (Na2SO4, H2SO4, NaOH, and KOH, all with a concentration of 1 M) and compared these to the theoretical optimal electrode mass ratio that we calculated using the sizes of the hydrated ions. Both the theoretical and experimental values revealed optimal electrode mass ratios below 1 for all electrolytes except KOH. The largest increase in capacitance was obtained for EDLCs with NaOH as electrolyte. Specifically, we demonstrate an increase of the specific capacitance by 8.6% by adjusting the electrode mass ratio from 1 to 0.86. Our findings demonstrate that electrode mass balancing is a simple and inexpensive method to increase the capacitance of EDLCs. Furthermore, our results imply that one can reduce the amount of unused material in EDLCs and thus decrease their weight, volume and cost. PMID:27658253
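
    A numerical illustration of the underlying series-capacitor argument (the specific capacitances below are invented; the paper's optimal ratios come from measured values and hydrated ion sizes). Note the optimum falls below 1 when the positive electrode stores more charge per gram, consistent with the sub-unity ratios reported:

        import numpy as np

        c_pos, c_neg = 120.0, 90.0             # specific capacitances, F/g (assumed)
        ratios = np.linspace(0.3, 3.0, 2701)   # candidate mass ratios m+/m-
        m_neg = 1.0 / (1.0 + ratios)           # total electrode mass fixed at 1 g
        m_pos = 1.0 - m_neg

        C_pos, C_neg = c_pos * m_pos, c_neg * m_neg
        C_cell = C_pos * C_neg / (C_pos + C_neg)      # two capacitors in series
        best = ratios[np.argmax(C_cell)]
        equal = C_cell[np.argmin(np.abs(ratios - 1.0))]
        print(f"optimal m+/m- = {best:.2f}, gain over equal masses = "
              f"{C_cell.max() / equal - 1:.1%}")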

  5. A Practical Scoring System to Select Optimally Sized Devices for Percutaneous Patent Foramen Ovale Closure.

    PubMed

    Venturini, Joseph M; Retzer, Elizabeth M; Estrada, J Raider; Mediratta, Anuj; Friant, Janet; Nathan, Sandeep; Paul, Jonathan D; Blair, John; Lang, Roberto M; Shah, Atman P

    2016-10-01

    Patent foramen ovale (PFO) has been linked to cryptogenic stroke, and closure has been reported to improve clinical outcomes. However, there are no clear guidelines to direct device sizing. This study sought to use patient characteristics and echocardiographic findings to create a prediction score for device sizing. This was a retrospective review of patients undergoing percutaneous PFO closure at our institution between July 2010 and December 2014. Demographic and clinical characteristics were recorded, and all pre- and intraprocedural echocardiography results were evaluated. Thirty-six patients underwent percutaneous PFO closure during the study period. All procedures were performed using an Amplatzer Septal Occluder "Cribriform" (ASOC) device in one of three disc diameters: 25, 30, or 35 mm. Closure was indicated for cryptogenic stroke/transient ischemic attack in 75% of cases. Every case (100%) was successful with durable shunt correction at the 6-month follow-up without complications of erosion or device embolization. The presence of atrial septal aneurysm (ASA) (p = 0.027) and PFO tunnel length >10 mm (p = 0.038) were independently associated with increased device size. A scoring system of 1 point for male sex, 1 point for ASA, and 1 point for PFO tunnel >10 mm long was associated with the size of closure device implanted (p = 0.006). A simple scoring system may be used to select an optimally sized device for percutaneous PFO closure using the ASOC device.

  6. Theoretical analysis of the effect of particle size and support on the kinetics of oxygen reduction reaction on platinum nanoparticles

    NASA Astrophysics Data System (ADS)

    Viswanathan, Venkatasubramanian; Wang, Frank Yi-Fei

    2012-07-01

    We perform a first-principles based computational analysis of the effect of particle size and support material on the electrocatalytic activity of platinum nanoparticles. Using a mechanism for oxygen reduction that accounts for electric field effects and stabilization from the water layer on the (111) and (100) facets, we show that the model used agrees well with linear sweep voltammetry and rotating ring disk electrode experiments. We find that the per-site activity of the nanoparticle saturates for particles larger than 5 nm and we show that the optimal particle size is in the range of 2.5-3.5 nm, which agrees well with recent experimental work. We examine the effect of support material and show that the perimeter sites on the metal-support interface are important in determining the overall activity of the nanoparticles. We also develop simple geometric estimates for the activity which can be used for determining the activity of other particle shapes and sizes.

  7. A new method to optimize natural convection heat sinks

    NASA Astrophysics Data System (ADS)

    Lampio, K.; Karvinen, R.

    2017-08-01

    The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates flow. Our model utilizes analytical results of forced flow and convection, and only conduction in a solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is proved by comparing the results of our model with some simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it cuts down on calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposite effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it as a basis for optimizing existing heat sinks.
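
    A minimal particle swarm optimization loop of the kind applied in the paper, here minimizing a two-dimensional test function instead of a heat-sink model (the inertia and acceleration coefficients are textbook defaults, not the paper's settings):

        import numpy as np

        def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=4):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # particle positions
            v = np.zeros_like(x)                                  # particle velocities
            pbest = x.copy()
            pbest_f = np.apply_along_axis(f, 1, x)
            g = pbest[pbest_f.argmin()].copy()                    # swarm's best position
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.apply_along_axis(f, 1, x)
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, float(pbest_f.min())

        # Stand-in objective; the paper would evaluate heat-sink maximum temperature.
        himmelblau = lambda p: (p[0]**2 + p[1] - 11)**2 + (p[0] + p[1]**2 - 7)**2
        print(pso(himmelblau, np.array([-5.0, -5.0]), np.array([5.0, 5.0])))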

  8. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Lee, Charles H.

    2012-01-01

    We developed a framework and mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a system whose search space is much smaller than that of the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. The constrained optimization problem is solved in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so that problems with larger numbers of constraints and larger networks can be easily adapted and solved.
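
    A small sketch of the penalty idea described, relaxing binary scheduling variables to the unit interval and penalizing fractional values (the objective, data, and constraint are invented; the paper's formulation is far richer):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        value = rng.uniform(1.0, 3.0, size=8)    # value of granting each contact slot
        mu = 10.0                                # penalty weight

        def objective(x):
            # Maximize total value (minimize its negative); mu*x*(1-x) vanishes only
            # at 0/1, pushing the continuous optimizer toward an integral schedule.
            return -value @ x + mu * np.sum(x * (1.0 - x))

        cons = ({"type": "ineq", "fun": lambda x: 3.0 - x.sum()},)  # at most 3 slots
        res = minimize(objective, x0=np.full(8, 0.5), bounds=[(0.0, 1.0)] * 8,
                       constraints=cons, method="SLSQP")
        print("relaxed schedule:", np.round(res.x, 2))  # near-binary after the penalty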

  9. Construction and cellular uptake behavior of redox-sensitive docetaxel prodrug-loaded liposomes.

    PubMed

    Ren, Guolian; Jiang, Mengjuan; Guo, Weiling; Sun, Bingjun; Lian, He; Wang, Yongjun; He, Zhonggui

    2018-01-01

    A redox-responsive docetaxel (DTX) prodrug consisting of a disulfide linkage between DTX and vitamin E (DTX-SS-VE) was synthesized in our laboratory and was successfully formulated into liposomes. The aim of this study was to optimize the formulation and investigate the cellular uptake of DTX prodrug-loaded liposomes (DPLs). The content of DTX-SS-VE was determined by ultrahigh-performance liquid chromatography (UPLC). The formulation and process were optimized using entrapment efficiency (EE), drug-loading (DL), particle size and polydispersity index (PDI) as the evaluation indices. The optimal formulation was as follows: drug/lipid ratio of 1:12, cholesterol/lipid ratio of 1:10, hydration temperature of 40 °C, sonication power and time of 400 W and 5 min. The EE, DL and particle size of the optimized DPLs were 97.60 ± 0.03%, 7.09 ± 0.22% and 93.06 ± 0.72 nm, respectively. DPLs had good dilution stability under the physiological conditions over 24 h. In addition, DPLs were found to enter tumor cells via different pathways and released DTX from the prodrug to induce apoptosis. Taken together, the optimized formulation and process were found to be a simple, stable and applicable method for the preparation of DPLs that could successfully escape from lysosomes.

  10. Demonstration of surface-enhanced Raman scattering by tunable, plasmonic gallium nanoparticles

    PubMed Central

    Wu, Pae C; Khoury, Christopher G.; Kim, Tong-Ho; Yang, Yang; Losurdo, Maria; Bianco, Giuseppe V.; Vo-Dinh, Tuan; Brown, April S.; Everitt, Henry O.

    2009-01-01

    Size-controlled gallium nanoparticles deposited on sapphire are explored as alternative substrates to enhance Raman spectral signatures. Gallium’s resilience following oxidation is inherently advantageous compared to silver for practical ex vacuo, non-solution applications. Ga nanoparticles are grown using a simple, molecular beam epitaxy-based fabrication protocol, and by monitoring their corresponding surface plasmon resonance energy through in situ spectroscopic ellipsometry, the nanoparticles are easily controlled for size. Raman spectroscopy performed on cresyl fast violet (CFV) deposited on substrates of differing mean nanoparticle size represents the first demonstration of enhanced Raman signals from reproducibly tunable self-assembled Ga nanoparticles. Non-optimized aggregate enhancement factors of ~80 were observed from the substrate with the smallest Ga nanoparticles for CFV dye solutions down to a dilution of 10 ppm. PMID:19655747

  11. Determining optimal selling price and lot size with process reliability and partial backlogging considerations

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsu-Pang; Cheng, Mei-Chuan; Dye, Chung-Yuan; Ouyang, Liang-Yuh

    2011-01-01

    In this article, we extend the classical economic production quantity (EPQ) model by proposing imperfect production processes and quality-dependent unit production cost. The demand rate is described by any convex decreasing function of the selling price. In addition, we allow for shortages and a time-proportional backlogging rate. For any given selling price, we first prove that the optimal production schedule not only exists but also is unique. Next, we show that the total profit per unit time is a concave function of price when the production schedule is given. We then provide a simple algorithm to find the optimal selling price and production schedule for the proposed model. Finally, we use a couple of numerical examples to illustrate the algorithm and conclude this article with suggestions for possible future research.
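
    A sketch of the outer price search implied by the article's structure: for each candidate price the inner production-schedule problem is solved (replaced here by a crude stub), and the concave profit is maximized by ternary search (the demand form and all parameters are illustrative, not the article's model):

        def profit(p, a=100.0, b=4.0, unit_cost=5.0, setup=50.0):
            # Profit per unit time at selling price p; the inner optimal schedule
            # is replaced by an EPQ-style cycle-cost stub.
            d = max(a - b * p, 0.0)                   # decreasing demand D(p) = a - b*p
            if d == 0.0:
                return 0.0
            cycle_cost = (2.0 * setup * d) ** 0.5     # sqrt(2*K*D), holding cost set to 1
            return (p - unit_cost) * d - cycle_cost

        lo, hi = 5.0, 25.0
        for _ in range(100):                          # ternary search: profit is concave in p
            m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
            if profit(m1) < profit(m2):
                lo = m1
            else:
                hi = m2
        print(f"optimal price = {0.5 * (lo + hi):.2f}, profit = {profit(lo):.2f}")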

  12. Simple fabrication of closed-packed IR microlens arrays on silicon by femtosecond laser wet etching

    NASA Astrophysics Data System (ADS)

    Meng, Xiangwei; Chen, Feng; Yang, Qing; Bian, Hao; Du, Guangqing; Hou, Xun

    2015-10-01

    We demonstrate a simple route to fabricate closed-packed infrared (IR) silicon microlens arrays (MLAs) based on femtosecond laser irradiation assisted by a wet etching method. The fabricated MLAs show high fill factor, smooth surface and good uniformity. They can be used as optical devices for IR applications. The exposure and etching parameters are optimized to obtain reproducible microlenses with hexagonal and rectangular arrangements. The surface roughness of the concave MLAs is only 56 nm. The presented method is a maskless process and can flexibly change the size, shape and fill factor of the MLAs by controlling the experimental parameters. The concave MLAs on silicon can work in the IR region and can be used for IR sensors and imaging applications.

  13. A detailed comparison of optimality and simplicity in perceptual decision-making

    PubMed Central

    Shen, Shan; Ma, Wei Ji

    2017-01-01

    Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259

  14. Facile and green synthesis of highly stable L-cysteine functionalized copper nanoparticles

    NASA Astrophysics Data System (ADS)

    Kumar, Nikhil; Upadhyay, Lata Sheo Bachan

    2016-11-01

    A simple eco-friendly method for the synthesis of L-cysteine capped copper nanoparticles (CCNPs) in aqueous solution has been developed. Glucose and L-cysteine were used as the reducing agent and the capping/functionalizing agent, respectively. Different parameters such as capping agent concentration, pH, reaction temperature, and reducing agent concentration were optimized during the synthesis. The L-cysteine capped copper nanoparticles were characterized by ultraviolet-visible spectroscopy, Fourier-transform infrared spectroscopy, X-ray diffraction, particle size and zeta potential analysis, and high-resolution transmission electron microscopy. Spherical cysteine-functionalized/capped copper nanoparticles with an average size of 40 nm were found to be highly stable at room temperature (RT) for a period of 1 month.

  15. On the optimal sizing of batteries for electric vehicles and the influence of fast charge

    NASA Astrophysics Data System (ADS)

    Verbrugge, Mark W.; Wampler, Charles W.

    2018-04-01

    We provide a brief summary of advanced battery technologies and a framework (i.e., a simple model) for assessing electric-vehicle (EV) architectures and associated costs to the customer. The end result is a qualitative model that can be used to calculate the optimal EV range (which maps back to the battery size and performance), including the influence of fast charge. We are seeing two technological pathways emerging: fast-charge-capable batteries versus batteries with much higher energy densities (and specific energies) but without the capability to fast charge. How do we compare and contrast the two alternatives? This work seeks to shed light on the question. We consider costs associated with the cells, added mass due to the use of larger batteries, and charging, three factors common in such analyses. In addition, we consider a new cost input, namely, the cost of adaption, corresponding to the days a customer would need an alternative form of transportation, as the EV would not have sufficient range on those days.
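
    A toy version of the qualitative trade-off described, balancing battery cost against a "cost of adaptation" for days when trips exceed the range (the daily-distance distribution and every constant are assumptions for illustration):

        import numpy as np

        rng = np.random.default_rng(6)
        daily_km = rng.lognormal(mean=3.4, sigma=0.8, size=100_000)  # ~30 km median

        def annual_cost(range_km, kwh_per_km=0.18, batt_usd_per_kwh_yr=20.0,
                        energy_usd_per_kwh=0.12, adaptation_usd_per_day=60.0):
            battery_kwh = range_km * kwh_per_km
            batt = battery_kwh * batt_usd_per_kwh_yr            # amortized battery cost
            energy = daily_km.mean() * 365 * kwh_per_km * energy_usd_per_kwh
            adapt_days = 365 * np.mean(daily_km > range_km)     # days needing a backup
            return batt + energy + adapt_days * adaptation_usd_per_day

        ranges = np.arange(100, 800, 10)
        costs = [annual_cost(r) for r in ranges]
        print("cost-minimizing range =", ranges[np.argmin(costs)], "km")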

  16. Cluster Free Energies from Simple Simulations of Small Numbers of Aggregants: Nucleation of Liquid MTBE from Vapor and Aqueous Phases.

    PubMed

    Patel, Lara A; Kindt, James T

    2017-03-14

    We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.

  17. Addressing the minimum fleet problem in on-demand urban mobility.

    PubMed

    Vazifeh, M M; Santi, P; Resta, G; Strogatz, S H; Ratti, C

    2018-05-01

    Information and communication technologies have opened the way to new solutions for urban mobility that provide better ways to match individuals with on-demand vehicles. However, a fundamental unsolved problem is how best to size and operate a fleet of vehicles, given a certain demand for personal mobility. Previous studies [1-5] either do not provide a scalable solution or require changes in human attitudes towards mobility. Here we provide a network-based solution to the following 'minimum fleet problem': given a collection of trips (specified by origin, destination, and start time), determine the minimum number of vehicles needed to serve all the trips without incurring any delay to the passengers. By introducing the notion of a 'vehicle-sharing network', we present an optimal computationally efficient solution to the problem, as well as a nearly optimal solution amenable to real-time implementation. We test both solutions on a dataset of 150 million taxi trips taken in the city of New York over one year [6]. The real-time implementation of the method with near-optimal service levels allows a 30 per cent reduction in fleet size compared to current taxi operation. Although constraints on driver availability and the existence of abnormal trip demands may lead to a relatively larger optimal value for the fleet size than that predicted here, the fleet size remains robust for a wide range of variations in historical trip demand. These predicted reductions in fleet size follow directly from a reorganization of taxi dispatching that could be implemented with a simple urban app; they do not assume ride sharing [7-9], nor require changes to regulations, business models, or human attitudes towards mobility to become effective. Our results could become even more relevant in the years ahead as fleets of networked, self-driving cars become commonplace [10-14].
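
    A minimal sketch of the vehicle-sharing-network reduction: trips are nodes, an edge i -> j means one vehicle can finish trip i and reach trip j in time, and the minimum fleet equals the number of trips minus a maximum bipartite matching (the trip data and repositioning time are invented):

        import networkx as nx
        from networkx.algorithms import bipartite

        trips = [(0, 20), (10, 15), (35, 25), (40, 10), (70, 20)]  # (start_min, duration_min)
        reposition = 10                    # assumed uniform repositioning time (min)

        B = nx.Graph()
        left = [("L", i) for i in range(len(trips))]
        right = [("R", j) for j in range(len(trips))]
        B.add_nodes_from(left)
        B.add_nodes_from(right)
        for i, (s_i, d_i) in enumerate(trips):
            for j, (s_j, _) in enumerate(trips):
                if i != j and s_i + d_i + reposition <= s_j:  # vehicle can chain i -> j
                    B.add_edge(("L", i), ("R", j))

        matching = bipartite.maximum_matching(B, top_nodes=left)
        chained = sum(1 for node in matching if node[0] == "L")  # matched left nodes
        print("minimum fleet size:", len(trips) - chained)       # prints 2 for these trips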

  18. A compact inflow control device for simulating flight fan noise

    NASA Technical Reports Server (NTRS)

    Homyak, L.; Mcardle, J. G.; Heidelberg, L. J.

    1983-01-01

    Inflow control devices (ICDs) of various shapes and sizes have been used to simulate in-flight fan tone noise during ground static tests. A small, simple, inexpensive ICD design was optimized using previous design and fabrication techniques. This compact, two-fan-diameter ICD exhibits satisfactory acoustic performance without causing noise attenuation or redirection. In addition, it generates no important new noise sources. Design and construction details of the compact ICD are discussed and acoustic performance test results are presented.

  19. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and its consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of the helical parameters and the EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
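    As a rough illustration of how reference step sizes follow from a resolution estimate and the Nyquist criterion, consider the sketch below; the numerical values are assumptions, not the paper's system parameters.

    ```python
    import math

    # Hypothetical system values (assumptions, not from the paper):
    resolution_mm = 1.5      # estimated SPECT spatial resolution
    fov_radius_mm = 30.0     # radius of the transverse field of view

    # Nyquist sampling: sample at half the resolvable distance.
    nyquist_step_mm = resolution_mm / 2.0

    # Reference axial step is the Nyquist distance; the reference angular
    # step is the angle whose arc at the FOV edge equals that distance.
    axial_step_mm = nyquist_step_mm
    angular_step_deg = math.degrees(nyquist_step_mm / fov_radius_mm)

    # The study's empirical finding, applied to the reference values:
    # roughly halve the axial step and double the angular step.
    print(f"suggested axial step: {axial_step_mm / 2:.2f} mm")
    print(f"suggested angular step: {2 * angular_step_deg:.2f} deg")
    ```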

  20. Meta-analysis of mismatch negativity to simple versus complex deviants in schizophrenia.

    PubMed

    Avissar, Michael; Xie, Shanghong; Vail, Blair; Lopez-Calderon, Javier; Wang, Yuanjia; Javitt, Daniel C

    2018-01-01

    Mismatch negativity (MMN) deficits in schizophrenia (SCZ) have been studied extensively since the early 1990s, with the vast majority of studies using simple auditory oddball task deviants that vary in a single acoustic dimension such as pitch or duration. There has been growing interest in using more complex deviants that violate more abstract rules to probe higher-order cognitive deficits. It is still unclear how sensory processing deficits compare to and contribute to higher-order cognitive dysfunction, which can be investigated with later attention-dependent auditory event-related potential (ERP) components such as the P3b subcomponent of the P300. In this meta-analysis, we compared MMN deficits in SCZ using simple deviants to those using more complex deviants. We also pooled studies that measured MMN and P3b in the same study sample and examined the relationship between MMN and P3b deficits within study samples. Our analysis reveals that, to date, studies using simple deviants demonstrate larger deficits than those using complex deviants, with effect sizes in the moderate-to-large range. The difference in effect sizes between deviant types was reduced significantly when accounting for the magnitude of MMN measured in healthy controls. P3b deficits, while large, were only modestly greater than MMN deficits (d = 0.21). Taken together, our findings suggest that MMN to simple deviants may still be optimal as a biomarker for SCZ and that sensory processing dysfunction contributes significantly to the MMN deficit and disease pathophysiology. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Rarity-weighted richness: a simple and reliable alternative to integer programming and heuristic algorithms for minimum set and maximum coverage problems in conservation planning.

    PubMed

    Albuquerque, Fabio; Beier, Paul

    2015-01-01

    Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (the minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (the maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to the numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km². On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
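    The RWR score itself takes only a few lines to compute: a species' rarity weight is the reciprocal of the number of sites it occupies, and a site's RWR is the sum of the weights of its species. A sketch under assumed toy occurrence data (the paper's analyses used 11 real datasets):

    ```python
    from collections import defaultdict

    # site -> set of species present (toy occurrence data; assumption)
    sites = {
        "s1": {"sp1", "sp2"},
        "s2": {"sp2", "sp3", "sp4"},
        "s3": {"sp4"},
    }

    # Rarity weight of a species = 1 / number of sites it occupies.
    occupancy = defaultdict(int)
    for spp in sites.values():
        for sp in spp:
            occupancy[sp] += 1

    # RWR of a site = sum of rarity weights of the species it holds.
    rwr = {site: sum(1.0 / occupancy[sp] for sp in spp)
           for site, spp in sites.items()}

    # Prioritize sites in decreasing RWR order.
    for site in sorted(rwr, key=rwr.get, reverse=True):
        print(site, round(rwr[site], 3))
    ```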

  2. Water supply pipe dimensioning using hydraulic power dissipation

    NASA Astrophysics Data System (ADS)

    Sreemathy, J. R.; Rashmi, G.; Suribabu, C. R.

    2017-07-01

    Proper sizing of the pipe components of water distribution networks plays an important role in the overall design of any water supply system. Several approaches have been applied to the design of networks from an economical point of view. Traditional optimization techniques and population-based stochastic algorithms are widely used to optimize networks, but their use is mostly limited to the research level because they are difficult for practicing engineers, design engineers, and consulting firms to apply. Moreover, the non-availability of commercial software for the optimal design of water distribution systems forces practicing engineers to adopt either trial-and-error or experience-based design. This paper presents a simple approach that uses the power dissipation in each pipeline as a parameter to design the network economically, though not to the level of the global minimum cost.
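    A minimal sketch of the hydraulic power dissipated in a single pipe, assuming the SI Hazen-Williams head-loss formula and illustrative flow and length values; the paper's network-level design rule is not reproduced here.

    ```python
    RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

    def head_loss_hazen_williams(q, length, diameter, c=130.0):
        """Friction head loss (m) via the SI Hazen-Williams formula."""
        return 10.67 * length * q ** 1.852 / (c ** 1.852 * diameter ** 4.87)

    def power_dissipation(q, length, diameter, c=130.0):
        """Hydraulic power (W) dissipated by friction: rho * g * Q * h_f."""
        return RHO * G * q * head_loss_hazen_williams(q, length, diameter, c)

    # Compare candidate diameters for one pipe carrying 20 L/s over 500 m
    # (flow and length are illustrative assumptions).
    for d in (0.10, 0.15, 0.20, 0.25):
        print(f"D = {d:.2f} m -> {power_dissipation(0.02, 500.0, d) / 1000:.2f} kW")
    ```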

  3. Easy preparation of a large-size random gene mutagenesis library in Escherichia coli.

    PubMed

    You, Chun; Percival Zhang, Y-H

    2012-09-01

    A simple and fast protocol for the preparation of a large-size mutant library for directed evolution in Escherichia coli was developed based on the DNA multimers generated by prolonged overlap extension polymerase chain reaction (POE-PCR). This protocol comprised the following: (i) a linear DNA mutant library was generated by error-prone PCR or shuffling, and a linear vector backbone was prepared by regular PCR; (ii) DNA multimers were generated from these two DNA templates by POE-PCR; and (iii) the multimers, digested with a single restriction enzyme, were ligated into circular plasmids, followed by transformation into E. coli. Because the ligation efficiency of one DNA fragment is several orders of magnitude higher than that of two DNA fragments, as in typical mutant library construction, it was very easy to generate a mutant library with a size of more than 10^7 protein mutants per 50 μl of the POE-PCR product. Via this method, four new fluorescent protein mutants were obtained based on monomeric cherry fluorescent protein. This new protocol was simple and fast because it did not require labor-intensive optimization of restriction enzyme digestion and ligation, did not involve special plasmid design, and enabled construction of a large-size mutant library for directed enzyme evolution within 1 day. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; and (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical considerations are integrated with a number of simulation studies, based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
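    For the stratified case, the classical Neyman allocation that such optimal-allocation results particularize is easy to state in code. A sketch with assumed stratum sizes and standard deviations:

    ```python
    import numpy as np

    def neyman_allocation(n_total, stratum_sizes, stratum_sds):
        """Optimal (Neyman) allocation: n_h proportional to N_h * S_h,
        which minimizes the variance of the stratified mean estimator."""
        weights = np.asarray(stratum_sizes, float) * np.asarray(stratum_sds, float)
        return np.round(n_total * weights / weights.sum()).astype(int)

    # Toy strata (sizes and score SDs are assumptions for illustration).
    print(neyman_allocation(300, [1000, 4000, 2000], [2.0, 1.0, 3.5]))
    ```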

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumuluru, Jaya Shankar; McCulloch, Richard Chet James

    In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest-ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest-ascent algorithm, each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite the previous direction of most benefit, the step size is reduced by a factor of 2; thus the step size adapts to the terrain, as in the sketch below. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested by optimizing the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
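    A minimal sketch of the adaptive steepest-ascent component as described, with a toy objective standing in for the pellet-quality functions; parameter values are assumptions.

    ```python
    import numpy as np

    def adaptive_steepest_ascent(f, x0, step=0.5, min_step=1e-6, max_iter=1000):
        """Perturb each variable in turn, move along the single best coordinate
        direction, and halve the step when the best direction reverses."""
        x = np.asarray(x0, dtype=float)
        last_dir = None
        for _ in range(max_iter):
            best_val, best_dir = f(x), None
            for i in range(x.size):
                for sign in (1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sign * step
                    val = f(trial)
                    if val > best_val:
                        best_val, best_dir = val, (i, sign)
            if best_dir is None:
                step /= 2.0  # no improving direction: refine the search scale
            else:
                i, sign = best_dir
                if last_dir == (i, -sign):
                    step /= 2.0  # direction flipped: adapt step to the terrain
                x[i] += sign * step
                last_dir = best_dir
            if step < min_step:
                break
        return x

    # Toy concave objective standing in for the pellet-quality functions.
    print(adaptive_steepest_ascent(lambda v: -(v[0] - 1.0) ** 2 - (v[1] + 2.0) ** 2,
                                   [0.0, 0.0]))
    ```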

  6. Penetrating the oxide barrier in situ and separating freestanding porous anodic alumina films in one step.

    PubMed

    Tian, Mingliang; Xu, Shengyong; Wang, Jinguo; Kumar, Nitesh; Wertz, Eric; Li, Qi; Campbell, Paul M; Chan, Moses H W; Mallouk, Thomas E

    2005-04-01

    A simple method for penetrating the barrier layer of an anodic aluminum oxide (AAO) film and for detaching the AAO film from residual Al foil was developed by reversing the bias voltage in situ after the anodization process is completed. With this technique, we have been able to obtain large pieces of free-standing AAO membranes with regular pore sizes of sub-10 nm. By combining Ar ion milling and wetting enhancement processes, Au nanowires were grown in the sub-10 nm pores of the AAO films. Further scaling down of the pore size and extension to the deposition of nanowires and nanotubes of materials other than Au should be possible by further optimizing this procedure.

  7. One-step microwave-assisted synthesis of water-dispersible Fe3O4 magnetic nanoclusters for hyperthermia applications

    NASA Astrophysics Data System (ADS)

    Sathya, Ayyappan; Kalyani, S.; Ranoo, Surojit; Philip, John

    2017-10-01

    To realize magnetic hyperthermia as an alternate stand-alone therapeutic procedure for cancer treatment, magnetic nanoparticles with optimal performance within biologically safe limits must be produced using simple, reproducible and scalable techniques. Herein, we present a simple, one-step approach for the synthesis of water-dispersible magnetic nanoclusters (MNCs) of superparamagnetic iron oxide by reduction of Fe2(SO4)3 in sodium acetate (alkali), poly(ethylene glycol) (capping ligand), and ethylene glycol (solvent and reductant) in a microwave reactor. The average size and saturation magnetization of the MNCs are tuned from 27 to 52 nm and 32 to 58 emu/g by increasing the reaction time from 10 to 600 s. Transmission electron microscopy images reveal that each MNC is composed of a large number of primary Fe3O4 nanoparticles. The synthesised MNCs show excellent colloidal stability in the aqueous phase due to the adsorbed PEG layer. The highest specific absorption rate (SAR) of 215 ± 10 W/gFe, observed for the 52 nm MNCs at a frequency of 126 kHz and a field of 63 kA/m, suggests the potential use of these MNCs in hyperthermia applications. This study further opens up the possibility of developing metal ion-doped MNCs with tunable sizes suitable for various biomedical applications using microwave-assisted synthesis.

  8. pH-responsive and enzymatically-responsive hydrogel microparticles for the oral delivery of therapeutic proteins: Effects of protein size, crosslinking density, and hydrogel degradation on protein delivery.

    PubMed

    Koetting, Michael Clinton; Guido, Joseph Frank; Gupta, Malvika; Zhang, Annie; Peppas, Nicholas A

    2016-01-10

    Two potential platform technologies for the oral delivery of protein therapeutics were synthesized and tested. pH-responsive poly(itaconic acid-co-N-vinyl-2-pyrrolidone) (P(IA-co-NVP)) hydrogel microparticles were tested in vitro with the model proteins salmon calcitonin, urokinase, and rituximab to determine the effects of particle size, protein size, and crosslinking density on oral delivery capability. Particle size showed no significant effect on overall delivery potential but did improve the percent release of encapsulated protein over the micro-scale particle size range studied. Protein size was shown to have a significant impact on the delivery capability of the P(IA-co-NVP) hydrogel. We show that when using P(IA-co-NVP) hydrogel microparticles with 3 mol% tetra(ethylene glycol) dimethacrylate (TEGDMA) crosslinker, a small polypeptide (salmon calcitonin) loads and releases up to 45 μg/mg hydrogel, while the mid-sized protein urokinase and the large monoclonal antibody rituximab load and release only 19 and 24 μg/mg hydrogel, respectively. We further demonstrate that crosslinking density offers a simple method for tuning hydrogel properties to variously sized proteins. Using 5 mol% TEGDMA crosslinker offers optimal performance for the small peptide, salmon calcitonin, whereas a lower crosslinking density of 1 mol% offers optimal performance for the much larger protein rituximab. Finally, enzymatically-degradable hydrogels of P(MAA-co-NVP) crosslinked with the peptide sequence MMRRRKK were synthesized and tested in simulated gastric and intestinal conditions. These hydrogels offer ideal loading and release behavior, showing no degradative release of encapsulated salmon calcitonin in gastric conditions while yielding rapid and complete release of encapsulated protein within 1 h in intestinal conditions. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Development of a fast and feasible spectrum modeling technique for flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Woong; Bush, Karl; Mok, Ed

    Purpose: To develop a fast and robust technique for the determination of optimized photon spectra for flattening filter free (FFF) beams to be applied in convolution/superposition dose calculations. Methods: A two-step optimization method was developed to derive optimal photon spectra for FFF beams. In the first step, a simple functional form of the photon spectra proposed by Ali ['Functional forms for photon spectra of clinical linacs,' Phys. Med. Biol. 57, 31-50 (2011)] is used to determine generalized shapes of the photon spectra. In this method, the photon spectra were defined for a range of field sizes to account for the variation of the contribution of scattered photons with field size. Percent depth doses (PDDs) for each field size were measured and calculated to define a cost function, and a collapsed cone convolution (CCC) algorithm was used to calculate the PDDs. In the second step, the generalized functional form of the photon spectra was fine-tuned in a process whereby the weights of the photon fluence became the free optimization parameters. A line search method was used for the optimization, and first-order derivatives with respect to the optimization parameters were derived from the CCC algorithm to enhance the speed of the optimization. The derived photon spectra were evaluated, and the dose distributions using the optimized spectra were validated. Results: The optimal spectra demonstrate small variations with field size for the 6 MV FFF beam and relatively large variations for the 10 MV FFF beam. The mean energies of the optimized 6 MV FFF spectra decreased from 1.31 MeV for a 3 × 3 cm² field to 1.21 MeV for a 40 × 40 cm² field, and from 2.33 MeV at 3 × 3 cm² to 2.18 MeV at 40 × 40 cm² for the 10 MV FFF beam. The developed method significantly improved the agreement between the calculated and measured PDDs. Root-mean-square differences in the optimized PDDs ranged from 0.41% (3 × 3 cm²) down to 0.21% (40 × 40 cm²) for the 6 MV FFF beam, and from 0.35% (3 × 3 cm²) down to 0.29% (40 × 40 cm²) for the 10 MV FFF beam. The first-order derivatives from the functional form were found to improve the computational speed by up to 20 times compared to the other techniques. Conclusions: The derived photon spectra resulted in good agreement with measured PDDs over the range of field sizes investigated. The suggested method is easily applicable to commercial radiation treatment planning systems since it only requires measured PDDs as input.
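    As a simplified stand-in for the second optimization step (not the CCC-based line search used in the study), the spectral-weight fit can be posed as a non-negative least-squares problem over mono-energetic depth-dose kernels; the kernels and data below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)

    # Synthetic mono-energetic depth-dose "kernels" and a synthetic measured
    # PDD. In a real workflow the kernels would come from a convolution/
    # superposition engine such as the CCC algorithm referenced above.
    depths = np.linspace(0.5, 30.0, 60)                  # depth in water (cm)
    energies = np.array([1.0, 2.0, 4.0, 6.0])            # spectral bins (MeV)
    kernels = np.array([np.exp(-0.07 * depths / np.sqrt(e)) for e in energies]).T

    true_w = np.array([0.4, 0.3, 0.2, 0.1])
    measured = kernels @ true_w + rng.normal(0.0, 1e-3, depths.size)

    # Non-negative least squares keeps the fluence weights physical (>= 0).
    weights, _ = nnls(kernels, measured)
    weights /= weights.sum()
    print("fitted weights:", np.round(weights, 3),
          "mean energy (MeV):", round(float(energies @ weights), 2))
    ```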

  10. Predator bioenergetics and the prey size spectrum: do foraging costs determine fish production?

    PubMed

    Giacomini, Henrique C; Shuter, Brian J; Lester, Nigel P

    2013-09-07

    Most models of fish growth and predation dynamics assume that food ingestion rate is the major component of the energy budget affected by prey availability, while active metabolism is invariant (here called constant activity hypothesis). However, increasing empirical evidence supports an opposing view: fish tend to adjust their foraging activity to maintain reasonably constant ingestion levels in the face of varying prey density and/or quality (the constant satiation hypothesis). In this paper, we use a simple but flexible model of fish bioenergetics to show that constant satiation is likely to occur in fish that optimize both net production rate and life history. The model includes swimming speed as an explicit measure of foraging activity leading to both energy gains (through prey ingestion) and losses (through active metabolism). The fish is assumed to be a particulate feeder that has to swim between consecutive individual prey captures, and that shifts its diet ontogenetically from smaller to larger prey. The prey community is represented by a negative power-law size spectrum. From these rules, we derive the net production of fish as a function of the size spectrum, and this in turn establishes a formal link between the optimal life history (i.e. maximum body size) and prey community structure. In most cases with realistic parameter values, optimization of life history ensures that: (i) a constantly satiated fish preying on a steep size spectrum will stop growing and invest all its surplus energy in reproduction before satiation becomes too costly; (ii) conversely, a fish preying on a shallow size spectrum will grow large enough for satiation to be present throughout most of its ontogeny. These results provide a mechanistic basis for previous empirical findings, and call for the inclusion of active metabolism as a major factor limiting growth potential and the numerical response of predators in theoretical studies of food webs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Electrochemical synthesis and characterization of zinc oxalate nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamsipur, Mojtaba, E-mail: mshamsipur@yahoo.com; Roushani, Mahmoud; Department of Chemistry, Ilam University, Ilam

    2013-03-15

    Highlights: ► Synthesis of zinc oxalate nanoparticles via electrolysis of a zinc plate anode in sodium oxalate solutions. ► Design of a Taguchi orthogonal array to identify the optimal experimental conditions. ► Controlling the size and shape of particles via applied voltage and oxalate concentration. ► Characterization of zinc oxalate nanoparticles by SEM, UV–vis, FT-IR and TG–DTA. - Abstract: A rapid, clean and simple electrodeposition method was designed for the synthesis of zinc oxalate nanoparticles. Zinc oxalate nanoparticles of different sizes and shapes were electrodeposited by electrolysis of a zinc plate anode in sodium oxalate aqueous solutions. It was found that the size and shape of the product could be tuned by the electrolysis voltage, the oxalate ion concentration, and the stirring rate of the electrolyte solution. A Taguchi orthogonal array was designed to identify the optimal experimental conditions. The morphological characterization of the product was carried out by scanning electron microscopy. UV–vis and FT-IR spectroscopies were also used to characterize the electrodeposited nanoparticles. The TG–DTA studies of the nanoparticles indicated that the main thermal degradation occurs in two steps over the temperature range of 350–430 °C. In contrast to existing methods, the present study describes a process which can easily be scaled up for the production of nano-sized zinc oxalate powder.

  13. Fabrication of polydimethylsiloxane (PDMS) nanofluidic chips with controllable channel size and spacing.

    PubMed

    Peng, Ran; Li, Dongqing

    2016-10-07

    The ability to create reproducible and inexpensive nanofluidic chips is essential to the fundamental research and applications of nanofluidics. This paper presents a novel and cost-effective method for fabricating a single nanochannel or multiple nanochannels in PDMS chips with controllable channel size and spacing. Single nanocracks or nanocrack arrays, positioned by artificial defects, are first generated on a polystyrene surface with controllable size and spacing by a solvent-induced method. Two sets of optimal working parameters are developed to replicate the nanocracks onto the polymer layers to form the nanochannel molds. The nanochannel molds are used to make the bi-layer PDMS microchannel-nanochannel chips by simple soft lithography. An alignment system is developed for bonding the nanofluidic chips under an optical microscope. Using this method, high quality PDMS nanofluidic chips with a single nanochannel or multiple nanochannels of sub-100 nm width and height and centimeter length can be obtained with high repeatability.

  14. Small-size pedestrian detection in large scene based on fast R-CNN

    NASA Astrophysics Data System (ADS)

    Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu

    2018-04-01

    Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have had limited success with small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN, employing the DPM detector to generate proposals for accuracy, and training a Fast R-CNN style network that jointly optimizes small-size pedestrian detection, with skip connections concatenating features from different layers to resolve the coarseness of the feature maps. This approach improves the accuracy of small-size pedestrian detection in large real-world scenes.

  15. Application of cyclodextrins in antibody microparticles: potentials for antibody protection in spray drying.

    PubMed

    Ramezani, Vahid; Vatanara, Alireza; Seyedabadi, Mohammad; Nabi Meibodi, Mohsen; Fanaei, Hamed

    2017-07-01

    Dry powder formulations are extensively used to improve the stability of antibodies. Spray drying is an important method for protein drying. This study investigated the effects of trehalose, hydroxypropyl beta cyclodextrin (HPBCD) and beta cyclodextrin (BCD) on the stability and particle properties of spray-dried IgG. A D-optimal design was employed for experimental design, analysis, and optimization of the variables. The size and aerodynamic behavior of particles were determined using laser light scattering and a glass twin impinger, respectively. In addition, the stability, beta-sheet ratio and morphology of the antibody were analyzed using size exclusion chromatography, IR spectroscopy and electron microscopy, respectively. Particle properties and antibody stability were significantly improved in the presence of HPBCD. In addition, particle aerodynamic behavior, in terms of fine-particle fraction (FPF), was enhanced up to 52.23%. Furthermore, the antibody was better preserved not only during spray drying but also during long-term storage. In contrast, application of BCD resulted in the formation of larger particles. Although trehalose caused inappropriate aerodynamic properties, it efficiently decreased antibody aggregation. HPBCD is an efficient excipient for the development of inhalable protein formulations. In this regard, optimal particle properties and antibody stability were obtained with a proper combination of cyclodextrins and simple sugars, such as trehalose.

  16. Design and optimization of integrated gas/condensate plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Root, C.R.; Wilson, J.L.

    1995-11-01

    An optimized design is demonstrated for combining gas processing and condensate stabilization plants into a single integrated process facility. This integrated design economically provides improved condensate recovery compared with a simple stabilizer design. A selection matrix showing likely applications of this integrated process is presented for use in future designs. Several methods for developing the fluid characterization and for using a process simulator to predict future design compositions are described, which could be useful in other designs. Optimization of flowsheet equipment choices and of design operating pressures and temperatures is demonstrated, including the effect of both continuous and discrete process equipment size changes. Several similar designs using a turboexpander to provide refrigeration for liquids recovery and stabilizer reflux are described. Operating experience from the P/15-D platform in the Dutch sector of the North Sea has proven these integrated designs effective. Concerns remain around operation near or above the critical pressure that should be addressed in future work, including providing conservative separator designs, providing sufficient process design safety margin to meet dew point specifications, selecting the most conservative design values of predicted gas dew point and equipment size calculated with different equations of state, and possibly improving the accuracy of PVT calculations in the near-critical region.

  17. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    ERIC Educational Resources Information Center

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  18. Enhancement of heat transfer and entropy generation analysis of nanofluids turbulent convection flow in square section tubes

    NASA Astrophysics Data System (ADS)

    Bianco, Vincenzo; Nardini, Sergio; Manca, Oronzio

    2011-12-01

    In this article, the developing turbulent forced convection flow of a water-Al2O3 nanofluid in a square tube, subjected to constant and uniform wall heat flux, is numerically investigated. The mixture model is employed to simulate the nanofluid flow, and the investigation is carried out for a particle size of 38 nm. An entropy generation analysis is also proposed in order to find the optimal working condition for the given geometry under the given boundary conditions. A simple analytical procedure is proposed to evaluate the entropy generation, and its results are compared with the numerical calculations, showing very good agreement. A comparison of the resulting Nusselt numbers with experimental correlations available in the literature is also made. To minimize entropy generation, the optimal Reynolds number is determined.
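    A minimal sketch of the entropy-generation trade-off that produces an optimal Reynolds number; the coefficients are illustrative assumptions, and the exponents follow textbook correlations (Nu ~ Re^0.8, f ~ Re^-0.25) rather than the article's computed values.

    ```python
    from scipy.optimize import minimize_scalar

    def entropy_generation(re, c_thermal=1.0e6, c_friction=1.0e-9):
        """Toy trade-off with assumed coefficients: the thermal term falls as
        Nu ~ Re^0.8 improves heat transfer, while the friction term grows
        roughly as Re^2.75 (f ~ Re^-0.25 times pumping power ~ Re^3)."""
        return c_thermal / re ** 0.8 + c_friction * re ** 2.75

    res = minimize_scalar(entropy_generation, bounds=(1e3, 1e5), method="bounded")
    print(f"optimal Reynolds number ~ {res.x:.0f}")
    ```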

  19. Laser-Driven Ion Acceleration from Plasma Micro-Channel Targets

    PubMed Central

    Zou, D. B.; Pukhov, A.; Yi, L. Q.; Zhou, H. B.; Yu, T. P.; Yin, Y.; Shao, F. Q.

    2017-01-01

    Efficient energy boost of laser-accelerated ions is critical for their applications in biomedical and hadron research. Achievable energies continue to rise, with the currently highest energies allowing access to medical therapy energy windows. Here, a new regime of simultaneous acceleration of ~100 MeV protons and multi-100 MeV carbon ions from plasma micro-channel targets is proposed, using a laser pulse of modest intensity (~10²⁰ W/cm²). It is found that two trains of overdense electron bunches are dragged out of the micro-channel and effectively accelerated by the longitudinal electric field excited in the plasma channel. With the optimized channel size, these “superponderomotive” energetic electrons can be focused on the front surface of the attached plastic substrate. A much more intense sheath electric field is formed on the rear side, leading to up to a ~10-fold increase in ion energy compared to the simple planar geometry. An analytical prediction of the optimal channel size and maximum ion energies is derived, which shows good agreement with the particle-in-cell simulations. PMID:28218247

  20. Peroxide-assisted microwave activation of pyrolysis char for adsorption of dyes from wastewater.

    PubMed

    Nair, Vaishakh; Vinu, R

    2016-09-01

    In this study, mesoporous activated biochar with high surface area and controlled pore size was prepared from char obtained as a by-product of pyrolysis of Prosopis juliflora biomass. The activation was carried out by a simple process that involved H2O2 treatment followed by microwave pyrolysis. H2O2 impregnation time and microwave power were optimized to obtain biochar with high specific surface area and high adsorption capacity for commercial dyes such as Remazol Brilliant Blue and Methylene Blue. Adsorption parameters such as the initial pH of the dye solution and the adsorbent dosage were also optimized. The pore size distribution, surface morphology and elemental composition of the activated biochar were thoroughly characterized. An H2O2 impregnation time of 24 h and a microwave power of 600 W produced nanostructured biochar with narrow, deep pores and a specific surface area of 357 m² g⁻¹. Langmuir and Langmuir-Freundlich isotherms described the adsorption equilibrium, while a pseudo-second-order model described the adsorption kinetics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Laser-Driven Ion Acceleration from Plasma Micro-Channel Targets

    NASA Astrophysics Data System (ADS)

    Zou, D. B.; Pukhov, A.; Yi, L. Q.; Zhou, H. B.; Yu, T. P.; Yin, Y.; Shao, F. Q.

    2017-02-01

    Efficient energy boost of laser-accelerated ions is critical for their applications in biomedical and hadron research. Achievable energies continue to rise, with the currently highest energies allowing access to medical therapy energy windows. Here, a new regime of simultaneous acceleration of ~100 MeV protons and multi-100 MeV carbon ions from plasma micro-channel targets is proposed, using a laser pulse of modest intensity (~10²⁰ W/cm²). It is found that two trains of overdense electron bunches are dragged out of the micro-channel and effectively accelerated by the longitudinal electric field excited in the plasma channel. With the optimized channel size, these “superponderomotive” energetic electrons can be focused on the front surface of the attached plastic substrate. A much more intense sheath electric field is formed on the rear side, leading to up to a ~10-fold increase in ion energy compared to the simple planar geometry. An analytical prediction of the optimal channel size and maximum ion energies is derived, which shows good agreement with the particle-in-cell simulations.

  2. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape, based on an unconstrained optimization problem, is proposed in this study and applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using a least squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
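    A hedged sketch of the two-step idea, least-squares surface fitting followed by numerical fine-tuning, with a synthetic target surface and a plain misfit standing in for the study's in-house tool and objective.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Target cruise shape z_target sampled at surface points (x, y); the jig
    # shape is parameterized by polynomial coefficients c (all illustrative).
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1.0, 1.0, size=(200, 2))
    z_target = 0.1 * xy[:, 0] ** 2 - 0.05 * xy[:, 1] + 0.02

    def basis(pts):
        x, y = pts[:, 0], pts[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

    # Step 1: least-squares surface fit gives the starting design variables.
    c0, *_ = np.linalg.lstsq(basis(xy), z_target, rcond=None)

    # Step 2: tune the coefficients with a numerical optimizer; here the cost
    # is a plain misfit, standing in for the study's aeroelastic objective.
    cost = lambda c: float(np.sum((basis(xy) @ c - z_target) ** 2))
    res = minimize(cost, c0, method="BFGS")
    print("initial cost:", round(cost(c0), 8), "-> tuned cost:", round(res.fun, 8))
    ```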

  3. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance.

    PubMed

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; Ep Mundhofir, Farmaditya; Mh Faradz, Sultana; Hisatome, Ichiro

    2017-03-01

    High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1), a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variants. Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by inspecting the melting curve. Five sets of primers covering the translated region of the APOA1 exons were designed, with expected PCR product sizes of 100-400 bp. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C with 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C with 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C with 50 PCR cycles. In addition to suitable procedures for DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study show that an appropriate annealing temperature and number of PCR cycles are important factors in optimizing the HRM technique for variant screening in APOA1.

  4. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance

    PubMed Central

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; EP Mundhofir, Farmaditya; MH Faradz, Sultana; Hisatome, Ichiro

    2017-01-01

    Background: High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1), a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variants. Methods: Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by inspecting the melting curve. Results: Five sets of primers covering the translated region of the APOA1 exons were designed, with expected PCR product sizes of 100–400 bp. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C with 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C with 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C with 50 PCR cycles. Conclusion: In addition to suitable procedures for DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study show that an appropriate annealing temperature and number of PCR cycles are important factors in optimizing the HRM technique for variant screening in APOA1. PMID:28331418

  5. Global Climatic Controls On Leaf Size

    NASA Astrophysics Data System (ADS)

    Wright, I. J.; Prentice, I. C.; Dong, N.; Maire, V.

    2015-12-01

    Since the 1890s it has been known that the wet tropics harbour plants with exceptionally large leaves. Yet the observed latitudinal gradient of leaf size has never been fully explained: it is still unclear which aspects of climate are most important for understanding geographic trends in leaf size, a trait that varies many thousand-fold among species. The key is the leaf-to-air temperature difference, which depends on the balance of energy inputs (irradiance) and outputs (transpirational cooling, losses to the night sky). Smaller leaves track air temperatures more closely than larger leaves. Widely cited optimality-based theories predict an advantage for smaller leaves in dry environments, where transpiration is restricted, but are silent on the latitudinal gradient. We aimed to characterize and explain the worldwide pattern of leaf size. Across 7900 species from 651 sites, we show that large-leaved species predominate in wet, hot, sunny environments; smaller-leaved species typify hot, sunny environments only when arid; and small leaves are required to avoid freezing at high latitudes and elevations, and to avoid overheating in dry environments. This simple pattern was unclear in earlier, more limited analyses. We present a simple but robust approach to energy-balance modelling of both day-time and night-time leaf-to-air temperature differences, and thus of the risks of overheating and frost damage. Our analysis shows that night-chilling is important as well as day-heating, and it simplifies leaf temperature modelling. It provides both a framework for modelling leaf size constraints and a solution to one of the oldest conundrums in ecology. Although the path forward is not yet fully clear, because of the role of leaf size in controlling leaf temperature we suggest that climate-related leaf size constraints could usefully feature in the next generation of land ecosystem models.

  6. Microwave absorption in powders of small conducting particles for heating applications.

    PubMed

    Porch, Adrian; Slocombe, Daniel; Edwards, Peter P

    2013-02-28

    In microwave chemistry there is a common misconception that small, highly conducting particles heat profusely when placed in a large microwave electric field. However, this is not the case; with the simple physical explanation that the electric field (which drives the heating) within a highly conducting particle is highly screened. Instead, it is the magnetic absorption associated with induction that accounts for the large experimental heating rates observed for small metal particles. We present simple principles for the effective heating of particles in microwave fields from calculations of electric and magnetic dipole absorptions for a range of practical values of particle size and conductivity. For highly conducting particles, magnetic absorption dominates electric absorption over a wide range of particle radii, with an optimum absorption set by the ratio of mean particle radius a to the skin depth δ (specifically, by the condition a = 2.41δ). This means that for particles of any conductivity, optimized magnetic absorption (and hence microwave heating by magnetic induction) can be achieved by simple selection of the mean particle size. For weakly conducting samples, electric dipole absorption dominates, and is maximized when the conductivity is approximately σ ≈ 3ωε₀ ≈ 0.4 S m⁻¹, independent of particle radius. Therefore, although electric dipole heating can be as effective as magnetic dipole heating for a powder sample of the same volume, it is harder to obtain optimized conditions at a fixed frequency of microwave field. The absorption of sub-micron particles is ineffective in both magnetic and electric fields. However, if the particles are magnetic, with a lossy part to their complex permeability, then magnetic dipole losses are dramatically enhanced compared to their values for non-magnetic particles. An interesting application of this is the use of very small magnetic particles for the selective microwave heating of biological samples.
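    The stated optimum is easy to evaluate numerically. A short sketch, assuming a non-magnetic conductor and illustrative values for copper powder at 2.45 GHz:

    ```python
    import math

    MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

    def skin_depth(conductivity, freq_hz):
        """delta = sqrt(2 / (mu0 * sigma * omega)) for a non-magnetic conductor."""
        omega = 2.0 * math.pi * freq_hz
        return math.sqrt(2.0 / (MU0 * conductivity * omega))

    # Optimum magnetic-dipole absorption at a = 2.41 * delta (condition above).
    # Example values: copper powder (sigma ~ 5.8e7 S/m) at 2.45 GHz.
    delta = skin_depth(5.8e7, 2.45e9)
    print(f"skin depth: {delta * 1e6:.2f} um -> "
          f"optimal particle radius: {2.41 * delta * 1e6:.2f} um")
    ```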

  7. Competitive STDP Learning of Overlapping Spatial Patterns.

    PubMed

    Krunglevicius, Dalius

    2015-08-01

    Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition does not preclude a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition and a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern-size-dependent parameter optimality and significantly reduces the probability that a neuron forgets an already-learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
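    For reference, the pair-based STDP update underlying such rules can be sketched as follows; the time constants and learning rates are generic assumptions, and the letter's additions (vertical inhibition, the distance-dependent strength factor) are not shown.

    ```python
    import math

    def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
        """Pair-based STDP: potentiate when the presynaptic spike precedes the
        postsynaptic spike (dt = t_post - t_pre > 0), depress otherwise."""
        if dt > 0:
            w += a_plus * math.exp(-dt / tau) * (w_max - w)
        else:
            w -= a_minus * math.exp(dt / tau) * w
        return min(max(w, 0.0), w_max)

    # Causal pairing strengthens the synapse; anti-causal pairing weakens it.
    print(stdp_update(0.5, 5.0), stdp_update(0.5, -5.0))
    ```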

  8. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems such as job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on quadratic energy cost functions, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.

  9. Evaluation of chromatographic columns packed with semi- and fully porous particles for benzimidazoles separation.

    PubMed

    Gonzalo-Lumbreras, Raquel; Sanz-Landaluze, Jon; Cámara, Carmen

    2015-07-01

    The behavior of 15 benzimidazoles, including their main metabolites, was evaluated using several C18 columns with standard or narrow-bore diameters and different particle sizes and types. These commercial columns were selected because their differences could affect the separation of benzimidazoles, so they can serve as alternative columns. A simple screening method for the analysis of benzimidazole residues and their main metabolites was developed. First, the separation of benzimidazoles was optimized using a Kinetex C18 column; then the analytical performance of the other columns was compared under the optimized conditions and individually re-optimized. Resolution of critical pairs, analysis run time, column type and characteristics, and selectivity were considered in the column comparison. Kinetex XB was selected because it provides the shortest analysis time and the best resolution of the critical pairs. Using this column, the separation conditions were re-optimized using a factorial design. The separations obtained with the different columns tested can be applied to the analysis of specific benzimidazole residues or to other applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    NASA Astrophysics Data System (ADS)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after properly restricting the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
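    A hedged sketch of the location step, using SciPy's dual annealing as a stand-in for VFSA and a homogeneous-velocity travel-time model; the geometry, velocity, and picks below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    V_P = 4000.0  # assumed homogeneous P-wave velocity (m/s)

    # Receiver coordinates (m) and synthetic picked P arrival times (s).
    receivers = np.array([[0, 0, 0], [500, 0, 0], [0, 500, 0], [500, 500, 0]],
                         dtype=float)
    true_src, t0 = np.array([220.0, 310.0, -800.0]), 0.05
    picks = t0 + np.linalg.norm(receivers - true_src, axis=1) / V_P

    def misfit(params):
        """L2 misfit between picked and modeled arrivals; params = (x, y, z, t0)."""
        src, t = params[:3], params[3]
        model = t + np.linalg.norm(receivers - src, axis=1) / V_P
        return float(np.sum((picks - model) ** 2))

    # Backazimuth information would shrink these bounds in the real workflow.
    bounds = [(-1000, 1000), (-1000, 1000), (-2000, 0), (0.0, 1.0)]
    res = dual_annealing(misfit, bounds, seed=1)
    print("located at:", np.round(res.x[:3], 1), "origin time:", round(res.x[3], 3))
    ```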

  11. Towards inverse modeling of turbidity currents: The inverse lock-exchange problem

    NASA Astrophysics Data System (ADS)

    Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison

    2011-04-01

    A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for turbidite modeling has so far been hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, such as may in practice be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and the local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.

  12. Fast-SNP: a fast matrix pre-processing algorithm for efficient loopless flux optimization of metabolic models

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using 'loopless constraints'. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm, Fast-SNP, inspired by recent results on sparse null-space pursuit (SNP). By finding a reduced feasible 'loop-law' matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
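    To illustrate the 'loop-law' objects involved: internal loops are non-zero vectors in the null space of the internal-reaction stoichiometric matrix, whose sparse basis Fast-SNP seeks. A toy sketch (not the Fast-SNP algorithm itself):

    ```python
    import numpy as np
    from scipy.linalg import null_space

    # Toy stoichiometric matrix restricted to internal reactions (assumption):
    # metabolites x reactions; reactions r1 -> r2 -> r3 form a closed cycle.
    S_int = np.array([
        [-1,  0,  1],   # A: consumed by r1, produced by r3
        [ 1, -1,  0],   # B: produced by r1, consumed by r2
        [ 0,  1, -1],   # C: produced by r2, consumed by r3
    ])

    # Any non-zero null-space vector is a candidate internal loop; Fast-SNP
    # seeks a sparse basis of this space subject to flux directionalities.
    N = null_space(S_int)
    print("null-space dimension:", N.shape[1])
    print("loop (normalized):", np.round(N[:, 0] / np.abs(N[:, 0]).max(), 2))
    ```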

  13. Effects of Nanoparticle Size on Cellular Uptake and Liver MRI with PVP-Coated Iron Oxide Nanoparticles

    PubMed Central

    Huang, Jing; Bu, Lihong; Xie, Jin; Chen, Kai; Cheng, Zhen; Li, Xingguo; Chen, Xiaoyuan

    2010-01-01

    The effect of nanoparticle size (30–120 nm) on magnetic resonance imaging (MRI) of hepatic lesions in vivo has been systematically examined using polyvinylpyrrolidone (PVP)-coated iron oxide nanoparticles (PVP-IOs). Such biocompatible PVP-IOs with different sizes were synthesized by a simple one-pot pyrolysis method. These PVP-IOs exhibited good crystallinity and high T2 relaxivities, and the relaxivity increased with the size of the magnetic nanoparticles. It was found that cellular uptake changed with both size and surface physicochemical properties, and that PVP-IO-37, with a core size of 37 nm and a hydrodynamic particle size of 100 nm, exhibited a higher cellular uptake rate and greater distribution than the other PVP-IOs and Feridex. We systematically investigated the effect of nanoparticle size on MRI of normal liver and hepatic lesions in vivo. The physical and chemical properties of the nanoparticles influenced their pharmacokinetic behavior, which ultimately determined their ability to accumulate in the liver. The contrast enhancement of PVP-IOs within the liver was highly dependent on the overall size of the nanoparticles, and the 100 nm PVP-IO-37 nanoparticles exhibited the greatest enhancement. These results will have implications in designing engineered nanoparticles that are optimized as MR contrast agents or for use in therapeutics. PMID:21043459

  14. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
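
    In symbols (a hedged reconstruction using standard maximum-entropy notation, not necessarily the authors' exact formulation), the primal problem is

      \max_{u \ge 0} \; -\int_\Omega u \ln u \, dx \quad \text{subject to} \quad \mathcal{R}u = b,

    where \mathcal{R} collects the finitely many discretized projection functionals and b is the measured data. Fenchel duality turns this into a finite-dimensional unconstrained problem in the multipliers \lambda, with primal recovery u* = \exp(\mathcal{R}^{*}\lambda - 1); because \mathcal{R}^{*}\lambda is constant on each intersection of projection cells, u* is piecewise constant on exactly the kind of 'optimal grid' described above.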

  15. On Revenue-Optimal Dynamic Auctions for Bidders with Interdependent Values

    NASA Astrophysics Data System (ADS)

    Constantin, Florin; Parkes, David C.

    In a dynamic market, being able to update one's value based on information available to other bidders currently in the market can be critical to having profitable transactions. This is nicely captured by the model of interdependent values (IDV): a bidder's value can explicitly depend on the private information of other bidders. In this paper we present preliminary results about the revenue properties of dynamic auctions for IDV bidders. We adopt a computational approach to design single-item revenue-optimal dynamic auctions with known arrivals and departures but (private) signals that arrive online. In leveraging a characterization of truthful auctions, we present a mixed-integer programming formulation of the design problem. Although a discretization is imposed on bidder signals the solution is a mechanism applicable to continuous signals. The formulation size grows exponentially in the dependence of bidders' values on other bidders' signals. We highlight general properties of revenue-optimal dynamic auctions in a simple parametrized example and study the sensitivity of prices and revenue to model parameters.

  16. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other Fast Poisson Solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
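
    For orientation, the following is a minimal FFT-based fast Poisson solver of the generic kind the authors benchmark against: the classical type-I sine-transform solver for the 5-point Laplacian with zero Dirichlet boundaries (a hedged Python/SciPy sketch; it is not the Fast Invariant Imbedding algorithm itself).

      import numpy as np
      from scipy.fft import dstn, idstn

      def fast_poisson(f, h):
          # Solve u_xx + u_yy = f on an n-by-n interior grid with u = 0 on the
          # boundary. The DST-I diagonalizes the 5-point Laplacian, so the
          # whole solve costs O(n^2 log n).
          n = f.shape[0]
          k = np.arange(1, n + 1)
          lam = 2.0 * (np.cos(np.pi * k / (n + 1)) - 1.0) / h**2  # 1-D eigenvalues
          fhat = dstn(f, type=1, norm="ortho")
          return idstn(fhat / (lam[:, None] + lam[None, :]), type=1, norm="ortho")

      n = 127
      h = 1.0 / (n + 1)
      x = np.arange(1, n + 1) * h
      exact = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
      u = fast_poisson(-2 * np.pi**2 * exact, h)
      print(np.abs(u - exact).max())               # small discretization error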

  17. Maximal Neighbor Similarity Reveals Real Communities in Networks

    PubMed Central

    Žalik, Krista Rizman

    2015-01-01

    An important problem in the analysis of network data is the detection of groups of densely interconnected nodes, also called modules or communities. Community structure reveals the functions and organization of networks. Currently used algorithms for community detection in large-scale real-world networks are computationally expensive, require a priori information such as the number or sizes of communities, or are not able to give the same resulting partition in multiple runs. In this paper we investigate a simple and fast algorithm that uses the network structure alone and requires neither optimization of a pre-defined objective function nor information about the number of communities. We propose a bottom-up community detection algorithm in which, starting from communities consisting of adjacent pairs of nodes and their maximally similar neighbors, we find real communities. We show that the overall advantage of the proposed algorithm compared to other community detection algorithms is its simple nature, low computational cost, and very high accuracy in detecting communities of different sizes, including in networks with blurred modularity structure consisting of poorly separated communities. All communities identified by the proposed method for the Facebook network and the E. coli transcriptional regulatory network have strong structural and functional coherence. PMID:26680448
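
    A toy sketch of the neighbor-similarity seeding idea (illustrative assumptions, not the authors' full algorithm): each node is linked to its most similar neighbor under the Jaccard overlap of closed neighborhoods, and the connected components of these best-neighbor links give the initial communities.

      import networkx as nx

      def jaccard(G, u, v):
          Nu, Nv = set(G[u]) | {u}, set(G[v]) | {v}
          return len(Nu & Nv) / len(Nu | Nv)

      def seed_communities(G):
          best = nx.Graph()
          best.add_nodes_from(G)
          for u in G:
              v = max(G[u], key=lambda w: jaccard(G, u, w))
              best.add_edge(u, v)          # link u to its most similar neighbor
          return list(nx.connected_components(best))

      print(seed_communities(nx.karate_club_graph()))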

  18. Mandala Networks: ultra-small-world and highly sparse graphs

    PubMed Central

    Sampaio Filho, Cesar I. N.; Moreira, André A.; Andrade, Roberto F. S.; Herrmann, Hans J.; Andrade, José S.

    2015-01-01

    The increasing demands on the security and reliability of infrastructures call for the optimal design of their embedded complex network topologies. The following question then arises: what is the optimal layout to best fulfill all these demands? Here we present a general solution for this problem with scale-free networks, like the Internet and airline networks. Precisely, we disclose a way to systematically construct networks which are robust against random failures. Furthermore, as the size of the network increases, its shortest path becomes asymptotically invariant and the density of links goes to zero, making it ultra-small world and highly sparse, respectively. The first property is ideal for communication and navigation purposes, while the second is interesting economically. Finally, we show that some simple changes in the original network formulation can lead to an improved topology against malicious attacks. PMID:25765450

  19. Design of a wearable hand exoskeleton for exercising flexion/extension of the fingers.

    PubMed

    Jo, Inseong; Lee, Jeongsoo; Park, Yeongyu; Bae, Joonbum

    2017-07-01

    In this paper, the design of a wearable hand exoskeleton system for exercising flexion/extension of the fingers is proposed. The exoskeleton was designed with a simple and wearable structure to aid finger motions in 1 degree of freedom (DOF). A hand-grasping experiment with able-bodied subjects was performed to investigate general hand flexion/extension motions, and a polynomial curve of general hand motion was obtained. To customize the hand exoskeleton for the user, the polynomial curve was adjusted to the joint range of motion (ROM) of the user, and the optimal design of the exoskeleton structure was obtained using an optimization algorithm. A prototype divided into two parts (one part for the thumb, the other for the remaining fingers) was actuated by only two linear motors for compact size and light weight.

  20. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  1. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
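
    The design tension the paper addresses can be reproduced with a few lines of statsmodels (an illustration of effect-size uncertainty, not the paper's optimality criterion; all numbers are illustrative): a fixed design powered for an optimistic effect loses power when the true effect is smaller, while powering for a conservative effect inflates the sample size.

      import numpy as np
      from statsmodels.stats.power import TTestIndPower

      solver = TTestIndPower()
      for planned in (0.3, 0.5):           # conservative vs optimistic effect d
          n = solver.solve_power(effect_size=planned, alpha=0.05, power=0.9)
          powers = [solver.power(effect_size=d, nobs1=n, alpha=0.05)
                    for d in np.linspace(0.25, 0.55, 4)]
          print(f"planned d={planned}: n/arm={n:.0f}, power across d:",
                ", ".join(f"{p:.2f}" for p in powers))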

  2. Origami silicon optoelectronics for hemispherical electronic eye systems.

    PubMed

    Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang

    2017-11-24

    Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as a wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high spatial resolution into such a format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of a truncated icosahedron, fabricated on flexible sheets, and further folded into either a concave or convex hemisphere. These two electronic eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. The results demonstrated in this work, combined with the miniature size and simplicity of the design, establish a practical technology for integration with conventional electronic devices.

  3. Mitigation of Adverse Effects Caused by Shock Wave Boundary Layer Interactions Through Optimal Wall Shaping

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Lee, Byung Joon

    2013-01-01

    It is known that the adverse effects of shock wave boundary layer interactions in high speed inlets include reduced total pressure recovery and highly distorted flow at the aerodynamic interface plane (AIP). This paper presents a design method for flow control which creates perturbations in geometry. These perturbations are tailored to change the flow structures in order to minimize shock wave boundary layer interactions (SWBLI) inside supersonic inlets. Optimizing the shape of two-dimensional micro-size bumps is shown to be a very effective flow control method for two-dimensional SWBLI. In investigating three-dimensional SWBLI, a square duct is employed as a baseline. To investigate the mechanism whereby the geometric elements of the baseline, i.e. the bottom wall, the sidewall and the corner, exert influence on the flow's aerodynamic characteristics, each element is studied and optimized separately. It is found that arrays of micro-size bumps on the bottom wall of the duct have little effect in improving total pressure recovery, though they are useful in suppressing incipient separation in three-dimensional problems. Shaping the sidewall geometry is effective in re-distributing flow on the side wall and results in a less distorted flow at the exit. Subsequently, a near 50% reduction in distortion is achieved. A simple change in corner geometry resulted in a 2.4% improvement in total pressure recovery.

  4. Improving piezo actuators for nanopositioning tasks

    NASA Astrophysics Data System (ADS)

    Seeliger, Martin; Gramov, Vassil; Götz, Bernt

    2018-02-01

    In recent years, numerous applications have emerged on the market with seemingly contradictory demands: on one side, structure sizes have decreased, while on the other, overall sample sizes and speeds of operation have increased. Although piezoelectric positioning solutions have become a standard in the field of micro- and nanopositioning, surface inspection and manipulation, piezosystem jena has now enhanced their performance beyond simple control loop tuning and actuator design. In automated manufacturing machines, a given signal has to be tracked quickly and precisely. However, control systems naturally decrease the ability to follow this signal in real time. piezosystem jena developed a new signal feed-forward system bypassing the PID control. This way, signal tracking errors were reduced by a factor of three compared to a conventionally optimized PID control. Of course, PID values still have to be adjusted to specific conditions, e.g. changing additional mass, to optimize performance. This can now be done with a new automatic tuning tool designed to analyze the current setup, find the best fitting configuration, and also gather and display theoretical as well as experimental performance data. Thus, the control quality of a mechanical setup can be improved within a few minutes without the need for external calibration equipment. Furthermore, new mechanical optimization techniques that focus not only on the positioning device but take the whole setup into account limit parasitic motion to a few nanometers.

  5. Aerosol delivery and humidification with the Boussignac continuous positive airway pressure device.

    PubMed

    Thille, Arnaud W; Bertholon, Jean-François; Becquemin, Marie-Hélène; Roy, Monique; Lyazidi, Aissam; Lellouche, François; Pertusini, Esther; Boussignac, Georges; Maître, Bernard; Brochard, Laurent

    2011-10-01

    A simple method for effective bronchodilator aerosol delivery while administering continuous positive airway pressure (CPAP) would be useful in patients with severe bronchial obstruction. Our objective was to assess the effectiveness of bronchodilator aerosol delivery during CPAP generated by the Boussignac CPAP system, and its optimal humidification system. First we assessed the relationship between flow and the pressure generated in the mask with the Boussignac CPAP system. Next we measured inspired-gas humidity during CPAP, with several humidification strategies, in 9 healthy volunteers. We then measured bronchodilator aerosol particle size during CPAP, with and without a heat-and-moisture exchanger, in a bench study. Finally, in 7 patients with acute respiratory failure and airway obstruction, we measured work of breathing and gas exchange after a β2-agonist bronchodilator aerosol (terbutaline) delivered during CPAP or via standard nebulization. Optimal humidity was obtained only with the heat-and-moisture exchanger or heated humidifier. The heat-and-moisture exchanger had no influence on bronchodilator aerosol particle size. Work of breathing decreased similarly after bronchodilator delivery via either standard nebulization or CPAP, but PaO2 increased significantly only after CPAP aerosol delivery. CPAP bronchodilator delivery decreases the work of breathing as effectively as standard nebulization, but produces a greater oxygenation improvement in patients with airway obstruction. To optimize airway humidification, a heat-and-moisture exchanger could be used with the Boussignac CPAP system without modifying aerosol delivery.

  6. Application of da Vinci(®) Robot in simple or radical hysterectomy: Tips and tricks.

    PubMed

    Iavazzo, Christos; Gkegkes, Ioannis D

    2016-01-01

    The first robotic simple hysterectomy was performed more than 10 years ago. These days, robotic-assisted hysterectomy is accepted as an alternative surgical approach and is applied to both benign and malignant surgical entities. Two important points should be taken into account to optimize postoperative outcomes in the early period of a surgeon's training: how to achieve optimal oncological results and how to achieve optimal functional results. Overcoming technical challenges, as with any innovative surgical method, improves both operation time and patient safety. The standardization of the technique and recognition of critical anatomical landmarks are essential for optimal oncological and clinical outcomes in both simple and radical robotic-assisted hysterectomy. Based on our experience, our intention is to present user-friendly tips and tricks to optimize the application of a da Vinci® robot in simple or radical hysterectomies.

  7. Shot-Noise Limited Single-Molecule FRET Histograms: Comparison between Theory and Experiments†

    PubMed Central

    Nir, Eyal; Michalet, Xavier; Hamadani, Kambiz M.; Laurence, Ted A.; Neuhauser, Daniel; Kovchegov, Yevgeniy; Weiss, Shimon

    2011-01-01

    We describe a simple approach and present a straightforward numerical algorithm to compute the best fit shot-noise limited proximity ratio histogram (PRH) in single-molecule fluorescence resonance energy transfer diffusion experiments. The key ingredient is the use of the experimental burst size distribution, as obtained after burst search through the photon data streams. We show how the use of an alternated laser excitation scheme and a correspondingly optimized burst search algorithm eliminates several potential artifacts affecting the calculation of the best fit shot-noise limited PRH. This algorithm is tested extensively on simulations and simple experimental systems. We find that dsDNA data exhibit a wider PRH than expected from shot noise only and hypothetically account for it by assuming a small Gaussian distribution of distances with an average standard deviation of 1.6 Å. Finally, we briefly mention the results of a future publication and illustrate them with a simple two-state model system (DNA hairpin), for which the kinetic transition rates between the open and closed conformations are extracted. PMID:17078646
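
    The core computation is easy to sketch (hedged stand-in values, not the paper's data or algorithm): given the experimental burst-size distribution, each burst of n photons spreads its proximity ratio binomially around the mean, and pooling bursts gives the shot-noise limited histogram.

      import numpy as np

      rng = np.random.default_rng(3)
      E_mean = 0.6                                 # assumed mean proximity ratio
      sizes = rng.geometric(1.0 / 60, 5000) + 19   # stand-in burst-size distribution
      pr = rng.binomial(sizes, E_mean) / sizes     # one proximity ratio per burst
      hist, edges = np.histogram(pr, bins=40, range=(0.0, 1.0))
      print(pr.std())        # width expected from shot noise alone; any excess
                             # width must come from real distance heterogeneity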

  8. Beyond size–number trade-offs: clutch size as a maternal effect

    PubMed Central

    Brown, Gregory P.; Shine, Richard

    2009-01-01

    Traditionally, research on life-history traits has viewed the link between clutch size and offspring size as a straightforward linear trade-off; the product of these two components is taken as a measure of maternal reproductive output. Investing more per egg results in fewer but larger eggs and, hence, offspring. This simple size–number trade-off has proved attractive to modellers, but our experimental studies on keelback snakes (Tropidonophis mairii, Colubridae) reveal a more complex relationship between clutch size and offspring size. At constant water availability, the amount of water taken up by a snake egg depends upon the number of adjacent eggs. In turn, water uptake affects hatchling size, and therefore an increase in clutch size directly increases offspring size (and thus fitness under field conditions). This allometric advantage may influence the evolution of reproductive traits such as growth versus reproductive effort, optimal age at female maturation, the body-reserve threshold required to initiate reproduction and nest-site selection (e.g. communal oviposition). The published literature suggests that similar kinds of complex effects of clutch size on offspring viability are widespread in both vertebrates and invertebrates. Our results also challenge conventional experimental methodologies such as split-clutch designs for laboratory incubation studies: by separating an egg from its siblings, we may directly affect offspring size and thus viability. PMID:19324614

  9. pH-Induced transformation of ligated Au25 to brighter Au23 nanoclusters.

    PubMed

    Waszkielewicz, Magdalena; Olesiak-Banska, Joanna; Comby-Zerbino, Clothilde; Bertorelle, Franck; Dagany, Xavier; Bansal, Ashu K; Sajjad, Muhammad T; Samuel, Ifor D W; Sanader, Zeljka; Rozycka, Miroslawa; Wojtas, Magdalena; Matczyszyn, Katarzyna; Bonacic-Koutecky, Vlasta; Antoine, Rodolphe; Ozyhar, Andrzej; Samoc, Marek

    2018-05-01

    Thiolate-protected gold nanoclusters have recently attracted considerable attention due to their size-dependent luminescence characterized by a long lifetime and large Stokes shift. However, the optimization of nanocluster properties such as the luminescence quantum yield is still a challenge. We report here the transformation of Au25Capt18 (Capt labels captopril) nanoclusters occurring at low pH and yielding a product with a much increased luminescence quantum yield which we have identified as Au23Capt17. We applied a simple method of treatment with HCl to accomplish this transformation and we characterized the absorption and emission of the newly created ligated nanoclusters as well as their morphology. Based on DFT calculations we show which Au nanocluster size transformations can lead to highly luminescent species such as Au23Capt17.

  10. Pixel-based OPC optimization based on conjugate gradients.

    PubMed

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
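
    The convergence argument is the textbook one; as a hedged illustration (linear CG on a quadratic stand-in for the image-fidelity cost, not the paper's PBOPC objective), CG reaches tiny residuals in far fewer iterations than steepest descent on ill-conditioned problems.

      import numpy as np

      def conjugate_gradient(A, b, x, iters=200, tol=1e-10):
          r = b - A @ x                    # residual = negative gradient
          d = r.copy()
          rs = r @ r
          for _ in range(iters):
              Ad = A @ d
              alpha = rs / (d @ Ad)        # exact line search for quadratics
              x += alpha * d
              r -= alpha * Ad
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              d = r + (rs_new / rs) * d    # conjugate, not steepest, direction
              rs = rs_new
          return x

      rng = np.random.default_rng(0)
      Q = rng.standard_normal((50, 50))
      A = Q.T @ Q + 0.1 * np.eye(50)       # ill-conditioned SPD 'Hessian'
      b = rng.standard_normal(50)
      x = conjugate_gradient(A, b, np.zeros(50))
      print(np.linalg.norm(A @ x - b))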

  11. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    PubMed

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of the patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games, as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine the level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume with increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume with increasing crowd size for concerts. A comparison of similar crowd sizes for each of the three events showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size, without considering other influences such as event type and duration, may give poor results.

  12. Nano-sized crystalline drug production by milling technology.

    PubMed

    Moribe, Kunikazu; Ueda, Keisuke; Limwikrant, Waree; Higashi, Kenjirou; Yamamoto, Keiji

    2013-01-01

    Nano-formulation of poorly water-soluble drugs has been developed to enhance drug dissolution. In this review, we introduce nano-milling technology described in recently published papers. Factors affecting the size of drug crystals are compared based on the preparation methods and drug and excipient types. A top-down approach using the comminution process is a method conventionally used to prepare crystalline drug nanoparticles. Wet milling using media is well studied and several wet-milled drug formulations are now on the market. Several trials on drug nanosuspension preparation using different apparatuses, materials, and conditions have been reported. Wet milling using a high-pressure homogenizer is another alternative to preparing production-scale drug nanosuspensions. Dry milling is a simple method of preparing a solid-state drug nano-formulation. The effect of size on the dissolution of a drug from nanoparticles is an area of fundamental research, but it is sometimes incorrectly evaluated. Here, we discuss evaluation procedures and the associated problems. Lastly, the importance of quality control, process optimization, and physicochemical characterization are briefly discussed.

  13. On-chip generation of microbubbles as a practical technology for manufacturing contrast agents for ultrasonic imaging

    PubMed Central

    Hettiarachchi, Kanaka; Talu, Esra; Longo, Marjorie L.; Dayton, Paul A.; Lee, Abraham P.

    2007-01-01

    This paper presents a new manufacturing method to generate monodisperse microbubble contrast agents with polydispersity index (σ) values of <2% through microfluidic flow-focusing. Micron-sized lipid shell-based perfluorocarbon (PFC) gas microbubbles for use as ultrasound contrast agents were produced using this method. The poly(dimethylsiloxane) (PDMS)-based devices feature an expanding nozzle geometry with a 7 μm orifice width, and are robust enough for consistent production of microbubbles over runtimes lasting several hours. With high-speed imaging, we characterized the relationships between channel geometry, liquid flow rate Q, and gas pressure P in controlling bubble sizes. By simple optimization of the channel geometry, Q, and P, bubbles with a mean diameter of <5 μm can be obtained, ideal for various ultrasonic imaging applications. This method demonstrates the potential of microfluidics as an efficient means for custom-designing ultrasound contrast agents with precise size distributions, different gas compositions and new shell materials for stabilization, and for future targeted imaging and therapeutic applications. PMID:17389962

  14. How To Characterize Individual Nanosize Liposomes with Simple Self-Calibrating Fluorescence Microscopy.

    PubMed

    Mortensen, Kim I; Tassone, Chiara; Ehrlich, Nicky; Andresen, Thomas L; Flyvbjerg, Henrik

    2018-05-09

    Nanosize lipid vesicles are used extensively at the interface between nanotechnology and biology, e.g., as containers for chemical reactions at minute concentrations and vehicles for targeted delivery of pharmaceuticals. Typically, vesicle samples are heterogeneous as regards vesicle size and structural properties. Consequently, vesicles must be characterized individually to ensure correct interpretation of experimental results. Here we do that using dual-color fluorescence labeling of vesicles, in which their lipid bilayers and lumens are labeled separately. A vesicle then images as two spots, one in each color channel. A simple image analysis determines the total intensity and width of each spot. These four data all depend on the vesicle radius in a simple manner for vesicles that are spherical, unilamellar, and optimal encapsulators of molecular cargo. This permits identification of such ideal vesicles. They in turn enable calibration of the dual-color fluorescence microscopy images they appear in. Since this calibration is not a separate experiment but an analysis of images of vesicles to be characterized, it eliminates the potential source of error that a separate calibration experiment would have been. Nonideal vesicles in the same images were characterized by how their four data violate the calibrated relationship established for ideal vesicles. In this way, our method yields the size, shape, lamellarity, and encapsulation efficiency of each imaged vesicle. Applying this procedure to extruded samples of vesicles, we found that, contrary to common assumptions, only a fraction of vesicles are ideal.
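
    The size dependence referred to is simple geometric scaling; as a hedged numerical sketch (idealized, with made-up noise levels, not the authors' pipeline): for a spherical, unilamellar, fully encapsulating vesicle the membrane signal grows as r² and the lumen signal as r³, so ideal vesicles collapse onto a single power-law calibration curve with exponent 3/2.

      import numpy as np

      rng = np.random.default_rng(2)
      r = rng.uniform(20, 100, 500)                      # vesicle radii (nm)
      noise = lambda: 1 + 0.05 * rng.standard_normal(500)
      I_mem = 4 * np.pi * r**2 * noise()                 # bilayer-dye signal
      I_lum = (4 / 3) * np.pi * r**3 * noise()           # lumen-dye signal
      slope, intercept = np.polyfit(np.log(I_mem), np.log(I_lum), 1)
      print(slope)                                       # ~1.5 for ideal vesicles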

  15. SKA weak lensing - II. Simulated performance and survey design considerations

    NASA Astrophysics Data System (ADS)

    Bonaldi, Anna; Harrison, Ian; Camera, Stefano; Brown, Michael L.

    2016-12-01

    We construct a pipeline for simulating weak lensing cosmology surveys with the Square Kilometre Array (SKA), taking as inputs telescope sensitivity curves; correlated source flux, size and redshift distributions; a simple ionospheric model; source redshift and ellipticity measurement errors. We then use this simulation pipeline to optimize a 2-yr weak lensing survey performed with the first deployment of the SKA (SKA1). Our assessments are based on the total signal to noise of the recovered shear power spectra, a metric that we find to correlate very well with a standard dark energy figure of merit. We first consider the choice of frequency band, trading off increases in number counts at lower frequencies against poorer resolution; our analysis strongly prefers the higher frequency Band 2 (950-1760 MHz) channel of the SKA-MID telescope to the lower frequency Band 1 (350-1050 MHz). Best results would be obtained by allowing the centre of Band 2 to shift towards lower frequency, around 1.1 GHz. We then move on to consider survey size, finding that an area of 5000 deg2 is optimal for most SKA1 instrumental configurations. Finally, we forecast the performance of a weak lensing survey with the second deployment of the SKA. The increased survey size (3π steradian) and sensitivity improves both the signal to noise and the dark energy metrics by two orders of magnitude.

  16. Optimized emission in nanorod arrays through quasi-aperiodic inverse design.

    PubMed

    Anderson, P Duke; Povinelli, Michelle L

    2015-06-01

    We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.

  17. Successive ion layer adsorption and reaction (SILAR) technique synthesis of Al(III)-8-hydroxy-5-nitrosoquinolate nano-sized thin films: characterization and factors optimization.

    PubMed

    Haggag, Sawsan M S; Farag, A A M; Abdel Refea, M

    2013-02-01

    Nano Al(III)-8-hydroxy-5-nitrosoquinolate [Al(III)-(HNOQ)(3)] thin films were synthesized by the rapid, direct, simple and efficient successive ion layer adsorption and reaction (SILAR) technique. The factors governing optimal thin-film formation were evaluated. Stoichiometry and structure were confirmed by elemental analysis and FT-IR. The particle size (27-71 nm) was determined using a scanning electron microscope (SEM). Thermal stability and thermal parameters were determined by thermal gravimetric analysis (TGA). Optical properties were investigated using spectrophotometric measurements of transmittance and reflectance at normal incidence. The refractive index, n, and absorption index, k, were determined. Spectral behavior of the absorption coefficient in the intrinsic absorption region revealed a direct allowed transition with a 2.45 eV band gap. The current-voltage (I-V) characteristics of the [Al(III)-(HNOQ)(3)]/p-Si heterojunction were measured at room temperature. The forward and reverse I-V characteristics were analyzed. The calculated zero-bias barrier height (Φ(b)) and ideality factor (n) showed strong bias dependence. The energy distribution of interface states (N(ss)) was obtained. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-01-01

    Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
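
    A hedged numerical sketch of the allocation problem described (our notation and numbers, not the paper's): minimize the variance of the exposure mean from a three-stage nested model subject to a non-linear, power-function budget constraint.

      import numpy as np
      from scipy.optimize import minimize

      sb2, so2, se2 = 1.0, 0.5, 2.0      # between-subject, between-occasion,
                                         # within-occasion variance components
      c = np.array([100.0, 20.0, 5.0])   # unit costs per stage (illustrative)
      p = np.array([1.2, 1.0, 0.8])      # power-function cost exponents
      budget = 5000.0

      def variance(n):                   # precision of the estimated mean
          ns, no, nm = n
          return sb2 / ns + so2 / (ns * no) + se2 / (ns * no * nm)

      def cost(n):                       # three-stage non-linear cost model
          ns, no, nm = n
          return (c[0] * ns**p[0] + c[1] * (ns * no)**p[1]
                  + c[2] * (ns * no * nm)**p[2])

      res = minimize(variance, x0=[10.0, 2.0, 2.0], method="SLSQP",
                     bounds=[(1.0, None)] * 3,
                     constraints=[{"type": "ineq",
                                   "fun": lambda n: budget - cost(n)}])
      print(res.x, variance(res.x))      # continuous optimum; round in practice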

  20. Integral Design Methodology of Photocatalytic Reactors for Air Pollution Remediation.

    PubMed

    Passalía, Claudio; Alfano, Orlando M; Brandi, Rodolfo J

    2017-06-07

    An integral reactor design methodology was developed to address the optimal design of photocatalytic wall reactors to be used in air pollution control. For a target pollutant to be eliminated from an air stream, the proposed methodology starts from a mechanistically derived reaction rate. The determination of intrinsic kinetic parameters relies on a simple-geometry laboratory-scale reactor operated under kinetic control with a uniform incident radiation flux, which allows computing the local superficial rate of photon absorption. Thus, a simple model can describe the mass balance and a solution may be obtained. The kinetic parameters may be estimated by combining the mathematical model and the experimental results. The validated intrinsic kinetics obtained may be used directly in the scaling-up of any reactor configuration and size. The bench-scale reactor may require the use of complex computational software to obtain the fields of velocity, radiation absorption and species concentration. The complete methodology was successfully applied to the elimination of airborne formaldehyde. The kinetic parameters were determined in a flat plate reactor, whilst a bench-scale corrugated wall reactor was used to illustrate the scaling-up methodology. In addition, an optimal folding angle of the corrugated reactor was found using computational fluid dynamics tools.

  1. Patterned assembly of colloidal particles by confined dewetting lithography.

    PubMed

    Celio, Hugo; Barton, Emily; Stevenson, Keith J

    2006-12-19

    We report the assembly of colloidal particles into confined arrangements and patterns on various cleaned and chemically modified solid substrates using a method which we term "confined dewetting lithography", or CDL for short. The experimental setup for CDL is a simple deposition cell where an aqueous suspension of colloidal particles (e.g., polystyrene spheres) is placed between a floating deposition template (i.e., a metal microgrid) and the solid substrate. The voids of the deposition template serve as an array of micrometer-sized reservoirs where several hydrodynamic processes are confined. These processes include water evaporation, meniscus formation, convective flow, rupturing, dewetting, and capillary-bridge formation. We discuss the optimal conditions under which CDL deposits intricate patterns of colloidal particles with high efficiency, using polystyrene spheres (PS; 4.5, 2.0, 1.7, 0.11, 0.064 μm diameter) and square and hexagonal deposition templates as model systems. We find that the optimal conditions for the CDL method, when using submicrometer, sulfate-functionalized PS particles, depend primarily on minimizing attractive particle-substrate interactions. The CDL methodology described herein presents a relatively simple and rapid method to assemble virtually any geometric pattern, including more complex patterns assembled using PS particles with different diameters, from aqueous suspensions by choosing suitable conditions and materials.

  2. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    PubMed

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
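
    The equivalence the paper exploits is easy to verify numerically (a sketch of the identity, not the BoostKCP pruning itself): the Pearson correlation distance equals, up to a constant, the squared Euclidean distance between z-scored vectors, so one optimized k-means code path can serve both metrics.

      import numpy as np

      def zscore(X):                     # per-vector standardization
          return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

      rng = np.random.default_rng(1)
      X = rng.standard_normal((4, 100))
      Z = zscore(X)
      r = np.corrcoef(X[0], X[1])[0, 1]
      d2 = np.sum((Z[0] - Z[1]) ** 2)
      print(1 - r, d2 / (2 * X.shape[1]))    # identical up to rounding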

  3. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    PubMed

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  4. An Optimal Current Controller Design for a Grid Connected Inverter to Improve Power Quality and Test Commercial PV Inverters.

    PubMed

    Algaddafi, Ali; Altuwayjiri, Saud A; Ahmed, Oday A; Daho, Ibrahim

    2017-01-01

    Grid connected inverters play a crucial role in generating energy to be fed to the grid. A filter is commonly used to suppress the switching-frequency harmonics produced by the inverter; this filter is passive, and either an L- or LCL-filter. The latter is smaller in size compared to the L-filter, but choosing optimal values for the LCL-filter is challenging due to resonance, which can affect stability. This paper presents a simple inverter controller design with an L-filter. The control topology is simple and easily applied using traditional control theory. Fast Fourier Transform analysis is used to compare different grid connected inverter control topologies. The modelled grid connected inverter with the proposed controller complies with the IEEE-1547 standard, and the total harmonic distortion of the output current of the modelled inverter was just 0.25%, with an improved output waveform. Experimental work on a commercial PV inverter is then presented, including the effect of strong and weak grid connection. Inverter effects on a resistive load connected at the point of common coupling are presented. Results show that the voltage and current of the resistive load, when the grid is interrupted, are increased, which may cause failure or damage to connected appliances. PMID:28540362
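
    The THD figure quoted can be reproduced in principle from an FFT of the output current; the sketch below uses a synthetic waveform whose harmonic amplitudes are chosen purely for illustration.

      import numpy as np

      fs, f0 = 20000, 50                       # sample rate and grid frequency (Hz)
      t = np.arange(0, 0.2, 1 / fs)            # 10 whole cycles: no spectral leakage
      i_out = (np.sin(2 * np.pi * f0 * t)
               + 0.0020 * np.sin(2 * np.pi * 3 * f0 * t)
               + 0.0015 * np.sin(2 * np.pi * 5 * f0 * t))
      spec = np.abs(np.fft.rfft(i_out))
      freqs = np.fft.rfftfreq(len(t), 1 / fs)
      fund = spec[np.argmin(np.abs(freqs - f0))]
      harm = [spec[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 40)]
      thd = 100 * np.sqrt(sum(h**2 for h in harm)) / fund
      print(f"THD = {thd:.2f}%")               # ~0.25% for these amplitudes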

  6. Directional hearing by linear summation of binaural inputs at the medial superior olive

    PubMed Central

    van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.

    2013-01-01

    Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292

  7. [Exploration of one-step preparation of Ganoderma lucidum multicomponent microemulsion].

    PubMed

    He, Jun-Jie; Chen, Yan; Du, Meng; Cao, Wei; Yuan, Ling; Zheng, Li-Yan

    2013-03-01

    To explore a one-step method for the preparation of a Ganoderma lucidum multicomponent microemulsion, the formulation of the microemulsion was optimized according to the dissolution characteristics of the triterpenes and polysaccharides in Ganoderma lucidum. The optimal blank microemulsion was used as a solvent in which to sonicate Ganoderma lucidum powder to prepare the multicomponent microemulsion, and its physicochemical properties were compared with those of a microemulsion made by the conventional method. The results showed that the multicomponent microemulsion was characterized by a size of (43.32 +/- 6.82) nm, a polydispersity index (PDI) of 0.173 +/- 0.025, and a zeta potential of -(3.98 +/- 0.82) mV. The contents of Ganoderma lucidum triterpenes and polysaccharides were (5.95 +/- 0.32) and (7.58 +/- 0.44) mg x mL(-1), respectively. Sonicating Ganoderma lucidum powder in the blank microemulsion could thus produce the multicomponent microemulsion. Compared with the conventional method, this method is simple and low cost, and is suitable for industrial production.

  8. A wave dynamics criterion for optimization of mammalian cardiovascular system.

    PubMed

    Pahlevan, Niema M; Gharib, Morteza

    2014-05-07

    The cardiovascular system in mammals follows various optimization criteria covering the heart, the vascular network, and the coupling of the two. Through a simple dimensional analysis we arrived at a non-dimensional number (the wave condition number) that can predict the optimum wave state in which the left ventricular (LV) pulsatile power (LV workload) is minimized in a mammalian cardiovascular system. This number is also universal among all mammals, independent of animal size, maintaining a value of around 0.1. By utilizing a unique in vitro model of the human aorta, we tested our hypothesis against a wide range of aortic compliance (pulse wave velocity). We concluded that the optimum value of the wave condition number remains around 0.1 for the full range of aortic compliance that we could simulate in our in vitro system. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase the efficiency of operations. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
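
    The statistical bound computation mentioned here can be illustrated on a toy maximization (a newsvendor stand-in under purely illustrative numbers, far simpler than the scheduling model): averaging the optima of several sampled problems over-estimates the true optimum, while re-evaluating one candidate solution on fresh scenarios under-estimates it, so the two statistics bracket the optimality gap.

      import numpy as np

      rng = np.random.default_rng(0)
      c, p = 1.0, 2.5                          # unit cost and selling price

      def profit(q, demand):                   # newsvendor profit for order q
          return p * np.minimum(q, demand) - c * q

      def saa_solve(samples):                  # best order against sampled demand
          qs = np.linspace(0, 200, 401)
          vals = [profit(q, samples).mean() for q in qs]
          j = int(np.argmax(vals))
          return qs[j], vals[j]

      upper = np.mean([saa_solve(rng.exponential(50, 30))[1] for _ in range(20)])
      q_hat = saa_solve(rng.exponential(50, 30))[0]
      lower = profit(q_hat, rng.exponential(50, 200000)).mean()
      print(f"candidate q={q_hat:.0f}, statistical bounds [{lower:.2f}, {upper:.2f}]")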

  10. Regenerator Operation at Very High Frequencies for Microcryocoolers

    NASA Astrophysics Data System (ADS)

    Radebaugh, Ray; O'Gallagher, Agnes

    2006-04-01

    The size of Stirling and Stirling-type pulse tube cryocoolers is dominated by the size of the pressure oscillator. Such cryocoolers typically operate at frequencies up to about 60 Hz for cold-end temperatures above about 60 K. Higher operating frequencies would allow the size and mass of the pressure oscillator to be reduced for a given power input. However, simply increasing the operating frequency leads to large losses in the regenerator. The simple analytical equations derived here show how the right combination of frequency and pressure, along with optimized regenerator geometry, can lead to successful regenerator operation at frequencies up to 1 kHz. Efficient regenerator operation at such high frequencies is possible only with pressures of about 5 to 8 MPa and with very small hydraulic diameters and lengths. Other geometrical parameters must also be optimized for such conditions. The analytical equations are used to provide guidance to the right combination of parameters. We give example numerical calculations with REGEN3.2 in the paper for 60 Hz, 400 Hz, and 1000 Hz operation of optimized screen regenerators and show that the coefficient of performance at 400 Hz and 1000 Hz is about 78 % and 68 %, respectively, of that for 60 Hz when an average pressure of 7 MPa is used with the higher frequency, compared with 2.5 MPa for 60 Hz operation. The 1000 Hz coefficient of performance for parallel tubes is about the same as that of the screen geometry at 60 Hz. The compressor and cold-end swept volumes are reduced by a factor of 47 at 1000 Hz, compared with the 60 Hz case for the same input acoustic power, which can enable the development of microcryocoolers for MEMS applications.

  11. A finite element based method for solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.

    1989-01-01

    A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables, which are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation; the only quantities whose time derivatives appear therein are the virtual states and virtual costates. Also noteworthy is that the costates appear only linearly in the algebraic equations that contain them, so the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented for an elementary trajectory optimization problem and show very good agreement with the exact solution, along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.
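
    For context, such formulations discretize the standard first-order necessary conditions of optimal control (textbook form, not the paper's specific weak statement): with Hamiltonian H(x, λ, u, t) = L(x, u, t) + λᵀ f(x, u, t),

      \dot{x} = \partial H / \partial \lambda, \qquad
      \dot{\lambda} = -\partial H / \partial x, \qquad
      \partial H / \partial u = 0,

    a two-point boundary-value problem in the states x and costates λ. The weak form shifts the time derivatives onto the test functions, which is why only the virtual states and costates are differentiated above.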

  12. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. A characteristic of AGCT is that the TS control parameters do not change during optimization. However, TS parameter values that are ideal at the start of optimization can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters at run time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem, for which AGCT optimized both well locations and pumping rates. These guidelines are useful starting values because using trial and error to identify an ideal combination of control parameter values for a new optimization problem can be time-consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations in a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA: GC averaged 18.79% better convergence than AGCT and 31.9% better than AGA within the same computation time (12.5 days), and AGCT averaged 13.1% better convergence than AGA. GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models for real-world optimization problems within a limited time period. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
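    A minimal sketch of the AGCT idea, combining an elitist genetic algorithm with a tabu list that (with some probability) rejects offspring falling too close to recently visited solutions. The objective function, parameter values, and tabu radius below are placeholders, not values from the Blaine NAD study.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, POP, GENS = 8, 30, 200
TABU_LEN, TABU_RADIUS, TABU_PROB = 50, 0.1, 0.5

def objective(x):
    # stand-in for an expensive pumping-strategy cost; lower is better
    return np.sum(x**2)

def is_tabu(x, tabu):
    return any(np.linalg.norm(x - t) < TABU_RADIUS for t in tabu)

pop = rng.uniform(-5, 5, size=(POP, DIM))
tabu = []
for gen in range(GENS):
    fit = np.array([objective(x) for x in pop])
    elite = pop[fit.argmin()].copy()             # elitism
    parents = pop[np.argsort(fit)[:POP // 2]]    # truncation selection
    children = []
    while len(children) < POP - 1:
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(DIM) < 0.5, a, b)  # uniform crossover
        child = child + rng.normal(0, 0.3, DIM) * (rng.random(DIM) < 0.2)
        # tabu move: sometimes reject near-duplicates of recent solutions
        if rng.random() < TABU_PROB and is_tabu(child, tabu):
            continue
        children.append(child)
        tabu.append(child.copy())
        tabu[:] = tabu[-TABU_LEN:]               # bounded tabu list
    pop = np.vstack([elite] + children)
print("best cost:", min(objective(x) for x in pop))
```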

  13. The effects of relative food item size on optimal tooth cusp sharpness during brittle food item processing

    PubMed Central

    Berthaume, Michael A.; Dumont, Elizabeth R.; Godfrey, Laurie R.; Grosse, Ian R.

    2014-01-01

    Teeth are often assumed to be optimal for their function, which allows researchers to derive dietary signatures from tooth shape. Most tooth shape analyses normalize for tooth size, potentially masking the relationship between relative food item size and tooth shape. Here, we model how relative food item size may affect optimal tooth cusp radius of curvature (RoC) during the fracture of brittle food items using a parametric finite-element (FE) model of a four-cusped molar. Morphospaces were created for four different food item sizes by altering cusp RoCs to determine whether optimal tooth shape changed as food item size changed. The morphospaces were also used to investigate whether variation in efficiency metrics (i.e. stresses, energy and optimality) changed as food item size changed. We found that optimal tooth shape changed as food item size changed, but that all optimal morphologies were similar, with one dull cusp that promoted high stresses in the food item and three cusps that acted to stabilize the food item. There were also positive relationships between food item size and the coefficients of variation for stresses in food item and optimality, and negative relationships between food item size and the coefficients of variation for stresses in the enamel and strain energy absorbed by the food item. These results suggest that relative food item size may play a role in selecting for optimal tooth shape, and the magnitude of these selective forces may change depending on food item size and which efficiency metric is being selected. PMID:25320068

  14. The impact of case mix on timely access to appointments in a primary care group practice.

    PubMed

    Ozen, Asli; Balasubramanian, Hari

    2013-06-01

    At the heart of the practice of primary care is the concept of a physician panel. A panel refers to the set of patients for whose long-term, holistic care the physician is responsible. A physician's appointment burden is determined by the size and composition of the panel. Size refers to the number of patients in the panel, while composition refers to the case mix, or the type of patients (older versus younger, healthy versus chronic), in the panel. In this paper, we quantify the impact of panel size and case mix on the ability of a multi-provider practice to provide adequate access to its empanelled patients. We use overflow frequency, or the probability that demand exceeds capacity, as a measure of access. We formulate the problem of minimizing the maximum overflow for a multi-physician practice as a non-linear integer programming problem and establish structural insights that enable us to create simple yet near-optimal heuristic strategies to change panels. This optimization framework helps a practice: (1) quantify the imbalances across physicians due to the variation in case mix and panel size, and the resulting effect on access; and (2) determine how panels can be altered in the least disruptive way to improve access. We illustrate our methodology using four test practices created using patient-level data from the primary care practice at Mayo Clinic, Rochester, Minnesota. An important advantage of our approach is that it can be implemented in an Excel spreadsheet and used for aggregate-level planning and panel management decisions.
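    A minimal sketch of the overflow-frequency measure: treat each empanelled patient as an independent Bernoulli appointment request per day, sum by case-mix group, and approximate the probability that daily demand exceeds capacity with a normal distribution. Panel composition, request probabilities, and capacity below are hypothetical.

```python
import numpy as np
from scipy import stats

# hypothetical panel: case-mix group -> (count, daily request probability)
panel = {"healthy_young": (900, 0.005), "chronic_older": (300, 0.03)}
capacity = 15  # appointment slots per day

mean = sum(n * p for n, p in panel.values())
var = sum(n * p * (1 - p) for n, p in panel.values())

# overflow frequency: P(demand > capacity), with continuity correction
overflow = 1 - stats.norm.cdf(capacity + 0.5, loc=mean, scale=np.sqrt(var))
print(f"P(demand > capacity) ~ {overflow:.3f}")
```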

  15. Optimized theory for simple and molecular fluids.

    PubMed

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential, which is shown to be an efficient tool for estimating, from first principles, the numerical values of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  16. A Simple Artificial Life Model Explains Irrational Behavior in Human Decision-Making

    PubMed Central

    Feher da Silva, Carolina; Baldo, Marcus Vinícius Chrysóstomo

    2012-01-01

    Although praised for their rationality, humans often make poor decisions, even in simple situations. In the repeated binary choice experiment, an individual has to choose repeatedly between the same two alternatives, where a reward is assigned to one of them with fixed probability. The optimal strategy is to perseverate with choosing the alternative with the best expected return. Whereas many species perseverate, humans tend to match the frequencies of their choices to the frequencies of the alternatives, a sub-optimal strategy known as probability matching. Our goal was to find the primary cognitive constraints under which a set of simple evolutionary rules can lead to such contrasting behaviors. We simulated the evolution of artificial populations, wherein the fitness of each animat (artificial animal) depended on its ability to predict the next element of a sequence made up of a repeating binary string of varying size. When the string was short relative to the animats’ neural capacity, they could learn it and correctly predict the next element of the sequence. When it was long, they could not learn it, turning to the next best option: to perseverate. Animats from the last generation then performed the task of predicting the next element of a non-periodical binary sequence. We found that, whereas animats with smaller neural capacity kept perseverating with the best alternative as before, animats with larger neural capacity, which had previously been able to learn the pattern of repeating strings, adopted probability matching, being outperformed by the perseverating animats. Our results demonstrate how the ability to make predictions in an environment endowed with regular patterns may lead to probability matching under less structured conditions. They point to probability matching as a likely by-product of adaptive cognitive strategies that were crucial in human evolution, but may lead to sub-optimal performances in other environments. PMID:22563454

  17. A simple artificial life model explains irrational behavior in human decision-making.

    PubMed

    Feher da Silva, Carolina; Baldo, Marcus Vinícius Chrysóstomo

    2012-01-01

    Although praised for their rationality, humans often make poor decisions, even in simple situations. In the repeated binary choice experiment, an individual has to choose repeatedly between the same two alternatives, where a reward is assigned to one of them with fixed probability. The optimal strategy is to perseverate with choosing the alternative with the best expected return. Whereas many species perseverate, humans tend to match the frequencies of their choices to the frequencies of the alternatives, a sub-optimal strategy known as probability matching. Our goal was to find the primary cognitive constraints under which a set of simple evolutionary rules can lead to such contrasting behaviors. We simulated the evolution of artificial populations, wherein the fitness of each animat (artificial animal) depended on its ability to predict the next element of a sequence made up of a repeating binary string of varying size. When the string was short relative to the animats' neural capacity, they could learn it and correctly predict the next element of the sequence. When it was long, they could not learn it, turning to the next best option: to perseverate. Animats from the last generation then performed the task of predicting the next element of a non-periodical binary sequence. We found that, whereas animats with smaller neural capacity kept perseverating with the best alternative as before, animats with larger neural capacity, which had previously been able to learn the pattern of repeating strings, adopted probability matching, being outperformed by the perseverating animats. Our results demonstrate how the ability to make predictions in an environment endowed with regular patterns may lead to probability matching under less structured conditions. They point to probability matching as a likely by-product of adaptive cognitive strategies that were crucial in human evolution, but may lead to sub-optimal performances in other environments.
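    The gap between the two strategies in these records is easy to reproduce: with reward probability p = 0.7, perseveration wins a fraction p of trials, while probability matching wins only p^2 + (1-p)^2 = 0.58. A minimal simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
p, trials = 0.7, 100_000
rewards = rng.random(trials) < p      # alternative A rewarded with prob p

persev = rewards.mean()               # perseveration: always choose A

choose_a = rng.random(trials) < p     # matching: choose A with prob p
match = np.where(choose_a, rewards, ~rewards).mean()

print(f"perseveration: {persev:.3f}")  # ~0.70
print(f"matching:      {match:.3f}")   # ~0.58
```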

  18. An intracellular analysis of the visual responses of neurones in cat visual cortex.

    PubMed Central

    Douglas, R J; Martin, K A; Whitteridge, D

    1991-01-01

    1. Extracellular and intracellular recordings were made from neurones in the visual cortex of the cat in order to compare the subthreshold membrane potentials, reflecting the input to the neurone, with the output from the neurone seen as action potentials. 2. Moving bars and edges, generated under computer control, were used to stimulate the neurones. The membrane potential was digitized and averaged for a number of trials after stripping the action potentials. Comparison of extracellular and intracellular discharge patterns indicated that the intracellular impalement did not alter the neurones' properties. Input resistance of the neurone altered little during stable intracellular recordings (30 min-2 h 50 min). 3. Intracellular recordings showed two distinct patterns of membrane potential changes during optimal visual stimulation. The patterns corresponded closely to the division of S-type (simple) and C-type (complex) receptive fields. Simple cells had a complex pattern of membrane potential fluctuations, involving depolarizations alternating with hyperpolarizations. Complex cells had a simple single sustained plateau of depolarization that was often followed but not preceded by a hyperpolarization. In both simple and complex cells the depolarizations led to action potential discharges. The hyperpolarizations were associated with inhibition of action potential discharge. 4. Stimulating simple cells with non-optimal directions of motion produced little or no hyperpolarization of the membrane in most cases, despite a lack of action potential output. Directional complex cells always produced a single plateau of depolarization leading to action potential discharge in both the optimal and non-optimal directions of motion. The directionality could not be predicted on the basis of the position of the hyperpolarizing inhibitory potentials found in the optimal direction. 5. Stimulation of simple cells with non-optimal orientations occasionally produced slight hyperpolarizations and inhibition of action potential discharge. Complex cells, which had broader orientation tuning than simple cells, could show marked hyperpolarization for non-optimal orientations, but this was not generally the case. 6. The data do not support models of directionality and orientation that rely solely on strong inhibitory mechanisms to produce stimulus selectivity. PMID:1804981

  19. Heating-rate-induced porous α-Fe2O3 with controllable pore size and crystallinity grown on graphene for supercapacitors.

    PubMed

    Yang, Shuhua; Song, Xuefeng; Zhang, Peng; Gao, Lian

    2015-01-14

    Porous α-Fe2O3/graphene composites (S-PIGCs) have been synthesized by a simple hydrothermal method combined with a slow annealing route. As a supercapacitor electrode material, the S-PIGCs exhibit an ultrahigh specific capacitance of 343.7 F g(-1) at a current density of 3 A g(-1), good rate capability, and excellent cycling stability. The enhanced electrochemical performance is attributed to the combined contribution of the optimized architecture of the porous α-Fe2O3, a result of the slow annealing, and the extraordinary electrical conductivity of the graphene sheets.

  20. A low voltage submillisecond-response polymer network liquid crystal spatial light modulator

    NASA Astrophysics Data System (ADS)

    Sun, Jie; Wu, Shin-Tson; Haseba, Yasuhiro

    2014-01-01

    We report a low voltage and highly transparent polymer network liquid crystal (PNLC) with submillisecond response time. By employing a large dielectric anisotropy LC host JC-BP07N, we have lowered the V2π voltage to 23 V at λ = 514 nm. This will enable PNLC to be integrated with a high resolution liquid-crystal-on-silicon spatial light modulator, in which the maximum voltage is 24 V. A simple model correlating PNLC performance with its host LC is proposed and validated experimentally. By optimizing the domain size, we can achieve V2π < 15 V with some compromises in scattering and response time.

  1. Signal optimization in urban transport: A totally asymmetric simple exclusion process with traffic lights.

    PubMed

    Arita, Chikashi; Foulaadvand, M Ebrahim; Santen, Ludger

    2017-03-01

    We consider the exclusion process on a ring with time-dependent defective bonds, at which the hopping rate periodically switches between zero and one. This system models main roads in city traffic, intersected by perpendicular streets. We explore basic properties of the system, in particular the dependence of the vehicular flow on the signalization parameters as well as on the system size and the car density. We investigate various types of spatial distribution of the vehicular density, and show the existence of a shock profile. We also measure the waiting time behind traffic lights and examine its relationship with the traffic flow.

  2. Signal optimization in urban transport: A totally asymmetric simple exclusion process with traffic lights

    NASA Astrophysics Data System (ADS)

    Arita, Chikashi; Foulaadvand, M. Ebrahim; Santen, Ludger

    2017-03-01

    We consider the exclusion process on a ring with time-dependent defective bonds, at which the hopping rate periodically switches between zero and one. This system models main roads in city traffic, intersected by perpendicular streets. We explore basic properties of the system, in particular the dependence of the vehicular flow on the signalization parameters as well as on the system size and the car density. We investigate various types of spatial distribution of the vehicular density, and show the existence of a shock profile. We also measure the waiting time behind traffic lights and examine its relationship with the traffic flow.
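    A minimal simulation in the spirit of the model: a TASEP on a ring with random-sequential updates and a single bond that is blocked whenever the light is red. System size, density, and signal period below are arbitrary choices, not the parameter values studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
L, density, period, green = 200, 0.3, 60, 30
light_site = L // 2

occ = np.zeros(L, dtype=bool)
occ[rng.choice(L, int(density * L), replace=False)] = True

flow, steps = 0, 5_000
for t in range(steps):
    light_green = (t % period) < green
    for _ in range(L):  # random-sequential update: one sweep per time step
        i = rng.integers(L)
        j = (i + 1) % L
        # the bond into the signal site is defective while the light is red
        if j == light_site and not light_green:
            continue
        if occ[i] and not occ[j]:
            occ[i], occ[j] = False, True
            flow += 1
print("flow per site per step:", flow / (L * steps))
```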

  3. Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile

    NASA Astrophysics Data System (ADS)

    Halverson, Thomas; Poirier, Bill

    2015-03-01

    'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed, using two different methods: (1) a phase-space-truncated momentum-symmetrized Gaussian basis and (2) a correlated truncated harmonic oscillator basis. In both cases, a simple classical phase space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented into a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm-1.
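    The basis-truncation idea can be caricatured as an energy cutoff on a harmonic-oscillator product basis: keep only the product states whose zeroth-order energy lies below a threshold. The frequencies and cutoff below are illustrative, and the paper's actual selection uses a classical phase-space picture rather than this bare energy criterion.

```python
import itertools
import numpy as np

omega = np.array([0.3, 0.5, 0.9, 1.4])  # illustrative mode frequencies
cutoff = 6.0                             # zeroth-order energy threshold

basis = [n for n in itertools.product(range(16), repeat=len(omega))
         if np.dot(omega, np.array(n) + 0.5) < cutoff]
print(len(basis), "of", 16**len(omega), "product states kept")
```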

  4. Using Simple Environmental Variables to Estimate Biomass Disturbance

    DTIC Science & Technology

    2014-08-01

    ERDC/CERL TR-14-13, August 2014. Optimal Allocation of Land for Training and Non-Training Uses (OPAL): Using Simple Environmental Variables to Estimate Biomass Disturbance. Natalie Myers, Daniel Koch... Development of the Optimal Allocation of Land for Training and Non-Training Uses (OPAL) program was undertaken to meet this need. This phase of work...

  5. Transverse mode selection in vertical-cavity surface-emitting lasers via deep impurity-induced disordering

    NASA Astrophysics Data System (ADS)

    O'Brien, Thomas R.; Kesler, Benjamin; Dallesasse, John M.

    2017-02-01

    Top-emitting 850-nm vertical-cavity surface-emitting lasers (VCSELs) demonstrating transverse mode selection via impurity-induced disordering (IID) are presented. The IID apertures are fabricated via closed-ampoule zinc diffusion. A simple 1-D plane-wave model based on the intermixing of Group III atoms during IID is presented to optimize the mirror loss of higher-order modes as a function of IID strength and depth. In addition, the impact of impurity diffusion into the cap layer of the lasers is shown to improve contact resistance. Further investigation of the mode-dependent characteristics of the device implies an increase in the thermal impedance associated with the fraction of IID contained within the oxide aperture. The optimal ratio of the IID aperture to the oxide aperture is experimentally determined: single fundamental mode output of 1.6 mW with a 30 dB side-mode suppression ratio is achieved by a 3.0 μm oxide-confined device with an IID aperture of 1.3 μm, indicating an optimal IID aperture size of 43% of the oxide aperture.

  6. Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design

    NASA Technical Reports Server (NTRS)

    Li, Wu; Robinson, Jay

    2016-01-01

    This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
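    The split-queue half of the ROAM idea can be sketched with a single priority queue keyed on a per-triangle error bound: repeatedly split the worst triangle until the triangle budget is met. The merge queue, frame-to-frame coherence, and the real nested error bounds are omitted, and the error model below is a placeholder.

```python
import heapq
import itertools

counter = itertools.count()  # tie-breaker so the heap never compares dicts

def error(tri):
    # placeholder for a view-dependent screen-space error bound
    return tri["geom_error"] / (1 + tri["depth"])

def split(tri):
    # bintree split at the hypotenuse midpoint (geometry elided)
    return [{"depth": tri["depth"] + 1, "geom_error": tri["geom_error"] / 2}
            for _ in range(2)]

def refine(root_tris, budget):
    heap = [(-error(t), next(counter), t) for t in root_tris]
    heapq.heapify(heap)
    count = len(heap)
    while count < budget and heap:
        _, _, worst = heapq.heappop(heap)
        for child in split(worst):
            heapq.heappush(heap, (-error(child), next(counter), child))
        count += 1  # each split removes one triangle and adds two
    return [t for _, _, t in heap]

mesh = refine([{"depth": 0, "geom_error": 1.0} for _ in range(4)], budget=64)
print(len(mesh), "triangles")
```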

  8. Optimization on condition of epigallocatechin-3-gallate (EGCG) nanoliposomes by response surface methodology and cellular uptake studies in Caco-2 cells

    NASA Astrophysics Data System (ADS)

    Luo, Xiaobo; Guan, Rongfa; Chen, Xiaoqiang; Tao, Miao; Ma, Jieqing; Zhao, Jin

    2014-06-01

    The major component of green tea polyphenols, epigallocatechin-3-gallate (EGCG), has been demonstrated to prevent carcinogenesis. To improve the effectiveness of EGCG, liposomes were used as a carrier in this study. The reverse-phase evaporation method, combined with response surface methodology, is a simple, rapid, and effective approach for liposome preparation and optimization. The optimal preparation conditions were as follows: phosphatidylcholine-to-cholesterol ratio of 4.00, EGCG concentration of 4.88 mg/mL, Tween 80 concentration of 1.08 mg/mL, and rotary evaporation temperature of 34.51°C. Under these conditions, the experimental encapsulation efficiency and size of the EGCG nanoliposomes were 85.79% ± 1.65% and 180 nm ± 4 nm, close to the predicted values. The malondialdehyde value and the in vitro release test indicated that the prepared EGCG nanoliposomes were stable and suitable for widespread application. Furthermore, compared with free EGCG, encapsulation enhanced its inhibitory effect on tumor cell viability at higher concentrations.
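    The response-surface step can be sketched as fitting a full quadratic model to measurements on a coded two-factor design and solving for the stationary point of the fitted surface. The factors, design, and response values below are synthetic stand-ins, not the study's data (which used four factors).

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic response on a 3x3 coded design; true optimum at (0.4, 0.2)
X = np.array([[x1, x2] for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)], float)
y = (85 - 3 * (X[:, 0] - 0.4) ** 2 - 5 * (X[:, 1] - 0.2) ** 2
     + rng.normal(0, 0.3, len(X)))

# full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# stationary point: solve grad(fitted surface) = 0
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
print("optimum (coded units):", np.linalg.solve(H, -np.array([b[1], b[2]])))
```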

  9. Optimizing stellarator coil winding surfaces with Regcoil

    NASA Astrophysics Data System (ADS)

    Bader, Aaron; Landreman, Matt; Anderson, David; Hegna, Chris

    2017-10-01

    We show initial attempts at optimizing a coil winding surface using the Regcoil code [1] for selected quasi-helically symmetric equilibria. We implement a generic optimization scheme that varies the winding surface to improve diagnostic access and permit flexible divertor solutions. Regcoil and similar coil-solving algorithms require a user-input winding surface on which the coils lie. Simple winding surfaces created by uniformly expanding the plasma boundary may not be ideal. Engineering constraints on reactor design require a coil-plasma separation sufficient for the introduction of neutron shielding and a tritium-generating blanket; this distance can be the limiting factor in determining reactor size. Furthermore, expanding the coils in other regions, where possible, can be useful for diagnostic and maintenance access, along with providing sufficient room for a divertor. We minimize a target function that includes as constraints the minimum coil-plasma distance, the winding-surface volume, and the normal magnetic field on the plasma boundary. Results are presented for two quasi-symmetric equilibria at different aspect ratios. Work supported by the US DOE under Grant DE-FG02-93ER54222.

  10. The Friction Force Determination of Large-Sized Composite Rods in Pultrusion

    NASA Astrophysics Data System (ADS)

    Grigoriev, S. N.; Krasnovskii, A. N.; Kazakov, I. A.

    2014-08-01

    Simple pull-force models of the pultrusion process are not suitable for large-sized rods, because they do not consider the chemical shrinkage and thermal expansion acting in the cured material inside the die. Yet the pulling force on the resin-impregnated fibers as they travel through the heated die is an essential factor in the pultrusion process. In order to minimize the number of trial-and-error experiments, a new mathematical approach to determining the frictional force is presented. The governing equations of the model are stated in general terms, and various simplifications are implemented in order to obtain solutions without extensive numerical effort. The influence of different pultrusion parameters on the frictional force is investigated. The results obtained with the model can establish a foundation by which process control parameters are selected to achieve an appropriate pull force, and can be used to optimize the pultrusion process.

  11. Headspace Single-Drop Microextraction Gas Chromatography Mass Spectrometry for the Analysis of Volatile Compounds from Herba Asari

    PubMed Central

    Wang, Guan-Jie; Tian, Li; Fan, Yu-Ming; Qi, Mei-Ling

    2013-01-01

    A rapid headspace single-drop microextraction gas chromatography mass spectrometry (SDME-GC-MS) method for the analysis of the volatile compounds in Herba Asari was developed in this study. The extraction solvent, extraction temperature and time, sample amount, and particle size were optimized. A mixed solvent of n-tridecane and butyl acetate (1:1) was finally used for the extraction, with a sample amount of 0.750 g and 100-mesh particle size, at 70°C for 15 min. Under the determined conditions, the ground samples of Herba Asari were directly applied for the analysis. The results showed that the SDME-GC-MS method is a simple, effective, and inexpensive way to measure the volatile compounds in Herba Asari and could be used for the analysis of volatile compounds in Chinese medicine. PMID:23607049

  12. Sensitivity optimization of ZnO clad-modified optical fiber humidity sensor by means of tuning the optical fiber waist diameter

    NASA Astrophysics Data System (ADS)

    Azad, Saeed; Sadeghi, Ebrahim; Parvizi, Roghaieh; Mazaheri, Azardokht; Yousefi, M.

    2017-05-01

    In this work, the effect of multimode optical fiber size on the performance of a clad-modified ZnO-nanorod relative humidity (RH) sensor was experimentally investigated. A simple, controlled chemical etching method with online monitoring was used to prepare fibers with different waist diameters over a length of 15 mm. More precisely, the variation of sensor performance with fiber waist diameter was studied to find the size that maximizes the evanescent field. The results revealed that the evanescent-wave absorption coefficient (γ) was enhanced more than 10 times compared to the bare fiber at the proposed optimum fiber diameter of 28 μm. High linearity and a fast recovery time of about 7 s were also obtained at this waist diameter. These features allow the proposed sensor to be used for humidity sensing applications, especially in remote sensing technologies.

  13. Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs

    NASA Astrophysics Data System (ADS)

    Mawson, Mark J.; Revell, Alistair J.

    2014-10-01

    The lattice Boltzmann method (LBM) for solving fluid flow is naturally well suited to an efficient implementation for massively parallel computing, due to the prevalence of local operations in the algorithm. This paper presents and analyses the performance of a 3D lattice Boltzmann solver, optimized for third-generation nVidia GPU hardware, also known as 'Kepler'. We provide a review of previous optimization strategies and analyse data read/write times for different memory types. In LBM, the time propagation step (known as streaming) involves shifting data to adjacent locations and is central to parallel performance. Here we examine three approaches that make use of different hardware options: two exploit 'performance-enhancing' features of the GPU, shared memory and the new shuffle instruction found in Kepler-based GPUs, and these are compared to a standard transfer of data that relies instead on optimized storage to increase coalesced access. It is shown that the simplest approach is the most efficient: because LBM requires large numbers of registers per thread, the block size is limited and the benefit of these special features is reduced. Detailed results are obtained for a D3Q19 LBM solver, which is benchmarked on nVidia K5000M and K20C GPUs. In the latter case the use of a read-only data cache is explored, and a peak performance of over 1036 Million Lattice Updates Per Second (MLUPS) is achieved. The appearance of a periodic bottleneck in the solver performance is also reported, believed to be hardware related; spikes in iteration time occur with a frequency of around 11 Hz on both GPUs, independent of the size of the problem.
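    The streaming step discussed above is, at heart, a per-velocity shift of the distribution functions. A numpy stand-in for the GPU kernels is shown below with the D3Q19 velocity set and periodic boundaries; the paper's contribution concerns how this shift is realized in GPU memory, which numpy hides.

```python
import numpy as np

# D3Q19 velocity set: rest particle, 6 face and 12 edge directions
c = np.array([[0, 0, 0],
              [1, 0, 0], [-1, 0, 0], [0, 1, 0],
              [0, -1, 0], [0, 0, 1], [0, 0, -1],
              [1, 1, 0], [-1, -1, 0], [1, -1, 0], [-1, 1, 0],
              [1, 0, 1], [-1, 0, -1], [1, 0, -1], [-1, 0, 1],
              [0, 1, 1], [0, -1, -1], [0, 1, -1], [0, -1, 1]])

def stream(f):
    # periodic streaming: shift each population along its lattice velocity
    for q in range(19):
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1, 2))
    return f

f = np.random.default_rng(6).random((19, 32, 32, 32))  # dummy populations
f = stream(f)
print(f.shape)
```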

  14. Predicting the response of seven Asian glaciers to future climate scenarios using a simple linear glacier model

    NASA Astrophysics Data System (ADS)

    Ren, Diandong; Karoly, David J.

    2008-03-01

    Observations from seven Central Asian glaciers (35-55°N; 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of the glacier length variations. The glacier model is based on first-order glacier dynamics and requires knowledge of reference states of forcing and glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate the glacier length variations until 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ~4 km). Although some small glaciers will lose nearly one-third of their year-2000 length, the existence of the glaciers studied here is not threatened by the year 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.
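    A first-order linear glacier model of the kind described reduces to a single relaxation equation for the length, dL/dt = -(L - L_eq(T)) / tau. The sketch below integrates it with an illustrative response time, sensitivity, and warming ramp, not the calibrated parameters for the seven glaciers.

```python
import numpy as np

tau = 30.0      # response time, years (illustrative)
sens = -2000.0  # equilibrium length change per kelvin of warming, m/K
L0 = 20_000.0   # reference length, m

years = np.arange(2000, 2061)
dT = 0.03 * (years - 2000)  # hypothetical warming ramp, K

L = np.empty(years.size)
L[0] = L0
for k in range(1, years.size):
    L_eq = L0 + sens * dT[k]
    L[k] = L[k - 1] - (L[k - 1] - L_eq) / tau  # explicit Euler, dt = 1 yr
print(f"retreat by 2060: {L0 - L[-1]:.0f} m")
```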

  15. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background: Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results: To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions: We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725

  16. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
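    The role of the beta-binomial can be sketched as follows: pick beta parameters matching a target prevalence and intra-cluster correlation, then read off the two misclassification risks of a decision rule (n, d). This collapses the two-stage design into a single exchangeable cluster-correlated sample, so it is only a caricature of the paper's method, and all numbers are illustrative.

```python
from scipy.stats import betabinom

# rule: accept the lot if at least d of n sampled records are correct
n, d = 60, 51
p_low, p_high, rho = 0.75, 0.90, 0.1  # bad lot, good lot, ICC

def ab(p):
    # beta parameters with mean p and intra-cluster correlation rho
    s = (1 - rho) / rho
    return p * s, (1 - p) * s

a, b = ab(p_high)
alpha = betabinom.cdf(d - 1, n, a, b)       # risk: reject a good lot
a, b = ab(p_low)
beta = 1 - betabinom.cdf(d - 1, n, a, b)    # risk: accept a bad lot
print(f"alpha ~ {alpha:.3f}, beta ~ {beta:.3f}")
```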

  17. The Impact of Heterogeneous Thresholds on Social Contagion with Multiple Initiators

    PubMed Central

    Karampourniotis, Panagiotis D.; Sreenivasan, Sameet; Szymanski, Boleslaw K.; Korniss, Gyorgy

    2015-01-01

    The threshold model is a simple but classic model of contagion spreading in complex social systems. To capture the complex nature of social influencing, we investigate numerically and analytically the transition in the behavior of threshold-limited cascades in the presence of multiple initiators as the distribution of thresholds is varied between the two extreme cases of identical thresholds and a uniform distribution. We accomplish this by employing a truncated normal distribution of the nodes’ thresholds and observe a non-monotonic change in the cascade size as we vary the standard deviation. Further, for a sufficiently large spread in the threshold distribution, the tipping-point behavior of the social influencing process disappears and is replaced by a smooth crossover governed by the size of the initiator set. We demonstrate that for a given size of the initiator set, there is a specific variance of the threshold distribution for which an opinion spreads optimally. Furthermore, in the case of synthetic graphs we show that the spread asymptotically becomes independent of the system size, and that global cascades can arise just by the addition of a single node to the initiator set. PMID:26571486
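    A minimal simulation of the setup described: truncated-normal thresholds, a fixed initiator set, and synchronous activation whenever a node's active-neighbor fraction reaches its threshold. The crude directed random-neighbor graph below is a stand-in for the networks studied; the mean threshold, sizes, and seed count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 10_000, 10  # nodes and neighbors per node

def cascade_fraction(sigma, n_seed=50, mu=0.2):
    thr = np.clip(rng.normal(mu, sigma, N), 0, 1)  # truncated-normal thresholds
    nbrs = rng.integers(0, N, size=(N, K))          # crude random neighbors
    active = np.zeros(N, dtype=bool)
    active[rng.choice(N, n_seed, replace=False)] = True
    while True:
        frac = active[nbrs].mean(axis=1)            # active-neighbor fraction
        new = (~active) & (frac >= thr)
        if not new.any():
            return active.mean()
        active |= new

for sigma in (0.0, 0.1, 0.2, 0.4):
    print(f"sigma={sigma:.1f}: cascade fraction {cascade_fraction(sigma):.3f}")
```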

  18. Unleashing the Power of Distributed CPU/GPU Architectures: Massive Astronomical Data Analysis and Visualization Case Study

    NASA Astrophysics Data System (ADS)

    Hassan, A. H.; Fluke, C. J.; Barnes, D. G.

    2012-09-01

    Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single-machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with the goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a “software as a service” manner will reduce the total cost of ownership, provide an easy-to-use tool for the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.

  19. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward to use in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to express the level of agreement under a certain marginal prevalence as a simple proportion of agreement rather than as a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic, and the corresponding nomogram was developed using SAS 9.3. The resulting formulae and nomograms, based on a simple proportion of agreement instead of a kappa statistic, eliminate the inconvenience of evaluating a mathematical formula by hand. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures.
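    The paradox the authors mention is easy to reproduce: the same overall proportion of agreement yields wildly different kappas depending on the marginal prevalence. A small worked example:

```python
def kappa(po, p1, p2):
    # Cohen's kappa from the overall agreement po and the two raters'
    # positive-rating prevalences p1 and p2
    pe = p1 * p2 + (1 - p1) * (1 - p2)  # chance agreement
    return (po - pe) / (1 - pe)

# identical 90% observed agreement, very different kappas:
print(kappa(0.90, 0.50, 0.50))  # balanced prevalence -> 0.80
print(kappa(0.90, 0.95, 0.95))  # skewed prevalence   -> ~-0.05
```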

  20. People adopt optimal policies in simple decision-making, after practice and guidance.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2017-04-01

    Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.

  1. A computational method for optimizing fuel treatment locations

    Treesearch

    Mark A. Finney

    2006-01-01

    Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...

  2. Fisher information and asymptotic normality in system identification for quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guta, Madalin

    2011-06-15

    This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ=0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.

  3. Various supercritical carbon dioxide cycle layouts study for molten carbonate fuel cell application

    NASA Astrophysics Data System (ADS)

    Bae, Seong Jun; Ahn, Yoonhan; Lee, Jekyoung; Lee, Jeong Ik

    2014-12-01

    Various supercritical carbon dioxide (S-CO2) cycles for the power conversion system of a Molten Carbonate Fuel Cell (MCFC) hybrid system are studied in this paper. The re-compressing Brayton (RCB), simple recuperated Brayton (SRB), and simple recuperated transcritical (SRT) cycle layouts were selected as candidates for this study. In addition, a novel S-CO2 cycle concept that combines the Brayton and Rankine cycles is proposed and studied intensively alongside the other S-CO2 layouts. A parametric study is performed to make the total system compact and to achieve a wider operating range. The performance of each S-CO2 cycle is compared in terms of thermal efficiency, net electricity of the MCFC hybrid system, and approximate total cycle volume. As a result, the performance and total physical size of the S-CO2 cycles can be better understood for the MCFC S-CO2 hybrid system; notably, the newly suggested S-CO2 cycle shows promising results.

  4. GaAs MOEMS Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SPAHN, OLGA B.; GROSSETETE, GRANT D.; CICH, MICHAEL J.

    2003-03-01

    Many MEMS-based components require optical monitoring techniques using optoelectronic devices for converting mechanical position information into useful electronic signals. While the constituent piece-parts of such hybrid opto-MEMS components can be separately optimized, the resulting component performance, size, ruggedness and cost are substantially compromised due to assembly and packaging limitations. GaAs MOEMS offers the possibility of monolithically integrating high-performance optoelectronics with simple mechanical structures built in very low-stress epitaxial layers, with a resulting component performance determined only by GaAs microfabrication technology limitations. GaAs MOEMS implicitly integrates the capability for radiation-hardened optical communications into the MEMS sensor or actuator component, a vital step towards rugged integrated autonomous microsystems that sense, act, and communicate. This project establishes a new foundational technology that monolithically combines GaAs optoelectronics with simple mechanics. Critical process issues addressed include selectivity, electrochemical characteristics, and anisotropy of the release chemistry, and post-release drying and coating processes. Several types of devices incorporating this novel technology are demonstrated.

  5. Fast and Simple Microwave Synthesis of TiO2/Au Nanoparticles for Gas-Phase Photocatalytic Hydrogen Generation.

    PubMed

    May-Masnou, Anna; Soler, Lluís; Torras, Miquel; Salles, Pol; Llorca, Jordi; Roig, Anna

    2018-01-01

    The fabrication of small anatase titanium dioxide (TiO2) nanoparticles (NPs) attached to larger anisotropic gold (Au) morphologies by a very fast and simple two-step microwave-assisted synthesis is presented. The TiO2/Au NPs are synthesized using polyvinylpyrrolidone (PVP) as reducing, capping and stabilizing agent through a polyol approach. To optimize the contact between the titania and the gold and facilitate electron transfer, the PVP is removed by calcination at mild temperatures. The nanocatalysts' activity is then evaluated in the photocatalytic production of hydrogen from water/ethanol mixtures in the gas phase at ambient temperature. A maximum value of 5.3 mmol [Formula: see text] h(-1) (7.4 mmol [Formula: see text] h(-1)) of hydrogen is recorded for the system with larger gold particles at an optimum calcination temperature of 450°C. Herein we demonstrate that TiO2-based photocatalysts with high Au loading and large Au particle size (≈50 nm) have photocatalytic activity.

  6. Effects of Moisture and Particle Size on Quantitative Determination of Total Organic Carbon (TOC) in Soils Using Near-Infrared Spectroscopy.

    PubMed

    Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe

    2017-10-17

    Near-infrared spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including the determination of total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals are of great interest for understanding and optimizing prediction capability and for setting up a robust and reliable calibration model, with the future perspective of application in the field. Spectra of 46 soil samples were collected. The soil samples were divided into three data sets: unprocessed; only dried; and dried, ground, and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods, including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied, for a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 in partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the tested sources of sample variability and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. We thus demonstrate that sample preprocessing has only a minor influence on the quality of near-infrared (NIR) predictions, laying the ground for a direct and fast in situ application of the method: data can be acquired outside the laboratory, since the method is simple and needs no more than a simple band ratio of the spectra.
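    A minimal sketch of the winning preprocessing chain, SNV followed by a Savitzky-Golay second derivative, feeding a PLS regression. Random arrays stand in for the 46 spectra and the measured TOC values, and the window, polynomial order, and component count are arbitrary choices.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    # standard normal variate: per-spectrum centering and scaling
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

rng = np.random.default_rng(5)
X = rng.random((46, 700))  # stand-in for 46 NIR spectra
y = rng.random(46)         # stand-in for measured TOC values

Xp = savgol_filter(snv(X), window_length=15, polyorder=2, deriv=2, axis=1)
model = PLSRegression(n_components=8).fit(Xp, y)
print("R^2 on training data:", model.score(Xp, y))
```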

  7. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    PubMed

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, together with the ensuing mortality schedule, provide a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and a life-history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
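    The energetic optimal size argument can be made concrete by giving intake and cost simple allometric forms and maximizing the surplus. The exponents and coefficients below are illustrative only, not fitted values for any species.

```python
import numpy as np

# intake scaling roughly with feeding-surface area (m**(2/3)) and
# metabolic cost with mass (m**1) -- illustrative choices
a, b = 2 / 3, 1.0
ci, cc = 5.0, 1.0

m = np.linspace(0.01, 200, 10_000)
surplus = ci * m**a - cc * m**b
print(f"energetic optimal size ~ {m[surplus.argmax()]:.1f}")
# analytic check: m_opt = (a*ci / (b*cc))**(1/(b-a)) = (10/3)**3 ~ 37.0
```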

  8. Dynamical stability of the one-dimensional rigid Brownian rotator: the role of the rotator’s spatial size and shape

    NASA Astrophysics Data System (ADS)

    Jeknić-Dugić, Jasmina; Petrović, Igor; Arsenijević, Momir; Dugić, Miroljub

    2018-05-01

    We investigate the dynamical stability of a single propeller-shaped molecular cogwheel modelled as a fixed-axis rigid rotator. In realistic situations, rotation of the finite-size cogwheel is subject to the environmentally induced Brownian-motion effect, which we describe using the quantum Caldeira-Leggett master equation. Assuming initially narrow (classical-like) standard deviations for the angle and the angular momentum of the rotator, we investigate the dynamics of the first and second moments as a function of the rotator's size, i.e. the number of blades, for both the free rotator and the rotator in an external harmonic field. The larger the standard deviations, the less stable (i.e. less predictable) the rotation. We find no simple and straightforward rules for assessing the rotator's stability. Instead, a number of size-related criteria appear whose combinations may provide optimal rules for the rotator's dynamical stability and, possibly, its control. In realistic situations, the quantum-mechanical corrections, albeit individually small, may prove effectively non-negligible, while also revealing the subtlety of the transition from quantum to classical dynamics of the rotator. As to the latter, we detect a strong size dependence of the transition to classical dynamics beyond the quantum decoherence process.

  9. Monitoring and localization of buried plastic natural gas pipes using passive RF tags

    NASA Astrophysics Data System (ADS)

    Mondal, Saikat; Kumar, Deepak; Ghazali, Mohd. Ifwat; Chahal, Prem; Udpa, Lalita; Deng, Yiming

    2018-04-01

    A passive harmonic radio frequency (RF) tag on the pipe with added sensing capabilities is proposed in this paper. Radio frequency identification (RFID) based tagging has already emerged as a potential solution for chemical sensing, location detection, animal tagging, etc. Harmonic transponders are already quite popular compared to conventional RFIDs due to their improved signal-to-noise ratio (SNR). However, the operating frequency, transmitted power, and tag efficiency become critical issues for underground RFIDs. In this paper, a comprehensive on-tag sensing, power budget, and frequency analysis is performed for buried harmonic tag design. Accurate tracking of infrastructure burial depth is proposed to reduce the probability of failure of underground pipelines. Burial depth is estimated from the phase of signals received at different frequencies, calculated using genetic algorithm (GA) based optimization in post-processing. A suitable frequency range is determined for a variety of soils with different moisture contents, under a small tag-antenna size constraint. Different types of harmonic tags, such as (1) Schottky diode and (2) nonlinear transmission line (NLTL) tags, were compared for underground applications. In this study, the power, frequency, and tag design have been optimized to achieve small antenna size, minimum signal loss, and a simple reader circuit for underground detection at up to 5 feet of depth in different soil media and moisture contents.

  10. Monopoly models with time-varying demand function

    NASA Astrophysics Data System (ADS)

    Cavalli, Fausto; Naimzada, Ahmad

    2018-05-01

    We study a family of monopoly models for markets characterized by time-varying demand functions, in which a boundedly rational agent chooses output levels on the basis of a gradient adjustment mechanism. After presenting the model in a generic framework, we analytically study the case of cyclically alternating demand functions. We show that both the perturbation size and the agent's reactivity to profitability variation signals can have counterintuitive effects on the resulting period-2 cycles and on their stability. In particular, increasing the perturbation size can have both a destabilizing and a stabilizing effect on the resulting dynamics. Moreover, in contrast with the case of time-constant demand functions, the agent's reactivity is not only destabilizing, but can improve stability, too. This means that a less cautious behavior can provide better performance, with respect both to stability and to achieved profits. We show that, even if the decision mechanism is very simple and not always able to provide the optimal production decisions, the achieved profits are very close to the optimal ones. Finally, we show that, in agreement with the existing empirical literature, the price series obtained by simulating the proposed model exhibit a significant deviation from normality and large volatility, particularly when the underlying deterministic dynamics become unstable and complex.
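    A minimal sketch of the gradient adjustment mechanism under a cyclically alternating linear demand p = a_t - b*q; all numbers are illustrative. After a transient, the output settles near a period-2 cycle whose points can be compared with the static optimum of each demand phase.

```python
# gradient adjustment: q(t+1) = q(t) + k * q(t) * marginal_profit(q(t))
b, c, k = 1.0, 0.5, 0.15       # demand slope, marginal cost, reactivity
a_cycle = (10.0, 8.0)          # demand intercept alternates each period

q = 2.0
for t in range(60):
    a = a_cycle[t % 2]
    marginal_profit = a - 2 * b * q - c
    q = max(q + k * q * marginal_profit, 0.0)
    if t >= 56:
        q_opt = (a - c) / (2 * b)  # static optimum for this demand phase
        print(f"t={t}: q={q:.3f} (static optimum {q_opt:.3f})")
```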

  11. The N-policy for an unreliable server with delaying repair and two phases of service

    NASA Astrophysics Data System (ADS)

    Choudhury, Gautam; Ke, Jau-Chuan; Tadj, Lotfi

    2009-09-01

    This paper deals with an MX/G/1 queueing system with an additional second phase of optional service and an unreliable server, whose operation involves a breakdown period and a delay period, under an N-policy. While the server is working on either phase of service, it may break down at any instant, and the service channel then fails for a short interval of time. The concept of a delay time is also introduced. If no customer arrives during the breakdown period, the server remains idle in the system until the queue size builds up to a threshold value N. As soon as the queue size reaches at least N, the server immediately begins the first phase of regular service for all the waiting customers; after its completion, only some of them receive the second, optional phase of service. We derive the queue size distribution at a random epoch and at a departure epoch, as well as various system performance measures. Finally, we derive a simple procedure to obtain the optimal stationary policy under a suitable linear cost structure.
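
    For intuition about the final step: the textbook M/G/1 N-policy without breakdowns or a second phase admits a closed-form answer under a linear cost structure. The sketch below uses that simpler model (holding cost h per customer per unit time and startup cost R per cycle are my notation), not the paper's full MX/G/1 analysis.

    ```python
    import math

    def optimal_N(lam, rho, R, h):
        """Textbook M/G/1 N-policy (no breakdowns, single service phase):
        cost rate TC(N) = h*(N-1)/2 + R*lam*(1-rho)/N.  The continuous
        minimizer is sqrt(2*R*lam*(1-rho)/h); check its two integer
        neighbours."""
        n_star = math.sqrt(2 * R * lam * (1 - rho) / h)
        tc = lambda N: h * (N - 1) / 2 + R * lam * (1 - rho) / N
        candidates = {max(1, math.floor(n_star)), math.ceil(n_star)}
        return min(candidates, key=tc)

    print(optimal_N(lam=2.0, rho=0.6, R=50.0, h=1.0))  # -> 9
    ```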

  12. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    PubMed

    Baumes, Laurent A

    2006-01-01

    One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous coverings, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets for which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the search space structure, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be modeled well. Evaluating new algorithms through benchmarks is compulsory, given the lack of prior proof of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of the sampling on future machine learning performance is also quantified. The minimum sample size required for the algorithm to be statistically discriminated from simple random sampling is investigated.
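
    The core allocation idea, shifting the screening budget toward zones whose outputs are still poorly pinned down, can be sketched in a few lines. This is a schematic reading of the abstract, not the published MAP algorithm.

    ```python
    import numpy as np

    def next_batch(y, cells, batch_size, rng=np.random.default_rng(0)):
        """Allocate the next screening batch across search-space zones.
        `cells` maps each already-screened point to a zone id; zones with
        higher output variance in `y` ('unsteady' zones) receive more of
        the budget."""
        ids = np.unique(cells)
        var = np.array([y[cells == i].var() for i in ids])
        weights = (var + 1e-12) / (var + 1e-12).sum()
        counts = rng.multinomial(batch_size, weights)
        return dict(zip(ids, counts))  # zone id -> number of new samples

    # toy usage: zone 1 is noisy, so it receives most of the budget
    y = np.array([0.1, 0.1, 0.1, 0.0, 0.9, 0.5])
    cells = np.array([0, 0, 0, 1, 1, 1])
    print(next_batch(y, cells, batch_size=10))
    ```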

  13. Simple optimized Brenner potential for thermodynamic properties of diamond

    NASA Astrophysics Data System (ADS)

    Liu, F.; Tang, Q. H.; Shang, B. S.; Wang, T. C.

    2012-02-01

    We have examined the commonly used Brenner potentials in the context of the thermodynamic properties of diamond. A simple optimized Brenner potential is proposed that provides very good predictions of the thermodynamic properties of diamond. It is shown that, compared to the experimental data, the lattice wave theory of molecular dynamics (LWT) with this optimized Brenner potential can accurately predict the temperature dependence of specific heat, lattice constant, Grüneisen parameters and coefficient of thermal expansion (CTE) of diamond.

  14. Optimal iodine staining of cardiac tissue for X-ray computed tomography.

    PubMed

    Butters, Timothy D; Castro, Simon J; Lowe, Tristan; Zhang, Yanmin; Lei, Ming; Withers, Philip J; Zhang, Henggui

    2014-01-01

    X-ray computed tomography (XCT) has been shown to be an effective imaging technique for a variety of materials. Due to the relatively low differential attenuation of X-rays in biological tissue, a high density contrast agent is often required to obtain optimal contrast. The contrast agent iodine potassium iodide (I2KI) has been used in several biological studies to augment XCT scanning. Recently I2KI was used in XCT scans of animal hearts to study cardiac structure and to generate 3D anatomical computer models. However, to date there has been no thorough study into the optimal use of I2KI as a contrast agent in cardiac muscle with respect to the staining times required, which have been shown to impact significantly upon the quality of results. In this study we address this issue by systematically scanning samples at various stages of the staining process. To achieve this, mouse hearts were stained for up to 58 hours and scanned at regular intervals of 6-7 hours throughout this process. Optimal staining was found to depend upon the thickness of the tissue; a simple empirical exponential relationship was derived to allow calculation of the required staining time for cardiac samples of an arbitrary size.
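
    Since only the functional form of the relationship is stated, a fit of an assumed t(d) = A·exp(k·d) to hypothetical (thickness, time) pairs shows how the required staining time for an arbitrary sample size would be computed; the data below are made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical (thickness mm, staining hours) pairs, not the paper's
    # data; the abstract only states that an exponential form fits.
    d = np.array([2.0, 4.0, 6.0, 8.0])
    t = np.array([6.0, 13.0, 27.0, 55.0])

    f = lambda d, A, k: A * np.exp(k * d)      # assumed empirical form
    (A, k), _ = curve_fit(f, d, t, p0=(5.0, 0.3))
    print(f"required staining time for 5 mm: {f(5.0, A, k):.1f} h")
    ```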

  15. Energetics and Self-Assembly of Amphipathic Peptide Pores in Lipid Membranes

    PubMed Central

    Zemel, Assaf; Fattal, Deborah R.; Ben-Shaul, Avinoam

    2003-01-01

    We present a theoretical study of the energetics, equilibrium size, and size distribution of membrane pores composed of electrically charged amphipathic peptides. The peptides are modeled as cylinders (mimicking α-helices) carrying different amounts of charge, with the charge being uniformly distributed over a hydrophilic face, defined by the angle subtended by polar amino acid residues. The free energy of a pore of a given radius, R, and a given number of peptides, s, is expressed as a sum of the peptides' electrostatic charging energy (calculated using Poisson-Boltzmann theory), and the lipid-perturbation energy associated with the formation of a membrane rim (which we model as being semitoroidal) in the gap between neighboring peptides. A simple phenomenological model is used to calculate the membrane perturbation energy. The balance between the opposing forces (namely, the radial free energy derivatives) associated with the electrostatic free energy that favors large R, and the membrane perturbation term that favors small R, dictates the equilibrium properties of the pore. Systematic calculations are reported for circular pores composed of various numbers of peptides, carrying different amounts of charge (1–6 elementary, positive charges) and characterized by different polar angles. We find that the optimal R's, for all (except, possibly, very weakly) charged peptides conform to the “toroidal” pore model, whereby a membrane rim larger than ∼1 nm intervenes between neighboring peptides. Only weakly charged peptides are likely to form “barrel-stave” pores where the peptides essentially touch one another. Treating pore formation as a two-dimensional self-assembly phenomenon, a simple statistical thermodynamic model is formulated and used to calculate pore size distributions. We find that the average pore size and size polydispersity increase with peptide charge and with the amphipathic polar angle. We also argue that the transition of peptides from the adsorbed to the inserted (membrane pore) state is cooperative and thus occurs rather abruptly upon a change in ambient conditions. PMID:12668433
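
    In symbols, the balance the abstract describes can be written as follows (notation mine):

    ```latex
    % Schematic form of the pore free energy: electrostatic charging of
    % the s peptides favors growth of the pore radius R, the lipid rim
    % term opposes it, and the equilibrium radius R* balances the two
    % radial derivatives.
    \[
      F(R,s) = F_{\mathrm{el}}(R,s) + F_{\mathrm{rim}}(R,s), \qquad
      \left.\frac{\partial F_{\mathrm{el}}}{\partial R}\right|_{R^{*}}
      = -\left.\frac{\partial F_{\mathrm{rim}}}{\partial R}\right|_{R^{*}} .
    \]
    ```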

  16. Some Marginalist Intuition Concerning the Optimal Commodity Tax Problem

    ERIC Educational Resources Information Center

    Brett, Craig

    2006-01-01

    The author offers a simple intuition that can be exploited to derive and to help interpret some canonical results in the theory of optimal commodity taxation. He develops and explores the principle that the marginal social welfare loss per marginal unit of tax revenue generated should be equalized across tax instruments. A simple two-consumer,…
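
    In symbols, the equalization principle reads (notation mine; W is social welfare, R is revenue, and the t_i are the tax instruments):

    ```latex
    % At an optimum, the marginal welfare loss per marginal unit of
    % revenue is the same common value lambda for every instrument:
    \[
      \frac{-\,\partial W/\partial t_i}{\partial R/\partial t_i} \;=\; \lambda
      \qquad \text{for all instruments } i .
    \]
    ```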

  17. Microstructure as a function of the grain size distribution for packings of frictionless disks: Effects of the size span and the shape of the distribution.

    PubMed

    Estrada, Nicolas; Oquendo, W F

    2017-10-01

    This article presents a numerical study of the effects of grain size distribution (GSD) on the microstructure of two-dimensional packings of frictionless disks. The GSD is described by a power law with two parameters controlling the size span and the shape of the distribution. First, several samples are built for each combination of these parameters. Then, by means of contact dynamics simulations, the samples are densified in oedometric conditions and sheared in a simple shear configuration. The microstructure is analyzed in terms of packing fraction, local ordering, connectivity, and force transmission properties. It is shown that the microstructure is markedly affected by both the size span and the shape of the GSD. These findings confirm recent observations regarding the size span of the GSD and extend previous works by describing the effects of the GSD shape. Specifically, we find that if the GSD shape is varied by increasing the proportion of small grains by a certain amount, it is possible to increase the packing fraction, increase coordination, and decrease the proportion of floating particles. Thus, by carefully controlling the GSD shape, it is possible to obtain systems that are denser and better connected, probably increasing the system's robustness and optimizing important strength properties such as stiffness, cohesion, and fragmentation susceptibility.
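
    A GSD of this kind is easy to generate for simulation work. The snippet below draws diameters from a truncated power law by inverse-CDF sampling; the exact parameterization of the paper's "size span" and "shape" may differ.

    ```python
    import numpy as np

    def sample_gsd(n, d_min, d_max, a, rng=np.random.default_rng(1)):
        """Draw n grain diameters from a truncated power-law density
        p(d) ~ d**(-a) on [d_min, d_max] by inverting the CDF.  The span
        d_max/d_min and the exponent a play the roles of the paper's
        size-span and shape parameters (assumed parameterization)."""
        u = rng.random(n)
        if np.isclose(a, 1.0):
            return d_min * (d_max / d_min) ** u
        b = 1.0 - a
        return (d_min**b + u * (d_max**b - d_min**b)) ** (1.0 / b)

    d = sample_gsd(10000, 0.1, 1.0, a=2.5)
    print(d.min(), d.max(), np.median(d))
    ```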

  18. Classification and treatment of periprosthetic supracondylar femur fractures.

    PubMed

    Ricci, William

    2013-02-01

    Locked plating and retrograde nailing are two accepted methods for treatment of periprosthetic distal femur fractures. Each has relative benefits and potential pitfalls. Appropriate patient selection and knowledge of the specific femoral component geometry are required to choose optimally between these two methods. Locked plating may be applied to most periprosthetic distal femur fractures. The fracture pattern, simple or comminuted, will dictate the specific plating technique, compression plating or bridge plating. Nailing requires an open intercondylar box and a distal fragment of sufficient size to allow interlocking. With proper patient selection and proper techniques, good results can be obtained with either method.

  19. Optimization of Wireless Power Transfer Systems Enhanced by Passive Elements and Metasurfaces

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-10-01

    This paper presents a rigorous optimization technique for wireless power transfer (WPT) systems enhanced by passive elements, ranging from simple reflectors and intermediate relays all the way to general electromagnetic guiding and focusing structures, such as metasurfaces and metamaterials. At its core is a convex semidefinite relaxation of the otherwise nonconvex optimization problem, whose tightness and optimality can be confirmed by a simple test of its solutions. The resulting method is rigorous, versatile, and general: it does not rely on any assumptions. As shown in various examples, it is able to efficiently and reliably optimize such WPT systems in order to find their physical performance limits and optimal operating parameters, and to inspect their working principles, even for a large number of active transmitters and passive elements.
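
    The relaxation step can be illustrated on a toy quadratic form using cvxpy. This is a generic semidefinite-relaxation sketch under assumed matrices A and B, not the paper's WPT formulation.

    ```python
    import numpy as np
    import cvxpy as cp

    # Toy stand-in: choose port excitations x to maximize a received-power
    # quadratic form x^T A x subject to a unit input-power constraint
    # x^T B x <= 1.  The nonconvex problem is relaxed by optimizing over
    # X = x x^T with the rank-one constraint dropped.
    rng = np.random.default_rng(0)
    n = 4
    M = rng.standard_normal((n, n))
    A = M @ M.T                      # received-power form (PSD)
    B = np.eye(n)                    # input-power form

    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(A @ X)),
                      [cp.trace(B @ X) <= 1])
    prob.solve()

    # Tightness test of the kind the abstract mentions: the relaxation is
    # exact when the optimal X is (numerically) rank one.
    eigvals = np.linalg.eigvalsh(X.value)
    print(prob.value, eigvals[-1] / eigvals.sum())  # ratio ~ 1 => rank one
    ```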

  20. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the use of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best-performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random-walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random-walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
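
    The experiment is easy to reproduce offline. The sketch below mirrors the tool's logic with a moving-average crossover as the "simple strategy" (my choice; the tool's actual strategy and parameters are not specified here).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def sharpe(returns):
        s = returns.std()
        return returns.mean() / s if s > 0 else 0.0

    def ma_crossover_returns(prices, fast, slow):
        """Daily P&L of a long/short moving-average crossover rule."""
        ma_f = np.convolve(prices, np.ones(fast) / fast, mode="valid")
        ma_s = np.convolve(prices, np.ones(slow) / slow, mode="valid")
        k = min(len(ma_f), len(ma_s))
        pos = np.sign(ma_f[-k:] - ma_s[-k:])          # +1 long, -1 short
        return pos[:-1] * np.diff(prices[-k:])

    # two independent random walks: 'in sample' and 'out of sample'
    walk = lambda: np.cumsum(rng.standard_normal(1000)) + 100.0
    in_sample, out_sample = walk(), walk()

    # exhaustively 'optimize' the two integer parameters in sample
    grid = [(f, s) for f in range(2, 20) for s in range(f + 1, 60)]
    best = max(grid, key=lambda p: sharpe(ma_crossover_returns(in_sample, *p)))

    print("in-sample sharpe :", sharpe(ma_crossover_returns(in_sample, *best)))
    print("out-sample sharpe:", sharpe(ma_crossover_returns(out_sample, *best)))
    # the in-sample figure looks great; out of sample it usually collapses
    ```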

  1. Design considerations for quasi-phase-matching in doubly resonant lithium niobate hexagonal micro-resonators

    NASA Astrophysics Data System (ADS)

    Sono, Tleyane J.; Riziotis, Christos; Mailis, Sakellaris; Eason, Robert W.

    2017-09-01

    The capability to fabricate high-optical-quality hexagonal superstructures by chemical etching of inverted ferroelectric domains in the lithium niobate platform suggests a route toward efficient, compact hexagonal microcavities. Such nonlinear optical hexagonal micro-resonators are proposed as a platform for second harmonic generation (SHG) via the combined mechanisms of total internal reflection (TIR) and quasi-phase-matching (QPM). The proposed scheme for SHG via TIR-QPM in a hexagonal microcavity can improve both the efficiency and the compactness of SHG devices compared to traditional linear-type devices. A simple theoretical model based on the six-bounce trajectory and the phase-matching conditions was capable of yielding the optimal cavity size. Furthermore, numerical simulations based on finite-difference time-domain beam-propagation analysis confirmed the obtained solutions by demonstrating resonant operation of the microcavity for the second harmonic wave produced by TIR-QPM. Design aspects, optimization issues and characteristics of the proposed nonlinear device are presented.

  2. Structure-Guided Design of EED Binders Allosterically Inhibiting the Epigenetic Polycomb Repressive Complex 2 (PRC2) Methyltransferase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingel, Andreas; Sendzik, Martin; Huang, Ying

    2017-01-12

    PRC2 is a multisubunit methyltransferase involved in epigenetic regulation of early embryonic development and cell growth. The catalytic subunit EZH2 methylates primarily lysine 27 of histone H3, leading to chromatin compaction and repression of tumor suppressor genes. Inhibiting this activity by small molecules targeting EZH2 was shown to result in antitumor efficacy. Here, we describe the optimization of a chemical series representing a new class of PRC2 inhibitors which acts allosterically via the trimethyllysine pocket of the noncatalytic EED subunit. Deconstruction of a larger and complex screening hit to a simple fragment-sized molecule followed by structure-guided regrowth and careful property modulation were employed to yield compounds which achieve submicromolar inhibition in functional assays and cellular activity. The resulting molecules can serve as a simplified entry point for lead optimization and can be utilized to study this new mechanism of PRC2 inhibition and the associated biology in detail.

  3. A stochastic differential equation model for the foraging behavior of fish schools.

    PubMed

    Tạ, Tôn Việt; Nguyen, Linh Thi Hoai

    2018-03-15

    Constructing models of living organisms locating food sources has important implications for understanding animal behavior and for the development of distribution technologies. This paper presents a novel simple model of stochastic differential equations for the foraging behavior of fish schools in a space including obstacles. The model is studied numerically. Three configurations of space with various food locations are considered. In the first configuration, fish swim in free but limited space. All individuals can find food with large probability while keeping their school structure. In the second and third configurations, they move in limited space with one and two obstacles, respectively. Our results reveal that the probability of foraging success is highest in the first configuration, and smallest in the third one. Furthermore, when school size increases up to an optimal value, the probability of foraging success tends to increase. When it exceeds an optimal value, the probability tends to decrease. The results agree with experimental observations.
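
    An Euler-Maruyama sketch conveys the flavor of such an SDE school model. The drift terms, sensing range, and constants below are illustrative assumptions, not the paper's equations.

    ```python
    import numpy as np

    def simulate_school(n=20, T=2000, dt=0.01, k_school=1.0, k_food=0.3,
                        sigma=0.5, seed=0):
        """Euler-Maruyama integration of a toy SDE school: each fish is
        pulled toward the school centroid (cohesion), weakly toward the
        food source once within sensing range, and perturbed by Brownian
        noise.  Returns the fraction of fish that found the food."""
        rng = np.random.default_rng(seed)
        food = np.array([2.0, 2.0])
        x = rng.standard_normal((n, 2))            # initial positions
        for _ in range(T):
            to_center = x.mean(axis=0) - x
            d_food = np.linalg.norm(x - food, axis=1, keepdims=True)
            to_food = (food - x) / (d_food + 1e-9) * (d_food < 4.0)
            drift = k_school * to_center + k_food * to_food
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((n, 2))
        return (np.linalg.norm(x - food, axis=1) < 1.0).mean()

    print(simulate_school())
    ```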

  4. Validation of ATR FT-IR to identify polymers of plastic marine debris, including those ingested by marine organisms.

    PubMed

    Jung, Melissa R; Horgen, F David; Orski, Sara V; Rodriguez C, Viviana; Beers, Kathryn L; Balazs, George H; Jones, T Todd; Work, Thierry M; Brignac, Kayla C; Royer, Sarah-Jeanne; Hyrenbach, K David; Jensen, Brenda A; Lynch, Jennifer M

    2018-02-01

    Polymer identification of plastic marine debris can help identify its sources, degradation, and fate. We optimized and validated a fast, simple, and accessible technique, attenuated total reflectance Fourier transform infrared spectroscopy (ATR FT-IR), to identify polymers contained in plastic ingested by sea turtles. Spectra of consumer good items with known resin identification codes #1-6 and several #7 plastics were compared to standard and raw manufactured polymers. High temperature size exclusion chromatography measurements confirmed that ATR FT-IR could differentiate these polymers. Discriminating high-density polyethylene (HDPE) from low-density polyethylene (LDPE) is challenging, but a clear step-by-step guide is provided that identified 78% of ingested PE samples. The optimal cleaning methods consisted of wiping ingested pieces with water or cutting them. Of 828 ingested plastic pieces from 50 Pacific sea turtles, 96% were identified by ATR FT-IR as HDPE, LDPE, unknown PE, polypropylene (PP), PE and PP mixtures, polystyrene, polyvinyl chloride, or nylon.

  5. A stochastic differential equation model for the foraging behavior of fish schools

    NASA Astrophysics Data System (ADS)

    Tạ, Tôn Việt; Nguyen, Linh Thi Hoai

    2018-05-01

    Constructing models of living organisms locating food sources has important implications for understanding animal behavior and for the development of distribution technologies. This paper presents a novel simple model of stochastic differential equations for the foraging behavior of fish schools in a space including obstacles. The model is studied numerically. Three configurations of space with various food locations are considered. In the first configuration, fish swim in free but limited space. All individuals can find food with large probability while keeping their school structure. In the second and third configurations, they move in limited space with one and two obstacles, respectively. Our results reveal that the probability of foraging success is highest in the first configuration, and smallest in the third one. Furthermore, when school size increases up to an optimal value, the probability of foraging success tends to increase. When it exceeds an optimal value, the probability tends to decrease. The results agree with experimental observations.

  6. An iterative approach to optimize change classification in SAR time series data

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2016-10-01

    The detection of changes using remote sensing imagery has become a broad field of research with many approaches for many different applications. Besides the simple detection of changes between at least two images acquired at different times, analyses which aim at the change type or category are at least equally important. In this study, an approach for a semi-automatic classification of change segments is presented. A sparse dataset is considered to ensure fast and simple applicability for practical issues. The dataset is given by 15 high resolution (HR) TerraSAR-X (TSX) amplitude images acquired over a time period of one year (11/2013 to 11/2014). The scenery contains the airport of Stuttgart (GER) and its surroundings, including urban, rural, and suburban areas. Time series imagery offers the advantage of analyzing the change frequency of selected areas. In this study, the focus is set on the analysis of small-sized, frequently changing regions like parking areas, construction sites and collecting points, characterized by high-activity (HA) change objects. For each HA change object, suitable features are extracted and a k-means clustering is applied as the categorization step (see the sketch below). Resulting clusters are finally compared to a previously introduced knowledge-based class catalogue, which is modified until an optimal class description results. In other words, the subjective understanding of the scenery semantics is refined against the reality given by the data. In this way, even a sparse dataset containing only amplitude imagery can be evaluated without requiring comprehensive training datasets. Falsely defined classes might be rejected; classes which were defined too coarsely might be divided into sub-classes; conversely, classes which were initially defined too narrowly might be merged. An optimal classification results when the combination of previously defined key indicators (e.g., number of clusters per class) reaches an optimum.
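
    The categorization step reduces to clustering a feature matrix of HA change objects; a minimal sketch with scikit-learn (the feature names are hypothetical).

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-object features extracted from the time series,
    # e.g. [change frequency, mean amplitude, object area] per HA object.
    rng = np.random.default_rng(0)
    features = rng.random((120, 3))

    # k-means categorization step; the number of clusters would be tuned
    # against the knowledge-based class catalogue described above.
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
    print(np.bincount(labels))   # cluster sizes to compare with the catalogue
    ```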

  7. Multiobjective optimization of hybrid regenerative life support technologies. Topic D: Technology Assessment

    NASA Technical Reports Server (NTRS)

    Manousiouthakis, Vasilios

    1995-01-01

    We developed simple mathematical models for many of the technologies constituting the water reclamation system in a space station. These models were employed for subsystem optimization and for evaluating the performance of individual water reclamation technologies, quantifying their operational 'cost' as a linear function of weight, volume, and power consumption. We then performed preliminary investigations of the performance improvements attainable by simple hybrid systems involving parallel combinations of technologies. We are developing a software tool for synthesizing a hybrid water recovery system (WRS) for long-term space missions. As a conceptual framework, we are employing the state space approach. Given a number of available technologies and the mission specifications, the state space approach helps design flowsheets featuring optimal process configurations, including those with stream connections in parallel, in series, or in recycles. We envision this software tool functioning as follows: given the mission duration, the crew size, the water quality specifications, and the cost coefficients, the software will synthesize a water recovery system for the space station. It should require minimal user intervention. The following tasks need to be solved to achieve this goal: (1) formulate a problem statement that will be used to evaluate the advantages of a hybrid WRS over a single-technology WRS; (2) model several WRS technologies that can be employed in the space station; (3) propose a recycling network design methodology (since the WRS synthesis task is a recycling network design problem, it is essential to employ a systematic method in synthesizing this network); (4) develop a software implementation of this design methodology, design a hybrid system using this software, and compare the resulting WRS with a base-case WRS; and (5) create a user-friendly interface for this software tool.

  8. The requirements for low-temperature plasma ionization support miniaturization of the ion source.

    PubMed

    Kiontke, Andreas; Holzer, Frank; Belder, Detlev; Birkemeyer, Claudia

    2018-06-01

    Ambient ionization mass spectrometry (AI-MS), the ionization of samples under ambient conditions, enables fast and simple analysis of samples without or with little sample preparation. Due to their simple construction and low resource consumption, plasma-based ionization methods in particular are considered ideal for use in mobile analytical devices. However, systematic investigations that have attempted to identify the optimal configuration of a plasma source to achieve the sensitive detection of target molecules are still rare. We therefore used a low-temperature plasma ionization (LTPI) source based on dielectric barrier discharge with helium employed as the process gas to identify the factors that most strongly influence the signal intensity in the mass spectrometry of species formed by plasma ionization. In this study, we investigated several construction-related parameters of the plasma source and found that a low wall thickness of the dielectric, a small outlet spacing, and a short distance between the plasma source and the MS inlet are needed to achieve optimal signal intensity with a process-gas flow rate of as little as 10 mL/min. In conclusion, this type of ion source is especially well suited for downscaling, which is usually required in mobile devices. Our results provide valuable insights into the LTPI mechanism; they reveal the potential to further improve its implementation and standardization for mobile mass spectrometry as well as our understanding of the requirements and selectivity of this technique. Graphical abstract: Optimized parameters of a dielectric barrier discharge plasma for ionization in mass spectrometry. The electrode size, shape, and arrangement, the thickness of the dielectric, and the distances between the plasma source, sample, and MS inlet are marked in red. The process gas (helium) flow is shown in black.

  9. Experimental design to optimize an Haemophilus influenzae type b conjugate vaccine made with hydrazide-derivatized tetanus toxoid.

    PubMed

    Laferriere, Craig; Ravenscroft, Neil; Wilson, Seanette; Combrink, Jill; Gordon, Lizelle; Petre, Jean

    2011-10-01

    The introduction of type b Haemophilus influenzae conjugate vaccines into routine vaccination schedules has significantly reduced the burden of this disease; however, widespread use in developing countries is constrained by vaccine costs, and there is a need for a simple and high-yielding manufacturing process. The vaccine is composed of purified capsular polysaccharide conjugated to an immunogenic carrier protein. To improve the yield and rate of the reductive amination conjugation reaction used to make this vaccine, some of the carboxyl groups of the carrier protein, tetanus toxoid, were modified to hydrazides, which are more reactive than the ε-amine of lysine. Other reaction parameters, including the ratio of the reactants, the size of the polysaccharide, the temperature and the salt concentration, were also investigated. Experimental design was used to minimize the number of experiments required to optimize all these parameters to obtain conjugate in high yield with target characteristics. It was found that increasing the reactant ratio and decreasing the size of the polysaccharide increased the polysaccharide:protein mass ratio in the product. Temperature and salt concentration did not improve this ratio. These results are consistent with a diffusion-controlled rate-limiting step in the conjugation reaction. Excessive modification of tetanus toxoid with hydrazide was correlated with reduced yield and lower free polysaccharide. This was attributed to a greater tendency for precipitation, possibly due to changes in the isoelectric point. Experimental design and multiple regression helped identify key parameters to control and thereby optimize this conjugation reaction.
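
    As an illustration of the design-of-experiments workflow (not the study's actual design or data), a two-level factorial over the four named factors, with main effects read off by multiple regression, might look like this:

    ```python
    import itertools
    import numpy as np

    # Hypothetical two-level screen of the four factors named in the
    # abstract (reactant ratio, polysaccharide size, temperature, salt);
    # the responses y would come from the bench, here they are made up.
    X = np.array(list(itertools.product([-1, 1], repeat=4)), float)  # 2^4 runs
    rng = np.random.default_rng(0)
    y = 2.0 + 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.05 * rng.standard_normal(16)

    # multiple regression: intercept plus main-effect coefficients
    A = np.column_stack([np.ones(16), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(dict(zip(["mean", "ratio", "PS size", "temp", "salt"], coef.round(2))))
    ```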

  10. Genome-wide characterization and selection of expressed sequence tag simple sequence repeat primers for optimized marker distribution and reliability in peach

    USDA-ARS?s Scientific Manuscript database

    Expressed sequence tag (EST) simple sequence repeats (SSRs) in Prunus were mined, and flanking primers designed and used for genome-wide characterization and selection of primers to optimize marker distribution and reliability. A total of 12,618 contigs were assembled from 84,727 ESTs, along with 34...

  11. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using the two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, rigid pitch shape, rigid left and right stabilator rotation shapes, and a residual shape are selected as sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error of 0.9844 inches of the starting configuration becomes 0.00367 inch by the end of the third optimization run.
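
    Step one, the least-squares surface fitting, amounts to solving a linear system in the basis-function weights; a schematic version with a hypothetical basis matrix:

    ```python
    import numpy as np

    def fit_jig_shape(target_delta, basis):
        """Least-squares fit of basis-function weights so that the design
        jig-shape (baseline plus weighted basis shapes) cancels the trim
        shape error.  `target_delta`: (n_points,) shape error to cancel;
        `basis`: (n_points, n_modes) matrix whose columns stand in for the
        mode shapes, rigid rotations, and residual shape named above."""
        coeffs, *_ = np.linalg.lstsq(basis, target_delta, rcond=None)
        return coeffs, basis @ coeffs - target_delta   # weights, trim error

    # toy usage with a hypothetical 16-column basis
    rng = np.random.default_rng(3)
    basis = rng.standard_normal((500, 16))
    target = basis @ rng.standard_normal(16) + 0.001 * rng.standard_normal(500)
    w, err = fit_jig_shape(target, basis)
    print(np.abs(err).max())   # small residual trim-shape error
    ```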

  12. Formation of free round jets with long laminar regions at large Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

    The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device of ˜1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12 560 are experimentally studied. It is shown that for the optimal regime, the laminar region length reaches 5.5 diameters for Reynolds number ˜10 000 which is not achievable for other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level for lower velocities and by the increase of perturbation growth rates for larger velocities. The initial laminar regions of free jets can be used for organising air curtains for the protection of objects in medicine and technologies by creating the air field with desired properties not mixed with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.

  13. Multi-Agent Graph Patrolling and Partitioning

    NASA Astrophysics Data System (ADS)

    Elor, Y.; Bruckstein, A. M.

    2012-12-01

    We introduce a novel multi-agent patrolling algorithm inspired by the behavior of gas-filled balloons. Ant-like agents of very low capability are considered, tasked with patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, with every agent assuming responsibility for patrolling its subgraph. Balanced graph partition is an emergent behavior of the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear in the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it; however, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex under the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. In the case of weighted graphs, the patrol quality is at least (1/2)(l_min/l_max) of the optimal, where l_max (l_min) is the length of the longest (shortest) edge in the graph.

  14. Application of Differential Evolutionary Optimization Methodology for Parameter Structure Identification in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.

    2013-12-01

    With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
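
    The DE cycle named above (mutation, crossover, selection) fits in a few lines; a generic DE/rand/1/bin sketch, not the study's tuned implementation (for production use, scipy.optimize.differential_evolution exists).

    ```python
    import numpy as np

    def differential_evolution(f, bounds, np_=30, F=0.8, CR=0.9, iters=300,
                               seed=0):
        """Bare-bones DE/rand/1/bin: mutation, crossover, greedy selection."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        d = lo.size
        pop = lo + rng.random((np_, d)) * (hi - lo)
        fit = np.apply_along_axis(f, 1, pop)
        for _ in range(iters):
            for i in range(np_):
                a, b, c = rng.choice([j for j in range(np_) if j != i],
                                     3, replace=False)
                mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                cross = rng.random(d) < CR
                cross[rng.integers(d)] = True          # keep >= 1 mutant gene
                trial = np.where(cross, mutant, pop[i])
                ft = f(trial)
                if ft < fit[i]:                        # greedy selection
                    pop[i], fit[i] = trial, ft
        return pop[fit.argmin()], fit.min()

    # toy usage: recover the minimum of a shifted sphere function
    x, fx = differential_evolution(lambda x: ((x - 1.5) ** 2).sum(),
                                   bounds=[(-5, 5)] * 4)
    print(x.round(3), fx)
    ```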

  15. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this situation persists. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while being fast.
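
    For reference, the optimization stage can be sketched as a standard particle swarm loop; the objective below is a made-up surrogate standing in for the ELM model.

    ```python
    import numpy as np

    def pso(f, bounds, n=25, w=0.7, c1=1.5, c2=1.5, iters=200, seed=0):
        """Minimal particle swarm optimizer: inertia w, cognitive pull c1
        toward each particle's best, social pull c2 toward the swarm best."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        x = lo + rng.random((n, lo.size)) * (hi - lo)
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
        g = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[pval.argmin()].copy()
        return g, pval.min()

    # toy surrogate: minimize a roughness proxy over (speed, feed)
    obj = lambda p: (p[0] - 180) ** 2 / 1e4 + (p[1] - 0.12) ** 2 * 100
    print(pso(obj, bounds=[(50, 300), (0.05, 0.4)]))
    ```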

  16. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.

  17. Energy extraction from atmospheric turbulence to improve flight vehicle performance

    NASA Astrophysics Data System (ADS)

    Patel, Chinmay Karsandas

    Small 'bird-sized' Unmanned Aerial Vehicles (UAVs) have now become practical due to technological advances in embedded electronics, miniature sensors and actuators, and propulsion systems. Birds are known to take advantage of wind currents to conserve energy and fly long distances without flapping their wings. This dissertation explores the possibility of improving the performance of small UAVs by extracting the energy available in atmospheric turbulence. An aircraft can gain energy from vertical gusts by increasing its lift in regions of updraft and reducing its lift in downdrafts - a concept that has been known for decades. Starting with a simple model of a glider flying through a sinusoidal gust, a parametric optimization approach is used to compute the minimum gust amplitude and optimal control input required for the glider to sustain flight without losing energy. For small UAVs using optimal control inputs, sinusoidal gusts with amplitude of 10--15% of the cruise speed are sufficient to keep the aircraft aloft. The method is then modified and extended to include random gusts that are representative of natural turbulence. A procedure to design optimal control laws for energy extraction from realistic gust profiles is developed using a Genetic Algorithm (GA). A feedback control law is designed to perform well over a variety of random gusts, and not be tailored for one particular gust. A small UAV flying in vertical turbulence is shown to obtain average energy savings of 35--40% with the use of a simple control law. The design procedure is also extended to determine optimal control laws for sinusoidal as well as turbulent lateral gusts. The theoretical work is complemented by experimental validation using a small autonomous UAV. The development of a lightweight autopilot and UAV platform is presented. Flight test results show that active control of the lift of an autonomous glider resulted in approximately 46% average energy savings compared to glides with fixed control surfaces. Statistical analysis of test samples shows that 19% of the active control test runs resulted in no energy loss, thus demonstrating the potential of the 'gust soaring' concept to dramatically improve the performance of small UAVs.

  18. 4D Optimization of Scanned Ion Beam Tracking Therapy for Moving Tumors

    PubMed Central

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-01-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking. PMID:24889215

  19. 4D optimization of scanned ion beam tracking therapy for moving tumors

    NASA Astrophysics Data System (ADS)

    Eley, John Gordon; Newhauser, Wayne David; Lüchtenborg, Robert; Graeff, Christian; Bert, Christoph

    2014-07-01

    Motion mitigation strategies are needed to fully realize the theoretical advantages of scanned ion beam therapy for patients with moving tumors. The purpose of this study was to determine whether a new four-dimensional (4D) optimization approach for scanned-ion-beam tracking could reduce dose to avoidance volumes near a moving target while maintaining target dose coverage, compared to an existing 3D-optimized beam tracking approach. We tested these approaches computationally using a simple 4D geometrical phantom and a complex anatomic phantom, that is, a 4D computed tomogram of the thorax of a lung cancer patient. We also validated our findings using measurements of carbon-ion beams with a motorized film phantom. Relative to 3D-optimized beam tracking, 4D-optimized beam tracking reduced the maximum predicted dose to avoidance volumes by 53% in the simple phantom and by 13% in the thorax phantom. 4D-optimized beam tracking provided similar target dose homogeneity in the simple phantom (standard deviation of target dose was 0.4% versus 0.3%) and dramatically superior homogeneity in the thorax phantom (D5-D95 was 1.9% versus 38.7%). Measurements demonstrated that delivery of 4D-optimized beam tracking was technically feasible and confirmed a 42% decrease in maximum film exposure in the avoidance region compared with 3D-optimized beam tracking. In conclusion, we found that 4D-optimized beam tracking can reduce the maximum dose to avoidance volumes near a moving target while maintaining target dose coverage, compared with 3D-optimized beam tracking.

  20. Scaling law and enhancement of lift generation of an insect-size hovering flexible wing

    PubMed Central

    Kang, Chang-kwon; Shyy, Wei

    2013-01-01

    We report a comprehensive scaling law and novel lift generation mechanisms relevant to the aerodynamic functions of structural flexibility in insect flight. Using a Navier–Stokes equation solver, fully coupled to a structural dynamics solver, we consider the hovering motion of a wing of insect size, in which the dynamics of fluid–structure interaction leads to passive wing rotation. Lift generated on the flexible wing scales with the relative shape deformation parameter, whereas the optimal lift is obtained when the wing deformation synchronizes with the imposed translation, consistent with previously reported observations for fruit flies and honeybees. Systematic comparisons with rigid wings illustrate that the nonlinear response in wing motion results in a greater peak angle compared with a simple harmonic motion, yielding higher lift. Moreover, the compliant wing streamlines its shape via camber deformation to mitigate the nonlinear lift-degrading wing–wake interaction to further enhance lift. These bioinspired aeroelastic mechanisms can be used in the development of flapping wing micro-robots. PMID:23760300

  1. Enhanced sialylation and in vivo efficacy of recombinant human α-galactosidase through in vitro glycosylation

    PubMed Central

    Sohn, Youngsoo; Lee, Jung Mi; Park, Heung-Rok; Jung, Sung-Chul; Park, Tai Hyun; Oh, Doo-Byoung

    2013-01-01

    Human α-galactosidase A (GLA) has been used in enzyme replacement therapy for patients with Fabry disease. We expressed recombinant GLA from Chinese hamster ovary cells with very high productivity. Compared to an approved GLA (agalsidase beta), its size was found to be smaller and its charge more neutral. These differences resulted from the lack of terminal sialic acids, which play essential roles in serum half-life and proper tissue targeting. Because a simple sialylation reaction was not enough to increase the sialic acid content, a combined reaction using galactosyltransferase, sialyltransferase, and their sugar substrates at the same time was developed and optimized to reduce the incubation time. The product generated by this reaction had nearly the same size, isoelectric points, and sialic acid content as agalsidase beta. Furthermore, it had better in vivo efficacy in degrading the accumulated globotriaosylceramide in target organs of Fabry mice compared to the unmodified version. [BMB Reports 2013; 46(3): 157-162]

  2. Electroless Deposition of Palladium on Macroscopic 3D-Printed Polymers with Dense Microlattice Architectures for Development of Multifunctional Composite Materials

    DOE PAGES

    Jones, Christopher G.; Mills, Bernice E.; Nishimoto, Ryan K.; ...

    2017-10-25

    A simple procedure has been developed to create palladium (Pd) films on the surface of several common polymers used in commercial fused deposition modeling (FDM) and stereolithography (SLA) based three-dimensional (3D) printing by an electroless deposition process. The procedure can be performed at room temperature, with equipment less expensive than many 3D printers, and occurs rapidly enough to achieve full coverage of the film within a few minutes. 3D substrates composed of dense logpile or cubic lattices with part sizes in the mm to cm range, and feature sizes as small as 150 μm, were designed and printed using commercially available 3D printers. The deposition procedure was successfully adapted to show full coverage in the lattice substrates. As a result, the ability to design, print, and metallize highly ordered three-dimensional microscale structures could accelerate development of a range of optimized chemical and mechanical engineering systems.

  3. Electroless Deposition of Palladium on Macroscopic 3D-Printed Polymers with Dense Microlattice Architectures for Development of Multifunctional Composite Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Christopher G.; Mills, Bernice E.; Nishimoto, Ryan K.

    A simple procedure has been developed to create palladium (Pd) films on the surface of several common polymers used in commercial fused deposition modeling (FDM) and stereolithography (SLA) based three-dimensional (3D) printing by an electroless deposition process. The procedure can be performed at room temperature, with equipment less expensive than many 3D printers, and occurs rapidly enough to achieve full coverage of the film within a few minutes. 3D substrates composed of dense logpile or cubic lattices with part sizes in the mm to cm range, and feature sizes as small as 150 μm, were designed and printed using commercially available 3D printers. The deposition procedure was successfully adapted to show full coverage in the lattice substrates. As a result, the ability to design, print, and metallize highly ordered three-dimensional microscale structures could accelerate development of a range of optimized chemical and mechanical engineering systems.

  4. Directed transport by surface chemical potential gradients for enhancing analyte collection in nanoscale sensors.

    PubMed

    Sitt, Amit; Hess, Henry

    2015-05-13

    Nanoscale detectors hold great promise for single molecule detection and the analysis of small volumes of dilute samples. However, the probability of an analyte reaching the nanosensor in a dilute solution is extremely low due to the sensor's small size. Here, we examine the use of a chemical potential gradient along a surface to accelerate analyte capture by nanoscale sensors. Utilizing a simple model for transport induced by surface binding energy gradients, we study the effect of the gradient on the efficiency of collecting nanoparticles and single and double stranded DNA. The results indicate that chemical potential gradients along a surface can lead to an acceleration of analyte capture by several orders of magnitude compared to direct collection from the solution. The improvement in collection is limited to a relatively narrow window of gradient slopes, and its extent strongly depends on the size of the gradient patch. Our model allows the optimization of gradient layouts and sheds light on the fundamental characteristics of chemical potential gradient induced transport.
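
    The underlying transport picture is the standard overdamped drift down a binding-energy gradient (notation mine, not the paper's):

    ```latex
    % Smoluchowski drift velocity of an adsorbed analyte with surface
    % diffusivity D in a binding-energy landscape U(x):
    \[
      v = -\frac{D}{k_{B}T}\,\frac{\partial U(x)}{\partial x},
    \]
    % so a constant gradient dU/dx steadily biases the analyte's random
    % walk along the surface toward the sensor.
    ```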

  5. Economic feeder for recharging and "topping off"

    NASA Astrophysics Data System (ADS)

    Fickett, Bryan; Mihalik, G.

    2000-04-01

    Increasing the size of the melt charge significantly increases yield and reduces costs. Siemens Solar Industries is optimizing a method to charge additional material after meltdown (top-off) using an external feeder system. A prototype feeder system was fabricated consisting of a hopper and feed delivery system. The low-cost feeder is designed for simple operation and maintenance. The system is capable of introducing up to 60 kg of granular silicon while under vacuum. An isolation valve permits refilling of the hopper while maintaining vacuum in the growth furnace. Using the feeder system in conjunction with Siemens Solar Industries' energy efficient hot zone dramatically reduces power and argon consumption. Throughput is also improved as faster pull speeds can be attained. The increased pull speeds have an even greater impact when the charge size is increased. Further cost reduction can be achieved by refilling the crucible after crystal growth and pulling a second ingot run. Siemens Solar Industries is presently testing the feeder in production.

  6. "Optimal" Size and Schooling: A Relative Concept.

    ERIC Educational Resources Information Center

    Swanson, Austin D.

    Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…

  7. Liquid-Phase Laser Induced Forward Transfer for Complex Organic Inks and Tissue Engineering.

    PubMed

    Nguyen, Alexander K; Narayan, Roger J

    2017-01-01

    Laser induced forward transfer (LIFT) is a novel alternative to incumbent plotting techniques such as inkjet printing due to its ability to precisely deposit and position picoliter-sized droplets while being gentle enough to preserve sensitive structures within the ink. Materials ranging from simple screen-printing ink to complex eukaryotic cells have been printed, with applications spanning from microelectronics to tissue engineering. Biotechnology can benefit from this technique due to the efficient use of low volumes of reagent and the compatibility with a wide range of rheological properties. In addition, LIFT can be performed in a simple lab environment, not requiring vacuum or other extreme conditions. Although the basic apparatus is simple, many strategies exist to optimize performance with respect to the ink and the desired pattern. The basic mechanism is similar between studies, so the large number of variants can be summarized into a few categories and reported on with respect to their specific applications. In particular, precise and gentle deposition of complex molecules and eukaryotic cells represents the unique ability of this technology. LIFT has demonstrated not only marked improvements in the quality of sensors and related medical devices over those manufactured with incumbent technologies but also great applicability in tissue engineering due to the high viability of printed cells.

  8. Simple and cost-effective method of highly conductive and elastic carbon nanotube/polydimethylsiloxane composite for wearable electronics.

    PubMed

    Kim, Jeong Hun; Hwang, Ji-Young; Hwang, Ha Ryeon; Kim, Han Seop; Lee, Joong Hoon; Seo, Jae-Won; Shin, Ueon Sang; Lee, Sang-Hoon

    2018-01-22

    The development of various flexible and stretchable materials has attracted interest for promising applications in biomedical engineering and electronics industries. This interest in wearable electronics, stretchable circuits, and flexible displays has created a demand for stable, easily manufactured, and cheap materials. However, the construction of flexible and elastic electronics, on which commercial electronic components can be mounted through simple and cost-effective processing, remains challenging. We have developed a nanocomposite of carbon nanotubes (CNTs) and polydimethylsiloxane (PDMS) elastomer. To achieve uniform distributions of CNTs within the polymer, an optimized dispersion process was developed using isopropyl alcohol (IPA) and methyl-terminated PDMS in combination with ultrasonication. After vaporizing the IPA, various shapes and sizes can be easily created with the nanocomposite, depending on the mold. The material provides high flexibility, elasticity, and electrical conductivity without requiring a sandwich structure. It is also biocompatible and mechanically stable, as demonstrated by cytotoxicity assays and cyclic strain tests (over 10,000 times). We demonstrate the potential for the healthcare field through strain sensor, flexible electric circuits, and biopotential measurements such as EEG, ECG, and EMG. This simple and cost-effective fabrication method for CNT/PDMS composites provides a promising process and material for various applications of wearable electronics.

  9. Nearly Perfect Durable Superhydrophobic Surfaces Fabricated by a Simple One-Step Plasma Treatment.

    PubMed

    Ryu, Jeongeun; Kim, Kiwoong; Park, JooYoung; Hwang, Bae Geun; Ko, YoungChul; Kim, HyunJoo; Han, JeongSu; Seo, EungRyeol; Park, YongJong; Lee, Sang Joon

    2017-05-16

    Fabrication of superhydrophobic surfaces is an area of great interest because it is applicable to various engineering fields. A simple, safe, and inexpensive process is required to produce practically applicable superhydrophobic surfaces. In this study, we developed a facile method for fabricating nearly perfect superhydrophobic surfaces through plasma treatment with argon and oxygen gases. A polytetrafluoroethylene (PTFE) sheet was selected as the substrate material. We optimized the fabrication parameters to produce superhydrophobic surfaces of superior performance using the Taguchi method. The contact angle of the pristine PTFE surface is approximately 111.0° ± 2.4°, with a sliding angle of 12.3° ± 6.4°. After the plasma treatment, nano-sized spherical tips, which looked like crown structures, were created. This PTFE sheet exhibits a maximum contact angle of 178.9°, with a sliding angle of less than 1°. As a result, this superhydrophobic surface requires only a small external force to detach water droplets dripped onto it. The contact angle of the fabricated superhydrophobic surface is almost fully retained, even after an 80-day air-aging test and a 6-hour droplet impact test. This fabrication method can provide superb superhydrophobic surfaces using a simple one-step plasma etching.

  10. Methods for estimating 2D cloud size distributions from 1D observations

    DOE PAGES

    Romps, David M.; Vogelmann, Andrew M.

    2017-08-04

    The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.
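
    My reading of the square-cloud version can be made concrete: a transect intersects a square cloud of width s with probability proportional to s and then records a chord c = s, so the area-weighted mean size Σs³/Σs² is recoverable as ⟨c²⟩/⟨c⟩ over the observed chords. A quick numerical check of this reading (not the authors' code):

    ```python
    import numpy as np

    def area_weighted_mean_size(chords):
        """Area-weighted mean cloud size from 1D chords under the
        square-cloud assumption: <c^2>/<c> over observed chords equals
        sum(s^3)/sum(s^2) over the cloud population."""
        c = np.asarray(chords, float)
        return (c ** 2).mean() / c.mean()

    # synthetic check against the population truth
    rng = np.random.default_rng(7)
    sizes = rng.lognormal(mean=0.0, sigma=0.6, size=200000)
    hit = rng.random(sizes.size) < sizes / sizes.max()   # P(hit) ~ s
    print(area_weighted_mean_size(sizes[hit]))           # estimate
    print((sizes ** 3).sum() / (sizes ** 2).sum())       # truth
    ```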

  11. Methods for estimating 2D cloud size distributions from 1D observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David M.; Vogelmann, Andrew M.

    The two-dimensional (2D) size distribution of clouds in the horizontal plane plays a central role in the calculation of cloud cover, cloud radiative forcing, convective entrainment rates, and the likelihood of precipitation. Here, a simple method is proposed for calculating the area-weighted mean cloud size and for approximating the 2D size distribution from the 1D cloud chord lengths measured by aircraft and vertically pointing lidar and radar. This simple method (which is exact for square clouds) compares favorably against the inverse Abel transform (which is exact for circular clouds) in the context of theoretical size distributions. Both methods also perform well when used to predict the size distribution of real clouds from a Landsat scene. When applied to a large number of Landsat scenes, the simple method is able to accurately estimate the mean cloud size. Finally, as a demonstration, the methods are applied to aircraft measurements of shallow cumuli during the RACORO campaign, which then allow for an estimate of the true area-weighted mean cloud size.

  12. Image quality comparison between single energy and dual energy CT protocols for hepatic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Yuan, E-mail: yuanyao@stanford.edu; Pelc, Nor

    Purpose: Multi-detector computed tomography (MDCT) enables volumetric scans in a single breath hold and is clinically useful for hepatic imaging. For simple tasks, conventional single energy (SE) computed tomography (CT) images acquired at the optimal tube potential are known to have better quality than dual energy (DE) blended images. However, liver imaging is complex and often requires imaging of both structures containing iodinated contrast media, where atomic number differences are the primary contrast mechanism, and other structures, where density differences are the primary contrast mechanism. Hence it is conceivable that the broad spectrum used in a dual energy acquisition may be an advantage. In this work we are interested in comparing these two imaging strategies at equal dose in these more complex settings. Methods: We developed numerical anthropomorphic phantoms to mimic realistic clinical CT scans for medium size and large size patients. MDCT images based on the defined phantoms were simulated using various SE and DE protocols at pre- and post-contrast stages. For SE CT, images from 60 kVp through 140 kVp in 10 kVp steps were considered; for DE CT, both 80/140 and 100/140 kVp scans were simulated and linearly blended at the optimal weights. To make a fair comparison, the mAs of each scan was adjusted to match the reference radiation dose (120 kVp, 200 mAs for medium size patients and 140 kVp, 400 mAs for large size patients). Contrast-to-noise ratio (CNR) of liver against other soft tissues was used to evaluate and compare the SE and DE protocols, and multiple pre- and post-contrast liver-tissue pairs were used to define a composite CNR. To help validate the simulation results, we conducted a small clinical study. Eighty-five 120 kVp images and 81 blended 80/140 kVp images were collected and compared through both quantitative image quality analysis and an observer study. Results: In the simulation study, we found that the CNR of pre-contrast SE images mostly increased with increasing kVp, while for post-contrast imaging 90 kVp or lower yielded higher CNR images, depending on the differential iodine concentration of each tissue. Similar trends were seen in DE blended CNR and those from SE protocols. In the presence of differential iodine concentration (i.e., post-contrast), the CNR curves maximize at lower kVps (80-120), with the peak shifted rightward for larger patients. The combined pre- and post-contrast composite CNR study demonstrated that an optimal SE protocol has better performance than blended DE images, and the optimal tube potential for an SE scan is around 90 kVp for medium size patients and between 90 and 120 kVp for large size patients (although low kVp imaging requires high x-ray tube power to avoid photon starvation). Also, a tin filter added to the high kVp beam is not only beneficial for material decomposition but also improves the CNR of the DE blended images. The dose-adjusted CNR of the clinical images showed the same trend, and radiologists favored the SE scans over blended DE images. Conclusions: Our simulation showed that an optimized SE protocol produces up to 5% higher CNR for a range of clinical tasks. The clinical study also suggested 120 kVp SE scans have better image quality than blended DE images. Hence, blended DE images do not have a fundamental CNR advantage over optimized SE images.
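
    For readers unfamiliar with the figure of merit, a minimal sketch of the CNR calculation follows. The combination of several liver/tissue pairs into a composite value is shown here as a root-mean-square, which is one plausible rule; the paper's exact combination may differ, and all HU and noise values are invented.

```python
import numpy as np

def cnr(mu_a, mu_b, sigma):
    """Contrast-to-noise ratio between two tissues with mean CT numbers
    mu_a and mu_b (HU) and noise standard deviation sigma."""
    return abs(mu_a - mu_b) / sigma

def composite_cnr(pairs):
    """One plausible composite over several liver/tissue pairs:
    root-mean-square of the individual CNRs."""
    return float(np.sqrt(np.mean([cnr(a, b, s) ** 2 for a, b, s in pairs])))

# (liver HU, other-tissue HU, noise SD): invented pre-/post-contrast pairs.
pairs = [(60, 40, 12), (110, 70, 15), (55, 45, 10)]
print("composite CNR = %.2f" % composite_cnr(pairs))
```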

  13. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
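
    A minimal sketch of the generalized likelihood ratio for one piecewise-constant rate template against a constant-rate null is shown below; the paper's dynamic-programming search over templates is not reproduced, and the event times are simulated for illustration.

```python
import numpy as np

def glr_piecewise_constant(event_times, edges, T):
    """GLR statistic for a Poisson process on [0, T]: piecewise-constant
    rate on the segments given by `edges` versus a constant rate, each
    evaluated at its maximum-likelihood rates."""
    t = np.asarray(event_times)
    n = len(t)
    counts, _ = np.histogram(t, bins=edges)
    widths = np.diff(edges)
    # log-likelihood under segment-wise MLE rates n_k / w_k
    ll_seg = sum(c * np.log(c / w) for c, w in zip(counts, widths) if c > 0) - n
    # log-likelihood under the constant MLE rate n / T
    ll_const = n * np.log(n / T) - n
    return 2.0 * (ll_seg - ll_const)

rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0, 10, 200))      # homogeneous example data
print(glr_piecewise_constant(events, np.array([0, 3, 7, 10.0]), T=10.0))
```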

  14. Albendazole nanocrystals with improved pharmacokinetic performance in mice.

    PubMed

    Paredes, Alejandro J; Bruni, Sergio Sánchez; Allemandi, Daniel; Lanusse, Carlos; Palma, Santiago D

    2018-02-01

    Albendazole (ABZ) is a broad-spectrum antiparasitic agent with poor aqueous solubility, which leads to poor/erratic bioavailability and therapeutic failures. Here, we aimed to produce a novel formulation of ABZ nanocrystals (ABZNC) and assess its pharmacokinetic performance in mice. Results/methodology: ABZNC were prepared by high-pressure homogenization and spray-drying processes. Redispersion capacity and solid yield were measured in order to obtain an optimized product. The final particle size was 415.69±7.40 nm and the solid yield was 72.32%. The pharmacokinetic parameters obtained in a mouse model for ABZNC were enhanced (p < 0.05) with respect to the control formulation. ABZNC with improved pharmacokinetic behavior were produced by a simple, inexpensive and potentially scalable methodology.

  15. Dewetting of patterned solid films: Towards a predictive modelling approach

    NASA Astrophysics Data System (ADS)

    Trautmann, M.; Cheynis, F.; Leroy, F.; Curiotto, S.; Pierre-Louis, O.; Müller, P.

    2017-06-01

    Owing to its ability to produce an assembly of nanoislands with controllable size and locations, the solid state dewetting of patterned films has recently received great attention. A simple Kinetic Monte Carlo model based on two reduced energetic parameters allows one to reproduce experimental observations of the dewetting morphological evolution of patterned films of Si(001) on SiO2 (or SOI for Silicon-on-Insulator) with various pattern designs. Thus, it is now possible to use KMC to drive further experiments and to optimize the pattern shapes to reach a desired dewetted structure. Comparisons between KMC simulations and dewetting experiments, at least for wire-shaped patterns, show that the prevailing dewetting mechanism depends on the wire width.
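
    To give a flavor of lattice simulations of this kind, the sketch below evolves a wire-shaped film on a patterned substrate with two reduced energy parameters. It uses Metropolis dynamics as a stand-in for the paper's kinetic Monte Carlo model, and all parameter values, geometry, and the stripe pattern are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2D lattice-gas with two reduced energy parameters: J (film-film bond)
# and ES (extra adhesion on a patterned stripe). All values are invented.
L, J, ES, kT = 40, 1.0, 0.5, 0.8
occ = np.zeros((L, L), dtype=bool)
occ[15:25, :] = True                          # wire-shaped initial film
stripe = np.zeros((L, L), dtype=bool)
stripe[18:22, :] = True                       # stronger-adhesion pattern
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def site_energy(i, j):
    nb = (occ[(i - 1) % L, j] + occ[(i + 1) % L, j]
          + occ[i, (j - 1) % L] + occ[i, (j + 1) % L])
    return -J * nb - (ES if stripe[i, j] else 0.0)

for _ in range(200_000):
    i, j = rng.integers(0, L, 2)
    if not occ[i, j]:
        continue
    di, dj = moves[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L
    if occ[ni, nj]:
        continue
    e_old = site_energy(i, j)
    occ[i, j] = False                         # tentatively move the atom
    if rng.random() < np.exp(-(site_energy(ni, nj) - e_old) / kT):
        occ[ni, nj] = True                    # accept hop
    else:
        occ[i, j] = True                      # reject, restore
print("occupied sites after evolution:", int(occ.sum()))
```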

  16. Further reduction of minimal first-met bad markings for the computationally efficient synthesis of a maximally permissive controller

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Chao, Daniel Yuh

    2015-08-01

    To date, research on supervisor design for flexible manufacturing systems has focused on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computational burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs, using a vector-covering approach, so that the integer linear programming problems can be solved efficiently while maintaining maximal permissiveness. This paper improves on the previous work and achieves the simplest structure with the minimal number of monitors.

  17. Simple proof that Gaussian attacks are optimal among collective attacks against continuous-variable quantum key distribution with a Gaussian modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, University Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex

    2010-06-15

    In this article, we give a simple proof of the fact that the optimal collective attacks against continuous-variable quantum key distribution with a Gaussian modulation are Gaussian attacks. Our proof, which makes use of symmetry properties of the protocol in phase space, is particularly relevant for the finite-key analysis of the protocol and therefore for practical applications.

  18. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    NASA Technical Reports Server (NTRS)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
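
    A minimal sketch of the kind of closed-form staging estimate the abstract describes: each stage is sized from the rocket equation, working from the payload down. The delta-v split, specific impulses, structural fractions, and payload mass are all illustrative placeholders, not the paper's values.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_mass(m_above, dv, isp, eps):
    """Size one stage with the rocket equation.
    m_above: everything this stage must push (kg); dv: stage delta-v (m/s);
    isp: specific impulse (s); eps: structural fraction (inert/stage mass).
    Returns the total stage mass (propellant + inert)."""
    r = math.exp(dv / (G0 * isp))            # required mass ratio m0/mf
    if r * eps >= 1.0:
        raise ValueError("delta-v unachievable with this eps and isp")
    return m_above * (r - 1.0) / (1.0 - r * eps)

# Hypothetical two-stage MAV; all numbers are illustrative placeholders.
dv_total = 4200.0        # m/s to orbit, including finite-burn/steering/drag losses
mass = 20.0              # kg of payload (sample container + avionics)
for dv_frac, isp, eps in [(0.45, 290.0, 0.15),   # upper stage, sized first
                          (0.55, 285.0, 0.12)]:  # lower stage
    mass += stage_mass(mass, dv_frac * dv_total, isp, eps)
print("estimated lift-off mass: %.1f kg" % mass)
```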

  19. Foam separation of Rhodamine-G and Evans Blue using a simple separatory bottle system.

    PubMed

    Dasarathy, Dhweeja; Ito, Yoichiro

    2017-09-29

    A simple separatory glass bottle was used to improve separation effectiveness and cost efficiency while simultaneously creating a simpler system for separating biological compounds. Additionally, it was important to develop a scalable separation method so that it would be applicable to both analytical and preparative separations. Compared to conventional foam separation methods, this method easily forms stable dry foam, which ensures high purity of the yielded fractions. A negatively charged surfactant, sodium dodecyl sulfate (SDS), was used as the ligand to carry the positively charged Rhodamine-G, leaving the negatively charged Evans Blue in the bottle. The performance of the separatory bottle was tested for separating Rhodamine-G from Evans Blue with sample sizes ranging from 1 to 12 mg in preparative separations and 1-20 μg in analytical separations under optimum conditions. These conditions, including N2 gas pressure, spinning speed of the contents with a magnetic stirrer, concentration of the ligand, volume of the solvent, and concentration of the sample, were all modified and optimized. Based on calculations at their peak absorbances, Rhodamine-G and Evans Blue were efficiently separated in times ranging from 1 h to 3 h, depending on sample volume. Optimal conditions were found to be 60 psi N2 pressure and 2 mM SDS as the affinity ligand. This novel separation method will allow for rapid separation of biological compounds while simultaneously being scalable and cost effective. Published by Elsevier B.V.

  20. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  1. SN-38 loading capacity of hydrophobic polymer blend nanoparticles: formulation, optimization and efficacy evaluation.

    PubMed

    Dimchevska, Simona; Geskovski, Nikola; Petruševski, Gjorgji; Chacorovska, Marina; Popeski-Dimovski, Riste; Ugarkovic, Sonja; Goracinova, Katerina

    2017-03-01

    One of the most important problems in nanoencapsulation of extremely hydrophobic drugs is poor drug loading due to rapid drug crystallization outside the polymer core. The effort to use nanoprecipitation, as a simple one-step procedure with good reproducibility and FDA approved polymers like Poly(lactic-co-glycolic acid) (PLGA) and Polycaprolactone (PCL), will only potentiate this issue. Considering that drug loading is one of the key defining characteristics, in this study we attempted to examine whether the nanoparticle (NP) core composed of two hydrophobic polymers will provide increased drug loading for 7-Ethyl-10-hydroxy-camptothecin (SN-38), relative to NPs prepared using individual polymers. D-optimal design was applied to optimize the PLGA/PCL ratio in the polymer blend and the mode of addition of the amphiphilic copolymer Lutrol® F127 in order to maximize SN-38 loading and obtain NPs with acceptable size for passive tumor targeting. Drug/polymer and polymer/polymer interaction analysis pointed to a high degree of compatibility and miscibility among both hydrophobic polymers, providing a core configuration with higher drug loading capacity. Toxicity studies outlined the biocompatibility of the blank NPs. Increased in vitro efficacy of drug-loaded NPs compared to the free drug was confirmed by growth inhibition studies using the SW-480 cell line. Additionally, the optimized NP formulation showed a very promising blood circulation profile with an elimination half-time of 7.4 h.

  2. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
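
    For orientation, the simplest special case of this trade-off is the classic cost-efficiency formula for a continuous outcome; the sketch below implements it. The paper's cost-effectiveness setting adds further parameters (cost-effect correlations, the variance ratio), so treat this as a first-pass approximation with invented cost numbers.

```python
import math

def optimal_cluster_size(c_cluster, c_subject, icc):
    """Classic result: cluster size minimizing the variance of the
    treatment-effect estimator per unit cost, given the cost of adding
    a cluster, the cost per subject, and the intra-cluster correlation."""
    return math.sqrt((c_cluster / c_subject) * (1.0 - icc) / icc)

def clusters_for_budget(budget, c_cluster, c_subject, n_per_cluster):
    """Clusters affordable in one arm at a given cluster size."""
    return budget / (c_cluster + c_subject * n_per_cluster)

n = optimal_cluster_size(c_cluster=1000.0, c_subject=50.0, icc=0.05)
k = clusters_for_budget(budget=200_000.0, c_cluster=1000.0,
                        c_subject=50.0, n_per_cluster=n)
print("optimal cluster size ~ %.1f subjects, ~ %.1f clusters" % (n, k))
```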

  3. Nd2O3-SiO2 nanocomposites: A simple sonochemical preparation, characterization and photocatalytic activity.

    PubMed

    Zinatloo-Ajabshir, Sahar; Mortazavi-Derazkola, Sobhan; Salavati-Niasari, Masoud

    2018-04-01

    Nd2O3-SiO2 nanocomposites with enhanced photocatalytic activity have been obtained through a simple and rapid sonochemical route in the presence of putrescine as a new basic agent, for the first time. The Si:Nd mole ratio, the basic agent, and the ultrasonic power were optimized with respect to the shape, size, and photocatalytic activity of the resulting Nd2O3-SiO2 nanocomposites. The produced Nd2O3-SiO2 nanocomposites have been characterized using XRD, EDX, TEM, FT-IR, DRS and FESEM. The photocatalytic performance of the as-formed Nd2O3-SiO2 nano- and bulk structures was compared through the photodegradation of the methyl violet contaminant under ultraviolet illumination. The results demonstrated that SiO2 has a remarkable effect on the catalytic performance of the Nd2O3 photocatalyst: introducing SiO2 into Nd2O3 increased its decomposition efficiency toward methyl violet under ultraviolet illumination. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Fast and simple microwave synthesis of TiO2/Au nanoparticles for gas-phase photocatalytic hydrogen generation

    NASA Astrophysics Data System (ADS)

    May-Masnou, Anna; Soler, Lluís; Torras, Miquel; Salles, Pol; Llorca, Jordi; Roig, Anna

    2018-04-01

    The fabrication of small anatase titanium dioxide (TiO2) nanoparticles (NPs) attached to larger anisotropic gold (Au) morphologies by a very fast and simple two-step microwave-assisted synthesis is presented. The TiO2/Au NPs are synthesized using polyvinylpyrrolidone (PVP) as reducing, capping and stabilizing agent through a polyol approach. To optimize the contact between the titania and the gold and facilitate electron transfer, the PVP is removed by calcination at mild temperatures. The activity of the nanocatalysts is then evaluated in the photocatalytic production of hydrogen from water/ethanol mixtures in the gas phase at ambient temperature. A maximum hydrogen production rate of 5.3 mmol per gram of catalyst per hour (7.4 mmol per gram of TiO2 per hour) is recorded for the system with larger gold particles at an optimum calcination temperature of 450 °C. Herein we demonstrate that TiO2-based photocatalysts with high Au loading and large Au particles (≈50 nm) have photocatalytic activity.

  5. Easy synthesis of bismuth iron oxide nanoparticles as photocatalyst for solar hydrogen generation from water

    NASA Astrophysics Data System (ADS)

    Deng, Jinyi

    In this study, high purity bismuth iron oxide (BiFeO3/BFO) nanoparticles of size 50-80 nm have been successfully synthesized by a simple sol-gel method using urea and polyvinyl alcohol at low temperature. X-ray diffraction (XRD) measurements were used to optimize the synthetic process to obtain a highly crystalline, pure-phase material. The diffuse reflectance ultraviolet-visible (DRUV-Vis) spectrum indicates that the absorption cut-off wavelength of the nanoparticles is about 620 nm, corresponding to an energy band gap of 2.1 eV. Compared to BaTiO3, BFO achieves better degradation of methyl orange under light irradiation. Photocatalytic tests also show this material to be efficient for water splitting under simulated solar light to generate hydrogen. The simple synthetic methodology adopted in this paper will be useful in developing low-cost semiconductor materials as effective photocatalysts for hydrogen generation. Photocatalytic tests followed by gas chromatography (GC) analyses show that BiFeO3 generates three times more hydrogen than the commercial titania P25 catalyst under the same experimental conditions.

  6. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Zhenhuan; Boyuka, David; Zou, X

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while limiting the performance impact on running applications to a reasonable level.

  7. Synthesis of optimal usage of available aggregates in highway construction and maintenance.

    DOT National Transportation Integrated Search

    2009-11-01

    The optimization of available aggregates for highway construction and maintenance is vital both from an economic and environmental perspective. By not optimizing the aggregate supply, project costs escalate as a simple response to supply and demand. ...

  8. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations

    PubMed Central

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W.

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing volumes of raw data. The number of cores per processor increased in an attempt to compensate for the slight increments in clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration for two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases are discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior beyond a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature, and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures. PMID:26904094
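
    To illustrate the serial-to-parallel transformation the paper benchmarks (there via OpenMP and GPU compilers), here is a hand-parallelized sketch of the k-mer counting use case in Python's multiprocessing module. The sequences and worker count are invented, and the same data-migration caveat applies: the parallel version only pays off for large inputs.

```python
from collections import Counter
from multiprocessing import Pool

K = 8  # k-mer length

def count_kmers(seq):
    """Serial k-mer counting for one sequence."""
    return Counter(seq[i:i + K] for i in range(len(seq) - K + 1))

def count_kmers_parallel(seqs, workers=4):
    """Hand-parallelized version: count per sequence in worker processes,
    then merge. Process start-up and result transfer add overhead, so
    this is only faster for large inputs."""
    with Pool(workers) as pool:
        parts = pool.map(count_kmers, seqs)
    total = Counter()
    for p in parts:
        total.update(p)
    return total

if __name__ == "__main__":
    seqs = ["ACGT" * 5000, "GATTACA" * 3000]
    print(count_kmers_parallel(seqs).most_common(3))
```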

  9. Active Control of Fan Noise-Feasibility Study. Volume 2: Canceling Noise Source-Design of an Acoustic Plate Radiator Using Piezoceramic Actuators

    NASA Technical Reports Server (NTRS)

    Pla, F. G.; Rajiyah, H.

    1995-01-01

    The feasibility of using acoustic plate radiators powered by piezoceramic thin sheets as canceling sources for active control of aircraft engine fan noise is demonstrated. Analytical and numerical models of actuated beams and plates are developed and validated. An optimization study is performed to identify the optimum combination of design parameters that maximizes the plate volume velocity for a given resonance frequency. Fifteen plates with various plate and actuator sizes, thicknesses, and bonding layers were fabricated and tested using results from the optimization study. A maximum equivalent piston displacement of 0.39 mm was achieved with the optimized plate samples tested with only one actuator powered, corresponding to a plate deflection at the center of over 1 millimeter. This is very close to the deflection required for a full size engine application and represents a 160-fold improvement over previous work. Experimental results further show that performance is limited by the critical stress of the piezoceramic actuator and bonding layer rather than by the maximum moment available from the actuator. Design enhancements are described in detail that will lead to a flight-worthy acoustic plate radiator by minimizing actuator tensile stresses and reducing nonlinear effects. Finally, several adaptive tuning methods designed to increase the bandwidth of acoustic plate radiators are analyzed including passive, active, and semi-active approaches. The back chamber pressurization and volume variation methods are investigated experimentally and shown to be simple and effective ways to obtain substantial control over the resonance frequency of a plate radiator. This study shows that piezoceramic-based plate radiators can be a viable acoustic source for active control of aircraft engine fan noise.

  10. Comparison of Acceleration Techniques for Selected Low-Level Bioinformatics Operations.

    PubMed

    Langenkämper, Daniel; Jakobi, Tobias; Feld, Dustin; Jelonek, Lukas; Goesmann, Alexander; Nattkemper, Tim W

    2016-01-01

    In recent years, clock rates of modern processors have stagnated while the demand for computing power has continued to grow. This applies particularly to the fields of life sciences and bioinformatics, where new technologies keep creating rapidly growing volumes of raw data. The number of cores per processor increased in an attempt to compensate for the slight increments in clock rates. This technological shift demands changes in software development, especially in the field of high performance computing, where parallelization techniques are gaining in importance due to the pressing issue of large datasets generated by, e.g., modern genomics. This paper presents an overview of state-of-the-art manual and automatic acceleration techniques and lists some applications employing these in different areas of sequence informatics. Furthermore, we provide examples of automatic acceleration for two use cases to show typical problems and gains of transforming a serial application into a parallel one. The paper should aid the reader in deciding on a suitable technique for the problem at hand. We compare four different state-of-the-art automatic acceleration approaches (OpenMP, PluTo-SICA, PPCG, and OpenACC). Their performance as well as their applicability for selected use cases are discussed. While optimizations targeting the CPU worked better in the complex k-mer use case, optimizers for Graphics Processing Units (GPUs) performed better in the matrix multiplication example. However, performance is only superior beyond a certain problem size, due to data migration overhead. We show that automatic code parallelization is feasible with current compiler software and yields significant increases in execution speed. Automatic optimizers for the CPU are mature, and usually no additional manual adjustment is required. In contrast, some automatic parallelizers targeting GPUs still lack maturity and are limited to simple statements and structures.

  11. Engineering two-wire optical antennas for near field enhancement

    NASA Astrophysics Data System (ADS)

    Yang, Zhong-Jian; Zhao, Qian; Xiao, Si; He, Jun

    2017-07-01

    We study the optimization of near field enhancement in the two-wire optical antenna system. By varying the nanowire sizes we obtain the optimized side-length (width and height) for the maximum field enhancement with a given gap size. The optimized side-length applies to a broadband range (λ = 650-1000 nm). The ratio of extinction cross section to field concentration size is found to be closely related to the field enhancement behavior. We also investigate two experimentally feasible cases which are antennas on glass substrate and mirror, and find that the optimized side-length also applies to these systems. It is also found that the optimized side-length shows a tendency of increasing with the gap size. Our results could find applications in field-enhanced spectroscopies.

  12. Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.

    PubMed

    Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei

    2017-09-01

    Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.
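
    The optimality condition stated above, maximizing juvenile survival per unit of total investment (ovule production plus provisioning), can be checked numerically. In the sketch below, the survival curve is an invented saturating function, so the numbers are purely illustrative of the fitness-per-unit-cost optimum.

```python
from scipy.optimize import minimize_scalar

def juvenile_survival(x):
    """Hypothetical saturating survival curve s(x) for seed size x;
    a stand-in, not the paper's model."""
    return x**2 / (x**2 + 1.0)

def optimal_seed_size(ovule_cost):
    """Seed size maximizing s(x) / (ovule_cost + x): survival per unit
    of total investment in producing one seed."""
    res = minimize_scalar(lambda x: -juvenile_survival(x) / (ovule_cost + x),
                          bounds=(1e-6, 50.0), method="bounded")
    return res.x

for o in (0.1, 0.5, 1.0):
    print("ovule cost %.1f -> optimal seed size %.3f" % (o, optimal_seed_size(o)))
```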

  13. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
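
    A minimal sketch of the Gini coefficient itself, and of the selection idea, follows. In the actual method the coefficient is computed from the scan results at each candidate maximum size; the cluster-size collections below are invented for illustration.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a set of non-negative values, here the sizes
    of the significant clusters reported at one maximum-size setting."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    return 2.0 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1.0) / n

# Prefer the candidate maximum reported cluster size whose reported
# cluster-size collection is most "unequal" (highest Gini).
candidates = {10: [9, 1, 1], 25: [22, 2], 50: [48]}
best = max(candidates, key=lambda m: gini(candidates[m]))
print("chosen maximum reported cluster size:", best)
```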

  14. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674

  15. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design

    PubMed Central

    Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.

    2015-01-01

    Purpose: Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods: Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion: Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281

  16. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design.

    PubMed

    Adesina, Simeon K; Wight, Scott A; Akala, Emmanuel O

    2014-11-01

    Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize cross-linked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the cross-linking agent and stabilizer indicate the important factors for minimizing particle size.

  17. [Survival strategy of photosynthetic organisms. 1. Variability of the extent of light-harvesting pigment aggregation as a structural factor optimizing the function of oligomeric photosynthetic antenna. Model calculations].

    PubMed

    Fetisova, Z G

    2004-01-01

    In accordance with our concept of rigorous optimization of the photosynthetic machinery by a functional criterion, this series of papers continues a purposeful search in natural photosynthetic units (PSUs) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of a light-harvesting antenna of variable size, controlled in vivo by the light intensity during the growth of the organism; this accentuates the optimization problem, because the optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling of the functioning of natural PSUs, we have shown that the aggregation of pigments in a model light-harvesting antenna, being one of the universal optimizing factors, also allows control of the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of the antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation, controlled by the size of the light-harvesting antenna, is biologically expedient.

  18. Relationships of maternal body size and morphology with egg and clutch size in the diamondback terrapin, Malaclemys terrapin (Testudines: Emydidae)

    USGS Publications Warehouse

    Kern, Maximilian M.; Guzy, Jacquelyn C.; Lovich, Jeffrey E.; Gibbons, J. Whitfield; Dorcas, Michael E.

    2016-01-01

    Because resources are finite, female animals face trade-offs between the size and number of offspring they are able to produce during a single reproductive event. Optimal egg size (OES) theory predicts that any increase in resources allocated to reproduction should increase clutch size with minimal effects on egg size. Variations of OES predict that egg size should be optimized, although not necessarily constant across a population, because optimality is contingent on maternal phenotypes, such as body size and morphology, and recent environmental conditions. We examined the relationships among body size variables (pelvic aperture width, caudal gap height, and plastron length), clutch size, and egg width of diamondback terrapins from separate but proximate populations at Kiawah Island and Edisto Island, South Carolina. We found that terrapins do not meet some of the predictions of OES theory. Both populations exhibited greater variation in egg size among clutches than within, suggesting an absence of optimization except as it may relate to phenotype/habitat matching. We found that egg size appeared to be constrained by more than just pelvic aperture width in Kiawah terrapins but not in the Edisto population. Terrapins at Edisto appeared to exhibit osteokinesis in the caudal region of their shells, which may aid in the oviposition of large eggs.

  19. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides discussion on optimizing probability of detection (POD) demonstration experiments that use the point estimate method. The optimization provides an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures, and it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size (within some tolerance) is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates how the range of flaw sizes relates to α90, i.e., the 90% probability flaw size, in order to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaw sizes and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet the requirements of minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing the flaw sizes in the point estimate demonstration flaw set.
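
    The binomial arithmetic behind the 29-flaw convention is easy to verify directly; the short sketch below shows why a 29-of-29 demonstration supports the "90% POD at 95% confidence" claim, and how PPD varies with the true POD.

```python
# If the true POD at the demonstrated flaw size were only 0.90, the chance
# of detecting all 29 flaws is 0.90**29 < 0.05, so passing the test rejects
# POD <= 0.90 at the 95% confidence level.
p_pass_at_090 = 0.90 ** 29
print("P(29/29 | POD=0.90) = %.4f" % p_pass_at_090)   # ~0.047

# Probability of passing the demonstration (PPD) for better NDE systems:
for pod in (0.95, 0.98, 0.995):
    print("POD=%.3f -> PPD=%.3f" % (pod, pod ** 29))
```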

  20. Calculating an optimal box size for ligand docking and virtual screening against experimental and predicted binding pockets.

    PubMed

    Feinstein, Wei P; Brylinski, Michal

    2015-01-01

    Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of the search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times the radius of gyration of the docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1% (10%) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. The left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using the default protocol; the right panel shows the docking accuracy using the optimized box size.
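
    The 2.9-times-radius-of-gyration rule quoted in the abstract is simple to apply; the sketch below computes it from raw coordinates. This is not the authors' distributed script, the factor is taken from the abstract, and the coordinates are invented stand-ins for a real ligand.

```python
import numpy as np

def radius_of_gyration(coords):
    """Rg of a ligand from its atomic coordinates (unweighted here;
    a mass-weighted variant is also common)."""
    xyz = np.asarray(coords, dtype=float)
    centered = xyz - xyz.mean(axis=0)
    return np.sqrt((centered**2).sum(axis=1).mean())

def docking_box_edge(coords, factor=2.9):
    """Cubic search-space edge following the abstract's result that the
    optimal dimensions are about 2.9 x the ligand's radius of gyration."""
    return factor * radius_of_gyration(coords)

# Toy coordinates (Angstroms) standing in for a docking compound.
lig = [[0.0, 0.0, 0.0], [1.5, 0.2, 0.1], [2.9, 0.5, -0.3], [4.1, 1.1, 0.2]]
print("box edge = %.2f Angstroms" % docking_box_edge(lig))
```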

  1. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

    A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.

  2. Fluorescence detection of dental calculus

    NASA Astrophysics Data System (ADS)

    Gonchukov, S.; Biryukova, T.; Sukhinina, A.; Vdovin, Yu

    2010-11-01

    This work is devoted to optimizing fluorescence-based dental calculus diagnostics in the optical spectrum. The optimal wavelengths for fluorescence excitation and registration are determined. Two spectral ranges, 620-645 nm and 340-370 nm, are the most convenient for supra- and subgingival calculus detection, respectively. A simple implementation of the differential method, which removes the need for a spectrometer, was investigated. Detection reliability with this simple implementation is higher than with spectral analysis at the optimal wavelengths. Using modulated excitation light and narrowband detection of the informative signal makes it possible to substantially decrease the diagnostic light intensity, even in comparison with the intensity used in low-level laser dental therapy.

  3. Constituents of Quality of Life and Urban Size

    ERIC Educational Resources Information Center

    Royuela, Vicente; Surinach, Jordi

    2005-01-01

    Do cities have an optimal size? In seeking to answer this question, various theories, including Optimal City Size Theory, the supply-oriented dynamic approach and the city network paradigm, have been forwarded that considered a city's population size as a determinant of location costs and benefits. However, the generalised growth in wealth that…

  4. Optimization of Microencapsulation Composition of Menthol, Vanillin, and Benzyl Acetate inside Polyvinyl Alcohol with Coacervation Method for Application in Perfumery

    NASA Astrophysics Data System (ADS)

    Sahlan, Muhamad; Raihani Rahman, Mohammad

    2017-07-01

    One of the many applications of essential oils is as fragrance in perfumery. Menthol, benzyl acetate, and vanillin, each representing the olfactive character of peppermint leaves, jasmine flowers, and vanilla beans, respectively, are commonly used in perfumery. These components are highly volatile, so the fragrance quickly evaporates, resulting in a short-lasting scent and low shelf life. In this research, these components were successfully encapsulated simultaneously inside polyvinyl alcohol (PVA) using a simple coacervation method to increase their shelf life. Optimization was performed using a central composite design with four independent variables, i.e., the compositions of menthol, benzyl acetate, vanillin, and Tergitol 15-S-9 (as emulsifier). Encapsulation efficiency, loading capacity, and microcapsule size were measured. At the optimized composition of menthol (13.98 %w/w), benzyl acetate (14.75 %w/w), vanillin (17.84 %w/w), and Tergitol 15-S-9 (13.4 %w/w), an encapsulation efficiency of 97.34% and a loading capacity of 46.46% were achieved. The mean microcapsule diameter is 20.24 μm, within a range of 2.011-36.24 μm. The final product took the form of cross-linked polyvinyl alcohol with a hydrogel consistency, orange to yellow in color.

  5. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data, and computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929

  6. Ionic liquid-based microwave-assisted extraction of flavonoids from Bauhinia championii (Benth.) Benth.

    PubMed

    Xu, Wei; Chu, Kedan; Li, Huang; Zhang, Yuqin; Zheng, Haiyin; Chen, Ruilan; Chen, Lidian

    2012-12-03

    An ionic liquid (IL)-based microwave-assisted approach for extraction and determination of flavonoids from Bauhinia championii (Benth.) Benth. was proposed for the first time. Several ILs with different cations and anions, as well as the microwave-assisted extraction (MAE) conditions, including sample particle size, extraction time and liquid-to-solid ratio, were investigated. A 2 M 1-butyl-3-methylimidazolium bromide ([bmim]Br) solution containing 0.80 M HCl was selected as the optimal solvent. The optimized conditions were a liquid-to-solid ratio of 30:1 and extraction for 10 min at 70 °C. Compared with conventional heat-reflux extraction (CHRE) and regular MAE, IL-MAE exhibited a higher extraction yield and a shorter extraction time (from 1.5 h to 10 min). The optimized extraction samples were analysed by LC-MS/MS. IL extracts of Bauhinia championii (Benth.) Benth. consisted mainly of flavonoids, among which myricetin, quercetin and kaempferol, along with β-sitosterol, triacontane and hexacontane, were identified. The study indicated that IL-MAE is an efficient and rapid method with simple sample preparation. LC-MS/MS was also used to determine the chemical composition of the ethyl acetate/MAE extract of Bauhinia championii (Benth.) Benth., and it may become a rapid method for determining the composition of new plant extracts.

  7. Center for Parallel Optimization.

    DTIC Science & Technology

    1996-03-19

    A new optimization-based approach to improving generalization in machine learning has been proposed and computationally validated on simple linear models as well as on highly nonlinear systems such as neural networks.

  8. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously decreasing the feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. So as to directly excavate nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, by combining a traditional manifold learning algorithm with Fisher criteria. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.

  9. A geometrically based method for automated radiosurgery planning.

    PubMed

    Wagner, T H; Yi, T; Meeks, S L; Bova, F J; Brechner, B L; Chen, Y; Buatti, J M; Friedman, W A; Foote, K D; Bouchet, L G

    2000-12-01

    A geometrically based method of multiple isocenter linear accelerator radiosurgery treatment planning optimization was developed, based on a target's solid shape. Our method uses an edge detection process to determine the optimal sphere packing arrangement with which to cover the planning target. The sphere packing arrangement is converted into a radiosurgery treatment plan by substituting the isocenter locations and collimator sizes for the spheres. This method is demonstrated on a set of 5 irregularly shaped phantom targets, as well as a set of 10 clinical example cases ranging from simple to very complex in planning difficulty. Using a prototype implementation of the method and standard dosimetric radiosurgery treatment planning tools, feasible treatment plans were developed for each target. The treatment plans generated for the phantom targets showed excellent dose conformity and acceptable dose homogeneity within the target volume. The algorithm was able to generate a radiosurgery plan conforming to the Radiation Therapy Oncology Group (RTOG) guidelines on radiosurgery for every clinical and phantom target examined. This automated planning method can serve as a valuable tool to assist treatment planners in rapidly and consistently designing conformal multiple isocenter radiosurgery treatment plans.

  10. A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine

    PubMed Central

    Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini

    2013-01-01

    A new machine learning method referred to as F-score_ELM was proposed to classify lying and truth-telling using electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses of these subjects. Then, a recently developed classifier called the extreme learning machine (ELM) was combined with F-score, a simple but effective feature selection method, to jointly optimize the number of hidden nodes of the ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models, including the training and testing time, sensitivity and specificity on the training and testing sets, as well as network size. The experimental results showed that the number of hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
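
    For readers unfamiliar with the F-score criterion, the sketch below (ours, not the authors' code) computes the standard per-feature F-score used for ranking; the data and shapes are invented, and the joint grid search over feature subsets and ELM hidden-node counts is only indicated in a comment.

      import numpy as np

      def f_score(X, y):
          """Per-feature F-score for a binary labeling y in {0, 1}: between-class
          separation of the feature means over within-class variance."""
          pos, neg = X[y == 1], X[y == 0]
          num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
          den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
          return num / den

      rng = np.random.default_rng(0)
      X = rng.normal(size=(56, 31))              # 31 features, as in the study
      y = rng.integers(0, 2, size=56)
      ranked = np.argsort(f_score(X, y))[::-1]   # best features first
      # A grid search over (top-k features, number of ELM hidden nodes),
      # scored by validation accuracy, would follow here.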

  11. Validation of ATR FT-IR to identify polymers of plastic marine debris, including those ingested by marine organisms

    USGS Publications Warehouse

    Jung, Melissa R.; Horgen, F. David; Orski, Sara V.; Rodriguez, Viviana; Beers, Kathryn L.; Balazs, George H.; Jones, T. Todd; Work, Thierry M.; Brignac, Kayla C.; Royer, Sarah-Jeanne; Hyrenbach, David K.; Jensen, Brenda A.; Lynch, Jennifer M.

    2018-01-01

    Polymer identification of plastic marine debris can help identify its sources, degradation, and fate. We optimized and validated a fast, simple, and accessible technique, attenuated total reflectance Fourier transform infrared spectroscopy (ATR FT-IR), to identify polymers contained in plastic ingested by sea turtles. Spectra of consumer good items with known resin identification codes #1–6 and several #7 plastics were compared to standard and raw manufactured polymers. High temperature size exclusion chromatography measurements confirmed ATR FT-IR could differentiate these polymers. High-density (HDPE) and low-density polyethylene (LDPE) discrimination is challenging but a clear step-by-step guide is provided that identified 78% of ingested PE samples. The optimal cleaning methods consisted of wiping ingested pieces with water or cutting. Of 828 ingested plastics pieces from 50 Pacific sea turtles, 96% were identified by ATR FT-IR as HDPE, LDPE, unknown PE, polypropylene (PP), PE and PP mixtures, polystyrene, polyvinyl chloride, and nylon.

  12. Differentiable McCormick relaxations

    DOE PAGES

    Khan, Kamil A.; Watson, Harry A. J.; Barton, Paul I.

    2016-05-27

    McCormick's classical relaxation technique constructs closed-form convex and concave relaxations of compositions of simple intrinsic functions. These relaxations have several properties which make them useful for lower bounding problems in global optimization: they can be evaluated automatically, accurately, and computationally inexpensively, and they converge rapidly to the relaxed function as the underlying domain is reduced in size. They may also be adapted to yield relaxations of certain implicit functions and differential equation solutions. However, McCormick's relaxations may be nonsmooth, and this nonsmoothness can create theoretical and computational obstacles when relaxations are to be deployed. This article presents a continuously differentiable variant of McCormick's original relaxations in the multivariate McCormick framework of Tsoukalas and Mitsos. Gradients of the new differentiable relaxations may be computed efficiently using the standard forward or reverse modes of automatic differentiation. Furthermore, extensions to differentiable relaxations of implicit functions and solutions of parametric ordinary differential equations are discussed. A C++ implementation based on the library MC++ is described and applied to a case study in nonsmooth nonconvex optimization.
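
    The differentiable variant itself is beyond a short example, but the classical envelopes it modifies are easy to state. The sketch below gives the standard McCormick under- and over-estimators for a bilinear term w = x*y on a box; note the max/min, which is exactly the kind of nonsmoothness the article removes.

      def mccormick_bilinear(x, y, xl, xu, yl, yu):
          """Classical McCormick envelopes of w = x*y on [xl, xu] x [yl, yu]."""
          under = max(xl * y + yl * x - xl * yl,
                      xu * y + yu * x - xu * yu)   # convex underestimator
          over = min(xu * y + yl * x - xu * yl,
                     xl * y + yu * x - xl * yu)    # concave overestimator
          return under, over

      lo, hi = mccormick_bilinear(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
      assert lo <= 0.5 * 0.5 <= hi   # the true product lies between the envelopes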

  13. Tunable, Flexible, and Efficient Optimization of Control Pulses for Practical Qubits

    NASA Astrophysics Data System (ADS)

    Machnes, Shai; Assémat, Elie; Tannor, David; Wilhelm, Frank K.

    2018-04-01

    Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific constraints. Superconducting qubits present the additional requirement that pulses must have simple parameterizations, so they can be further calibrated in the experiment, to compensate for uncertainties in system parameters. Other quantum technologies, such as sensing, require extremely high fidelities. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control technique named gradient optimization of analytic controls (GOAT), which satisfies all the above requirements, unlike previous approaches. To demonstrate GOAT's capabilities, with emphasis on flexibility and ease of subsequent calibration, we optimize fast coherence-limited pulses for two leading superconducting qubits architectures—flux-tunable transmons and fixed-frequency transmons with tunable couplers.

  14. A simple approach to optimal control of invasive species.

    PubMed

    Hastings, Alan; Hall, Richard J; Taylor, Caz M

    2006-12-01

    The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
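
    A minimal sketch of why a linear program concentrates removal effort on one class (toy numbers, not the paper's Spartina model): with a linear objective and a single budget constraint, the optimum sits at a vertex of the feasible region, so all effort lands on the stage with the best benefit-to-cost ratio.

      import numpy as np
      from scipy.optimize import linprog

      v = np.array([0.2, 0.8, 1.5])       # hypothetical per-individual impact on
                                          # growth: seedling, juvenile, adult
      c = np.array([1.0, 3.0, 9.0])       # hypothetical removal cost each
      n = np.array([500.0, 200.0, 80.0])  # current abundances
      budget = 600.0

      res = linprog(-v,                   # linprog minimizes, so negate
                    A_ub=[c], b_ub=[budget],
                    bounds=list(zip(np.zeros(3), n)))
      print(res.x)                        # effort concentrates on one class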

  15. Continuous Optimization on Constraint Manifolds

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1988-01-01

    This paper demonstrates continuous optimization on the differentiable manifold formed by continuous constraint functions. The first order tensor geodesic differential equation is solved on the manifold in both numerical and closed analytic form for simple nonlinear programs. Advantages and disadvantages with respect to conventional optimization techniques are discussed.

  16. The evolution of island gigantism and body size variation in tortoises and turtles

    PubMed Central

    Jaffe, Alexander L.; Slater, Graham J.; Alfaro, Michael E.

    2011-01-01

    Extant chelonians (turtles and tortoises) span almost four orders of magnitude of body size, including the startling examples of gigantism seen in the tortoises of the Galapagos and Seychelles islands. However, the evolutionary determinants of size diversity in chelonians are poorly understood. We present a comparative analysis of body size evolution in turtles and tortoises within a phylogenetic framework. Our results reveal a pronounced relationship between habitat and optimal body size in chelonians. We found strong evidence for separate, larger optimal body sizes for sea turtles and island tortoises, the latter showing support for the rule of island gigantism in non-mammalian amniotes. Optimal sizes for freshwater and mainland terrestrial turtles are similar and smaller, although the range of body size variation in these forms is qualitatively greater. The greater number of potential niches in freshwater and terrestrial environments may mean that body size relationships are more complicated in these habitats. PMID:21270022

  17. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
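
    A bare-bones sketch of the idea (not the paper's exact algorithm, which adds nondominated sorting and ranking of the combined population): classic DE mutation and crossover, with a parent replaced only when the trial vector Pareto-dominates it, applied to Schaffer's two-objective test problem.

      import numpy as np

      def dominates(f, g):
          """Pareto dominance for minimization."""
          return np.all(f <= g) and np.any(f < g)

      def pareto_de(obj, bounds, pop=40, gens=200, F=0.5, CR=0.9, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          X = rng.uniform(lo, hi, size=(pop, len(lo)))
          for _ in range(gens):
              for i in range(pop):
                  a, b, c = X[rng.choice(pop, 3, replace=False)]
                  trial = np.where(rng.random(len(lo)) < CR,
                                   np.clip(a + F * (b - c), lo, hi), X[i])
                  if dominates(obj(trial), obj(X[i])):   # replace parent only
                      X[i] = trial                       # if it is dominated
          return X

      # Schaffer problem: f1 = x^2, f2 = (x - 2)^2; front lies on x in [0, 2]
      front = pareto_de(lambda x: np.array([x[0] ** 2, (x[0] - 2) ** 2]),
                        bounds=[(-5.0, 5.0)])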

  18. Structural characterization of nanocrystalline cadmium sulphide powder prepared by solvent evaporation technique

    NASA Astrophysics Data System (ADS)

    Pandya, Samir; Tandel, Digisha; Chodavadiya, Nisarg

    2018-05-01

    CdS is one of the most important compounds in the II-VI group of semiconductors, with numerous applications in nanoparticle and nanocrystalline form. Semiconductor nanoparticles (also known as quantum dots) belong to a state of matter in the transition region between molecules and solids and have attracted a great deal of attention because of their unique electrical and optical properties compared to bulk materials; the nanocrystalline form finds use mostly in optoelectronics, catalysis and fluid technology. The present work was therefore directed at nanocrystalline material preparation: CdS nanocrystalline powder was synthesized by a simple and cost-effective chemical technique, growing cadmium sulphide (CdS) nanoparticles at 200 °C with different concentrations of cadmium, and the synthesis parameters were optimized. The synthesized powder was structurally characterized by X-ray diffraction (XRD) and a particle size analyzer. In the XRD analysis, microstructural parameters such as lattice strain, dislocation density and crystallite size were analysed; the broadened diffraction peaks indicated nanocrystalline particles of the material. In addition, particle size analysis showed average CdS particle sizes ranging from 80 to 100 nm. Overall, this work should prove useful for the synthesis of nanocrystalline CdS powder.

  19. Development and optimization of SPECT gated blood pool cluster analysis for the prediction of CRT outcome.

    PubMed

    Lalonde, Michel; Wells, R Glenn; Birnie, David; Ruddy, Terrence D; Wassenaar, Richard

    2014-07-01

    Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. Forty-nine patients (n = 27 with ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT results in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal chance ROC AUC = 0.50) with an optimal operating point of 71% sensitivity and 60% specificity. Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analyses.

  20. Development and optimization of SPECT gated blood pool cluster analysis for the prediction of CRT outcome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, Michel, E-mail: mlalonde15@rogers.com; Wassenaar, Richard; Wells, R. Glenn

    2014-07-15

    Purpose: Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. Methods: Forty-nine patients (n = 27 with ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Results: Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT results in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal chance ROC AUC = 0.50) with an optimal operating point of 71% sensitivity and 60% specificity. Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). Conclusions: A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analyses.
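
    As a schematic of the clustering step described in both versions of this record (synthetic curves, not SPECT data), the sketch below groups normalized time-activity curves with K-means and derives one simple global dyssynchrony score from cluster occupancy.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      t = np.linspace(0, 2 * np.pi, 32)
      # 568 synthetic TACs standing in for the sampled left-ventricle wall:
      # mostly in-phase, a minority phase-delayed
      tacs = np.array([np.cos(t - (0.0 if rng.random() < 0.8 else 0.9))
                       + 0.1 * rng.normal(size=t.size) for _ in range(568)])
      tacs = (tacs - tacs.mean(1, keepdims=True)) / tacs.std(1, keepdims=True)

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tacs)
      # fraction of wall samples outside the majority cluster: one crude
      # global dyssynchrony measure
      print(1 - np.bincount(labels).max() / labels.size)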

  1. Optical systems integrated modeling

    NASA Technical Reports Server (NTRS)

    Shannon, Robert R.; Laskin, Robert A.; Brewer, SI; Burrows, Chris; Epps, Harlan; Illingworth, Garth; Korsch, Dietrich; Levine, B. Martin; Mahajan, Vini; Rimmer, Chuck

    1992-01-01

    An integrated modeling capability that provides the tools by which entire optical systems and instruments can be simulated and optimized is a key technology development, applicable to all mission classes, especially astrophysics. Many of the future missions require optical systems that are physically much larger than anything flown before and yet must retain the characteristic sub-micron diffraction limited wavefront accuracy of their smaller precursors. It is no longer feasible to follow the path of 'cut and test' development; the sheer scale of these systems precludes many of the older techniques that rely upon ground evaluation of full size engineering units. The ability to accurately model (by computer) and optimize the entire flight system's integrated structural, thermal, and dynamic characteristics is essential. Two distinct integrated modeling capabilities are required. These are an initial design capability and a detailed design and optimization system. The content of an initial design package is shown. It would be a modular, workstation based code which allows preliminary integrated system analysis and trade studies to be carried out quickly by a single engineer or a small design team. A simple concept for a detailed design and optimization system is shown. This is a linkage of interface architecture that allows efficient interchange of information between existing large specialized optical, control, thermal, and structural design codes. The computing environment would be a network of large mainframe machines and its users would be project level design teams. More advanced concepts for detailed design systems would support interaction between modules and automated optimization of the entire system. Technology assessment and development plans for integrated package for initial design, interface development for detailed optimization, validation, and modeling research are presented.

  2. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, index concentrations, and incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  3. Designing optimal greenhouse gas monitoring networks for Australia

    NASA Astrophysics Data System (ADS)

    Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.

    2016-01-01

    Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
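
    A toy version of the underlying design calculation (random sensitivities, not LPDM output): in a Bayesian linear-Gaussian setting each candidate station contributes rows to the observation operator, and stations can be added greedily to shrink the trace of the posterior flux covariance. The paper's multi-species, cost-weighted optimization is far richer; this only shows the mechanics.

      import numpy as np

      def posterior_trace(H, B, r):
          """Trace of the posterior covariance (B = prior, r = data variance)."""
          return np.trace(np.linalg.inv(np.linalg.inv(B) + H.T @ H / r))

      rng = np.random.default_rng(6)
      n_flux, n_candidates = 20, 12
      B = np.eye(n_flux)
      H_all = rng.random((n_candidates, n_flux))   # one row per station
      chosen = []
      for _ in range(5):                           # add five stations greedily
          best = min((i for i in range(n_candidates) if i not in chosen),
                     key=lambda i: posterior_trace(H_all[chosen + [i]], B, 0.1))
          chosen.append(best)
      print(chosen)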

  4. The topography of the environment alters the optimal search strategy for active particles

    PubMed Central

    Volpe, Giovanni

    2017-01-01

    In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher’s size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers’ taxis to biochemical rates of reaction. PMID:29073055

  5. The topography of the environment alters the optimal search strategy for active particles

    NASA Astrophysics Data System (ADS)

    Volpe, Giorgio; Volpe, Giovanni

    2017-10-01

    In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher's size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers' taxis to biochemical rates of reaction.
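
    A minimal sketch of the searcher model shared by the two records above (all parameters invented): step lengths drawn from a power law p(l) ~ l^-(alpha+1) by inverse-CDF sampling, with uniform turning angles, in a periodic box standing in for a bounded topography.

      import numpy as np

      def levy_steps(alpha, n, rng, l_min=1.0):
          """Step lengths with p(l) ~ l^-(alpha+1), l >= l_min (Pareto law)."""
          return l_min * rng.random(n) ** (-1.0 / alpha)

      def cruise_search(alpha, n_steps=10_000, box=100.0, seed=1):
          """Blind cruise search in a periodic 2D box; returns the trajectory."""
          rng = np.random.default_rng(seed)
          ls = levy_steps(alpha, n_steps, rng)
          th = rng.uniform(0, 2 * np.pi, n_steps)
          xy = np.cumsum(np.c_[ls * np.cos(th), ls * np.sin(th)], axis=0)
          return xy % box                       # wrap at the boundaries

      traj_levy = cruise_search(alpha=1.0)      # more ballistic
      traj_brownian = cruise_search(alpha=2.0)  # nearly Brownian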

  6. Mystery of Foil Air Bearings for Oil-free Turbomachinery Unlocked: Load Capacity Rule-of-thumb Allows Simple Estimation of Performance

    NASA Technical Reports Server (NTRS)

    DellaCorte, Christopher; Valco, Mark J.

    2002-01-01

    The Oil-Free Turbomachinery team at the NASA Glenn Research Center has unlocked one of the mysteries surrounding foil air bearing performance. Foil air bearings are self-acting hydrodynamic bearings that use ambient air, or any fluid, as their lubricant. In operation, the motion of the shaft's surface drags fluid into the bearing by viscous action, creating a pressurized lubricant film. This lubricating film separates the stationary foil bearing surface from the moving shaft and supports load. Foil bearings have been around for decades and are widely employed in the air cycle machines used for cabin pressurization and cooling aboard commercial jetliners. The Oil-Free Turbomachinery team is fostering the maturation of this technology for integration into advanced Oil-Free aircraft engines. Elimination of the engine oil system can significantly reduce weight and cost and could enable revolutionary new engine designs. Foil bearings, however, have complex elastic support structures (spring packs) that make the prediction of bearing performance, such as load capacity, difficult if not impossible. Researchers at Glenn recently found a link between foil bearing design and load capacity performance. The results have led to a simple rule-of-thumb that relates a bearing's size, speed, and design to its load capacity. Early simple designs (Generation I) had simple elastic (spring) support elements, and performance was limited. More advanced bearings (Generation III) with elastic supports, in which the stiffness is varied locally to optimize gas film pressures, exhibit load capacities that are more than double those of the best previous designs. This is shown graphically in the figure. These more advanced bearings have enabled industry to introduce commercial Oil-Free gas-turbine-based electrical generators and are allowing the aeropropulsion industry to incorporate the technology into aircraft engines. The rule-of-thumb enables engine and bearing designers to easily size and select bearing technology for a new application and determine the level of complexity required in the bearings. This new understanding enables industry to assess the feasibility of new engine designs and provides critical guidance toward the future development of Oil-Free turbomachinery propulsion systems.

  7. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.

  8. Automating Structural Analysis of Spacecraft Vehicles

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2004-01-01

    A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and planetary exploration spacecraft. The program couples with FEA to enable system level performance assessments and weight predictions including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits will be illustrated in examining two different types of conceptual spacecraft designed using the software. A hypersonic air breathing, single stage to orbit (SSTO), reusable launch vehicle (RLV) will be highlighted as well as an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing the two different types of vehicles, the software's flexibility will be demonstrated with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections will be discussed as well as analysis methods used in optimizing the structure. Analysis based on the HyperSizer structural sizing software will be discussed. Design trades required to optimize structural weight will be presented.

  9. A portable gas sensor based on cataluminescence.

    PubMed

    Kang, C; Tang, F; Liu, Y; Wu, Y; Wang, X

    2013-01-01

    We describe a portable gas sensor based on cataluminescence. Miniaturization of the gas sensor was achieved by using a miniature photomultiplier tube, a miniature gas pump and a simple light seal. The signal-to-noise ratio (SNR) was used as the evaluation criterion for the design and testing of the sensor. The main source of noise was thermal background. Optimal working temperature and flow rate were determined experimentally from the viewpoint of improving the SNR. A series of parameters related to analytical performance was estimated. The limit of detection of the sensor was 7 ppm (SNR = 3) for ethanol and 10 ppm (SNR = 3) for hydrogen sulphide. Zirconia and barium carbonate were selected as nano-sized catalysts for ethanol and hydrogen sulphide, respectively. Copyright © 2012 John Wiley & Sons, Ltd.

  10. [Preparation of Oenothera biennis Oil Solid Lipid Nanoparticles Based on Microemulsion Technique].

    PubMed

    Piao, Lin-mei; Jin, Yong; Cui, Yan-lin; Yin, Shou-yu

    2015-06-01

    To study the preparation of Oenothera biennis oil solid lipid nanoparticles and their quality evaluation. The solid lipid nanoparticles were prepared by a microemulsion technique. The optimum condition was determined by an orthogonal design examining the entrapment efficiency, the mean particle diameter and other responses. The optimal preparation of Oenothera biennis oil solid lipid nanoparticles was as follows: Oenothera biennis oil dosage 300 mg, glycerol monostearate-Oenothera biennis oil (2:3), Oenothera biennis oil-RH-40/PEG-400 (1:2), and RH-40-PEG-400 (1:2). The resulting nanoparticles had an average encapsulation efficiency of (89.89 ± 0.71)%, an average particle size of 44.43 ± 0.08 nm, and a Zeta potential of 64.72 ± 1.24 mV. The preparation process is simple, stable and feasible.

  11. Tabletop computed lighting for practical digital photography.

    PubMed

    Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby

    2007-01-01

    We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
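
    The selection step can be cast as sparse nonnegative least squares: find nonnegative weights on the recorded photos so that their weighted sum approximates the target sketch, then keep only the strong weights. The sketch below uses random stand-in images rather than real photographs.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(5)
      photos = rng.random((64 * 64, 50))        # 50 photos, flattened pixels
      true_w = rng.random(50) * (rng.random(50) < 0.1)
      target = photos @ true_w                  # a target reachable by few lights

      weights, residual = nnls(photos, target)  # nonnegative least squares
      keep = weights > 0.05 * weights.max()     # sparse set of useful lights
      print(keep.sum(), residual)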

  12. Tunable Oleo-Furan Surfactants by Acylation of Renewable Furans

    DOE PAGES

    Park, Dae Sung; Joseph, Kristeen E.; Koehle, Maura; ...

    2016-10-19

    One important advance in fluid surface control was the amphiphilic surfactant composed of coupled molecular structures (i.e., hydrophilic and hydrophobic) to reduce surface tension between two distinct fluid phases. However, implementation of simple surfactants has been hindered by the broad range of applications in water containing alkaline earth metals (i.e., hard water). This disrupts surfactant function and requires extensive use of undesirable and expensive chelating additives. We show that sugar-derived furans can be linked with triglyceride-derived fatty acid chains via Friedel–Crafts acylation within single layer (SPP) zeolite catalysts. Finally, these alkylfuran surfactants independently suppress the effects of hard water while simultaneously permitting broad tunability of size, structure, and function, which can be optimized for superior capability for forming micelles and solubilizing in water.

  13. Tunable Oleo-Furan Surfactants by Acylation of Renewable Furans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Dae Sung; Joseph, Kristeen E.; Koehle, Maura

    2016-11-23

    An important advance in fluid surface control was the amphiphilic surfactant comprised of coupled molecular structures (i.e. hydrophilic and hydrophobic) to reduce surface tension between two distinct fluid phases. However, implementation of simple surfactants has been hindered by the broad range of applications in water containing alkaline earth metals (i.e. hard water), which disrupt surfactant function and require extensive use of undesirable and expensive chelating additives. Here we show that sugar-derived furans can be linked with triglyceride-derived fatty acid chains via Friedel-Crafts acylation within single layer (SPP) zeolite catalysts. These alkylfuran surfactants independently suppress the effects of hard water while simultaneously permitting broad tunability of size, structure, and function, which can be optimized for superior capability for forming micelles and solubilizing in water.

  14. “Nano-Ginseng” for Enhanced Cytotoxicity Against Cancer Cells

    PubMed Central

    Zhu, Weiyan; Si, Chuanling; Lei, Jiandu

    2018-01-01

    Panax ginseng has high medicinal and health value. However, the various and complex components of ginseng may interact with each other, reducing and even reversing therapeutic effects. In this study, we designed and fabricated a novel “nano-ginseng” with definite ingredients, ginsenoside Rb1/protopanaxadiol nanoparticles (Rb1/PPD NPs), based entirely on the protopanaxadiol-type extracts. The optimized nano-formulations demonstrated an appropriate size (~110 nm), high drug loading efficiency (~96.8%) and capacity (~27.9 wt %), a long half-life in systemic circulation (nine-fold longer than free PPD), better antitumor effects in vitro and in vivo, higher accumulation at the tumor site and reduced damage to normal tissues. Importantly, this “nano-ginseng” production process is simple, scalable and green. PMID:29473838

  15. The effect of nanoparticle size on theranostic systems: the optimal particle size for imaging is not necessarily optimal for drug delivery

    NASA Astrophysics Data System (ADS)

    Dreifuss, Tamar; Betzer, Oshra; Barnoy, Eran; Motiei, Menachem; Popovtzer, Rachela

    2018-02-01

    Theranostics is an emerging field, defined as combination of therapeutic and diagnostic capabilities in the same material. Nanoparticles are considered as an efficient platform for theranostics, particularly in cancer treatment, as they offer substantial advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of theranostic nanoplatforms raises an important question: Is the optimal particle for imaging also optimal for therapy? Are the specific parameters required for maximal drug delivery, similar to those required for imaging applications? Herein, we examined this issue by investigating the effect of nanoparticle size on tumor uptake and imaging. Anti-epidermal growth factor receptor (EGFR)-conjugated gold nanoparticles (GNPs) in different sizes (diameter range: 20-120 nm) were injected to tumor bearing mice and their uptake by tumors was measured, as well as their tumor visualization capabilities as tumor-targeted CT contrast agent. Interestingly, the results showed that different particles led to highest tumor uptake or highest contrast enhancement, meaning that the optimal particle size for drug delivery is not necessarily optimal for tumor imaging. These results have important implications on the design of theranostic nanoplatforms.

  16. Characteristic Sizes of Life in the Oceans, from Bacteria to Whales.

    PubMed

    Andersen, K H; Berge, T; Gonçalves, R J; Hartvig, M; Heuschele, J; Hylander, S; Jacobsen, N S; Lindemann, C; Martens, E A; Neuheimer, A B; Olsson, K; Palacz, A; Prowe, A E F; Sainmont, J; Traving, S J; Visser, A W; Wadhwa, N; Kiørboe, T

    2016-01-01

    The size of an individual organism is a key trait to characterize its physiology and feeding ecology. Size-based scaling laws may have a limited size range of validity or undergo a transition from one scaling exponent to another at some characteristic size. We collate and review data on size-based scaling laws for resource acquisition, mobility, sensory range, and progeny size for all pelagic marine life, from bacteria to whales. Further, we review and develop simple theoretical arguments for observed scaling laws and the characteristic sizes of a change or breakdown of power laws. We divide life in the ocean into seven major realms based on trophic strategy, physiology, and life history strategy. Such a categorization represents a move away from a taxonomically oriented description toward a trait-based description of life in the oceans. Finally, we discuss life forms that transgress the simple size-based rules and identify unanswered questions.

  17. A Simple Label Switching Algorithm for Semisupervised Structural SVMs.

    PubMed

    Balamurugan, P; Shevade, Shirish; Sundararajan, S

    2015-10-01

    In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and in avoiding poor local minima. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.
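
    A flat, unconstrained caricature of the alternating scheme (the actual method operates on structured outputs with domain constraints and deterministic annealing): train a supervised model, switch the unlabeled examples' labels to its predictions, and repeat to a fixed point.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import LinearSVC

      def alternate_semisup(Xl, yl, Xu, rounds=10, seed=0):
          """Alternate supervised training with a label-switching step."""
          yu = np.random.default_rng(seed).integers(0, 2, len(Xu))
          for _ in range(rounds):
              clf = LinearSVC().fit(np.vstack([Xl, Xu]), np.r_[yl, yu])
              new = clf.predict(Xu)        # switch labels to current predictions
              if np.array_equal(new, yu):  # fixed point reached
                  break
              yu = new
          return clf, yu

      X, y = make_classification(n_samples=120, random_state=0)
      clf, pseudo = alternate_semisup(X[:20], y[:20], X[20:])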

  18. Simultaneous Determination of Size and Quantification of Gold Nanoparticles by Direct Coupling Thin layer Chromatography with Catalyzed Luminol Chemiluminescence

    PubMed Central

    Yan, Neng; Zhu, Zhenli; He, Dong; Jin, Lanlan; Zheng, Hongtao; Hu, Shenghong

    2016-01-01

    The increasing use of metal-based nanoparticle products has raised concerns in particular for the aquatic environment and thus the quantification of such nanomaterials released from products should be determined to assess their environmental risks. In this study, a simple, rapid and sensitive method for the determination of size and mass concentration of gold nanoparticles (AuNPs) in aqueous suspension was established by direct coupling of thin layer chromatography (TLC) with catalyzed luminol-H2O2 chemiluminescence (CL) detection. For this purpose, a moving stage was constructed to scan the chemiluminescence signal from TLC separated AuNPs. The proposed TLC-CL method allows the quantification of differently sized AuNPs (13 nm, 41 nm and 100 nm) contained in a mixture. Various experimental parameters affecting the characterization of AuNPs, such as the concentration of H2O2, the concentration and pH of the luminol solution, and the size of the spectrometer aperture were investigated. Under optimal conditions, the detection limits for AuNP size fractions of 13 nm, 41 nm and 100 nm were 38.4 μg L−1, 35.9 μg L−1 and 39.6 μg L−1, with repeatabilities (RSD, n = 7) of 7.3%, 6.9% and 8.1% respectively for 10 mg L−1 samples. The proposed method was successfully applied to the characterization of AuNP size and concentration in aqueous test samples. PMID:27080702

  19. A simple apparatus for controlling nucleation and size in protein crystal growth

    NASA Technical Reports Server (NTRS)

    Gernert, Kim M.; Smith, Robert; Carter, Daniel C.

    1988-01-01

    A simple device is described for controlling vapor equilibrium in macromolecular crystallization as applied to the protein crystal growth technique commonly referred to as the 'hanging drop' method. Crystal growth experiments with hen egg white lysozyme have demonstrated control of the nucleation rate. Nucleation rate and final crystal size have been found to be highly dependent upon the rate at which critical supersaturation is approached. Slower approaches show a marked decrease in the nucleation rate and an increase in crystal size.

  20. Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.

  1. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    PubMed

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
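
    Ignoring diseconomies of scale and subadditive benefits, the core allocation reduces to a small linear program; the numbers below are hypothetical but reproduce the qualitative ordering reported (CBE first, then ART, minimal PrEP).

      import numpy as np
      from scipy.optimize import linprog

      qaly = np.array([3000.0, 9000.0, 1500.0])   # per fully funded program:
                                                  # CBE, ART, PrEP (made up)
      cost = np.array([1.0, 10.0, 12.0]) * 1e6    # full scale-up costs (made up)
      budget = 8e6

      res = linprog(-qaly, A_ub=[cost], b_ub=[budget],
                    bounds=[(0, 1)] * 3)          # x_i = funded fraction
      print(np.round(res.x, 2))                   # CBE fully, ART partly, no PrEP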

  2. [Target volume segmentation of PET images by an iterative method based on threshold value].

    PubMed

    Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L

    2014-01-01

    An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that includes the influence of both lesion size and the background present during acquisition. Optimal threshold values representing correct volume segmentation were determined in a PET phantom study containing spheres of different sizes in different known radiation environments. These optimal values were normalized to the background and adjusted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method, and, based on it, a procedure for automatic delineation was proposed. The procedure was validated on phantom images and its viability confirmed by applying it retrospectively to two oncology patients. The resulting adjustment function had a linear dependence on the SBR and decreased with increasing volume. During validation of the proposed method, volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The proposed automatic segmentation method can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
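
    The sketch below mimics the iterative use of such a calibration function on a synthetic image; the threshold function thr = a + b/SBR and its coefficients are invented stand-ins for the phantom-fitted, volume- and SBR-dependent function.

      import numpy as np

      def segment_iterative(img, bg, a=0.30, b=0.55, tol=0.01, max_iter=50):
          """Iterate: threshold -> region -> SBR -> new threshold, to a fixed point."""
          thr = 0.5 * img.max()                    # initial guess
          for _ in range(max_iter):
              mask = img > thr
              sbr = img[mask].mean() / bg          # signal-to-background ratio
              new = (a + b / sbr) * img[mask].mean()
              if abs(new - thr) < tol * thr:
                  break
              thr = new
          return mask

      img = np.full((32, 32), 1.0)                 # background activity
      img[12:20, 12:20] = 6.0                      # hot lesion
      print(segment_iterative(img, bg=1.0).sum())  # 64 lesion voxels recovered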

  3. Evaluation of bioavailability, efficacy, and safety profile of doxorubicin-loaded solid lipid nanoparticles

    NASA Astrophysics Data System (ADS)

    Patro, Nagaraju M.; Devi, Kshama; Pai, Roopa S.; Suresh, Sarasija

    2013-12-01

    We investigated the bioavailability, efficacy, and toxicity of doxorubicin-loaded solid lipid nanoparticles (DOX-SLNs) prepared by a simple modified double-emulsification method. A 3-factor, 3-level Box-Behnken statistical design was adopted in the optimization of DOX-SLN formulation considering dependent factors particle size and entrapment efficiency. Optimized SLN formulation composed of lipid (2 %) consisting of soya lecithin and Precirol ATO 5 (1:3) with Pluronic F68 (0.3 %) resulted in 217.36 ± 3.31 nm particle size and 59.45 ± 1.75 % entrapment efficiency. DOX-SLN exhibited significant enhancement ( p < 0.05) in bioavailability as compared with free DOX in Sprague-Dawley (SD) rats. DOX-SLN exhibited higher peak plasma concentration (6.761 ± 0.08 vs. 2.412 ± 0.04 μg/ml), increased AUC (61.368 ± 3.54 vs. 5.812 ± 0.49 μg/ml h), decreased clearance (36 ± 0.01 vs. 619 ± 0.005 mL/h kg), and volume of distribution (733 ± 0.092 vs. 2,064 ± 0.061 mL/kg) when compared to free DOX. The collective results of cardiac and kidney enzyme assay, antioxidant enzyme levels, hematological parameters, effect on body weight and tumor volume, tumor necrosis factor-α level, histopathological examination, and survival analysis confirmed the improved efficacy and safety profile of DOX-SLN in 7,12-dimethyl benzanthracene-induced breast cancer in SD rats.

  4. Three dimensional design, simulation and optimization of a novel, universal diabetic foot offloading orthosis

    NASA Astrophysics Data System (ADS)

    Sukumar, Chand; Ramachandran, K. I.

    2016-09-01

    Leg amputation is a major consequence of aggravated foot ulceration in diabetic patients. A common-sense treatment approach for diabetic foot ulceration is foot offloading, where the patient is required to wear a foot-offloading orthosis during the entire course of treatment. The removable walker is an excellent foot-offloading modality compared to the gold-standard solutions of total contact casting and felt padding. Commercially available foot offloaders are generally customized, with high cost and low patient compliance. This work presents an optimized 3D model of a new type of lightweight removable foot-offloading orthosis for diabetic patients. The device has simple adjustable features which make it suitable for a wide range of patients, with weights of 35 to 74 kg and heights of 137 to 180 cm. The foot plate of this orthosis is unisex, with a size adjustability of (US size) 6 to 10. Materials like aluminum alloy 6061-T6, acrylonitrile butadiene styrene (ABS) and polyurethane were key to reducing the weight of the device to 0.804 kg. Static analysis indicated that the maximum stress developed in the device under a load of 1000 N is only 37.8 MPa, with a small deflection of 0.150 cm and a factor of safety of 3.28, well within safety limits, while dynamic analysis results confirm the load-bearing capacity of the device. Thus, the proposed device can be safely used as an orthosis for offloading the diabetic ulcerated foot.

  5. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that a plot size that captures tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
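
    The Monte Carlo step is easy to reproduce in miniature (lognormal stand-in data, not the measured Fd): repeatedly subsample k of the 58 trees and track the relative error of the stand mean as k grows; the optimal sample size is where extra trees stop paying off.

      import numpy as np

      def subsample_error(values, k, n_draws=5000, seed=2):
          """Mean relative error of the stand mean from k-tree subsamples."""
          rng = np.random.default_rng(seed)
          true = values.mean()
          means = np.array([rng.choice(values, k, replace=False).mean()
                            for _ in range(n_draws)])
          return np.abs(means - true).mean() / true

      fd = np.random.default_rng(3).lognormal(0.0, 0.4, 58)  # 58 trees' sap flux
      for k in (5, 10, 15, 20, 30):
          print(k, round(subsample_error(fd, k), 3))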

  6. Optimal ancilla-free Pauli+V circuits for axial rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blass, Andreas; Bocharov, Alex; Gurevich, Yuri

    We address the problem of optimal representation of single-qubit rotations in a certain unitary basis consisting of the so-called V gates and Pauli matrices. The V matrices were proposed by Lubotzky, Phillips, and Sarnak [Commun. Pure Appl. Math. 40, 401–420 (1987)] as a purely geometric construct in 1987 and recently found applications in quantum computation. They allow for exceptionally simple quantum circuit synthesis algorithms based on quaternionic factorization. We adapt the deterministic-search technique initially proposed by Ross and Selinger to synthesize approximating Pauli+V circuits of optimal depth for single-qubit axial rotations. Our synthesis procedure, based on simple SL₂(ℤ) geometry, is almost elementary.

  7. Tunable, Flexible and Efficient Optimization of Control Pulses for Superconducting Qubits, part I - Theory

    NASA Astrophysics Data System (ADS)

    Machnes, Shai; AsséMat, Elie; Tannor, David; Wilhelm, Frank

    Quantum computation places very stringent demands on gate fidelities, and experimental implementations require both the controls and the resultant dynamics to conform to hardware-specific ansatzes and constraints. Superconducting qubits present the additional requirement that pulses have simple parametrizations, so they can be further calibrated in the experiment to compensate for uncertainties in system characterization. We present a novel, conceptually simple and easy-to-implement gradient-based optimal control algorithm, GOAT, which satisfies all the above requirements. In part II we demonstrate the algorithm's capabilities by using GOAT to optimize fast high-accuracy pulses for two leading superconducting qubit architectures - Xmons and IBM's flux-tunable couplers.

  8. The Optimal Cut-Off Value of Neutrophil-to-Lymphocyte Ratio for Predicting Prognosis in Adult Patients with Henoch–Schönlein Purpura

    PubMed Central

    Park, Chan Hyuk; Han, Dong Soo; Jeong, Jae Yoon; Eun, Chang Soo; Yoo, Kyo-Sang; Jeon, Yong Cheol; Sohn, Joo Hyun

    2016-01-01

    Background The development of gastrointestinal (GI) bleeding and end-stage renal disease (ESRD) can be a concern in the management of Henoch–Schönlein purpura (HSP). We aimed to evaluate whether the neutrophil-to-lymphocyte ratio (NLR) is associated with the prognosis of adult patients with HSP. Methods Clinical data including the NLR of adult patients with HSP were retrospectively analyzed. Patients were classified into three groups as follows: (a) simple recovery, (b) wax & wane without GI bleeding, and (c) development of GI bleeding. The optimal cut-off value was determined using a receiver operating characteristic (ROC) curve and the Youden index. Results A total of 66 adult patients were enrolled. The NLR was higher in the GI bleeding group than in the simple recovery or wax & wane group (simple recovery vs. wax & wane vs. GI bleeding; median [IQR], 2.32 [1.61–3.11] vs. 3.18 [2.16–3.71] vs. 7.52 [4.91–10.23], P<0.001). For the purpose of predicting simple recovery, the optimal cut-off value of NLR was 3.18, and the sensitivity and specificity were 74.1% and 75.0%, respectively. For predicting development of GI bleeding, the optimal cut-off value was 3.90 and the sensitivity and specificity were 87.5% and 88.6%, respectively. Conclusions The NLR is useful for predicting development of GI bleeding as well as simple recovery without symptom relapse. Two different cut-off values of NLR, 3.18 for predicting an easy recovery without symptom relapse and 3.90 for predicting GI bleeding, can be used in adult patients with HSP. PMID:27073884
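
    As an aside for readers unfamiliar with the cut-off selection used above, the sketch below shows how a Youden-index-optimal threshold is obtained from an ROC curve. The NLR values and outcome labels are invented for illustration and are not the study's patient data.

    ```python
    # Minimal sketch: choose the NLR threshold maximizing the Youden index
    # (sensitivity + specificity - 1). Data below are hypothetical.
    import numpy as np
    from sklearn.metrics import roc_curve

    y = np.array([0, 0, 0, 1, 1, 0, 1, 1, 0, 1])   # 1 = GI bleeding (hypothetical)
    nlr = np.array([2.1, 3.0, 2.6, 7.5, 4.9, 3.4, 10.2, 4.1, 1.8, 6.3])

    fpr, tpr, thresholds = roc_curve(y, nlr)
    j = tpr - fpr                                   # Youden index at each threshold
    best = thresholds[np.argmax(j)]
    print(f"optimal cut-off: {best:.2f} (J = {j.max():.2f})")
    ```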

  9. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
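
    To make the O(N^(1/2)) scaling concrete, here is a toy decision-theoretic calculation, not the paper's utility function: each of the N - n future patients incurs a loss proportional to the trial's estimation variance σ²/n, and each trial patient costs c, which gives the closed form n∗ = √(Nσ²/c).

    ```python
    # Toy illustration of the square-root-of-N scaling (assumed utility, not
    # the paper's): U(n) = -(N - n) * sigma^2 / n - c * n.
    import numpy as np

    def optimal_n(N, sigma2=1.0, c=0.05):
        n = np.arange(1, N)
        utility = -(N - n) * sigma2 / n - c * n
        return n[np.argmax(utility)]

    for N in (1_000, 10_000, 100_000):
        # empirical optimum vs the closed form sqrt(N * sigma^2 / c)
        print(N, optimal_n(N), round(np.sqrt(N * 1.0 / 0.05)))
    ```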

  10. Quantum dot ternary-valued full-adder: Logic synthesis by a multiobjective design optimization based on a genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be

    2014-10-28

    A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The searching space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex logic ternary operations are directly implemented on an extremely simple device, characterized by small sizes and low-energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.

  11. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  12. A Differential Evolution Based Approach to Estimate the Shape and Size of Complex Shaped Anomalies Using EIT Measurements

    NASA Astrophysics Data System (ADS)

    Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn

    EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly estimate the shape and size of complex-shaped regional anomalies; an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms, such as the Newton-Raphson method, is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features for solving global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
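
    The boundary parametrization lends itself to a compact illustration. In the sketch below, a stand-in rather than the paper's EIT forward solver, SciPy's differential evolution recovers the Fourier coefficients of a star-shaped boundary from surrogate "measurements" generated by the same forward map.

    ```python
    # Sketch: fit truncated Fourier coefficients of a boundary r(theta) with
    # differential evolution. The forward model is a toy stand-in for EIT.
    import numpy as np
    from scipy.optimize import differential_evolution

    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    true_c = np.array([1.0, 0.2, 0.0, 0.1, 0.05])        # target coefficients

    def radius(c, th):
        # r(theta) = c0 + c1*cos(th) + c2*sin(th) + c3*cos(2th) + c4*sin(2th)
        return (c[0] + c[1] * np.cos(th) + c[2] * np.sin(th)
                     + c[3] * np.cos(2 * th) + c[4] * np.sin(2 * th))

    measured = radius(true_c, theta)                      # surrogate "measurements"

    def cost(c):
        return np.sum((radius(c, theta) - measured) ** 2)

    bounds = [(0.5, 1.5)] + [(-0.5, 0.5)] * 4
    result = differential_evolution(cost, bounds, seed=0)
    print(result.x)                                       # close to true_c
    ```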

  13. Cliff-edge model of obstetric selection in humans.

    PubMed

    Mitteroecker, Philipp; Huttegger, Simon M; Fischer, Barbara; Pavlicev, Mihaela

    2016-12-20

    The strikingly high incidence of obstructed labor due to the disproportion of fetal size and the mother's pelvic dimensions has puzzled evolutionary scientists for decades. Here we propose that these high rates are a direct consequence of the distinct characteristics of human obstetric selection. Neonatal size relative to the birth-relevant maternal dimensions is highly variable and positively associated with reproductive success until it reaches a critical value, beyond which natural delivery becomes impossible. As a consequence, the symmetric phenotype distribution cannot match the highly asymmetric, cliff-edged fitness distribution well: The optimal phenotype distribution that maximizes population mean fitness entails a fraction of individuals falling beyond the "fitness edge" (i.e., those with fetopelvic disproportion). Using a simple mathematical model, we show that weak directional selection for a large neonate, a narrow pelvic canal, or both is sufficient to account for the considerable incidence of fetopelvic disproportion. Based on this model, we predict that the regular use of Caesarean sections throughout the last decades has led to an evolutionary increase of fetopelvic disproportion rates by 10 to 20%.
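
    A minimal numerical version of the cliff-edge argument can be written in a few lines: fitness rises with relative neonatal size x up to a critical value T and is zero beyond it (obstructed labor), and the Gaussian phenotype mean that maximizes population mean fitness leaves a nonzero fraction of births past the edge. All parameter values below are illustrative, not the paper's estimates.

    ```python
    # Sketch of cliff-edge selection: w(x) = x for x <= T, 0 beyond.
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    T, sigma = 1.0, 0.12                                  # cliff edge, phenotype SD

    def mean_fitness(mu):
        w = lambda x: x * norm.pdf(x, mu, sigma)          # fitness density below T
        val, _ = quad(w, -np.inf, T)
        return val

    grid = np.linspace(0.5, 1.2, 701)
    mu_opt = grid[np.argmax([mean_fitness(m) for m in grid])]
    print(f"optimal mean: {mu_opt:.3f}")
    # the optimum sits below T but still puts a few percent past the edge
    print(f"fraction past the edge: {1 - norm.cdf(T, mu_opt, sigma):.3%}")
    ```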

  14. MRI of perfluorocarbon emulsion kinetics in rodent mammary tumours

    NASA Astrophysics Data System (ADS)

    Fan, Xiaobing; River, Jonathan N.; Muresan, Adrian S.; Popescu, Carmen; Zamora, Marta; Culp, Rita M.; Karczmar, Gregory S.

    2006-01-01

    Perfluorocarbon (PFC) emulsions can be imaged directly by fluorine-19 MRI. We developed an optimized protocol for preparing PFC droplets of uniform size, evaluated use of the resulting droplets as blood pool contrast agents, studied their uptake by tumours and determined the spatial resolution with which they can be imaged at 4.7 T. Perfluorocarbon droplets of three different average sizes (324, 293 and 225 nm) were prepared using a microemulsifier. Images of PFC droplets with good signal-to-noise ratio were acquired with 625 µm in-plane resolution, 3 mm slice thickness and acquisition time of ~4.5 min per image. Kinetics of washout were determined using a simple mathematical model. The maximum uptake of the PFC droplets was three times greater at the tumour rim than in muscle, but the washout rate was two to three times slower in the tumour. The results are consistent with leakage of the droplets into the tumour extravascular space due to the hyper-permeability of tumour capillaries. PFC droplets may allow practical and quantitative measurements of blood volume and capillary permeability in tumours with reasonable spatial resolution.

  15. Behavior of Cackling Canada Geese during brood rearing

    USGS Publications Warehouse

    Fowler, Ada C.; Ely, Craig R.

    1997-01-01

    We studied behavior of Cackling Canada Goose (Branta canadensis minima, cacklers) broods between 1992 and 1996 on the Yukon Delta National Wildlife Refuge in western Alaska. An increase in time spent foraging by goslings during our study was weakly correlated with an increase in the size of the local breeding population. Amount of time spent feeding by adults and goslings increased throughout the brood rearing period. Overall, goslings spent more time feeding than either adult females or males, and adult males spent the most time alert. Time alert varied among brood rearing areas and increased with brood size, but there was no variation in time spent alert among years. Increases in feeding or alert behaviors were at a cost to time spent in all other behaviors. We suggest that there is not a simple trade-off between feeding and alert behavior in cacklers, but instead that time spent feeding and alert are optimized against all other behaviors. We suggest that forage quality and availability determines the amount of time spent feeding, whereas the threat of predation or disturbance determines the amount of time spent alert.

  16. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
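
    For readers who want to reproduce the comparison in outline, the sketch below contrasts leave-one-out cross-validation with a single hold-out split on a small synthetic regression standing in for the tooth-width data; with limited samples, the leave-one-out estimate uses every observation for both fitting and testing.

    ```python
    # Sketch: LOOCV vs a traditional simple (hold-out) validation.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 1))                     # predictor (synthetic)
    y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=40)

    # leave-one-out cross-validation: every point serves as a test case once
    errs = []
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])
        errs.append(abs(model.predict(X[test])[0] - y[test][0]))
    print(f"LOOCV MAE: {np.mean(errs):.3f}")

    # traditional simple validation: a single hold-out split
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    print(f"hold-out MAE: {mean_absolute_error(y_te, model.predict(X_te)):.3f}")
    ```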

  17. A DC electrophoresis method for determining electrophoretic mobility through the pressure driven negation of electro osmosis

    NASA Astrophysics Data System (ADS)

    Karam, Pascal; Pennathur, Sumita

    2016-11-01

    Characterization of the electrophoretic mobility and zeta potential of micro- and nanoparticles is important for assessing properties such as stability, charge and size. In electrophoretic techniques for such characterization, the bulk fluid motion due to the interaction between the fluid and the charged surface must be accounted for. Unlike current industrial systems, which rely on dynamic light scattering (DLS) and oscillating potentials to mitigate electroosmotic flow (EOF), we propose a simple alternative electrophoretic method for optically determining electrophoretic mobility using a DC electric field. Specifically, we create a system where an adverse pressure gradient counters EOF, and design the geometry of the channel so that the flow profile of the pressure-driven flow matches that of the EOF in large regions of the channel (i.e., where we observe particle flow). Our specific COMSOL-optimized geometry is two large cross-sectional areas adjacent to a central, high-aspect-ratio channel. We show that this effectively removes EOF from a large region of the channel and allows for the accurate optical characterization of electrophoretic particle mobility, no matter the wall charge or particle size.

  18. Extended domains of organized nanorings of silver grains as surface-enhanced Raman scattering sensors for molecular detection

    NASA Astrophysics Data System (ADS)

    Bechelany, M.; Brodard, P.; Philippe, L.; Michler, J.

    2009-11-01

    The possibility to synthesize large areas of silver grains organized in nanorings using a simple technique based on nanosphere lithography and electroless plating as a metal deposition method is described for the first time. In addition, we present a systematic SERS study of the obtained long-range ordered silver nanodots and nanorings. The possibility to precisely control the size, the interdistance and the morphology of these nanostructures allows us to systematically investigate the influence of these parameters on SERS. We show that the best possible SERS substrates should not only present optimal sizes, interdistances and shapes, but also a grain-like structure composed of sub-100 nm grains in order to maximize the number of hot-spots. In addition, we show that grains arranged in nanorings present higher enhancement factors (EF = 5.5 × 10⁵) as compared to similar arrays made of nanodots. A wide range of applications, including real-time monitoring of catalytic surface reactions, environmental and security monitoring as well as clinical and pharmaceutical screening, can be envisaged for these SERS substrates.

  19. Extended domains of organized nanorings of silver grains as surface-enhanced Raman scattering sensors for molecular detection.

    PubMed

    Bechelany, M; Brodard, P; Philippe, L; Michler, J

    2009-11-11

    The possibility to synthesize large areas of silver grains organized in nanorings using a simple technique based on nanosphere lithography and electroless plating as a metal deposition method is described for the first time. In addition, we present a systematic SERS study of the obtained long-range ordered silver nanodots and nanorings. The possibility to precisely control the size, the interdistance and the morphology of these nanostructures allows us to systematically investigate the influence of these parameters on SERS. We show that the best possible SERS substrates should not only present optimal sizes, interdistances and shapes, but also a grain-like structure composed of sub-100 nm grains in order to maximize the number of hot-spots. In addition, we show that grains arranged in nanorings present higher enhancement factors (EF = 5.5 × 10⁵) as compared to similar arrays made of nanodots. A wide range of applications, including real-time monitoring of catalytic surface reactions, environmental and security monitoring as well as clinical and pharmaceutical screening, can be envisaged for these SERS substrates.

  20. Dynamics of Nearshore Sand Bars and Infra-gravity Waves: The Optimal Theory Point of View

    NASA Astrophysics Data System (ADS)

    Bouchette, F.; Mohammadi, B.

    2016-12-01

    It is well known that the dynamics of near-shore sand bars are partly controlled by the features (location of nodes, amplitude, length, period) of the so-called infra-gravity waves. Reciprocally, changes in the location, size and shape of near-shore sand bars can control wave/wave interactions, which in their turn alter the infra-gravity content of the near-shore wave energy spectrum. The coupling between infra-gravity waves and near-shore bars is thus definitely two-way. Regarding numerical modelling, several approaches have already been considered to analyze such coupled dynamics. Most of them are based on the following strategy: 1) define an energy spectrum including infra-gravity, 2) tentatively compute the radiation stresses driven by this energy spectrum, 3) compute sediment transport and changes in the seabottom elevation including sand bars, 4) loop on the computation of infra-gravity taking into account the morphological changes. In this work, we consider an alternative approach named Nearshore Optimal Theory, which takes a fundamentally different point of view on the modeling of near-shore hydro-morphodynamics and wave/wave/seabottom interactions. Optimal theory applied to near-shore hydro-morphodynamics arose with the design of solid coastal defense structures by shape optimization methods, and is now being extended to model the dynamics of any near-shore system combining waves and sand. The basics are the following: the near-shore system state is described through a functional J representative, in some way, of the energy of the system. This J is computed from a model embedding only the physics to be studied (here, hydrodynamics forced by simple infra-gravity). The paradigm is then that the system will evolve so that the energy J tends to a minimum, regardless of the complexity of wave propagation or wave/bottom interactions. As long as J embeds the physics to be explored, the method does not require comprehensive modeling. Nearshore Optimal Theory has already given promising results for the generation of near-shore sand bars from scratch and their growth when forced by fair-weather waves. Here, we use it to explore the coupling between a very simple infra-gravity content and the nucleation of near-shore sand bars. It is shown that even a very poor infra-gravity content strongly improves the generation of sand bars.
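
    The paradigm described above, evolving the state so that an energy-like functional J decreases, can be caricatured in a few lines. The quadratic J below is a placeholder for the wave-energy functional, and the Dean-type profile is only an assumed equilibrium; the point is the descent mechanism, not the physics.

    ```python
    # Schematic sketch: the bed state h evolves by descending a placeholder
    # energy functional J, standing in for Nearshore Optimal Theory's J.
    import numpy as np

    def J(h):
        # placeholder energy: penalizes deviation from an equilibrium profile
        return 0.5 * np.sum((h - h_eq) ** 2)

    def grad_J(h):
        return h - h_eq

    x = np.linspace(0, 500, 101)                   # cross-shore coordinate (m)
    h_eq = 0.1 * x ** 0.67                         # assumed Dean-type equilibrium
    h = h_eq + np.sin(2 * np.pi * x / 100)         # initial bed with perturbations

    for step in range(200):                        # system evolves to minimize J
        h -= 0.05 * grad_J(h)
    print(f"final J: {J(h):.4f}")                  # J driven toward its minimum
    ```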

  1. Multidisciplinary design optimization for sonic boom mitigation

    NASA Astrophysics Data System (ADS)

    Ozcer, Isik A.

    Automated, parallelized, time-efficient surface definition, grid generation and flow simulation methods are developed for sharp and accurate sonic boom signal computation in three dimensions in the near and mid-field of an aircraft using Euler/full-potential unstructured/structured computational fluid dynamics. The full-potential mid-field sonic boom prediction code is an accurate and efficient solver featuring automated grid generation, grid adaptation and shock fitting, and parallel processing. This program quickly marches the solution using a single nonlinear equation for large distances that cannot be covered with Euler solvers due to large memory and long computational time requirements. The solver takes into account variations in temperature and pressure with altitude. The far-field signal prediction is handled using the classical linear Thomas waveform parameter method, where the switching altitude from the nonlinear to the linear prediction is determined by convergence of the ground signal pressure impulse value. This altitude is determined as r/L ≈ 10 from the source for a simple lifting wing, and r/L ≈ 40 for a real complex aircraft. The unstructured grid adaptation and shock fitting methodology developed for the near-field analysis employs a Hessian-based anisotropic grid adaptation based on error equidistribution. A special field scalar is formulated for use in the computation of the Hessian-based error metric, which significantly enhances the adaptation scheme for shocks. The entire cross-flow of a complex aircraft is resolved with high fidelity using only 500,000 grid nodes after about 10 solution/adaptation cycles. Shock fitting is accomplished using Roe's flux-difference splitting scheme, an approximate Riemann-type solver, and by proper alignment of the cell faces with respect to shock surfaces. Simple to complex real aircraft geometries are handled with no user interference required, making the simulation methods suitable tools for product design. The simulation tools are used to optimize three geometries for sonic boom mitigation. The first is a simple axisymmetric shape to be used as a generic nose component, the second is a delta wing with lift, and the third is a real aircraft with nose and wing optimization. The objectives are to minimize the pressure impulse or the peak pressure in the sonic boom signal, while keeping the drag penalty under feasible limits. The design parameters for the meridian profile of the nose shape are the lengths and the half-cone angles of the linear segments that make up the profile. The design parameters for the lifting wing are the dihedral angle, angle of attack, and non-linear span-wise twist and camber distribution. The test-bed aircraft is the modified F-5E aircraft built by Northrop Grumman, designated the Shaped Sonic Boom Demonstrator. This aircraft is fitted with an optimized axisymmetric nose, and its wings are optimized to demonstrate sonic boom mitigation for a real aircraft. The final results predict a 42% reduction in bow shock strength, 17% reduction in peak Δp, 22% reduction in pressure impulse, 10% reduction in footprint size, 24% reduction in inviscid drag, and no loss in lift for the optimized aircraft. Optimization is carried out using response surface methodology, and the design matrices are determined using standard DoE techniques for quadratic response modeling.

  2. Influence of monitoring data selection for optimization of a steady state multimedia model on the magnitude and nature of the model prediction bias.

    PubMed

    Kim, Hee Seok; Lee, Dong Soo

    2017-11-01

    SimpleBox is an important multimedia model used to estimate the predicted environmental concentration for screening-level exposure assessment. The main objectives were (i) to quantitatively assess how the magnitude and nature of the prediction bias of SimpleBox vary with the selection of the observed concentration data set used for optimization and (ii) to present the prediction performance of the optimized SimpleBox. The optimization was conducted using a total of 9604 observed multimedia data for 42 chemicals in four groups (i.e., polychlorinated dibenzo-p-dioxins/furans (PCDDs/Fs), polybrominated diphenyl ethers (PBDEs), phthalates, and polycyclic aromatic hydrocarbons (PAHs)). The model performance was assessed based on the magnitude and skewness of the prediction bias. Monitoring data selection, in terms of the number of data and the kinds of chemicals, plays a significant role in optimization of the model. The coverage of the physicochemical properties was found to be very important for reducing the prediction bias. This suggests that observed data should be selected such that the range of physicochemical properties (such as vapor pressure, octanol-water partition coefficient, octanol-air partition coefficient, and Henry's law constant) of the selected chemical groups is as wide as possible. With optimization, about 55%, 90%, and 98% of the total number of observed concentration ratios were predicted within factors of three, 10, and 30, respectively, with negligible skewness. Copyright © 2017 Elsevier Ltd. All rights reserved.
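
    The skill metric quoted above (the fraction of predicted-to-observed ratios within a factor of k) is simple to compute; a small helper, with placeholder arrays in place of the model and monitoring data, might look like this:

    ```python
    # Helper: fraction of predictions within a factor of k of the observations.
    import numpy as np

    def fraction_within_factor(pred, obs, k):
        ratio = pred / obs
        return np.mean((ratio >= 1 / k) & (ratio <= k))

    pred = np.array([1.2, 0.4, 9.0, 2.5, 0.08])     # hypothetical predictions
    obs = np.array([1.0, 1.0, 1.0, 1.0, 1.0])       # hypothetical observations

    for k in (3, 10, 30):
        print(f"within factor {k}: {fraction_within_factor(pred, obs, k):.0%}")
    ```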

  3. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  4. Design optimization of large-size format edge-lit light guide units

    NASA Astrophysics Data System (ADS)

    Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.

    2016-04-01

    In this paper, we present an original method of dot pattern generation dedicated to design optimization of large-size format light guide plates (LGPs), such as photo-bioreactors, for which the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot-size variation is less than the typical resolution of ink-dot printing. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the two-dimensional TIS (Total Integrated Scatter) distribution over the grid of equivalent cells using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into a dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It significantly reduces the total time needed for dot pattern optimization.

  5. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows for the stochastic optimization problem to be solved directly, without using sampling-based approaches, and sizing the storage to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.

  6. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size was proposed. The algorithm is based on the swarm intelligence of the wolf pack, fully simulating the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summoning, and siege. The competition rule of "winner-take-all" and the update mechanism of "survival of the fittest" are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The results indicate that the CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate, and demonstrates high robustness and global searching ability.

  7. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of a single computation, owing to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
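
    The hybrid-encoding idea can be sketched compactly: crossover operates on a binary encoding of each parameter while mutation perturbs the decoded real values. The toy three-unknown misfit below stands in for a potential-field inversion, and the operators are simplified relative to HEGA itself.

    ```python
    # Sketch of hybrid encoding: crossover in binary, mutation in decimal.
    import numpy as np

    rng = np.random.default_rng(1)
    BITS, LO, HI = 16, -5.0, 5.0

    def encode(x):   # real parameters -> 16-bit binary rows
        ints = np.round((x - LO) / (HI - LO) * (2**BITS - 1)).astype(np.int64)
        return np.array([[(i >> b) & 1 for b in range(BITS)] for i in ints])

    def decode(bits):
        ints = np.array([sum(int(v) << b for b, v in enumerate(row)) for row in bits])
        return LO + ints / (2**BITS - 1) * (HI - LO)

    def misfit(x):   # toy stand-in for a three-unknown geophysical misfit
        return np.sum((x - np.array([1.0, -2.0, 0.5])) ** 2)

    pop = [rng.uniform(LO, HI, 3) for _ in range(40)]
    for gen in range(100):
        pop.sort(key=misfit)
        parents = pop[:20]
        children = []
        while len(children) < 20:
            a, b = rng.choice(20, 2, replace=False)
            ga, gb = encode(parents[a]), encode(parents[b])
            cut1, cut2 = sorted(rng.choice(BITS, 2, replace=False))
            child = ga.copy()
            child[:, cut1:cut2] = gb[:, cut1:cut2]   # segment swap in binary
            x = decode(child)
            x += rng.normal(0, 0.1, 3) * (rng.random(3) < 0.2)  # decimal mutation
            children.append(np.clip(x, LO, HI))
        pop = parents + children
    print(min(pop, key=misfit))   # close to [1.0, -2.0, 0.5]
    ```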

  8. Comparing kinetic curves in liquid chromatography

    NASA Astrophysics Data System (ADS)

    Kurganov, A. A.; Kanat'eva, A. Yu.; Yakubenko, E. E.; Popova, T. P.; Shiryaeva, V. E.

    2017-01-01

    Five equations for kinetic curves, which connect the number of theoretical plates N and the time of analysis t0 for five different versions of optimization depending on the parameters being varied (e.g., mobile phase flow rate, pressure drop, sorbent grain size), are obtained by means of mathematical modeling. It is found that a method based on the optimization of sorbent grain size at fixed pressure is most suitable for the optimization of rapid separations. It is noted that the advantages of this method are limited to an area of relatively low efficiency, and that in the area of high efficiency the advantage passes to a method based on optimizing both the sorbent grain size and the pressure drop across the column.

  9. The Effect of Family Size on Spanish Simple and Complex Words

    ERIC Educational Resources Information Center

    Lazaro, Miguel; Sainz, Javier S.

    2012-01-01

    This study presents the results of three experiments in which the Family Size (FS) effect is explored. The first experiment is carried out with no prime on simple words. The second and third experiments are carried out with morphological priming on complex words. In the first experiment a facilitatory effect of FS is observed: high FS targets…

  10. Size and shape effects on diffusion and absorption of colloidal particles near a partially absorbing sphere: implications for uptake of nanoparticles in animal cells.

    PubMed

    Shi, Wendong; Wang, Jizeng; Fan, Xiaojun; Gao, Huajian

    2008-12-01

    A mechanics model describing how a cell membrane with diffusive mobile receptors wraps around a ligand-coated cylindrical or spherical particle has been recently developed to model the role of particle size in receptor-mediated endocytosis. The results show that particles in the size range of tens to hundreds of nanometers can enter cells even in the absence of clathrin or caveolin coats. Here we report further progress on modeling the effects of size and shape in diffusion, interaction, and absorption of finite-sized colloidal particles near a partially absorbing sphere. Our analysis indicates that, from the diffusion and interaction point of view, there exists an optimal hydrodynamic size of particles, typically in the nanometer regime, for the maximum rate of particle absorption. Such optimal size arises as a result of balance between the diffusion constant of the particles and the interaction energy between the particles and the absorbing sphere relative to the thermal energy. Particles with a smaller hydrodynamic radius have a larger diffusion constant but a weaker interaction with the sphere, while larger particles have a smaller diffusion constant but a stronger interaction with the sphere. Since the hydrodynamic radius is also determined by the particle shape, an optimal hydrodynamic radius implies an optimal size as well as an optimal aspect ratio for a nonspherical particle. These results show broad agreement with experimental observations and may have general implications on interaction between nanoparticles and animal cells.

  11. Size and shape effects on diffusion and absorption of colloidal particles near a partially absorbing sphere: Implications for uptake of nanoparticles in animal cells

    NASA Astrophysics Data System (ADS)

    Shi, Wendong; Wang, Jizeng; Fan, Xiaojun; Gao, Huajian

    2008-12-01

    A mechanics model describing how a cell membrane with diffusive mobile receptors wraps around a ligand-coated cylindrical or spherical particle has been recently developed to model the role of particle size in receptor-mediated endocytosis. The results show that particles in the size range of tens to hundreds of nanometers can enter cells even in the absence of clathrin or caveolin coats. Here we report further progress on modeling the effects of size and shape in diffusion, interaction, and absorption of finite-sized colloidal particles near a partially absorbing sphere. Our analysis indicates that, from the diffusion and interaction point of view, there exists an optimal hydrodynamic size of particles, typically in the nanometer regime, for the maximum rate of particle absorption. Such optimal size arises as a result of balance between the diffusion constant of the particles and the interaction energy between the particles and the absorbing sphere relative to the thermal energy. Particles with a smaller hydrodynamic radius have a larger diffusion constant but a weaker interaction with the sphere, while larger particles have a smaller diffusion constant but a stronger interaction with the sphere. Since the hydrodynamic radius is also determined by the particle shape, an optimal hydrodynamic radius implies an optimal size as well as an optimal aspect ratio for a nonspherical particle. These results show broad agreement with experimental observations and may have general implications on interaction between nanoparticles and animal cells.

  12. Risk-Based Sampling: I Don't Want to Weight in Vain.

    PubMed

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
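
    The portfolio analogy is easy to make concrete. The sketch below computes mean-variance weights w ∝ Σ⁻¹μ from hypothetical risk means and covariances and compares them with the equal-allocation heuristic; with noisy estimates of μ and Σ, the "optimal" weights can be unstable out of sample, which is exactly the estimation-error problem the abstract highlights.

    ```python
    # Sketch: mean-variance-optimal inspection weights vs equal allocation.
    # Risk means and covariances below are hypothetical.
    import numpy as np

    mu = np.array([0.08, 0.05, 0.03])          # estimated mean risk per producer
    cov = np.array([[0.040, 0.006, 0.002],
                    [0.006, 0.030, 0.004],
                    [0.002, 0.004, 0.020]])    # estimated covariance of risks

    # closed-form mean-variance weights w ~ inv(cov) @ mu, normalized to sum to 1
    w = np.linalg.solve(cov, mu)
    w_opt = w / w.sum()
    w_equal = np.full(len(mu), 1 / len(mu))
    print("optimized:", np.round(w_opt, 3))
    print("equal    :", np.round(w_equal, 3))
    ```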

  13. The Role of Nanoparticle Design in Determining Analytical Performance of Lateral Flow Immunoassays.

    PubMed

    Zhan, Li; Guo, Shuang-Zhuang; Song, Fayi; Gong, Yan; Xu, Feng; Boulware, David R; McAlpine, Michael C; Chan, Warren C W; Bischof, John C

    2017-12-13

    Rapid, simple, and cost-effective diagnostics are needed to improve healthcare at the point of care (POC). However, the most widely used POC diagnostic, the lateral flow immunoassay (LFA), is ∼1000-times less sensitive and has a smaller analytical range than laboratory tests, requiring a confirmatory test to establish truly negative results. Here, a rational and systematic strategy is used to design the LFA contrast label (i.e., gold nanoparticles) to improve the analytical sensitivity, analytical detection range, and antigen quantification of LFAs. Specifically, we discovered that the size (30, 60, or 100 nm) of the gold nanoparticles is a main contributor to the LFA analytical performance through both the degree of receptor interaction and the ultimate visual or thermal contrast signals. Using the optimal LFA design, we demonstrated the ability to improve the analytical sensitivity by 256-fold and expand the analytical detection range from 3 log₁₀ to 6 log₁₀ for diagnosing patients with inflammatory conditions by measuring C-reactive protein. This work demonstrates that, with appropriate design of the contrast label, a simple and commonly used diagnostic technology can compete with more expensive state-of-the-art laboratory tests.

  14. Trajectory Optimization of Electric Aircraft Subject to Subsystem Thermal Constraints

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Chin, Jeffrey C.; Schnulo, Sydney L.; Burt, Jonathan M.; Gray, Justin S.

    2017-01-01

    Electric aircraft pose a unique design challenge in that they lack a simple way to reject waste heat from the power train. While conventional aircraft reject most of their excess heat in the exhaust stream, for electric aircraft this is not an option. To examine the implications of this challenge on electric aircraft design and performance, we developed a model of the electric subsystems for the NASA X-57 electric testbed aircraft. We then coupled this model with a model of simple 2D aircraft dynamics and used a Legendre-Gauss-Lobatto collocation optimal control approach to find optimal trajectories for the aircraft with and without thermal constraints. The results show that the X-57 heat rejection systems are well designed for maximum-range and maximum-efficiency flight, without the need to deviate from an optimal trajectory. Stressing the thermal constraints by reducing the cooling capacity or requiring faster flight has a minimal impact on performance, as the trajectory optimization technique is able to find flight paths which honor the thermal constraints with relatively minor deviations from the nominal optimal trajectory.

  15. A simple technique to increase profits in wood products marketing

    Treesearch

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...
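
    The note predates cheap computing, but the kind of fixed-capacity product-mix problem it describes is a small linear program today. The prices and capacities below are invented for illustration; the note's own pencil-and-paper algorithm is not reproduced here.

    ```python
    # Sketch: maximize revenue of a fixed-capacity mill as a linear program.
    from scipy.optimize import linprog

    # two hypothetical products (e.g., dimension lumber, boards), prices per Mbf
    prices = [220.0, 180.0]
    # shared capacity: total output <= 100 Mbf per period,
    # plus an assumed market cap of 70 Mbf on the first product
    A_ub = [[1.0, 1.0], [1.0, 0.0]]
    b_ub = [100.0, 70.0]

    res = linprog(c=[-p for p in prices],       # linprog minimizes, so negate
                  A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                      # optimal mix and revenue
    ```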

  16. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  17. Evaluating Lexical Coverage in Simple English Wikipedia Articles: A Corpus-Driven Study

    ERIC Educational Resources Information Center

    Hendry, Clinton; Sheepy, Emily

    2017-01-01

    Simple English Wikipedia is a user-contributed online encyclopedia intended for young readers and readers whose first language is not English. We compiled a corpus of the entirety of Simple English Wikipedia as of June 20th, 2017. We used lexical frequency profiling tools to investigate the vocabulary size needed to comprehend Simple English…

  18. Estimated Benefits of Variable-Geometry Wing Camber Control for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Bolonkin, Alexander; Gilyard, Glenn B.

    1999-01-01

    Analytical benefits of variable-camber capability on subsonic transport aircraft are explored. Using aerodynamic performance models, including drag as a function of deflection angle for the control surfaces of interest, optimal performance benefits of variable camber are calculated. Results demonstrate that if all wing trailing-edge surfaces are available for optimization, drag can be significantly reduced at most points within the flight envelope. The optimization approach developed and illustrated for flight uses variable camber for optimization of aerodynamic efficiency (maximizing the lift-to-drag ratio). Most transport aircraft have significant latent capability in this area. Wing camber control that can affect performance optimization for transport aircraft includes symmetric use of ailerons and flaps. In this paper, drag characteristics for aileron and flap deflections are computed based on analytical and wind-tunnel data. All calculations are based on predictions for the subject aircraft, and the optimal surface deflection is obtained by simple interpolation for given conditions. An algorithm is also presented for computation of the optimal surface deflection for given conditions. Benefits of variable camber for a transport configuration using a simple trailing-edge control surface system can reach more than 10 percent, especially for nonstandard flight conditions. In the cruise regime, the benefit is 1-3 percent.
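
    A minimal sketch of the interpolation step, with an invented drag-increment table rather than the paper's wind-tunnel data: interpolate tabulated drag versus symmetric deflection onto a fine grid and pick the deflection that minimizes drag at the fixed cruise lift coefficient (i.e., maximizes L/D).

    ```python
    # Sketch: optimal symmetric flap/aileron deflection by simple interpolation.
    import numpy as np

    deflection = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 6.0])   # degrees (invented)
    drag_counts = np.array([12.0, 4.0, 0.0, -2.5, -1.0, 3.0]) # drag increment

    fine = np.linspace(deflection[0], deflection[-1], 501)
    drag = np.interp(fine, deflection, drag_counts)
    best = fine[np.argmin(drag)]   # at fixed cruise CL, min drag maximizes L/D
    print(f"optimal deflection: {best:.2f} deg ({drag.min():.2f} counts)")
    ```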

  19. Seeding the initial population with feasible solutions in metaheuristic optimization of steel trusses

    NASA Astrophysics Data System (ADS)

    Kazemzadeh Azad, Saeid

    2018-01-01

    In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.

  20. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  1. Motor unit recruitment by size does not provide functional advantages for motor performance

    PubMed Central

    Dideriksen, Jakob L; Farina, Dario

    2013-01-01

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers. PMID:24144879

  2. Motor unit recruitment by size does not provide functional advantages for motor performance.

    PubMed

    Dideriksen, Jakob L; Farina, Dario

    2013-12-15

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers.

  3. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…

  4. The relationship between offspring size and fitness: integrating theory and empiricism.

    PubMed

    Rollinson, Njal; Hutchings, Jeffrey A

    2013-02-01

    How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size–fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size–fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size–fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size–fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.

  5. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  6. Wireless sensor placement for structural monitoring using information-fusing firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Guang-Dong; Yi, Ting-Hua; Xie, Mei-Xi; Li, Hong-Nan

    2017-10-01

    Wireless sensor networks (WSNs) are a promising technology in structural health monitoring (SHM) applications because of their low cost and high efficiency. The limited number of wireless sensors and the restricted power resources in WSNs highlight the significance of optimal wireless sensor placement (OWSP) when designing SHM systems, so that the most useful information is captured and the longest network lifetime is achieved. This paper presents a holistic approach, including an optimization criterion and a solution algorithm, for optimally deploying self-organizing multi-hop WSNs on large-scale structures. The combination of information effectiveness, represented by modal independence, and network performance, specified by network connectivity and network lifetime, is first formulated to evaluate the performance of wireless sensor configurations. Then, an information-fusing firefly algorithm (IFFA) is developed to solve the OWSP problem. Step sizes drawn from a Lévy distribution are adopted to drive fireflies toward brighter individuals. Following the movement with Lévy flights, information about the contributions of wireless sensors to the objective function, as carried by the fireflies, is fused and applied to move inferior wireless sensors to better locations. The reliability of the proposed approach is verified via a numerical example on a long-span suspension bridge. The results demonstrate that the evaluation criterion provides a good performance metric of wireless sensor configurations, and that the IFFA outperforms the simple discrete firefly algorithm.
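
    The information-fusing stage of the IFFA is specific to the paper, but the underlying move rule is the standard firefly update with Lévy-flight steps. The sketch below shows that base rule on a toy continuous problem; the Mantegna step generator and all parameter values are generic assumptions, not the authors' settings.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(0)

def levy_step(dim, alpha=1.5):
    """Levy-like step via Mantegna's algorithm (generic, not the paper's)."""
    sigma = (gamma(1 + alpha) * np.sin(np.pi * alpha / 2)
             / (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / alpha)

def firefly_minimize(f, dim=2, n=20, iters=200, beta0=1.0, gam=1.0, alpha0=0.1):
    x = rng.uniform(-5, 5, (n, dim))
    val = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if val[j] < val[i]:          # j is "brighter" (lower cost)
                    beta = beta0 * np.exp(-gam * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) + alpha0 * levy_step(dim)
                    val[i] = f(x[i])
    best = int(np.argmin(val))
    return x[best], val[best]

xb, fb = firefly_minimize(lambda v: np.sum(v ** 2))   # toy sphere function
print(xb, fb)
```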

  7. SU-F-BRD-13: Quantum Annealing Applied to IMRT Beamlet Intensity Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazareth, D; Spaans, J

    Purpose: We report on the first application of quantum annealing (QA) to the process of beamlet intensity optimization for IMRT. QA is a new technology, which employs novel hardware and software techniques to address various discrete optimization problems in many fields. Methods: We apply the D-Wave Inc. proprietary hardware, which natively exploits quantum mechanical effects for improved optimization. The new QA algorithm, running on this hardware, is most similar to simulated annealing, but relies on natural processes to directly minimize the free energy of a system. A simple quantum system is slowly evolved into a classical system representing the objective function. To apply QA to IMRT-type optimization, two prostate cases were considered. A reduced number of beamlets was employed, due to the current QA hardware limitation of ∼500 binary variables. The beamlet dose matrices were computed using CERR, and an objective function was defined based on typical clinical constraints, including dose-volume objectives. The objective function was discretized, and the QA method was compared to two standard optimization methods, simulated annealing (SA) and Tabu search, run on a conventional computing cluster. Results: Based on several runs, the average final objective function value achieved by the QA was 16.9 for the first patient, compared with 10.0 for Tabu and 6.7 for the SA. For the second patient, the values were 70.7 for the QA, 120.0 for Tabu, and 22.9 for the SA. The QA algorithm required 27–38% of the time required by the other two methods. Conclusion: In terms of objective function value, the QA performance was similar to Tabu but less effective than the SA. However, its speed was 3–4 times faster than the other two methods. This initial experiment suggests that QA-based heuristics may offer significant speedup over conventional clinical optimization methods, as quantum annealing hardware scales to larger sizes.
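
    For orientation, the sketch below shows the simulated-annealing baseline used in the comparison above, reduced to a toy problem: binary beamlet weights, a random stand-in dose-influence matrix, and a quadratic penalty toward a prescription. Nothing here is clinical data or the authors' exact objective.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_beam = 200, 50
D = rng.uniform(0, 1, (n_vox, n_beam))       # stand-in dose-influence matrix
target = 0.5 * D.sum(axis=1).mean() * np.ones(n_vox)   # stand-in prescription

def objective(x):
    return np.sum((D @ x - target) ** 2)     # quadratic dose penalty

x = rng.integers(0, 2, n_beam)               # random binary start
best_f = objective(x)
T = 1000.0                                   # initial temperature
for _ in range(20000):
    i = rng.integers(n_beam)
    cand = x.copy()
    cand[i] ^= 1                             # flip one binary beamlet
    df = objective(cand) - objective(x)
    if df < 0 or rng.random() < np.exp(-df / T):
        x = cand
        best_f = min(best_f, objective(x))
    T *= 0.9995                              # geometric cooling schedule
print(best_f)
```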

  8. Power Electronics for a Miniaturized Arcjet

    NASA Technical Reports Server (NTRS)

    Pinero, Luis R.; Bowers, Glen E.

    1997-01-01

    A 0.3 kW Power Processing Unit (PPU) was designed, tested on resistive loads, and then integrated with a miniaturized arcjet. The main goal of the design was to minimize size and mass while maintaining reasonable efficiency. In order to obtain the desired reductions in mass, simple topologies and control methods were considered. The PPU design incorporates a 50 kHz, current-mode-control, pulse-width-modulated (PWM), push-pull topology. An input voltage of 28 ± 4 V was chosen for compatibility with typical unregulated low-voltage busses anticipated for smallsats. An efficiency of 0.90 under nominal operating conditions was obtained. The component mass of the PPU was 0.475 kg and could be improved by optimization of the output filter design. The estimated mass for a flight PPU based on this design is less than a kilogram.

  9. Space shuttle environmental control/life support systems

    NASA Technical Reports Server (NTRS)

    1972-01-01

    This study analyzes and defines a baseline Environmental Control/Life Support System (EC/LSS) for a four-man, seven-day orbital shuttle. In addition, the impact of various mission parameters (crew size, mission length, etc.) on the selected system is examined. Pacing technology items are identified to serve as a guide for applying effort to enhance total system optimization. A fail safe/fail operational philosophy was utilized in designing the system. This has resulted in a system that requires only one daily routine operation. All other critical-item malfunctions are automatically resolved by switching to redundant modes of operation. As a result of this study, it is evident that a practical, flexible, simple, and long-life EC/LSS can be designed and manufactured for the shuttle orbiter within the required time frame.

  10. Preparation of NASICON-Type Nanosized Solid Electrolyte Li1.4Al0.4Ti1.6(PO4)3 by Evaporation-Induced Self-Assembly for Lithium-Ion Battery

    NASA Astrophysics Data System (ADS)

    Liu, Xingang; Fu, Ju; Zhang, Chuhong

    2016-12-01

    A simple and practicable evaporation-induced self-assembly (EISA) method is introduced for the first time to prepare the nanosized solid electrolyte Li1.4Al0.4Ti1.6(PO4)3 (LATP) for all-solid-state lithium-ion batteries. A pure Na⁺ superionic conductor (NASICON) phase is confirmed by X-ray diffraction (XRD) analysis, and the primary particle size is brought down to 70 nm by optimizing the evaporation rate of the solvent. Excellent room-temperature bulk and total lithium-ion conductivities of 2.09 × 10⁻³ S cm⁻¹ and 3.63 × 10⁻⁴ S cm⁻¹ are obtained, with an ion-hopping activation energy as low as 0.286 eV.

  11. Recognition, neutralization, and clearance of target peptides in the bloodstream of living mice by molecularly imprinted polymer nanoparticles: a plastic antibody.

    PubMed

    Hoshino, Yu; Koide, Hiroyuki; Urakami, Takeo; Kanazawa, Hiroaki; Kodama, Takashi; Oku, Naoto; Shea, Kenneth J

    2010-05-19

    We report that simple, synthetic organic polymer nanoparticles (NPs) can capture and clear a target peptide toxin in the bloodstream of living mice. The protein-sized polymer nanoparticles, with a binding affinity and selectivity comparable to those of natural antibodies, were prepared by combining a functional monomer optimization strategy with molecular-imprinting nanoparticle synthesis. As a result of binding and removal of melittin by NPs in vivo, the mortality and peripheral toxic symptoms due to melittin were significantly diminished. In vivo imaging of the polymer nanoparticles (or "plastic antibodies") established that the NPs accelerate clearance of the peptide from blood and accumulate in the liver. Coupled with their biocompatibility and nontoxic characteristics, plastic antibodies offer the potential for neutralizing a wide range of biomacromolecules in vivo.

  12. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  13. Sail Plan Configuration Optimization for a Modern Clipper Ship

    NASA Astrophysics Data System (ADS)

    Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz

    2002-11-01

    We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular-arc cross section. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid over a large range of wind conditions, so a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results to those obtained previously from user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving force maximization, ratio of driving force to heeling force maximization).

  14. Planning a Target Renewable Portfolio using Atmospheric Modeling and Stochastic Optimization

    NASA Astrophysics Data System (ADS)

    Hart, E.; Jacobson, M. Z.

    2009-12-01

    A number of organizations have suggested that an 80% reduction in carbon emissions by 2050 is a necessary step to mitigate climate change and that decarbonization of the electricity sector is a crucial component of any strategy to meet this target. Integration of large renewable and intermittent generators poses many new problems in power system planning. In this study, we attempt to determine an optimal portfolio of renewable resources to best meet the fluctuating California load while also meeting an 80% carbon emissions reduction requirement. A stochastic optimization scheme is proposed that is based on a simplified model of the California electricity grid. In this single-busbar power system model, the load is met with generation from wind, solar thermal, photovoltaic, hydroelectric, geothermal, and natural gas plants. Wind speeds and insolation are calculated using GATOR-GCMOM, a global-through-urban climate-weather-air pollution model. Fields were produced for California and Nevada at 21 km (S-N) by 14 km (W-E) spatial resolution every 15 minutes for the year 2006. Load data for 2006 were obtained from the California ISO OASIS database. Maximum installed capacities for wind and solar thermal generation were determined using a GIS analysis of potential development sites throughout the state. The stochastic optimization scheme requires that power balance be achieved in a number of meteorological and load scenarios that deviate from the forecasted (or modeled) data. By adjusting the error distributions of the forecasts, the model describes how improvements in wind speed and insolation forecasting may affect the optimal renewable portfolio. Using a simple model, we describe the diversity, size, and sensitivities of a renewable portfolio that is best suited to the resources and needs of California and that contributes significantly to the reduction of the state's carbon emissions.
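
    A minimal sketch of the scenario-constrained idea described above: choose installed capacities that minimize cost while meeting load in every sampled weather/load scenario, with a capped dispatchable remainder. All coefficients and distributions are illustrative stand-ins, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
S = 100                                  # sampled weather/load scenarios
cf_wind = rng.uniform(0.1, 0.6, S)       # wind capacity factor per scenario
cf_solar = rng.uniform(0.0, 0.5, S)      # solar capacity factor per scenario
load = rng.uniform(30.0, 40.0, S)        # load per scenario, GW
gas_cap = 5.0                            # carbon cap as max dispatchable output

c = [1.5, 1.0]                           # cost per GW of wind, solar capacity
# require cf_w*W + cf_s*P >= load - gas_cap in every scenario
A_ub = -np.column_stack([cf_wind, cf_solar])
b_ub = -(load - gas_cap)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                             # least-cost installed capacities
```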

  15. Performance Analysis and Design Synthesis (PADS) computer program. Volume 2: Program description, part 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.

  16. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    PubMed Central

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) size in a photovoltaic-generation-integrated radial distribution network, in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for the BESS. Two optimization processes are conducted: the first obtains the optimal battery output power on an hourly basis, and the second obtains the optimal BESS capacity by considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best comparative performance in terms of mitigating the voltage rise problem. PMID:25054184

  17. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    PubMed

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) size in a photovoltaic-generation-integrated radial distribution network, in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for the BESS. Two optimization processes are conducted: the first obtains the optimal battery output power on an hourly basis, and the second obtains the optimal BESS capacity by considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best comparative performance in terms of mitigating the voltage rise problem.

  18. Kinship-based politics and the optimal size of kin groups

    PubMed Central

    Hammel, E. A.

    2005-01-01

    Kin form important political groups, which change in size and relative inequality with demographic shifts. Increases in the rate of population growth increase the size of kin groups but decrease their inequality and vice versa. The optimal size of kin groups may be evaluated from the marginal political product (MPP) of their members. Culture and institutions affect levels and shapes of MPP. Different optimal group sizes, from different perspectives, can be suggested for any MPP schedule. The relative dominance of competing groups is determined by their MPP schedules. Groups driven to extremes of sustainability may react in Malthusian fashion, including fission and fusion, or in Boserupian fashion, altering social technology to accommodate changes in size. The spectrum of alternatives for actors and groups, shaped by existing institutions and natural and cultural selection, is very broad. Nevertheless, selection may result in survival of particular kinds of political structures. PMID:16091466

  19. Kinship-based politics and the optimal size of kin groups.

    PubMed

    Hammel, E A

    2005-08-16

    Kin form important political groups, which change in size and relative inequality with demographic shifts. Increases in the rate of population growth increase the size of kin groups but decrease their inequality and vice versa. The optimal size of kin groups may be evaluated from the marginal political product (MPP) of their members. Culture and institutions affect levels and shapes of MPP. Different optimal group sizes, from different perspectives, can be suggested for any MPP schedule. The relative dominance of competing groups is determined by their MPP schedules. Groups driven to extremes of sustainability may react in Malthusian fashion, including fission and fusion, or in Boserupian fashion, altering social technology to accommodate changes in size. The spectrum of alternatives for actors and groups, shaped by existing institutions and natural and cultural selection, is very broad. Nevertheless, selection may result in survival of particular kinds of political structures.

  20. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    PubMed

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.
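
    The tile counting method is the box-counting estimator under another name: count the tiles of side s that contain foreground pixels, repeat over several tile sizes, and take the slope of the log-log fit. The sketch below is a generic implementation on a stand-in binary image; the paper's radiographic preprocessing is not reproduced.

```python
import numpy as np

def tile_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        tiles = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())   # occupied tiles of side s
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope                                     # dimension estimate

rng = np.random.default_rng(3)
demo = rng.random((256, 256)) > 0.7                   # stand-in binary image
print(tile_count_dimension(demo))
```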

  1. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method

    PubMed Central

    Huh, Kyung-Hoe; Baik, Jee-Seon; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-01-01

    Purpose: This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Materials and Methods: Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. Results: The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. Conclusion: The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm. PMID:21977478

  2. Multi-parameter optimization of piezoelectric actuators for multi-mode active vibration control of cylindrical shells

    NASA Astrophysics Data System (ADS)

    Hu, K. M.; Li, Hua

    2018-07-01

    A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy index of position-and-size optimization decreased by 14.0% compared to position-only optimization; that of position-and-tilt-angle optimization decreased by 16.8%; and that of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.

  3. Facile high-yield synthesis of polyaniline nanosticks with intrinsic stability and electrical conductivity.

    PubMed

    Li, Xin-Gui; Li, Ang; Huang, Mei-Rong

    2008-01-01

    Chemical oxidative polymerization at 15 degrees C was used for the simple and productive synthesis of polyaniline (PAN) nanosticks. The effect of the polymerization medium on the yield, size, stability, and electrical conductivity of the PAN nanosticks was studied by changing the concentration and nature of the acid medium and oxidant and by introducing organic solvent. The molecular and supramolecular structure, size, and size distribution of the PAN nanosticks were characterized by UV/Vis and IR spectroscopy, X-ray diffraction, laser particle-size analysis, and transmission electron microscopy. Introduction of organic solvent is advantageous for enhancing the yield of PAN nanosticks but disadvantageous for the formation of PAN nanosticks with small size and high conductivity. The concentration and nature of the acid medium have a major influence on the polymerization yield and conductivity of the nanosized PAN. The average diameter and length of PAN nanosticks produced with 2 M HNO₃ and 0.5 M H₂SO₄ as acid media are about 40 and 300 nm, respectively. The PAN nanosticks obtained in an optimal medium (i.e., 2 M HNO₃) exhibit the highest conductivity of 2.23 S cm⁻¹ and the highest yield of 80.7%. A mechanism of formation of nanosticks instead of nanoparticles is proposed. Nanocomposite films of the PAN nanosticks with poly(vinyl alcohol) show a low percolation threshold of 0.2 wt%, at which the film retains almost the same transparency and strength as pure poly(vinyl alcohol) but 262 000 times the conductivity of a pure poly(vinyl alcohol) film. The present synthesis of PAN nanosticks requires no external stabilizer and provides a facile and direct route for the fabrication of PAN nanosticks with high yield, controllable size, intrinsic self-stability, strong redispersibility, high purity, and optimizable conductivity.

  4. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size, and performance of several fixed-geometry aircraft that are Pareto-optimal with respect to two competing aircraft performance objectives. The second area, titled "morphing as an independent variable," formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.

  5. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal even where current evidence would be deemed sufficient under the assumption of no between-study variation. However, despite the increase in expected net gain, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  6. Congenital heart disease affects cerebral size but not brain growth.

    PubMed

    Ortinau, Cynthia; Inder, Terrie; Lambeth, Jennifer; Wallendorf, Michael; Finucane, Kirsten; Beca, John

    2012-10-01

    Infants with congenital heart disease (CHD) have delayed brain maturation and alterations in brain volume. Brain metrics is a simple measurement technique that can be used to evaluate brain growth. This study used brain metrics to test the hypothesis that alterations in brain size persist at 3 months of age and that infants with CHD have slower rates of brain growth than control infants. Fifty-seven infants with CHD underwent serial brain magnetic resonance imaging (MRI). To evaluate brain growth across the first 3 months of life, brain metrics were undertaken using 19 tissue and fluid spaces shown on MRIs performed before surgery and again at 3 months of age. Before surgery, infants with CHD have smaller frontal, parietal, cerebellar, and brain stem measures (p < 0.001). At 3 months of age, alterations persisted in all measures except the cerebellum. There was no difference between control and CHD infants in brain growth. However, the cerebellum trended toward greater growth in infants with CHD. Somatic growth was the primary factor that related to brain growth. Presence of focal white matter lesions before and after surgery did not relate to alterations in brain size or growth. Although infants with CHD have persistent alterations in brain size at 3 months of age, rates of brain growth are similar to that of healthy term infants. Somatic growth was the primary predictor of brain growth, emphasizing the importance of optimal weight gain in this population.

  7. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  8. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures, examines the details of the formulation, and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  9. In vitro/in vivo evaluation of agar nanospheres for pulmonary delivery of bupropion HCl.

    PubMed

    Varshosaz, Jaleh; Minaiyan, Mohsen; Zaki, Mohammad Reza; Fathi, Milad; Jaleh, Hossein

    2016-07-01

    Bupropion HCl is an atypical antidepressant drug with rapid and extensive first-pass metabolism. A sustained-release dosage form of this drug is suggested for reducing its side effects, which are mainly seizures. The aim of the present study was to design pulmonary agar nanospheres of bupropion HCl with effective systemic absorption and extended-release properties. Bupropion HCl was encapsulated in agar nanospheres by ionic gelation and characterized for physical and release properties. Pharmacokinetic studies of the nanospheres were performed on rats by intratracheal spraying of 5 mg/kg of drug in the form of nanospheres, compared with intravenous and pulmonary delivery of the same dose as a simple solution of the drug. The optimized nanoparticles showed a particle size of 320 ± 90 nm with a polydispersity index of 0.85, a zeta potential of -29.6 mV, a drug loading efficiency of 43.1 ± 0.28%, and a release efficiency of 66.7 ± 2%. The area under the serum concentration-time profile for the pulmonary nanospheres versus the simple solution was 10 237.84 versus 28.8 µg/ml min, Tmax was 360 versus 60 min, and Cmax was 1927.93 versus 9.93 ng/ml, respectively. The absolute bioavailability of the drug was 86.69% for the nanospheres and 0.25% for the pulmonary simple solution. Our results indicate that pulmonary delivery of bupropion-loaded agar nanospheres achieves systemic exposure and extends serum levels of the drug.

  10. Assessing grain-size correspondence between flow and deposits of controlled floods in the Colorado River, USA

    USGS Publications Warehouse

    Draut, Amy; Rubin, David M.

    2013-01-01

    Flood-deposited sediment has been used to decipher environmental parameters such as variability in watershed sediment supply, paleoflood hydrology, and channel morphology. It is not well known, however, how accurately the deposits reflect sedimentary processes within the flow, and hence what sampling intensity is needed to decipher records of recent or long-past conditions. We examine these problems using deposits from dam-regulated floods in the Colorado River corridor through Marble Canyon–Grand Canyon, Arizona, U.S.A., in which steady-peaked floods represent a simple end-member case. For these simple floods, most deposits show inverse grading that reflects coarsening suspended sediment (a result of fine-sediment-supply limitation), but there is enough eddy-scale variability that some profiles show normal grading that did not reflect grain-size evolution in the flow as a whole. To infer systemwide grain-size evolution in modern or ancient depositional systems requires sampling enough deposit profiles that the standard error of the mean of grain-size-change measurements becomes small relative to the magnitude of observed changes. For simple, steady-peaked floods, 5–10 profiles or fewer may suffice to characterize grain-size trends robustly, but many more samples may be needed from deposits with greater variability in their grain-size evolution.

  11. Optimizing homogenization by chaotic unmixing?

    NASA Astrophysics Data System (ADS)

    Weijs, Joost; Bartolo, Denis

    2016-11-01

    A number of industrial processes rely on the homogeneous dispersion of non-Brownian particles in a viscous fluid. An ideal mixing would yield a so-called hyperuniform particle distribution. Such configurations are characterized by density fluctuations that grow more slowly than the standard √N fluctuations. Even though such distributions have been found in several natural structures, e.g. retina receptors in birds, they have remained out of experimental reach until very recently. Over the last 5 years, independent experiments and numerical simulations have shown that periodically driven suspensions can self-assemble hyperuniformly. Simple as the recipe may be, it has one important disadvantage. The emergence of hyperuniform states co-occurs with a critical phase transition from reversible to non-reversible particle dynamics. As a consequence, the homogenization dynamics occurs over a time that diverges with the system size (critical slowing down). Here, we discuss how this process can be sped up by exploiting the stirring properties of chaotic advection. Among the questions that we answer are: What are the physical mechanisms in a chaotic flow that are relevant for hyperuniformity? How can we tune the flow parameters so as to obtain optimal hyperuniformity in the fastest way? JW acknowledges funding by NWO (Netherlands Organisation for Scientific Research) through a Rubicon Grant.

  12. [Near infrared spectroscopy system structure with MOEMS scanning mirror array].

    PubMed

    Luo, Biao; Wen, Zhi-Yu; Wen, Zhong-Quan; Chen, Li; Qian, Rong-Rong

    2011-11-01

    A method that uses a MOEMS mirror array optical structure to reduce the high cost of infrared spectrometers is presented. The method resolves the imaging-irregularity problem that has prevented MOEMS mirror arrays from being used in simple infrared spectrometers, and a new structure for spectral imaging was designed. According to the requirements on the imaging spot, the optical structure was designed and optimized using the optical design software ZEMAX and an optimization algorithm for standard-specific aberrations. It works from 900 to 1 400 nm. The design analysis shows that, with a light-source slit width of 50 μm, the spectrophotometric system achieves a theoretical resolution better than 6 nm, and the size of the available spot is 0.042 mm × 0.08 mm. Verification examples show that the design meets the requirements of imaging regularity and can be used for MOEMS mirror reflectance scanning. The feasibility of the new MOEMS mirror array spectrometer model was also verified. Finally, the relationship between the location of the detector and the maximum deflection angle of the micro-mirror was analyzed.

  13. An improved exploratory search technique for pure integer linear programming problems

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1990-01-01

    The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
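
    A minimal sketch of the rounding-plus-neighborhood idea described above: solve the LP relaxation, round down to a feasible integer point, then repeatedly try ±1 moves on each variable while preserving feasibility. The toy problem and the simple greedy acceptance rule are illustrative; the report's Hooke-and-Jeeves-style pattern moves are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# toy problem: maximize 5x + 4y  s.t.  6x + 4y <= 24,  x + 2y <= 6,  x, y >= 0
c = np.array([-5.0, -4.0])                    # linprog minimizes, so negate
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])

relaxed = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
x = np.floor(relaxed.x).astype(int)           # rounded first feasible point

def feasible(v):
    return np.all(v >= 0) and np.all(A @ v <= b)

improved = True
while improved:                               # +/-1 exploratory search
    improved = False
    for i in range(len(x)):
        for d in (1, -1):
            cand = x.copy()
            cand[i] += d
            if feasible(cand) and c @ cand < c @ x:
                x, improved = cand, True
print(x, -(c @ x))                            # heuristic integer solution
```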

  14. Quantitative modeling and optimization of magnetic tweezers.

    PubMed

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H

    2009-06-17

    Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads.
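
    The semianalytic route mentioned above rests on the Biot-Savart law; the generic sketch below evaluates it numerically for a single circular current loop and checks the result against the analytic on-axis field. The loop geometry is an illustration, not the tweezers' magnet configuration.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def loop_field(r_obs, radius=1.0, current=1.0, n_seg=2000):
    """B at point r_obs from a current loop in the z=0 plane (Biot-Savart)."""
    phi = np.linspace(0, 2 * np.pi, n_seg, endpoint=False)
    pts = np.column_stack([radius * np.cos(phi), radius * np.sin(phi),
                           np.zeros(n_seg)])
    dl = np.column_stack([-np.sin(phi), np.cos(phi),
                          np.zeros(n_seg)]) * (2 * np.pi * radius / n_seg)
    r = r_obs - pts
    rmag = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 / (4 * np.pi) * current * np.cross(dl, r) / rmag ** 3
    return dB.sum(axis=0)

# check against the analytic on-axis field B_z = mu0 I R^2 / (2 (R^2+z^2)^1.5)
z = 0.5
print(loop_field(np.array([0.0, 0.0, z]))[2],
      MU0 / (2 * (1 + z ** 2) ** 1.5))
```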

  15. Quantitative Modeling and Optimization of Magnetic Tweezers

    PubMed Central

    Lipfert, Jan; Hao, Xiaomin; Dekker, Nynke H.

    2009-01-01

    Abstract Magnetic tweezers are a powerful tool to manipulate single DNA or RNA molecules and to study nucleic acid-protein interactions in real time. Here, we have modeled the magnetic fields of permanent magnets in magnetic tweezers and computed the forces exerted on superparamagnetic beads from first principles. For simple, symmetric geometries the magnetic fields can be calculated semianalytically using the Biot-Savart law. For complicated geometries and in the presence of an iron yoke, we employ a finite-element three-dimensional PDE solver to numerically solve the magnetostatic problem. The theoretical predictions are in quantitative agreement with direct Hall-probe measurements of the magnetic field and with measurements of the force exerted on DNA-tethered beads. Using these predictive theories, we systematically explore the effects of magnet alignment, magnet spacing, magnet size, and of adding an iron yoke to the magnets on the forces that can be exerted on tethered particles. We find that the optimal configuration for maximal stretching forces is a vertically aligned pair of magnets, with a minimal gap between the magnets and minimal flow cell thickness. Following these principles, we present a configuration that allows one to apply ≥40 pN stretching forces on ≈1-μm tethered beads. PMID:19527664

  16. Phospholipid hydrolysis in a pharmaceutical emulsion assessed by physicochemical parameters and a new analytical method.

    PubMed

    Rabinovich-Guilatt, Laura; Dubernet, Catherine; Gaudin, Karen; Lambert, Gregory; Couvreur, Patrick; Chaminade, Pierre

    2005-09-01

    The aim of this work was to develop a simple high-performance liquid chromatography (HPLC) technique with evaporative light scattering detection (ELSD) for the separation and quantification of the major phospholipid (PL) and lysophospholipid (LPL) classes contained in a pharmaceutical phospholipid-based emulsion. In the established method, phosphatidylcholine (PC), phosphatidylethanolamine (PE), sphingomyelin (SM), lysophosphatidylcholine (LPC) and lysophosphatidylethanolamine (LPE) were separated with a PVA-Sil stationary phase and a binary gradient from pure chloroform to methanol:water (94:6 v/v) at 3.4%/min. The ELSD detection was enhanced using 0.1% triethylamine and formic acid in each gradient mobile phase. Factors such as the stationary phase and the ELSD drift tube temperature were optimized, yielding optimal temperatures of 25 degrees C for separation and 50 degrees C for evaporation. This HPLC-ELSD method was then applied to a PL emulsion exposed to autoclaving and accelerated thermal conditions at 50 degrees C. Hydrolysis of PC and PE followed first-order kinetics, with the intact PLs representing only 45% of the total lipid mass after 3 months. The chemical stability was correlated with commonly measured physical and physicochemical formulation parameters such as droplet size, emulsion pH, and zeta potential.

  17. Assessing predation risk: optimal behaviour and rules of thumb.

    PubMed

    Welton, Nicky J; McNamara, John M; Houston, Alasdair I

    2003-12-01

    We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.
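
    The learning-by-surviving mechanism has a compact conjugate form worth making explicit: with a Gamma prior over the attack rate μ and survival likelihood exp(-μet) under exposure e, the posterior after surviving to time t is again Gamma, so the estimated risk declines the longer the animal survives. The sketch below illustrates this; the prior and exposure values are arbitrary, and this is a simplification of the paper's full trade-off model.

```python
# Gamma(a, b) prior over attack rate mu; surviving to time t under exposure e
# multiplies the likelihood by exp(-mu*e*t), giving a Gamma(a, b + e*t)
# posterior. All values below are arbitrary illustrations.
a, b = 2.0, 10.0        # prior mean risk a/b = 0.2 attacks per unit time
exposure = 0.5          # behavioural exposure level (held fixed for clarity)

for t in (0, 5, 20, 80):
    post_mean = a / (b + exposure * t)   # posterior mean after surviving to t
    print(f"t={t:>3}: estimated attack rate = {post_mean:.3f}")
```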

  18. Effect of Data Assimilation Parameters on The Optimized Surface CO2 Flux in Asia

    NASA Astrophysics Data System (ADS)

    Kim, Hyunjung; Kim, Hyun Mee; Kim, Jinwoong; Cho, Chun-Ho

    2018-02-01

    In this study, CarbonTracker, an inverse modeling system based on the ensemble Kalman filter, was used to evaluate the effects of data assimilation parameters (assimilation window length and ensemble size) on the estimation of surface CO2 fluxes in Asia. Several experiments with different parameters were conducted, and the results were verified using CO2 concentration observations. The assimilation window lengths tested were 3, 5, 7, and 10 weeks, and the ensemble sizes were 100, 150, and 300; a total of 12 experiments using combinations of these parameters were therefore conducted. The experimental period was from January 2006 to December 2009. Differences between the optimized surface CO2 fluxes of the experiments were largest in the Eurasian Boreal (EB) area, followed by Eurasian Temperate (ET) and Tropical Asia (TA), and were larger in boreal summer than in boreal winter. The effect of ensemble size on the optimized biosphere flux is larger than the effect of the assimilation window length in Asia as a whole, but their relative importance varies across specific regions. The optimized biosphere flux was more sensitive to the assimilation window length in EB, whereas it was sensitive to the ensemble size as well as the assimilation window length in ET. The larger the ensemble size and the shorter the assimilation window length, the larger the uncertainty (i.e., ensemble spread) of the optimized surface CO2 fluxes. The 10-week assimilation window and the 300-member ensemble were the optimal configuration for CarbonTracker in the Asian region, based on several verifications using CO2 concentration measurements.
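
    For readers unfamiliar with the machinery being tuned here, the sketch below is a minimal stochastic ensemble-Kalman-filter update of the kind CarbonTracker applies to flux scaling factors. All dimensions, operators, and error statistics are toy stand-ins, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ens, n_state, n_obs = 150, 10, 4

X = 1.0 + 0.2 * rng.normal(size=(n_state, n_ens))   # flux-scaling ensemble
H = rng.uniform(0, 1, (n_obs, n_state))             # stand-in obs operator
R = 0.1 * np.eye(n_obs)                             # obs-error covariance
y = H @ np.ones(n_state) + 0.1 * rng.normal(size=n_obs)   # synthetic obs

A = X - X.mean(axis=1, keepdims=True)               # ensemble anomalies
HA = H @ A
P_HT = A @ HA.T / (n_ens - 1)                       # cov(state, H state)
S = HA @ HA.T / (n_ens - 1) + R                     # innovation covariance
K = P_HT @ np.linalg.inv(S)                         # Kalman gain

for k in range(n_ens):                              # perturbed-obs update
    y_pert = y + rng.multivariate_normal(np.zeros(n_obs), R)
    X[:, k] += K @ (y_pert - H @ X[:, k])
print(X.mean(axis=1))                               # optimized flux scalings
```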

  19. Tapping insertional torque allows prediction for better pedicle screw fixation and optimal screw size selection.

    PubMed

    Helgeson, Melvin D; Kang, Daniel G; Lehman, Ronald A; Dmitriev, Anton E; Luhmann, Scott J

    2013-08-01

    There is currently no reliable technique for intraoperative assessment of pedicle screw fixation strength and optimal screw size. Several studies have evaluated pedicle screw insertional torque (IT) and its direct correlation with pullout strength. However, there is limited clinical application with pedicle screw IT as it must be measured during screw placement and rarely causes the spine surgeon to change screw size. To date, no study has evaluated tapping IT, which precedes screw insertion, and its ability to predict pedicle screw pullout strength. The objective of this study was to investigate tapping IT and its ability to predict pedicle screw pullout strength and optimal screw size. In vitro human cadaveric biomechanical analysis. Twenty fresh-frozen human cadaveric thoracic vertebral levels were prepared and dual-energy radiographic absorptiometry scanned for bone mineral density (BMD). All specimens were osteoporotic with a mean BMD of 0.60 ± 0.07 g/cm(2). Five specimens (n=10) were used to perform a pilot study, as there were no previously established values for optimal tapping IT. Each pedicle during the pilot study was measured using a digital caliper as well as computed tomography measurements, and the optimal screw size was determined to be equal to or the first size smaller than the pedicle diameter. The optimal tap size was then selected as the tap diameter 1 mm smaller than the optimal screw size. During optimal tap size insertion, all peak tapping IT values were found to be between 2 in-lbs and 3 in-lbs. Therefore, the threshold tapping IT value for optimal pedicle screw and tap size was determined to be 2.5 in-lbs, and a comparison tapping IT value of 1.5 in-lbs was selected. Next, 15 test specimens (n=30) were measured with digital calipers, probed, tapped, and instrumented using a paired comparison between the two threshold tapping IT values (Group 1: 1.5 in-lbs; Group 2: 2.5 in-lbs), randomly assigned to the left or right pedicle on each specimen. Each pedicle was incrementally tapped to increasing size (3.75, 4.00, 4.50, and 5.50 mm) until the threshold value was reached based on the assigned group. Pedicle screw size was determined by adding 1 mm to the tap size that crossed the threshold torque value. Torque measurements were recorded with each revolution during tap and pedicle screw insertion. Each specimen was then individually potted and pedicle screws pulled out "in-line" with the screw axis at a rate of 0.25 mm/sec. Peak pullout strength (POS) was measured in Newtons (N). The peak tapping IT was significantly increased (50%) in Group 2 (3.23 ± 0.65 in-lbs) compared with Group 1 (2.15 ± 0.56 in-lbs) (p=.0005). The peak screw IT was also significantly increased (19%) in Group 2 (8.99 ± 2.27 in-lbs) compared with Group 1 (7.52 ± 2.96 in-lbs) (p=.02). The pedicle screw pullout strength was also significantly increased (23%) in Group 2 (877.9 ± 235.2 N) compared with Group 1 (712.3 ± 223.1 N) (p=.017). The mean pedicle screw diameter was significantly increased in Group 2 (5.70 ± 1.05 mm) compared with Group 1 (5.00 ± 0.80 mm) (p=.0002). There was also an increased rate of optimal pedicle screw size selection in Group 2 with 9 of 15 (60%) pedicle screws compared with Group 1 with 4 of 15 (26.7%) pedicle screws within 1 mm of the measured pedicle width. There was a moderate correlation for tapping IT with both screw IT (r=0.54; p=.002) and pedicle screw POS (r=0.55; p=.002). 
Our findings suggest that tapping IT directly correlates with pedicle screw IT, pedicle screw pullout strength, and optimal pedicle screw size. Therefore, tapping IT may be used during thoracic pedicle screw instrumentation as an adjunct to preoperative imaging and clinical experience to maximize fixation strength and optimize pedicle "fit and fill" with the largest screw possible. However, further prospective, in vivo studies are necessary to evaluate the intraoperative use of tapping IT to predict screw loosening/complications. Published by Elsevier Inc.

  20. Risk factors associated with conversion of laparoscopic simple closure in perforated duodenal ulcer.

    PubMed

    Kim, Ji-Hyun; Chin, Hyung-Min; Bae, You-Jin; Jun, Kyong-Hwa

    2015-03-01

    Precise patient selection criteria are necessary to guide the surgeon in selecting laparoscopic repair for patients with perforated peptic ulcers. The aims of this study are to report surgical outcomes after surgery for perforated duodenal ulcers and identify risk factors for predicting failure of laparoscopic simple closure for perforated duodenal ulcer. In total, 77 patients who underwent laparoscopic simple closure for perforated duodenal ulcers from January 2007 to September 2013 were retrospectively analyzed. Patients were divided into totally laparoscopic and conversion groups. The characteristics of patients, intraoperative findings, postoperative complications, conversion rates and suture leakage rates of each group were investigated. Laparoscopic repair was completed in 69 (89.6%) of 77 patients, while 8 (10.4%) underwent conversion to open repair. Patients in the conversion group had longer perforation time, larger perforation size, more suture leakage, longer hospital stay, and higher 30-day mortality rate than those in the totally laparoscopic group. The size of perforation was the only risk factor for conversion in multivariable analysis. Patients with an ulcer perforation size of ≥9 mm or with perforation duration of ≥12.5 h had a significantly increased risk for conversion and suture leakage. Ulcer size of ≥9 mm is a significant risk factor for predicting conversion in laparoscopic simple closure. Suture leakage is associated with ulcer size (9 mm) and duration of perforation (12.5 h). Copyright © 2015 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  1. A survey of methods of feasible directions for the solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1972-01-01

    Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems considered are: (1) fixed-time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed-time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free-time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed-time problems with inequality state-space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
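
    As a finite-dimensional illustration of the first family reviewed above, the sketch below runs the classic Frank-Wolfe (conditional-gradient) iteration on a box-constrained quadratic: each step solves a linear subproblem over the feasible set to obtain a feasible direction. The problem instance is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
Q = rng.normal(size=(5, 5))
Q = Q @ Q.T + np.eye(5)                # positive-definite quadratic term
p = rng.normal(size=5)
lo, hi = -np.ones(5), np.ones(5)       # box feasible set

x = np.zeros(5)
for k in range(200):
    g = Q @ x + p                      # gradient of 0.5 x'Qx + p'x
    s = np.where(g > 0, lo, hi)        # linear subproblem: min g's over box
    x += (2.0 / (k + 2.0)) * (s - x)   # feasible direction, diminishing step
print(x, 0.5 * x @ Q @ x + p @ x)
```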

  2. Rapid and semi-analytical design and simulation of a toroidal magnet made with YBCO and MgB₂ superconductors

    DOE PAGES

    Dimitrov, I. K.; Zhang, X.; Solovyov, V. F.; ...

    2015-07-07

    Recent advances in second-generation (YBCO) high-temperature superconducting wire could potentially enable the design of super-high-performance energy storage devices that combine the high energy density of chemical storage with the high power of superconducting magnetic storage. However, the high aspect ratio and the considerable filament size of these wires require the concomitant development of dedicated optimization methods that account for the critical current density in type-II superconductors. In this study, we report on the novel application and results of a CPU-efficient semianalytical computer code based on the Radia 3-D magnetostatics software package. Our algorithm is used to simulate and optimize the energy density of a superconducting magnetic energy storage device model, based on design constraints such as overall size and number of coils. The rapid performance of the code rests on analytical calculations of the magnetic field, based on an efficient implementation of the Biot-Savart law for a large variety of 3-D "base" geometries in the Radia package. The significantly reduced CPU time and simple data input, in conjunction with the consideration of realistic input variables such as material-specific, temperature- and magnetic-field-dependent critical current densities, have enabled the Radia-based algorithm to outperform finite-element approaches in CPU time at the same accuracy levels. Comparative simulations of MgB₂- and YBCO-based devices are performed at 4.2 K, in order to ascertain the realistic efficiency of the design configurations.

  3. Deeper and sparser nets are optimal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiu, V.; Makaruk, H.E.

    1998-03-01

    The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (2) another for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper analyzes the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors extend a result from Horne and Hush (1994) valid for fan-in Δ = 2 to arbitrary fan-in. Second, they prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum-size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6…9) there exist VLSI-optimal (i.e., minimizing AT²) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).

  4. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing of a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables (common aircraft sizing variables) along with a set of "morphing limit" variables (describing the maximum shape change for a particular morphing strategy). The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes fuel consumed during each mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.

  5. A Study on Optimal Sizing of Pipeline Transporting Equi-sized Particulate Solid-Liquid Mixture

    NASA Astrophysics Data System (ADS)

    Asim, Taimoor; Mishra, Rakesh; Pradhan, Suman; Ubbi, Kuldip

    2012-05-01

    Pipelines transporting solid-liquid mixtures are of practical interest to the oil and pipe industry throughout the world. Such pipelines are known as slurry pipelines, where the solid medium of the flow is commonly known as slurry. The optimal design of such pipelines is of commercial interest for their widespread acceptance. A methodology has been evolved for the optimal sizing of a pipeline transporting a solid-liquid mixture. The least-cost principle has been used in sizing such pipelines, which involves determining the pipe diameter corresponding to the minimum cost for a given solid throughput. A detailed analysis of the transportation of slurry containing solids of uniformly graded particle size has been included. The proposed methodology can be used for designing a pipeline transporting any solid material at different solid throughputs.
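
    A minimal sketch of the least-cost principle, assuming a hypothetical cost model in which capital cost grows with diameter while Darcy-Weisbach pumping cost falls with it; every coefficient below is a placeholder, not data from the paper.

      import numpy as np

      # Total annual cost = capital + pumping. Capital cost is taken
      # proportional to D**1.5 per unit length; friction head from
      # Darcy-Weisbach scales roughly as 1/D**5 at fixed throughput.
      def total_cost(D, Q=0.2, f=0.02, L=1000.0, c_cap=5000.0, c_energy=0.10):
          g, rho = 9.81, 1000.0
          v = Q / (np.pi * D**2 / 4.0)                 # mean velocity (m/s)
          h_f = f * (L / D) * v**2 / (2.0 * g)         # friction head (m)
          pumping = rho * g * Q * h_f / 0.7 / 1000.0   # pump power (kW), 70% eff.
          hours = 8760.0
          return c_cap * L * D**1.5 + c_energy * pumping * hours

      D_grid = np.linspace(0.1, 1.0, 200)
      costs = [total_cost(D) for D in D_grid]
      print("least-cost diameter ~ %.2f m" % D_grid[int(np.argmin(costs))])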

  6. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic pressure constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic pressure constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic pressure constrained case.

  7. SU-C-BRC-05: Monte Carlo Calculations to Establish a Simple Relation of Backscatter Dose Enhancement Around High-Z Dental Alloy to Its Atomic Number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utsunomiya, S; Kushima, N; Katsura, K

    Purpose: To establish a simple relation of backscatter dose enhancement around a high-Z dental alloy in head and neck radiation therapy to its average atomic number, based on Monte Carlo calculations. Methods: The PHITS Monte Carlo code was used to calculate dose enhancement, which is quantified by the backscatter dose factor (BSDF). The accuracy of the beam modeling with PHITS was verified by comparing with basic measured data, namely PDDs and dose profiles. In the simulation, a 1 cm cube of high-Z alloy was embedded into a tough water phantom irradiated by a 6-MV (nominal) X-ray beam of 10 cm × 10 cm field size from a Novalis TX (Brainlab). Ten different high-Z alloy materials (Al, Ti, Cu, Ag, Au-Pd-Ag, I, Ba, W, Au, Pb) were considered. The accuracy of the calculated BSDF was verified by comparison with data measured by Gafchromic EBT3 films placed 0 to 10 mm away from a high-Z alloy (Au-Pd-Ag). We derived an approximate equation relating the BSDF and the range of backscatter to the average atomic number of the high-Z alloy. Results: The calculated BSDF showed excellent agreement with the film measurements from 0 to 10 mm away from the high-Z alloy. We found a simple linear relation of both the BSDF and the range of backscatter to the average atomic number of the dental alloys. The latter relation is explained by the fact that the energy spectrum of backscattered electrons depends strongly on the average atomic number. Conclusion: We found a simple relation of backscatter dose enhancement around high-Z alloys to their average atomic number, based on Monte Carlo calculations. This work provides a simple and useful method to estimate backscatter dose enhancement from dental alloys and the corresponding optimal thickness of dental spacer to prevent mucositis effectively.

  8. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words

    PubMed Central

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, in which the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants’ performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable to any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer words than for shorter ones, and that a word width of 3.6° was optimal among the word lengths tested (three-, four-, and six-character words). Given that character size varied with word width and word length in the present study, this means that the optimal character size depends on word width and word length for scrolling Japanese words. PMID:26909052

  9. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words.

    PubMed

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, in which the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable to any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer words than for shorter ones, and that a word width of 3.6° was optimal among the word lengths tested (three-, four-, and six-character words). Given that character size varied with word width and word length in the present study, this means that the optimal character size depends on word width and word length for scrolling Japanese words.

  10. Deeper sparser nets are size-optimal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiu, V.; Makaruk, H.E.

    1997-12-01

    The starting points of this paper are two size-optimal solutions: (i) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (ii) another one for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in Δ = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e., minimizing AT²) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).

  11. Determination of a temperature sensor location for monitoring weld pool size in GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boo, K.S.; Cho, H.S.

    1994-11-01

    This paper describes a method of determining the optimal sensor location for measuring weldment surface temperature, which has a close correlation with weld pool size in the gas metal arc (GMA) welding process. Due to the inherent complexity and nonlinearity of the GMA welding process, the relationship between the weldment surface temperature and the weld pool size varies with the point of measurement. This necessitates an optimal selection of the measurement point to minimize the process nonlinearity effect in estimating the weld pool size from the measured temperature. To determine the optimal sensor location on the top surface of the weldment, the correlation between the measured temperature and the weld pool size is analyzed. The analysis is done by calculating the correlation function, which is based upon an analytical temperature distribution model. To validate the optimal sensor location, a series of GMA bead-on-plate welds are performed on a medium-carbon steel under various welding conditions. A comparison study is given in detail based upon the simulation and experimental results.

  12. Emergence of an optimal search strategy from a simple random walk

    PubMed Central

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-01-01

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters both its randomly determined next step and the rule that controls its random movement, based on its own directional movement experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of the uniform step lengths. PMID:23804445
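
    The following toy sketch contrasts a plain random walk with a walk whose directional persistence adapts to recent experience; it is an illustrative stand-in for the idea, not the authors' algorithm.

      import numpy as np

      rng = np.random.default_rng(0)

      def walk(steps, adaptive):
          """Unit-step 2-D walk. The plain walk draws a fresh random heading
          each step; the 'adaptive' variant keeps its previous heading with a
          probability that grows while the walker keeps entering sites it has
          not visited before (a toy stand-in for 'directional experience')."""
          pos, heading, p_keep = np.zeros(2), rng.uniform(0, 2 * np.pi), 0.0
          visited = {(0, 0)}
          for _ in range(steps):
              if not adaptive or rng.random() > p_keep:
                  heading = rng.uniform(0, 2 * np.pi)
              pos = pos + np.array([np.cos(heading), np.sin(heading)])
              cell = (round(pos[0]), round(pos[1]))
              p_keep = min(0.95, p_keep + 0.05) if cell not in visited else 0.0
              visited.add(cell)
          return pos

      for adaptive in (False, True):
          disp = np.linalg.norm(walk(2000, adaptive))
          label = "adaptive" if adaptive else "plain   "
          print(label, "final displacement:", round(disp, 1))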

  13. Emergence of an optimal search strategy from a simple random walk.

    PubMed

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters both its randomly determined next step and the rule that controls its random movement, based on its own directional movement experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of the uniform step lengths.

  14. X-ray peak profile analysis of zinc oxide nanoparticles formed by simple precipitation method

    NASA Astrophysics Data System (ADS)

    Pelicano, Christian Mark; Rapadas, Nick Joaquin; Magdaluyo, Eduardo

    2017-12-01

    Zinc oxide (ZnO) nanoparticles were successfully synthesized by a simple precipitation method using zinc acetate and tetramethylammonium hydroxide. The synthesized ZnO nanoparticles were characterized by X-ray diffraction analysis (XRD) and transmission electron microscopy (TEM). The XRD result revealed a hexagonal wurtzite structure for the ZnO nanoparticles. The TEM image showed spherical nanoparticles with an average crystallite size of 6.70 nm. For X-ray peak profile analysis, the Williamson-Hall (W-H) and Size-Strain Plot (SSP) methods were applied to examine the effects of crystallite size and lattice strain on the peak broadening of the ZnO nanoparticles. Based on the calculations, the estimated crystallite sizes and lattice strains obtained are in good agreement with each other.
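
    A minimal sketch of a Williamson-Hall fit (β cos θ = Kλ/D + 4ε sin θ): the slope of a straight-line fit gives the strain, the intercept gives the crystallite size. The peak list below is hypothetical, not the measured ZnO data.

      import numpy as np

      # Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
      K, lam = 0.9, 1.5406e-10          # shape factor and Cu K-alpha wavelength (m)

      # Hypothetical peak positions (2-theta, deg) and integral breadths (rad).
      two_theta = np.radians([31.8, 34.4, 36.3, 47.5, 56.6])
      beta      = np.array([0.022, 0.021, 0.023, 0.025, 0.027])

      theta = two_theta / 2.0
      x = 4.0 * np.sin(theta)
      y = beta * np.cos(theta)
      eps, intercept = np.polyfit(x, y, 1)          # slope = lattice strain
      D = K * lam / intercept                       # crystallite size (m)
      print("strain = %.2e, crystallite size = %.1f nm" % (eps, D * 1e9))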

  15. The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California

    USGS Publications Warehouse

    van Mantgem, P.J.; Stephenson, N.L.

    2005-01-01

    1 We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2 We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3 Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4 Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
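
    The core projection step of such a model is one matrix-vector product per time step. A minimal sketch with an illustrative four-class matrix (the entries are placeholders, not values from the study):

      import numpy as np

      # Size-structured projection n(t+1) = A @ n(t) with a time-invariant,
      # density-independent matrix. Sub-diagonal entries move trees into the
      # next size class, diagonal entries are stasis, and the top row adds
      # recruitment contributed by the larger classes.
      A = np.array([
          [0.95, 0.00, 0.02, 0.05],   # stasis + recruitment
          [0.03, 0.96, 0.00, 0.00],   # growth, class 1 -> 2
          [0.00, 0.02, 0.97, 0.00],   # growth, class 2 -> 3
          [0.00, 0.00, 0.02, 0.98],   # growth, class 3 -> 4
      ])
      n = np.array([500.0, 300.0, 150.0, 50.0])   # initial size distribution

      for step in range(2):                       # two 5-year time steps
          n = A @ n
      print("projected class totals:", n.round(1), "sum =", round(n.sum(), 1))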

  16. An Intensified Vibratory Milling Process for Enhancing the Breakage Kinetics during the Preparation of Drug Nanosuspensions.

    PubMed

    Li, Meng; Zhang, Lu; Davé, Rajesh N; Bilgili, Ecevit

    2016-04-01

    As a drug-sparing approach in early development, vibratory milling has been used for the preparation of nanosuspensions of poorly water-soluble drugs. The aim of this study was to intensify this process through a systematic increase in vibration intensity and bead loading with the optimal bead size for faster production. Griseofulvin, a poorly water-soluble drug, was wet-milled using yttrium-stabilized zirconia beads with sizes ranging from 50 to 1500 μm at low power density (0.87 W/g). Then, this process was intensified with the optimal bead size by sequentially increasing vibration intensity and bead loading. Additional experiments with several bead sizes were performed at high power density (16 W/g), and the results were compared to those from wet stirred media milling. Laser diffraction, scanning electron microscopy, X-ray diffraction, differential scanning calorimetry, and dissolution tests were used for characterization. Results for the low power density indicated 800 μm as the optimal bead size, which led to a median size of 545 nm with more than 10% of the drug particles greater than 1.8 μm, albeit with the fastest breakage. An increase in either vibration intensity or bead loading resulted in faster breakage. The most intensified process led to 90% of the particles being smaller than 300 nm. At the high power density, 400 μm beads were optimal, which enhanced griseofulvin dissolution significantly and signified the importance of bead size in view of the power density. Only the optimally intensified vibratory milling led to a nanosuspension comparable to that prepared by stirred media milling.

  17. Enhanced Solubility and Dissolution Rate of Lacidipine Nanosuspension: Formulation Via Antisolvent Sonoprecipitation Technique and Optimization Using Box-Behnken Design.

    PubMed

    Kassem, Mohamed A A; ElMeshad, Aliaa N; Fares, Ahmed R

    2017-05-01

    Lacidipine (LCDP) is a highly lipophilic calcium channel blocker of poor aqueous solubility, leading to poor oral absorption. This study aims to prepare and optimize LCDP nanosuspensions using an antisolvent sonoprecipitation technique to enhance the solubility and dissolution of LCDP. A three-factor, three-level Box-Behnken design was employed to optimize the formulation variables to obtain an LCDP nanosuspension of small and uniform particle size. The formulation variables were as follows: stabilizer-to-drug ratio (A), sodium deoxycholate percentage (B), and sonication time (C). LCDP nanosuspensions were assessed for particle size, zeta potential, and polydispersity index. The formula with the highest desirability (0.969) was chosen as the optimized formula. The values of the formulation variables (A, B, and C) in the optimized nanosuspension were 1.5, 100%, and 8 min, respectively. The optimal LCDP nanosuspension had a particle size (PS) of 273.21 nm, a zeta potential (ZP) of -32.68 mV, and a polydispersity index (PDI) of 0.098. The LCDP nanosuspension was characterized using X-ray powder diffraction, differential scanning calorimetry, and transmission electron microscopy. The LCDP nanosuspension showed a saturation solubility 70 times that of raw LCDP, in addition to a significantly enhanced dissolution rate due to particle size reduction and decreased crystallinity. These results suggest that the optimized LCDP nanosuspension could be promising for improving the oral absorption of LCDP.
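
    For reference, a three-factor Box-Behnken design can be generated and mapped onto variable ranges in a few lines. The package call and the bounds below are assumptions for illustration (pyDOE2's bbdesign; the ranges are hypothetical, not the study's):

      import numpy as np
      from pyDOE2 import bbdesign   # third-party package; pip install pyDOE2

      # Three-factor Box-Behnken design in coded levels (-1, 0, +1),
      # mapped onto illustrative ranges for the formulation variables.
      coded = bbdesign(3, center=3)

      lows  = np.array([0.5,  25.0, 2.0])   # A: stabilizer:drug, B: %, C: min
      highs = np.array([1.5, 100.0, 8.0])   # bounds are placeholders
      actual = lows + (coded + 1.0) / 2.0 * (highs - lows)

      for run, (a, b, c) in enumerate(actual, 1):
          print("run %2d: ratio=%.2f, NaDOC=%.0f%%, sonication=%.1f min"
                % (run, a, b, c))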

  18. Integrated topology and shape optimization in structural design

    NASA Technical Reports Server (NTRS)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  19. Pre-breeding food restriction promotes the optimization of parental investment in house mice, Mus musculus.

    PubMed

    Dušek, Adam; Bartoš, Luděk; Sedláček, František

    2017-01-01

    Litter size is one of the most reliable state-dependent life-history traits that indicate parental investment in polytocous (litter-bearing) mammals. The tendency to optimize litter size typically increases with decreasing availability of resources during the period of parental investment. To determine whether this tactic is also influenced by resource limitations prior to reproduction, we examined the effect of experimental, pre-breeding food restriction on the optimization of parental investment in lactating mice. First, we investigated the optimization of litter size in 65 experimental and 72 control families (mothers and their dependent offspring). Further, we evaluated pre-weaning offspring mortality, and the relationships between maternal and offspring condition (body weight), as well as offspring mortality, in 24 experimental and 19 control families with litter reduction (the death of one or more offspring). Assuming that pre-breeding food restriction would signal unpredictable food availability, we hypothesized that the optimization of parental investment would be more effective in the experimental rather than in the control mice. In comparison to the controls, the experimental mice produced larger litters and had a more selective (size-dependent) offspring mortality and thus lower litter reduction (the proportion of offspring deaths). Selective litter reduction helped the experimental mothers to maintain their own optimum condition, thereby improving the condition and, indirectly, the survival of their remaining offspring. Hence, pre-breeding resource limitations may have enabled the mice to optimize their inclusive fitness. On the other hand, in the control females, the absence of environmental cues indicating a risky environment led to "maternal optimism" (overemphasizing good conditions at the time of breeding), which resulted in the production of litters of super-optimal size and consequently higher reproductive costs during lactation, including higher offspring mortality. Our study therefore provides the first evidence that pre-breeding food restriction promotes the optimization of parental investment, including offspring number and developmental success.

  20. Optimizing Equivalence-Based Instruction: Effects of Training Protocols on Equivalence Class Formation

    ERIC Educational Resources Information Center

    Fienup, Daniel M.; Wright, Nicole A.; Fields, Lanny

    2015-01-01

    Two experiments evaluated the effects of the simple-to-complex and simultaneous training protocols on the formation of academically relevant equivalence classes. The simple-to-complex protocol intersperses derived relations probes with training baseline relations. The simultaneous protocol conducts all training trials and test trials in separate…

  1. Offspring fitness and individual optimization of clutch size

    PubMed Central

    Both, C.; Tinbergen, J. M.; Noordwijk, A. J. van

    1998-01-01

    Within-year variation in clutch size has been claimed to be an adaptation to variation in the individual capacity to raise offspring. We tested this hypothesis by manipulating brood size to one common size, and predicted that if clutch size is individually optimized, then birds with originally large clutches have a higher fitness than birds with originally small clutches. No evidence was found that fitness was related to the original clutch size, and in this population clutch size is thus not related to the parental capacity to raise offspring. However, offspring from larger original clutches recruited better than their nest mates that came from smaller original clutches. This suggests that early maternal or genetic variation in viability is related to clutch size.

  2. Dylan Cutler | NREL

    Science.gov Websites

    Focuses on integration and optimization of distributed energy resources, specifically cost-optimal sizing and dispatch of distributed energy resources, and on integration of building and utility control systems; part of a Campus team focusing on NREL's own control system integration and energy informatics.

  3. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm³, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm³, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm³ and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, outperforming the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates the use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
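
    A minimal sketch of this kind of parameter search using scikit-optimize's gp_minimize; the objective body is a placeholder for the study's cross-validated pipeline, and the bounds are illustrative assumptions:

      from skopt import gp_minimize
      from skopt.space import Real

      # Placeholder objective: re-run the pre-processing + SVM pipeline with
      # the candidate voxel size (mm) and smoothing kernel (mm FWHM), and
      # return the quantity to minimize (e.g. cross-validated MAE in years).
      # The dummy surface below stands in for that expensive evaluation.
      def objective(params):
          voxel_size, smoothing_fwhm = params
          return abs(voxel_size - 3.7) + abs(smoothing_fwhm - 3.7)

      result = gp_minimize(
          objective,
          dimensions=[Real(1.0, 12.0, name="voxel_size"),
                      Real(0.0, 8.0, name="smoothing_fwhm")],
          n_calls=30,          # sequential evaluations of the pipeline
          random_state=0,
      )
      print("best parameters:", result.x, "best score:", result.fun)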

  4. Optimal exploitation strategies for an animal population in a stochastic serially correlated environment

    USGS Publications Warehouse

    Anderson, D.R.

    1974-01-01

    Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses, because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t, unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest or harvest rate, or designed to maintain a constant breeding population size, is inefficient.
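
    The decision framework described can be sketched as value iteration over discretized population states; the growth dynamics, noise levels, and reward below are toy assumptions, not the mallard model of the paper.

      import numpy as np

      # Toy stochastic dynamic program for a harvest policy: each year we
      # pick a harvest fraction, collect it as reward, and the survivors
      # reproduce under random environmental noise (Beverton-Holt form).
      states = np.linspace(0.1, 2.0, 40)          # population (arbitrary units)
      harvests = np.linspace(0.0, 0.5, 11)        # candidate harvest fractions
      noise = [0.8, 1.0, 1.2]                     # environment multipliers
      r, beta = 1.6, 0.95                         # growth rate, discount factor

      V = np.zeros(len(states))
      for _ in range(300):                        # value iteration
          V_new = np.empty_like(V)
          for i, n in enumerate(states):
              best = -np.inf
              for h in harvests:
                  escape = n * (1.0 - h)          # population left after harvest
                  nxt = [min(2.0, r * escape * z / (1.0 + escape)) for z in noise]
                  cont = np.mean([np.interp(x, states, V) for x in nxt])
                  best = max(best, n * h + beta * cont)
              V_new[i] = best
          V = V_new
      print("value at n = 1.0:", round(float(np.interp(1.0, states, V)), 2))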

  5. Optimal joint management of a coastal aquifer and a substitute resource

    NASA Astrophysics Data System (ADS)

    Moreaux, M.; Reynaud, A.

    2004-06-01

    This article characterizes the optimal joint management of a coastal aquifer and a costly water substitute. For this purpose we use a mathematical representation of the aquifer that incorporates the displacement of the interface between the seawater and the freshwater of the aquifer. We identify the spatial cost externalities created by users on each other and we show that the optimal water supply depends on the location of users. Users located in the coastal zone exclusively use the costly substitute. Those located in the more upstream area are supplied from the aquifer; at the optimum, their withdrawal must take into account the cost externalities they impose on users located downstream. Lastly, users located in a median zone use the aquifer with a surface transportation cost. We show that the optimum can be implemented in a decentralized economy through a very simple Pigouvian tax. Finally, the optimal and decentralized extraction policies are simulated on a very simple example.

  6. Synthesis, structure characterization and catalytic activity of nickel tungstate nanoparticles

    NASA Astrophysics Data System (ADS)

    Pourmortazavi, Seied Mahdi; Rahimi-Nasrabadi, Mehdi; Khalilian-Shalamzari, Morteza; Zahedi, Mir Mahdi; Hajimirsadeghi, Seiedeh Somayyeh; Omrani, Ismail

    2012-12-01

    Taguchi robust design was applied to optimize the experimental parameters for a controllable, simple and fast synthesis of nickel tungstate nanoparticles. NiWO4 nanoparticles were synthesized by a precipitation reaction involving addition of a nickel ion solution to an aqueous tungstate reagent, followed by formation of nickel tungstate nuclei, which are insoluble in aqueous media. The effects of various parameters, such as nickel and tungstate concentrations, flow rate of reagent addition, and reactor temperature, on the diameter of the synthesized nickel tungstate nanoparticles were investigated experimentally with the aid of an orthogonal array design. The analysis of variance (ANOVA) results showed that the particle size of nickel tungstate can be effectively tuned by controlling the significant variables, namely nickel and tungstate concentrations and flow rate, while the reactor temperature has no considerable effect on the size of the NiWO4 particles. The ANOVA results suggested the optimum conditions for synthesis of nickel tungstate nanoparticles via this technique. Under the optimum conditions, NiWO4 nanoparticles were prepared and their structure and chemical composition were characterized by means of EDAX, XRD, SEM, FT-IR spectroscopy, UV-vis spectroscopy, and photoluminescence. Finally, the catalytic activity of the nanoparticles in a cycloaddition reaction was examined.

  7. A micro-reactor for preparing uniform molecularly imprinted polymer beads.

    PubMed

    Zourob, Mohammed; Mohr, Stephan; Mayes, Andrew G; Macaskill, Alexandra; Pérez-Moral, Natalia; Fielden, Peter R; Goddard, Nicholas J

    2006-02-01

    In this study, uniform spherical molecularly imprinted polymer beads were prepared via controlled suspension polymerization in a spiral-shaped microchannel using mineral oil and perfluorocarbon liquid as continuous phases. Monodisperse droplets containing the monomers, template, initiator, and porogenic solvent were introduced into the microchannel, and particles of uniform size were produced by subsequent UV polymerization, quickly and without wasting polymer materials. The droplet/particle size was varied by changing the flow conditions in the microfluidic device. The diameter of the resulting products typically had a coefficient of variation (CV) below 2%. The specific binding sites that were created during the imprinting process were analysed via radioligand binding analysis. The molecularly imprinted microspheres produced in the liquid perfluorocarbon continuous phase had a higher binding capacity compared with the particles produced in the mineral oil continuous phase, though it should be noted that the aim of this study was not to optimize or maximize imprinting performance, but rather to demonstrate broad applicability and compatibility with known MIP production methods. The successful imprinting against a model compound using two very different continuous phases (one requiring a surfactant to stabilize the droplets, the other not) demonstrates the generality of this current simple approach.

  8. Enabling high-rate electrochemical flow capacitors based on mesoporous carbon microspheres suspension electrodes

    NASA Astrophysics Data System (ADS)

    Tian, Meng; Sun, Yueqing; Zhang, Chuanfang (John); Wang, Jitong; Qiao, Wenming; Ling, Licheng; Long, Donghui

    2017-10-01

    The electrochemical flow capacitor (EFC) is a promising technology for grid energy storage, which combines the fast charging/discharging capability of supercapacitors with the scalable energy capacity of flow batteries. In this study, we report a high-power-density EFC using mesoporous carbon microspheres (MCMs) as suspension electrodes. By using a simple yet effective spray-drying technique, monodispersed MCMs with an average particle size of 5 μm, a high BET surface area of 1150-1267 m² g⁻¹, a large pore volume of 2-4 cm³ g⁻¹ and a controllable mesopore size of 7-30 nm have been successfully prepared. The resultant MCM suspension electrode shows excellent stability, a considerable capacitance of 100 F g⁻¹ and good cycling ability (86% of initial capacitance after 10000 cycles). Notably, the suspension electrode exhibits excellent rate performance with 75% capacitance retention from 2 to 100 mV s⁻¹, significantly higher than that of microporous carbon electrodes (20-30%), due to the developed mesoporous channels facilitating rapid ion diffusion. In addition, the electrochemical responses of both the negative and positive suspension electrodes are studied, based on which an optimal capacitance matching between them is suggested for a large-scale EFC unit.

  9. Paget's disease of the vulva: a clinicopathologic institutional review.

    PubMed

    Mendivil, Alberto A; Abaid, Lisa; Epstein, Howard D; Rettenmaier, Mark A; Brown, John V; Micha, John P; Wabe, Marie A; Goldstein, Bram H

    2012-12-01

    The aim of this study was to assess the clinicopathologic characteristics of patients with Paget's disease of the vulva who were treated by our gynecologic oncology service between 1985 and 2010. Vulvar Paget's disease patient demographics, pathologic diagnosis, treatment and follow-up data were reviewed over a 25-year period. The vulvar Paget's disease patients were primarily (62.5%) treated with a partial simple vulvectomy. Three patients had a history of malignancy, although none of them was intercurrent. Eleven patients had microscopically positive margins, 5 of whom developed progressive disease. Conversely, 5 patients had negative margins, of whom 4 had recurrent disease. There was a significant relationship between the presence of invasive disease and patient progression-free interval (PFI) (p = 0.007), but margin status and lesion size did not correlate with PFI (p > 0.05). Median patient PFI and follow-up were 30 and 53 months, respectively. We found a significant relationship between the presence of invasive disease and patient PFI in vulvar Paget's disease, although the presence of microscopically positive margins and lesion size were not prognostic indicators. In patients with high risk factors, prolonged surveillance should be considered an essential component of optimal patient management.

  10. Determination of trigonelline, nicotinic acid, and caffeine in Yunnan Arabica coffee by microwave-assisted extraction and HPLC with two columns in series.

    PubMed

    Liu, Hongcheng; Shao, Jinliang; Li, Qiwan; Li, Yangang; Yan, Hong Mei; He, Lizhong

    2012-01-01

    A simple, rapid method was developed for simultaneous extraction of trigonelline, nicotinic acid, and caffeine from coffee, and separation by two chromatographic columns in series. The trigonelline, nicotinic acid, and caffeine were extracted with microwave-assisted extraction (MAE). The optimal conditions selected were 3 min, 200 psi, and 120 °C. The chromatographic separation was performed with two columns in series, polyaromatic hydrocarbon C18 (250 × 4.6 mm id, 5 μm particle size) and Bondapak NH2 (300 × 3.9 mm id, 5 μm particle size). Isocratic elution was with a 0.02 M phosphoric acid-methanol (70 + 30, v/v) mobile phase at a flow rate of 0.8 mL/min. Good recoveries and RSD values were found for all analytes in the matrix. The LOD of the three compounds was 0.02 mg/L, and the LOQ was 0.005% in the matrix. The concentrations of trigonelline, nicotinic acid, and caffeine in instant coffee, roasted coffee, and raw coffee (Yunnan Arabica coffee) were assessed by MAE and hot water extraction; the correlation coefficients between the concentrations of the three compounds obtained were close to 1.

  11. A challenge for theranostics: is the optimal particle for therapy also optimal for diagnostics?

    NASA Astrophysics Data System (ADS)

    Dreifuss, Tamar; Betzer, Oshra; Shilo, Malka; Popovtzer, Aron; Motiei, Menachem; Popovtzer, Rachela

    2015-09-01

    Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions; is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study, by investigating the effect of the gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, thus leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03119b

  12. A diffusion-based approach to stochastic individual growth and energy budget, with consequences to life-history optimization and population dynamics.

    PubMed

    Filin, I

    2009-06-01

    Using diffusion processes, I model stochastic individual growth, given exogenous hazards and starvation risk. By maximizing survival to final size, optimal life histories (e.g. switching size for habitat/dietary shift) are determined by two ratios: mean growth rate over growth variance (diffusion coefficient) and mortality rate over mean growth rate; all are size dependent. For example, switching size decreases with either ratio, if both are positive. I provide examples and compare with previous work on risk-sensitive foraging and the energy-predation trade-off. I then decompose individual size into reversibly and irreversibly growing components, e.g. reserves and structure. I provide a general expression for optimal structural growth, when reserves grow stochastically. I conclude that increased growth variance of reserves delays structural growth (raises threshold size for its commencement) but may eventually lead to larger structures. The effect depends on whether the structural trait is related to foraging or defence. Implications for population dynamics are discussed.
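
    In standard diffusion terms (an assumed formulation consistent with the abstract, not quoted from it), the probability S(x) of reaching the final size before dying satisfies a backward equation whose coefficients reduce to exactly the two ratios named above:

      \[
      \tfrac{1}{2}\,\sigma^{2}(x)\,S''(x) + \mu(x)\,S'(x) - m(x)\,S(x) = 0 ,
      \]
      % dividing through by \sigma^{2}(x)/2 rewrites the coefficients in
      % terms of the two ratios \mu/\sigma^{2} and m/\mu:
      \[
      S''(x) + 2\,\frac{\mu(x)}{\sigma^{2}(x)}\left( S'(x) - \frac{m(x)}{\mu(x)}\,S(x) \right) = 0 .
      \]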

  13. Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms

    DTIC Science & Technology

    2017-09-01

    Naval Postgraduate School, Monterey, California: thesis, "Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms" (approved 09-22-2017). Applications range from simple search and rescue acts to prosecuting aerial/surface/submersible targets on mission. This research looks at varying the known discrete and ...

  14. Solvent extraction employing a static micromixer: a simple, robust and versatile technology for the microencapsulation of proteins.

    PubMed

    Freitas, S; Walz, A; Merkle, H P; Gander, B

    2003-01-01

    The potential of a static micromixer for the production of protein-loaded biodegradable polymeric microspheres by a modified solvent extraction process was examined. The mixer consists of an array of microchannels and features a simple set-up, occupies very little space, lacks moving parts and offers simple control of the microsphere size. Scale-up from lab bench to industrial production is easily feasible through parallel installation of a sufficient number of micromixers ('number-up'). Poly(lactic-co-glycolic acid) microspheres loaded with a model protein, bovine serum albumin (BSA), were prepared. The influence of various process and formulation parameters on the characteristics of the microspheres was examined, with special focus on particle size distribution. Microspheres with monomodal size distributions having mean diameters of 5-30 μm were produced with excellent reproducibility. Particle size distributions were largely unaffected by polymer solution concentration, polymer type and nominal BSA load, but depended on the polymer solvent. Moreover, particle mean diameters could be varied over a considerable range by modulating the flow rates of the mixed fluids. BSA encapsulation efficiencies were mostly in the region of 75-85% and product yields ranged from 90-100%. Because of its simple set-up and its suitability for continuous production, static micromixing is suggested for the automated and aseptic production of protein-loaded microspheres.

  15. Universal GFR determination based on two time points during plasma iohexol disappearance.

    PubMed

    Ng, Derek K S; Schwartz, George J; Jacobson, Lisa P; Palella, Frank J; Margolick, Joseph B; Warady, Bradley A; Furth, Susan L; Muñoz, Alvaro

    2011-08-01

    An optimal measurement of glomerular filtration rate (GFR) should minimize the number of blood draws, and reduce procedural invasiveness and the burden to study personnel and cost, without sacrificing accuracy. Equations have been proposed to calculate GFR from the slow compartment separately for adults and children. To develop a universal equation, we used 1347 GFR measurements from two diverse groups consisting of 527 men in the Multicenter AIDS Cohort Study and 514 children in the Chronic Kidney Disease in Children cohort. Both studies used nearly identical two-compartment (fast and slow) protocols to measure GFR. To estimate the fast component from markers of body size and of the slow component, we used standard linear regression methods with the log-transformed fast area as the dependent variable. The fast area could be accurately estimated from body surface area by a simple parameter (6.4/body surface area) with no residual dependence on the slow area or other markers of body size. Our equation measures only the slow iohexol plasma disappearance curve with as few as two time points and was normalized to 1.73 m² body surface area. It is of the form: GFR = slowGFR/[1 + 0.12(slowGFR/100)]. In a random sample utilizing a third of the patients for validation, there was excellent agreement between the calculated and measured GFR, with root mean square errors of 4.6 and 1.5 ml/min per 1.73 m² for adults and children, respectively. Thus, our proposed simple equation, developed in a combined patient group with a broad range of GFRs, may be applied universally and is independent of the injected amount of iohexol.
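
    A minimal sketch of the two-point calculation implied by the abstract (an illustrative implementation with hypothetical numbers, not the authors' validated code): fit the slow exponential through the two samples, form the slow-compartment GFR from dose over area, normalize to 1.73 m² body surface area, and apply the stated correction.

      import numpy as np

      def gfr_two_point(c1, c2, t1, t2, dose, bsa):
          """c1, c2: plasma iohexol concentrations (mg/mL) at times t1, t2
          (minutes, on the slow part of the disappearance curve); dose in mg;
          bsa in m^2. Returns GFR in mL/min per 1.73 m^2."""
          lam = np.log(c1 / c2) / (t2 - t1)        # slow rate constant (1/min)
          c0 = c1 * np.exp(lam * t1)               # extrapolated intercept
          slow_area = c0 / lam                     # area under slow exponential
          slow_gfr = dose / slow_area              # slow-compartment GFR (mL/min)
          slow_gfr *= 1.73 / bsa                   # normalize to 1.73 m^2 BSA
          # Correction from the abstract: GFR = slowGFR/[1 + 0.12(slowGFR/100)]
          return slow_gfr / (1.0 + 0.12 * (slow_gfr / 100.0))

      # Hypothetical numbers: 3235 mg iohexol, samples at 120 and 240 minutes.
      print(round(gfr_two_point(0.060, 0.035, 120.0, 240.0, 3235.0, 1.9), 1))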

  16. Generating compact classifier systems using a simple artificial immune system.

    PubMed

    Leung, Kevin; Cheong, France; Cheong, Christopher

    2007-10-01

    Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system called simple AIS is described. It is different from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and in addition, no population control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers.

  17. Small fan-in is beautiful

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiu, V.; Makaruk, H.E.

    1997-09-01

    The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions; and (2) another one for implementing certain subclasses of Boolean functions. Because VLSI implementations do not cope well with highly interconnected nets -- the area of a chip grows with the cube of the fan-in -- this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors will extend a result from Horne and Hush valid for fan-in Δ = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-ins for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e., minimizing AT²) solutions, while there are similar small constants relating to the capacity of processing information.

  18. Design, implementation and evaluation of a practical pseudoknot folding algorithm based on thermodynamics

    PubMed Central

    Reeder, Jens; Giegerich, Robert

    2004-01-01

    Background The general problem of RNA secondary structure prediction under the widely used thermodynamic model is known to be NP-complete when the structures considered include arbitrary pseudoknots. For restricted classes of pseudoknots, several polynomial time algorithms have been designed, where the O(n⁶)-time and O(n⁴)-space algorithm by Rivas and Eddy is currently the best available program. Results We introduce the class of canonical simple recursive pseudoknots and present an algorithm that requires O(n⁴) time and O(n²) space to predict the energetically optimal structure of an RNA sequence, possibly containing such pseudoknots. Evaluation against a large collection of known pseudoknotted structures shows the adequacy of the canonization approach and our algorithm. Conclusions RNA pseudoknots of medium size can now be predicted reliably as well as efficiently by the new algorithm. PMID:15294028

  19. [Value the correction of corneal astigmatism in cataract surgery].

    PubMed

    Wang, J; Cao, Y X

    2018-05-11

    The aim of modern micro-incision phacoemulsification combined with foldable intraocular lens implantation and femtosecond laser-assisted cataract surgery is evolving from a simple pursuit of recuperation to a refractive procedure, which involves the correction of ametropia according to preoperative and postoperative refractive conditions, especially corneal astigmatism, in order to achieve optimized postoperative uncorrected vision over the full range of distances. Nowadays, preoperative corneal astigmatism, surgery-induced astigmatism and residual postoperative astigmatism, all of which significantly affect postoperative visual acuity, do not receive the attention they deserve. There are many effective ways to reduce corneal astigmatism after cataract surgery, including selecting an appropriate size and location for the clear corneal incision, employing astigmatic keratotomy, and implanting toric intraocular lenses; these need to be appropriately applied and popularized. At the same time, surgical indications, predictability and safety should also be taken into account. (Chin J Ophthalmol, 2018, 54: 321-323).

  20. CdS nanoparticles-enhanced chemiluminescence and determination of baicalin in pharmaceutical preparations.

    PubMed

    Chen, Xiaolan; Tan, Xinmei; Wang, Jianxiu

    2013-01-01

    CdS nanoparticles (CdS NPs) of different sizes were synthesized by the citrate reduction method. It was found that CdS NPs could enhance the chemiluminescence (CL) of the luminol-potassium ferricyanide system and that baicalin could inhibit the CdS NPs-enhanced luminol-potassium ferricyanide CL signals in alkaline solution. Based on this inhibition, a flow-injection CL method was established for determination of baicalin in pharmaceutical preparations and human urine samples. Under optimized conditions, the linear range for determination of baicalin was 5.0 × 10⁻⁶ to 1.0 × 10⁻³ g/L. The detection limit at a signal-to-noise ratio of 3 was 1.7 × 10⁻⁶ g/L. CL spectra, UV-visible spectra and transmission electron microscopy (TEM) were used to investigate the CL mechanism. The method described is simple, selective and obviates the need for extensive sample pretreatment. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Heat exchangers made of polymer-bonded La(Fe,Si)13

    NASA Astrophysics Data System (ADS)

    Skokov, K. P.; Karpenkov, D. Yu.; Kuz'min, M. D.; Radulov, I. A.; Gottschall, T.; Kaeswurm, B.; Fries, M.; Gutfleisch, O.

    2014-05-01

    We report on the magnetocaloric properties of polymer-bonded La(Fe,Si)13 heat exchangers with respect to the grain size of the powder used and the pressure applied for compaction of the plates. Quite remarkably, it was found that the values of the adiabatic temperature change of the polymer-bonded plates are 10% higher than in the initial bulk material. A critical value of the pressure applied during compaction was found: exceeding it leads to a drastic reduction of the magnetocaloric effect due to cracking and comminution of the initial 50-100 μm grains down to 1-10 μm fragments. Compacting the LaFe11.6Si1.4 powder with 5 wt. % of silver epoxy under an optimal pressure of 0.1 GPa resulted in the production of 0.6 mm-thick plates. These plates were subsequently stacked and glued together into a simple porous heat exchanger with straight 0.6 mm-wide channels.

  2. Fast particles in a steady-state compact FNS and compact ST reactor

    NASA Astrophysics Data System (ADS)

    Gryaznevich, M. P.; Nicolai, A.; Buxton, P.

    2014-10-01

    This paper presents results of studies of fast particles (ions and alpha particles) in a steady-state compact fusion neutron source (CFNS) and a compact spherical tokamak (ST) reactor, using Monte-Carlo and Fokker-Planck codes. Full-orbit simulations of fast-particle physics indicate that a compact high-field ST can be optimized for energy production by reducing the plasma current necessary for alpha containment below predictions made using simple analytic expressions or the guiding-centre approximation in a numerical code. Alpha particle losses may result in significant heating and erosion of the first wall, so such losses have been calculated for an ST pilot plant, and the dependence of the total and peak wall loads on the plasma current has been studied. The problem of dilution has also been investigated, and results for compact and large devices are compared.

  3. Chaos and Forecasting - Proceedings of the Royal Society Discussion Meeting

    NASA Astrophysics Data System (ADS)

    Tong, Howell

    1995-04-01

    The Table of Contents for the full book PDF is as follows: * Preface * Orthogonal Projection, Embedding Dimension and Sample Size in Chaotic Time Series from a Statistical Perspective * A Theory of Correlation Dimension for Stationary Time Series * On Prediction and Chaos in Stochastic Systems * Locally Optimized Prediction of Nonlinear Systems: Stochastic and Deterministic * A Poisson Distribution for the BDS Test Statistic for Independence in a Time Series * Chaos and Nonlinear Forecastability in Economics and Finance * Paradigm Change in Prediction * Predicting Nonuniform Chaotic Attractors in an Enzyme Reaction * Chaos in Geophysical Fluids * Chaotic Modulation of the Solar Cycle * Fractal Nature in Earthquake Phenomena and its Simple Models * Singular Vectors and the Predictability of Weather and Climate * Prediction as a Criterion for Classifying Natural Time Series * Measuring and Characterising Spatial Patterns, Dynamics and Chaos in Spatially-Extended Dynamical Systems and Ecologies * Non-Linear Forecasting and Chaos in Ecology and Epidemiology: Measles as a Case Study

  4. Flexible energy harvesting from hard piezoelectric beams

    NASA Astrophysics Data System (ADS)

    Delnavaz, Aidin; Voix, Jérémie

    2016-11-01

    This paper presents the design, multiphysics finite element modeling and experimental validation of a new miniaturized PZT generator that integrates a bulk piezoelectric ceramic onto a flexible platform for energy harvesting from the human body's pressing force. In spite of its flexibility, the mechanical structure of the proposed device is simple to fabricate and efficient for energy conversion. The finite element model involves both the mechanical and piezoelectric parts of the device, coupled with the electrical circuit model. The energy harvester prototype was fabricated and tested under a low-frequency periodic pressing force for 10 seconds. The experimental results show that several nanojoules of electrical energy are stored in a capacitor, which is quite significant given the size of the device. The finite element model is validated by the good agreement observed between experimental and simulation results. The validated model could be used for optimizing the device for energy harvesting from earcanal deformations.

  5. Aqueous synthesis of near-infrared highly fluorescent platinum nanoclusters

    NASA Astrophysics Data System (ADS)

    García Fernández, Jenifer; Trapiella-Alfonso, Laura; Costa-Fernández, José M.; Pereiro, Rosario; Sanz-Medel, Alfredo

    2015-05-01

    A one-step synthesis of near-infrared fluorescent platinum nanoclusters (PtNCs) in aqueous medium is described. The proposed optimized procedure for PtNC synthesis is rather simple and fast, and is based on direct metal reduction with NaBH4. Bidentate thiol ligands (lipoic acid) were selected as nanocluster stabilizers in aqueous media. The structural characterization revealed attractive features of the PtNCs, including small size, high water solubility, near-infrared luminescence centered at 680 nm, long-term stability and the highest quantum yield in water reported so far (47%) for PtNCs. Moreover, their stability at different pH values and at an ionic strength of 0.2 M NaCl was studied, and no significant changes in fluorescence emission were detected. In brief, they offer a new type of fluorescent noble metal nanoprobe with great potential to be applied in several fields, including biolabeling and imaging experiments.

  6. The generation of myricetin-nicotinamide nanococrystals by top down and bottom up technologies

    NASA Astrophysics Data System (ADS)

    Liu, Mingyu; Hong, Chao; Li, Guowen; Ma, Ping; Xie, Yan

    2016-09-01

    Myricetin-nicotinamide (MYR-NIC) nanococrystal preparation methods were developed and optimized using both top down and bottom up approaches. The grinding (top down) method successfully achieved nanococrystals, but some micrometer-range particles and aggregation remained. The key consideration in the grinding technology was to control the milling time so as to balance particle size against size distribution. In contrast, a modified bottom up approach, based on a solution method in conjunction with sonochemistry, resulted in uniform MYR-NIC nanococrystals, as confirmed by powder x-ray diffraction, scanning electron microscopy, dynamic light scattering, and differential scanning calorimetry; the particle dissolution rate and dissolved amount were significantly greater than those of the MYR-NIC cocrystal. Notably, this was a simple method without the addition of any non-solvent. We anticipate our findings will provide some guidance for future nanococrystal preparation as well as its application in both the chemical and pharmaceutical areas.

  7. Anomalous spectral correlations between SERS enhancement and far-field optical responses in roughened Au mesoparticles

    NASA Astrophysics Data System (ADS)

    Huang, Yu; Chen, Yun; Gao, Weixiang; Yang, Zhengxuan; Wang, Lingling

    2018-04-01

    Depending on the experimental conditions and plasmonic systems, the correlations between near-field surface-enhanced Raman scattering (SERS) behaviors and far-field optical responses have sometimes been accepted directly, sometimes argued over, and sometimes explored. In this work, we have numerically demonstrated anomalous spectral correlations between the near- and far-field properties of roughened Au mesoparticles. As a counterexample, we show that the dipole extinction peak of the mesoparticles may mislead the search for favorable SERS performance. Simple Rayleigh scattering spectra can also be misleading in the presence of dark modes. For roughened mesoparticles of the moderate size considered here, the huge near-field enhancement is a synergistic result of the overall dark quadrupole mode and the substructural bonding dipole coupling. The conclusions demonstrated here should be of general interest to the field of plasmonics, especially for the optimization of single-particle SERS substrates.

  8. Integrated Lloyd's mirror on planar waveguide facet as a spectrometer.

    PubMed

    Morand, Alain; Benech, Pierre; Gri, Martine

    2017-12-10

    A low-cost and simple Fourier transform spectrometer based on the Lloyd's mirror configuration is proposed in order to obtain a very stable interferogram. A planar waveguide coupled to a fiber injection is used to spatially disperse the optical beam. A second beam, superposed on the first, is obtained by total reflection of the incident beam on a vertical glass face integrated into the chip by dicing with a specific circular precision saw. The interferogram at the waveguide output is imaged on a near-infrared camera with an objective lens. The contrast and the fringe period thus depend on the type and position of the fiber and can be optimized to match the pixel size and length of the camera. A spectral resolution close to λ/Δλ=80 is reached with a camera with 320 pixels of 25 μm width in a wavelength range from the O to L bands.
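
    The quoted resolving power follows from Fourier-transform basics: the spectral resolution is set by the maximum optical path difference (OPD) recorded across the camera, so more pixels and a wider sensor buy finer resolution. Below is a minimal sketch of the recovery step for an idealized two-beam interferogram; the beam half-angle is an assumed value (the record does not state one), and only the pixel count and pitch come from the abstract.

```python
import numpy as np

# Camera geometry from the abstract; the half-angle between the two beams
# is an assumption, chosen so the fringes stay below the Nyquist limit.
n_px, pitch = 320, 25e-6                 # pixels, pixel width (m)
theta = 0.01                             # assumed beam half-angle (rad)
lam = 1.55e-6                            # test wavelength in the L band (m)

x = (np.arange(n_px) - n_px / 2) * pitch # position across the camera (m)
opd = 2 * np.sin(theta) * x              # optical path difference (m)
interferogram = 1 + np.cos(2 * np.pi * opd / lam)  # ideal two-beam fringes

# An FFT of the (mean-subtracted) interferogram recovers the spectrum;
# the resolution scales as 1 / OPD span, so a longer camera resolves more.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
sigma = np.fft.rfftfreq(n_px, d=2 * np.sin(theta) * pitch)  # wavenumber (1/m)
print(1 / sigma[np.argmax(spectrum)])    # ~1.55e-6 m recovered
```

    With these assumed numbers the OPD span is about 0.16 mm, giving a resolving power of order 100 at 1.55 μm, consistent in magnitude with the λ/Δλ≈80 reported above.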

  9. Combined Brayton-JT cycles with refrigerants for natural gas liquefaction

    NASA Astrophysics Data System (ADS)

    Chang, Ho-Myung; Park, Jae Hoon; Lee, Sanggyu; Choe, Kun Hyung

    2012-06-01

    Thermodynamic cycles for natural gas liquefaction with single-component refrigerants are investigated under a governmental project in Korea, aiming at new processes to meet the requirements on high efficiency, large capacity, and simple equipment. Based upon the optimization theory recently published by the present authors, it is proposed to replace the methane-JT cycle in conventional cascade process with a nitrogen-Brayton cycle. A variety of systems to combine nitrogen-Brayton, ethane-JT and propane-JT cycles are simulated with Aspen HYSYS and quantitatively compared in terms of thermodynamic efficiency, flow rate of refrigerants, and estimated size of heat exchangers. A specific Brayton-JT cycle is suggested with detailed thermodynamic data for further process development. The suggested cycle is expected to be more efficient and simpler than the existing cascade process, while still taking advantage of easy and robust operation with single-component refrigerants.

  10. Evaluation method of membrane performance in membrane distillation process for seawater desalination.

    PubMed

    Chung, Seungjoon; Seo, Chang Duck; Choi, Jae-Hoon; Chung, Jinwook

    2014-01-01

    Membrane distillation (MD) is an emerging desalination technology and an energy-saving alternative to conventional distillation and reverse osmosis. The selection of an appropriate membrane is a prerequisite for the design of an optimized MD process. We proposed a simple approximation method to evaluate the performance of membranes for the MD process. Three hollow fibre-type commercial membranes with different thicknesses and pore sizes were tested. Experimental results showed that one membrane was advantageous due to the highest flux, whereas another was advantageous due to the lowest feed temperature drop. Regression analyses and multi-stage calculations were used to account for the trade-off effects of flux and feed temperature drop. The most desirable membrane was selected from the tested membranes in terms of the mean flux in a multi-stage process. This method would be useful for selecting membranes without complicated simulation techniques.

  11. Solid-State Explosive Reaction for Nanoporous Bulk Thermoelectric Materials.

    PubMed

    Zhao, Kunpeng; Duan, Haozhi; Raghavendra, Nunna; Qiu, Pengfei; Zeng, Yi; Zhang, Wenqing; Yang, Jihui; Shi, Xun; Chen, Lidong

    2017-11-01

    High-performance thermoelectric materials require ultralow lattice thermal conductivity, typically achieved through either shortening the phonon mean free path or reducing the specific heat. Beyond these two approaches, a unique, simple, yet ultrafast solid-state explosive reaction is proposed to fabricate nanoporous bulk thermoelectric materials with well-controlled pore sizes and distributions to suppress thermal conductivity. By investigating a wide variety of functional materials, general criteria for solid-state explosive reactions are built upon both thermodynamics and kinetics, and then successfully used to tailor materials' microstructures and porosity. A drastic decrease in lattice thermal conductivity, down below the minimum value of the fully densified materials, and an enhancement in the thermoelectric figure of merit are achieved in porous bulk materials. This work demonstrates that controlling a material's porosity is a very effective strategy that is easy to combine with other approaches for optimizing thermoelectric performance. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Concentration of solar radiation by white painted transparent plates.

    PubMed

    Smestad, G; Hamill, P

    1982-04-01

    A simple flat-plate solar concentrator is described in this paper. The device is composed of a white painted transparent plate with a photovoltaic cell fixed to an unpainted area on the bottom of the plate. Light scattering off the white material is either lost or directed to the solar cell. Experimental concentrations of up to 1.9 times the incident solar flux have been achieved using white clays. These values are close to those predicted by theory for the experimental parameters investigated. A theory of the device operation is developed, and suggestions based on it are made for optimizing the concentrator system. For reasonable choices of cell and plate size, and reflectivities of 80%, concentrations of over 2x are possible. The concentrator has the advantage over other systems that the concentration is independent of incidence angle, and the concentrator is easy to produce. The device needs no tracking system and will concentrate on a cloudy day.

  13. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
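
    To make the structure of the method concrete, here is a compact sketch of the two nested loops the abstract describes: an outer augmented-Lagrangian update and an inner bound-constrained pattern search whose stopping rule is the pattern size itself, so no derivatives are ever needed. This is an illustrative reading of the algorithm, not the authors' code; the penalty and tolerance schedules are common textbook choices.

```python
import numpy as np

def pattern_search(f, x, lo, hi, step, tol):
    """Bound-constrained coordinate pattern search. The pattern size is
    halved on failure and the search stops when it falls below tol --
    the derivative-free stopping rule discussed above."""
    n = len(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = x.copy()
                y[i] = np.clip(y[i] + s, lo[i], hi[i])
                if f(y) < f(x):
                    x, improved = y, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5   # refine the pattern
    return x

def auglag_pattern_search(obj, cons, x, lo, hi, iters=20):
    """Outer augmented-Lagrangian loop for equality constraints cons(x)=0.
    Hypothetical illustration; parameter updates follow common practice."""
    lam = np.zeros(len(cons(x)))
    mu, tol = 10.0, 1e-1
    for _ in range(iters):
        L = lambda z: obj(z) + lam @ cons(z) + 0.5 * mu * np.sum(cons(z) ** 2)
        x = pattern_search(L, x, lo, hi, step=1.0, tol=tol)
        lam = lam + mu * cons(x)     # multiplier update
        mu *= 2.0                    # penalty update
        tol = max(tol * 0.5, 1e-8)  # tighten the inner stopping rule
    return x
```

    For example, minimizing obj(x) = x[0]**2 + x[1]**2 subject to cons(x) = [x[0] + x[1] - 1] drives the iterates toward (0.5, 0.5) without a single gradient evaluation.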

  14. A simple numerical model for membrane oxygenation of an artificial lung machine

    NASA Astrophysics Data System (ADS)

    Subraveti, Sai Nikhil; Sai, P. S. T.; Viswanathan Pillai, Vinod Kumar; Patnaik, B. S. V.

    2015-11-01

    The optimal design of membrane oxygenators will have far-reaching ramifications for the development of artificial heart-lung systems. In the present CFD study, we simulate the gas exchange between venous blood and the air that passes through the hollow fiber membranes of a benchmark device. The gas exchange between the tube-side fluid and the shell-side venous liquid is modelled by solving the mass and momentum conservation equations. The fiber bundle was modelled as a porous block with a bundle porosity of 0.6, and the resistance offered by the fiber bundle was estimated by the standard Ergun correlation. The present numerical simulations are validated against available benchmark data. The effects of bundle porosity, bundle size, Reynolds number, non-Newtonian constitutive relation, upstream velocity distribution, etc. on the pressure drop and oxygen saturation levels are investigated. To emulate the features of gas transfer past the alveoli, the effect of pulsatility on membrane oxygenation is also investigated.
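
    The porous-block treatment reduces the fiber bundle to a momentum sink given by the Ergun equation. Below is a small sketch of that source term, using the 0.6 porosity from the study but otherwise illustrative values; the fiber diameter and the blood-like viscosity and density are assumptions, not data from the paper.

```python
def ergun_pressure_gradient(u, d_p, eps, mu, rho):
    """Pressure gradient (Pa/m) across a porous bed via the Ergun equation:
    dp/L = 150*mu*(1-eps)^2/(eps^3*d_p^2)*u + 1.75*rho*(1-eps)/(eps^3*d_p)*u^2
    u: superficial velocity (m/s), d_p: fiber/particle diameter (m),
    eps: porosity, mu: dynamic viscosity (Pa*s), rho: density (kg/m^3)."""
    viscous = 150.0 * mu * (1 - eps) ** 2 / (eps ** 3 * d_p ** 2) * u
    inertial = 1.75 * rho * (1 - eps) / (eps ** 3 * d_p) * u ** 2
    return viscous + inertial

# Illustrative numbers only: 300-um fibers, blood-like fluid properties.
print(ergun_pressure_gradient(u=0.05, d_p=300e-6, eps=0.6, mu=3.5e-3, rho=1060.0))
```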

  15. Preparation of NASICON-Type Nanosized Solid Electrolyte Li1.4Al0.4Ti1.6(PO4)3 by Evaporation-Induced Self-Assembly for Lithium-Ion Battery.

    PubMed

    Liu, Xingang; Fu, Ju; Zhang, Chuhong

    2016-12-01

    A simple and practicable evaporation-induced self-assembly (EISA) method is introduced for the first time to prepare the nanosized solid electrolyte Li1.4Al0.4Ti1.6(PO4)3 (LATP) for all-solid-state lithium-ion batteries. A pure Na⁺ super ion conductor (NASICON) phase is confirmed by X-ray diffraction (XRD) analysis, and the primary particle size is brought down to 70 nm by optimizing the evaporation rate of the solvent. Excellent room-temperature bulk and total lithium-ion conductivities of 2.09 × 10⁻³ S cm⁻¹ and 3.63 × 10⁻⁴ S cm⁻¹ are obtained, with an ion-hopping activation energy as low as 0.286 eV.

  16. Multiplex PCR for Rapid Detection of Genes Encoding Class A Carbapenemases

    PubMed Central

    Hong, Sang Sook; Kim, Kyeongmi; Huh, Ji Young; Jung, Bochan; Kang, Myung Seo

    2012-01-01

    In recent years, there have been increasing reports of KPC-producing Klebsiella pneumoniae in Korea. The modified Hodge test can be used as a phenotypic screening test for class A carbapenemase (CAC)-producing clinical isolates; however, it does not distinguish between carbapenemase types. Confirmation of the type of CAC is important to ensure optimal therapy and to prevent transmission. This study applied a novel multiplex PCR assay to detect and differentiate CAC genes in a single reaction. Four primer pairs were designed to amplify fragments encoding 4 CAC families (SME, IMI/NMC-A, KPC, and GES). The multiplex PCR detected all genes tested for the 4 CAC families, which could be differentiated by fragment size according to gene type. This multiplex PCR offers a simple and useful approach for detecting and distinguishing CAC genes in carbapenem-resistant strains that are metallo-β-lactamase nonproducers. PMID:22950072

  17. Multiplex PCR for rapid detection of genes encoding class A carbapenemases.

    PubMed

    Hong, Sang Sook; Kim, Kyeongmi; Huh, Ji Young; Jung, Bochan; Kang, Myung Seo; Hong, Seong Geun

    2012-09-01

    In recent years, there have been increasing reports of KPC-producing Klebsiella pneumoniae in Korea. The modified Hodge test can be used as a phenotypic screening test for class A carbapenemase (CAC)-producing clinical isolates; however, it does not distinguish between carbapenemase types. Confirmation of the type of CAC is important to ensure optimal therapy and to prevent transmission. This study applied a novel multiplex PCR assay to detect and differentiate CAC genes in a single reaction. Four primer pairs were designed to amplify fragments encoding 4 CAC families (SME, IMI/NMC-A, KPC, and GES). The multiplex PCR detected all genes tested for the 4 CAC families, which could be differentiated by fragment size according to gene type. This multiplex PCR offers a simple and useful approach for detecting and distinguishing CAC genes in carbapenem-resistant strains that are metallo-β-lactamase nonproducers.

  18. Focus tunable device actuator based on ionic polymer metal composite

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Wei; Su, Guo-Dung J.

    2015-09-01

    IPMC (ionic polymer metal composite) is a kind of electroactive polymer (EAP) used as an actuator because of its low driving voltage and small size. The actuation mechanism of IPMC is ionic diffusion under an applied voltage gradient. In this paper, complex IPMC fabrication, such as that of Ag-IPMC, is further developed. A comparison of the response time and tip bending displacement of Pt-IPMC and Ag-IPMC is also presented. We use the optimized IPMC as a lens actuator integrated with a curvilinear microlens array, and use a 3D printer to make a simple module and spring stabilization system. We also used the modeling software ANSYS Workbench to confirm the effect of the spring system. Finally, we successfully drive the lens system over a 200 μm stroke under a 2.5 V driving voltage within 1 second, with a resonant frequency of approximately 500 Hz.

  19. Fabrication of Al2O3 coated 2D TiO2 nanoparticle photonic crystal layers by reverse nano-imprint lithography and plasma enhanced atomic layer deposition.

    PubMed

    Kim, Ki-Kang; Ko, Ki-Young; Ahn, Jinho

    2013-10-01

    This paper reports a simple process to enhance the extraction efficiency of photoluminescence (PL) from Eu-doped yttrium oxide (Y2O3:Eu3+) thin-film phosphor (TFP). A two-dimensional (2D) photonic crystal layer (PCL) was fabricated on Y2O3:Eu3+ phosphor films by a reverse nano-imprint method using a TiO2 nanoparticle solution as the nano-imprint resin and a 2D hole-patterned PDMS stamp. Atomically controlled Al2O3 deposition was performed onto this 2D nanoparticle PCL to optimize the photonic crystal pattern size and stabilize the TiO2 nanoparticle column structure. As a result, the light extraction efficiency of the Y2O3:Eu3+ phosphor film was improved by a factor of 2.0 compared to the conventional Y2O3:Eu3+ phosphor film.

  20. Lesion size affects diagnostic performance of IOTA logistic regression models, IOTA simple rules and risk of malignancy index in discriminating between benign and malignant adnexal masses.

    PubMed

    Di Legge, A; Testa, A C; Ameye, L; Van Calster, B; Lissoni, A A; Leone, F P G; Savelli, L; Franchi, D; Czekierdowski, A; Trio, D; Van Holsbeke, C; Ferrazzi, E; Scambia, G; Timmerman, D; Valentin, L

    2012-09-01

    To estimate the ability to discriminate between benign and malignant adnexal masses of different size using: subjective assessment, two International Ovarian Tumor Analysis (IOTA) logistic regression models (LR1 and LR2), the IOTA simple rules and the risk of malignancy index (RMI). We used a multicenter IOTA database of 2445 patients with at least one adnexal mass, i.e. the database previously used to prospectively validate the diagnostic performance of LR1 and LR2. The masses were categorized into three subgroups according to their largest diameter: small tumors (diameter < 4 cm; n = 396), medium-sized tumors (diameter, 4-9.9 cm; n = 1457) and large tumors (diameter ≥ 10 cm; n = 592). Subjective assessment, LR1 and LR2, IOTA simple rules and the RMI were applied to each of the three groups. Sensitivity, specificity, positive and negative likelihood ratio (LR+, LR-), diagnostic odds ratio (DOR) and area under the receiver-operating characteristics curve (AUC) were used to describe diagnostic performance. A moving window technique was applied to estimate the effect of tumor size as a continuous variable on the AUC. The reference standard was the histological diagnosis of the surgically removed adnexal mass. The frequency of invasive malignancy was 10% in small tumors, 19% in medium-sized tumors and 40% in large tumors; 11% of the large tumors were borderline tumors vs 3% and 4%, respectively, of the small and medium-sized tumors. The type of benign histology also differed among the three subgroups. For all methods, sensitivity with regard to malignancy was lowest in small tumors (56-84% vs 67-93% in medium-sized tumors and 74-95% in large tumors) while specificity was lowest in large tumors (60-87% vs 83-95% in medium-sized tumors and 83-96% in small tumors). The DOR and the AUC value were highest in medium-sized tumors and the AUC was largest in tumors with a largest diameter of 7-11 cm. Tumor size affects the performance of subjective assessment, LR1 and LR2, the IOTA simple rules and the RMI in discriminating correctly between benign and malignant adnexal masses. The likely explanation, at least in part, is the difference in histology among tumors of different size. Copyright © 2012 ISUOG. Published by John Wiley & Sons, Ltd.
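
    The moving-window technique mentioned above can be sketched as follows: cases are sorted by largest tumor diameter, and the AUC of a risk score is recomputed inside a fixed-count window that slides along the size axis. The window parameters below are illustrative, not those used in the IOTA analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def moving_window_auc(diameter, score, malignant, width=300, step=50):
    """AUC of a risk score as a smooth function of tumor size.
    Sorts cases (numpy arrays) by largest diameter, then slides a
    fixed-count window (width cases, advanced by step) and computes
    the AUC inside each window."""
    order = np.argsort(diameter)
    d, s, y = diameter[order], score[order], malignant[order]
    centers, aucs = [], []
    for start in range(0, len(d) - width + 1, step):
        sl = slice(start, start + width)
        if 0 < y[sl].sum() < width:          # need both classes present
            centers.append(np.median(d[sl]))
            aucs.append(roc_auc_score(y[sl], s[sl]))
    return np.array(centers), np.array(aucs)
```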

  1. Prediction of Petermann I and II Spot Sizes for Single-mode Dispersion-shifted and Dispersion-flattened Fibers by a Simple Technique

    NASA Astrophysics Data System (ADS)

    Kamila, Kiranmay; Panda, Anup Kumar; Gangopadhyay, Sankar

    2013-09-01

    Employing the series expression for the fundamental modal field of dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers, we present simple but accurate analytical expressions for the Petermann I and II spot sizes of fibers of this kind. Choosing some typical dispersion-shifted trapezoidal and dispersion-flattened graded and step W fibers as examples, we show that our estimates match the exact numerical results excellently. The evaluation of the propagation parameters concerned by our formalism requires very little computation. This accurate but simple formalism will benefit system engineers working in the field of all-optical technology.

  2. A Constrained Maximization Model for inspecting the impact of leaf shape on optimal leaf size and stoma resistance

    NASA Astrophysics Data System (ADS)

    Ding, J.; Johnson, E. A.; Martin, Y. E.

    2017-12-01

    The leaf is the basic production unit of a plant, and water is the plant's most critical resource: its availability controls primary productivity by affecting the leaf carbon budget. To avoid cavitation damage caused by the lowering of vein water potential during evapotranspiration, the leaf must increase its stomatal resistance to reduce the evapotranspiration rate. This comes at the cost of a reduced carbon fixation rate, since increasing stomatal resistance also slows the carbon intake rate. Studies suggest that stomata operate at an optimal resistance that maximizes carbon gain with respect to water. Different plant species have different leaf shapes, a genetically determined trait; furthermore, leaf size on the same plant can vary many-fold in relation to soil moisture, an indicator of water availability. According to metabolic scaling theory, increasing leaf size increases the total xylem resistance of the vein network, which may also constrain the leaf carbon budget. We present a Constrained Maximization Model of the leaf (leaf CMM) that incorporates metabolic theory into the coupling of evapotranspiration and carbon fixation to examine how leaf size, stomatal resistance and maximum net leaf primary productivity change with petiole xylem water potential. The model connects the vein network structure to leaf shape and uses the difference between the petiole xylem water potential and the critical water potential at which cavitation forms in the minor veins as the budget. The CMM shows that both maximum net leaf primary production and optimal leaf size increase with petiole xylem water potential, while optimal stomatal resistance decreases. A narrow leaf has a smaller optimal size, a lower maximum net leaf carbon gain and a higher optimal stomatal resistance than a broad leaf, because with a small width-to-length ratio the total xylem resistance increases faster with leaf size, causing a higher average and marginal cost of xylem water potential with respect to net leaf carbon gain. For the same leaf area, the total xylem resistance of a narrow leaf is higher than that of a broad leaf, so given the same stomatal resistance and petiole water potential, a narrow leaf loses more xylem water potential than a broad one. Consequently, the narrow leaf has a smaller size and a higher stomatal resistance at the optimum.

  3. Unpredictable food supply modifies costs of reproduction and hampers individual optimization.

    PubMed

    Török, János; Hegyi, Gergely; Tóth, László; Könczey, Réka

    2004-11-01

    Investment into the current reproductive attempt is thought to be at the expense of survival and/or future reproduction. Individuals are therefore expected to adjust their decisions to their physiological state and predictable aspects of environmental quality. The main predictions of the individual optimization hypothesis for bird clutch sizes are: (1) an increase in the number of recruits with an increasing number of eggs in natural broods, with no corresponding impairment of parental survival or future reproduction, and (2) a decrease in the fitness of parents in response to both negative and positive brood size manipulation, as a result of a low number of recruits, poor future reproduction of parents, or both. We analysed environmental influences on costs and optimization of reproduction on 6 years of natural and experimentally manipulated broods in a Central European population of the collared flycatcher. Based on dramatic differences in caterpillar availability, we classified breeding seasons as average and rich food years. The categorization was substantiated by the majority of present and future fitness components of adults and offspring. Neither observational nor experimental data supported the individual optimization hypothesis, in contrast to a Scandinavian population of the species. The quality of fledglings deteriorated, and the number of recruits did not increase with natural clutch size. Manipulation revealed significant costs of reproduction to female parents in terms of future reproductive potential. However, the influence of manipulation on recruitment was linear, with no significant polynomial effect. The number of recruits increased with manipulation in rich food years and tended to decrease in average years, so control broods did not recruit more young than manipulated broods in any of the year types. This indicates that females did not optimize their clutch size, and that they generally laid fewer eggs than optimal in rich food years. Mean yearly clutch size did not follow food availability, which suggests that females cannot predict food supply of the brood-rearing period at the beginning of the season. This lack of information on future food conditions seems to prevent them from accurately estimating their optimal clutch size for each season. Our results suggest that individual optimization may not be a general pattern even within a species, and alternative mechanisms are needed to explain clutch size variation.

  4. Ultra-small dye-doped silica nanoparticles via modified sol-gel technique.

    PubMed

    Riccò, R; Nizzero, S; Penna, E; Meneghello, A; Cretaio, E; Enrichi, F

    2018-01-01

    In modern biosensing and imaging, fluorescence-based methods constitute the most widespread approach to achieving optimal detection of analytes, both in solution and at the single-particle level. Despite the huge progress made in recent decades in the development of plasmonic biosensors and label-free sensing techniques, fluorescent molecules remain the most commonly used contrast agents for commercial imaging and detection methods. However, they exhibit low stability, can be difficult to functionalise, and often result in a low signal-to-noise ratio. Thus, embedding fluorescent probes into robust and bio-compatible materials, such as silica nanoparticles, can substantially enhance the detection limit and dramatically increase the sensitivity. In this work, ultra-small fluorescent silica nanoparticles (NPs) for optical biosensing applications were doped with a fluorescent dye, using simple water-based sol-gel approaches based on the classical Stöber procedure. By systematically modulating the reaction parameters, controllable size tuning of particle diameters down to 10 nm was achieved. Particle morphology and optical response were evaluated, showing possible single-molecule behaviour, without employing microemulsion methods to achieve similar results. Graphical abstract: We report a simple, cheap, reliable protocol for the synthesis and systematic tuning of ultra-small (< 10 nm) dye-doped luminescent silica nanoparticles.

  5. Novel design of high voltage pulse source for efficient dielectric barrier discharge generation by using silicon diodes for alternating current.

    PubMed

    Truong, Hoa Thi; Hayashi, Misaki; Uesugi, Yoshihiko; Tanaka, Yasunori; Ishijima, Tatsuo

    2017-06-01

    This work focuses on the design, construction, and configuration optimization of a novel high-voltage pulse power source for large-scale dielectric barrier discharge (DBD) generation. The pulses are generated by exploiting the high-speed switching characteristic of an inexpensive device called the silicon diode for alternating current, together with the self-terminating characteristic of the DBD. The source is powered by a primary low-voltage DC supply, which can flexibly be a commercial DC power supply, a battery, or the DC output of an independent photovoltaic system, without employing a transformer. This flexible connection to different types of primary power supply could provide a promising solution for DBD applications, especially in areas without a power-grid connection. The simple modular structure, the absence of control circuitry, the elimination of the transformer, and the minimum number of voltage-conversion levels could lead to reduced size and weight, simple maintenance, low installation cost, and high scalability of a DBD generator. The performance of this pulse source has been validated with a resistive load, and good agreement between theoretically estimated and experimentally measured responses has been achieved. The pulse source has also been successfully applied to efficient DBD plasma generation.

  6. Novel design of high voltage pulse source for efficient dielectric barrier discharge generation by using silicon diodes for alternating current

    NASA Astrophysics Data System (ADS)

    Truong, Hoa Thi; Hayashi, Misaki; Uesugi, Yoshihiko; Tanaka, Yasunori; Ishijima, Tatsuo

    2017-06-01

    This work focuses on the design, construction, and configuration optimization of a novel high-voltage pulse power source for large-scale dielectric barrier discharge (DBD) generation. The pulses are generated by exploiting the high-speed switching characteristic of an inexpensive device called the silicon diode for alternating current, together with the self-terminating characteristic of the DBD. The source is powered by a primary low-voltage DC supply, which can flexibly be a commercial DC power supply, a battery, or the DC output of an independent photovoltaic system, without employing a transformer. This flexible connection to different types of primary power supply could provide a promising solution for DBD applications, especially in areas without a power-grid connection. The simple modular structure, the absence of control circuitry, the elimination of the transformer, and the minimum number of voltage-conversion levels could lead to reduced size and weight, simple maintenance, low installation cost, and high scalability of a DBD generator. The performance of this pulse source has been validated with a resistive load, and good agreement between theoretically estimated and experimentally measured responses has been achieved. The pulse source has also been successfully applied to efficient DBD plasma generation.

  7. Embedded real-time image processing hardware for feature extraction and clustering

    NASA Astrophysics Data System (ADS)

    Chiu, Lihu; Chang, Grant

    2003-08-01

    Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to reduce many of the calculations to hardware equations and "c" code. Rapidly prototyping the system using DSP-based "c" code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task between the FPGA and the DSP. Deciding which of the two to use is a simple matter of trial benchmarking, of which there are two kinds: one for speed and one for memory. The more memory-intensive algorithms should run on the DSP, while simple real-time tasks use the FPGA most effectively. Once the task is split, we can decide on which platform each algorithm should execute. This involves prototyping all the code on the DSP, then timing the various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if further time reduction is needed, moved into tasks that the FPGA can perform.

  8. Slow feature analysis: unsupervised learning of invariances.

    PubMed

    Wiskott, Laurenz; Sejnowski, Terrence J

    2002-04-01

    Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending only on the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
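
    The recipe in the abstract -- nonlinear expansion, whitening by PCA, then PCA on the time derivative -- fits in a few lines of linear algebra. The sketch below implements quadratic SFA for a toy signal; it follows the published algorithm in outline but is not the authors' implementation.

```python
import numpy as np

def sfa(x, n_out=2):
    """Minimal quadratic SFA: expand, whiten, then take the directions
    in which the whitened expanded signal varies most slowly."""
    n, d = x.shape
    # quadratic expansion: [x, all monomials x_i * x_j]
    quad = np.stack([x[:, i] * x[:, j] for i in range(d) for j in range(i, d)],
                    axis=1)
    z = np.hstack([x, quad])
    z -= z.mean(axis=0)
    # whiten the expanded signal (first PCA step)
    evals, evecs = np.linalg.eigh(z.T @ z / len(z))
    keep = evals > 1e-10
    W = evecs[:, keep] / np.sqrt(evals[keep])
    zw = z @ W
    # PCA on the time derivative: slowest features = smallest eigenvalues
    dz = np.diff(zw, axis=0)
    devals, devecs = np.linalg.eigh(dz.T @ dz / len(dz))
    return zw @ devecs[:, :n_out]   # eigh sorts ascending: slowest first

# Toy usage: a slow sine hidden inside a faster nonlinear mixture.
t = np.linspace(0, 2 * np.pi, 1000)
x = np.stack([np.sin(t) + np.cos(11 * t) ** 2, np.cos(11 * t)], axis=1)
slow = sfa(x, n_out=1)   # should recover a (sign-ambiguous) copy of sin(t)
```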

  9. Simultaneous Determination of Benzene and Toluene in Pesticide Emulsifiable Concentrate by Headspace GC-MS

    PubMed Central

    Jiang, Hua; Yang, Jing; Fan, Li; Li, Fengmin; Huang, Qiliang

    2013-01-01

    The toxic inert ingredients in pesticide formulations are strictly regulated in many countries. In this paper, a simple and efficient headspace gas chromatography-mass spectrometry (HS-GC-MS) method using fluorobenzene as an internal standard (IS) for the rapid simultaneous determination of benzene and toluene in pesticide emulsifiable concentrate (EC) was established. The headspace and GC-MS conditions were investigated and optimized. A nonpolar fused silica Rtx-5 capillary column (30 m × 0.20 mm i.d., 0.25 μm film thickness) with temperature programming was used. Under the optimized headspace conditions -- an equilibration temperature of 120°C, an equilibration time of 5 min, and a sample size of 50 μL -- the regression of the peak area ratios of benzene and toluene to the IS on the concentrations of the analytes fitted a linear relationship well at concentration levels ranging from 3.2 g/L to 16.0 g/L. Standard additions of benzene and toluene to blank solutions of different matrices led to recoveries of 100.1%-109.5% with a relative standard deviation (RSD) of 0.3%-8.1%. The method presented here stands out as simple and easily applicable, providing a way to determine toxic volatile adjuvants in liquid pesticide formulations. PMID:23607048
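
    The internal-standard calibration described above amounts to a least-squares line through the area-ratio data. A minimal sketch with hypothetical calibration points follows; the numbers are invented for illustration, and only the 3.2-16.0 g/L range comes from the abstract.

```python
import numpy as np

# Hypothetical calibration data: peak-area ratios (analyte / fluorobenzene IS)
# measured at known benzene concentrations within the linear range (g/L).
conc = np.array([3.2, 6.4, 9.6, 12.8, 16.0])
ratio = np.array([0.41, 0.83, 1.22, 1.66, 2.05])

slope, intercept = np.polyfit(conc, ratio, 1)       # least-squares line
r = np.corrcoef(conc, ratio)[0, 1]                  # linearity check
print(f"ratio = {slope:.4f} * conc + {intercept:.4f}, r = {r:.4f}")

# Quantify an unknown sample by inverting the calibration line:
unknown_ratio = 1.02
print((unknown_ratio - intercept) / slope)          # estimated conc, g/L
```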

  10. Degradation of Tetracycline with BiFeO3 Prepared by a Simple Hydrothermal Method

    PubMed Central

    Xue, Zhehua; Wang, Ting; Chen, Bingdi; Malkoske, Tyler; Yu, Shuili; Tang, Yulin

    2015-01-01

    BiFeO3 particles (BFO) were prepared by a simple hydrothermal method and characterized. The BFO was pure, had a wide particle size distribution, and was visible-light responsive. Tetracycline was chosen as the model pollutant in this study. The pH value was an important factor influencing the degradation efficiency. The total organic carbon (TOC) measurement was emphasized as a potential standard for evaluating visible-light photocatalytic degradation efficiency. The photo-Fenton process showed much better degradation efficiency and a wider adaptive pH range than photocatalysis or the Fenton process alone. The optimal residual TOC concentrations of the photocatalysis, Fenton and photo-Fenton processes were 81%, 65% and 21%, while the rate constants of the three processes under the conditions where the best residual TOC was achieved were 9.7 × 10−3, 3.2 × 10−2 and 1.5 × 10−1 min−1, respectively. BFO was demonstrated to have excellent stability and reusability. A comparison among different reported advanced oxidation processes for removing tetracycline (TC) was also made. Our findings show that the photo-Fenton process has good potential for the treatment of antibiotic-containing waste water and provides a new method for dealing with antibiotic pollution. PMID:28793568

  11. Understanding quantum tunneling using diffusion Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.

    2018-03-01

    In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape, the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.

  12. Functional and Structural Optimality in Plant Growth: A Crop Modelling Case Study

    NASA Astrophysics Data System (ADS)

    Caldararu, S.; Purves, D. W.; Smith, M. J.

    2014-12-01

    Simple mechanistic models of vegetation processes are essential both to our understanding of plant behaviour and to our ability to predict future changes in vegetation. One concept that can take us closer to such models is that of plant optimality, the hypothesis that plants aim to achieve an optimal state. Conceptually, plant optimality can be either structural or functional. A structural constraint means that plants aim to achieve a certain structural characteristic, such as an allometric relationship or nutrient content, that allows optimal function. A functional condition refers to plants achieving optimal functionality, in most cases by maximising carbon gain. Functional optimality conditions apply on shorter time scales and lead to higher plasticity, making plants more adaptable to changes in their environment. In contrast, structural constraints are optimal given the specific environmental conditions that plants are adapted to and offer less flexibility. We exemplify these concepts using a simple model of crop growth. The model represents annual cycles of growth from sowing date to harvest, including both vegetative and reproductive growth and phenology. Structural constraints to growth are represented as an optimal C:N ratio in all plant organs, which drives allocation throughout the vegetative growing stage. Reproductive phenology - i.e. the onset of flowering and grain filling - is determined by a functional optimality condition in the form of maximising final seed mass, so that vegetative growth stops when the plant reaches maximum nitrogen or carbon uptake. We investigate the plants' response to variations in environmental conditions within these two optimality constraints and show that final yield is most affected by changes during vegetative growth, which affect the structural constraint.

  13. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of the covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated by a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of the outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing ones: (i) no need for R² in multiple logistic regression, (ii) availability of interim or group-sequential designs, and (iii) a much smaller required sample size.

  14. Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao

    This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity of behind-the-meter battery storage systems (BSS). In the proposed method, a linear program is first formulated, using only typical load profiles, energy/demand charge rates, and a set of battery parameters, to determine the maximum saving in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizing have been performed for several commercial buildings and utility rate structures that are representative of the various regions of the continental United States, and the key factors that affect the economic benefits and the optimal size have been identified. The proposed methods and case study results can not only help commercial and industrial customers or battery vendors to evaluate and size storage systems for behind-the-meter applications, but can also assist utilities and policy makers in designing electricity rates or subsidies to promote the development of energy storage.
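
    A minimal sketch of the sizing formulation the abstract describes: a linear program over one typical day that trades the electricity bill (energy plus demand charges) against an amortized battery cost that is linear in energy and power capacity. All rates, efficiencies and the load profile below are placeholders, not data from the paper.

```python
import numpy as np
import cvxpy as cp

# One typical day at hourly resolution; load and rates are illustrative.
T, dt = 24, 1.0
load = 50 + 30 * np.exp(-((np.arange(T) - 17) ** 2) / 8.0)   # kW, evening peak
energy_rate, demand_rate = 0.12, 15.0     # $/kWh and $/kW (scaled to one day)
cost_E, cost_P = 0.5, 0.3                 # amortized $/kWh and $/kW per day

E = cp.Variable(nonneg=True)              # energy capacity, kWh
P = cp.Variable(nonneg=True)              # power capacity, kW
ch = cp.Variable(T, nonneg=True)          # charge power, kW
dis = cp.Variable(T, nonneg=True)         # discharge power, kW
soc = cp.Variable(T + 1, nonneg=True)     # state of charge, kWh
peak = cp.Variable(nonneg=True)           # billed demand, kW

grid = load + ch - dis                    # power drawn from the grid, kW
cons = [soc[0] == soc[T],                 # daily cycle constraint
        soc[1:] == soc[:-1] + (0.95 * ch - dis / 0.95) * dt,
        soc <= E, ch <= P, dis <= P,
        grid >= 0, grid <= peak]

bill = energy_rate * cp.sum(grid) * dt + demand_rate * peak
prob = cp.Problem(cp.Minimize(bill + cost_E * E + cost_P * P), cons)
prob.solve()
print(f"optimal E = {E.value:.1f} kWh, P = {P.value:.1f} kW")
```

    Dropping the cost_E and cost_P terms and fixing E and P recovers the first-stage problem, which estimates the maximum bill saving achievable with a given battery.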

  15. Automatic CT simulation optimization for radiation therapy: A general strategy.

    PubMed

    Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa

    2014-03-01

    In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements rather than duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on the goals of radiation therapy, namely, maintaining contouring quality and integrity while minimizing the patient's CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours; accurate treatment plans depend on accurate contours in order to conform the dose to the actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols, incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added bolus layers demonstrated how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol values for achieving the optimal image quality index of 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index than 120-kVp protocols at the same dose level. The trace of target and organ dosimetry coverage and the γ passing rates of the seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose vary significantly with patient size, contouring accuracy, and radiation treatment planning task.

  16. An integrated Gaussian process regression for prediction of remaining useful life of slow speed bearings based on acoustic emission

    NASA Astrophysics Data System (ADS)

    Aye, S. A.; Heyns, P. S.

    2017-02-01

    This paper proposes an optimal Gaussian process regression (GPR) for the prediction of the remaining useful life (RUL) of slow-speed bearings, based on a novel degradation assessment index obtained from the acoustic emission signal. The optimal GPR is obtained from an integration, or combination, of existing simple mean and covariance functions in order to capture both the observed trend of the bearing degradation and the irregularities in the data. The resulting integrated GPR model provides an excellent fit to the data and improves over simple GPR models based on single mean and covariance functions. In addition, it achieves a low percentage error in predicting the remaining useful life of slow-speed bearings. These findings are robust under varying operating conditions such as loading and speed, and can be applied to nonlinear and nonstationary machine response signals, useful for effective preventive machine maintenance.
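
    The kernel-combination idea can be illustrated with scikit-learn (used here in place of whatever toolchain the authors employed): a dot-product kernel carries the long-term degradation trend, an RBF term absorbs smooth irregularities, and a white kernel models measurement noise. The degradation index and the failure threshold below are synthetic placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

# Hypothetical degradation index versus running time (arbitrary units).
t = np.linspace(0, 100, 80)[:, None]
y = 0.02 * t.ravel() + 0.3 * np.sin(t.ravel() / 7) + 0.05 * np.random.randn(80)

# Composite covariance: DotProduct tracks the degradation trend, RBF the
# smooth irregularities, WhiteKernel the measurement noise.
kernel = DotProduct() + 1.0 * RBF(length_scale=10.0) + WhiteKernel(1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

t_future = np.linspace(100, 140, 40)[:, None]
mean, std = gpr.predict(t_future, return_std=True)  # extrapolation ± 1 sigma

# RUL estimate: first future time at which the mean crosses a failure level.
threshold = 2.5
crossed = np.nonzero(mean >= threshold)[0]
print(t_future[crossed[0], 0] if crossed.size else "threshold not reached")
```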

  17. Simple colorimetric detection of doxycycline and oxytetracycline using unmodified gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Li, Jie; Fan, Shumin; Li, Zhigang; Xie, Yuanzhe; Wang, Rui; Ge, Baoyu; Wu, Jing; Wang, Ruiyong

    2014-08-01

    The interaction between tetracycline antibiotics and gold nanoparticles was studied. With citrate-coated gold nanoparticles as the colorimetric probe, a simple and rapid detection method for doxycycline and oxytetracycline has been developed. This method relies on the distance-dependent optical properties of gold nanoparticles. In weakly acidic buffer medium, doxycycline and oxytetracycline rapidly induce the aggregation of gold nanoparticles, resulting in a red-to-blue (or purple) colour change. The experimental parameters were optimized with regard to pH, the concentration of the gold nanoparticles and the reaction time. Under optimal experimental conditions, the linear ranges of the colorimetric sensor for doxycycline and oxytetracycline were 0.06-0.66 and 0.59-8.85 μg mL⁻¹, respectively. The corresponding limits of detection were 0.0086 and 0.0838 μg mL⁻¹, respectively. The assay is sensitive, selective, simple and readily applicable to the detection of tetracycline antibiotics in food products.

  18. Adaptive behaviors in multi-agent source localization using passive sensing.

    PubMed

    Shaukat, Mansoor; Chitre, Mandar

    2016-12-01

    In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. Each agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors, where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient for localizing the source. Source localization is achieved as an emergent property through the agent's adaptive interactions with its neighbors and the environment. Given that a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based adaptation and connectivity-based adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two-phase evolutionary optimization process. The optimized behaviors are estimated with analytical models, and the resulting collective behavior is validated against the agents' sensor and actuator noise, strong multi-path interference due to environment variability, sensitivity to initialization distance, and loss of the source signal.

  19. Examining Errors in Simple Spreadsheet Modeling from Different Research Perspectives

    ERIC Educational Resources Information Center

    Kadijevich, Djordje M.

    2012-01-01

    By using a sample of 1st-year undergraduate business students, this study dealt with the development of simple (deterministic and non-optimization) spreadsheet models of income statements within an introductory course on business informatics. The study examined students' errors in doing this for business situations of their choice and found three…

  20. Electrospun Collagen/Silk Tissue Engineering Scaffolds: Fiber Fabrication, Post-Treatment Optimization, and Application in Neural Differentiation of Stem Cells

    NASA Astrophysics Data System (ADS)

    Zhu, Bofan

    Biocompatible scaffolds mimicking the locally aligned fibrous structure of native extracellular matrix (ECM) are in high demand in tissue engineering. In this thesis research, unidirectionally aligned fibers were generated via a home-built electrospinning system. Collagen type I, a major ECM component, was chosen in this study due to its support of cell proliferation and promotion of neuroectodermal commitment in stem cell differentiation. Synthetic dragline silk proteins, biopolymers with remarkable tensile strength and superior elasticity, were also used as a model material. Good alignment, controllable fiber size and morphology, as well as a desirable deposition density of fibers were achieved via optimization of the solution and electrospinning parameters. The incorporation of silk proteins into collagen was found to significantly enhance the mechanical properties and stability of the electrospun fibers. Glutaraldehyde (GA) vapor post-treatment was demonstrated to be a simple and effective way to tune the properties of collagen/silk fibers without changing their chemical composition. With 6-12 hours of GA treatment, electrospun collagen/silk fibers were not only biocompatible, but could also effectively induce the polarization and neural commitment of stem cells, which were optimized on collagen-rich fibers due to the unique combination of biochemical and biophysical cues imposed on the cells. Taken together, electrospun collagen-rich composite fibers are mechanically strong, stable and provide excellent cell adhesion. The unidirectionally aligned fibers can accelerate neural differentiation of stem cells, representing a promising therapy for neural tissue degenerative diseases and nerve injuries.

  1. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that can be decoded efficiently even in JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
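    The key idea behind controlled lossy raster compression of this kind is that a user-specified error bound lets each block of pixels be quantized to small integers that pack tightly. The sketch below illustrates only that bound-preserving quantization step; it is a minimal illustration of the principle, not the actual LERC codec, whose block headers, bit-packing, and lossless fallback are omitted.

```python
import numpy as np

def quantize_block(block: np.ndarray, max_error: float):
    """Quantize a float raster block so |reconstructed - original| <= max_error.

    Illustrative only: real LERC adds per-block headers, bit-packing, and a
    lossless fallback (needed e.g. when max_error = 0).
    """
    lo = block.min()
    step = 2.0 * max_error  # a step of 2*max_error guarantees the error bound
    q = np.round((block - lo) / step).astype(np.uint32)  # small ints, cheap to pack
    return lo, step, q

def dequantize_block(lo: float, step: float, q: np.ndarray) -> np.ndarray:
    return lo + q * step

# Example: a 4x4 elevation tile with a 1 cm error budget.
tile = np.random.default_rng(0).uniform(100.0, 110.0, (4, 4))
lo, step, q = quantize_block(tile, max_error=0.01)
assert np.abs(dequantize_block(lo, step, q) - tile).max() <= 0.01
```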

  2. Predictive power of food web models based on body size decreases with trophic complexity.

    PubMed

    Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo

    2018-05-01

    Food web models parameterised using body size show promise to predict trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r² = 0.92), but with increasing complexity the predictive power decreased, with model IS consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects, and show that the model's shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage development of dynamical food web models explicitly including and exploring indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.

  3. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    PubMed

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
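    For orientation, a minimal sketch of the first-order Hsieh-style formula for this setting follows. The parameter names are assumptions for illustration, and the paper's actual modification replaces this calculation with Schouten's unequal-variance t-test formula driven by the population prevalence.

```python
import math
from scipy.stats import norm

def lr_sample_size(p_mean: float, log_or_per_sd: float,
                   alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate N for simple logistic regression with one N(0,1) covariate.

    First-order Hsieh-style formula (illustrative only):
        N = (z_{1-alpha/2} + z_{power})^2 / (p (1 - p) B^2)
    p_mean : event probability when the covariate sits at its mean -- note this
             is NOT the overall population prevalence, which is exactly the
             usability pitfall the abstract highlights.
    log_or_per_sd : log odds ratio per SD of the covariate (B above).
    """
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    n = (z_a + z_b) ** 2 / (p_mean * (1.0 - p_mean) * log_or_per_sd ** 2)
    return math.ceil(n)

print(lr_sample_size(p_mean=0.2, log_or_per_sd=math.log(1.5)))  # ~299
```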

  4. Confidence bands for measured economically optimal nitrogen rates

    USDA-ARS?s Scientific Manuscript database

    While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...

  5. Modified dwell time optimization model and its applications in subaperture polishing.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-05-20

    The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified optimization model of dwell time using an iterative, numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model has essential compatibility with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. The emulated fabrication of a Φ200 mm workpiece by the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization takes much less time. With the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are analyzed for typical low and mid spatial-frequency errors. Results show that (1) the TIF size is nonlinearly and inversely related to convergence rate and polishing time; a TIF size of ~1/7 of the workpiece size is preferred; (2) the polishing time is less sensitive to path interval, but increasing the interval markedly reduces the convergence rate; a path interval of ~1/8-1/10 of the TIF size is deemed appropriate. The proposed model was deployed on JR-1800 and MRF-180 machines. Figuring a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur plane with these machines yielded RMS errors of 0.016λ and 0.013λ (λ = 632.8 nm), respectively, thereby validating the feasibility of the proposed dwell time model for subaperture polishing.
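    Conceptually, dwell time optimization is a constrained deconvolution: the removal map is the convolution of the tool influence function with the dwell time map, and the dwell times must stay non-negative. The following sketch shows one generic way to attack that inverse problem (a projected Landweber iteration); it illustrates the problem structure under simplifying assumptions and is not the authors' modified model, which additionally handles arbitrary tool paths and multiple TIFs.

```python
import numpy as np
from scipy.signal import fftconvolve

def dwell_time(target_removal: np.ndarray, tif: np.ndarray, n_iter: int = 500):
    """Estimate a non-negative dwell map d with tif (*) d ~= target_removal.

    Projected Landweber iteration: gradient descent on 0.5*||K d - r||^2
    followed by clipping to d >= 0.  Expects float arrays; tif non-negative.
    """
    tif_flipped = tif[::-1, ::-1]       # adjoint of convolution = correlation
    step = 1.0 / (tif.sum() ** 2)       # conservative step: ||K|| <= sum(tif)
    d = np.zeros_like(target_removal)
    for _ in range(n_iter):
        residual = fftconvolve(d, tif, mode="same") - target_removal
        d = np.maximum(d - step * fftconvolve(residual, tif_flipped, mode="same"), 0.0)
    return d
```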

  6. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.

  7. Optimal current waveforms for brushless permanent magnet motors

    NASA Astrophysics Data System (ADS)

    Moehle, Nicholas; Boyd, Stephen

    2015-07-01

    In this paper, we give energy-optimal current waveforms for a permanent magnet synchronous motor that result in a desired average torque. Our formulation generalises previous work by including a general back-electromotive force (EMF) wave shape, voltage and current limits, an arbitrary phase winding connection, a simple eddy current loss model, and a trade-off between power loss and torque ripple. Determining the optimal current waveforms requires solving a small convex optimisation problem. We show how to use the alternating direction method of multipliers to find the optimal current in milliseconds or hundreds of microseconds, depending on the processor used, which opens the possibility of generating optimal waveforms in real time, adapting to changes in the operating requirements or in the model, such as a change in resistance with winding temperature, or even gross changes like the failure of one winding. Suboptimal waveforms are available in tens or hundreds of microseconds, allowing quick response after abrupt changes in the desired torque. We demonstrate our approach on a simple numerical example, in which we give the optimal waveforms for a motor with a sinusoidal back-EMF, and for a motor with a more complicated, nonsinusoidal waveform, in both the constant-torque region and the constant-power region.
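    To see the structure of the problem, consider the simplest special case: with no voltage or current limits, no eddy loss term, and no ripple penalty, minimizing resistive loss subject to a torque constraint has a closed form. The sketch below works out that case (all symbols are illustrative assumptions); the full trade-offs in the paper require the complete convex program solved with ADMM.

```python
import numpy as np

def min_loss_currents(k: np.ndarray, R: np.ndarray, tau: float) -> np.ndarray:
    """Loss-optimal phase currents at one rotor position (sketch).

    Minimize i^T R i subject to k^T i = tau, where k holds the per-phase
    torque constants (proportional to the back-EMF shape) at this position.
    Lagrange multipliers give the closed form i* = R^{-1} k tau / (k^T R^{-1} k).
    """
    Rinv_k = np.linalg.solve(R, k)
    return Rinv_k * (tau / (k @ Rinv_k))

# Three-phase example with a sinusoidal back-EMF at rotor angle theta.
theta = 0.3
k = np.array([np.sin(theta), np.sin(theta - 2*np.pi/3), np.sin(theta + 2*np.pi/3)])
i = min_loss_currents(k, R=np.eye(3) * 0.5, tau=1.0)
print(i, k @ i)  # currents and achieved torque (= 1.0)
```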

  8. Optimal Sizing Tool for Battery Storage in Grid Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-24

    The battery storage sizing tool developed at Pacific Northwest National Laboratory can be used to evaluate economic performance and determine the optimal size of battery storage in different use cases considering multiple power system applications. The considered use cases include i) utility owned battery storage, and ii) battery storage behind the customer meter. The power system applications of energy storage include energy arbitrage, balancing services, T&D deferral, outage mitigation, demand charge reduction, etc. Most existing solutions consider only one or two grid services simultaneously, such as balancing service and energy arbitrage. ES-Select, developed by Sandia and KEMA, is able to consider multiple grid services, but it stacks the grid services based on priorities instead of co-optimizing them. This tool is the first that provides a co-optimization for systematic and local grid services.

  9. Optimal placement and sizing of wind / solar based DG sources in distribution system

    NASA Astrophysics Data System (ADS)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can deliver maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based wind turbine generation unit (WTGU) and photovoltaic (PV) array placement and sizing approach for real power loss reduction and voltage stability improvement of a distribution system. Performance models of wind and solar generation systems are described and classified into PQ\PQ(V)\PI type models for power flow. Because WTGU- and PV-based DGs are geographically restricted in a distribution system, the candidate area and the DG capacity limits of each bus within it must be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate the performance and effectiveness of the proposed method.

  10. Optimal synthesis and characterization of Ag nanofluids by electrical explosion of wires in liquids

    PubMed Central

    2011-01-01

    Silver nanoparticles were produced by electrical explosion of wires in liquids with no additives. In this study, we optimized the fabrication method and examined the effects of the manufacturing process parameters. The morphology and size of the Ag nanoparticles were determined using transmission electron microscopy and field-emission scanning electron microscopy. Size and zeta potential were analyzed using dynamic light scattering. A response optimization technique showed that optimal conditions were achieved when the capacitance was 30 μF, the wire length was 38 mm, the liquid volume was 500 mL, and the liquid was deionized water. The average Ag nanoparticle size in water was 118.9 nm and the zeta potential was -42.5 mV. The critical heat flux of the 0.001-vol.% Ag nanofluid was higher than that of pure water. PMID:21711757

  11. Game Theory and Risk-Based Levee System Design

    NASA Astrophysics Data System (ADS)

    Hui, R.; Lund, J. R.; Madani, K.

    2014-12-01

    Risk-based analysis has been developed for optimal levee design for economic efficiency. Along many rivers, two levees on opposite riverbanks act as a simple levee system. Being rational and self-interested, landowners on each river bank would tend to independently optimize their levees with risk-based analysis, resulting in a Pareto-inefficient levee system design from the social planner's perspective. Game theory is applied in this study to analyze the decision-making process in a simple levee system in which the landowners on each river bank develop their design strategies using risk-based economic optimization. For each landowner, the annual expected total cost includes the expected annual damage cost and the annualized construction cost. The non-cooperative Nash equilibrium is identified and compared to the social planner's optimal distribution of flood risk and damage cost throughout the system, which results in the minimum total flood cost for the system. The social planner's optimal solution is not feasible without an appropriate level of compensation for the transferred flood risk to guarantee and improve conditions for all parties. Therefore, cooperative game theory is then employed to develop an economically optimal design that can be implemented in practice. By examining the game in the reversible and irreversible decision-making modes, the cost of decision-making myopia is calculated to underline the significance of considering the externalities and evolution path of dynamic water resource problems for optimal decision making.

  12. Optimization of ecosystem model parameters with different temporal variabilities using tower flux data and an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.

    2013-12-01

    Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined from a limited number of literature studies without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) for the stomatal conductance formulation. These parameters are optimized by assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74-year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome this problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period, or window size, is then determined by analyzing the magnitude of the minimized cost function, together with the coefficient of determination (R2) and root-mean-square error (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. We found that the optimized fw correlates best, and linearly, with soil water content at 5 to 10 cm depth. We also found that both the temporal scale (window size) and the prior uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to a reasonable estimate of Vcmax. In the summer, the fluctuation of optimized Vcmax is mainly caused by the uncertainties in Vcmax, not the window size. Our study suggests that a smooth Vcmax curve optimized from an optimal time window size is close to reality, even though the RMSE of GPP at this window is not the minimum. It also suggests that, for accurate optimization of Vcmax, it is necessary to set appropriate levels of Vcmax uncertainty in the spring and summer, because the rate of leaf nitrogen concentration change differs over the season. Parameter optimizations for more sites and multiple years are in progress.
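    A minimal sketch of the scalar EnKF analysis step at the heart of such a scheme follows (the stochastic-EnKF variant with perturbed observations; variable names are illustrative assumptions, not the study's implementation).

```python
import numpy as np

def enkf_update(params, predictions, obs, obs_var, rng):
    """One EnKF analysis step for an ensemble of parameter values (sketch).

    params      : (N,) ensemble of a parameter (e.g. Vcmax)
    predictions : (N,) model-predicted observable (e.g. GPP) per member
    obs, obs_var: the measurement and its error variance
    Each member assimilates a perturbed observation, which keeps the
    analysis ensemble spread statistically consistent.
    """
    cov_py = np.cov(params, predictions)[0, 1]   # parameter-prediction covariance
    var_y = predictions.var(ddof=1) + obs_var    # innovation variance
    gain = cov_py / var_y                        # scalar Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=params.shape)
    return params + gain * (perturbed - predictions)
```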

  13. A Simple Effect Size Estimator for Single Case Designs Using WinBUGS

    ERIC Educational Resources Information Center

    Rindskopf, David; Shadish, William; Hedges, Larry

    2012-01-01

    Data from single case designs (SCDs) have traditionally been analyzed by visual inspection rather than statistical models. As a consequence, effect sizes have been of little interest. Lately, some effect-size estimators have been proposed, but most are either (i) nonparametric, and/or (ii) based on an analogy incompatible with effect sizes from…

  14. The preliminary SOL (Sizing and Optimization Language) reference manual

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1989-01-01

    The Sizing and Optimization Language, SOL, a high-level special-purpose computer language, has been developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. This document is a reference manual for those wishing to write SOL programs. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler and runtime library routines. An overview of SOL appears in NASA TM 100565.

  15. A mathematical framework for the selection of an optimal set of peptides for epitope-based vaccines.

    PubMed

    Toussaint, Nora C; Dönnes, Pierre; Kohlbacher, Oliver

    2008-12-01

    Epitope-based vaccines (EVs) have a wide range of applications: from therapeutic to prophylactic approaches, from infectious diseases to cancer. The development of an EV is based on the knowledge of target-specific antigens from which immunogenic peptides, so-called epitopes, are derived. Such epitopes form the key components of the EV. Due to regulatory, economic, and practical concerns, the number of epitopes that can be included in an EV is limited. Furthermore, as the major histocompatibility complex (MHC) binding these epitopes is highly polymorphic, every patient possesses a set of MHC class I and class II molecules of differing specificities. A peptide combination effective for one person can thus be completely ineffective for another. This renders the optimal selection of these epitopes an important and interesting optimization problem. In this work we present a mathematical framework based on integer linear programming (ILP) that allows the formulation of various flavors of the vaccine design problem and the efficient identification of optimal sets of epitopes. Out of a user-defined set of predicted or experimentally determined epitopes, the framework selects the set with the maximum likelihood of eliciting a broad and potent immune response. Our ILP approach allows an elegant and flexible formulation of numerous variants of the EV design problem. In order to demonstrate this, we show how common immunological requirements for a good EV (e.g., coverage of epitopes from each antigen, coverage of all MHC alleles in a set, or avoidance of epitopes with high mutation rates) can be translated into constraints or modifications of the objective function within the ILP framework, as illustrated in the sketch below. An implementation of the algorithm outperforms a simple greedy strategy as well as a previously suggested evolutionary algorithm and has runtimes on the order of seconds for typical problem sizes.
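    To make the ILP flavor concrete, here is a hedged toy instance in the same spirit: maximize total predicted immunogenicity under a vaccine-size limit and per-allele coverage constraints. The epitope scores, the 0.3 binder threshold, and the use of the PuLP/CBC solver are illustrative assumptions, not the authors' formulation.

```python
import pulp

# Hypothetical data: immunogenicity score of each epitope for each MHC allele.
epitopes = ["E1", "E2", "E3", "E4", "E5"]
alleles = ["A*02:01", "B*07:02"]
score = {("E1","A*02:01"): 0.9, ("E1","B*07:02"): 0.1,
         ("E2","A*02:01"): 0.2, ("E2","B*07:02"): 0.8,
         ("E3","A*02:01"): 0.5, ("E3","B*07:02"): 0.4,
         ("E4","A*02:01"): 0.7, ("E4","B*07:02"): 0.0,
         ("E5","A*02:01"): 0.0, ("E5","B*07:02"): 0.6}
k = 3  # vaccine capacity

prob = pulp.LpProblem("epitope_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("use", epitopes, cat="Binary")
# Objective: total predicted immunogenicity across all alleles.
prob += pulp.lpSum(score[e, a] * x[e] for e in epitopes for a in alleles)
prob += pulp.lpSum(x[e] for e in epitopes) <= k          # size limit
for a in alleles:                                        # cover every allele
    prob += pulp.lpSum(x[e] for e in epitopes if score[e, a] > 0.3) >= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([e for e in epitopes if x[e].value() > 0.5])       # -> ['E1', 'E2', 'E3']
```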

  16. Ultrasound assisted extraction of Maxilon Red GRL dye from water samples using cobalt ferrite nanoparticles loaded on activated carbon as sorbent: Optimization and modeling.

    PubMed

    Mehrabi, Fatemeh; Vafaei, Azam; Ghaedi, Mehrorang; Ghaedi, Abdol Mohammad; Alipanahpour Dil, Ebrahim; Asfaram, Arash

    2017-09-01

    In this research, a selective, simple and rapid ultrasound assisted dispersive solid-phase microextraction (UA-DSPME) method was developed using cobalt ferrite nanoparticles loaded on activated carbon (CoFe2O4-NPs-AC) as an efficient sorbent for the preconcentration and determination of Maxilon Red GRL (MR-GRL) dye. The properties of the sorbent were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), vibrating sample magnetometry (VSM), Fourier transform infrared spectroscopy (FTIR), particle size distribution (PSD) and scanning electron microscopy (SEM) techniques. The factors affecting the determination of the MR-GRL dye were investigated and optimized by central composite design (CCD) and artificial neural networks based on a genetic algorithm (ANN-GA). Using ANN-GA, optimum conditions were set at 6.70, 1.2 mg, 5.5 min and 174 μL for pH, sorbent amount, sonication time and volume of eluent, respectively. Under the optimized conditions obtained from ANN-GA, the method exhibited a linear dynamic range of 30-3000 ng/mL with a detection limit of 5.70 ng/mL. The preconcentration factor and enrichment factor were 57.47 and 93.54, respectively, with relative standard deviations (RSDs) of less than 4.0% (N=6). The interference effects of some ions and dyes were also investigated, and the results show good selectivity for this method. Finally, the method was successfully applied to the preconcentration and determination of Maxilon Red GRL in water and wastewater samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Trade-off between disease resistance and crop yield: a landscape-scale mathematical modelling perspective.

    PubMed

    Vyska, Martin; Cunniffe, Nik; Gilligan, Christopher

    2016-10-01

    The deployment of crop varieties that are partially resistant to plant pathogens is an important method of disease control. However, a trade-off may occur between the benefits of planting the resistant variety and a yield penalty, whereby the standard susceptible variety outyields the resistant one in the absence of disease. This presents a dilemma: deploying the resistant variety is advisable only if disease occurs and is severe enough for the resistant variety to outyield the infected standard variety. Additionally, planting the resistant variety carries a further advantage in that it reduces the probability of disease invading. Therefore, viewed from the perspective of a grower community, there is likely to be an optimal trade-off and thus an optimal cropping density for the resistant variety. We introduce a simple stochastic, epidemiological model to investigate the trade-off and the consequences for crop yield. Focusing on susceptible-infected-removed epidemic dynamics, we use the final size equation to calculate the surviving host population in order to analyse the yield, an approach suitable for rapid epidemics in agricultural crops. We identify a single compound parameter, which we call the efficacy of resistance and which incorporates the changes in susceptibility, infectivity and durability of the resistant variety. We use the compound parameter to inform policy plots that can be used to identify the optimal strategy for given parameter values when an outbreak is certain. When the outbreak is uncertain, we show that for some parameter values planting the resistant variety is optimal even when it would not be during the outbreak. This is because the resistant variety reduces the probability of an outbreak occurring. © 2016 The Author(s).
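    For reference, the standard SIR final-size relation that underlies this kind of yield calculation is given below (notation assumed for illustration; the paper's own formulation may differ in detail).

```latex
% With basic reproduction number R_0 and total host density N, the
% fraction z of hosts ever infected solves the final-size equation,
% and the surviving (never-infected) host population S_infty feeds
% directly into the yield calculation:
\[
  z \;=\; 1 - e^{-R_0 z}, \qquad S_\infty \;=\; N\,(1 - z).
\]
```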

  18. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time compared with the other step sizes. The target coverage for this step size was 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports the use of a 0.5 cm step size for prostate implants. PMID:24049323

  19. Optimal word sizes for dissimilarity measures and estimation of the degree of dissimilarity between DNA sequences.

    PubMed

    Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An

    2005-11-15

    Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is 3-fold. Firstly, we compare the performance of several word-based and alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity beta between any pair of DNA sequences. Our study shows (1) for whole-sequence similarity/dissimilarity identification, the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice, (2) for each measure the optimal word size increases with window size, (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis, (4) the estimate beta-hat of beta based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database searches for similar sequences, and (5) beta-hat is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The algorithm SK-LD, the estimator beta-hat, and the simulation software are implemented in MATLAB code, and are available at http://www.stat.ncku.edu.tw/tjwu
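    For orientation, here is a minimal sketch of a symmetric Kullback-Leibler discrepancy between word (k-mer) frequency distributions. The smoothing constant and normalization are illustrative assumptions rather than the paper's exact SK-LD definition.

```python
from collections import Counter
import math

def sk_ld(seq1: str, seq2: str, k: int) -> float:
    """Symmetric KL discrepancy between k-mer ("word") frequency
    distributions of two sequences (illustrative sketch).
    Uses KL(p||q) + KL(q||p) = sum (p - q) * log(p / q).
    """
    def word_freqs(seq):
        counts = Counter(seq[i:i+k] for i in range(len(seq) - k + 1))
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    p, q = word_freqs(seq1), word_freqs(seq2)
    eps = 1e-9  # pseudo-frequency so unseen words don't blow up the log
    words = set(p) | set(q)
    return sum((p.get(w, eps) - q.get(w, eps)) *
               math.log(p.get(w, eps) / q.get(w, eps))
               for w in words)

print(sk_ld("ACGTACGTAC", "ACGTTTGTAC", k=2))
```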

  20. Optimal control of hydroelectric facilities

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi

    This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. In our final chapter we discuss the challenging problem of optimizing a sequence of two hydro dams sharing the same river system. The complexity of this problem is magnified and we just scratch its surface here. The thesis concludes with suggestions for future work in this fertile area. Keywords: dynamic programming, hydroelectric facility, optimization, optimal control, switching cost, turbine efficiency.
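    A minimal sketch of the kind of backward dynamic program described here, for a single pumped-storage plant facing a known hourly price curve (toy discretization; the head-dependent efficiencies, inflows, switching costs, and stochastic prices treated in the thesis are omitted):

```python
import numpy as np

# Backward DP over discrete reservoir contents and hourly stages.
T = 24
price = 30.0 + 20.0 * np.sin(2 * np.pi * np.arange(T) / T)  # known price curve
n_levels = 11                     # reservoir contents 0..10 units of energy
eta_pump, eta_gen = 0.85, 0.90    # round-trip efficiency ~ 0.765

V = np.zeros(n_levels)            # value of stored water at the final time
for t in reversed(range(T)):
    V_new = np.empty(n_levels)
    for s in range(n_levels):
        candidates = [V[s]]                               # idle
        if s < n_levels - 1:                              # pump one unit up
            candidates.append(-price[t] / eta_pump + V[s + 1])
        if s > 0:                                         # release and generate
            candidates.append(price[t] * eta_gen + V[s - 1])
        V_new[s] = max(candidates)                        # Bellman recursion
    V = V_new

print("value of starting half-full:", V[n_levels // 2])
```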

  1. Multidisciplinary optimization in aircraft design using analytic technology models

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1991-01-01

    An approach to multidisciplinary optimization is presented which combines the Global Sensitivity Equation method, parametric optimization, and analytic technology models. The result is a powerful yet simple procedure for identifying key design issues. It can be used both to investigate technology integration issues very early in the design cycle, and to establish the information flow framework between disciplines for use in multidisciplinary optimization projects using much more computationally intense representations of each technology. To illustrate the approach, an examination of the optimization of a short takeoff heavy transport aircraft is presented for numerous combinations of performance and technology constraints.

  2. A simple method for estimating the size of nuclei on fractal surfaces

    NASA Astrophysics Data System (ADS)

    Zeng, Qiang

    2017-10-01

    Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials, and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The established approach is based on the assumptions of contact area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. The analysis revealed three different regimes governing the equations for estimating the nucleation site density. Nuclei large enough eliminate the effect of the fractal structure. Nuclei small enough make the nucleation site density independent of the fractal parameters. Only when nuclei match the fractal scales is the nucleation site density coupled to both the fractal parameters and the nucleus size. The method was validated against experimental data reported in the literature. It may provide an effective way to estimate the size of nuclei on fractal surfaces, through which a number of promising applications in related fields can be envisioned.

  3. Portion Size Versus Serving Size

    MedlinePlus

    ... don't know what a healthy portion is. Restaurants offer extras like breads, chips and other appetizers ...

  4. Optimizing Mississippi aggregates for concrete bridge decks.

    DOT National Transportation Integrated Search

    2012-12-01

    AASHTO M 43 Standard Specification for Sizes of Aggregate for Road and Bridge Construction addresses the particle size distribution of material included in various maximum nominal size aggregates. This particle size distribution requires additi...

  5. A simple blind placement of the left-sided double-lumen tubes.

    PubMed

    Zong, Zhi Jun; Shen, Qi Ying; Lu, Yao; Li, Yuan Hai

    2016-11-01

    One-lung ventilation (OLV) is commonly provided using a double-lumen tube (DLT). Previous reports have indicated a high incidence of inappropriate DLT positioning with conventional maneuvers. After obtaining approval from the medical ethics committee of the First Affiliated Hospital of Anhui Medical University and written consent from patients, 88 adult patients of American Society of Anesthesiologists (ASA) physical status grade I or II undergoing elective thoracic surgery requiring a left-sided DLT for OLV were enrolled in this prospective, single-blind, randomized controlled study. Patients were randomly allocated to 1 of 2 groups: a simple maneuver group or a conventional maneuver group. The simple maneuver relies on partially inflating the bronchial balloon, recreating the effect of a carinal hook on the DLT to indicate orientation and depth. After the induction of anesthesia, the patients were intubated with a left-sided Robertshaw DLT using one of the 2 intubation techniques. After intubation, an anesthesiologist used flexible bronchoscopy to evaluate tube position while the patient lay supine. The number of optimal positions and the time required to place the DLT in the correct position were recorded. Intubation took 100 ± 16.2 seconds (mean ± SD) in the simple maneuver group and 95.1 ± 20.8 seconds in the conventional maneuver group; the difference was not statistically significant (P = 0.221). Fiberoptic bronchoscope (FOB) verification took 22 ± 4.8 seconds in the simple maneuver group, statistically faster than in the conventional maneuver group (43.6 ± 23.7 seconds, P < 0.001). Nearly 98% of the 44 intubations in the simple maneuver group were considered to be in optimal position, versus only 52% of the 44 intubations in the conventional maneuver group; the difference was statistically significant (P < 0.001). This simple maneuver positions left-sided DLTs more rapidly and more accurately, and it may substitute for FOB during positioning of a left-sided DLT when FOB is unavailable or inapplicable.

  6. ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yipeng

    In this paper, online minimization of vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero-current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with those from LOCO (Linear Optics from Closed Orbits) response matrix correction.

  7. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    PubMed

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. In order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies, a systematic assessment of methodological parameters, in particular optimal plot sizes, is necessary. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10 000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  8. The role of oxidized regenerate cellulose to prevent cosmetic defects in oncoplastic breast surgery.

    PubMed

    Franceschini, G; Visconti, G; Terribile, D; Fabbri, C; Magno, S; Di Leone, A; Salgarello, M; Masetti, R

    2012-07-01

    Breast conserving surgery (BCS) combined with postoperative radiotherapy has become the gold standard of locoregional treatment for the majority of patients with early-stage breast cancer, offering equivalent survival and improved body image and lifestyle scores as compared to mastectomy. In an attempt to optimize the oncologic safety and cosmetic results of BCS, oncoplastic procedures (OPP) have been introduced in recent years, combining the best principles of surgical oncology with those of plastic surgery. However, even with the use of OPP, cosmetic outcomes may be unsatisfactory when a large volume of parenchyma has to be removed, particularly in small- to medium-sized breasts. The aim of this article is to report our preliminary results with the use of oxidized regenerated cellulose (ORC) (Tabotamp fibrillar, Johnson & Johnson; Ethicon, USA) as an agent to prevent cosmetic defects in patients undergoing OPP for breast cancer, and to analyze the technical refinements that can enhance its efficacy in preventing cosmetic defects. Different OPP are selected based on the location and size of the tumor as well as the volume and shape of the breast. After excision of the tumor, glandular flaps are created by dissection of the residual parenchyma from the pectoralis and serratus muscles and from the skin. After careful haemostasis, five layers of ORC are positioned on the pectoralis major in the residual cavity and covered by advancement of the glandular flaps. Two additional layers of ORC are positioned above the flaps and covered by cutaneous-subcutaneous flaps. The use of ORC after OPP has shown promising preliminary results, indicating good tolerability and positive effects on cosmesis. This simple and reliable surgical technique may allow not only a reduction in the rate of post-operative bleeding and infection at the surgical site but also an improvement in cosmetic results.

  9. NEW DEVELOPMENTS ON INVERSE POLYGON MAPPING TO CALCULATE GRAVITATIONAL LENSING MAGNIFICATION MAPS: OPTIMIZED COMPUTATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mediavilla, E.; Lopez, P.; Mediavilla, T.

    2011-11-01

    We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of IPM even at this low order of approximation. Interpreting the inverse ray shooting (IRS) method in terms of IPM, we explain the previously reported N^(-3/4) dependence of the IRS error on the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistency of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.

  10. Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury

    NASA Astrophysics Data System (ADS)

    Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.

    2008-02-01

    Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and on a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje eye tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of: size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (R.T.) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect depended on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection was slowed more in the complex background search situation than in the simple background. Detection speed depended on scotoma size and stimulus size. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple background search situation than in the complex background. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and stimulus size. The data can be explained by models distinguishing between saliency-based, parallel, and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.

  11. Productivity growth, case mix and optimal size of hospitals. A 16-year study of the Norwegian hospital sector.

    PubMed

    Anthun, Kjartan Sarheim; Kittelsen, Sverre Andreas Campbell; Magnussen, Jon

    2017-04-01

    This paper analyses productivity growth in the Norwegian hospital sector over a period of 16 years, 1999-2014. This period was characterized by a large ownership reform with subsequent hospital reorganizations and mergers. We describe how technological change, technical productivity, scale efficiency and the estimated optimal size of hospitals have evolved during this period. Hospital admissions were grouped into diagnosis-related groups using a fixed-grouper logic. Four composite outputs were defined and inputs were measured as operating costs. Productivity and efficiency were estimated with bootstrapped data envelopment analyses. Mean productivity increased by 24.6 percentage points from 1999 to 2014, an average annual change of 1.5%. There was substantial growth in productivity and hospital size following the ownership reform. After the reform (2003-2014), average annual growth was <0.5%. There was no evidence of technical change. Estimated optimal size was smaller than the actual size of most hospitals, yet scale efficiency was high even after hospital mergers. However, the later hospital mergers have not been followed by productivity growth similar to that around the time of the reform. This study addresses the issues of both cross-sectional and longitudinal comparability of case mix between hospitals, and thus provides a framework for future studies. The study adds to the discussion on optimal hospital size. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with all capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity, and time. The authors, however, have developed a PSO-based technique, namely the iterative-improvement binary particle swarm technique, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in reasonable time; an iterative-improvement local search mechanism is then employed to improve the solution obtained by the BPSO algorithm, as sketched below. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus performs best and shows excellent results.
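    A sketch of the BPSO core that such a hybrid builds on follows (the classic sigmoid-transfer binary PSO update; the iterative-improvement local search wrapper and the lot-sizing fitness function are omitted, and all names are illustrative).

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
    """One binary-PSO velocity/position update (sketch).

    x, v, pbest : (n_particles, n_bits) current bits, velocities, personal bests
    gbest       : (n_bits,) best bit string found so far
    Velocities are updated as in real-valued PSO, then squashed through a
    sigmoid to give per-bit probabilities of being 1.
    """
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    prob_one = 1.0 / (1.0 + np.exp(-v))            # sigmoid transfer function
    x = (rng.random(x.shape) < prob_one).astype(int)
    return x, v
```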

  13. Effects of Planetary Gear Ratio on Mean Service Life

    NASA Technical Reports Server (NTRS)

    Savage, M.; Rubadeux, K. L.; Coe, H. H.

    1996-01-01

    Planetary gear transmissions are compact, high-power speed reductions which use parallel load paths. The range of possible reduction ratios is bounded from below and above by limits on the relative size of the planet gears. For a single-plane transmission, the planet gear has no size at a ratio of two. As the ratio increases, so does the size of the planets relative to the sizes of the sun and ring. The question of which ratio is best for a planetary reduction can be resolved by studying a series of optimal designs. In this series, each design is obtained by maximizing the service life for a planetary with a fixed size, gear ratio, input speed, power, and materials. The planetary gear reduction service life is modeled as a function of the two-parameter Weibull-distributed service lives of the bearings and gears in the reduction, as outlined below. Planet bearing life strongly influences the optimal reduction lives, which point to an optimal planetary reduction ratio in the neighborhood of four to five.
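    The standard way such component lives combine into a system life, for a strict series system with a common Weibull slope, is given below (a textbook relation stated for orientation; the paper's model may allow component-specific slopes).

```latex
% If every bearing and gear has a two-parameter Weibull life with common
% slope e and characteristic lives L_1, ..., L_n, the system life L_sys
% at the same reliability level satisfies
\[
  \left(\frac{1}{L_{\mathrm{sys}}}\right)^{e}
  \;=\; \sum_{i=1}^{n} \left(\frac{1}{L_{i}}\right)^{e}.
\]
```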

  14. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
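    A toy version of the core idea, under illustrative assumptions (a point value for the true effect rather than a full Bayesian prior, a linear per-patient cost, and a fixed commercial value on success): expected profit is power times value minus trial cost, maximized over sample size.

```python
import numpy as np
from scipy.stats import norm

def expected_profit(n, delta, sigma, value, cost_per_patient, alpha=0.025):
    """Sponsor's expected profit as a function of per-arm Phase III size n.

    A classical one-sided test decides approval; the company averages over
    trial success.  delta, sigma: assumed true effect and SD of the endpoint.
    All names and the linear cost model are illustrative assumptions.
    """
    se = sigma * np.sqrt(2.0 / n)                  # two equal arms of size n
    power = norm.sf(norm.isf(alpha) - delta / se)  # P(reject H0 | delta)
    return power * value - 2 * n * cost_per_patient

sizes = np.arange(50, 2001, 10)
profits = [expected_profit(n, delta=2.0, sigma=10.0,
                           value=5e8, cost_per_patient=2e4) for n in sizes]
print("profit-optimal n per arm:", sizes[int(np.argmax(profits))])
```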

  15. Particle-size distribution modified effective medium theory and validation by magneto-dielectric Co-Ti substituted BaM ferrite composites

    NASA Astrophysics Data System (ADS)

    Li, Qifan; Chen, Yajie; Harris, Vincent G.

    2018-05-01

    This letter reports an extended effective medium theory (EMT) including particle-size distribution functions to maximize the magnetic properties of magneto-dielectric composites. It is experimentally verified with Co-Ti substituted barium ferrite (BaCoxTixFe12-2xO19)/wax composites with specifically designed particle-size distributions. In the form of an integral equation, the extended EMT formula essentially takes the size-dependent parameters of the magnetic particle fillers into account. It predicts the effective permeability of magneto-dielectric composites with various particle-size distributions, indicating an optimal distribution for a population of magnetic particles. The improvement in optimized effective permeability is significant for magnetic particles whose properties are strongly size-dependent.
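    As a hedged illustration of what a size-distribution-weighted effective-medium relation can look like (a generic Bruggeman-type form under assumed notation, not necessarily the letter's exact formula):

```latex
% With particle-size density f(D), size-dependent particle permeability
% mu_p(D), matrix permeability mu_m, and filler volume fraction phi,
% the effective permeability mu_eff solves an integral equation:
\[
  \phi \int_{0}^{\infty} f(D)\,
      \frac{\mu_{p}(D) - \mu_{\mathrm{eff}}}
           {\mu_{p}(D) + 2\,\mu_{\mathrm{eff}}}\, \mathrm{d}D
  \;+\; (1-\phi)\,
      \frac{\mu_{m} - \mu_{\mathrm{eff}}}
           {\mu_{m} + 2\,\mu_{\mathrm{eff}}} \;=\; 0 .
\]
```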

  16. A logical approach to optimize the nanostructured lipid carrier system of irinotecan: efficient hybrid design methodology

    NASA Astrophysics Data System (ADS)

    Mohan Negi, Lalit; Jaggi, Manu; Talegaonkar, Sushama

    2013-01-01

    Development of an effective formulation involves careful optimization of a number of excipient and process variables. Sometimes the number of variables is so large that even the most efficient optimization designs require a very large number of trials, which puts stress on costs as well as time. A creative combination of several design methods leads to a smaller number of trials. This study was aimed at the development of nanostructured lipid carriers (NLCs) using a combination of different optimization methods. A total of 11 variables were first screened using the Plackett-Burman design for their effects on formulation characteristics like size and entrapment efficiency. Four of the 11 variables were found to have insignificant effects on the formulation parameters and hence were screened out. Of the remaining seven variables, four (concentration of tween-80, lecithin, sodium taurocholate, and total lipid) were found to have significant effects on the size of the particles, while the other three (phase ratio, drug to lipid ratio, and sonication time) had a greater influence on the entrapment efficiency. The first four variables were optimized for their effect on size using the Taguchi L9 orthogonal array. The optimized values of the surfactants and lipids were kept constant for the next stage, where the sonication time, phase ratio, and drug:lipid ratio were varied using the Box-Behnken response surface design to optimize the entrapment efficiency. Finally, by performing only 38 trials, we optimized 11 variables for the development of NLCs with a size of 143.52 ± 1.2 nm, zeta potential of -32.6 ± 0.54 mV, and 98.22 ± 2.06% entrapment efficiency.
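    As a plausibility check on the trial count (inferred from the standard run counts of these designs, not stated explicitly in the abstract):

```latex
% A 12-run Plackett-Burman design screens up to 11 factors, the Taguchi
% L9 array has 9 runs, and a 3-factor Box-Behnken design with five
% centre points has 17 runs:
\[
  \underbrace{12}_{\text{Plackett--Burman}}
  \;+\; \underbrace{9}_{\text{Taguchi } L_9}
  \;+\; \underbrace{17}_{\text{Box--Behnken}} \;=\; 38 .
\]
```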

  17. Man in Balance with the Environment: Pollution and the Optimal Population Size

    ERIC Educational Resources Information Center

    Ultsch, Gordon R.

    1973-01-01

    Discusses the relationship between population size and pollution, and suggests that the optimal population level toward which we should strive would be that level at which man is in balance with the biosphere in terms of pollution production and degradation, coupled with a harmless steady-state background pollution level. (JR)

  18. Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Benford, Andrew; Tinker, Michael L.

    2004-01-01

    The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.

  19. 77 FR 72766 - Small Business Size Standards: Support Activities for Mining

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-06

    ... its entirety for parties who have an interest in SBA's overall approach to establishing, evaluating....gov, Docket ID: SBA-2009-0008. SBA continues to welcome comments on its methodology from interested.... Average firm size. SBA computes two measures of average firm size: simple average and weighted average...

  20. On Family Size and Intelligence.

    ERIC Educational Resources Information Center

    Armor, David J.

    2001-01-01

    Critiques research by Rodgers, et al. (June 2000) on the impact of family size on intelligence, explaining that it applied very simple analytic techniques to a very complex question, leading to unwarranted conclusions about family size and intelligence. Loss of cases, omission of an important ability test, and failure to apply multivariate…
