Sample records for design variables applied

  1. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and a genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance at these points is evaluated by means of computational fluid dynamics to construct a database. Response surface methodology is then employed to generate a surrogate mathematical model relating the optimization objective to the design variables. Finally, a genetic algorithm is applied to the surrogate model to find the optimal solution subject to the design constraints. The method has been applied to the optimization design of an axisymmetric diverging duct involving three design variables: one qualitative variable and two quantitative variables. The modeling and optimization method performs well in improving the duct's aerodynamic performance and, by reducing design time and computational cost, can also be applied to wider fields of mechanical design as a useful tool for engineering designers.
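
The pipeline described above (sample, simulate, fit a response surface, search the surrogate with a genetic algorithm) can be sketched in a few dozen lines. This is a toy illustration, not the paper's implementation: a one-variable analytic function stands in for the CFD evaluation, evenly spaced samples stand in for the uniform design, and the genetic algorithm is a minimal real-coded variant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: choose design points (evenly spaced here, standing in for uniform design)
x_samples = np.linspace(-2.0, 2.0, 9)

# Step 2: evaluate each design with the "expensive" simulation
# (a toy analytic function stands in for a CFD run)
def simulate(x):
    return (x - 0.7) ** 2 + 0.1 * np.sin(5 * x)

y_samples = simulate(x_samples)

# Step 3: fit a quadratic response surface y ~ c2*x^2 + c1*x + c0 by least squares
coeffs = np.polyfit(x_samples, y_samples, 2)

def surrogate(x):
    return np.polyval(coeffs, x)

# Step 4: minimize the cheap surrogate with a tiny real-coded genetic algorithm
pop = rng.uniform(-2, 2, size=40)
for _ in range(60):
    fitness = surrogate(pop)
    parents = pop[np.argsort(fitness)[:20]]                # selection: keep best half
    children = 0.5 * (parents + rng.permutation(parents))  # arithmetic crossover
    children += rng.normal(0, 0.05, size=children.shape)   # Gaussian mutation
    pop = np.clip(np.concatenate([parents, children]), -2, 2)

x_best = pop[np.argmin(surrogate(pop))]  # close to the true optimum near x = 0.7
```

In the paper's setting, `simulate` would be a full CFD evaluation and the response surface would span all three duct design variables.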

  2. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

    A variable-gain, optimal, discrete, output feedback design approach that is applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  3. The Attributes of a Variable-Diameter Rotor System Applied to Civil Tiltrotor Aircraft

    NASA Technical Reports Server (NTRS)

    Brender, Scott; Mark, Hans; Aguilera, Frank

    1996-01-01

    The attributes of a variable diameter rotor concept applied to civil tiltrotor aircraft are investigated using the V/STOL aircraft sizing and performance computer program (VASCOMP). To begin, civil tiltrotor viability issues that motivate advanced rotor designs are discussed. Current work on the variable diameter rotor and a theoretical basis for the advantages of the rotor system are presented. The size and performance of variable diameter and conventional tiltrotor designs for the same baseline mission are then calculated using a modified NASA Ames version of VASCOMP. The aircraft are compared based on gross weight, fuel required, engine size, and autorotative performance for various hover disk loading values. Conclusions about the viability of the resulting designs are presented and a program for further variable diameter rotor research is recommended.

  4. Research Design and Statistics for Applied Linguistics.

    ERIC Educational Resources Information Center

    Hatch, Evelyn; Farhady, Hossein

    An introduction to the conventions of research design and statistical analysis is presented for graduate students of applied linguistics. The chapters cover such concepts as the definition of research, variables, research designs, research report formats, sorting and displaying data, probability and hypothesis testing, comparing means,…

  5. Continuous variable transmission and regenerative braking devices in bicycles utilizing magnetorheological fluids

    NASA Astrophysics Data System (ADS)

    Cheung, Wai Ming; Liao, Wei-Hsin

    2013-04-01

    The use of magnetorheological (MR) fluids in vehicles has been gaining popularity due to their controllable nature, which gives automotive designers more degrees of freedom in functional design. However, little attention has been paid to applying them to bicycles. This paper studies the feasibility of applying MR fluids to different dynamic parts of a bicycle, such as the transmission and braking systems. An MR continuous variable transmission (CVT) and a power-generator-assisted braking system were designed and analyzed. Prototypes of both were fabricated and tested to evaluate their performance. Experimental results show that the proposed designs are promising for use in bicycles.

  6. A variable-gain output feedback control design methodology

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.

    1989-01-01

    A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.
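
For contrast with the variable-gain formulation, the conventional gain scheduling it seeks to improve upon can be sketched as element-wise interpolation of separately designed gain matrices over the operating-point parameter. The angles of attack below match the four-model design case; the gain values themselves are illustrative placeholders, not from the paper.

```python
import numpy as np

# Design-point angles of attack (deg) from the four-model case, each with a
# 1x2 output-feedback gain matrix (values are invented for illustration)
alphas = np.array([5.0, 15.0, 35.0, 60.0])
gains = np.array([
    [[1.2, 0.3]],
    [[1.5, 0.5]],
    [[2.1, 0.9]],
    [[3.0, 1.4]],
])  # shape (4, 1, 2): one gain matrix per design point

def gain_at(alpha):
    """Element-wise linear interpolation of the gain matrix at angle of attack alpha."""
    alpha = np.clip(alpha, alphas[0], alphas[-1])
    flat = np.array([np.interp(alpha, alphas, gains[:, i, j])
                     for i in range(gains.shape[1])
                     for j in range(gains.shape[2])])
    return flat.reshape(gains.shape[1:])

K = gain_at(25.0)  # gain matrix halfway between the 15 and 35 deg design points
```

The variable-gain approach instead optimizes all the design-point gains jointly in one stochastic output feedback problem, so the scheduled law is not just a curve fit of independent single-point designs.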

  7. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  8. Aerodynamic optimization by simultaneously updating flow variables and design parameters with application to advanced propeller designs

    NASA Technical Reports Server (NTRS)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.

  9. A variable-gain output feedback control design approach

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1989-01-01

    A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighting the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.

  10. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model based on genetic evolution to engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems, covering both continuous and discrete design variables. The design of a two-bay truss for maximum fundamental frequency demonstrates the continuous-variable case, while the selection of actuator locations in an actively controlled structure, for minimum energy dissipation, illustrates the discrete-variable case.
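
A minimal sketch of the discrete-variable case: selecting actuator locations with a binary-coded genetic algorithm using one-point crossover and bit-flip mutation. The per-location "dissipation" scores and the exactly-three-actuators constraint are invented for illustration; the paper's structural model is far richer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy energy-dissipation score per candidate actuator location (lower is better);
# the GA must pick exactly 3 of 10 locations -- a discrete design-variable problem.
dissipation = np.array([5.0, 1.0, 4.0, 0.5, 3.0, 2.5, 0.8, 4.5, 3.5, 2.0])
K = 3

def cost(bits):
    # Heavy penalty for any selection that does not use exactly K actuators
    return bits @ dissipation + 100.0 * abs(int(bits.sum()) - K)

pop = (rng.random((30, 10)) < 0.3).astype(int)
for _ in range(200):
    pop = pop[np.argsort([cost(b) for b in pop])]
    elite = pop[:10]                                   # elitist selection
    children = []
    for _ in range(20):
        p1, p2 = elite[rng.integers(10)], elite[rng.integers(10)]
        cut = rng.integers(1, 10)                      # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(10) < 0.05                   # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([elite, children])

best = pop[np.argmin([cost(b) for b in pop])]  # the 3 lowest-dissipation sites
```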

  11. Experimental design for evaluating WWTP data by linear mass balances.

    PubMed

    Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P

    2018-05-15

    A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure determines sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, meaning that their values can be calculated from other, measured variables using the available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found by taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented as a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relations between measurements allowed the determination of groups of overlapping mass balances; adding measured variables can only serve to identify key variables that appear in the same group of mass balances. Moreover, applying the experimental design procedure to these individual groups significantly reduced the computational effort of evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection.
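
The identifiability notion used here has a compact linear-algebra form: split the balance matrix into measured and unmeasured columns, and an unmeasured variable is uniquely determined by the balances exactly when its coordinate vanishes on the null space of the unmeasured block. A toy sketch with two balances over five stream variables (the network is invented, not the paper's WWTP):

```python
import numpy as np

# Linear mass balances A @ x = 0 over five stream variables:
# node 1: x0 = x1 + x2 ;  node 2: x2 = x3 + x4
A = np.array([
    [1.0, -1.0, -1.0,  0.0,  0.0],
    [0.0,  0.0,  1.0, -1.0, -1.0],
])

measured = [0, 1]        # indices of measured variables
unmeasured = [2, 3, 4]

# Columns of A belonging to the unmeasured variables
A_u = A[:, unmeasured]

# An unmeasured variable is identifiable iff its coordinate vanishes on the
# null space of A_u (then A_u @ x_u = -A_m @ x_m pins it down uniquely).
_, s, Vt = np.linalg.svd(A_u)
rank = int((s > 1e-10).sum())
null_basis = Vt[rank:]                     # rows span null(A_u)
identifiable = [unmeasured[j] for j in range(A_u.shape[1])
                if np.all(np.abs(null_basis[:, j]) < 1e-10)]
# Here x2 is identifiable from the node-1 balance, while x3 and x4 are not:
# only their sum is fixed by node 2.
```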

  12. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
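
One simple, widely used form of sensitivity analysis (a generic stand-in, not necessarily the paper's novel technique) is permutation-based: shuffle one input at a time and measure how much the model's error grows. A numpy sketch on synthetic data, with a least-squares linear predictor standing in for the neural network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: the outcome depends strongly on x0, weakly on x1, not on x2
n = 500
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit a linear predictor by least squares (stand-in for a trained neural network)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((X @ w - y) ** 2)

# Permutation sensitivity: shuffle one input column at a time and record
# the increase in mean squared error over the unshuffled baseline
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((Xp @ w - y) ** 2) - base_mse)
# importance ranks x0 first, x1 second, x2 last
```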

  13. The Taguchi methodology as a statistical tool for biotechnological applications: a critical appraisal.

    PubMed

    Rao, Ravella Sreenivas; Kumar, C Ganesh; Prakasham, R Shetty; Hobbs, Phil J

    2008-04-01

    Success in experiments and/or technology mainly depends on a properly designed process or product. The traditional method of process optimization involves the study of one variable at a time, which requires a number of experimental combinations that are time-, cost- and labor-intensive. The Taguchi method of design of experiments is a simple statistical tool involving a system of tabulated designs (arrays) that allows a maximum number of main effects to be estimated in an unbiased (orthogonal) fashion with a minimum number of experimental runs. It has been applied to predict the significant contributions of the design variable(s) and the optimum combination of the variables by conducting experiments on a real-time basis. The modeling that is performed essentially relates the signal-to-noise ratio to the control variables in a 'main effect only' approach. This approach enables both multiple-response and dynamic problems to be studied by handling noise factors. Taguchi principles and concepts have made extensive contributions to industry by bringing focused awareness to robustness, noise and quality. The methodology has been widely applied in many industrial sectors; however, its application in the biological sciences has been limited. The present review emphasizes the application and comparison of the Taguchi methodology through specific case studies in the field of biotechnology, particularly in diverse areas such as fermentation, food processing, molecular biology, wastewater treatment and bioremediation.
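
The core Taguchi machinery, an orthogonal array plus signal-to-noise main effects, fits in a few lines. Below is a sketch using the standard L4 (2^3) array with invented "larger-the-better" responses; with the responses chosen here, factor C happens to show no main effect at all.

```python
import numpy as np

# L4 (2^3) orthogonal array: 4 runs estimate the main effects of 3 two-level factors
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# Invented responses for the 4 runs (e.g. product yield, larger-the-better)
y = np.array([20.0, 30.0, 24.0, 36.0])

# Larger-the-better signal-to-noise ratio per run: -10*log10(1/y^2)
sn = -10.0 * np.log10(1.0 / y ** 2)

# Main effect of each factor: mean S/N at level 1 minus mean S/N at level 0
effects = np.array([sn[L4[:, j] == 1].mean() - sn[L4[:, j] == 0].mean()
                    for j in range(3)])
best_levels = (effects > 0).astype(int)   # pick the level with the higher S/N
```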

  14. A new design approach based on differential evolution algorithm for geometric optimization of magnetorheological brakes

    NASA Astrophysics Data System (ADS)

    Le-Duc, Thang; Ho-Huu, Vinh; Nguyen-Thoi, Trung; Nguyen-Quoc, Hung

    2016-12-01

    In recent years, various types of magnetorheological brakes (MRBs) have been proposed and optimized by different optimization algorithms integrated in commercial software such as ANSYS and Comsol Multiphysics. However, many of these optimization algorithms possess noteworthy shortcomings, such as trapping of solutions at local extrema, limits on the number of design variables, or difficulty in dealing with discrete design variables. To overcome these limitations and develop an efficient computational tool for the optimal design of MRBs, this paper proposes an optimization procedure that combines differential evolution (DE), a gradient-free global optimization method, with finite element analysis (FEA). The proposed approach is then applied to the optimal design of MRBs with different configurations, including conventional MRBs and MRBs with coils placed on the side housings. Moreover, to approach a real-life design, some necessary design variables of the MRBs are treated as discrete variables in the optimization process. The obtained optimal designs are compared with those available in the literature, and the results reveal that the proposed method outperforms some traditional approaches.
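
A minimal sketch of the DE/rand/1/bin scheme with a mixed continuous/discrete design vector, the discrete coordinate handled here by rounding inside the objective. The two-variable toy objective stands in for the paper's FEA-evaluated brake model.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Toy stand-in for an FEA-evaluated brake cost; x[0] is continuous,
    # x[1] is a discrete design variable (e.g. number of coil turns), rounded
    return (x[0] - 1.5) ** 2 + (round(float(x[1])) - 7) ** 2

lo, hi = np.array([-5.0, 0.0]), np.array([5.0, 20.0])
NP, F, CR = 20, 0.7, 0.9
pop = rng.uniform(lo, hi, size=(NP, 2))
fit = np.array([objective(x) for x in pop])

for _ in range(100):
    for i in range(NP):
        idx = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 mutation
        cross = rng.random(2) < CR                     # binomial crossover
        cross[rng.integers(2)] = True                  # at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f = objective(trial)
        if f <= fit[i]:                                # greedy selection
            pop[i], fit[i] = trial, f

best = pop[np.argmin(fit)]   # converges near x0 = 1.5 with round(x1) = 7
```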

  15. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  16. Variables in Personality Theory and Personality Testing, an Interpretation.

    ERIC Educational Resources Information Center

    Allen, Robert M.

    Designed to acquaint the reader with some of the difficulties of integrating personality variables, theory, and testing, this book discusses the dual orientation of psychology as a scientific discipline and as an applied skill. Examples of both nomothetic and idiographic theories of personality are considered. The historical development and debate…

  17. PHYSICAL AND OPTICAL PROPERTIES OF STEAM-EXPLODED LASER-PRINTED PAPER

    EPA Science Inventory

    Laser-printed paper was pulped by the steam-explosion process. A full-factorial experimental design was applied to determine the effects of key operating variables on the properties of steam-exploded pulp. The variables were addition level for pulping chemicals (NaOH and/or Na2SO...

  18. Optimization under variability and uncertainty: a case study for NOx emissions control for a gasification system.

    PubMed

    Chen, Jianjun; Frey, H Christopher

    2004-12-15

    Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
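
The "value of collecting additional information" quoted above is, in stochastic-programming terms, an expected value of perfect information: the gap between committing to one design under uncertainty and choosing the best design per scenario. A toy sketch with invented costs (the dollar figures are illustrative, not the paper's):

```python
import numpy as np

# Annual cost (k$) of two NOx-control designs under three equally likely
# scenarios (numbers are invented for illustration)
costs = np.array([
    [900.0, 1200.0, 1500.0],   # design A
    [1100.0, 1150.0, 1250.0],  # design B
])
p = np.array([1 / 3, 1 / 3, 1 / 3])

# Here-and-now: commit to one design before uncertainty is resolved,
# choosing the one with the lowest expected cost
here_and_now = (costs @ p).min()

# Wait-and-see: pick the best design separately in each scenario
wait_and_see = (costs.min(axis=0) * p).sum()

# Expected value of perfect information: the benefit of reducing uncertainty
# before a final design is chosen
evpi = here_and_now - wait_and_see
```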

  19. Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. Very few open-access tools are available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and thereby select the most optimal design. This paper presents a software tool that can generate multiple DSM architectures from pre-defined design variable ranges and size those architectures in terms of pre-defined science and cost metrics. The tool helps a user select Pareto-optimal DSM designs based on design-of-experiments techniques. It is applied to several earth observation examples to demonstrate its applicability in making key decisions between different performance and cost metrics early in the design lifecycle.

  20. Compensation for Lithography Induced Process Variations during Physical Design

    NASA Astrophysics Data System (ADS)

    Chin, Eric Yiow-Bing

    This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability-aware compact models that capture the process-dependent circuit behavior. These variability-aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path-level circuit performance with high accuracy and very little runtime overhead. The Interconnect Variability Characterization (IVC) framework maps lithography-induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one-dimensional repeater circuits patterned with both 90nm single-patterning and 32nm double-patterning technologies, in the presence of focus, exposure, and overlay variability. Studies indicate that single- and double-patterning layouts generally exhibit small variations in delay (between 1 and 3%) due to self-compensating RC effects associated with dense layouts, and overlay errors for layouts without self-compensating RC effects. The delay response of each double-patterned interconnect structure is fit with a second-order polynomial model in focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation. 
The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrate Bossung like behavior. This behavior permits the process parameter dependent response to be captured in a nine term variability aware compact model based on Bossung fitting equations. For a two input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entire new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. 
By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.

  1. Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Guynn, Mark D.

    2012-01-01

    Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh-bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio; and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable-geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are treated as dependent objective functions. The optimized solutions provide insight into the ultrahigh-bypass engine design process and give NASA program management information to help guide its technology development efforts.
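
Selecting among designs with competing objectives reduces to extracting the Pareto-optimal (non-dominated) set. A short sketch on synthetic two-objective data, with fuel burn and noise standing in for the study's metrics:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy engine designs scored on two competing objectives (both to be minimized):
# fuel burn vs noise, with an artificial trade-off plus scatter
n = 200
fuel = rng.uniform(0, 1, n)
noise = 1 - fuel + rng.normal(0, 0.15, n)
F = np.column_stack([fuel, noise])

def pareto_mask(F):
    """True for points not dominated by any other point (minimizing every column)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # point i is dominated if some point is <= in all objectives and < in one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominated.any()
    return mask

front = F[pareto_mask(F)]   # the non-dominated trade-off curve
```

Sorting a two-objective front by the first objective yields strictly decreasing values of the second, which is a convenient sanity check on the dominance logic.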

  2. Research related to variable sweep aircraft development

    NASA Technical Reports Server (NTRS)

    Polhamus, E. C.; Toll, T. A.

    1981-01-01

    Developments in high-speed, variable-sweep aircraft research are reviewed. The 1946 Langley wind tunnel studies related to variable-oblique and variable-sweep wings and results from the X-5 and XF10F variable-sweep aircraft are discussed. A joint program with the British, evaluation of the British "Swallow", development of the outboard-pivot wing/aft-tail configuration concept by Langley, and the applied research program that followed, which provided the technology for current variable-sweep military aircraft, are outlined. The relative state of variable sweep as a design option is also covered.

  3. Replicates in high dimensions, with applications to latent variable graphical models.

    PubMed

    Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han

    2016-12-01

    In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.

  4. Design and verification of a hybrid nonlinear MRE vibration absorber for controllable broadband performance

    NASA Astrophysics Data System (ADS)

    Sun, S. S.; Yildirim, T.; Wu, Jichu; Yang, J.; Du, H.; Zhang, S. W.; Li, W. H.

    2017-09-01

    In this work, a hybrid nonlinear magnetorheological elastomer (MRE) vibration absorber has been designed, theoretically investigated and experimentally verified. The proposed nonlinear MRE absorber has the dual advantages of a nonlinear force-displacement relationship and variable stiffness technology; the purpose of coupling these two technologies is to achieve a large-bandwidth vibration absorber with controllable capability. To achieve a nonlinear stiffness in the device, two pairs of magnets move at a rotary angle against each other, and the nonlinear force-displacement relationship has been theoretically calculated. For the experimental investigation, the effects of base excitation, of variable currents applied to the device (i.e. variable stiffness of the MRE) and of semi-active control were examined to determine the enhanced broadband performance of the designed device. It was observed that the device was able to change resonance frequency with the applied current; moreover, the hybrid nonlinear MRE absorber displayed a softening-type nonlinear response with clear discontinuous bifurcations. Furthermore, under a semi-active control algorithm the device displayed optimal performance in transferring vibration from the primary system to the absorber over a large frequency bandwidth, from 4 to 12 Hz. By coupling nonlinear stiffness attributes with variable stiffness MRE technology, the performance of a vibration absorber is substantially improved.

  5. A novel variable stiffness mechanism for dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Li, Wen-Bo; Zhang, Wen-Ming; Zou, Hong-Xiang; Peng, Zhi-Ke; Meng, Guang

    2017-08-01

    In this paper, a novel variable stiffness mechanism is proposed for the design of a variable stiffness dielectric elastomer actuator (VSDEA) which combines a flexible strip with a DEA in a dielectric elastomer minimum energy structure. The DEA induces an analog tuning of the transverse curvature of the strip, thus conveniently providing a voltage-controllable flexural rigidity. The VSDEA tends to be a fully flexible and compact structure with the advantages of simplicity and fast response. Both experimental and theoretical investigations are carried out to reveal the variable stiffness performances of the VSDEA. The effect of the clamped location on the bending stiffness of the VSDEA is analyzed, and then effects of the lengths, the loading points and the applied voltages on the bending stiffness are experimentally investigated. An analytical model is developed to verify the availability of this variable stiffness mechanism, and the theoretical results demonstrate that the bending stiffness of the VSDEA decreases as the applied voltage increases, which agree well with the experimental data. Moreover, the experimental results show that the maximum change of the relative stiffness can reach about 88.80%. It can be useful for the design and optimization of active variable stiffness structures and DEAs for soft robots, vibration control, and morphing applications.

  6. Applying quality by design (QbD) concept for fabrication of chitosan coated nanoliposomes.

    PubMed

    Pandey, Abhijeet P; Karande, Kiran P; Sonawane, Raju O; Deshmukh, Prashant K

    2014-03-01

    In the present investigation, a quality by design (QbD) strategy was successfully applied to the fabrication of chitosan-coated nanoliposomes (CH-NLPs) encapsulating a hydrophilic drug. The effects of the processing variables on the particle size, encapsulation efficiency (%EE) and coating efficiency (%CE) of CH-NLPs (prepared using a modified ethanol injection method) were investigated. The concentrations of lipid, cholesterol, drug and chitosan; stirring speed, sonication time; organic:aqueous phase ratio; and temperature were identified as the key factors after risk analysis for conducting a screening design study. A separate study was designed to investigate the robustness of the predicted design space. The particle size, %EE and %CE of the optimized CH-NLPs were 111.3 nm, 33.4% and 35.2%, respectively. The observed responses were in accordance with the predicted response, which confirms the suitability and robustness of the design space for CH-NLP formulation. In conclusion, optimization of the selected key variables will help minimize the problems related to size, %EE and %CE that are generally encountered when scaling up processes for NLP formulations. The robustness of the design space will help minimize both intra-batch and inter-batch variations, which are quite common in the pharmaceutical industry.

  7. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
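
    The first of the three strategies above can be sketched in a few lines: select variables on each of M imputed datasets, then combine by majority vote. In this illustrative sketch the data, the stochastic mean-imputation step, and the correlation-screening rule (a stand-in for a full selection method such as the lasso) are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)

    # Introduce values missing completely at random (MCAR) in X.
    mask = rng.random(X.shape) < 0.1
    X_miss = X.copy()
    X_miss[mask] = np.nan

    M = 20  # number of imputations
    votes = np.zeros(p, dtype=int)
    col_mean = np.nanmean(X_miss, axis=0)
    col_std = np.nanstd(X_miss, axis=0)
    for m in range(M):
        # Crude stochastic mean imputation (stand-in for a proper MI model).
        X_imp = X_miss.copy()
        r, c = np.where(mask)
        X_imp[r, c] = col_mean[c] + rng.normal(size=r.size) * col_std[c]
        # Correlation screening as a stand-in variable selection method.
        corr = np.array([abs(np.corrcoef(X_imp[:, j], y)[0, 1]) for j in range(p)])
        votes += (corr > 0.3).astype(int)

    # Majority rule: keep variables selected in more than half the imputations.
    selected = np.where(votes > M // 2)[0]
    print(selected)
    ```

    The same combining rule applies unchanged if the per-dataset selector is replaced by a penalized regression.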

  8. Multi-disciplinary optimization of railway wheels

    NASA Astrophysics Data System (ADS)

    Nielsen, J. C. O.; Fredö, C. R.

    2006-06-01

    A numerical procedure for multi-disciplinary optimization of railway wheels, based on Design of Experiments (DOE) methodology and automated design, is presented. The target is a wheel design that meets the requirements for fatigue strength, while minimizing the unsprung mass and rolling noise. A 3-level full factorial (3LFF) DOE is used to collect data points required to set up Response Surface Models (RSM) relating design and response variables in the design space. Computationally efficient simulations are thereafter performed using the RSM to identify the solution that best fits the design target. A demonstration example, including four geometric design variables in a parametric finite element (FE) model, is presented. The design variables are wheel radius, web thickness, lateral offset between rim and hub, and radii at the transitions rim/web and hub/web, but more variables (including material properties) can be added if needed. To improve further the performance of the wheel design, a constrained layer damping (CLD) treatment is applied on the web. For a given load case, compared to a reference wheel design without CLD, a combination of wheel shape and damping optimization leads to the conclusion that a reduction in the wheel component of A-weighted rolling noise of 11 dB can be achieved if a simultaneous increase in wheel mass of 14 kg is accepted.
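
    The DOE/RSM workflow described above can be sketched for two coded design variables: evaluate a (hypothetical, analytically cheap) response at the 9 points of a 3-level full factorial, fit a second-order response surface by least squares, then optimize on the surrogate instead of the expensive simulator. The response function and grid search below are illustrative stand-ins.

    ```python
    import itertools
    import numpy as np

    def response(x1, x2):
        # Stand-in for an expensive simulation (e.g. an FE rolling-noise run).
        return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.2) ** 2 + 0.5 * x1 * x2 + 1.0

    levels = [-1.0, 0.0, 1.0]
    pts = np.array(list(itertools.product(levels, levels)))   # 3^2 = 9 runs
    y = np.array([response(a, b) for a, b in pts])

    # Design matrix for the full quadratic RSM:
    # y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                         pts[:, 0] ** 2, pts[:, 1] ** 2, pts[:, 0] * pts[:, 1]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Cheap optimization on the surrogate rather than the simulator.
    surf = lambda a, b: beta @ np.array([1.0, a, b, a * a, b * b, a * b])
    grid = np.linspace(-1, 1, 201)
    best = min(((a, b) for a in grid for b in grid), key=lambda ab: surf(*ab))
    print(best)
    ```

    Because the toy response is itself quadratic, the fitted surface is exact and the surrogate minimum matches the true one; in practice the RSM is an approximation whose adequacy must be checked.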

  9. Combined model of intrinsic and extrinsic variability for computational network design with application to synthetic biology.

    PubMed

    Toni, Tina; Tidor, Bruce

    2013-01-01

    Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that play the role of basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA (for example, on the same transcript) was best for suppression of protein variability. Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady state. Our model and findings provide a general framework to guide design principles in synthetic biology.
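
    The central claim above, that negative autoregulation of production can suppress copy-number variability, can be illustrated with a toy Gillespie simulation of a birth-death process run with and without feedback; all parameters (k, gamma, K) and the Hill-type repression form are hypothetical stand-ins, not the authors' model.

    ```python
    import numpy as np

    def gillespie_fano(feedback, T=3000.0, seed=1):
        """Time-averaged Fano factor (variance/mean) of a birth-death process."""
        rng = np.random.default_rng(seed)
        k, gamma, K = 20.0, 1.0, 10.0
        n, t = 0, 0.0
        tot_t = s1 = s2 = 0.0
        while t < T:
            birth = k / (1.0 + n / K) if feedback else k   # repressive feedback
            death = gamma * n
            rate = birth + death
            dt = rng.exponential(1.0 / rate)
            # Accumulate time-weighted moments of the copy number.
            tot_t += dt; s1 += n * dt; s2 += n * n * dt
            t += dt
            n += 1 if rng.random() < birth / rate else -1  # birth or death event
        mean = s1 / tot_t
        var = s2 / tot_t - mean * mean
        return var / mean

    fano_open = gillespie_fano(feedback=False)  # Poisson-like, Fano ~ 1
    fano_reg = gillespie_fano(feedback=True)    # sub-Poissonian, Fano < 1
    print(fano_open, fano_reg)
    ```

    The unregulated process is Poissonian at stationarity (Fano factor near 1), while the feedback run shows the noise suppression the abstract discusses.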

  10. Combined Model of Intrinsic and Extrinsic Variability for Computational Network Design with Application to Synthetic Biology

    PubMed Central

    Toni, Tina; Tidor, Bruce

    2013-01-01

    Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that play the role of basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA (for example, on the same transcript) was best for suppression of protein variability. Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady state. Our model and findings provide a general framework to guide design principles in synthetic biology. PMID:23555205

  11. Feasibility study of a cosmetic cream added with aqueous extract and oil from date (Phoenix dactylifera L.) fruit seed using experimental design.

    PubMed

    Lecheb, Fatma; Benamara, Salem

    2015-01-01

    This article reports on the feasibility study of a cosmetic cream added with aqueous extract and oil from date (Phoenix dactylifera L.) fruit seed using experimental design. First, the mixture design was applied to optimize the cosmetic formula. The responses (dependent variables) were the spreadability (YSp) and viscosity (YVis), the factors (independent variables) being the weight proportions of the fatty phase (X1), the aqueous date seed extract (X2), and the beeswax (X3). Second, the cosmetic stability study was conducted by applying a full factorial design. Here, three responses were considered [spreadability (Sp), viscosity (Vis), and peroxide index (PI)], the independent variables being the concentration of the date seed oil (DSO) (x1), storage temperature (x2), and storage time (x3). Results showed that in the case of mixture design, the second-order polynomial equations correctly described experimental data. Globally, results show that there is a relatively wide composition range to ensure a suitable cosmetic cream from the point of view of Sp and Vis. Regarding the cosmetic stability, the storage time was found to be the most influential factor on both Vis and PI, which are considered here as indicators of physical and chemical stability of the emulsion, respectively. Finally, the elaborated and commercial cosmetics were compared in terms of pH, Sp, and centrifugation test (Ct).

  12. Adaptation by Design: A Context-Sensitive, Dialogic Approach to Interventions

    ERIC Educational Resources Information Center

    Kirshner, Ben; Polman, Joseph L.

    2013-01-01

    Applied researchers, whether working with the framework of design-based research or intervention science, face a similar implementation challenge: practitioners who enact their programs typically do so in varied, context-specific ways. Although this variability is often seen as a problem for those who privilege fidelity and standardization, we…

  13. Vibration Analysis of a Split Path Gearbox

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.; Rashidi, Majid

    1995-01-01

    Split path gearboxes can be attractive alternatives to the common planetary designs for rotorcraft, but because they have seen little use, they are relatively high risk designs. To help reduce the risk of fielding a rotorcraft with a split path gearbox, the vibration and dynamic characteristics of such a gearbox were studied. A mathematical model was developed by using the Lagrangian method, and it was applied to study the effect of three design variables on the natural frequencies and vibration energy of the gearbox. The first design variable, shaft angle, had little influence on the natural frequencies. The second variable, mesh phasing, had a strong effect on the levels of vibration energy, with phase angles of 0 deg and 180 deg producing low vibration levels. The third design variable, the stiffness of the shafts connecting the spur gears to the helical pinions, strongly influenced the natural frequencies of some of the vibration modes, including two of the dominant modes. We found that, to achieve the lowest level of vibration energy, the natural frequencies of these two dominant modes should be less than those of the main excitation sources.

  14. Using Variable Temperature Powder X-Ray Diffraction to Determine the Thermal Expansion Coefficient of Solid MgO

    ERIC Educational Resources Information Center

    Corsepius, Nicholas C.; DeVore, Thomas C.; Reisner, Barbara A.; Warnaar, Deborah L.

    2007-01-01

    A laboratory exercise was developed by using variable temperature powder X-ray diffraction (XRD) to determine [alpha] for MgO (periclase)and was tested in the Applied Physical Chemistry and Materials Characterization Laboratories at James Madison University. The experiment which was originally designed to provide undergraduate students with a…

  15. Effect of spatial variability of storm on the optimal placement of best management practices (BMPs).

    PubMed

    Chang, C L; Chiueh, P T; Lo, S L

    2007-12-01

    It is important to design best management practices (BMPs) and determine their proper placement so that they not only satisfy water quantity and water quality standards but also minimize the total cost of the BMPs. The spatial variability of rainfall can strongly affect the resulting runoff and non-point source pollution (NPSP); consequently, the optimal design and placement of BMPs differ as well. The objective of this study was to examine the relationship between the spatial variability of rainfall and the optimal BMP placements. Three synthetic rainfall storms with varied spatial distributions, comprising uniform rainfall, downstream rainfall and upstream rainfall, were designed. The WinVAST model was applied to predict runoff and NPSP. Additionally, detention ponds and swales were selected as the structural BMPs. Scatter search was applied to find the optimal BMP placement. The results show that the total cost of BMPs is mostly higher under downstream rainfall than under upstream or uniform rainfall. Moreover, the cost of a detention pond is much higher than that of a swale. Thus, even though a detention pond is more efficient at lowering peak flow and pollutant exports, it is not always the set chosen in each subbasin.

  16. Compact variable-temperature scanning force microscope.

    PubMed

    Chuang, Tien-Ming; de Lozanne, Alex

    2007-05-01

    A compact design for a cryogenic variable-temperature scanning force microscope using a fiber-optic interferometer to measure cantilever deflection is presented. The tip-sample coarse approach and the lateral tip positioning are performed by piezoelectric positioners in situ. The microscope has been operated at temperatures between 6 and 300 K. It is designed to fit into an 8 T superconducting magnet with the field applied in the out-of-plane direction. The results of scanning in various modes are demonstrated, showing contrast based on magnetic field gradients or surface potentials.

  17. Case-Crossover Analysis of Air Pollution Health Effects: A Systematic Review of Methodology and Application

    PubMed Central

    Carracedo-Martínez, Eduardo; Taracido, Margarita; Tobias, Aurelio; Saez, Marc; Figueiras, Adolfo

    2010-01-01

    Background: Case-crossover is one of the most used designs for analyzing the health-related effects of air pollution. Nevertheless, no one has reviewed its application and methodology in this context. Objective: We conducted a systematic review of case-crossover (CCO) designs used to study the relationship between air pollution and morbidity and mortality, from the standpoint of methodology and application. Data sources and extraction: A search was made of the MEDLINE and EMBASE databases. Reports were classified as methodologic or applied. From the latter, the following information was extracted: author, study location, year, type of population (general or patients), dependent variable(s), independent variable(s), type of CCO design, and whether effect modification was analyzed for variables at the individual level. Data synthesis: The review covered 105 reports that fulfilled the inclusion criteria. Of these, 24 addressed methodological aspects, and the remainder involved the design’s application. In the methodological reports, the designs that yielded the best results in simulation were symmetric bidirectional CCO and time-stratified CCO. Furthermore, we observed an increase across time in the use of certain CCO designs, mainly symmetric bidirectional and time-stratified CCO. The dependent variables most frequently analyzed were those relating to hospital morbidity; the pollutants most often studied were those linked to particulate matter. Among the CCO-application reports, 13.6% studied effect modification for variables at the individual level. Conclusions: The use of CCO designs has undergone considerable growth; the most widely used designs were those that yielded better results in simulation studies: symmetric bidirectional and time-stratified CCO. However, the advantages of CCO as a method of analysis of variables at the individual level are put to little use. PMID:20356818
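
    The time-stratified referent scheme favoured in the simulation studies above can be sketched in a few lines: for an event day, the control days are all other days in the same calendar month falling on the same day of the week. The example date is arbitrary.

    ```python
    import datetime

    def time_stratified_referents(event_day: datetime.date):
        """All same-weekday days in the event's month, excluding the event day."""
        d = event_day.replace(day=1)
        refs = []
        while d.month == event_day.month:
            if d.weekday() == event_day.weekday() and d != event_day:
                refs.append(d)
            d += datetime.timedelta(days=1)
        return refs

    refs = time_stratified_referents(datetime.date(2010, 3, 17))  # a Wednesday
    print(refs)
    ```

    Because the referent window is a fixed calendar stratum rather than a window anchored on the event, this scheme avoids the overlap bias of unidirectional referent selection.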

  18. A generalized concept for cost-effective structural design. [Statistical Decision Theory applied to aerospace systems

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hawk, J. D.

    1975-01-01

    A generalized concept for cost-effective structural design is introduced. It is assumed that decisions affecting the cost effectiveness of aerospace structures fall into three basic categories: design, verification, and operation. Within these basic categories, certain decisions concerning items such as design configuration, safety factors, testing methods, and operational constraints are to be made. All or some of the variables affecting these decisions may be treated probabilistically. Bayesian statistical decision theory is used as the tool for determining the cost optimum decisions. A special case of the general problem is derived herein, and some very useful parametric curves are developed and applied to several sample structures.

  19. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  20. Longitudinal-control design approach for high-angle-of-attack aircraft

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.; Proffitt, Melissa S.

    1993-01-01

    This paper describes a control synthesis methodology that emphasizes a variable-gain output feedback technique that is applied to the longitudinal channel of a high-angle-of-attack aircraft. The aircraft is a modified F/A-18 aircraft with thrust-vectored controls. The flight regime covers a range up to a Mach number of 0.7; an altitude range from 15,000 to 35,000 ft; and an angle-of-attack (alpha) range up to 70 deg, which is deep into the poststall region. A brief overview is given of the variable-gain mathematical formulation as well as a description of the discrete control structure used for the feedback controller. This paper also presents an approximate design procedure with relationships for the optimal weights for the selected feedback control structure. These weights are selected to meet control design guidelines for high-alpha flight controls. Those guidelines that apply to the longitudinal-control design are also summarized. A unique approach is presented for the feed-forward command generator to obtain smooth transitions between load factor and alpha commands. Finally, representative linear analysis results and nonlinear batch simulation results are provided.
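
    The gain-scheduling idea behind the variable-gain formulation can be sketched as follows: feedback gain matrices designed at a few angle-of-attack points are interpolated at the current alpha before the control is computed. The gain values and measurement vector below are hypothetical, not the F/A-18 design values.

    ```python
    import numpy as np

    design_alphas = np.array([5.0, 15.0, 35.0, 60.0])   # deg, the design points
    # One 1x3 output-feedback gain row per design point (illustrative numbers).
    design_gains = np.array([[1.2, 0.4, 0.05],
                             [1.0, 0.5, 0.08],
                             [0.7, 0.8, 0.15],
                             [0.4, 1.1, 0.30]])

    def gain_at(alpha):
        """Element-wise linear interpolation of the gain row in alpha."""
        a = np.clip(alpha, design_alphas[0], design_alphas[-1])
        return np.array([np.interp(a, design_alphas, design_gains[:, j])
                         for j in range(design_gains.shape[1])])

    y = np.array([0.1, -0.2, 1.0])   # measured outputs (hypothetical)
    u = -gain_at(25.0) @ y           # feedback control at alpha = 25 deg
    print(gain_at(25.0), u)
    ```

    In the actual variable-gain method the gains are produced by a single optimal design over all modelled flight conditions; the interpolation step above only illustrates how a smoothly varying gain is obtained between design points.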

  1. The Effects of Training, Modality, and Redundancy on the Development of a Historical Inquiry Strategy in a Multimedia Learning Environment

    ERIC Educational Resources Information Center

    McNeill, Andrea L.; Doolittle, Peter E.; Hicks, David

    2009-01-01

    The purpose of this study was to assess the effects of training, modality, and redundancy on the participants' ability to apply and recall a historical inquiry strategy. An experimental research design was utilized with presentation mode as the independent variable and strategy application and strategy recall as the dependent variables. The…

  2. Implementation of MCA Method for Identification of Factors for Conceptual Cost Estimation of Residential Buildings

    NASA Astrophysics Data System (ADS)

    Juszczyk, Michał; Leśniak, Agnieszka; Zima, Krzysztof

    2013-06-01

    Conceptual cost estimation is important for construction projects: either underestimation or overestimation of the cost of raising a building may lead to failure of a project. In this paper, the authors present an application of multicriteria comparative analysis (MCA) to select factors influencing the cost of raising residential buildings. The aim of the analysis is to indicate key factors useful for conceptual cost estimation in the early design stage. The key factors are investigated on the basis of elementary information about the function, form and structure of the building, and the primary assumptions about the technological and organizational solutions applied in the construction process. The mentioned factors are treated as variables of a model whose aim is to make conceptual cost estimation possible quickly and with satisfactory accuracy. The whole analysis comprised three steps: preliminary research, choice of a set of potential variables, and reduction of this set to select the final set of variables. Multicriteria comparative analysis is applied in the problem solution. The performed analysis allowed the selection of a group of factors, defined well enough at the conceptual stage of the design process, to be used as the describing variables of the model.

  3. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
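
    The first-order reliability method (FORM) used above reduces, in the simplest case of a linear limit state g = R - S with independent normal capacity R and demand S, to a closed-form reliability index and sensitivity factors. The numbers below are illustrative, not the cylinder example's values.

    ```python
    import math

    mu_R, sd_R = 30.0, 3.0    # capacity (e.g. buckling load), hypothetical
    mu_S, sd_S = 20.0, 2.0    # demand (applied load), hypothetical

    beta = (mu_R - mu_S) / math.hypot(sd_R, sd_S)   # reliability index
    p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))    # Pf = Phi(-beta)

    # FORM sensitivity (importance) factors; their squares sum to one.
    alpha_R = sd_R / math.hypot(sd_R, sd_S)
    alpha_S = -sd_S / math.hypot(sd_R, sd_S)
    print(beta, p_f, alpha_R, alpha_S)
    ```

    Treating the reliability index as a design constraint, as in the report, means holding beta above a target value while the optimizer moves the mean ply thicknesses.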

  4. Sensitivity analysis for axis rotation diagrid structural systems according to brace angle changes

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Kwang; Li, Long-Yang; Park, Sung-Soo

    2017-10-01

    Generally, regular-shaped diagrid structures can express diverse shapes because the braces are installed along the exterior faces of the structures and the structures have no columns. However, since irregular-shaped structures involve diverse variables, studies assessing the behaviors resulting from those variables are continuously required to supplement the imperfections related to them. In the present study, the material's elastic modulus and yield strength were selected as the strength variables applied to diagrid structural systems in the form of Twisters, one of the irregular-shaped building classes defined by Vollers, that affect the structural design of these systems. The purpose of this study is to conduct a sensitivity analysis of axial rotation diagrid structural systems with respect to changes in brace angle, in order to identify the design variables that have relatively larger effects and the tendencies of the sensitivity of the structures according to changes in brace angles and axial rotation angles.

  5. Design and optimization of topical methotrexate loaded niosomes for enhanced management of psoriasis: application of Box-Behnken design, in-vitro evaluation and in-vivo skin deposition study.

    PubMed

    Abdelbary, Aly A; AbouGhaly, Mohamed H H

    2015-05-15

    Psoriasis, a skin disorder characterized by impaired epidermal differentiation, is regularly treated by systemic methotrexate (MTX), an effective cytotoxic drug but with numerous side effects. The aim of this work was to design topical MTX loaded niosomes for management of psoriasis to avoid systemic toxicity. To achieve this goal, MTX niosomes were prepared by thin film hydration technique. A Box-Behnken (BB) design, using Design-Expert(®) software, was employed to statistically optimize formulation variables. Three independent variables were evaluated: MTX concentration in hydration medium (X1), total weight of niosomal components (X2) and surfactant: cholesterol ratio (X3). The encapsulation efficiency percent (Y1: EE%) and particle size (Y2: PS) were selected as dependent variables. The optimal formulation (F12) displayed spherical morphology under transmission electron microscopy (TEM), optimum particle size of 1375.00 nm and high EE% of 78.66%. In-vivo skin deposition study showed that the highest value of percentage drug deposited (22.45%) and AUC0-10 (1.15 mg.h/cm(2)) of MTX from niosomes were significantly greater than that of drug solution (13.87% and 0.49 mg.h/cm(2), respectively). Moreover, in-vivo histopathological studies confirmed safety of topically applied niosomes. Concisely, the results showed that targeted MTX delivery might be achieved using topically applied niosomes for enhanced treatment of psoriasis. Copyright © 2015 Elsevier B.V. All rights reserved.
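
    The three-factor Box-Behnken design used above can be constructed directly: all +/-1 combinations on each pair of factors with the third factor held at its centre level, plus replicate centre points. The coded levels are standard; the number of centre points below is illustrative.

    ```python
    import itertools

    def box_behnken_3(centre_points=1):
        """Coded runs of a 3-factor Box-Behnken design."""
        runs = []
        for i, j in itertools.combinations(range(3), 2):   # each factor pair
            for a, b in itertools.product((-1, 1), repeat=2):
                run = [0, 0, 0]
                run[i], run[j] = a, b                      # third factor at centre
                runs.append(run)
        runs += [[0, 0, 0]] * centre_points
        return runs

    design = box_behnken_3(centre_points=3)
    print(len(design))  # 12 edge-midpoint runs + 3 centre runs = 15
    ```

    Each coded run is then mapped to the physical levels of X1-X3 before the experiments are executed and the quadratic model is fitted.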

  6. A quality by design approach to understand formulation and process variability in pharmaceutical melt extrusion processes.

    PubMed

    Patwardhan, Ketaki; Asgarzadeh, Firouz; Dassinger, Thomas; Albers, Jessica; Repka, Michael A

    2015-05-01

    In this study, the principles of quality by design (QbD) have been uniquely applied to a pharmaceutical melt extrusion process for an immediate-release formulation with a low-melting model drug, ibuprofen. Two qualitative risk assessment tools, a fishbone diagram and a failure mode effect analysis, were utilized to strategically narrow down the most influential parameters. Selected variables were further assessed using a Plackett-Burman screening study, which was upgraded to a response surface design consisting of the critical factors to study the interactions between the study variables. In-process torque, glass transition temperature (Tg) of the extrudates, assay, dissolution and phase change were measured as responses to evaluate the critical quality attributes (CQAs) of the extrudates. The effect of each study variable on the measured responses was analysed using multiple regression for the screening design and partial least squares for the optimization design. Experimental limits for formulation and process parameters to attain optimum processing have been outlined. A design space plot describing the domain of experimental variables within which the CQAs remained unchanged was developed. A comprehensive approach to melt extrusion product development based on the QbD methodology has been demonstrated. Drug loading concentrations between 40-48% w/w and extrusion temperatures in the range of 90-130°C were found to be optimal. © 2015 Royal Pharmaceutical Society.
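
    The 12-run Plackett-Burman screening design mentioned above is generated from a known 11-element row: its cyclic shifts give 11 runs, and a final all-minus run completes the design for up to 11 two-level factors. The generator row is the standard published one; which physical factor maps to which column is left open.

    ```python
    import numpy as np

    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]  # standard N=12 generator
    rows = [np.roll(gen, k) for k in range(11)]          # 11 cyclic shifts
    design = np.array(rows + [[-1] * 11])                # plus the all-minus run

    print(design.shape)  # (12, 11): 12 runs, up to 11 factors
    ```

    The resulting columns are balanced and mutually orthogonal, which is what lets main effects be estimated independently in the screening stage.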

  7. Designing low-carbon power systems for Great Britain in 2050 that are robust to the spatiotemporal and inter-annual variability of weather

    NASA Astrophysics Data System (ADS)

    Zeyringer, Marianne; Price, James; Fais, Birgit; Li, Pei-Hao; Sharp, Ed

    2018-05-01

    The design of cost-effective power systems with high shares of variable renewable energy (VRE) technologies requires a modelling approach that simultaneously represents the whole energy system combined with the spatiotemporal and inter-annual variability of VRE. Here, we soft-link a long-term energy system model, which explores new energy system configurations from years to decades, with a high spatial and temporal resolution power system model that captures VRE variability from hours to years. Applying this methodology to Great Britain for 2050, we find that VRE-focused power system design is highly sensitive to the inter-annual variability of weather and that planning based on a single year can lead to operational inadequacy and failure to meet long-term decarbonization objectives. However, some insights do emerge that are relatively stable to weather-year. Reinforcement of the transmission system consistently leads to a decrease in system costs while electricity storage and flexible generation, needed to integrate VRE into the system, are generally deployed close to demand centres.

  8. A Probabilistic Design Method Applied to Smart Composite Structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1995-01-01

    A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
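
    The probabilistic-design reasoning above, that shrinking the scatter of an influential random variable lowers the failure probability, can be sketched with a Monte Carlo estimate. The limit state and distributions below are hypothetical stand-ins for the smart-wing example.

    ```python
    import numpy as np

    def failure_prob(sd_stiff, sd_load, n=200_000, seed=7):
        """Monte Carlo estimate of P(response exceeds its allowable limit)."""
        rng = np.random.default_rng(seed)
        stiffness = rng.normal(10.0, sd_stiff, n)   # e.g. actuated-structure stiffness
        load = rng.normal(6.0, sd_load, n)          # e.g. random impact load
        tip_angle = load / stiffness                # crude response model
        return np.mean(tip_angle > 1.0)             # exceeds allowable actuated angle

    p0 = failure_prob(sd_stiff=2.0, sd_load=1.0)
    p_reduced = failure_prob(sd_stiff=1.0, sd_load=1.0)  # tighter manufacturing scatter
    print(p0, p_reduced)
    ```

    Halving the scatter of the dominant variable cuts the estimated failure probability by more than an order of magnitude here, mirroring the sensitivity-factor argument in the abstract.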

  9. A Complex Systems Approach to Causal Discovery in Psychiatry.

    PubMed

    Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin

    2016-01-01

Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach, the Complex Systems-Causal Network (CS-CN) method, designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.

  10. 40 CFR Appendix I to Part 60 - Removable Label and Owner's Manual

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... various label types that may apply. 2.2 Certified Wood Heaters The design and content of certified wood... three variables listed above. Figures 1 and 2 illustrate the variations in label design. Figure 1 is a... general layout, the type font and type size illustrated in Figures 1 and 2. 2.2.1 Identification and...

  11. 40 CFR Appendix I to Part 60 - Removable Label and Owner's Manual

    Code of Federal Regulations, 2012 CFR

    2012-07-01

... various label types that may apply. 2.2 Certified Wood Heaters The design and content of certified wood... three variables listed above. Figures 1 and 2 illustrate the variations in label design. Figure 1 is a... general layout, the type font and type size illustrated in Figures 1 and 2. 2.2.1 Identification and...

  12. 40 CFR Appendix I to Part 60 - Removable Label and Owner's Manual

    Code of Federal Regulations, 2013 CFR

    2013-07-01

... various label types that may apply. 2.2 Certified Wood Heaters The design and content of certified wood... three variables listed above. Figures 1 and 2 illustrate the variations in label design. Figure 1 is a... general layout, the type font and type size illustrated in Figures 1 and 2. 2.2.1 Identification and...

  13. Selected mesostructure properties in loblolly pine from Arkansas plantations

    Treesearch

    David E. Kretschmann; Steven M. Cramer; Roderic Lakes; Troy Schmidt

    2006-01-01

    Design properties of wood are currently established at the macroscale, assuming wood to be a homogeneous orthotropic material. The resulting variability from the use of such a simplified assumption has been handled by designing with lower percentile values and applying a number of factors to account for the wide statistical variation in properties. With managed...

  14. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another in a two-design-variable problem with a known theoretical response function. Next, the methods are tested in a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomials with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.

  15. Enhancing Induction Coil Reliability

    NASA Astrophysics Data System (ADS)

    Kreter, K.; Goldstein, R.; Yakey, C.; Nemkov, V.

    2014-12-01

    In induction hardening, thermal fatigue is one of the main copper failure modes of induction heat treating coils. There have been papers published that describe this failure mode and others that describe some good design practices. The variables previously identified as the sources of thermal fatigue include radiation from the part surface, frequency, current, concentrator losses, water pressure and coil wall thickness. However, there is very little quantitative data on the factors that influence thermal fatigue in induction coils is available in the public domain. By using finite element analysis software this study analyzes the effect of common design variables of inductor cooling, and quantifies the relative importance of these variables. A comprehensive case study for a single shot induction coil with Fluxtrol A concentrator applied is used for the analysis.

  16. Enhanced production of laccase from Coriolus versicolor NCIM 996 by nutrient optimization using response surface methodology.

    PubMed

    Arockiasamy, Santhiagu; Krishnan, Indira Packialakshmi Gurusamy; Anandakrishnan, Nimalanandan; Seenivasan, Sabitha; Sambath, Agalya; Venkatasubramani, Janani Priya

    2008-12-01

Plackett-Burman design criterion and central composite design were applied successfully for enhanced production of laccase by Coriolus versicolor NCIM 996 for the first time. The Plackett-Burman design criterion was applied to screen the significance of ten nutrients on laccase production by C. versicolor NCIM 996. Out of the ten nutrients tested, starch, yeast extract, MnSO4, MgSO4·7H2O, and phenol were found to have a significant effect on laccase production. A central composite design was applied to determine the optimum concentrations of the significant variables obtained from the Plackett-Burman design. The optimized medium composition for production of laccase was (g/l): starch, 30.0; yeast extract, 4.53; MnSO4, 0.002; MgSO4·7H2O, 0.755; and phenol, 0.026, and the optimum laccase production was 6,590.26 U/l, which was 7.6 times greater than the control.
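The screening step can be illustrated with the classic 12-run Plackett-Burman construction: cyclic shifts of a generator row plus a final row at the low level, with each main effect read off directly thanks to column orthogonality. The response values are simulated here, not the laccase data:

```python
import numpy as np

# Classic 12-run Plackett-Burman generator row (+1/-1 coded factors)
gen = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
rows = [np.roll(gen, i) for i in range(11)]
X = np.array(rows + [[-1] * 11])        # 12 runs x 11 factor columns

# Hypothetical response: only factors 0 and 1 truly matter
rng = np.random.default_rng(0)
y = 50 + 8 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.5, 12)

# Main effect of each factor: (mean at +1) - (mean at -1)
effects = 2 * X.T @ y / 12
print(np.round(effects, 2))  # factors 0 and 1 stand out; the rest are near zero
```

Factors whose effects stand out from the noise are then carried into the central composite design for optimization.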

  17. Variable-frequency synchronous motor drives for electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chalmers, B.J.; Musaba, L.; Gosden, D.F.

    1996-07-01

The performance capability envelope of a variable-frequency, permanent-magnet synchronous motor drive with field weakening is dependent upon the product of maximum current and direct-axis inductance. To obtain a performance characteristic suitable for a typical electric vehicle drive, in which a short-term increase of current is applied, it is necessary to design for an optimum value of direct-axis inductance. The paper presents an analysis of a hybrid motor design which uses a two-part rotor construction comprising a surface-magnet part and an axially laminated reluctance part. This arrangement combines the properties of all other types of synchronous motor and offers a greater choice of design variables. It is shown that the desired form of performance may be achieved when the high-inductance axis of the reluctance part is arranged to lead the magnet axis by 90° (elec.).

Statistical optimization of lovastatin production by Omphalotus olearius (DC.) Singer in submerged fermentation.

    PubMed

    Atlı, Burcu; Yamaç, Mustafa; Yıldız, Zeki; Isikhuemhen, Omoanghe S

    2016-01-01

In this study, culture conditions were optimized to improve lovastatin production by Omphalotus olearius, isolate OBCC 2002, using statistical experimental designs. The Plackett-Burman design was used to select important variables affecting lovastatin production. Accordingly, glucose, peptone, and agitation speed were determined to be the variables influencing lovastatin production. In a further experiment, these variables were optimized with a Box-Behnken design and applied in a submerged process; this resulted in 12.51 mg/L lovastatin production on a medium containing glucose (10 g/L), peptone (5 g/L), thiamine (1 mg/L), and NaCl (0.4 g/L) under static conditions. This level of lovastatin production is eight times higher than that produced under unoptimized media and growth conditions by Omphalotus olearius. To the best of our knowledge, this is the first attempt to optimize the submerged fermentation process for lovastatin production by Omphalotus olearius.

  19. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
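Why angular velocity dominates the prediction error follows directly from first-order propagation of uncertainty through a centripetal load F = m ω² r: the squared sensitivity to ω carries a factor of 2ω. A schematic sketch with invented values (not the UT-36 calibration data):

```python
# Hypothetical calibration point: applied load from centripetal acceleration,
# F = m * omega^2 * r (mass in kg, radius in m, angular velocity in rad/s)
m, r, omega = 2.0, 0.5, 30.0
sm, sr, somega = 0.001, 0.0005, 0.15   # assumed 1-sigma uncertainties

F = m * omega**2 * r

# First-order propagation: variance contribution = (sensitivity * sigma)^2
contrib = {
    "mass":   (omega**2 * r * sm) ** 2,
    "radius": (m * omega**2 * sr) ** 2,
    "omega":  (2 * m * omega * r * somega) ** 2,   # quadratic dependence on omega
}
sF = sum(contrib.values()) ** 0.5
print(F, round(sF, 2), max(contrib, key=contrib.get))  # omega term dominates
```

Even a modest relative uncertainty in ω is doubled in relative terms in F, which is consistent with the abstract's finding.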

  20. Recent advancements in GRACE mascon regularization and uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Loomis, B. D.; Luthcke, S. B.

    2017-12-01

    The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.
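The interplay the abstract describes between regularization strength, recovered signal, and post-fit residuals can be sketched with a generic Tikhonov-style regularized least-squares step, x = (AᵀA + μR)⁻¹Aᵀb. The matrices below are made up; this is not GSFC's mascon estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))           # stand-in design matrix
x_true = np.zeros(10)
x_true[3] = 1.0                          # one "mascon" carries the signal
b = A @ x_true + rng.normal(0, 0.1, 50)  # noisy observations

def solve(mu):
    # Regularized normal equations with an identity regularization matrix
    return np.linalg.solve(A.T @ A + mu * np.eye(10), A.T @ b)

for mu in (0.0, 1.0, 100.0):
    x = solve(mu)
    resid = np.linalg.norm(A @ x - b)
    print(f"mu={mu:6.1f}  residual={resid:.3f}  |x|={np.linalg.norm(x):.3f}")
```

Stronger regularization damps the recovered signal and raises the post-fit residual, which is why tuning the regularization against pre-fit/post-fit residuals, as in the abstract, is a meaningful iteration.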

  1. Variable frequency inverter for ac induction motors with torque, speed and braking control

    NASA Technical Reports Server (NTRS)

    Nola, F. J. (Inventor)

    1975-01-01

A variable frequency inverter was designed for driving an ac induction motor; it varies the frequency and voltage applied to the motor windings in response to varying torque requirements so that the applied voltage amplitude and frequency are of optimal value for any motor load and speed requirement. The slip frequency of the motor is caused to vary proportionally to the torque, and feedback is provided so that the most efficient operating voltage is applied to the motor. Winding current surge is limited, and a controlled negative slip causes motor braking and return of load energy to a dc power source.
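The control law in this record (slip proportional to torque demand, voltage tracking frequency) can be sketched as a scalar V/f command. All constants are illustrative assumptions, not values from the patent:

```python
# Schematic V/f command for the inverter described above: commanded slip is
# proportional to torque demand, and voltage tracks stator frequency with a
# low-speed boost. Gains are hypothetical, not from the patented design.
def inverter_command(rotor_speed_hz, torque_demand, k_slip=0.05,
                     v_per_hz=4.0, v_boost=8.0):
    slip_hz = k_slip * torque_demand        # slip tracks torque demand
    stator_hz = rotor_speed_hz + slip_hz    # negative demand -> braking slip
    volts = v_boost + v_per_hz * abs(stator_hz)
    return stator_hz, volts

print(inverter_command(30.0, 20.0))   # motoring: stator above rotor frequency
print(inverter_command(30.0, -20.0))  # braking: controlled negative slip
```

With a negative torque demand the commanded stator frequency drops below the rotor frequency, producing the regenerative braking the abstract mentions.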

  2. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  3. Characteristics of temperature rise in variable inductor employing magnetorheological fluid driven by a high-frequency pulsed voltage source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho-Young; Kang, In Man, E-mail: imkang@ee.knu.ac.kr; Shon, Chae-Hwa

    2015-05-07

A variable inductor with magnetorheological (MR) fluid has been successfully applied to power electronics applications; however, its thermal characteristics have not been investigated. To evaluate the performance of the variable inductor with respect to temperature, we measured the characteristics of temperature rise and developed a numerical analysis technique. The characteristics of temperature rise were determined experimentally and verified numerically by adopting a multiphysics analysis technique. In order to accurately estimate the temperature distribution in a variable inductor with an MR fluid-gap, the thermal solver should import the heat source from the electromagnetic solver to solve the eddy current problem. To improve accuracy, the B–H curves of the MR fluid under operating temperature were obtained using the magnetic property measurement system. In addition, the Steinmetz equation was applied to evaluate the core loss in a ferrite core. The predicted temperature rise for a variable inductor showed good agreement with the experimental data, and the developed numerical technique can be employed to design a variable inductor with a high-frequency pulsed voltage source.
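The Steinmetz equation mentioned above is a power-law fit for core loss, P_v = k·f^α·B^β. A minimal sketch with illustrative coefficients (real ferrite parameters are fitted from datasheet loss curves):

```python
# Steinmetz core-loss estimate; k, alpha, beta below are illustrative
# placeholders, not measured values for any real ferrite material.
def steinmetz_loss(f_hz, b_peak_t, k=1.5e-3, alpha=1.4, beta=2.5):
    """Volumetric core loss P_v = k * f^alpha * B^beta (arbitrary units here)."""
    return k * f_hz**alpha * b_peak_t**beta

print(round(steinmetz_loss(100e3, 0.2), 1))  # -> 268.3
```

Loss grows superlinearly in both frequency and peak flux density, which is why the high-frequency pulsed excitation in this study makes core loss a significant heat source.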

  4. The Level of Test-Wiseness for the Students of Arts and Science Faculty at Sharourah and Its Relationship with Some Variables

    ERIC Educational Resources Information Center

    Otoum, Abedalqader; Khalaf, Hisham Bani; Bajbeer, Abedalqader; Hamad, Hassan Bani

    2015-01-01

This study aimed to identify the level of use of Test-wiseness strategies among the students of the arts and sciences faculty at Sharourah and its relationship with some variables. A questionnaire was designed consisting of (29) items measuring three domains of Test-wiseness strategies. It was applied to a sample of (299) students.…

  5. Sensitivity analysis of navy aviation readiness based sparing model

    DTIC Science & Technology

    2017-09-01

variability. (See Figure 4, Research design flowchart.) Figure 4 lays out the four steps of the methodology, starting in the upper left-hand...as a function of changes in key inputs. We develop NAVARM Experimental Designs (NED), a computational tool created by applying a state-of-the-art...experimental design to the NAVARM model. Statistical analysis of the resulting data identifies the most influential cost factors. Those are, in order of

  6. Learning CAD at University through Summaries of the Rules of Design Intent

    ERIC Educational Resources Information Center

    Barbero, Basilio Ramos; Pedrosa, Carlos Melgosa; Samperio, Raúl Zamora

    2017-01-01

The ease with which 3D CAD models may be modified and the ease with which they may be reused are two key aspects that improve the design-intent variable and that can significantly shorten the development timelines of a product. A set of rules is gathered from various authors that take different 3D modelling strategies into account. These rules are then applied to CAD…

  7. Robust optimization of a tandem grating solar thermal absorber

    NASA Astrophysics Data System (ADS)

    Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae

    2018-04-01

Ideal solar thermal absorbers need to have a high spectral absorptance across the broad solar spectrum to utilize solar radiation effectively. The majority of recent studies of solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is not easy in practice; unavoidable errors always occur to some extent in the dimensions of fabricated nanostructures, causing an undesirable deviation in absorption performance between the designed structure and the actually fabricated one. In order to minimize the variation in the solar absorptance due to fabrication error, robust optimization can be performed during the design process. However, optimization of a solar thermal absorber considering all design variables often requires tremendous computational cost to find an optimum combination of design variables with robustness as well as high performance. To achieve this goal, we apply robust optimization using the Kriging method and the genetic algorithm for designing a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, the computational cost can be substantially reduced because exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation due to fabrication uncertainty in the design variables.
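The surrogate-plus-GA workflow can be sketched in one dimension: interpolate a few expensive evaluations with an RBF (simple kriging) model, then run a cheap evolutionary search on the surrogate. The "absorptance" function, kernel lengthscale, and GA settings are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_absorptance(x):           # stand-in for the full EM simulation
    return np.exp(-(x - 0.6) ** 2 / 0.02)

# --- RBF (simple kriging) surrogate built from a handful of samples ---
Xs = np.linspace(0, 1, 8)
ys = expensive_absorptance(Xs)
def k(a, b, ell=0.15):                  # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
w = np.linalg.solve(k(Xs, Xs) + 1e-10 * np.eye(8), ys)
surrogate = lambda x: k(np.atleast_1d(x), Xs) @ w

# --- tiny genetic algorithm run on the cheap surrogate ---
pop = rng.uniform(0, 1, 30)
for _ in range(40):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[-10:]]                       # selection
    pop = np.clip(rng.choice(parents, 30)
                  + rng.normal(0, 0.02, 30), 0, 1)             # mutation
best = pop[np.argmax(surrogate(pop))]
print(round(float(best), 2))  # near the true optimum at x = 0.6
```

Every GA generation costs only surrogate evaluations; the expensive simulator is called just eight times, which is the point of the surrogate approach.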

  8. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
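The "numerical derivatives with complex variables" mentioned above refer to the complex-step method, which avoids the subtractive cancellation of finite differencing. A minimal sketch with a made-up cost function (not the SOFC model):

```python
import numpy as np

def cost(v):                       # hypothetical smooth cost function
    return np.exp(v) / np.sqrt(v)

v0 = 1.0
analytic = np.exp(1.0) * 0.5       # d/dv [e^v / sqrt(v)] at v = 1 is e/2

fd = (cost(v0 + 1e-8) - cost(v0)) / 1e-8    # forward finite difference
cs = cost(v0 + 1e-200j).imag / 1e-200       # complex-step derivative
print(abs(fd - analytic), abs(cs - analytic))  # complex step: ~machine precision
```

Because the complex step involves no subtraction of nearly equal numbers, the step size can be made absurdly small without roundoff, giving derivatives accurate to machine precision; adjoint methods then make the cost independent of the number of design variables.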

  9. Optimization of Mineral Separator for Recovery of Total Heavy Minerals of Bay of Bengal using Central Composite Design

    NASA Astrophysics Data System (ADS)

    Routray, Sunita; Swain, Ranjita; Rao, Raghupatruni Bhima

    2017-04-01

The present study is aimed at investigating the optimization of a mineral separator for processing beach sand minerals of the Bay of Bengal along the Ganjam-Rushikulya coast. The central composite design matrix and response surface methodology were applied in designing the experiments to evaluate the interactive effects of the three most important operating variables: feed quantity, wash water rate, and shake amplitude of the deck. The predicted values were found to be in good agreement with the experimental values (R2 = 0.97 for grade and 0.98 for recovery). To understand the impact of each variable, three-dimensional (3D) plots were also developed for the estimated responses.
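The central composite design plus response-surface workflow can be sketched for two coded factors: factorial, axial, and replicated center points, followed by a second-order least-squares fit and an R² check. The response values are simulated, not the separator data:

```python
import numpy as np

# Face-centred central composite design for two coded factors in [-1, 1]
factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
axial     = [(-1, 0), (1, 0), (0, -1), (0, 1)]
center    = [(0, 0)] * 3
X = np.array(factorial + axial + center, float)

# Hypothetical quadratic response with a little noise
rng = np.random.default_rng(3)
y = (80 + 5 * X[:, 0] + 3 * X[:, 1] - 4 * X[:, 0]**2
     - 2 * X[:, 0] * X[:, 1] + rng.normal(0, 0.3, len(X)))

# Second-order response-surface model: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ coef)**2) / np.sum((y - y.mean())**2)
print(np.round(coef, 1), round(r2, 3))  # recovers the planted coefficients
```

The replicated center points provide a pure-error estimate, and the axial points are what let the quadratic curvature terms be estimated at all.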

  10. A Data-Driven Design Evaluation Tool for Handheld Device Soft Keyboards

    PubMed Central

    Trudeau, Matthieu B.; Sunderland, Elsie M.; Jindrich, Devin L.; Dennerlein, Jack T.

    2014-01-01

Thumb interaction is a primary technique used to operate small handheld devices such as smartphones. Despite the different techniques involved in operating a handheld device compared to a personal computer, the keyboard layouts for both devices are similar. A handheld device keyboard that considers the physical capabilities of the thumb may improve user experience. We developed and applied a design evaluation tool for different geometries of the QWERTY keyboard using a performance evaluation model. The model utilizes previously collected data on thumb motor performance and posture for different tap locations and thumb movement directions. We calculated a performance index (PITOT, 0 is worst and 2 is best) for 663 designs consisting of different combinations of three variables: the keyboard's radius of curvature (R) (mm), orientation (O) (°), and vertical location on the screen (L). The current standard keyboard performed poorly (PITOT = 0.28) compared to other designs considered. Keyboard location (L) contributed to the greatest variability in performance out of the three design variables, suggesting that designers should modify this variable first. Performance was greatest for designs in the middle keyboard location. In addition, having a slightly upward curve (R = −20 mm) and an orientation perpendicular to the thumb's long axis (O = −20°) improved performance to PITOT = 1.97. Poorest performances were associated with placement of the keyboard's spacebar in the bottom right corner of the screen (e.g., the worst was for R = 20 mm, O = 40°, L = Bottom (PITOT = 0.09)). While this evaluation tool can be used in the design process as an ergonomic reference to promote user motor performance, other design variables such as visual access and usability still remain unexplored. PMID:25211465

  11. Designed experiment evaluation of key variables affecting the cutting performance of rotary instruments.

    PubMed

    Funkenbusch, Paul D; Rotella, Mario; Ercoli, Carlo

    2015-04-01

    Laboratory studies of tooth preparation are often performed under a limited range of conditions involving single values for all variables other than the 1 being tested. In contrast, in clinical settings not all variables can be tightly controlled. For example, a new dental rotary cutting instrument may be tested in the laboratory by making a specific cut with a fixed force, but in clinical practice, the instrument must make different cuts with individual dentists applying a range of different forces. Therefore, the broad applicability of laboratory results to diverse clinical conditions is uncertain and the comparison of effects across studies is difficult. The purpose of this study was to examine the effect of 9 process variables on dental cutting in a single experiment, allowing each variable to be robustly tested over a range of values for the other 8 and permitting a direct comparison of the relative importance of each on the cutting process. The effects of 9 key process variables on the efficiency of a simulated dental cutting operation were measured. A fractional factorial experiment was conducted by using a computer-controlled, dedicated testing apparatus to simulate dental cutting procedures and Macor blocks as the cutting substrate. Analysis of Variance (ANOVA) was used to judge the statistical significance (α=.05). Five variables consistently produced large, statistically significant effects (target applied load, cut length, starting rpm, diamond grit size, and cut type), while 4 variables produced relatively small, statistically insignificant effects (number of cooling ports, rotary cutting instrument diameter, disposability, and water flow rate). The control exerted by the dentist, simulated in this study by targeting a specific level of applied force, was the single most important factor affecting cutting efficiency. Cutting efficiency was also significantly affected by factors simulating patient/clinical circumstances as well as hardware choices. 
These results highlight the importance of local clinical conditions (procedure, dentist) in understanding dental cutting procedures and in designing adequate experimental methodologies for future studies. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  12. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
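Two of the propagation methods named above, the first-order method of moments and Monte Carlo simulation, can be contrasted on a toy weight function. The response and input uncertainties are invented, not the aircraft model's:

```python
import numpy as np

def weight(x1, x2):                # hypothetical aircraft-weight response
    return 1000 + 40 * x1 + 25 * x2 + 6 * x1 * x2

mu, sigma = np.array([2.0, 3.0]), np.array([0.1, 0.2])

# Method of moments (first order): mean at the means, variance from gradients
g = np.array([40 + 6 * mu[1], 25 + 6 * mu[0]])   # partial derivatives at mu
mom_mean = weight(*mu)
mom_std = np.sqrt(np.sum((g * sigma) ** 2))

# Monte Carlo simulation with independent normal inputs
rng = np.random.default_rng(4)
s = weight(rng.normal(mu[0], sigma[0], 100_000),
           rng.normal(mu[1], sigma[1], 100_000))
print(mom_mean, round(mom_std, 2), round(s.mean(), 1), round(s.std(), 2))
```

For this mildly nonlinear response the two methods agree closely; the discontinuous design spaces mentioned in the abstract are precisely where the gradient-based moment method breaks down and sampling methods remain usable.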

  13. Treatment of dyeing wastewater by TiO2/H2O2/UV process: experimental design approach for evaluating total organic carbon (TOC) removal efficiency.

    PubMed

    Lee, Seung-Mok; Kim, Young-Gyu; Cho, Il-Hyoung

    2005-01-01

Optimal operating conditions for treating dyeing wastewater were investigated by using factorial design and response surface methodology (RSM). The experiment was statistically designed and carried out according to a 2² full factorial design with four factorial points, three center points, and four axial points. Linear and nonlinear regression were then applied to the data using SAS package software. The independent variables were TiO2 dosage and H2O2 concentration, and the total organic carbon (TOC) removal efficiency of the dyeing wastewater was the dependent variable. From the factorial design and response surface methodology (RSM), a maximum removal efficiency of 85% was obtained at a TiO2 dosage of 1.82 g/L and an H2O2 concentration of 980 mg/L with an oxidation time of 20 min.

  14. Application of design of experiments for formulation development and mechanistic evaluation of iontophoretic tacrine hydrochloride delivery.

    PubMed

    Patel, Niketkumar; Jain, Shashank; Madan, Parshotam; Lin, Senshang

    2016-11-01

The objective of this investigation is to develop a mathematical equation to understand the impact of variables and establish statistical control over transdermal iontophoretic delivery of tacrine hydrochloride. In addition, the possibility of using conductivity measurements as a tool for predicting the ionic mobility of the participating ions in iontophoretic delivery was explored. A central composite design was applied to study the effect of independent variables such as current strength, buffer molarity, and drug concentration on iontophoretic tacrine permeation flux. Molar conductivity was determined to evaluate the electro-migration of tacrine ions with application of Kohlrausch's law. The developed mathematical equation not only reveals drug concentration as the most significant variable regulating tacrine permeation, followed by current strength and buffer molarity, but is also capable of optimizing tacrine permeation with the respective combination of independent variables to achieve the desired therapeutic plasma concentration of tacrine in the treatment of Alzheimer's disease. Moreover, relatively higher mobility of sodium and chloride ions was observed as compared to the estimated tacrine ion mobility. This investigation utilizes the design of experiments approach and extends the primary understanding of the impact of electronic and formulation variables on tacrine permeation for the formulation development of iontophoretic tacrine delivery.

  15. Selecting Design Parameters for Flying Vehicles

    NASA Astrophysics Data System (ADS)

    Makeev, V. I.; Strel'nikova, E. A.; Trofimenko, P. E.; Bondar', A. V.

    2013-09-01

Studying the influence of a number of design parameters of solid-propellant rockets on the longitudinal and lateral dispersion is an important applied problem. A mathematical model of a rigid body of variable mass moving in a disturbed medium exerting both wave drag and friction is considered. The model makes it possible to determine the coefficients of the aerodynamic forces and moments that affect the motion of such vehicles and to assess the effect of design parameters on their accuracy.

  16. A multichannel fiber optic photometer present performance and future developments

    NASA Technical Reports Server (NTRS)

    Barwig, H.; Schoembs, R.; Huber, G.

    1988-01-01

    A three channel photometer for simultaneous multicolor observations was designed with the aim of making possible highly efficient photometry of fast variable objects like cataclysmic variables. Experiences with this instrument over a period of three years are presented. Aspects of the special techniques applied are discussed with respect to high precision photometry. In particular, the use of fiber optics is critically analyzed. Finally, the development of a new photometer concept is discussed.

  17. The Influence of PV Module Materials and Design on Solder Joint Thermal Fatigue Durability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosco, Nick; Silverman, Timothy J.; Kurtz, Sarah

    Finite element model (FEM) simulations have been performed to elucidate the effect of flat plate photovoltaic (PV) module materials and design on PbSn eutectic solder joint thermal fatigue durability. The statistical method of Latin Hypercube sampling was employed to investigate the sensitivity of simulated damage to each input variable. Variables of laminate material properties and their thicknesses were investigated. Using analysis of variance, we determined that the rate of solder fatigue was most sensitive to solder layer thickness, with copper ribbon and silicon thickness being the next two most sensitive variables. By simulating both accelerated thermal cycles (ATCs) and PV cell temperature histories through two characteristic days of service, we determined that the acceleration factor between the ATC and outdoor service was independent of the variables sampled in this study. This result implies that an ATC test will represent a similar time of outdoor exposure for a wide range of module designs. This is an encouraging result for the standard ATC that must be universally applied across all modules.
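Latin Hypercube sampling, as used above to explore the input space efficiently, stratifies each variable's range so every stratum is sampled exactly once per variable. A minimal sketch on the unit hypercube follows; the variable names in the comment are examples suggested by the abstract, and the actual ranges and sample counts of the study are not reproduced here.

```python
import random

def latin_hypercube(n_samples, n_vars, rng=random.Random(0)):
    """Latin Hypercube sample on [0, 1)^n_vars: each variable's range is
    split into n_samples equal strata, and each stratum is used exactly
    once per variable (a space-filling alternative to plain random sampling)."""
    columns = []
    for _ in range(n_vars):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random pairing of strata across variables
        columns.append([(s + rng.random()) / n_samples for s in strata])
    # Assemble one sample point per row.
    return [[columns[j][i] for j in range(n_vars)] for i in range(n_samples)]

# e.g. 10 FEM runs over 4 normalised inputs (solder, ribbon, silicon,
# encapsulant thickness -- illustrative names only)
pts = latin_hypercube(10, 4)
```

Each row would then be rescaled to the physical range of its variable before being handed to the FEM solver.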

  18. Media milling process optimization for manufacture of drug nanoparticles using design of experiments (DOE).

    PubMed

    Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj

    2015-01-01

    Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study to understand the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results were computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume. Responses analyzed for evaluating these effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the software Design-Expert® to evaluate both the repeatability and reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). The desirability function was used to optimize the response variables, and the predicted responses were in agreement with the experimental values. These results demonstrated the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size. The study indicates the applicability of the DOE approach to optimize critical process parameters in the manufacture of drug nanoparticles.
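The desirability function mentioned above combines several responses into one score to optimise jointly. A minimal Derringer-style sketch follows; the targets (5 h milling time, 400 nm d90) echo the abstract, but the upper bounds and weights here are invented for illustration and are not the study's settings.

```python
def desirability_smaller_is_better(y, target, upper, weight=1.0):
    """Derringer-type individual desirability for a response to be
    minimised: 1 at or below `target`, 0 at or above `upper`,
    a power-law ramp in between."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return ((upper - y) / (upper - target)) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# e.g. milling time (h) targeted below 5 h, particle size d90 (nm) below 400 nm;
# upper bounds 10 h and 800 nm are illustrative assumptions.
D = overall_desirability([
    desirability_smaller_is_better(4.2, 5.0, 10.0),     # milling time
    desirability_smaller_is_better(380.0, 400.0, 800.0),  # particle size d90
])
```

The optimiser then searches the factor space (motor speed, pump speed, bead volume) for the settings that maximise the overall desirability D.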

  19. A comprehensive strategy in the development of a cyclodextrin-modified microemulsion electrokinetic chromatographic method for the assay of diclofenac and its impurities: Mixture-process variable experiments and quality by design.

    PubMed

    Orlandini, S; Pasquini, B; Caprini, C; Del Bubba, M; Squarcialupi, L; Colotta, V; Furlanetto, S

    2016-09-30

    A comprehensive strategy involving the use of mixture-process variable (MPV) approach and Quality by Design principles has been applied in the development of a capillary electrophoresis method for the simultaneous determination of the anti-inflammatory drug diclofenac and its five related substances. The selected operative mode consisted of microemulsion electrokinetic chromatography with the addition of methyl-β-cyclodextrin. The critical process parameters included both the mixture components (MCs) of the microemulsion and the process variables (PVs). The MPV approach allowed the simultaneous investigation of the effects of MCs and PVs on the critical resolution between diclofenac and its 2-deschloro-2-bromo analogue and on analysis time. MPV experiments were used both in the screening phase and in the Response Surface Methodology, making it possible to draw MCs and PVs contour plots and to find important interactions between MCs and PVs. Robustness testing was carried out by MPV experiments and validation was performed following International Conference on Harmonisation guidelines. The method was applied to a real sample of diclofenac gastro-resistant tablets. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. 
This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
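The abstract above notes that 12 integer design variables were handled with the branch-and-bound method. A minimal single-variable sketch of that idea follows, under stated assumptions: the cost function, bounds, and the clamp-based relaxation solver are all hypothetical stand-ins, not the thesis's allocation-mission model.

```python
import math

def bb_integer_min(f, relax_min, lo, hi):
    """Branch-and-bound over one integer design variable.
    `relax_min(lo, hi)` returns the continuous minimiser of f on [lo, hi];
    a fractional optimum is branched on its floor/ceil, and subproblems
    whose relaxation is no better than the incumbent are pruned."""
    best_x, best_f = None, float("inf")
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if a > b:
            continue
        x = relax_min(a, b)
        if f(x) >= best_f:      # bound: relaxation cannot beat incumbent
            continue
        if abs(x - round(x)) < 1e-9:
            best_x, best_f = round(x), f(round(x))
        else:
            stack.append((a, math.floor(x)))
            stack.append((math.ceil(x), b))
    return best_x, best_f

# Hypothetical convex cost with continuous optimum at 2.6 (e.g. an
# integer aircraft-allocation variable); clamping solves the relaxation.
f = lambda x: (x - 2.6) ** 2
clamp = lambda a, b: min(max(2.6, a), b)
print(bb_integer_min(f, clamp, 0, 10))  # integer optimum at x = 3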

  1. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  2. Robust integral variable structure controller and pulse-width pulse-frequency modulated input shaper design for flexible spacecraft with mismatched uncertainty/disturbance.

    PubMed

    Hu, Qinglei

    2007-10-01

    This paper presents a dual-stage control system design method for flexible spacecraft attitude maneuvering control by use of on-off thrusters and active vibration control by input shaper. In this design approach, the attitude control system and vibration suppression were designed separately using lower-order models. As a stepping stone, an integral variable structure controller, designed under the assumption that the upper bounds of the mismatched lumped perturbation are known, ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency so that the output profile is similar to the continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration will be caused by the command itself, which only requires information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by the rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.

  3. A rotor optimization using regression analysis

    NASA Technical Reports Server (NTRS)

    Giansante, N.

    1984-01-01

    The design and development of helicopter rotors is subject to the many design variables and their interactions that affect rotor operation. Until recently, selection of rotor design variables to achieve specified rotor operational qualities has been a costly, time consuming, repetitive task. For the past several years, Kaman Aerospace Corporation has successfully applied multiple linear regression analysis, coupled with optimization and sensitivity procedures, in the analytical design of rotor systems. It is concluded that approximating equations can be developed rapidly for a multiplicity of objective and constraint functions and optimizations can be performed in a rapid and cost effective manner; the number and/or range of design variables can be increased by expanding the data base and developing approximating functions to reflect the expanded design space; the order of the approximating equations can be expanded easily to improve correlation between analyzer results and the approximating equations; gradients of the approximating equations can be calculated easily and these gradients are smooth functions reducing the risk of numerical problems in the optimization; the use of approximating functions allows the problem to be started easily and rapidly from various initial designs to enhance the probability of finding a global optimum; and the approximating equations are independent of the analysis or optimization codes used.
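The regression-based approximating equations described above replace expensive analyzer runs with cheap polynomials whose gradients are smooth and analytic. A minimal one-variable sketch follows, assuming a quadratic basis and synthetic data; it stands in for the multiple linear regression used in the study, not for Kaman's actual codes.

```python
def fit_quadratic_1d(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the normal equations,
    a one-variable stand-in for the multiple-regression approximating
    functions built over objective and constraint responses."""
    S = [sum(x ** k for x in xs) for k in range(5)]           # moment sums
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    b = T[:]
    for i in range(3):                 # Gaussian elimination (3x3 system)
        for j in range(i + 1, 3):
            m = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= m * A[i][k]
            b[j] -= m * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                # back-substitution
        c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, 3))) / A[i][i]
    return c  # [a, b, c]

def gradient(coeffs, x):
    """Smooth analytic gradient of the approximating equation."""
    a, b, c = coeffs
    return b + 2.0 * c * x

# Synthetic response data generated from y = 1 + 2x + 3x^2
coeffs = fit_quadratic_1d([-2, -1, 0, 1, 2], [9, 2, 1, 6, 17])
```

Because the gradient is an explicit formula rather than a finite difference on a noisy analyzer, the optimizer sees a smooth design space, which is exactly the numerical robustness benefit the abstract claims.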

  4. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research.

    PubMed

    Bamberger, Katharine T

    2016-03-01

    The use of intensive longitudinal methods (ILM)-rapid in situ assessment at micro timescales-can be overlaid on RCTs and other study designs in applied family research. Particularly, when done as part of a multiple timescale design-in bursts over macro timescales-ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.

  5. Virtual test rig to improve the design and optimisation process of the vehicle steering and suspension systems

    NASA Astrophysics Data System (ADS)

    Mántaras, Daniel A.; Luque, Pablo

    2012-10-01

    A virtual test rig is presented using a three-dimensional model of the elasto-kinematic behaviour of a vehicle. A general approach is put forward to determine the three-dimensional position of the body and the main parameters which influence the handling of the vehicle. For the design process, the variable input data are the longitudinal and lateral acceleration and the curve radius, which are defined by the user as a design goal. For the optimisation process, once the vehicle has been built, the variable input data are the travel of the four struts and the steering wheel angle, which is obtained through monitoring the vehicle. The virtual test rig has been applied to a standard vehicle and the validity of the results has been proven.

  6. Aerospace applications of integer and combinatorial optimization

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Kincaid, R. K.

    1995-01-01

    Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in solving combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on a large space structure and is expressed as a mixed/integer linear programming problem with more than 1500 design variables.

  7. The art of spacecraft design: A multidisciplinary challenge

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Levine, M.; Austel, L.

    1989-01-01

    Actual design turn-around time has become shorter due to the use of optimization techniques which have been introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor's series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, the current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied for a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.

  8. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  9. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  10. Control methods for aiding a pilot during STOL engine failure transients

    NASA Technical Reports Server (NTRS)

    Nelson, E. R.; Debra, D. B.

    1976-01-01

    Candidate autopilot control laws were defined that control the engine failure transient sink rates, demonstrating the engineering application of modern state variable control theory. The results of approximate modal analysis were compared to those derived from full state analyses provided from computer design solutions. The aircraft was described, and a state variable model of its longitudinal dynamic motion due to engine and control variations was defined. The classical fast and slow modes were assumed to be sufficiently different to define reduced order approximations of the aircraft motion amenable to hand analysis control definition methods. The original state equations of motion were also applied to a large scale state variable control design program, in particular OPTSYS. The resulting control laws were compared with respect to their relative responses, ease of application, and meeting the desired performance objectives.

  11. Electrowetting-Based Variable-Focus Lens for Miniature Systems

    NASA Astrophysics Data System (ADS)

    Hendriks, B. H. W.; Kuiper, S.; van As, M. A. J.; et al.

    The meniscus between two immiscible liquids of different refractive indices can be used as a lens. A change in curvature of this meniscus by electrostatic control of the solid/liquid interfacial tension leads to a change in focal distance. It is demonstrated that two liquids in a tube form a self-centred variable-focus lens. The optical properties of this lens were investigated experimentally. We designed and constructed a miniature camera module based on this variable lens suitable for mobile applications. Furthermore, the liquid lens was applied in a Blu-ray Disc optical recording system to enable dual layer disc reading/writing.
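The optical principle above reduces, in the paraxial limit, to the single-surface refraction formula: a meniscus of radius R between liquids of indices n1 and n2 has focal length f = R / (n2 - n1). A minimal sketch follows; the refractive indices and radius are illustrative values, not the ones used in the paper.

```python
def meniscus_focal_length(n1, n2, radius_m):
    """Paraxial focal length of a single spherical refracting surface
    between two immiscible liquids: f = R / (n2 - n1). Changing the
    meniscus curvature R by electrowetting changes f."""
    return radius_m / (n2 - n1)

# Illustrative liquids only: water-like (n = 1.33) and oil-like (n = 1.50),
# with a 1.5 mm radius of curvature.
f = meniscus_focal_length(1.33, 1.50, 1.5e-3)  # focal length in metres
```

Because the applied voltage controls the contact angle and hence R, sweeping the voltage sweeps f continuously, which is what makes the lens "variable-focus" with no moving solid parts.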

  12. Integration of prebend optimization in a holistic wind turbine design tool

    NASA Astrophysics Data System (ADS)

    Sartori, L.; Bortolotti, P.; Croce, A.; Bottasso, C. L.

    2016-09-01

    This paper considers the problem of identifying the optimal combination of blade prebend, rotor cone angle and nacelle uptilt, within an integrated aero-structural design environment. Prebend is designed to reach maximum rotor area at rated conditions, while cone and uptilt are computed together with all other design variables to minimize the cost of energy. Constraints are added to the problem formulation in order to translate various design requirements. The proposed optimization approach is applied to a conceptual 10 MW offshore wind turbine, highlighting the benefits of an optimal combination of blade curvature, cone and uptilt angles.

  13. A review of variables of urban street connectivity for spatial connection

    NASA Astrophysics Data System (ADS)

    Mohamad, W. S. N. W.; Said, I.

    2014-02-01

    Several studies on street connectivity in cities and towns have been modeled on the topology, morphology, technology and psychology of people living in the urban environment. Street connectivity means the connection of streets that offers people alternative routes. However, difficulties emerge in determining the suitable variables and analyses for obtaining accurate results in street-connectivity studies. The aim of this paper is to identify variables of street connectivity by applying GIS and Space Syntax. This paper reviews the variables of street connectivity from 15 past articles, published from the 1990s to the early 2000s, in journals of nine disciplines: Environment and Behavior; Planning and Design; Computers, Environment and Urban Systems; Applied Earth Observation and Geo-information; Environment and Planning; Physica A: Statistical Mechanics and its Applications; Environmental Psychology; Social Science and Medicine; and Building and Environment. From the review, four variables were found for street connectivity: link (streets-streets, street-nodes or node-streets, nodes-nodes), accessibility, least-angle, and centrality. Space Syntax and GIS are suitable tools to analyze the four variables relating to systematic street systems for pedestrians. This review implies that planners of street systems, in the aspect of street connectivity in cities and towns, should consider these four variables.
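The "link" variable above counts street-street and street-node connections; the simplest graph-based proxy for it is average node degree. A toy sketch follows, assuming a hand-built adjacency-list network; real analyses of this kind are done on GIS street centrelines or Space Syntax axial maps, not on a dictionary like this.

```python
def average_connectivity(street_graph):
    """Average node degree of a street network, a simple proxy for the
    'link' connectivity variable: higher values mean more alternative
    routes per junction."""
    degrees = [len(adj) for adj in street_graph.values()]
    return sum(degrees) / len(degrees)

# Toy network: a crossroads node C joined to four dead-end streets.
graph = {
    "C": ["N", "S", "E", "W"],
    "N": ["C"], "S": ["C"], "E": ["C"], "W": ["C"],
}
print(average_connectivity(graph))  # (4 + 1 + 1 + 1 + 1) / 5 = 1.6
```

Grid-like street layouts push this value up (more route choice), while tree-like cul-de-sac layouts pull it down, which is why the review treats link counts as a core connectivity variable.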

  14. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user supplied codes that contain problem dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  15. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
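The core loop of the methodology above, propagating input uncertainty through a response-surface surrogate with Monte Carlo simulation, can be sketched compactly. All coefficients, nominal values, and uncertainty levels below are invented placeholders; the actual regression-rate surrogate and its empirically derived uncertainties are specific to the study.

```python
import random

def response_surface(x1, x2):
    """Hypothetical quadratic response with a two-factor interaction,
    standing in for the regression-rate surrogate (coefficients invented)."""
    return 1.0 + 0.4 * x1 + 0.25 * x2 - 0.1 * x1 * x1 + 0.05 * x1 * x2

def monte_carlo_dispersion(n, rng=random.Random(42)):
    """Propagate input uncertainty (here, Gaussian noise on two normalised
    factors about a nominal 0.5) through the surrogate and summarise the
    dispersed response as (mean, standard deviation)."""
    samples = [response_surface(rng.gauss(0.5, 0.1), rng.gauss(0.5, 0.1))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var ** 0.5

mean, sd = monte_carlo_dispersion(10_000)
```

Running this at each candidate design point yields the dispersed-response maps that the study visualises with contour, surface, and Expanded-Durov diagrams.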

  16. Design and optimisation of novel configurations of stormwater constructed wetlands

    NASA Astrophysics Data System (ADS)

    Kiiza, Christopher

    2017-04-01

    Constructed wetlands (CWs) are recognised as a cost-effective technology for wastewater treatment. CWs have been deployed and could be retrofitted into existing urban drainage systems to prevent surface water pollution, attenuate floods and act as sources of reusable water. However, there exist numerous criteria for the design configuration and operation of CWs. The aim of the study was to examine the effects of design and operational variables on the performance of CWs. To achieve this, 8 novel designs of vertical flow CWs were continuously operated and monitored (weekly) for 2 years. Pollutant removal efficiency in each CW unit was evaluated from physico-chemical analyses of influent and effluent water samples. Hybrid optimised multi-layer perceptron artificial neural networks (MLP ANNs) were applied to simulate treatment efficiency in the CWs. Subsequently, predictive and analytical models were developed for each design unit. Results show the models have sound generalisation abilities, with various design configurations and operational variables influencing the performance of CWs. Although some design configurations attained faster and higher removal efficiencies than others, all 8 CW designs produced effluents permissible for discharge into watercourses with strict regulatory standards.

  17. A new approach to children's footwear based on foot type classification.

    PubMed

    Mauch, M; Grau, S; Krauss, I; Maiwald, C; Horstmann, T

    2009-08-01

    Current shoe designs do not allow for the comprehensive 3-D foot shape, which means they are unable to reproduce the wide variability in foot morphology. Therefore, the purpose of this study was to capture these variations of children's feet by classifying them into groups (types) and thereby provide a basis for their implementation in the design of children's shoes. The feet of 2867 German children were measured using a 3-D foot scanner. Cluster analysis was then applied to classify the feet into three different foot types. The characteristics of these foot types differ regarding their volume and forefoot shape both within and between shoe sizes. This new approach is in clear contrast to previous systems, since it captures the variability of foot morphology in a more comprehensive way by using a foot typing system and therefore paves the way for the unimpaired development of children's feet. Previous shoe systems do not allow for the wide variations in foot morphology. A new approach was developed regarding different morphological foot types based on 3-D measurements relevant in shoe construction. This can be directly applied to create specific designs for children's shoes.

  18. Manganese ore tailing: optimization of acid leaching conditions and recovery of soluble manganese.

    PubMed

    Santos, Olívia de Souza Heleno; Carvalho, Cornélio de Freitas; Silva, Gilmare Antônia da; Santos, Cláudio Gouvêa Dos

    2015-01-01

    Manganese recovery from industrial ore processing waste by means of leaching with sulfuric acid was the objective of this study. Experimental conditions were optimized by multivariate experimental design approaches. In order to study the factors affecting leaching, a screening step was used involving a full factorial design with central point for three variables in two levels (2(3)). The three variables studied were leaching time, concentration of sulfuric acid and sample amount. The three factors screened were shown to be relevant and therefore a Doehlert design was applied to determine the best working conditions for leaching and to build the response surface. By applying the best leaching conditions, the concentrations of 12.80 and 13.64 %w/w of manganese for the global sample and for the fraction -44 + 37 μm, respectively, were found. Microbeads of chitosan were tested for removal of leachate acidity and recovering of soluble manganese. Manganese recovery from the leachate was 95.4%. Upon drying the leachate, a solid containing mostly manganese sulfate was obtained, showing that the proposed optimized method is efficient for manganese recovery from ore tailings. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Mathematical modeling to predict residential solid waste generation.

    PubMed

    Benítez, Sara Ojeda; Lozano-Olvera, Gabriela; Morelos, Raúl Adalberto; Vega, Carolina Armijo de

    2008-01-01

    One of the challenges faced by waste management authorities is determining the amount of waste generated by households in order to establish waste management systems, as well as to charge rates compatible with the principle applied worldwide and to design a fair payment system for households according to the amount of residential solid waste (RSW) they generate. The goal of this research work was to establish mathematical models that correlate the generation of RSW per capita to the following variables: education, income per household, and number of residents. This work was based on data from a study on generation, quantification and composition of residential waste in a Mexican city in three stages. In order to define prediction models, five variables were identified and included in the model. For each waste sampling stage a different mathematical model was developed, in order to find the model that showed the best linear relation to predict residential solid waste generation. Models were then established to explore combinations of the included variables and to select those with the highest R². The tests applied were normality, multicollinearity and heteroskedasticity. Another model, formulated with four variables, was generated and the Durbin-Watson test was applied to it. Finally, a general mathematical model is proposed to predict residential waste generation, which accounts for 51% of the total.
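    A minimal sketch of the kind of multiple linear regression and Durbin-Watson check described above, on synthetic stand-in data (the variables and coefficients are invented for illustration):

```python
import numpy as np

def ols_r2_dw(X, y):
    """Fit y = X·b by least squares; return R² and the Durbin-Watson
    statistic of the residuals (values near 2 suggest no autocorrelation)."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    dw = np.sum(np.diff(resid) ** 2) / (resid @ resid)
    return r2, dw

# Illustrative data: per-capita waste vs. income and household size
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 0.5 + 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.1, size=100)
r2, dw = ols_r2_dw(X, y)
```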

  20. The use of experimental design for the development of a capillary zone electrophoresis method for the quantitation of captopril.

    PubMed

    Mukozhiwa, S Y; Khamanga, S M M; Walker, R B

    2017-09-01

    A capillary zone electrophoresis (CZE) method for the quantitation of captopril (CPT) using UV detection was developed. The influence of electrolyte concentration and system variables on electrophoretic separation was evaluated and a central composite design (CCD) was used to optimize the method. Variables investigated were pH, molarity, applied voltage and capillary length. The influence of sodium metabisulphite on the stability of test solutions was also investigated. The use of sodium metabisulphite prevented degradation of CPT over 24 hours. A fused uncoated silica capillary of 67.5 cm total and 57.5 cm effective length was used for analysis. The applied voltage and capillary length affected the migration time of CPT significantly. A 20 mM phosphate buffer adjusted to pH 7.0 was used as running buffer and an applied voltage of 23.90 kV was suitable to effect a separation. The optimized electrophoretic conditions produced sharp, well-resolved peaks for CPT and sodium metabisulphite. Linear regression analysis of the response for CPT standards revealed the method was linear (R² = 0.9995) over the range 5-70 μg/mL. The limits of quantitation and detection were 5 and 1.5 μg/mL, respectively. A simple, rapid and reliable CZE method has been developed and successfully applied to the analysis of commercially available CPT products.

  1. Prediction of stream fish assemblages from land use characteristics: implications for cost-effective design of monitoring programmes.

    PubMed

    Kristensen, Esben Astrup; Baattrup-Pedersen, Annette; Andersen, Hans Estrup

    2012-03-01

    Increasing human impact on stream ecosystems has resulted in a growing need for tools helping managers to develop conservation strategies, and environmental monitoring is crucial for this development. This paper describes the development of models predicting the presence of fish assemblages in lowland streams using solely cost-effective GIS-derived land use variables. Three hundred thirty-five stream sites were separated into two groups based on size. Within each group, fish abundance data and cluster analysis were used to determine the composition of fish assemblages. The occurrence of assemblages was predicted using a dataset containing land use variables at three spatial scales (50 m riparian corridor, 500 m riparian corridor and the entire catchment) supplemented by a dataset on in-stream variables. The overall classification success ranged between 66.1% and 81.1% and was only marginally better when using in-stream variables than when applying only GIS variables. Also, the prediction power of a model combining GIS and in-stream variables was only slightly better than prediction based solely on GIS variables. The possibility of obtaining precise predictions without using costly in-stream variables offers great potential in the design of monitoring programmes as the distribution of monitoring sites along a gradient in ecological quality can be done at a low cost.
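    A bare-bones version of the cluster-analysis step, here Lloyd's k-means on synthetic two-variable data; the paper does not specify its clustering algorithm, so this is only an illustrative stand-in:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, then nearest assignment
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Illustrative: sites described by two land-use fractions, two groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(1.0, 0.1, (20, 2))])
labels, centroids = kmeans(X, 2)
```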

  2. A regularized variable selection procedure in additive hazards model with stratified case-cohort design.

    PubMed

    Ni, Ai; Cai, Jianwen

    2018-07-01

    Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has been focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model either because the proportional hazards assumption is violated or the additive hazards model provides more relevant information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in a stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.
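    The paper's penalized estimator is specific to an additive hazards model under a case-cohort design; as a generic illustration of regularized variable selection, here is a plain coordinate-descent lasso (soft thresholding) on synthetic data:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso: minimizes (1/2n)·||y - Xb||² + lam·||b||₁,
    zeroing out weak covariates via the soft-threshold operator."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual w/o x_j
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # only covariate 0 matters
b = lasso_cd(X, y, lam=0.2)                          # selects covariate 0
```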

  3. Number needed to treat (NNT) in clinical literature: an appraisal.

    PubMed

    Mendes, Diogo; Alves, Carlos; Batel-Marques, Francisco

    2017-06-01

    The number needed to treat (NNT) is an absolute effect measure that has been used to assess beneficial and harmful effects of medical interventions. Several methods can be used to calculate NNTs, and they should be applied depending on the different study characteristics, such as the design and type of variable used to measure outcomes. Whether or not the most recommended methods have been applied to calculate NNTs in studies published in the medical literature is yet to be determined. The aim of this study is to assess whether the methods used to calculate NNTs in studies published in medical journals are in line with basic methodological recommendations. The top 25 high-impact factor journals in the "General and/or Internal Medicine" category were screened to identify studies assessing pharmacological interventions and reporting NNTs. Studies were categorized according to their design and the type of variables. NNTs were assessed for completeness (baseline risk, time horizon, and confidence intervals [CIs]). The methods used for calculating NNTs in selected studies were compared to basic methodological recommendations published in the literature. Data were analyzed using descriptive statistics. The search returned 138 citations, of which 51 were selected. Most were meta-analyses (n = 23, 45.1%), followed by clinical trials (n = 17, 33.3%), cohort (n = 9, 17.6%), and case-control studies (n = 2, 3.9%). Binary variables were more common (n = 41, 80.4%) than time-to-event (n = 10, 19.6%) outcomes. Twenty-six studies (51.0%) reported only NNT to benefit (NNTB), 14 (27.5%) reported both NNTB and NNT to harm (NNTH), and 11 (21.6%) reported only NNTH. Baseline risk (n = 37, 72.5%), time horizon (n = 38, 74.5%), and CI (n = 32, 62.7%) for NNTs were not always reported. Basic methodological recommendations to calculate NNTs were not followed in 15 studies (29.4%). 
The proportion of studies applying non-recommended methods was particularly high for meta-analyses (n = 13, 56.5%). A considerable proportion of studies, particularly meta-analyses, applied methods that are not in line with basic methodological recommendations. Despite their usefulness in assisting clinical decisions, NNTs are uninterpretable if incompletely reported, and they may be misleading if calculating methods are inadequate to study designs and variables under evaluation. Further research is needed to confirm the present findings.
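    The basic NNT arithmetic the appraisal assumes, with a simple Wald-type confidence interval obtained by inverting the interval for the absolute risk reduction (the counts are illustrative, and the inversion is only valid when the ARR interval excludes zero):

```python
import math

def nnt_with_ci(e_c, n_c, e_t, n_t, z=1.96):
    """NNT to benefit from event counts in control (e_c/n_c) and treated
    (e_t/n_t) groups: NNT = 1/ARR, with a 95% CI from the Wald CI of the
    absolute risk reduction (ARR)."""
    p_c, p_t = e_c / n_c, e_t / n_t
    arr = p_c - p_t
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    lo, hi = arr - z * se, arr + z * se
    return 1 / arr, (1 / hi, 1 / lo)

# Illustrative trial: event risk 20% under control vs 15% under treatment
nnt, ci = nnt_with_ci(400, 2000, 300, 2000)
print(nnt)  # → 20.0 (treat 20 patients to prevent one event)
```

Reporting the baseline risk (20%) and time horizon alongside the NNT, as the appraisal recommends, is what makes the figure interpretable.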

  4. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
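    The adjoint cost argument can be illustrated on a toy linear "simulation": one extra (adjoint) solve yields the sensitivities with respect to all design variables at once, where forward finite differences would need one solve per variable. All matrices below are random stand-ins, not plasma physics:

```python
import numpy as np

# Toy discrete "simulation": state u solves A u = b(q), objective J = c·u.
# The adjoint solve Aᵀλ = c gives dJ/dq_i = λ·∂b/∂q_i for every design
# variable at the cost of a single extra linear solve, independent of len(q).
rng = np.random.default_rng(3)
n, m = 5, 100                        # 5 state unknowns, 100 design variables
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
B = rng.normal(size=(n, m))          # b(q) = B q, so ∂b/∂q_i is column i of B
c = rng.normal(size=n)
q = rng.normal(size=m)

lam = np.linalg.solve(A.T, c)        # one adjoint solve
grad_adjoint = B.T @ lam             # all m sensitivities at once

# Cross-check one component by finite differences (one more forward solve)
J = lambda q: c @ np.linalg.solve(A, B @ q)
i, h = 7, 1e-6
dq = np.zeros(m)
dq[i] = h
fd = (J(q + dq) - J(q)) / h
```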

  5. Individual Differences in Learning from an Intelligent Discovery World: Smithtown.

    ERIC Educational Resources Information Center

    Shute, Valerie J.

    "Smithtown" is an intelligent computer program designed to enhance an individual's scientific inquiry skills as well as to provide an environment for learning principles of basic microeconomics. It was hypothesized that intelligent computer instruction on applying effective interrogative skills (e.g., changing one variable at a time…

  6. Artificial-neural-network-based failure detection and isolation

    NASA Astrophysics Data System (ADS)

    Sadok, Mokhtar; Gharsalli, Imed; Alouani, Ali T.

    1998-03-01

    This paper presents the design of a systematic failure detection and isolation system that uses the concept of failure sensitive variables (FSV) and artificial neural networks (ANN). The proposed approach was applied to tube leak detection in a utility boiler system. Results of the experimental testing are presented in the paper.
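    The paper's detector is ANN-based; as a much simpler stand-in, here is a statistical drift check on a single failure-sensitive variable, with invented boiler-like numbers:

```python
import numpy as np

def detect_failure(baseline, window, k=4.0):
    """Flag a failure-sensitive variable whose recent mean drifts more than
    k standard errors away from its healthy baseline mean."""
    mu, sigma = baseline.mean(), baseline.std()
    return abs(window.mean() - mu) > k * sigma / np.sqrt(len(window))

rng = np.random.default_rng(4)
healthy = rng.normal(50.0, 1.0, 500)        # e.g. a drum pressure reading
normal_window = rng.normal(50.0, 1.0, 30)   # recent readings, no fault
leak_window = rng.normal(53.0, 1.0, 30)     # shifted mean after a tube leak
```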

  7. Gamification in Physical Therapy: More Than Using Games.

    PubMed

    Janssen, Joep; Verschuren, Olaf; Renger, Willem Jan; Ermers, Jose; Ketelaar, Marjolijn; van Ee, Raymond

    2017-01-01

    The implementation of computer games in physical therapy is motivated by characteristics such as attractiveness, motivation, and engagement, but these do not guarantee the intended therapeutic effect of the interventions. Yet, these characteristics are important variables in physical therapy interventions because they involve reward-related dopaminergic systems in the brain that are known to facilitate learning through long-term potentiation of neural connections. In this perspective we propose a way to apply game design approaches to therapy development by "designing" therapy sessions in such a way as to trigger physical and cognitive behavioral patterns required for treatment and neurological recovery. We also advocate that improving game knowledge among therapists and improving communication between therapists and game designers may lead to a novel avenue in designing applied games with specific therapeutic input, thereby making gamification in therapy a realistic and promising future that may optimize clinical practice.

  8. Determination of oleamide and erucamide in polyethylene films by pressurised fluid extraction and gas chromatography.

    PubMed

    Garrido-López, Alvaro; Esquiu, Vanesa; Tena, María Teresa

    2006-08-18

    A pressurized fluid extraction (PFE) and gas chromatography-flame ionization detection (GC-FID) method is proposed to determine the slip agents in polyethylene (PE) films. The study of PFE variables was performed using a fractional factorial design (FFD) for screening and a central composite design (CCD) for optimizing the main variables obtained from the Pareto charts. The variables that were studied include temperature, static time, percentage of cyclohexane and the number of extraction cycles. The final condition selected was pure isopropanol (two cycles) at 105 °C for 16 min. The recovery of spiked oleamide and erucamide was around 100%. The repeatability of the method, expressed as relative standard deviation, was 9.6% for oleamide and 8% for erucamide. Finally, the method was applied to determine oleamide and erucamide in several polyethylene films and the results were statistically equal to those obtained by pyrolysis and gas-phase chemiluminescence (CL).

  9. Introduction to the use of regression models in epidemiology.

    PubMed

    Bender, Ralf

    2009-01-01

    Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs so that they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed in dependence on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrating examples from cancer research.
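    One concrete link between study design and model output: for a single binary exposure, the coefficient fitted by logistic regression equals the log of the odds ratio from the 2×2 table (the counts below are illustrative):

```python
import math

def odds_ratio(a, b, c, d):
    """2x2 table: a, b = events / non-events among the exposed;
    c, d = events / non-events among the unexposed. For a single binary
    exposure, the logistic regression coefficient is log(OR)."""
    return (a * d) / (b * c)

# Illustrative cohort: 30/100 events among exposed vs 10/100 among unexposed
or_est = odds_ratio(30, 70, 10, 90)
beta = math.log(or_est)   # the coefficient a logistic model would estimate
```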

  10. Variable-frequency synchronous motor drives for electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chalmers, B.J.; Musaba, L.; Gosden, D.F.

    1995-12-31

    The performance capability envelope of a variable-frequency, permanent-magnet synchronous motor drive with field weakening is dependent upon the product of maximum current and direct-axis inductance. To obtain a performance characteristic suitable for a typical electric vehicle drive, in which short-term increase of current is applied, it is necessary to design an optimum value of direct-axis inductance. The paper presents an analysis of a hybrid motor design which uses a two-part rotor construction comprising a surface-magnet part and an axially-laminated reluctance part. This arrangement combines the properties of all other types of synchronous motor and offers a greater choice of design variables. It is shown that the desired form of performance may be achieved when the high-inductance axis of the reluctance part is arranged to lead the magnet axis by 90° (elec.).

  11. Importance of joint efforts for balanced process of designing and education

    NASA Astrophysics Data System (ADS)

    Mayorova, V. I.; Bannova, O. K.; Kristiansen, T.-H.; Igritsky, V. A.

    2015-06-01

    This paper discusses the importance of a strategic planning and design process when developing long-term space exploration missions, both robotic and manned. The discussion begins with reviewing current and/or traditional international perspectives on space development at the American, Russian and European space agencies. Some analogies and comparisons are drawn from analysis of several international student collaborative programs: Summer International workshops at the Bauman Moscow State Technical University, International European Summer Space School "Future Space Technologies and Experiments in Space", and the Summer school at Stuttgart University in Germany. The paper focuses on optimization of design and planning processes for successful space exploration missions and highlights the importance of the following: understanding the connectivity between different levels of human beings and machinery; a simultaneous mission planning approach; reflections and correlations between disciplines involved in planning and executing space exploration missions; and knowledge gained from different disciplines and through cross-applying and re-applying design approaches between variable space-related fields of study and research. The conclusions summarize the benefits and complications of applying a balanced design approach at all levels of the design process. Analysis of successes and failures of organizational efforts in space endeavors is used as a methodological approach to identify key questions to be researched, as these often cause many planning and design processing problems.

  12. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castelluccio, Gustavo M.; McDowell, David L.

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary to both achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Furthermore, volume averaging over bands that encompass critical transgranular slip planes appear to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  13. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE PAGES

    Castelluccio, Gustavo M.; McDowell, David L.

    2015-05-22

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary to both achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Furthermore, volume averaging over bands that encompass critical transgranular slip planes appear to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  14. Structural design optimization with survivability dependent constraints application: Primary wing box of a multi-role fighter

    NASA Technical Reports Server (NTRS)

    Dolvin, Douglas J.

    1992-01-01

    The superior survivability of a multirole fighter is dependent upon balanced integration of technologies for reduced vulnerability and susceptibility. The objective is to develop a methodology for structural design optimization with survivability dependent constraints. The design criterion for optimization will be survivability in a tactical laser environment. The following analyses are studied to establish a dependent design relationship between structural weight and survivability: (1) develop a physically linked global design model of survivability variables; and (2) apply conventional constraints to quantify survivability dependent design. It was not possible to develop an exact approach which would include all aspects of survivability dependent design; therefore, guidelines are offered for solving similar problems.

  15. Multiobjective optimization applied to structural sizing of low cost university-class microsatellite projects

    NASA Astrophysics Data System (ADS)

    Ravanbakhsh, Ali; Franchini, Sebastián

    2012-10-01

    In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. The involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, due to the fact that the main objective of these projects is for educational purposes, usually there are uncertainties regarding their in orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limitations for their mass and volume budgets owing to the fact that most of them are launched as an auxiliary payload in which the launch cost is reduced considerably. The satellite structure subsystem is the one which is most affected by the launcher constraints. This can affect different aspects, including dimensions, strength and frequency requirements. In this paper, the main focus is on developing a structural design sizing tool containing not only the primary structures properties as variables but also the system level variables such as payload mass budget and satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design in an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions including both static and dynamic models of the satellite. Finally, a Genetic Algorithm (GA) multiobjective optimization is applied to the design space. The result is a Pareto-optimal front based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives useful insight to the design team at the early phases of the design.
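    A Pareto front over the two stated objectives can be extracted with a simple non-dominated filter; the design points below are invented, and payload mass is negated so that both objectives are minimized:

```python
import numpy as np

def pareto_front(points):
    """Return indices of the non-dominated points when every column is
    minimized simultaneously (negate columns that should be maximized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= p everywhere and < somewhere
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# (total mass [kg], -payload mass [kg]) for four candidate designs
designs = [(100, -20), (95, -18), (110, -25), (95, -21)]
front = pareto_front(designs)
print(front)  # → [2, 3]
```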

  16. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2005-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  17. Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Lunsford, Charles B.

    2004-01-01

    A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.

  18. Aerospace Applications of Integer and Combinatorial Optimization

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Kincaid, R. K.

    1995-01-01

    Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.
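    The flavor of such a placement problem can be shown by brute force on a tiny instance (the damping gains are illustrative; realistic instances with 1500+ binary variables require mixed-integer programming solvers rather than enumeration):

```python
from itertools import combinations

def best_placement(damping, k):
    """Exhaustively choose k damper locations maximizing total damping.
    Illustrates the combinatorial structure only: the search space grows
    as C(n, k), which is why large instances need MILP formulations."""
    best = max(combinations(range(len(damping)), k),
               key=lambda sites: sum(damping[i] for i in sites))
    return sorted(best)

gains = [0.3, 0.9, 0.1, 0.7, 0.5]   # illustrative modal damping per site
print(best_placement(gains, 2))      # → [1, 3]
```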

  19. Aerospace applications on integer and combinatorial optimization

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Kincaid, R. K.

    1995-01-01

    Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.

  20. Augmented Computer Mouse Would Measure Applied Force

    NASA Technical Reports Server (NTRS)

    Li, Larry C. H.

    1993-01-01

    Proposed computer mouse measures force of contact applied by user. Adds another dimension to two-dimensional-position-measuring capability of conventional computer mouse; force measurement designated to represent any desired continuously variable function of time and position, such as control force, acceleration, velocity, or position along axis perpendicular to computer video display. Proposed mouse enhances sense of realism and intuition in interaction between operator and computer. Useful in such applications as three-dimensional computer graphics, computer games, and mathematical modeling of dynamics.

  1. Design Optimization of a Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.; Jones, Scott M.; Gray, Justin S.

    2014-01-01

    NASA's Rotary Wing Project is investigating technologies that will enable the development of revolutionary civil tilt rotor aircraft. Previous studies have shown that for large tilt rotor aircraft to be viable, the rotor speeds need to be slowed significantly during the cruise portion of the flight. This requirement to slow the rotors during cruise presents an interesting challenge to the propulsion system designer as efficient engine performance must be achieved at two drastically different operating conditions. One potential solution to this challenge is to use a transmission with multiple gear ratios and shift to the appropriate ratio during flight. This solution will require a large transmission that is likely to be maintenance intensive and will require a complex shifting procedure to maintain power to the rotors at all times. An alternative solution is to use a fixed gear ratio transmission and require the power turbine to operate efficiently over the entire speed range. This concept is referred to as a variable-speed power-turbine (VSPT) and is the focus of the current study. This paper explores the design of a variable speed power turbine for civil tilt rotor applications using design optimization techniques applied to NASA's new meanline tool, the Object-Oriented Turbomachinery Analysis Code (OTAC).

  2. Topology and layout optimization of discrete and continuum structures

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Kikuchi, Noboru

    1993-01-01

    The basic features of the ground structure method for truss structures and continuum problems are described. Problems with a large number of potential structural elements are considered using the compliance of the structure as the objective function. The design problem is the minimization of compliance for a given structural weight, and the design variables for truss problems are the cross-sectional areas of the individual truss members, while for continuum problems they are the variable densities of material in each of the elements of the FEM discretization. It is shown how homogenization theory can be applied to provide a relation between material density and the effective material properties of a periodic medium with a known microstructure of material and voids.
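    For a statically determinate truss, the compliance-versus-weight trade-off described above has a classical closed form: sizing each member area proportional to its force magnitude minimizes compliance at fixed material volume. A small numerical check, with invented member forces and unit lengths:

```python
import numpy as np

def compliance(A, F, L, E=1.0):
    """Compliance of statically determinate bars with member forces F,
    lengths L and cross-sectional areas A: C = Σ F_i² L_i / (E A_i)."""
    return np.sum(F**2 * L / (E * A))

F = np.array([2.0, 1.0, 3.0])   # illustrative member forces
L = np.ones(3)                  # unit member lengths
V = 3.0                         # fixed total material volume

# Lagrange conditions give the classical fully stressed sizing A_i ∝ |F_i|
A_opt = np.abs(F) * V / np.sum(np.abs(F) * L)

# Verify: no random volume-feasible sizing beats the analytic optimum
rng = np.random.default_rng(5)
optimal = True
for _ in range(100):
    A = rng.uniform(0.1, 1.0, 3)
    A *= V / np.sum(A * L)      # project onto the volume constraint
    optimal &= compliance(A_opt, F, L) <= compliance(A, F, L) + 1e-12
```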

  3. Quality Assurance in the Presence of Variability

    NASA Astrophysics Data System (ADS)

    Lauenroth, Kim; Metzger, Andreas; Pohl, Klaus

    Software Product Line Engineering (SPLE) is a reuse-driven development paradigm that has been applied successfully in information system engineering and other domains. Quality assurance of the reusable artifacts of the product line (e.g. requirements, design, and code artifacts) is essential for successful product line engineering. As those artifacts are reused in several products, a defect in a reusable artifact can affect several products of the product line. A central challenge for quality assurance in product line engineering is how to consider product line variability. Since the reusable artifacts contain variability, quality assurance techniques from single-system engineering cannot directly be applied to those artifacts. Therefore, different strategies and techniques have been developed for quality assurance in the presence of variability. In this chapter, we describe those strategies and discuss in more detail one of them, the so-called comprehensive strategy. The comprehensive strategy aims at checking the quality of all possible products of the product line and thus offers the highest benefits, since it is able to uncover defects in all possible products of the product line. However, the central challenge for applying the comprehensive strategy is the complexity that results from the product line variability and the large number of potential products of a product line. In this chapter, we present one concrete technique that we have developed to implement the comprehensive strategy that addresses this challenge. The technique is based on model checking technology and allows for a comprehensive verification of domain artifacts against temporal logic properties.
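    The comprehensive strategy can be illustrated at toy scale by enumerating every valid product and checking a property on each; the feature names and constraint below are invented, and real product lines need the model-checking technique described above precisely because enumeration explodes combinatorially:

```python
from itertools import product

features = ["encryption", "logging", "compression"]

def valid(cfg):
    # Domain constraint (illustrative): compression requires logging
    return cfg["logging"] or not cfg["compression"]

# Enumerate all feature combinations and keep the valid products
products = [cfg for bits in product([False, True], repeat=len(features))
            for cfg in [dict(zip(features, bits))] if valid(cfg)]

# Verify the property "compression implies logging" on every product
ok = all(p["logging"] or not p["compression"] for p in products)
print(len(products), ok)  # → 6 True
```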

  4. A first principles based methodology for design of axial compressor configurations

    NASA Astrophysics Data System (ADS)

    Iyengar, Vishwas

    Axial compressors are widely used in many aerodynamic applications. The design of an axial compressor configuration presents many challenges. Until recently, compressor design was done using 2-D viscous flow analyses that solve the flow field around cascades or in meridional planes or 3-D inviscid analyses. With the advent of modern computational methods it is now possible to analyze the 3-D viscous flow and accurately predict the performance of 3-D multistage compressors. It is necessary to retool the design methodologies to take advantage of the improved accuracy and physical fidelity of these advanced methods. In this study, a first-principles based multi-objective technique for designing single stage compressors is described. The study accounts for stage aerodynamic characteristics, rotor-stator interactions and blade elastic deformations. A parametric representation of compressor blades that include leading and trailing edge camber line angles, thickness and camber distributions was used in this study. A design of experiment approach is used to reduce the large combinations of design variables into a smaller subset. A response surface method is used to approximately map the output variables as a function of design variables. An optimized configuration is determined as the extremum of all extrema. This method has been applied to a rotor-stator stage similar to NASA Stage 35. The study has two parts: a preliminary study where a limited number of design variables were used to give an understanding of the important design variables for subsequent use, and a comprehensive application of the methodology where a larger, more complete set of design variables are used. The extended methodology also attempts to minimize the acoustic fluctuations at the rotor-stator interface by considering a rotor-wake influence coefficient (RWIC). 
Results presented include performance map calculations at design and off-design speeds, along with a detailed visualization of the flow field at design and off-design conditions. The present methodology provides a way to screen systematically through the plethora of design variables. By selecting the most influential design parameters and by optimizing the blade leading edge and trailing edge mean camber line angles, phenomena such as tip blockages, blade-to-blade shock structures and other loss mechanisms can be weakened or alleviated. It is found that these changes to the configuration can have a beneficial effect on total pressure ratio and stage adiabatic efficiency, thereby improving the performance of the axial compression system. Aeroacoustic benefits were found by minimizing the noise-generating mechanisms associated with rotor wake-stator interactions. The new method is reliable, has a low time cost, and is readily applicable to routine industrial design optimization of turbomachinery blades.
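
    The DoE → response-surface → optimizer pipeline used in records like this one can be illustrated on a toy problem. The sample function, sampling plan, and GA settings below are invented for illustration; they only mimic the shape of the workflow (expensive evaluations → cheap quadratic surrogate → evolutionary search on the surrogate).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, y):
    """Stand-in for an expensive CFD evaluation (made-up quadratic response)."""
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 + 0.1 * x * y

# 1) Sampling plan over the design space (random here; a uniform design in the paper).
X = rng.random((30, 2))
f = np.array([simulate(x, y) for x, y in X])

# 2) Response surface: full quadratic model fitted by least squares.
def basis(x, y):
    return np.array([1.0, x, y, x * y, x * x, y * y])

A = np.array([basis(x, y) for x, y in X])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
surrogate = lambda p: basis(p[0], p[1]) @ coef

# 3) Minimal real-coded genetic algorithm, run on the cheap surrogate only.
pop = rng.random((40, 2))
for _ in range(60):
    order = np.argsort([surrogate(p) for p in pop])
    parents = pop[order[:20]]                    # truncation selection
    children = []
    for _ in range(39):
        a, b = parents[rng.integers(20, size=2)]
        w = rng.random()
        child = w * a + (1 - w) * b              # blend crossover
        child += rng.normal(0.0, 0.02, size=2)   # Gaussian mutation
        children.append(np.clip(child, 0.0, 1.0))
    pop = np.vstack([parents[0], children])      # keep the elite
best = min(pop, key=surrogate)
```

    Because the toy response is itself quadratic, the least-squares surrogate reproduces it essentially exactly; with a real CFD response the surrogate fit and its validation become the critical steps.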

  5. Bayesian regression discontinuity designs: incorporating clinical knowledge in the causal analysis of primary care data.

    PubMed

    Geneletti, Sara; O'Keeffe, Aidan G; Sharples, Linda D; Richardson, Sylvia; Baio, Gianluca

    2015-07-10

    The regression discontinuity (RD) design is a quasi-experimental design that estimates the causal effects of a treatment by exploiting naturally occurring treatment rules. It can be applied in any context where a particular treatment or intervention is administered according to a pre-specified rule linked to a continuous variable. Such thresholds are common in primary care drug prescription, where the RD design can be used to estimate the causal effect of medication in the general population. Such results can then be contrasted with those obtained from randomised controlled trials (RCTs) and inform prescription policy and guidelines, based on a more realistic and less expensive context. In this paper, we focus on statins, a class of cholesterol-lowering drugs; however, the methodology can be applied to many other drugs provided these are prescribed in accordance with pre-determined guidelines. Current guidelines in the UK state that statins should be prescribed to patients with 10-year cardiovascular disease risk scores in excess of 20%. If we consider patients whose risk scores are close to the 20% risk score threshold, we find that there is an element of random variation in both the risk score itself and its measurement. We can therefore consider the threshold as a randomising device that assigns statin prescription to individuals just above the threshold and withholds it from those just below. Thus, we are effectively replicating the conditions of an RCT in the area around the threshold, removing or at least mitigating confounding. We frame the RD design in the language of conditional independence, which clarifies the assumptions necessary to apply an RD design to data and makes the links with instrumental variables clear. We also have context-specific knowledge about the expected sizes of the effects of statin prescription and are thus able to incorporate this into Bayesian models by formulating informative priors on our causal parameters. 
© 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
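
    A basic (frequentist, non-Bayesian) sharp RD estimate can be sketched on simulated data: treatment is assigned exactly at a 20% risk-score threshold, and the causal effect is read off as the jump between two local linear fits at the cutoff. All data-generating numbers below are invented; this is a simplified stand-in for the Bayesian models in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

n, threshold, true_effect = 5000, 0.20, -0.5
risk = rng.uniform(0.0, 0.4, n)                # 10-year CVD risk score
treated = (risk >= threshold).astype(float)    # prescription rule at 20%
# Outcome trends smoothly in the risk score; treatment shifts it by true_effect.
y = 2.0 + 3.0 * risk + true_effect * treated + rng.normal(0.0, 0.2, n)

h = 0.05                                       # bandwidth around the threshold
left = (risk > threshold - h) & (risk < threshold)
right = (risk >= threshold) & (risk < threshold + h)
# Local linear fits on each side, both evaluated at the threshold.
y_left = np.polyval(np.polyfit(risk[left], y[left], 1), threshold)
y_right = np.polyval(np.polyfit(risk[right], y[right], 1), threshold)
effect = y_right - y_left                      # jump at the cutoff
```

    The Bayesian version replaces the two side-wise least-squares fits with a model whose causal parameter carries an informative prior, but the identifying idea (the discontinuity at the threshold) is the same.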

  6. A novel device to stretch multiple tissue samples with variable patterns: application for mRNA regulation in tissue-engineered constructs.

    PubMed

    Imsirovic, Jasmin; Derricks, Kelsey; Buczek-Thomas, Jo Ann; Rich, Celeste B; Nugent, Matthew A; Suki, Béla

    2013-01-01

    A broad range of cells are subjected to irregular time-varying mechanical stimuli within the body, particularly in the respiratory and circulatory systems. Mechanical stretch is an important factor in determining cell function; however, the effects of variable stretch remain unexplored. To investigate these effects, we designed, built and tested a uniaxial stretching device that can stretch three-dimensional tissue constructs while varying the strain amplitude from cycle to cycle. The device is the first to apply variable stretching signals to cells in tissues or three-dimensional tissue constructs. Following device validation, we applied 20% uniaxial strain to Gelfoam samples seeded with neonatal rat lung fibroblasts with different levels of variability (0%, 25%, 50% and 75%). RT-PCR was then performed to measure the effects of variable stretch on key molecules involved in cell-matrix interactions, including collagen 1α, lysyl oxidase, α-actin, β1 integrin, β3 integrin, syndecan-4, and vascular endothelial growth factor-A. Adding variability to the stretching signal upregulated, downregulated or had no effect on mRNA production depending on the molecule and the amount of variability. In particular, syndecan-4 showed a statistically significant peak at 25% variability, suggesting that an optimal variability of strain may exist for production of this molecule. We conclude that cycle-by-cycle variability in strain influences the expression of molecules related to cell-matrix interactions and hence may be used to selectively tune the composition of tissue constructs.

  7. Optimization of an electromagnetic linear actuator using a network and a finite element model

    NASA Astrophysics Data System (ADS)

    Neubert, Holger; Kamusella, Alfred; Lienig, Jens

    2011-03-01

    Model based design optimization leads to robust solutions only if the statistical deviations of design, load and ambient parameters from nominal values are considered. We describe an optimization methodology that involves these deviations as stochastic variables for an exemplary electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design of experiment (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are involved in the form of their density functions. To reduce the computational effort, we use response surfaces in place of the combined system model in all stochastic analysis steps, so that Monte-Carlo simulations can be applied. As a result we found an optimum system design meeting our requirements with regard to function and reliability.
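
    The robustness step can be sketched as Monte-Carlo propagation of parameter scatter through a cheap response surface instead of the full FE/network model. The surrogate polynomial, tolerances, and the 4.01 ms requirement below are all made-up illustration values, not the actuator's real data.

```python
import numpy as np

rng = np.random.default_rng(1)

def response_surface(turns, gap_mm):
    """Assumed quadratic surrogate for actuator stroke time [ms] near nominal."""
    return 4.0 + 0.002 * (turns - 900.0) + 1.5 * (gap_mm - 0.3) ** 2

# Design variables enter through density functions rather than fixed nominals.
turns = rng.normal(900.0, 10.0, 100_000)   # winding tolerance (assumed)
gap = rng.normal(0.30, 0.02, 100_000)      # assembly tolerance [mm] (assumed)
t = response_surface(turns, gap)

mean_t, std_t = t.mean(), t.std()
reliability = np.mean(t < 4.01)            # fraction meeting a 4.01 ms requirement
```

    Because each surrogate evaluation is a few arithmetic operations, 10^5 Monte-Carlo samples are trivial, whereas the same study against the FE model would be prohibitive.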

  8. Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Olds, John Robert; Walberg, Gerald D.

    1993-01-01

    Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. 
Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.

  9. Congruency of scapula locking plates: implications for implant design.

    PubMed

    Park, Andrew Y; DiStefano, James G; Nguyen, Thuc-Quyen; Buckley, Jenni M; Montgomery, William H; Grimsrud, Chris D

    2012-04-01

    We conducted a study to evaluate the congruency of fit of current scapular plate designs. Three-dimensional image-processing and -analysis software, and computed tomography scans of 12 cadaveric scapulae were used to generate 3 measurements: mean distance from plate to bone, maximum distance, and percentage of plate surface within 2 mm of bone. These measurements were used to quantify congruency. The scapular spine plate had the most congruent fit in all 3 measured variables. The lateral border and glenoid plates performed statistically as well as the scapular spine plate in at least 1 of the measured variables. The medial border plate had the least optimal measurements in all 3 variables. With locking-plate technology used in a wide variety of anatomical locations, the locking scapula plate system can allow for a fixed-angle construct in this region. Our study results showed that the scapular spine, glenoid, and lateral border plates are adequate in terms of congruency. However, design improvements may be necessary for the medial border plate. In addition, we describe a novel method for quantifying hardware congruency, a method that can be applied to any anatomical location.

  10. Olive fruits and vacuum impregnation, an interesting combination for dietetic iron enrichment.

    PubMed

    Zunin, Paola; Turrini, Federica; Leardi, Riccardo; Boggia, Raffaella

    2017-02-01

    In this study vacuum impregnation (VI) was employed for the iron enrichment of olive fruits, which are a very interesting food vehicle for VI mineral supplementation owing to the porosity of their pulp. NaFeEDTA was chosen for olive fortification since it prevents iron from binding with compounds that could hinder it from being efficiently absorbed and since it causes few organoleptic problems. In order to improve the efficiency of the VI process, several parameters of the whole process were studied by design of experiment techniques. First, a D-optimal design was employed for a preliminary screening of the most significant process variables and showed that the concentration of the VI solution was by far the most significant process variable, though its time in contact with the olives was also significant. A factorial design was then applied to the remaining variables and showed that the speed of addition of the VI solution was also significant. Finally, the application of a face-centered composite design to the three selected variables allowed the detection of processing conditions leading to final iron contents of 1.5-3 mg/g, corresponding to an introduction of 10-15 mg Fe with four or five fortified olive fruits. No effect on olive taste was observed at these concentrations. The results showed that olive fruits are a very interesting vehicle for the supplementation of both iron and other minerals.
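
    The face-centered composite design mentioned above has a fixed, enumerable layout in coded units: a 2^3 cube, axial points on the face centers (alpha = 1), and a center point. The sketch below generates that layout generically; it does not reproduce the paper's actual factor levels.

```python
from itertools import product

# Cube, face-center (axial with alpha = 1) and center points in coded units
# for the three selected process variables.
cube = list(product((-1, 1), repeat=3))                  # 8 corner runs
axial = [tuple((a if i == j else 0) for j in range(3))
         for i in range(3) for a in (-1, 1)]             # 6 face centers
center = [(0, 0, 0)]
design = cube + axial + center                           # 15 runs total
```

    Keeping the axial points on the faces (rather than at alpha > 1) means no run leaves the already-screened experimental region, which is often the practical reason for choosing this variant.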

  11. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  12. An Assessment Protocol for Selective Mutism: Analogue Assessment Using Parents as Facilitators.

    ERIC Educational Resources Information Center

    Schill, Melissa T.; And Others

    1996-01-01

    Assesses a protocol for conducting a functional analysis of maintaining variables for children with selective mutism. A parent was trained in and later applied various behavior strategies designed to increase speech in an eight-year-old girl with selective mutism. Parent and child ratings of treatment were positive. Presents implications for future…

  13. Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

    ERIC Educational Resources Information Center

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
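
    The Monte Carlo approach to mediation power can be sketched for the simplest single-mediator model X → M → Y with a joint-significance test of the two paths. Path sizes, sample size, and replication count below are invented; with no direct X → Y effect assumed, the b-path is tested here via a simple regression of Y on M alone.

```python
import numpy as np

rng = np.random.default_rng(7)

a, b, n, reps = 0.3, 0.3, 100, 1000   # assumed path sizes, sample size, replications

def slope_z(x, y):
    """z statistic for the slope in a simple linear regression of y on x."""
    xc = x - x.mean()
    slope = (xc @ y) / (xc @ xc)
    resid = y - y.mean() - slope * xc
    se = np.sqrt((resid @ resid) / (n - 2) / (xc @ xc))
    return slope / se

hits = 0
for _ in range(reps):
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)    # mediator model
    y = b * m + rng.normal(size=n)    # outcome model (no direct effect assumed)
    # Joint-significance test: both the a-path and the b-path must be significant.
    if abs(slope_z(x, m)) > 1.96 and abs(slope_z(m, y)) > 1.96:
        hits += 1
power = hits / reps
```

    The same loop generalizes to latent variable or growth curve mediation models by swapping the data-generating step and the fitted model, which is exactly what makes the Monte Carlo framework attractive when no closed-form power formula exists.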

  14. Planned Missing Data Designs in Educational Psychology Research

    ERIC Educational Resources Information Center

    Rhemtulla, Mijke; Hancock, Gregory R.

    2016-01-01

    Although missing data are often viewed as a challenge for applied researchers, in fact missing data can be highly beneficial. Specifically, when the amount of missing data on specific variables is carefully controlled, a balance can be struck between statistical power and research costs. This article presents the issue of planned missing data by…

  15. Establishing the Validity of the Task-Based English Speaking Test (TBEST) for International Teaching Assistants

    ERIC Educational Resources Information Center

    Witt, Autumn Song

    2010-01-01

    This dissertation follows an oral language assessment tool from initial design and implementation to validity analysis. The specialized variables of this study are the population: international teaching assistants and the purpose: spoken assessment as a hiring prerequisite. However, the process can easily be applied to other populations and…

  16. Culture or No Culture? A Latino Critical Research Analysis of Latino Persistence Research

    ERIC Educational Resources Information Center

    Gonzalez, Roger Geertz; Morrison, Jeaná

    2016-01-01

    The recent literature on Latino persistence does not take into account these students' distinct cultural backgrounds. Most researchers of Latino persistence use the self-designation "Latino" as a proxy variable representing Latino culture. A Latino Critical Theory (LatCrit) lens is applied to the persistence literature to demonstrate the…

  17. Modeling of crude oil biodegradation using two phase partitioning bioreactor.

    PubMed

    Fakhru'l-Razi, A; Peyda, Mazyar; Ab Karim Ghani, Wan Azlina Wan; Abidin, Zurina Zainal; Zakaria, Mohamad Pauzi; Moeini, Hassan

    2014-01-01

    In this work, crude oil biodegradation has been optimized in a solid-liquid two phase partitioning bioreactor (TPPB) by applying a D-optimal design based on response surface methodology. Three key factors, including phase ratio, substrate concentration in the solid organic phase, and sodium chloride concentration in the aqueous phase, were taken as independent variables, while the efficiency of the biodegradation of absorbed crude oil on polymer beads was considered the dependent variable. Commercial thermoplastic polyurethane (Desmopan®) was used as the solid phase in the TPPB. The designed experiments were carried out batchwise using a mixed acclimatized bacterial consortium. Optimum combinations of the key factors with a statistically significant cubic model were used to maximize biodegradation in the TPPB. The validity of the model was successfully verified by the good agreement between the model-predicted and experimental results. When applying the optimum parameters, gas chromatography-mass spectrometry showed a significant reduction in n-alkanes and low molecular weight polycyclic aromatic hydrocarbons. This consequently highlights the practical applicability of TPPB in crude oil biodegradation. © 2014 American Institute of Chemical Engineers.

  18. Real-Time Fault Detection Approach for Nonlinear Systems and its Asynchronous T-S Fuzzy Observer-Based Implementation.

    PubMed

    Li, Linlin; Ding, Steven X; Qiu, Jianbin; Yang, Ying

    2017-02-01

    This paper is concerned with a real-time observer-based fault detection (FD) approach for a general type of nonlinear systems in the presence of external disturbances. To this end, in the first part of this paper, we deal with the definition and the design condition for an L ∞ / L 2 type of nonlinear observer-based FD systems. This analytical framework is fundamental for the development of real-time nonlinear FD systems with the aid of some well-established techniques. In the second part, we address the integrated design of the L ∞ / L 2 observer-based FD systems by applying the Takagi-Sugeno (T-S) fuzzy dynamic modeling technique as the solution tool. This fuzzy observer-based FD approach is developed via piecewise Lyapunov functions, and can be applied to the case in which the premise variables of the FD system are nonsynchronous with the premise variables of the fuzzy model of the plant. In the end, a case study on a laboratory three-tank system is given to show the efficiency of the proposed results.

  19. Design and analysis of variable-twist tiltrotor blades using shape memory alloy hybrid composites

    NASA Astrophysics Data System (ADS)

    Park, Jae-Sang; Kim, Seong-Hwan; Jung, Sung Nam; Lee, Myeong-Kyu

    2011-01-01

    The tiltrotor blade, or proprotor, acts as a rotor in the helicopter mode and as a propeller in the airplane mode. For a better performance, the proprotor should have different built-in twist distributions along the blade span, suitable for each operational mode. This paper proposes a new variable-twist proprotor concept that can adjust the built-in twist distribution for given flight modes. For a variable-twist control, the present proprotor adopts shape memory alloy hybrid composites (SMAHC) containing shape memory alloy (SMA) wires embedded in the composite matrix. The proprotor of the Korea Aerospace Research Institute (KARI) Smart Unmanned Aerial Vehicle (SUAV), which is based on the tiltrotor concept, is used as a baseline proprotor model. The cross-sectional properties of the variable-twist proprotor are designed to maintain the cross-sectional properties of the original proprotor as closely as possible. However, the torsion stiffness is significantly reduced to accommodate the variable-twist control. A nonlinear flexible multibody dynamic analysis is employed to investigate the dynamic characteristics of the proprotor such as natural frequency and damping in the whirl flutter mode, the blade structural loads in a transition flight and the rotor performance in hover. The numerical results show that the present proprotor is designed to have a strong similarity to the baseline proprotor in dynamic and load characteristics. It is demonstrated that the present proprotor concept could be used to improve the hover performance adaptively when the variable-twist control using the SMAHC is applied appropriately.

  20. New design opportunities with OVI

    NASA Astrophysics Data System (ADS)

    Bleikolm, Anton F.

    1998-04-01

    Optically Variable Ink (OVI™), chosen for its unique colour shifting properties, is applied to the currencies of more than 50 countries. A significant colour difference at viewing angles of 90 degrees and 30 degrees respectively makes colour copying impossible. New manufacturing techniques for the interference pigment (OVP) provide ever better cost/performance ratios. Screen printing presses newly available on the market guarantee production speeds of 8000 sheets/hour or 130 meters/minute in the case of web printing, perfectly in line with the traditional equipment for manufacturing of currency. Specifically developed ink formulations allow UV-curing at high speed or oxidative drying to create highly mechanically and chemically resistant colour shifting prints. The unique colour shifting characteristics together with overprinting in intaglio give design opportunities providing the best protection against colour copying or commercial reprint. Specific designs of OVP together with high security ingredients allow the formulation of machine readable optically variable inks useful for the authentication and sorting of documents.

  1. A Rule Based Approach to ISS Interior Volume Control and Layout

    NASA Technical Reports Server (NTRS)

    Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan

    2001-01-01

    Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed, human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work, but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful, this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

  2. HIGH SHEAR GRANULATION PROCESS: ASSESSING IMPACT OF FORMULATION VARIABLES ON GRANULES AND TABLETS CHARACTERISTICS OF HIGH DRUG LOADING FORMULATION USING DESIGN OF EXPERIMENT METHODOLOGY.

    PubMed

    Fayed, Mohamed H; Abdel-Rahman, Sayed I; Alanazi, Fars K; Ahmed, Mahrous O; Tawfeek, Hesham M; Ali, Bahaa E

    2017-03-01

    High shear wet granulation is a significant component procedure in the pharmaceutical industry. The objective of the study was to investigate the influence of two independent formulation variables, polyvinylpyrrolidone (PVP) as a binder (X1) and croscarmellose sodium (CCS) as a disintegrant (X2), on the critical quality attributes of acetaminophen granules and their corresponding tablets using a design of experiment (DoE) approach. A two-factor, three-level (3²) full factorial design was applied; each variable was investigated at three levels to characterize its strength and interactions. The dried granules were analyzed for density, granule size and flowability. Additionally, the produced tablets were investigated for breaking force, friability, disintegration time and t. of drug dissolution. The analysis of variance (ANOVA) showed that the two variables had a significant impact (p < 0.05) on granule and tablet characteristics, while only the binder concentration influenced the tablet friability. Furthermore, significant interactions (p < 0.05) between the two variables, for granule and tablet attributes, were also found. However, the variables' interaction showed minimal effect on granule flowability as well as tablet friability. A desirability function was used to optimize the variables under study to obtain a product within the USP limit. It was found that the highest desirability (0.985) could be obtained at the medium level of PVP and the low level of CCS. Ultimately, this study supplies the formulator with beneficial tools for selecting the proper levels of binder and disintegrant to attain a product with the desired characteristics.
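
    The 3² full factorial layout used here is small enough to enumerate directly. The sketch below generates the nine coded runs and the corresponding model-matrix rows (intercept, main effects, interaction, quadratics); factor names are taken from the abstract, the coding is the usual -1/0/+1 convention, not the paper's actual concentrations.

```python
from itertools import product

# Coded levels -1/0/+1 for the two factors: PVP (binder) and CCS (disintegrant).
levels = (-1, 0, 1)
runs = list(product(levels, repeat=2))   # 3^2 = 9 runs, every combination once

# Model matrix row for one run: intercept, main effects, interaction, quadratics.
def row(pvp, ccs):
    return [1, pvp, ccs, pvp * ccs, pvp ** 2, ccs ** 2]

design_matrix = [row(p, c) for p, c in runs]
```

    With three levels per factor the quadratic terms are estimable, which is what lets ANOVA separate curvature from the PVP × CCS interaction reported in the study.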

  3. Optimization of headspace solid-phase microextraction by means of an experimental design for the determination of methyl tert.-butyl ether in water by gas chromatography-flame ionization detection.

    PubMed

    Dron, Julien; Garcia, Rosa; Millán, Esmeralda

    2002-07-19

    A procedure for the determination of methyl tert.-butyl ether (MTBE) in water by headspace solid-phase microextraction (HS-SPME) has been developed. The analysis was carried out by gas chromatography with flame ionization detection. The extraction procedure, using a 65-microm poly(dimethylsiloxane)-divinylbenzene SPME fiber, was optimized using experimental design. A fractional factorial design for screening and a central composite design for optimizing the significant variables were applied. Extraction temperature and sodium chloride concentration were significant variables, and 20 degrees C and 300 g/l, respectively, were chosen for the best extraction response. Under these conditions, an extraction time of 5 min was sufficient to extract MTBE. The calibration linear range for MTBE was 5-500 microg/l and the detection limit 0.45 microg/l. The relative standard deviation, for seven replicates of 250 microg/l MTBE in water, was 6.3%.

  4. Rapid Airplane Parametric Input Design(RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.

    2004-01-01

    An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.

  5. Variable Frequency Diverter Actuation for Flow Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.

    2006-01-01

    The design and development of an actively controlled fluidic actuator for flow control applications is explored. The basic device, with one input and two output channels, takes advantage of the Coanda effect to force a fluid jet to adhere to one of two axi-symmetric surfaces. The resultant flow is bi-stable, producing a constant flow from one output channel, until a disturbance force applied at the control point causes the flow to switch to the alternate output channel. By properly applying active control the output flows can be manipulated to provide a high degree of modulation over a wide and variable range of frequency and duty cycle. In this study the momentary operative force is applied by small, high speed isolation valves of which several different types are examined. The active fluidic diverter actuator is shown to work in several configurations including that in which the operator valves are referenced to atmosphere as well as to a source common with the power stream.

  6. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
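
    The two-bar truss problem described above can be sketched numerically. The following is a minimal stand-in with illustrative dimensions, loads, and material properties (none taken from SOL or the figure): tube diameter and truss height are the design variables, total weight is the objective, and stress and Euler buckling are the constraints. A crude grid search plays the role of an optimizer such as CONMIN, ADS, or NPSOL.

```python
import math

# Hypothetical two-bar truss sizing: minimize weight subject to stress
# and Euler buckling constraints. All numbers are illustrative.
B = 30.0           # half-span between supports (in)
P = 33000.0        # applied load (lbf)
t = 0.1            # tube wall thickness (in)
E = 3.0e7          # Young's modulus (psi)
rho = 0.3          # material density (lb/in^3)
sigma_max = 1.0e5  # allowable stress (psi)

def truss(d, h):
    """Return (weight, member stress, Euler buckling stress) for tube diameter d, height h."""
    L = math.sqrt(B**2 + h**2)                            # member length
    A = math.pi * d * t                                   # thin-wall tube area
    stress = P * L / (2.0 * A * h)                        # axial stress per member
    buck = math.pi**2 * E * (d**2 + t**2) / (8.0 * L**2)  # Euler buckling limit
    weight = 2.0 * rho * A * L
    return weight, stress, buck

# Exhaustive feasible-grid search standing in for a gradient-based optimizer
best = None
for d in [x * 0.05 for x in range(20, 80)]:       # diameter 1.0 .. 3.95
    for h in [y * 0.5 for y in range(20, 120)]:   # height 10 .. 59.5
        w, s, b = truss(d, h)
        if s <= sigma_max and s <= b:             # stress + buckling constraints
            if best is None or w < best[0]:
                best = (w, d, h)
print(best)
```

    A real SOL/CONMIN run would use gradient-based search rather than this exhaustive scan, but the objective, design variables, and feasibility test have the same shape.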

  7. Plug nozzles - The ultimate customer driven propulsion system. [applied to manned lunar and Martian landers]

    NASA Technical Reports Server (NTRS)

    Aukerman, Carl A.

    1991-01-01

    This paper presents the results of a study applying the plug cluster nozzle concept to the propulsion system for a typical lunar excursion vehicle. Primary attention in the design criteria is given to user-defined factors such as reliability, low volume, and ease of propulsion system development. Total thrust and specific impulse are held constant in the study while other parameters are explored to minimize the design chamber pressure. A brief history of the plug nozzle concept is included to point out the advanced level of technology of the concept and the feasibility of exploiting the variables considered in the study. The plug cluster concept looks very promising as a candidate for the ultimate customer-driven propulsion system.

  8. Fresnel Lens Solar Concentrator Design Based on Geometric Optics and Blackbody Radiation Equations

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Jayroe, Robert, Jr.

    1999-01-01

    Fresnel lenses have been used for years as solar concentrators in a variety of applications. Several variables affect the final design of these lenses, including lens diameter, image spot distance from the lens, and the bandwidth focused in the image spot. Defining the image spot as the geometrical-optics circle of least confusion and applying blackbody radiation equations, the spot energy distribution can be determined. These equations are used to design a Fresnel lens that produces maximum flux for a given spot size, lens diameter, and image distance. This approach results in significant increases in solar efficiency over traditional single-wavelength designs.
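
    A minimal numerical sketch of the bandwidth calculation described above, assuming a solar-temperature blackbody and an example visible band (the values are illustrative, not from the paper): integrate Planck's law over the band and normalize by the Stefan-Boltzmann total to obtain the fraction of source power available to the image spot.

```python
import math

# Physical constants (SI)
h = 6.626e-34     # Planck constant (J s)
c = 2.998e8       # speed of light (m/s)
k = 1.381e-23     # Boltzmann constant (J/K)
sigma = 5.670e-8  # Stefan-Boltzmann constant (W/m^2/K^4)

def planck(lam, T):
    """Spectral radiant exitance (W/m^3) at wavelength lam (m), temperature T (K)."""
    return (2 * math.pi * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def band_fraction(lam_lo, lam_hi, T, n=2000):
    """Fraction of total blackbody power between lam_lo and lam_hi (trapezoid rule)."""
    dlam = (lam_hi - lam_lo) / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w * planck(lam_lo + i * dlam, T)
    return total * dlam / (sigma * T**4)

# Example: visible band 0.4-0.7 um for a 5778 K (solar-temperature) source
frac = band_fraction(0.4e-6, 0.7e-6, 5778.0)
print(round(frac, 3))
```

    For a lens design, this in-band fraction multiplies the concentrator's captured power to give the flux actually delivered to the circle of least confusion.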

  9. The Effect of External Magnetic Field on Dielectric Permeability of Multiphase Ferrofluids

    NASA Astrophysics Data System (ADS)

    Dotsenko, O. A.; Pavlova, A. A.; Dotsenko, V. S.

    2018-03-01

    Nowadays, ferrofluids are applied in various fields of science and technology, namely space, medicine, geology, biology, automobile production, etc. In order to investigate the feasibility of applying ferrofluids in magnetic field sensors, the paper presents research into the influence of an external magnetic field on the dielectric permeability of ferrofluids comprising magnetite nanopowder, multiwall carbon nanotubes, propanetriol and deionized water. The real and imaginary parts of the dielectric permeability change by 3.7% and 0.5%, respectively, when the magnetic field is applied parallel to the electric field. The findings suggest that the considered ferrofluid can be used as a magnetic level gauge or in the design of variable capacitors.

  10. Linear parameter varying representations for nonlinear control design

    NASA Astrophysics Data System (ADS)

    Carter, Lance Huntington

    Linear parameter varying (LPV) systems are investigated as a framework for gain-scheduled control design and optimal hybrid control. An LPV system is defined as a linear system whose dynamics depend upon an a priori unknown but measurable exogenous parameter. A gain-scheduled autopilot design is presented for a bank-to-turn (BTT) missile. The method is novel in that the gain-scheduled design does not involve linearizations about operating points. Instead, the missile dynamics are brought to LPV form via a state transformation. This idea is applied to the design of a coupled longitudinal/lateral BTT missile autopilot. The pitch and yaw/roll dynamics are separately transformed to LPV form, where the cross axis states are treated as "exogenous" parameters. These are actually endogenous variables, so such a plant is called "quasi-LPV." Once in quasi-LPV form, a family of robust controllers using mu synthesis is designed for both the pitch and yaw/roll channels, using angle-of-attack and roll rate as the scheduling variables. The closed-loop time response is simulated using the original nonlinear model and also using perturbed aerodynamic coefficients. Modeling and control of engine idle speed is investigated using LPV methods. It is shown how generalized discrete nonlinear systems may be transformed into quasi-LPV form. A discrete nonlinear engine model is developed and expressed in quasi-LPV form with engine speed as the scheduling variable. An example control design is presented using linear quadratic methods. Simulations are shown comparing the LPV based controller performance to that using PID control. LPV representations are also shown to provide a setting for hybrid systems. A hybrid system is characterized by control inputs consisting of both analog signals and discrete actions. A solution is derived for the optimal control of hybrid systems with generalized cost functions. 
This is shown to be computationally intensive, so a suboptimal strategy is proposed that neglects a subset of possible parameter trajectories. A computational algorithm is constructed for this suboptimal solution applied to a class of linear non-quadratic cost functions.

  11. Green design application on campus to enhance student’s quality of life

    NASA Astrophysics Data System (ADS)

    Tamiami, H.; Khaira, F.; Fachrudin, A.

    2018-02-01

    Green design has become an important consideration in buildings. A green building provides comfort and enhances Quality of Life (QoL) for its users. The purpose of this research is to analyze how green design applied on campus enhances students' QoL. The research was conducted at three campuses in North Sumatera Province, namely Universitas Sumatera Utara (USU), Universitas Negeri Medan (Unimed) and Universitas Medan Area (UMA), all of which have abundant vegetation, open space, and multi-mass buildings. The study compared the green design application and QoL across the three universities. The green design aspects treated as independent variables are energy efficiency and conservation (EEC), indoor health and comfort (IHC), and building environment management (BEM), with QoL as the dependent variable. The research uses quantitative methods with questionnaire survey techniques; the population is students of the three universities, with a sample of 50 students per university, and the analysis uses multiple regression. The results show that green design application may enhance students' QoL. A campus should therefore apply good green design to enhance students' QoL and comfort.

  12. Low-complexity piecewise-affine virtual sensors: theory and design

    NASA Astrophysics Data System (ADS)

    Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco

    2014-03-01

    This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.

  13. Thermal radiation characteristics of nonisothermal cylindrical enclosures using a numerical ray tracing technique

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1990-01-01

    Analysis of energy emitted from simple or complex cavity designs can lead to intricate solutions due to nonuniform radiosity and irradiation within a cavity. A numerical ray tracing technique was applied to simulate radiation propagating within and from various cavity designs. To obtain the energy balance relationships between isothermal and nonisothermal cavity surfaces and space, the computer code NEVADA was utilized for its statistical technique applied to numerical ray tracing. The analysis method was validated by comparing results with known theoretical and limiting solutions, and the electrical resistance network method. In general, for nonisothermal cavities the performance (apparent emissivity) is a function of cylinder length-to-diameter ratio, surface emissivity, and cylinder surface temperatures. The extent of nonisothermal conditions in a cylindrical cavity significantly affects the overall cavity performance. Results are presented over a wide range of parametric variables for use as a possible design reference.
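
    The electrical resistance network method mentioned above can be sketched for its simplest case: an isothermal cylindrical cavity treated as a two-surface enclosure (gray diffuse wall plus a black opening at the aperture). The geometry and emissivity values below are illustrative, not taken from the paper.

```python
import math

def apparent_emissivity(eps, L_over_D):
    """Apparent emissivity of an isothermal cylindrical cavity of
    length-to-diameter ratio L_over_D and wall emissivity eps, viewed
    through its open end, via the two-surface resistance network."""
    D = 1.0
    L = L_over_D * D
    A_wall = math.pi * D * L + math.pi * D**2 / 4.0  # lateral surface + closed end
    A_open = math.pi * D**2 / 4.0                    # aperture area
    # Series resistances: surface resistance of the wall plus the space
    # resistance to the opening (F(open->wall) = 1, so A*F = A_open).
    R = (1.0 - eps) / (eps * A_wall) + 1.0 / A_open
    q = 1.0 / R                                      # net flux per unit sigma*T^4
    return q / A_open

# Deeper cavities and higher wall emissivity both drive eps_app toward 1
print(round(apparent_emissivity(0.5, 4.0), 3))  # → 0.944
```

    The monotonic trend with length-to-diameter ratio and wall emissivity is consistent with the parametric behavior described in the abstract; the ray-tracing code handles the nonisothermal cases this simple network model cannot.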

  14. SPSS programs for the measurement of nonindependence in standard dyadic designs.

    PubMed

    Alferes, Valentim R; Kenny, David A

    2009-02-01

    Dyadic research is becoming more common in the social and behavioral sciences. The most common dyadic design is one in which two persons are measured on the same set of variables. Very often, the first analysis of dyadic data is to determine the extent to which the responses of the two persons are correlated, that is, whether there is nonindependence in the data. We describe two user-friendly SPSS programs for measuring nonindependence of dyadic data. Both programs can be used for distinguishable and indistinguishable dyad members. Inter1.sps is appropriate for interval measures; Inter2.sps applies to categorical variables. The SPSS syntax and data files related to this article may be downloaded as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
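
    A sketch of the core computation for distinguishable dyad members on an interval measure, using made-up data: nonindependence is indexed by the Pearson correlation between the two members' scores across dyads (for indistinguishable members an intraclass correlation is used instead).

```python
import statistics

# Hypothetical interval scores for eight dyads (e.g. partner 1 = husbands,
# partner 2 = wives); the values are invented for illustration.
partner1 = [4.0, 6.0, 5.0, 7.0, 3.0, 6.0, 5.0, 4.0]
partner2 = [5.0, 7.0, 5.0, 6.0, 4.0, 7.0, 4.0, 5.0]

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

r = pearson(partner1, partner2)
print(round(r, 3))  # positive r suggests nonindependence across dyad members
```

    A substantial correlation signals that the two members' responses cannot be treated as independent observations in subsequent analyses.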

  15. Plant growth modeling at the JSC variable pressure growth chamber - An application of experimental design

    NASA Technical Reports Server (NTRS)

    Miller, Adam M.; Edeen, Marybeth; Sirko, Robert J.

    1992-01-01

    This paper describes the approach and results of an effort to characterize plant growth under various environmental conditions at the Johnson Space Center variable pressure growth chamber. Using a field of applied mathematics and statistics known as design of experiments (DOE), we developed a test plan for varying environmental parameters during a lettuce growth experiment. The test plan was developed using a Box-Behnken approach to DOE. As a result of the experimental runs, we have developed empirical models of both the transpiration process and carbon dioxide assimilation for Waldman's Green lettuce over specified ranges of environmental parameters including carbon dioxide concentration, light intensity, dew-point temperature, and air velocity. This model also predicts transpiration and carbon dioxide assimilation for different ages of the plant canopy.
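
    A Box-Behnken test plan of the kind described above can be generated mechanically. The sketch below builds the standard coded design (every pair of factors at the ±1 corners of a two-level factorial with the remaining factors at their center value, plus replicated center points); the four-factor call mirrors the four varied chamber parameters, though the actual run count and center-point choice used in the experiment are not stated in the abstract.

```python
from itertools import combinations

def box_behnken(k, center_points=3):
    """Coded Box-Behnken design for k factors: +/-1 levels on each factor
    pair with all other factors held at 0, plus center-point replicates."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

# e.g. four varied parameters (CO2 concentration, light intensity,
# dew-point temperature, air velocity)
design = box_behnken(4)
print(len(design))  # 6 factor pairs x 4 corners + 3 centers = 27 runs
```

    Each coded level is then mapped to a physical setting of its environmental parameter, and the empirical transpiration and assimilation models are fitted to the measured responses.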

  16. Using the Web as a Strategic Resource: An Applied Classroom Exercise.

    ERIC Educational Resources Information Center

    Wright, Kathleen M.; Granger, Mary J.

    This paper reports the findings of an experiment designed to test extensions of the Technology Acceptance Model (TAM) within the context of using the World Wide Web to gather and analyze financial information. The proposed extensions are three-fold. Based on prior research, cognitive absorption variables are posited as predeterminants of ease of…

  17. Save money by understanding variance and tolerancing.

    PubMed

    Stuart, K

    2007-01-01

    Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
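
    The cost of misunderstanding variance shows up directly in tolerance stack-up. The sketch below contrasts a worst-case stack with the statistical root-sum-square (RSS) stack for an example chain of four component tolerances (the values are illustrative): in this example, designing to the worst case where the statistical stack is justified nearly halves the usable assembly tolerance, which is what inflates tooling costs and reject rates.

```python
import math

# Illustrative +/- tolerances (mm) on each component in a dimensional chain
tolerances = [0.05, 0.03, 0.02, 0.04]

# Worst case: every component simultaneously at its limit
worst_case = sum(tolerances)

# Statistical (root-sum-square): variances add, so tolerances add in quadrature
rss = math.sqrt(sum(t ** 2 for t in tolerances))

print(round(worst_case, 3))  # 0.14 mm
print(round(rss, 3))         # ~0.073 mm
```

    The RSS figure assumes independent, roughly normal component variation from capable processes; verifying that assumption is exactly the process-capability work the article calls for.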

  18. Modeling the survival of Salmonella on slice cooked ham as a function of apple skin polyphenols, acetic acid, oregano essential oil and carvacrol

    USDA-ARS?s Scientific Manuscript database

    Response surface methodology was applied to investigate the combined effect of apple skin polyphenols (ASP), acetic acid (AA), oregano essential oil (O) and carvacrol (C) on the inactivation of Salmonella on sliced cooked ham. A full factorial experimental design was employed with control variables ...

  19. Physician communication in the operating room: expanding application of face-negotiation theory to the health communication context.

    PubMed

    Kirschbaum, Kristin

    2012-01-01

    Communication variables that are associated with face-negotiation theory were examined in a sample of operating-room physicians. A survey was administered to anesthesiologists and surgeons at a teaching hospital in the southwestern United States to measure three variables commonly associated with face-negotiation theory: conflict-management style, face concern, and self-construal. The survey instrument that was administered to physicians includes items that measured these three variables in previous face-negotiation research with slight modification of item wording for relevance in the medical setting. The physician data were analyzed using confirmatory factor analysis, Pearson's correlations, and t-tests. Results of this initial investigation showed that variables associated with face-negotiation theory were evident in the sample physician population. In addition, the correlations were similar among variables in the medical sample as those found in previous face-negotiation research. Finally, t-tests suggest variance between anesthesiologists and surgeons on specific communication variables. These findings suggest three implications that warrant further investigation with expanded sample size: (1) An intercultural communication theory and instrument can be utilized for health communication research; (2) as applied in a medical context, face-negotiation theory can be expanded beyond traditional intercultural communication boundaries; and (3) theoretically based communication structures applied in a medical context could help explain physician miscommunication in the operating room to assist future design of communication training programs for operating-room physicians.

  20. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
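
    The univariate baseline that MBCn generalizes can be sketched in a few lines. The code below implements empirical quantile mapping on synthetic data (not the CanRCM4 fields): each model value is replaced by the observed value at the same empirical quantile of the model climatology, which removes marginal bias but, as the comparison above notes, ignores dependence between variables.

```python
import statistics

def quantile_map(model_values, observed, simulated):
    """Empirical quantile mapping: correct model_values using the empirical
    CDFs of `simulated` (model climatology) and `observed`."""
    sim_sorted = sorted(simulated)
    obs_sorted = sorted(observed)
    n = len(sim_sorted)
    corrected = []
    for x in model_values:
        # empirical quantile rank of x within the simulated climatology
        rank = sum(1 for s in sim_sorted if s <= x)
        q = max(0, min(n - 1, rank - 1))
        # look up the value at the same quantile of the observed distribution
        corrected.append(obs_sorted[int(q * (len(obs_sorted) - 1) / (n - 1))])
    return corrected

# Synthetic stand-ins: the "model" is biased low by a factor of two
observed = [float(i) for i in range(100)]
simulated = [float(i) / 2 for i in range(100)]
corrected = quantile_map(simulated, observed, simulated)
print(round(statistics.fmean(corrected), 1))  # bias largely removed
```

    MBCn extends this idea by iterating quantile mapping over random rotations of the multivariate data, so that the full joint distribution, not just each margin, is transferred.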

  1. A quality by design study applied to an industrial pharmaceutical fluid bed granulation.

    PubMed

    Lourenço, Vera; Lochmann, Dirk; Reich, Gabriele; Menezes, José C; Herdling, Thorsten; Schewitz, Jens

    2012-06-01

    The pharmaceutical industry is encouraged within Quality by Design (QbD) to apply science-based manufacturing principles to assure quality not only of new but also of existing processes. This paper presents how QbD principles can be applied to an existing industrial pharmaceutical fluid bed granulation (FBG) process. A three-step approach is presented as follows: (1) implementation of Process Analytical Technology (PAT) monitoring tools at the industrial scale process, combined with multivariate data analysis (MVDA) of process and PAT data to increase the process knowledge; (2) execution of scaled-down designed experiments at a pilot scale, with adequate PAT monitoring tools, to investigate the process response to intended changes in Critical Process Parameters (CPPs); and finally (3) the definition of a process Design Space (DS) linking CPPs to Critical to Quality Attributes (CQAs), within which product quality is ensured by design, and after scale-up enabling its use at the industrial process scale. The proposed approach was developed for an existing industrial process. Through the enhanced process knowledge established, a significant reduction in the variability of product CQAs, already within quality specification ranges, was achieved by a better choice of CPP values. The results of this step-wise development and implementation are described.

  2. Probabilistic micromechanics of woven ceramic matrix composites

    NASA Astrophysics Data System (ADS)

    Goldsmith, Marlana

    Woven ceramic matrix composites are a special class of composite materials that are of current interest for harsh thermo-structural conditions such as those encountered by hypersonic vehicle systems and turbine engine components. Testing of the materials is expensive, especially as materials are constantly redesigned. Randomness in the tow architecture, as well as the randomly shaped and spaced voids that are produced as a result of the manufacturing process, are features that contribute to variability in stiffness and strength. The goal of the research is to lay a foundation in which characteristics of the geometry can be translated into material properties. The research first includes quantifying the architectural variability based on 2D micrographs of a 5 harness satin CVI (Chemical Vapor Infiltration) SiC/SiC composite. The architectural variability is applied to a 2D representative volume element (RVE) in order to evaluate which aspects of the architecture are important to model in order to capture the variability found in the cross sections. Tow width, tow spacing, and tow volume fraction were found to have some effect on the variability, but voids were found to have a large influence on transverse stiffness, and a separate study was conducted to determine which characteristics of the voids are most critical to model. It was found that the projected area of the void perpendicular to the transverse direction and the number of voids modeled had a significant influence on the stiffness. The effect of varying architecture on the variability of in-plane tensile strength was also studied using the Brittle Cracking Model for Concrete in the commercial finite element software, Abaqus. A maximum stress criterion is used to evaluate failure, and the stiffness of failed elements is gradually degraded such that the energy required to open a crack (fracture energy) is dissipated during this degradation process. 
While the varying architecture did not create variability in the in-plane stiffness, it does contribute significantly to the variability of in-plane strength as measured by a 0.02% offset method. Applying spatially random strengths for the constituents did not contribute to variability in strength as measured by the 0.02% offset. The results of this research may be of interest to those designing materials, as well as those using the material in their design. Having an idea about which characteristics of the architecture affect variability in stiffness may provide guidance to the material designer with respect to which aspects of the architecture can be controlled or improved to decrease the variability of the material properties. The work will also be useful to those desiring to use the complex materials by determining how to link the architectural properties to the mechanical properties with the ultimate goal of reducing the required number of tests.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Pasquini, Benedetta; Cooley, Scott K.

    In recent years, multivariate optimization has played an increasing role in analytical method development. ICH guidelines recommend using statistical design of experiments to identify the design space, in which multivariate combinations of composition variables and process variables have been demonstrated to provide quality results. Considering a microemulsion electrokinetic chromatography (MEEKC) method, the performance of the electrophoretic run depends on the proportions of mixture components (MCs) of the microemulsion and on the values of process variables (PVs). In the present work, for the first time in the literature, a mixture-process variable (MPV) approach was applied to optimize a MEEKC method for the analysis of coenzyme Q10 (Q10), ascorbic acid (AA), and folic acid (FA) contained in nutraceuticals. The MCs (buffer, surfactant-cosurfactant, oil) and the PVs (voltage, buffer concentration, buffer pH) were simultaneously changed according to a MPV experimental design. A 62-run MPV design was generated using the I-optimality criterion, assuming a 46-term MPV model allowing for special-cubic blending of the MCs, quadratic effects of the PVs, and some MC-PV interactions. The obtained data were used to develop MPV models that express the performance of an electrophoretic run (measured as peak efficiencies of Q10, AA, and FA) in terms of the MCs and PVs. Contour and perturbation plots were drawn for each of the responses. Finally, the MPV models and criteria for the peak efficiencies were used to develop the design space and an optimal subregion (i.e., the settings of the MCs and PVs that satisfy the respective criteria), as well as a unique optimal combination of MCs and PVs.

  4. Formulation optimization of transdermal meloxicam potassium-loaded mesomorphic phases containing ethanol, oleic acid and mixture surfactant using the statistical experimental design methodology.

    PubMed

    Huang, Chi-Te; Tsai, Chia-Hsun; Tsou, Hsin-Yeh; Huang, Yaw-Bin; Tsai, Yi-Hung; Wu, Pao-Chu

    2011-01-01

    Response surface methodology (RSM) was used to develop and optimize the mesomorphic phase formulation for a meloxicam transdermal dosage form. A mixture design was applied to prepare formulations which consisted of three independent variables including oleic acid (X(1)), distilled water (X(2)) and ethanol (X(3)). The flux and lag time (LT) were selected as dependent variables. The result showed that using mesomorphic phases as vehicles can significantly increase flux and shorten LT of drug. The analysis of variance showed that the permeation parameters of meloxicam from formulations were significantly influenced by the independent variables and their interactions. The X(3) (ethanol) had the greatest potential influence on the flux and LT, followed by X(1) and X(2). A new formulation was prepared according to the independent levels provided by RSM. The observed responses were in close agreement with the predicted values, demonstrating that RSM could be successfully used to optimize mesomorphic phase formulations.
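
    The optimization step of an RSM mixture study like this one can be illustrated with a made-up fitted model. The quadratic mixture model below is hypothetical (its coefficients are not from the paper); the grid search finds the composition of oleic acid (x1), water (x2) and ethanol (x3), constrained to sum to one, that maximizes the predicted flux.

```python
from itertools import product

def flux_model(x1, x2, x3):
    """Hypothetical fitted response surface for flux as a function of the
    three mixture fractions. Coefficients are invented for illustration."""
    return 40*x1 + 25*x2 + 60*x3 + 30*x1*x3 - 20*x2*x3 - 50*x1*x1

best = None
steps = [i / 100 for i in range(101)]
for x1, x2 in product(steps, steps):
    x3 = 1.0 - x1 - x2
    if x3 < 0:
        continue  # mixture constraint: the three fractions must sum to one
    f = flux_model(x1, x2, x3)
    if best is None or f > best[0]:
        best = (f, x1, x2, x3)
print(best)  # (predicted flux, x1, x2, x3) at the optimum of the surface
```

    In practice the lag time would be modeled the same way and the two surfaces traded off, e.g. by maximizing flux subject to a lag-time ceiling, before confirming the predicted optimum experimentally as the authors did.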

  5. Predicting the effectiveness of road safety campaigns through alternative research designs.

    PubMed

    Adamos, Giannis; Nathanail, Eftihia

    2016-12-01

    A large number of road safety communication campaigns have been designed and implemented in recent years; however, their explicit impact on driving behavior and road accident rates has been estimated in a rather low proportion of cases. Based on the findings of the evaluation of three road safety communication campaigns addressing the issues of drinking and driving, seat belt usage, and driving fatigue, this paper applies different types of research designs (i.e., experimental, quasi-experimental, and non-experimental designs) when estimating the effectiveness of road safety campaigns, implements a cross-design assessment, and conducts a cross-campaign evaluation. An integrated evaluation plan was developed, taking into account the structure of evaluation questions, the definition of measurable variables, the separation of the target audience into intervention (exposed to the campaign) and control (not exposed to the campaign) groups, the selection of alternative research designs, and the appropriate data collection methods and techniques. When the research designs were compared, the separate pre-post samples design demonstrated better predictability than the other designs, especially for data obtained from the intervention group after the realization of the campaign. The more constructs that were added to the independent variables, the higher the predictability became. The construct that most affects behavior is intention, whereas the remaining constructs have a lower impact on behavior. This is particularly significant in the Health Belief Model (HBM). On the other hand, behavioral beliefs, normative beliefs, and descriptive norms are significant parameters for predicting intention according to the Theory of Planned Behavior (TPB). 
The theoretical and applied implications of alternative research designs and their applicability in the evaluation of road safety campaigns are provided by this study.

  6. Effects of accessible website design on nondisabled users: age and device as moderating factors.

    PubMed

    Schmutz, Sven; Sonderegger, Andreas; Sauer, Juergen

    2018-05-01

    This study examined how implementing recommendations from Web accessibility guidelines affects nondisabled people in different age groups using different technical devices. While recent research showed positive effects of implementing such recommendations for nondisabled users, it remained unclear whether such effects would apply across age groups and kinds of devices. A 2 × 2 × 2 design was employed with website accessibility (high accessibility vs. very low accessibility), age (younger adults vs. older adults) and type of device (laptop vs. tablet) as independent variables. 110 nondisabled participants took part in a usability test, in which performance and satisfaction were measured as dependent variables. The results showed that higher accessibility increased the task completion rate, shortened the task completion time and raised the satisfaction ratings of nondisabled users. While user age did not have any effects, users showed faster task completion under high accessibility when using a tablet rather than a laptop. The findings confirmed previous results showing benefits of accessible websites for nondisabled users. These beneficial effects may now be generalised to a wide age range and across different devices. Practitioner Summary: This work is relevant to the design of websites since it emphasises the need to consider the characteristics of different user groups. Accessible website design (aimed at users with disabilities) leads to benefits for nondisabled users across different ages. These findings provide further encouragement for practitioners to apply WCAG 2.0.

  7. Importance of anthropogenic climate impact, sampling error and urban development in sewer system design.

    PubMed

    Egger, C; Maurer, M

    2015-04-15

    Urban drainage design relying on observed precipitation series neglects the uncertainties associated with current and indeed future climate variability. Urban drainage design is further affected by the large stochastic variability of precipitation extremes and sampling errors arising from the short observation periods of extreme precipitation. Stochastic downscaling addresses anthropogenic climate impact by allowing relevant precipitation characteristics to be derived from local observations and an ensemble of climate models. This multi-climate model approach seeks to reflect the uncertainties in the data due to structural errors of the climate models. An ensemble of outcomes from stochastic downscaling allows for addressing the sampling uncertainty. These uncertainties are clearly reflected in the precipitation-runoff predictions of three urban drainage systems. They were mostly due to the sampling uncertainty. The contribution of climate model uncertainty was found to be of minor importance. Under the applied greenhouse gas emission scenario (A1B) and within the period 2036-2065, the potential for urban flooding in our Swiss case study is slightly reduced on average compared to the reference period 1981-2010. Scenario planning was applied to consider urban development associated with future socio-economic factors affecting urban drainage. The impact of scenario uncertainty was to a large extent found to be case-specific, thus emphasizing the need for scenario planning in every individual case. The results represent a valuable basis for discussions of new drainage design standards aiming specifically to include considerations of uncertainty.

  8. Thermoelastic analysis of non-uniform pressurized functionally graded cylinder with variable thickness using first order shear deformation theory(FSDT) and perturbation method

    NASA Astrophysics Data System (ADS)

    Khoshgoftar, M. J.; Mirzaali, M. J.; Rahimi, G. H.

    2015-11-01

    Recently, the application of functionally graded materials (FGMs) has attracted a great deal of interest. These composites combine constituent materials with different micro-structures, so their properties can vary spatially. FGM components with varying thickness under non-uniform pressure are used in aerospace engineering, and their analysis is therefore of high importance in engineering problems. Here, the thermoelastic analysis of a functionally graded cylinder with variable thickness under non-uniform pressure is considered. First-order shear deformation theory and the total potential energy approach are applied to obtain the governing equations of the non-homogeneous cylinder. Perturbation series with matched inner and outer expansions are applied to solve the governing equations: the outer solution holds away from the boundaries, while the more rapidly varying inner solution resolves behavior at the boundaries. Matching the inner and outer solutions across near- and far-boundary regions yields a highly accurate displacement field distribution. The main aim of this paper is to show the capability of the matched asymptotic solution for non-homogeneous cylinders of different shapes under different non-uniform pressures. The results can be used to design the optimum thickness of the cylinder and to exploit properties of non-homogeneous materials such as high-temperature resistance.

  9. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable with high variability. This can cause problems when applying for funding, as the cost will generally be highly variable as well. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to more than two stages, which is shown to considerably reduce the required sample size. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be outweighed by the considerable advantage of a fixed sample size. We also assess the impact of delay between recruitment and assessment, as well as unknown variance, on the drop-the-losers designs.
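The fixed-sample-size property of the drop-the-losers design can be illustrated with a small simulation. The sketch below is not the authors' procedure, just a minimal stdlib-Python rendition of the two-stage selection step, with hypothetical arm effects and an arbitrary normal outcome model (the stage-1 control arm is omitted for brevity):

```python
import random
import statistics

def drop_the_losers(n_arms=4, n_stage1=50, n_stage2=100,
                    effects=None, sd=1.0, seed=1):
    """Simulate a simplified two-stage drop-the-losers selection.

    Stage 1: every experimental arm recruits n_stage1 patients and the
    arm with the best observed mean is selected.  Stage 2: only that
    arm recruits n_stage2 further patients.  The total sample size is
    fixed in advance, unlike a group-sequential design.
    """
    rng = random.Random(seed)
    effects = effects if effects is not None else [0.0] * n_arms
    stage1_means = [statistics.fmean(rng.gauss(mu, sd) for _ in range(n_stage1))
                    for mu in effects]
    winner = max(range(n_arms), key=lambda i: stage1_means[i])
    stage2_mean = statistics.fmean(rng.gauss(effects[winner], sd)
                                   for _ in range(n_stage2))
    total_n = n_arms * n_stage1 + n_stage2   # fixed regardless of outcome
    return winner, stage2_mean, total_n

winner, estimate, total_n = drop_the_losers(effects=[0.0, 0.1, 0.5, 0.2])
```

Extending this to more stages, as the paper proposes, amounts to repeating the selection step on the surviving arms at each prescheduled interim.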

  10. Variable Valve Actuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey Gutterman; A. J. Lasley

    2008-08-31

    Many approaches exist to enable advanced-mode, low-temperature combustion systems for diesel engines - such as premixed charge compression ignition (PCCI), homogeneous charge compression ignition (HCCI) or other HCCI-like combustion modes. The fuel properties and the quantity, distribution and temperature profile of air, fuel and residual fraction in the cylinder can have a marked effect on the heat release rate and combustion phasing. Figure 1 shows that a systems approach is required for HCCI-like combustion. While the exact requirements remain unclear (and will vary depending on fuel, engine size and application), some form of substantially variable valve actuation is a likely element in such a system. Variable valve actuation (VVA), for both intake and exhaust valve events, is a potent tool for controlling the parameters that are critical to HCCI-like combustion and expanding its operational range. Additionally, VVA can be used to optimize the combustion process as well as exhaust temperatures, and so impact the aftertreatment system requirements and associated cost. Delphi Corporation has major manufacturing, product development and applied R&D expertise in the valve train area. Historical R&D experience includes the development of a fully variable electro-hydraulic valve train on research engines as well as several generations of mechanical VVA for gasoline systems. This experience has enabled us to evaluate various implementations and determine the strengths and weaknesses of each. While a fully variable electro-hydraulic valve train system might be the 'ideal' solution technically for maximum flexibility in the timing and control of the valve events, its complexity, associated costs, and high power consumption make its implementation on low-cost, high-volume applications unlikely. Conversely, a simple mechanical system might be a low-cost solution but not deliver the flexibility required for HCCI operation.
After modeling more than 200 variations of the mechanism, it was determined that the single-cam design did not have enough flexibility to satisfy three critical OEM requirements simultaneously (maximum valve lift variation, intake valve opening timing and valve closing duration), and a new approach would be necessary. After numerous internal design reviews, including several with the OEM, a dual-cam design was developed that had the flexibility to meet all motion requirements. The second cam added complexity to the mechanism; however, the cost was offset by the deletion of the electric motor required in the previous design. New patent applications including detailed drawings and potential valve motion profiles were generated, and alternate two-cam designs were proposed and evaluated for function, cost, reliability and durability. Hardware was designed and built, and testing of sample hardware was successfully completed on an engine test stand. The mechanism developed during the course of this investigation can be applied by original equipment manufacturers (OEMs) to their advanced diesel engines with the ultimate goal of reducing emissions and improving fuel economy. The objectives are: (1) Develop an optimal, cost-effective, variable valve actuation (VVA) system for advanced low-temperature diesel combustion processes. (2) Design and model alternative mechanical approaches and down-select for the optimum design. (3) Build and demonstrate a mechanism capable of application on running engines.

  11. A hybrid fuzzy logic/constraint satisfaction problem approach to automatic decision making in simulation game models.

    PubMed

    Braathen, Sverre; Sendstad, Ole Jakob

    2004-08-01

    Possible techniques for representing automatic decision-making behavior that approximates human experts in complex simulation model experiments are of interest. Here, fuzzy logic (FL) and constraint satisfaction problem (CSP) methods are applied in a hybrid design of automatic decision making in simulation game models. The decision processes of a military headquarters are used as a model for the FL/CSP decision agents' choice of variables and rulebases. The hybrid decision agent design is applied in two different types of simulation games to test the general applicability of the design. The first application is a two-sided zero-sum sequential resource allocation game with imperfect information, interpreted as an air campaign game. The second example is a network flow stochastic board game designed to capture important aspects of land manoeuvre operations. The proposed design is shown to perform well also in this complex game with a very large (billion-size) action set. Training of the automatic FL/CSP decision agents against selected performance measures is also shown, and results are presented together with directions for future research.

  12. Optimization of Robust HPLC Method for Quantitation of Ambroxol Hydrochloride and Roxithromycin Using a DoE Approach.

    PubMed

    Patel, Rashmin B; Patel, Nilay M; Patel, Mrunali R; Solanki, Ajay B

    2017-03-01

    The aim of this work was to develop and optimize a robust HPLC method for the separation and quantitation of ambroxol hydrochloride and roxithromycin utilizing a Design of Experiments (DoE) approach. The Plackett-Burman design was used to assess the impact of independent variables (concentration of organic phase, mobile phase pH, flow rate and column temperature) on peak resolution, USP tailing and number of plates. A central composite design was utilized to evaluate the main, interaction, and quadratic effects of the independent variables on the selected dependent variables. The optimized HPLC method was validated according to the ICH Q2(R1) guideline and was used to separate and quantify ambroxol hydrochloride and roxithromycin in tablet formulations. The findings showed that the DoE approach could be effectively applied to optimize a robust HPLC method for quantification of ambroxol hydrochloride and roxithromycin in tablet formulations. Statistical comparison between results of the proposed and a reported HPLC method revealed no significant difference, indicating the suitability of the proposed HPLC method for analysis of ambroxol hydrochloride and roxithromycin in pharmaceutical formulations.
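The Plackett-Burman screening step described above uses a two-level orthogonal array. As a hedged illustration (not this study's exact matrix), the standard 8-run design, which accommodates up to 7 factors, can be built from the published cyclic generator; the four chromatographic factors would occupy four columns and the unused columns estimate noise:

```python
def plackett_burman_8():
    """8-run Plackett-Burman screening design for up to 7 two-level
    factors, coded -1/+1.

    Rows 1-7 are cyclic shifts of the standard N=8 generator; the
    final row sets every factor to its low level.
    """
    g = [+1, +1, +1, -1, +1, -1, -1]      # published N=8 generator
    rows = [[g[(j - i) % 7] for j in range(7)] for i in range(7)]
    rows.append([-1] * 7)                  # closing all-minus run
    return rows

design = plackett_burman_8()
```

Every column is balanced (four high, four low runs) and any two columns are orthogonal, which is what lets main effects be screened independently with so few runs.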

  13. An integrated quality by design and mixture-process variable approach in the development of a capillary electrophoresis method for the analysis of almotriptan and its impurities.

    PubMed

    Orlandini, S; Pasquini, B; Stocchero, M; Pinzauti, S; Furlanetto, S

    2014-04-25

    The development of a capillary electrophoresis (CE) method for the assay of almotriptan (ALM) and its main impurities using an integrated Quality by Design and mixture-process variable (MPV) approach is described. A scouting phase was initially carried out by evaluating different CE operative modes, including the addition of pseudostationary phases and additives to the background electrolyte, in order to approach the analytical target profile. This step made it possible to select normal polarity microemulsion electrokinetic chromatography (MEEKC) as operative mode, which allowed a good selectivity to be achieved in a low analysis time. On the basis of a general Ishikawa diagram for MEEKC methods, a screening asymmetric matrix was applied in order to screen the effects of the process variables (PVs) voltage, temperature, buffer concentration and buffer pH, on critical quality attributes (CQAs), represented by critical separation values and analysis time. A response surface study was then carried out considering all the critical process parameters, including both the PVs and the mixture components (MCs) of the microemulsion (borate buffer, n-heptane as oil, sodium dodecyl sulphate/n-butanol as surfactant/cosurfactant). The values of PVs and MCs were simultaneously changed in a MPV study, making it possible to find significant interaction effects. The design space (DS) was defined as the multidimensional combination of PVs and MCs where the probability for the different considered CQAs to be acceptable was higher than a quality level π=90%. DS was identified by risk of failure maps, which were drawn on the basis of Monte-Carlo simulations, and verification points spanning the design space were tested. Robustness testing of the method, performed by a D-optimal design, and system suitability criteria allowed a control strategy to be designed. The optimized method was validated following ICH Guideline Q2(R1) and was applied to a real sample of ALM coated tablets. 

  14. Variable focus photographic lens without mechanical movements

    NASA Astrophysics Data System (ADS)

    Chen, Jiabi; Peng, Runling; Zhuang, Songlin

    2007-09-01

    A novel design of a zoom lens system without motorized movements is proposed. The lens system consists of a fixed lens and two double-liquid variable-focus lenses. The liquid lenses, made of two immiscible liquids, are based on the principle of electrowetting: an effect controlling the wetting properties of a liquid on a solid by modifying the voltage applied at the solid-liquid interface. The structure and principle of the lens system are introduced in this paper, and detailed calculations and simulation examples are presented to show how the two liquid lenses are coordinated to meet the basic requirements of zoom lenses.
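The electrowetting effect mentioned above is commonly summarized by the Young-Lippmann equation: the cosine of the contact angle shifts with the square of the applied voltage. The snippet below is a generic textbook calculation with assumed material parameters (initial contact angle, dielectric permittivity and thickness, interfacial tension), not values from this lens:

```python
import math

def contact_angle(v, theta0_deg=140.0, eps_r=3.0, d=1e-6, gamma=0.04):
    """Apparent contact angle (degrees) at applied voltage v (V), via
    the Young-Lippmann relation, for a dielectric layer of relative
    permittivity eps_r and thickness d (m), and liquid-liquid
    interfacial tension gamma (N/m).  All parameter values are
    illustrative assumptions."""
    eps0 = 8.854e-12                          # vacuum permittivity, F/m
    cos_theta = (math.cos(math.radians(theta0_deg))
                 + eps0 * eps_r * v**2 / (2.0 * gamma * d))
    # clamp against saturation before inverting the cosine
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))
```

Lowering the contact angle with voltage reshapes the liquid-liquid meniscus, which is what tunes each lens's focal length without any moving parts.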

  15. Influence of thermodynamic parameter in Lanosterol 14alpha-demethylase inhibitory activity as antifungal agents: a QSAR approach.

    PubMed

    Vasanthanathan, Poongavanam; Lakshmi, Manickavasagam; Arockia Babu, Marianesan; Kaskhedikar, Sathish Gopalrao

    2006-06-01

    A quantitative structure-activity relationship (QSAR) study using the Hansch approach was applied to twenty chromene derivatives with lanosterol 14alpha-demethylase inhibitory activity against eight fungal organisms. Various physicochemical descriptors and the reported minimum inhibitory concentration values for the different fungal organisms were used as the independent and dependent variables, respectively. The best models for the eight fungal organisms were validated by a leave-one-out cross-validation procedure. Thermodynamic parameters were found to have an overall significant correlation with antifungal activity, and these studies provide insight for designing new molecules.
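The leave-one-out validation used for these QSAR models can be sketched as follows. This is a generic one-descriptor illustration (hypothetical descriptor and activity arrays, not the published models), reporting the cross-validated Q², a common leave-one-out figure of merit:

```python
import statistics

def _fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one descriptor."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def loo_q2(xs, ys):
    """Leave-one-out cross-validated Q^2: each compound is held out in
    turn, the model is refit on the rest, and the held-out activity is
    predicted; Q^2 compares the summed squared prediction errors
    (PRESS) to the total variance of y."""
    press = 0.0
    for i in range(len(xs)):
        xt = [x for j, x in enumerate(xs) if j != i]
        yt = [y for j, y in enumerate(ys) if j != i]
        slope, intercept = _fit_line(xt, yt)
        press += (ys[i] - (slope * xs[i] + intercept)) ** 2
    ss_tot = sum((y - statistics.fmean(ys)) ** 2 for y in ys)
    return 1.0 - press / ss_tot
```

With several descriptors, as in the Hansch models here, the same held-out loop runs around a multilinear fit instead.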

  16. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2014-01-01

    The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
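Equivalence testing as described here is usually operationalized as two one-sided tests (TOST). A minimal sketch, using a large-sample normal approximation rather than the t distribution and assuming a known common SD:

```python
import math

def _phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost_equivalent(mean1, mean2, sd, n1, n2, margin, alpha=0.05):
    """Two one-sided tests: the groups are declared equivalent when the
    mean difference is significantly greater than -margin AND
    significantly less than +margin."""
    se = sd * math.sqrt(1.0 / n1 + 1.0 / n2)
    diff = mean1 - mean2
    p_lower = 1.0 - _phi((diff + margin) / se)   # H0: diff <= -margin
    p_upper = _phi((diff - margin) / se)         # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha
```

Low power with small or highly variable samples shows up here as failures to declare equivalence even when the groups are truly comparable, which is exactly the behavior the simulation study examines.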

  17. Assessment of modification factors for a row of bolts or timber connectors

    Treesearch

    Thomas Lee Wilkinson

    1980-01-01

    When bolts or timber connectors are used in a row, with load applied parallel to the row, load will be unequally distributed among the fasteners. This study assessed methods of predicting this unequal load distribution, looked at how joint variables can affect the distribution, and compared the predictions with data existing in the literature. Presently used design...

  18. Applying Hierarchical Linear Models (HLM) to Estimate the School and Children's Effects on Reading Achievement

    ERIC Educational Resources Information Center

    Liu, Xing

    2008-01-01

    The purpose of this study was to illustrate the use of Hierarchical Linear Models (HLM) to investigate the effects of school and children's attributes on children' reading achievement. In particular, this study was designed to: (1) develop the HLM models to determine the effects of school-level and child-level variables on children's reading…
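A common first step in such an HLM analysis is estimating how much outcome variance lies between schools versus within them. A rough stdlib sketch of the one-way ANOVA estimator of the intraclass correlation for a balanced design (illustrative, not the study's fitted model):

```python
import statistics

def intraclass_correlation(groups):
    """One-way ANOVA estimate of the intraclass correlation (ICC):
    the share of total outcome variance lying between groups
    (e.g. schools).  An appreciable ICC is the usual justification
    for a multilevel model over plain OLS.  Assumes a balanced
    design (equal pupils per school)."""
    n = len(groups[0])                     # pupils per school
    k = len(groups)                        # number of schools
    grand = statistics.fmean(x for g in groups for x in g)
    ms_between = n * sum((statistics.fmean(g) - grand) ** 2
                         for g in groups) / (k - 1)
    ms_within = sum((x - statistics.fmean(g)) ** 2
                    for g in groups for x in g) / (k * (n - 1))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)
```

An ICC near zero suggests school-level predictors will explain little; a large ICC motivates the two-level HLM specification the study develops.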

  19. 26 CFR 1.1274-2 - Issue price of debt instruments to which section 1274 applies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...- borrower to the seller-lender that is designated as interest or points. See Example 2 of § 1.1273-2(g)(5... ignored. (f) Treatment of variable rate debt instruments—(1) Stated interest at a qualified floating rate... qualified floating rate (or rates) is determined by assuming that the instrument provides for a fixed rate...

  20. Apparatus and method for microwave processing of materials using field-perturbing tool

    DOEpatents

    Tucker, Denise A.; Fathi, Zakaryae; Lauf, Robert J.

    2001-01-01

    A variable frequency microwave heating apparatus designed to allow modulation of the frequency of the microwaves introduced into a multi-mode microwave cavity for heating or other selected applications. A field-perturbing tool is disposed within the cavity to perturb the microwave power distribution in order to apply a desired level of microwave power to the workpiece.

  1. Almost human: Anthropomorphism increases trust resilience in cognitive agents.

    PubMed

    de Visser, Ewart J; Monfort, Samuel S; McKendrick, Ryan; Smith, Melissa A B; McKnight, Patrick E; Krueger, Frank; Parasuraman, Raja

    2016-09-01

    We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human–automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism—the degree to which an agent exhibits human characteristics—is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human–agent trust as well as novel automation design.

  2. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    USGS Publications Warehouse

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
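One of the basic constructions the report describes, a confidence interval on the mean of field quality-control results, can be sketched as follows (a large-sample normal quantile is used for brevity; the report's methods would use the appropriate t quantile for small n):

```python
import math
import statistics

def mean_confidence_interval(qc_values, z=1.96):
    """Approximate two-sided 95% confidence interval on the mean of
    quality-control replicate results, e.g. blank-sample
    concentrations used to bound potential contamination bias."""
    m = statistics.fmean(qc_values)
    se = statistics.stdev(qc_values) / math.sqrt(len(qc_values))
    return m - z * se, m + z * se
```

If such an interval around the blank mean overlaps the concentrations of interest in the environmental data, interpretation of those data would be qualified accordingly.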

  3. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  4. Multidrug Resistance among New Tuberculosis Cases: Detecting Local Variation through Lot Quality-Assurance Sampling

    PubMed Central

    Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-01-01

    Background: Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods: We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results: The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions: Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242

  5. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    PubMed

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%; and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
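The two-way static LQAS rule evaluated in both versions of this study can be sketched with exact binomial error calculations. The 2% and 10% prevalence thresholds below mirror the paper's first setting; the error levels in the usage note are illustrative, not the study's operating characteristics:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def lqas_decision_rule(n, p_low=0.02, p_high=0.10, alpha=0.05, beta=0.05):
    """Find a two-way static LQAS threshold d for a lot of n new cases:
    classify the area as high-MDR when more than d resistant cases are
    observed.  The rule is acceptable when the risk of calling a truly
    low-prevalence (p_low) area high is <= alpha and the risk of
    calling a truly high-prevalence (p_high) area low is <= beta.
    Returns None when no threshold satisfies both risks at this n."""
    for d in range(n + 1):
        err_low = 1.0 - binom_cdf(d, n, p_low)   # low area called high
        err_high = binom_cdf(d, n, p_high)       # high area called low
        if err_low <= alpha and err_high <= beta:
            return d
    return None
```

With n = 100 and 5% misclassification risks on both sides, no threshold exists, which illustrates the abstract's point that required sample sizes may be unobtainable; relaxing both risks to 10% yields a threshold of 4 resistant cases.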

  6. QbD for pediatric oral lyophilisates development: risk assessment followed by screening and optimization.

    PubMed

    Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan

    2017-12-01

    This study proposed the development of oral lyophilisates with respect to pediatric medicine development guidelines, by applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process, formulation and methodology related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and with a high impact on the product performance. To investigate the effect of qualitative and quantitative formulation variables D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors, and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties selected as CQAs. The optimum formulation selected through Design Space presented ultra-fast disintegration time (5 seconds), a good dissolution rate (above 90%) combined with a high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science based development of a product with respect to the defined quality target profile by providing better insights on the relevant parameters throughout development process. The utility of risk management tools in pharmaceutical development was demonstrated.

  7. Nonparametric regression applied to quantitative structure-activity relationships

    PubMed

    Constans; Hirst

    2000-03-01

    Several nonparametric regressors have been applied to modeling quantitative structure-activity relationship (QSAR) data. The simplest regressor, the Nadaraya-Watson, was assessed in a genuine multivariate setting. Other regressors, the local linear and the shifted Nadaraya-Watson, were implemented within additive models--a computationally more expedient approach, better suited for low-density designs. Performances were benchmarked against the nonlinear method of smoothing splines. A linear reference point was provided by multilinear regression (MLR). Variable selection was explored using systematic combinations of different variables and combinations of principal components. For the data set examined, 47 inhibitors of dopamine beta-hydroxylase, the additive nonparametric regressors have greater predictive accuracy (as measured by the mean absolute error of the predictions or the Pearson correlation in cross-validation trials) than MLR. The use of principal components did not improve the performance of the nonparametric regressors over use of the original descriptors, since the original descriptors are not strongly correlated. It remains to be seen if the nonparametric regressors can be successfully coupled with better variable selection and dimensionality reduction in the context of high-dimensional QSARs.
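The simplest of the regressors studied, the Nadaraya-Watson estimator, is short enough to state directly. A one-descriptor Gaussian-kernel sketch (the paper's setting is multivariate; a product kernel extends this):

```python
import math

def nadaraya_watson(x_train, y_train, x_query, bandwidth=1.0):
    """Nadaraya-Watson kernel regressor with a Gaussian kernel: the
    prediction is a locally weighted average of training activities,
    with weights decaying smoothly in descriptor distance."""
    weights = [math.exp(-((x_query - x) ** 2) / (2.0 * bandwidth ** 2))
               for x in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)
```

The bandwidth plays the role that model complexity plays in parametric fits: small values track local structure, large values approach the global mean, and in practice it is tuned by cross-validation as in the study's benchmarking.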

  8. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial improvement in the complexity of optimization problems that can be handled efficiently.
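NPSOL's sequential quadratic programming method belongs to the same family as SciPy's SLSQP, which can illustrate the problem class the abstract describes: an objective under simple bounds plus a smooth nonlinear constraint. The toy problem below (a hypothetical two-variable "cost vs. attenuation" trade-off, not an actual shield model) shows the setup:

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    """Objective: total thickness of two shield layers (toy stand-in
    for the cost functional minimized at constant dose)."""
    return x[0] + x[1]

# Invented smooth nonlinear inequality constraint x1 * x2 >= 4
# (an "attenuation" requirement), plus non-negativity bounds.
dose_ok = {"type": "ineq", "fun": lambda x: x[0] * x[1] - 4.0}

res = minimize(cost, x0=np.array([1.0, 3.0]), method="SLSQP",
               bounds=[(0.0, None), (0.0, None)], constraints=[dose_ok])
```

SQP solvers handle such problems by solving a quadratic subproblem at each iterate; here the unique KKT point is x1 = x2 = 2 with cost 4.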

  9. Optimization and design of ibuprofen-loaded nanostructured lipid carriers using a hybrid-design approach for ocular drug delivery

    NASA Astrophysics Data System (ADS)

    Rathod, Vishal

    The objective of the present project was to develop ibuprofen-loaded nanostructured lipid carriers (IBU-NLCs) for topical ocular delivery, based on substantial pre-formulation screening of the components and an understanding of the interplay between the formulation and process variables. The BCS Class II drug ibuprofen was selected as the model drug for the current study. IBU-NLCs were prepared by a melt emulsification and ultrasonication technique. Extensive pre-formulation studies were performed to screen the lipid components (solid and liquid) based on the drug's solubility and affinity as well as component compatibility. The results from DSC and XRD assisted in selecting the most suitable ratio to be utilized for future studies. Dynasan™ 114 was selected as the solid lipid and Miglyol™ 840 as the liquid lipid based on preliminary lipid screening. The ratio of 6:4 was predicted to be the best based on its crystallinity index and thermal events. As many variables are involved in further optimization of the formulation, a single design approach is not always adequate. A hybrid-design approach was applied by employing a Plackett-Burman design (PBD) for preliminary screening of 7 critical variables, followed by a Box-Behnken design (BBD), a sub-type of response surface methodology (RSM), using 2 relatively significant variables from the former design and incorporating the surfactant/co-surfactant ratio as the third variable. Comparatively, Kolliphor™ HS15 demonstrated lower mean particle size (PS) and polydispersity index (PDI), and Kolliphor™ P188 resulted in zeta potential (ZP) < -20 mV during the surfactant screening and stability studies. Hence, the surfactant/co-surfactant ratio was employed as the third variable to understand its synergistic effect on the response variables. We selected PS, PDI, and ZP as critical response variables in the PBD since they significantly influence the stability and performance of NLCs.
Formulations prepared using the BBD were further characterized and evaluated with respect to PS, PDI, ZP and entrapment efficiency (EE) to identify the multi-factor interactions between the selected formulation variables. In vitro release studies were performed using a Spectra/Por dialysis membrane on a Franz diffusion cell with phosphate-buffered saline (pH 7.4) as the medium. Samples for assay, EE, loading capacity (LC), solubility studies and in vitro release were filtered using Amicon 50K filters and analyzed via a UPLC system (Waters) at a detection wavelength of 220 nm. Significant variables were selected through the PBD, and the third variable was incorporated based on the surfactant screening and stability studies for the next design. Assay of the BBD-based formulations was found to be within 95-104% of the theoretically calculated values. PS, PDI, ZP and EE were investigated further. PS was found to be in the range of 103-194 nm, with PDI ranging from 0.118 to 0.265. The ZP and EE were observed to be in the ranges of -22.2 to -11 mV and 90 to 98.7%, respectively. Drug release of 30% was observed from the optimized formulation in the first 6 h of in vitro studies, and the formulation showed sustained release of ibuprofen thereafter over several hours. These values also confirm that the production method, and all other selected variables, effectively promoted the incorporation of ibuprofen into the NLCs. A Quality by Design (QbD) approach was successfully implemented in developing a robust ophthalmic formulation with superior physicochemical and morphometric properties. NLCs as nanocarriers demonstrated a promising perspective for topical delivery of poorly water-soluble drugs.

  10. Central composite rotatable design for investigation of microwave-assisted extraction of ginger (Zingiber officinale)

    NASA Astrophysics Data System (ADS)

    Fadzilah, R. Hanum; Sobhana, B. Arianto; Mahfud, M.

    2015-12-01

    The microwave-assisted extraction technique was employed to extract essential oil from ginger. The optimal conditions for microwave-assisted extraction of ginger were determined by response surface methodology. A central composite rotatable design was applied to evaluate the effects of three independent variables. The variables were microwave power of 400-800 W as X1, feed-solvent ratio of 0.33-0.467 as X2, and feed size of 1 cm, 0.25 cm and less than 0.2 cm as X3. The correlation analysis of the mathematical modelling indicated that a quadratic polynomial could be employed to optimize microwave-assisted extraction of ginger. The optimal conditions to obtain the highest yield of essential oil were: microwave power of 597.163 W, feed-solvent ratio, and feed size of less than 0.2 cm.
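The coded points of a central composite rotatable design such as the one above can be generated directly; rotatability fixes the axial ("star") distance at the fourth root of the number of factorial runs. A minimal sketch (generic, not the authors' design matrix):

```python
import itertools
import numpy as np

def ccrd_points(k=3, n_center=6):
    """Coded design points of a central composite rotatable design.

    For k factors: 2**k factorial corner points at +/-1, 2*k axial
    points at +/-alpha, and n_center replicated centre points.
    Rotatability requires alpha = (2**k) ** 0.25.
    """
    alpha = (2 ** k) ** 0.25          # ~1.682 for k = 3
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])
```

For three factors this yields 8 + 6 + 6 = 20 runs; the coded levels would then be mapped onto the real ranges (e.g. 400-800 W for X1).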

  11. Modeling and design optimization of adhesion between surfaces at the microscale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sylves, Kevin T.

    2008-08-01

    This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.

  12. A pilot rating scale for evaluating failure transients in electronic flight control systems

    NASA Technical Reports Server (NTRS)

    Hindson, William S.; Schroeder, Jeffery A.; Eshow, Michelle M.

    1990-01-01

    A pilot rating scale was developed to describe the effects of transients in helicopter flight-control systems on safety-of-flight and on pilot recovery action. The scale was applied to the evaluation of hardovers that could potentially occur in the digital flight-control system being designed for a variable-stability UH-60A research helicopter. Tests were conducted in a large moving-base simulator and in flight. The results of the investigation were combined with existing airworthiness criteria to determine quantitative reliability design goals for the control system.

  13. Supporting second grade lower secondary school students’ understanding of linear equation system in two variables using ethnomathematics

    NASA Astrophysics Data System (ADS)

    Nursyahidah, F.; Saputro, B. A.; Rubowo, M. R.

    2018-03-01

    The aim of this research is to investigate students' understanding of the linear equation system in two variables using Ethnomathematics, and to obtain a learning trajectory for this topic for second-grade lower secondary school students. The research used the design research methodology, which consists of three phases: preliminary design, teaching experiment, and retrospective analysis. The subjects of this study were 28 second-grade students of Sekolah Menengah Pertama (SMP) 37 Semarang. The results show that students' understanding of the linear equation system in two variables can be stimulated by using Ethnomathematics, with the buying and selling tradition of the Peterongan traditional market in Central Java as a context. The strategies and models applied by the students, together with their discussion of results, show how the students' own constructions and contributions help them understand the concept of the linear equation system in two variables. The activities carried out by the students produce a learning trajectory toward the learning goal, and each step of that trajectory plays an important role in moving understanding of the concept from the informal to the formal level. The resulting Ethnomathematics-based learning trajectory consists of watching a video of buying and selling activity in the Peterongan traditional market to construct a linear equation in two variables, determining the solution of a linear equation in two variables, constructing a model of a linear equation system in two variables from a contextual problem, and solving a contextual problem related to a linear equation system in two variables.

  14. Applying causal mediation analysis to personality disorder research.

    PubMed

    Walters, Glenn D

    2018-01-01

    This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
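The bootstrap test of an indirect effect recommended above can be sketched with ordinary least squares on synthetic data. All variable names, effect sizes, and the data-generating process below are illustrative assumptions, not taken from the article:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: a from M ~ X, b from Y ~ M + X (X controlled)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=0):
    """Nonparametric bootstrap percentile CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # resample cases with replacement
        stats.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.percentile(stats, [2.5, 97.5])

# Synthetic mediation chain X -> M -> Y with a = 0.5, b = 0.6
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(scale=0.5, size=300)
y = 0.6 * m + rng.normal(scale=0.5, size=300)
lo, hi = bootstrap_ci(x, m, y)   # a CI excluding 0 supports mediation
```

Resampling whole cases (rows) preserves the joint structure of the variables, which is why the percentile interval on a*b is preferred over a normal-theory test of the product.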

  15. A torsional MRE joint for a C-shaped robotic leg

    NASA Astrophysics Data System (ADS)

    Christie, M. D.; Sun, S. S.; Ning, D. H.; Du, H.; Zhang, S. W.; Li, W. H.

    2017-01-01

    Serving to improve stability and energy efficiency during locomotion, animals in nature modulate their leg stiffness to adapt to their terrain. Now incorporated into many locomotive robot designs, such compliance control can enable disturbance rejection and improved transitions between changing ground conditions. This paper presents a novel design of a variable stiffness leg utilizing a magnetorheological elastomer joint in a literal rolling spring loaded inverted pendulum (R-SLIP) morphology. Through semi-active control of this hybrid permanent-magnet and coil design, variable stiffness is realized, offering a design capable of both adaptive softening and stiffening, with a maximum stiffness change of 48.0%. Experimental characterization first assesses the stiffness variation capacity of the torsional joint; through later comparison with force testing of the leg, the linear stiffness is characterized and the R-SLIP-like behavior of the leg is demonstrated. From the applied force relationships, a generalized relationship for determining linear stiffness from the joint rotation angle is also proposed, further aiding experimental validation.

  16. Design of a Variable Stiffness Soft Dexterous Gripper

    PubMed Central

    Nefti-Meziani, Samia; Davis, Steve

    2017-01-01

    This article presents the design of a variable stiffness, soft, three-fingered dexterous gripper. The gripper uses two designs of McKibben muscles. Extensor muscles that increase in length when pressurized are used to form the fingers of the gripper. Contractor muscles that decrease in length when pressurized are then used to apply forces to the fingers through tendons, which cause flexion and extension of the fingers. The two types of muscles are arranged to act antagonistically and this means that by raising the pressure in all of the pneumatic muscles, the stiffness of the system can be increased without a resulting change in finger position. The article presents the design of the gripper, some basic kinematics to describe its function, and then experimental results demonstrating the ability to adjust the bending stiffness of the gripper's fingers. It has been demonstrated that the fingers' bending stiffness can be increased by more than 150%. The article concludes by demonstrating that the fingers can be closed loop position controlled and are able to track step and sinusoidal inputs. PMID:29062630
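The antagonistic co-contraction principle described above can be illustrated with a toy model. Under the simplifying assumption (not the authors' model) that each muscle behaves as a pressure-scaled linear spring, the opposing forces cancel at equilibrium while the stiffnesses add, so stiffness scales with pressure at a fixed position:

```python
def net_force(x, p, f0=10.0, k=50.0):
    """Net force of an antagonistic pair of pressure-scaled, spring-like
    muscles about displacement x; f0 and k are illustrative constants."""
    flexor = p * (f0 - k * x)      # pulls in +x, weakens as x grows
    extensor = p * (f0 + k * x)    # pulls in -x, strengthens as x grows
    return flexor - extensor       # = -2 * p * k * x

def stiffness(p, dx=1e-6):
    """Local stiffness -dF/dx via central finite difference."""
    return -(net_force(dx, p) - net_force(-dx, p)) / (2 * dx)
```

In this toy model the equilibrium stays at x = 0 for any common pressure p, while the stiffness 2*p*k doubles when the pressure doubles, mirroring how co-contraction raises finger stiffness without moving the finger.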

  17. Stress state estimation in multilayer support of vertical shafts, considering off-design cross-sectional deformation

    NASA Astrophysics Data System (ADS)

    Antsiferov, SV; Sammal, AS; Deev, PV

    2018-03-01

    To determine the stress-strain state of multilayer support of vertical shafts, accounting for deviation of the tubing-ring cross sections from the design shape, the authors propose an analytical method based on the provisions of the mechanics of underground structures, which treats the support and the surrounding rock mass as elements of an integrated deformable system. The method involves a rigorous solution of the corresponding elasticity problem, obtained using the mathematical apparatus of the theory of analytic functions of a complex variable. The design method is implemented as a software program allowing multivariate applied computation. Examples of the calculation are given.

  18. Low vibration high numerical aperture automated variable temperature Raman microscope

    DOE PAGES

    Tian, Y.; Reijnders, A. A.; Osterhoudt, G. B.; ...

    2016-04-05

    Raman micro-spectroscopy is well suited for studying a variety of properties and has been applied to wide-ranging areas. Combined with tuneable temperature, Raman spectra can offer even more insights into the properties of materials. However, previous designs of variable temperature Raman microscopes have made it extremely challenging to measure samples with low signal levels due to thermal and positional instability as well as low collection efficiencies. Thus, contemporary Raman microscopes have found limited applicability in probing the subtle physics involved in phase transitions and hysteresis. This paper describes a new design of a closed-cycle Raman microscope with full polarization rotation. High collection efficiency and thermal and mechanical stability are ensured by deliberate optical, cryogenic, and mechanical design. Measurements on two samples, Bi2Se3 and V2O3, which are known to be challenging due to low thermal conductivities, low signal levels and/or hysteretic effects, are presented with previously undemonstrated temperature resolution.

  20. Effect of Variable Emittance Coatings on the Operation of a Miniature Loop Heat Pipe

    NASA Technical Reports Server (NTRS)

    Douglas, Donya M.; Ku, Jentung; Ottenstein, Laura; Swanson, Theodore; Hess, Steve; Darrin, Ann

    2005-01-01

    As the size of spacecraft shrinks to accommodate smaller and more efficient instruments, smaller launch vehicles, and constellation missions, all subsystems must also be made smaller. Under NASA NFL4 03-OSS-02, Space Technology-8 (ST 8), NASA Goddard Space Flight Center and the Jet Propulsion Laboratory jointly conducted a Concept Definition study to develop a miniature loop heat pipe (MLHP) thermal management system design suitable for future small spacecraft. The proposed MLHP thermal management system consists of a miniature loop heat pipe (LHP) and deployable radiators that are coated with variable emittance coatings (VECs). As part of the Phase A study and proof of the design concept, variable emittance coatings were integrated with a breadboard miniature loop heat pipe. The miniature loop heat pipe was supplied by the Jet Propulsion Laboratory (JPL), while the variable emittance technology was supplied by the Johns Hopkins University Applied Physics Laboratory and Sensortex, Inc. The entire system was tested under vacuum at various temperature extremes and power loads. This paper summarizes the results of this testing and shows the effect of the VECs on the operation of a miniature loop heat pipe.

  1. Chemometric optimization of the robustness of the near infrared spectroscopic method in wheat quality control.

    PubMed

    Pojić, Milica; Rakić, Dušan; Lazić, Zivorad

    2015-01-01

    A chemometric approach was applied for the optimization of the robustness of the NIRS method for wheat quality control. Due to the high number of experimental variables (n=6) and response variables (n=7) to be studied, the optimization experiment was divided into two stages: a screening stage to evaluate which of the considered variables were significant, and an optimization stage to optimize the identified factors within the previously selected experimental domain. The significant variables were identified by using a fractional factorial experimental design, whilst a Box-Wilson rotatable central composite design (CCRD) was run to obtain the optimal values for the significant variables. The measured responses included: moisture, protein and wet gluten content, Zeleny sedimentation value and deformation energy. In order to achieve minimal variation in the responses, the optimal factor settings were found by minimizing the propagation of error (POE). The simultaneous optimization of factors was conducted by a desirability function. The highest desirability of 87.63% was accomplished by setting up the experimental conditions as follows: 19.9°C for sample temperature, 19.3°C for ambient temperature and 240 V for instrument voltage. Copyright © 2014 Elsevier B.V. All rights reserved.
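The desirability-based simultaneous optimization used above combines several responses into one score. A sketch of the generic Derringer-Suich form (the limits, target, and weight below are invented for illustration, not the study's settings):

```python
def desirability_target(y, low, target, high, w=1.0):
    """Two-sided desirability: 1 at the target value, falling to 0
    at the acceptability limits low and high; w shapes the ramp."""
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** w
    return ((high - y) / (high - target)) ** w

def overall_desirability(ds):
    """Overall desirability D: geometric mean of the individual d_i,
    so D = 0 whenever any single response is unacceptable."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

Factor settings are then chosen to maximize D over the experimental domain; the geometric mean is what forces a compromise, since no response can be sacrificed entirely.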

  2. Determination of melamine in soil samples using surfactant-enhanced hollow fiber liquid phase microextraction followed by HPLC–UV using experimental design

    PubMed Central

    Sarafraz Yazdi, Ali; Raouf Yazdinezhad, Samaneh; Heidari, Tahereh

    2014-01-01

    Surfactant-enhanced hollow fiber liquid phase microextraction (SE-HF-LPME) was applied for the extraction of melamine in conjunction with high performance liquid chromatography with UV detection (HPLC–UV). Sodium dodecyl sulfate (SDS) was first added to the sample solution at pH 1.9 to form a hydrophobic ion pair with protonated melamine. The protonated melamine–dodecyl sulfate ion pair (Mel–DS) was then extracted from the aqueous phase into the organic phase immobilized in the pores and lumen of the hollow fiber. After extraction, the analyte-enriched 1-octanol was withdrawn into the syringe and injected into the HPLC. Preliminarily, a one-variable-at-a-time method was applied to select the type of extraction solvent. Then, in the screening step, the other variables that might affect the extraction efficiency of the analyte were studied using a fractional factorial design. In the next step, a central composite design was applied for optimization of the significant factors having positive effects on extraction efficiency. The optimum operational conditions were: sample volume, 5 mL; surfactant concentration, 1.5 mM; pH 1.9; stirring rate, 1500 rpm; and extraction time, 60 min. Using the optimum conditions, the method was analytically evaluated. The detection limit, relative standard deviation and linear range were 0.005 μg mL−1, 4.0% (3 μg mL−1, n = 5) and 0.01–8 μg mL−1, respectively. The performance of the procedure in extraction of melamine from the soil samples was good according to its relative recoveries at different spiking levels (95–109%). PMID:26644934
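The fractional factorial screening step above trades runs for aliasing. As a generic illustration (the abstract does not state which fraction was used), a half-fraction 2^(4-1) design runs a full 2^3 in factors A, B, C and derives D from the generator D = ABC:

```python
import itertools

def half_fraction_2_4():
    """2^(4-1) fractional factorial: 8 runs for 4 two-level factors.

    The defining relation is I = ABCD, so every run satisfies
    A*B*C*D = +1 and main effects are aliased only with
    three-factor interactions.
    """
    runs = []
    for a, b, c in itertools.product([-1, 1], repeat=3):
        runs.append((a, b, c, a * b * c))   # generator: D = A*B*C
    return runs
```

Each column remains balanced, so the 8 runs still estimate all four main effects, at the cost of confounding them with higher-order interactions assumed negligible at the screening stage.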

  3. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and to have the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  4. Personality and long term exposure to organic solvents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindstroem, K.; Martelin, T.

    1980-01-01

    Personality, especially the emotional reactions, of two solvent-exposed groups and a nonexposed reference group was described by means of 20 formal, content and check-list type Rorschach variables. Another objective of the study was to explore the suitability and psychological meaning of other types of Rorschach variables than those applied earlier in the field of behavioral toxicology. The factor analyses grouped the applied variables into factors of Productivity, Ego Strength, Control of Emotionality, Defensive Introversion and Aggressiveness. One solvent group, a patient group (N=53), was characterized by a high number of Organic signs and a low Genetic Level, indicating possible psychoorganic deterioration. The other solvent group, styrene exposed but subjectively healthy (N=98), was characterized by few emotional reactions, low Anxiety and a low number of Neurotic Signs. The long duration of exposure of the solvent patient group (mean 10.2 +/- 8.7 years) was related to variables of the Productivity factor, a finding that indicates possibly better adjustment of those exposed for a longer time. The duration of exposure of the styrene-exposed group (mean 4.9 +/- 3.2 years) revealed only a very slight relation to personality variables, but the mean urinary mandelic acid concentration, indicating the level of styrene exposure, correlated with increased emotional reactions. For the most part, definite causal conclusions could not be drawn because of the cross-sectional design of the study.

  5. Latent structure modeling underlying theophylline tablet formulations using a Bayesian network based on a self-organizing map clustering.

    PubMed

    Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo

    2015-01-01

    The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and design space. In this article, we integrate thin-plate spline (TPS) interpolation, Kohonen's self-organizing map (SOM) and a Bayesian network (BN) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared using a standard formulation. We measured the tensile strength and disintegration time as response variables and the compressibility, cohesion and dispersibility of the pretableting blend as latent variables. We predicted these variables quantitatively using nonlinear TPS, generated a large amount of data on pretableting blends and tablets and clustered these data into several clusters using a SOM. Our results show that we are able to predict the experimental values of the latent and response variables with a high degree of accuracy and are able to classify the tablet data into several distinct clusters. In addition, to visualize the latent structure between the causal and latent factors and the response variables, we applied a BN method to the SOM clustering results. We found that despite having inserted latent variables between the causal factors and response variables, their relation is equivalent to the results for the SOM clustering, and thus we are able to explain the underlying latent structure. Consequently, this technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulation.
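The SOM clustering step above can be sketched with the generic Kohonen update rule, which is far simpler than the study's full TPS-SOM-BN pipeline; the data, map size, and schedule below are illustrative assumptions:

```python
import numpy as np

def train_som(data, n_units=4, n_epochs=50, lr=0.5, sigma=1.0, seed=0):
    """1-D Kohonen map: pull the best-matching unit (BMU) and its
    map neighbours toward each sample, with the learning rate and
    neighbourhood width shrinking linearly over the epochs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    for epoch in range(n_epochs):
        frac = 1.0 - epoch / n_epochs
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            grid_dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-grid_dist ** 2 / (2 * (sigma * frac + 1e-3) ** 2))
            w += lr * frac * h[:, None] * (x - w)
    return w

# Two well-separated synthetic 2-D clusters (illustrative data only)
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0.0, 0.3, (40, 2)),
                  rng.normal(5.0, 0.3, (40, 2))])
w = train_som(data)
```

After training, each sample's BMU index serves as its cluster label; in the study's setting, those cluster labels are what feed the Bayesian network stage.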

  6. Fulfilling the law of a single independent variable and improving the result of mathematical educational research

    NASA Astrophysics Data System (ADS)

    Pardimin, H.; Arcana, N.

    2018-01-01

    Much research in the field of mathematics education applies the quasi-experimental method with statistical analysis by t-test. The quasi-experiment has a weakness: it is difficult to fulfil "the law of a single independent variable". The t-test also has a weakness: the generalization of the conclusions obtained is less powerful. This research aimed to find ways to reduce the weaknesses of the quasi-experimental method and to improve the generalizability of research results. The method applied in the research was a non-interactive qualitative method of the concept-analysis type. The concepts analysed were the concepts of statistics, educational research methods, and research reports. The result is a way to overcome the weaknesses of quasi-experiments and the t-test: to apply a combination of factorial design and balanced design, which the authors refer to as Factorial-Balanced Design. The advantages of this design are: (1) it almost fulfils "the law of a single independent variable", so there is no need to test the similarity of academic ability; and (2) the sample sizes of the experimental and control groups become larger and equal, so the design is robust to violations of the assumptions of the ANOVA test.

  7. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  8. Design Life Level: Quantifying risk in a changing climate

    NASA Astrophysics Data System (ADS)

    Rootzén, Holger; Katz, Richard W.

    2013-09-01

    In the past, the concepts of return levels and return periods have been standard and important tools for engineering design. However, these concepts are based on the assumption of a stationary climate and do not apply to a changing climate, whether local or global. In this paper, we propose a refined concept, Design Life Level, which quantifies risk in a nonstationary climate and can serve as the basis for communication. In current practice, typical hydrologic risk management focuses on a standard (e.g., in terms of a high quantile corresponding to the specified probability of failure for a single year). Nevertheless, the basic information needed for engineering design should consist of (i) the design life period (e.g., the next 50 years, say 2015-2064); and (ii) the probability (e.g., 5% chance) of a hazardous event (typically, in the form of the hydrologic variable exceeding a high level) occurring during the design life period. Capturing both of these design characteristics, the Design Life Level is defined as an upper quantile (e.g., 5%) of the distribution of the maximum value of the hydrologic variable (e.g., water level) over the design life period. We relate this concept and variants of it to existing literature and illustrate how they, and some useful complementary plots, may be computed and used. One practically important consideration concerns quantifying the statistical uncertainty in estimating a high quantile under nonstationarity.
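Under the definition above, the Design Life Level is an upper quantile of the maximum of the hydrologic variable over the design life. Assuming independent yearly maxima, the distribution of the maximum is the product of the yearly CDFs, so the level solves prod_t F_t(z) = 1 - p. A sketch with an illustrative nonstationary Gumbel model (the trend and scale parameters are invented, and the independence assumption is a simplification):

```python
import math

def gumbel_cdf(z, mu, beta):
    """CDF of the Gumbel distribution with location mu and scale beta."""
    return math.exp(-math.exp(-(z - mu) / beta))

def design_life_level(years, p, mu0, trend, beta):
    """Smallest z with P(max over the design life <= z) >= 1 - p,
    assuming independent Gumbel yearly maxima whose location
    parameter drifts linearly (illustrative nonstationary model)."""
    def prob_not_exceed(z):
        prod = 1.0
        for t in range(years):
            prod *= gumbel_cdf(z, mu0 + trend * t, beta)
        return prod
    lo, hi = mu0 - 10 * beta, mu0 + trend * years + 50 * beta
    for _ in range(200):               # bisection on the monotone CDF
        mid = 0.5 * (lo + hi)
        if prob_not_exceed(mid) < 1.0 - p:
            lo = mid
        else:
            hi = mid
    return hi

# 5% Design Life Level for a 50-year design life, e.g. 2015-2064
z = design_life_level(50, 0.05, mu0=10.0, trend=0.02, beta=1.0)
```

With trend = 0 and a one-year design life this reduces to the ordinary annual quantile, which gives a convenient stationarity check on the implementation.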

  9. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of an experimental design multivariate strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximization of the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data of the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug's anaesthetic effect with respect to the starting reference liposomal formulation, thus demonstrating its better therapeutic effectiveness.

  10. [Diversity and frequency of scientific research design and statistical methods in the "Arquivos Brasileiros de Oftalmologia": a systematic review of the "Arquivos Brasileiros de Oftalmologia"--1993-2002].

    PubMed

    Crosta, Fernando; Nishiwaki-Dantas, Maria Cristina; Silvino, Wilmar; Dantas, Paulo Elias Correa

    2005-01-01

    To verify the frequency of study designs, applied statistical analyses and approval by institutional review offices (Ethics Committees) of articles published in the "Arquivos Brasileiros de Oftalmologia" during a 10-year interval, with later comparative and critical analysis against some of the main international journals in the field of Ophthalmology. A systematic review without metanalysis was performed. Scientific papers published in the "Arquivos Brasileiros de Oftalmologia" between January 1993 and December 2002 were reviewed by two independent reviewers and classified according to the applied study design, statistical analysis and approval by institutional review offices. To categorize those variables, a descriptive statistical analysis was used. After applying inclusion and exclusion criteria, 584 articles were reviewed for evaluation of statistical analysis and 725 articles for evaluation of study design. The contingency table (23.10%) was the most frequently applied statistical method, followed by non-parametric tests (18.19%), Student's t test (12.65%), central tendency measures (10.60%) and analysis of variance (9.81%). Of the 584 reviewed articles, 291 (49.82%) presented no statistical analysis. Observational case series (26.48%) was the most frequently used type of study design, followed by interventional case series (18.48%), observational case description (13.37%), non-random clinical study (8.96%) and experimental study (8.55%). We found a higher frequency of observational clinical studies and a lack of statistical analysis in almost half of the published papers. An increase in studies with approval by an institutional review Ethics Committee was noted after it became mandatory in 1996.

  11. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon a good understanding of the rainfall process and characteristics. The gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. Formulae for the mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts, and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The derived formulae for the mean and variance of the sum of two and three independent mixed-gamma variables are tested using monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness-of-fit test, the results demonstrate that the distribution of the observed sums of rainfall amounts is not significantly different at the 5% significance level from that of the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
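The moment formulae used above follow from conditioning: if X is zero with probability 1 - p and Gamma(k, θ) otherwise, then E[X] = p k θ and Var(X) = p k θ²(1 + k) - (p k θ)², and for independent mixed-gamma variables the means and variances of a sum simply add. A Monte Carlo check (the parameter values are invented for illustration):

```python
import numpy as np

def mixed_gamma_sample(p, k, theta, size, rng):
    """Draw mixed-gamma values: zero with probability 1 - p,
    otherwise Gamma(shape=k, scale=theta)."""
    wet = rng.random(size) < p
    return np.where(wet, rng.gamma(k, theta, size), 0.0)

def mixed_gamma_moments(p, k, theta):
    """Analytic mean and variance of the mixed-gamma distribution."""
    mean = p * k * theta
    var = p * k * theta ** 2 * (1 + k) - mean ** 2
    return mean, var

rng = np.random.default_rng(0)
x = mixed_gamma_sample(0.6, 2.0, 3.0, 200_000, rng)
mean, var = mixed_gamma_moments(0.6, 2.0, 3.0)
```

The variance formula comes from E[X²] = p·E[G²] = p k θ²(1 + k) for G ~ Gamma(k, θ); for a sum of independent stations, the per-station means and variances are added directly.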

  12. Application of optimal control theory to the design of broadband excitation pulses for high-resolution NMR.

    PubMed

    Skinner, Thomas E; Reiss, Timo O; Luy, Burkhard; Khaneja, Navin; Glaser, Steffen J

    2003-07-01

    Optimal control theory is considered as a methodology for pulse sequence design in NMR. It provides the flexibility for systematically imposing desirable constraints on spin system evolution and therefore has a wealth of applications. We have chosen an elementary example to illustrate the capabilities of the optimal control formalism: broadband, constant phase excitation which tolerates miscalibration of RF power and variations in RF homogeneity relevant for standard high-resolution probes. The chosen design criteria were transformation of I(z)-->I(x) over resonance offsets of +/- 20 kHz and RF variability of +/-5%, with a pulse length of 2 ms. Simulations of the resulting pulse transform I(z)-->0.995I(x) over the target ranges in resonance offset and RF variability. Acceptably uniform excitation is obtained over a much larger range of RF variability (approximately 45%) than the strict design limits. The pulse performs well in simulations that include homonuclear and heteronuclear J-couplings. Experimental spectra obtained from 100% 13C-labeled lysine show only minimal coupling effects, in excellent agreement with the simulations. By increasing pulse power and reducing pulse length, we demonstrate experimental excitation of 1H over +/-32 kHz, with phase variations in the spectra <8 degrees and peak amplitudes >93% of maximum. Further improvements in broadband excitation by optimized pulses (BEBOP) may be possible by applying more sophisticated implementations of the optimal control formalism.
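    The pulse-design idea can be illustrated at toy scale: pick piecewise-constant controls that maximize Mx averaged over a band of resonance offsets. The sketch below is a deliberately small stand-in (ten time steps, arbitrary offset units, a crude finite-difference gradient ascent); the paper's BEBOP pulses use a far larger discretization and a proper optimal-control gradient.

```python
import numpy as np

def rotate(m, axis, angle):
    """Rodrigues rotation of a Bloch vector m about a unit axis."""
    return (m * np.cos(angle) + np.cross(axis, m) * np.sin(angle)
            + axis * np.dot(axis, m) * (1.0 - np.cos(angle)))

def mean_mx(u, offsets, dt):
    """Propagate M = (0,0,1) through a piecewise-constant pulse; mean Mx over offsets."""
    n = len(u) // 2
    ux, uy = u[:n], u[n:]
    total = 0.0
    for w in offsets:
        m = np.array([0.0, 0.0, 1.0])
        for k in range(n):
            vec = np.array([ux[k], uy[k], w])
            om = np.linalg.norm(vec)
            if om > 1e-12:
                m = rotate(m, vec / om, om * dt)
        total += m[0]
    return total / len(offsets)

rng = np.random.default_rng(1)
dt = 0.05
offsets = np.linspace(-2.0, 2.0, 5)   # toy offset range, not the paper's +/-20 kHz
u = rng.normal(0.0, 1.0, 20)          # stacked [ux, uy] control amplitudes

f0 = mean_mx(u, offsets, dt)
f = f0
for _ in range(30):
    eps = 1e-4
    grad = np.zeros_like(u)           # finite-difference gradient of mean Mx
    for i in range(len(u)):
        up = u.copy(); up[i] += eps
        dn = u.copy(); dn[i] -= eps
        grad[i] = (mean_mx(up, offsets, dt) - mean_mx(dn, offsets, dt)) / (2 * eps)
    step = 4.0
    while step > 1e-3:                # backtracking: accept only improvements
        trial = u + step * grad
        ft = mean_mx(trial, offsets, dt)
        if ft > f:
            u, f = trial, ft
            break
        step /= 2.0
print(f0, f)  # mean excitation before and after optimization
```

    Real optimal-control implementations (e.g., GRAPE-style methods) compute the gradient analytically from forward and backward propagations instead of by finite differences.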

  13. Panel flutter optimization by gradient projection

    NASA Technical Reports Server (NTRS)

    Pierson, B. L.

    1975-01-01

    A gradient projection optimal control algorithm incorporating conjugate gradient directions of search is described and applied to several minimum weight panel design problems subject to a flutter speed constraint. New numerical solutions are obtained for both simply-supported and clamped homogeneous panels of infinite span for various levels of in-plane loading and minimum thickness. The minimum thickness inequality constraint is enforced by a simple transformation of variables.
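    The transformation-of-variables device in the last sentence can be sketched as follows (the bound, objective and step size are illustrative assumptions): writing t = t_min + s^2 makes the bound hold for every real s, so an unconstrained method can be applied to s.

```python
T_MIN = 0.002   # minimum allowed thickness (illustrative units)

def thickness(s):
    """t = T_MIN + s**2 satisfies t >= T_MIN for every real s."""
    return T_MIN + s * s

def dthickness_ds(s):
    """Chain-rule factor for converting dJ/dt into dJ/ds."""
    return 2.0 * s

# Toy objective J(t) = (t - 0.001)**2, whose unconstrained minimum
# (t = 0.001) violates the bound; descent on s respects it automatically.
s = 0.1
for _ in range(1000):
    dJ_dt = 2.0 * (thickness(s) - 0.001)
    s -= 0.5 * dJ_dt * dthickness_ds(s)
print(thickness(s))   # approaches the bound T_MIN = 0.002 from above
```

    Because the bound is built into the parameterization, no projection or penalty is needed for that particular constraint.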

  14. Social Cognitive and Planned Behavior Variables Associated with Stages of Change for Physical Activity in Spinal Cord Injury: A Multivariate Analysis

    ERIC Educational Resources Information Center

    Keegan, John; Ditchman, Nicole; Dutta, Alo; Chiu, Chung-Yi; Muller, Veronica; Chan, Fong; Kundu, Madan

    2016-01-01

    Purpose: To apply the constructs of social cognitive theory (SCT) and the theory of planned behavior (TPB) to understand the stages of change (SOC) for physical activities among individuals with a spinal cord injury (SCI). Method: Ex post facto design using multivariate analysis of variance (MANOVA). The participants were 144 individuals with SCI…

  15. A Demonstration Sample for Poetry Education: Poem under the Light of "Poetics of the Open Work"

    ERIC Educational Resources Information Center

    Afacan, Aydin

    2016-01-01

    The aim of this study is to provide a demonstration sample for the high school stage in the light of "Poetics of the Open Work," which is considered a step towards comprehending the qualified poem. The study is built on a single-group pretest-posttest design. Independent variables are applied to a randomly selected group to…

  16. Software sensors for bioprocesses.

    PubMed

    Bogaerts, Ph; Vande Wouwer, A

    2003-10-01

    State estimation is a significant problem in biotechnological processes, due to the general lack of hardware sensor measurements of the variables describing the process dynamics. The objective of this paper is to review a number of software sensor design methods, including extended Kalman filters, receding-horizon observers, asymptotic observers, and hybrid observers, which can be efficiently applied to bioprocesses. These methods are illustrated with simulation and real-life case studies.
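    As a hedged sketch of the first method mentioned, the extended Kalman filter below jointly estimates biomass and an unmeasured specific growth rate from a noisy biomass measurement. The growth model, noise levels and tuning are illustrative assumptions, not the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 0.1, 100
mu_true = 0.3                      # "true" specific growth rate (assumed)

# Simulate the true biomass and noisy measurements (e.g., optical density).
x_true = 0.5 * np.exp(mu_true * dt * np.arange(n_steps))
y = x_true + rng.normal(0, 0.05, n_steps)

# EKF state: [biomass X, growth rate mu]; only X is measured.
s = np.array([0.4, 0.05])          # deliberately poor initial mu guess
P = np.diag([0.1, 0.1])
Q = np.diag([1e-6, 1e-5])          # process noise covariance
R = 0.05**2                        # measurement noise variance
H = np.array([[1.0, 0.0]])

for k in range(n_steps):
    # Predict: X grows exponentially at the current mu estimate.
    X, mu = s
    g = np.exp(mu * dt)
    s = np.array([X * g, mu])
    F = np.array([[g, X * dt * g],  # Jacobian of the state transition
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Update with the biomass measurement.
    K = P @ H.T / (H @ P @ H.T + R)
    s = s + (K * (y[k] - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(s)  # estimated [biomass, growth rate]
```

    This is the "software sensor" pattern: an unmeasured quantity (here the growth rate) is inferred from the model and whatever hardware measurements exist.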

  17. The "Health Belief Model" Applied to Two Preventive Health Behaviors Among Women from a Rural Pennsylvania County. AE & RS 115.

    ERIC Educational Resources Information Center

    Hazen, Mary E.

    In order to test the usefulness of the Health Belief Model (a model designed to measure health practices, attitudes, and knowledge), a survey of Potter County, Pennsylvania was conducted, and 283 responses from adult females without chronic illnesses were analyzed. The dependent variables employed were regulating diet and getting regular exercise.…

  18. Variable Structure Control of a Hand-Launched Glider

    NASA Technical Reports Server (NTRS)

    Anderson, Mark R.; Waszak, Martin R.

    2005-01-01

    Variable structure control system design methods are applied to the problem of aircraft spin recovery. A variable structure control law typically has two phases of operation. The reaching mode phase uses a nonlinear relay control strategy to drive the system trajectory to a pre-defined switching surface within the motion state space. The sliding mode phase involves motion along the surface as the system moves toward an equilibrium or critical point. Analysis results presented in this paper reveal that the conventional method for spin recovery can be interpreted as a variable structure controller with a switching surface defined at zero yaw rate. Application of Lyapunov stability methods shows that deflecting the ailerons in the direction of the spin helps to ensure that this switching surface is stable. Flight test results, obtained using an instrumented hand-launched glider, are used to verify stability of the reaching mode dynamics.
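    The reaching-mode relay strategy can be sketched on a one-state toy model (the dynamics and gains below are assumptions, not the glider's): a relay control u = -U sign(s) drives the yaw rate to the switching surface s = r = 0, after which the trajectory chatters in a small band about it.

```python
import numpy as np

# Toy spin dynamics r' = a*r + b*u with relay (reaching-mode) control.
# For V = s**2/2, V' = a*s**2 - b*U*|s| < 0 whenever |s| < b*U/a,
# so the surface s = r = 0 is attractive in that region.
a, b, U = 0.5, 1.0, 2.0
dt = 0.001
r = 1.0                      # initial yaw rate (illustrative units)
history = []
for _ in range(5000):
    s_surf = r               # switching surface: zero yaw rate
    u = -U * np.sign(s_surf)
    r += dt * (a * r + b * u)
    history.append(r)
print(history[-1])           # chatters in a small band around r = 0
```

    The chattering amplitude scales with the integration step; in practice a boundary layer or smoothed switching function is often used to suppress it.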

  19. Multiple regression for physiological data analysis: the problem of multicollinearity.

    PubMed

    Slinker, B K; Glantz, S A

    1985-07-01

    Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
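    The effect described, inflated standard errors under nearly collinear predictors, is easy to reproduce on synthetic data (the model and noise levels below are invented for illustration; the VIF formula shown is the standard two-predictor case):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 0.01, n)       # nearly collinear predictor
y = 1.0 * x1 + 1.0 * x2 + rng.normal(0, 1, n)

def ols_se(X, y):
    """OLS coefficients and their standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, np.sqrt(sigma2 * np.diag(XtX_inv))

X = np.column_stack([np.ones(n), x1, x2])
beta, se = ols_se(X, y)

# Variance inflation factor for x1: 1 / (1 - R^2 of x1 regressed on x2).
r = np.corrcoef(x1, x2)[0, 1]
vif = 1.0 / (1.0 - r**2)
print(se[1:], vif)   # huge standard errors and VIF despite a simple true model
```

    With a single predictor the standard error here would be roughly 0.07; multicollinearity inflates it by two orders of magnitude, so the fitted coefficients can have the wrong magnitude or sign while the overall fit remains good.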

  20. Topometry optimization of sheet metal structures for crashworthiness design using hybrid cellular automata

    NASA Astrophysics Data System (ADS)

    Mozumder, Chandan K.

    The objective in crashworthiness design is to generate plastically deformable energy absorbing structures which can satisfy the prescribed force-displacement (FD) response. The FD behavior determines the reaction force, displacement and the internal energy that the structure should withstand. However, attempts to include this requirement in structural optimization problems remain scarce. The existing commercial optimization tools utilize models under static loading conditions because of the complexities associated with dynamic/impact loading. Due to the complexity of a crash event and the consequent time required to numerically analyze the dynamic response of the structure, classical methods (i.e., gradient-based and direct) are not well developed to solve this undertaking. This work presents an approach under the framework of the hybrid cellular automaton (HCA) method to solve the above challenge. The HCA method has been successfully applied to nonlinear transient topology optimization for crashworthiness design. In this work, the HCA algorithm has been utilized to develop an efficient methodology for synthesizing shell-based sheet metal structures with optimal material thickness distribution under a dynamic loading event using topometry optimization. This method utilizes the cellular automata (CA) computing paradigm and nonlinear transient finite element analysis (FEA) via ls-dyna. In this method, a set of field variables is driven to their target states by changing a convenient set of design variables (e.g., thickness). These rules operate locally on cells within a lattice, each of which knows only local conditions. The field variables associated with the cells are driven to a setpoint to obtain the desired structure. This methodology is used to design structures with controlled energy absorption and specified buckling zones.
The peak reaction force and the maximum displacement are also constrained to meet the desired safety level according to passenger safety regulations. Design for prescribed FD response by minimizing the error between the actual response and the desired FD curve is implemented. With the use of HCA rules, manufacturability constraints (e.g., rolling) and structures which can be manufactured by special techniques, such as tailor-welded blanks (TWB), have also been implemented. This methodology is applied to shock-absorbing structural components for passengers in a crashing vehicle. These results are compared to previous designs showing the benefits of the method introduced in this work.
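    A minimal sketch of an HCA-flavored local resizing rule: each cell adjusts its thickness until its local field variable reaches a common setpoint. The "field variable" and demand model below are fictitious stand-ins; the real method obtains the field (e.g., internal energy density) from nonlinear transient FEA and also exchanges information with lattice neighbors.

```python
import numpy as np

n_cells = 20
load = np.linspace(1.0, 3.0, n_cells)   # fictitious fixed local demand
t = np.full(n_cells, 1.0)               # design variable: cell thickness

def field(t):
    """Fictitious local field variable: demand carried per unit thickness."""
    return load / t

setpoint = 2.0
for _ in range(40):
    # Damped multiplicative resizing toward the setpoint; exponent 0.5
    # avoids overshoot. Bounds play the role of manufacturing limits.
    t = np.clip(t * (field(t) / setpoint) ** 0.5, 0.1, 5.0)
print(field(t))   # every cell's field variable is driven to the setpoint
```

    The appeal of such rules is that they need no global gradient information, which is exactly what is hard to obtain from a noisy, expensive crash simulation.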

  1. An analysis of the loads applied to a heavy Space Station rack during translation and rotation tasks

    NASA Technical Reports Server (NTRS)

    Stoycos, Lara E.; Klute, Glenn K.

    1994-01-01

    To prepare for Space Station Alpha's on-orbit assembly, maintenance, and resupply, NASA requires information about the crew members' ability to move heavy masses on orbit. Ease of movement in microgravity and orbiter stay time constraints may change the Space Station equipment and outfitting design requirements. Therefore, the time and effort required to perform a particular task and how and where the forces and torque should be applied become critical in evaluating the design effort. Thus, the three main objectives of this investigation were to: (1) quantify variables such as force and torque as they relate to heavy mass handling techniques; (2) predict the time required to perform heavy mass handling tasks; and (3) note any differences between males and females in their ability to manipulate a heavy mass.

  2. [An Introduction to Methods for Evaluating Health Care Technology].

    PubMed

    Lee, Ting-Ting

    2015-06-01

    The rapid and continual advance of healthcare technology makes it critical to ensure that this technology is used effectively to achieve its original goals. This paper presents three methods that may be applied by healthcare professionals in the evaluation of healthcare technology. These methods include: the perception/experiences of users, user work-pattern changes, and chart review or data mining. The first method includes two categories: using interviews to explore the user experience and using theory-based questionnaire surveys. The second method applies work sampling to observe the work pattern changes of users. The last method conducts chart reviews or data mining to analyze the designated variables. In conclusion, while evaluative feedback may be used to improve the design and development of healthcare technology applications, the informatics competency and informatics literacy of users may be further explored in future research.

  3. Degradation of ticarcillin by subcritical water oxidation method: Application of response surface methodology and artificial neural network modeling.

    PubMed

    Yabalak, Erdal

    2018-05-18

    This study was performed to investigate the mineralization of ticarcillin in an artificially prepared aqueous solution representing ticarcillin-contaminated waters, which constitute a serious problem for human health. 81.99% of total organic carbon removal, 79.65% of chemical oxygen demand removal, and 94.35% of ticarcillin removal were achieved by using the eco-friendly, time-saving, powerful and easy-to-apply subcritical water oxidation method in the presence of a safe-to-use oxidizing agent, hydrogen peroxide. Central composite design, which belongs to the response surface methodology, was applied to design the degradation experiments, to optimize the method, and to evaluate the effects of the system variables, namely, temperature, hydrogen peroxide concentration, and treatment time, on the responses. In addition, theoretical equations were proposed for each removal process. ANOVA tests were utilized to evaluate the reliability of the performed models. F values of 245.79, 88.74, and 48.22 were found for total organic carbon removal, chemical oxygen demand removal, and ticarcillin removal, respectively. Moreover, artificial neural network modeling was applied to estimate the response in each case, and its prediction and optimization performance was statistically examined and compared to that of the central composite design.
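    The coded runs of a central composite design for three factors can be generated directly. In the sketch below, the rotatable axial distance and the six centre replicates are conventional choices assumed for illustration, not taken from the paper:

```python
from itertools import product

import numpy as np

def central_composite(k, alpha=None, n_center=1):
    """Coded points of a CCD for k factors: a 2^k factorial cube,
    2k axial (star) points at +/-alpha, and replicated centre points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable design
    cube = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([cube, axial, center])

# Three factors as in the study: temperature, H2O2 concentration, time.
pts = central_composite(3, n_center=6)
print(len(pts))   # 8 cube + 6 axial + 6 centre = 20 runs
```

    Each coded level is then mapped linearly onto the physical range of its factor before running the experiments, and a quadratic response surface is fitted to the results.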

  4. A journal bearing with variable geometry for the suppression of vibrations in rotating shafts: Simulation, design, construction and experiment

    NASA Astrophysics Data System (ADS)

    Chasalevris, Athanasios; Dohnal, Fadi

    2015-02-01

    The idea for a journal bearing with variable geometry was formerly developed and investigated on its principles of operation giving very optimistic theoretical results for the vibration quenching of simple and more complicated rotor bearing systems during the passage through the first critical speed. The journal bearing with variable geometry is presented in this paper in its final form with the detailed design procedure. The current journal bearing was constructed in order to be applied in a simple real rotor bearing system that already exists as an experimental facility. The current paper presents details on the manufactured prototype bearing as an experimental continuation of previous works that presented the simulation of the operating principle of this journal bearing. The design parameters are discussed thoroughly under the numerical simulation for the fluid film pressure in dependency of the variable fluid film thickness during the operation conditions. The implementation of the variable geometry bearing in an experimental rotor bearing system is outlined. Various measurements highlight the efficiency of the proposed bearing element in vibration quenching during the passage through resonance. The inspiration for the current idea is based on the fact that the alteration of the fluid film characteristics of stiffness and damping during the passage through resonance results in vibration quenching. This alteration of the bearing characteristics is achieved by the introduction of an additional fluid film thickness using the passive displacement of the lower half-bearing part.

  5. Sobol‧ sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol‧ sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol‧ method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R^2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol‧ sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by rates of injection at wells 1 and 3, while rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol‧ sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individuals and interactions) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
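    A Saltelli-style estimator of first-order Sobol‧ indices can be sketched on a cheap stand-in surrogate. The linear "surrogate" and its coefficients below are invented to mirror the reported ranking (duration dominant, two injection rates secondary, the rest negligible), not the paper's Kriging or RBFANN models:

```python
import numpy as np

rng = np.random.default_rng(4)

# Fictitious surrogate of remediation efficiency over six inputs:
# [duration, well 1, well 2, well 3, well 4, surfactant concentration].
coef = np.array([4.0, 1.5, 0.1, 1.5, 0.1, 0.1])

def surrogate(x):
    return x @ coef

d, N = len(coef), 20_000
A = rng.random((N, d))
B = rng.random((N, d))
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]))

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # A with column i taken from B
    # Saltelli (2010) first-order estimator.
    S[i] = np.mean(fB * (surrogate(ABi) - fA)) / var
print(np.round(S, 3))            # input 0 (duration) clearly dominates
```

    The whole analysis costs N(d + 2) surrogate evaluations, which is exactly why a fast surrogate is substituted for the expensive multi-phase flow simulator.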

  6. Incorporation of expert variability into breast cancer treatment recommendation in designing clinical protocol guided fuzzy rule system models.

    PubMed

    Garibaldi, Jonathan M; Zhou, Shang-Ming; Wang, Xiao-Ying; John, Robert I; Ellis, Ian O

    2012-06-01

    It has been often demonstrated that clinicians exhibit both inter-expert and intra-expert variability when making difficult decisions. In contrast, the vast majority of computerized models that aim to provide automated support for such decisions do not explicitly recognize or replicate this variability. Furthermore, the perfect consistency of computerized models is often presented as a de facto benefit. In this paper, we describe a novel approach to incorporate variability within a fuzzy inference system using non-stationary fuzzy sets in order to replicate human variability. We apply our approach to a decision problem concerning the recommendation of post-operative breast cancer treatment; specifically, whether or not to administer chemotherapy based on assessment of five clinical variables: NPI (the Nottingham Prognostic Index), estrogen receptor status, vascular invasion, age and lymph node status. In doing so, we explore whether such explicit modeling of variability provides any performance advantage over a more conventional fuzzy approach, when tested on a set of 1310 unselected cases collected over a fourteen-year period at the Nottingham University Hospitals NHS Trust, UK. The experimental results show that the standard fuzzy inference system (that does not model variability) achieves overall agreement to clinical practice around 84.6% (95% CI: 84.1-84.9%), while the non-stationary fuzzy model can significantly increase performance to around 88.1% (95% CI: 88.0-88.2%), p<0.001. We conclude that non-stationary fuzzy models provide a valuable new approach that may be applied to clinical decision support systems in any application domain. Copyright © 2012 Elsevier Inc. All rights reserved.
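    A non-stationary fuzzy set can be sketched as an ordinary membership function whose center is perturbed on every evaluation, mimicking intra-expert variability in where a boundary is drawn. The set, values and perturbation size below are invented for illustration, not the paper's rule base:

```python
import numpy as np

rng = np.random.default_rng(6)

def membership(x, center, sigma):
    """Standard Gaussian fuzzy membership function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def nonstationary_membership(x, center, sigma, wobble, rng):
    """Non-stationary variant: the center moves on each evaluation."""
    return membership(x, center + rng.normal(0.0, wobble), sigma)

# E.g., membership of NPI = 4.2 in a hypothetical 'high risk' set.
x, center, sigma = 4.2, 5.0, 1.0
fixed = membership(x, center, sigma)
varied = [nonstationary_membership(x, center, sigma, 0.2, rng)
          for _ in range(1000)]
print(fixed, np.mean(varied), np.std(varied))  # repeated queries now vary
```

    Feeding such perturbed memberships through the inference system makes the model's output a distribution rather than a single value, which is the behavior being matched to clinicians.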

  7. Quality-assurance design applied to an assessment of agricultural pesticides in ground water from carbonate bedrock aquifers in the Great Valley of eastern Pennsylvania

    USGS Publications Warehouse

    Breen, Kevin J.

    2000-01-01

    Assessments to determine whether agricultural pesticides are present in ground water are performed by the Commonwealth of Pennsylvania under the aquifer monitoring provisions of the State Pesticides and Ground Water Strategy. Pennsylvania's Department of Agriculture conducts the monitoring and collects samples; the Department of Environmental Protection (PaDEP) Laboratory analyzes the samples to measure pesticide concentration. To evaluate the quality of the measurements of pesticide concentration for a groundwater assessment, a quality-assurance design was developed and applied to a selected assessment area in Pennsylvania. This report describes the quality-assurance design, describes how and where the design was applied, describes procedures used to collect and analyze samples and to evaluate the results, and summarizes the quality-assurance results along with the assessment results. The design was applied in an agricultural area of the Delaware River Basin in Berks, Lebanon, Lehigh, and Northampton Counties to evaluate the bias and variability in laboratory results for pesticides. The design, with random spatial and temporal components, included four data-quality objectives for bias and variability. The spatial design was primary and represented an area comprising 30 sampling cells. A quality-assurance sampling frequency of 20 percent of cells was selected to ensure a sample number of five or more for analysis. Quality-control samples included blanks, spikes, and replicates of laboratory water and spikes, replicates, and 2-lab splits of groundwater. Two analytical laboratories, the PaDEP Laboratory and a U.S. Geological Survey Laboratory, were part of the design.
Bias and variability were evaluated by use of data collected from October 1997 through January 1998 for alachlor, atrazine, cyanazine, metolachlor, simazine, pendimethalin, metribuzin, and chlorpyrifos. Results of analyses of field blanks indicate that collection, processing, transport, and laboratory analysis procedures did not contaminate the samples; there were no false-positive results. Pesticides were detected in water when pesticides were spiked into (added to) samples. There were no false negatives for the eight pesticides in all spiked samples. Negative bias was characteristic of analytical results for the eight pesticides, and bias was generally in excess of 10 percent from the ‘true’ or expected concentration (34 of 39 analyses, or 87 percent of the ground-water results) for pesticide concentrations ranging from 0.31 to 0.51 µg/L (micrograms per liter). The magnitude of the negative bias for the eight pesticides, with the exception of cyanazine, would result in reported concentrations commonly 75-80 percent of the expected concentration in the water sample. The bias for cyanazine was negative and within 10 percent of the expected concentration. A comparison of spiked pesticide-concentration recoveries in laboratory water and ground water indicated no effect of the ground-water matrix, and matrix interference was not a source of the negative bias. Results for the laboratory-water spikes submitted in triplicate showed large variability for recoveries of atrazine, cyanazine, and pendimethalin. The relative standard deviation (RSD) was used as a measure of method variability over the course of the study for laboratory waters at a concentration of 0.4 µg/L. An RSD of about 11 percent (or about ±0.05 µg/L) characterizes the method results for alachlor, chlorpyrifos, metolachlor, metribuzin, and simazine. Atrazine and pendimethalin have RSD values of about 17 and 23 percent, respectively. Cyanazine showed the largest RSD at nearly 51 percent. 
The pesticides with low variability in laboratory-water spikes also had low variability in ground water. The assessment results showed that atrazine was the most commonly detected pesticide in ground water in the assessment area. Atrazine was detected in water from 22 of the 28 wells sampled, and recovery results for atrazine were some of the worst (largest negative bias). Concentrations of the eight pesticides in ground water from wells were generally less than 0.3 µg/L. Only six individual measurements of the concentrations in water from six of the wells were at or above 0.3 µg/L, five for atrazine and one for metolachlor. There were eight additional detections of metolachlor and simazine at concentrations less than 0.1 µg/L. No well water contained more than one pesticide at concentrations at or above 0.3 µg/L. Evidence exists, however, for a pattern of co-occurrence of metolachlor and simazine at low concentrations with higher concentrations of atrazine. Large variability in replicate samples and negative bias for pesticide recovery from spiked samples indicate the need to use data for pesticide recovery in the interpretation of measured pesticide concentrations in ground water. Data from samples spiked with known amounts of pesticides were a critical component of a quality-assurance design for the monitoring component of the Pesticides and Ground Water Strategy. Trigger concentrations, the concentrations that require action under the Pesticides and Ground Water Strategy, should be considered maximums for action. This consideration is needed because of the magnitude of the negative bias.

  8. Designs of goal-free problems for trigonometry learning

    NASA Astrophysics Data System (ADS)

    Retnowati, E.; Maulidya, S. R.

    2018-03-01

    This paper describes the designs of goal-free problems particularly for trigonometry, which may be considered a difficult topic for high school students. Goal-free problem is an instructional design developed based on cognitive load theory (CLT). Within the design, instead of asking students to solve for a specific goal of a mathematics problem, the instruction is to apply the Pythagorean relation to find as many unknown values as possible. It was assumed that for novice students, goal-free problems encourage students to pay attention more to the given information and the mathematical principles that can be applied to reveal the unknown variables. Hence, students develop more structured knowledge while solving the goal-free problems. The resulting design may be used in regular mathematics classrooms with some adjustment of the difficulty level and the allocated lesson time.

  9. Numerical studies of the thermal design sensitivity calculation for a reaction-diffusion system with discontinuous derivatives

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeen S.

    1987-01-01

    The aim of this study is to find a reliable numerical algorithm to calculate thermal design sensitivities of a transient problem with discontinuous derivatives. The thermal system of interest is a transient heat conduction problem related to the curing process of a composite laminate. A logical function that can smoothly approximate the discontinuity is introduced to modify the system equation. Two commonly used methods, the adjoint variable method and the direct differentiation method, are then applied to find the design derivatives of the modified system. The comparisons of numerical results obtained by these two methods demonstrate that the direct differentiation method is the better choice for calculating thermal design sensitivity.
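    The smoothing idea, replacing a step with a logistic "logical function" so that derivatives exist everywhere, can be sketched as follows (the jump location, smoothing width and property values are assumptions for illustration):

```python
import numpy as np

T_c, width = 400.0, 2.0     # jump location and smoothing width (illustrative)
k1, k2 = 1.0, 5.0           # property value before/after the jump

def step_property(T):
    """Discontinuous property: derivative undefined at T = T_c."""
    return np.where(T < T_c, k1, k2)

def smooth_property(T):
    """Logistic approximation: differentiable everywhere."""
    s = 1.0 / (1.0 + np.exp(-(T - T_c) / width))
    return k1 + (k2 - k1) * s

def smooth_property_dT(T):
    """Analytical derivative, usable by direct differentiation or adjoints."""
    s = 1.0 / (1.0 + np.exp(-(T - T_c) / width))
    return (k2 - k1) * s * (1.0 - s) / width

T = np.array([380.0, 400.0, 420.0])
print(smooth_property(T), smooth_property_dT(T))
```

    Far from the jump the smooth version matches the step to high accuracy, while near it the derivative is finite and well defined, which is what the sensitivity calculation requires.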

  10. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers are repeated observations of a physical process, or time series data. These data lend themselves to a common set of statistical techniques and models designed to determine trends and variability (e.g., seasonality) of these repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U. S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g. interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
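    The two pre-processing steps named, interpolation and aggregation, can be sketched with plain NumPy (a synthetic gappy series; the Toolbox's actual templates and methods may differ):

```python
import numpy as np

# A short series with gaps (NaNs), e.g., a stage or flow record.
t = np.arange(12, dtype=float)
y = np.array([2.0, np.nan, 2.4, 2.6, np.nan, np.nan,
              3.2, 3.0, 2.8, np.nan, 2.4, 2.2])

# Step 1: fill gaps by linear interpolation over the valid points.
mask = ~np.isnan(y)
y_filled = np.interp(t, t[mask], y[mask])

# Step 2: aggregate to a coarser time step (here: means of 3 values).
y_agg = y_filled.reshape(-1, 3).mean(axis=1)
print(y_filled, y_agg)
```

    After these steps the series is regular and gap-free, which is what trend and seasonality models typically assume.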

  11. Comparison Between Two Methods for Estimating the Vertical Scale of Fluctuation for Modeling Random Geotechnical Problems

    NASA Astrophysics Data System (ADS)

    Pieczyńska-Kozłowska, Joanna M.

    2015-12-01

    The design process in geotechnical engineering requires the most accurate mapping of soil. The difficulty lies in the spatial variability of soil parameters, which has been the subject of investigation by many researchers for many years. This study analyses the soil-modeling problem by suggesting two effective methods of acquiring information on soil variability from cone penetration tests (CPT). The first method has been used in geotechnical engineering, but the second one has not been associated with geotechnics so far. Both methods are applied to a case study in which the variability parameters are estimated. Knowledge of the variability of parameters ultimately allows more effective estimation of, for example, the probability of bearing-capacity failure.
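    One common way to estimate the vertical scale of fluctuation fits a Markov correlation model to the autocorrelation of the detrended profile. A sketch on synthetic CPT-like data (the true scale, sampling interval and Markov model are assumptions, not the paper's methods):

```python
import numpy as np

rng = np.random.default_rng(5)
dz, n = 0.02, 5000                 # 2 cm sampling interval, 100 m of profile
delta_true = 0.5                   # vertical scale of fluctuation (m), assumed

# Synthetic detrended cone-resistance fluctuations with Markov correlation
# rho(tau) = exp(-2*tau/delta), generated as an AR(1) process.
a = np.exp(-2.0 * dz / delta_true)
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = a * x[i - 1] + np.sqrt(1 - a**2) * rng.normal()

# Estimate delta from the lag-1 sample autocorrelation of the profile.
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]
delta_hat = -2.0 * dz / np.log(rho1)
print(delta_hat)                   # should be close to delta_true = 0.5 m
```

    In practice the fit would use several lags and the profile would first be detrended, since any residual trend inflates the apparent scale of fluctuation.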

  12. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
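    Reconciliation efficiency is conventionally defined as the ratio of the code rate actually achieved to the capacity of the Gaussian channel. A minimal computation (the rate value below is chosen to illustrate 95% efficiency at SNR = 1, not taken from the paper):

```python
import math

def awgn_capacity(snr):
    """Shannon capacity of the AWGN channel in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def reconciliation_efficiency(rate, snr):
    """beta = R / C: fraction of the channel capacity actually extracted."""
    return rate / awgn_capacity(snr)

snr = 1.0
C = awgn_capacity(snr)             # 0.5 bit per channel use at SNR = 1
print(C, reconciliation_efficiency(0.475, snr))  # 0.95 when R = 0.475
```

    Because the secret key rate shrinks with 1 - beta, even a few percent of extra efficiency at low SNR translates into a substantial key-rate gain.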

  13. SEAWAT: A Computer Program for Simulation of Variable-Density Groundwater Flow and Multi-Species Solute and Heat Transport

    USGS Publications Warehouse

    Langevin, Christian D.

    2009-01-01

    SEAWAT is a MODFLOW-based computer program designed to simulate variable-density groundwater flow coupled with multi-species solute and heat transport. The program has been used for a wide variety of groundwater studies including saltwater intrusion in coastal aquifers, aquifer storage and recovery in brackish limestone aquifers, and brine migration within continental aquifers. SEAWAT is relatively easy to apply because it uses the familiar MODFLOW structure. Thus, most commonly used pre- and post-processors can be used to create datasets and visualize results. SEAWAT is a public domain computer program distributed free of charge by the U.S. Geological Survey.

  14. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and because it is more suitable in designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which heavily relies on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, different fidelity models are simultaneously employed in order to improve the speed and the accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising methods is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimization using the high-fidelity model alone.
However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization step in the AMF. In order to solve the first problem, the automatic differentiation (AD) technique, which reads the code of an analysis model and automatically generates new derivative code based on mathematical rules, is applied. Derivatives computed with the generated derivative code are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations, such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires massive amounts of memory. The author addressed this deficiency by modifying the AD approach and developing a more efficient implementation for CFD, and successfully applied AD to general CFD software. In order to solve the second problem, the governing equation of the trust region ratio, which is very strict against violation of constraints, is modified so that it accepts violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. 
With these modifications, the AMF is referred to as the "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite difference (FD) method; then, the Robust AMF is implemented alongside the sequential quadratic programming (SQP) optimization method using only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes the shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing the computational time for calculating derivatives and the need for an AMF that keeps the optimum design point always in the feasible region are discussed as future work.
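
The contrast the abstract draws between analytical AD derivatives and finite differences can be illustrated with a minimal forward-mode sketch using dual numbers. The `Dual` class and the toy objective below are illustrative assumptions, not the dissertation's CFD implementation:

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    """sin that works on plain floats and Dual numbers alike."""
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)
    return math.sin(x)

def f(x):                      # toy stand-in for an analysis model
    return x * x + sin(x)

x0 = 1.3
ad = f(Dual(x0, 1.0)).der      # analytical derivative: 2*x0 + cos(x0)
h = 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central finite difference
```

The AD result is exact to machine precision, while the FD result carries truncation and cancellation error and requires an extra model evaluation per design variable.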

  15. A new mathematical approach for the estimation of the AUC and its variability under different experimental designs in preclinical studies.

    PubMed

    Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán

    2012-01-01

    The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied under different preclinical experimental designs and implemented in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results show no statistical differences among the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of the 95% confidence interval. The newly proposed method thus proved to be as useful as the WinNonlin® software in the cases where the latter was applicable. Copyright © 2011 John Wiley & Sons, Ltd.
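
As context for the abstract, the standard noncompartmental AUC estimate that such methods build on is the linear trapezoidal rule, which is easy to implement in a calculation worksheet. The sketch below (with a made-up concentration profile) shows that baseline, not the paper's new estimator:

```python
def auc_trapezoid(times, concs):
    """Linear trapezoidal AUC over sampled time/concentration pairs."""
    return sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2.0
               for i in range(len(times) - 1))

# Made-up plasma concentration profile: times in h, concentrations in mg/L
t = [0, 1, 2, 4, 8]
c = [0.0, 10.0, 8.0, 4.0, 1.0]
auc = auc_trapezoid(t, c)   # 5 + 9 + 12 + 10 = 36 mg·h/L
```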

  16. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
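
The Monte Carlo power strategy described here can be sketched for the simplest single-mediator case: simulate many data sets under assumed path coefficients and count how often the effect is detected. The coefficients, sample size, and the joint-significance test below are illustrative assumptions (the paper's worked examples use Mplus):

```python
import random

def ols_slope_se(x, y):
    """Slope and its standard error from simple least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    ssr = sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))
    return b, (ssr / (n - 2) / sxx) ** 0.5

def mc_power(a=0.39, b=0.39, n=100, reps=500, z=1.96, seed=1):
    """Estimated power: fraction of simulated samples where both the
    X->M path and the M->Y path are significant (joint significance)."""
    random.seed(seed)
    hits = 0
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        m = [a * xi + random.gauss(0, 1) for xi in x]
        y = [b * mi + random.gauss(0, 1) for mi in m]
        ahat, se_a = ols_slope_se(x, m)
        bhat, se_b = ols_slope_se(m, y)
        if abs(ahat / se_a) > z and abs(bhat / se_b) > z:
            hits += 1
    return hits / reps

power = mc_power()
```

A full mediation analysis would regress Y on both M and X; the simple regressions here are a simplification that suffices for this no-direct-effect toy model.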

  17. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, as the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual spin satellite, is designed in terms of the slow equation.
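
The two-timing machinery referred to here can be sketched generically (this is the standard multiple-scales setup, not the paper's specific attitude equations):

```latex
t_0 = t \ (\text{fast}), \qquad t_1 = \varepsilon t \ (\text{slow}), \qquad
\frac{d}{dt} \;\to\; \frac{\partial}{\partial t_0}
  + \varepsilon\,\frac{\partial}{\partial t_1},
\\[4pt]
x(t;\varepsilon) = x_0(t_0, t_1) + \varepsilon\, x_1(t_0, t_1)
  + O(\varepsilon^{2}).
```

Substituting the expansion and collecting powers of ε yields the fast dynamics at O(1) and, after removing secular terms at O(ε), a "slow equation" in t1 — the equation in which the nutation damping loop above is designed.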

  18. Aircraft Conceptual Design and Risk Analysis Using Physics-Based Noise Prediction

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.; Mavris, Dimitri N.

    2006-01-01

    An approach was developed which allows for design studies of commercial aircraft using physics-based noise analysis methods while retaining the ability to perform the rapid trade-off and risk analysis studies needed at the conceptual design stage. A prototype integrated analysis process was created for computing the total aircraft EPNL at the Federal Aviation Regulations Part 36 certification measurement locations using physics-based methods for fan rotor-stator interaction tones and jet mixing noise. The methodology was then used in combination with design of experiments to create response surface equations (RSEs) for the engine and aircraft performance metrics, geometric constraints and take-off and landing noise levels. In addition, Monte Carlo analysis was used to assess the expected variability of the metrics under the influence of uncertainty, and to determine how the variability is affected by the choice of engine cycle. Finally, the RSEs were used to conduct a series of proof-of-concept conceptual-level design studies demonstrating the utility of the approach. The study found that a key advantage to using physics-based analysis during conceptual design lies in the ability to assess the benefits of new technologies as a function of the design to which they are applied. The greatest difficulty in implementing physics-based analysis proved to be the generation of design geometry at a sufficient level of detail for high-fidelity analysis.
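
The response-surface-plus-Monte-Carlo step can be sketched as follows; the quadratic RSE coefficients and the input uncertainty magnitudes below are illustrative assumptions, not values from the study:

```python
import random

def epnl_rse(x1, x2):
    """Hypothetical quadratic response surface equation (RSE) for a noise
    metric (EPNdB) in two normalized design variables; coefficients are
    illustrative only."""
    return (270.0 + 3.0 * x1 - 4.0 * x2
            + 1.5 * x1 * x1 + 0.8 * x2 * x2 - 0.6 * x1 * x2)

def mc_quantiles(reps=20000, sd=0.2, seed=0):
    """Propagate design-variable uncertainty through the RSE and report the
    median and 95th-percentile response."""
    random.seed(seed)
    samples = sorted(epnl_rse(random.gauss(0.0, sd), random.gauss(0.0, sd))
                     for _ in range(reps))
    return samples[reps // 2], samples[int(0.95 * reps)]

median, p95 = mc_quantiles()
```

Because the RSE is cheap to evaluate, tens of thousands of samples cost almost nothing, which is the point of replacing the physics-based analyses with response surfaces before running the Monte Carlo study.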

  19. Multichannel microformulators for massively parallel machine learning and automated design of biological experiments

    NASA Astrophysics Data System (ADS)

    Wikswo, John; Kolli, Aditya; Shankaran, Harish; Wagoner, Matthew; Mettetal, Jerome; Reiserer, Ronald; Gerken, Gregory; Britt, Clayton; Schaffer, David

    Genetic, proteomic, and metabolic networks describing biological signaling can have 10^2 to 10^3 nodes. Transcriptomics and mass spectrometry can quantify 10^4 different dynamical experimental variables recorded from in vitro experiments with a time resolution approaching 1 s. It is difficult to infer metabolic and signaling models from such massive data sets, and it is unlikely that causality can be determined simply from observed temporal correlations. There is a need to design and apply specific system perturbations, which will be difficult to perform manually with 10 to 10^2 externally controlled variables. Machine learning and optimal experimental design can select an experiment that best discriminates between multiple conflicting models, but a remaining problem is to control in real time multiple variables in the form of concentrations of growth factors, toxins, nutrients and other signaling molecules. With time-division multiplexing, a microfluidic MicroFormulator (μF) can create in real time complex mixtures of reagents in volumes suitable for biological experiments. Initial 96-channel μF implementations control the exposure profile of cells in a 96-well plate to different temporal profiles of drugs; future experiments will include challenge compounds. Funded in part by AstraZeneca, NIH/NCATS HHSN271201600009C and UH3TR000491, and VIIBRE.

  20. Bioreactor process parameter screening utilizing a Plackett-Burman design for a model monoclonal antibody.

    PubMed

    Agarabi, Cyrus D; Schiel, John E; Lute, Scott C; Chavez, Brittany K; Boyne, Michael T; Brorson, Kurt A; Khan, Mansoora; Read, Erik K

    2015-06-01

    Consistent high-quality antibody yield is a key goal for cell culture bioprocessing. This endpoint is typically achieved in commercial settings through product and process engineering of bioreactor parameters during development. When the process is complex and not optimized, small changes in composition and control may yield a finished product of less desirable quality. Therefore, changes proposed to currently validated processes usually require justification and are reported to the US FDA for approval. Recently, design-of-experiments-based approaches have been explored to rapidly and efficiently achieve this goal of optimized yield with a better understanding of product and process variables that affect a product's critical quality attributes. Here, we present a laboratory-scale model culture where we apply a Plackett-Burman screening design to parallel cultures to study the main effects of 11 process variables. This exercise allowed us to determine the relative importance of these variables and identify the most important factors to be further optimized in order to control both desirable and undesirable glycan profiles. We found engineering changes relating to culture temperature and nonessential amino acid supplementation significantly impacted glycan profiles associated with fucosylation, β-galactosylation, and sialylation. All of these are important for monoclonal antibody product quality. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
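
A Plackett-Burman screening design like the one described (11 two-level factors in 12 runs) can be generated from the standard cyclic construction; this is a generic sketch of the design matrix, not the paper's specific factor assignments:

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors:
    cyclic shifts of the standard generator row, plus a row of all -1."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[i:] + gen[:i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()
```

Each column is balanced (six highs, six lows) and orthogonal to every other column, which is what lets 11 main effects be screened in only 12 bioreactor runs.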

  1. Application and System Design of Elastomer Based Optofluidic Lenses

    NASA Astrophysics Data System (ADS)

    Savidis, Nickolaos

    Adaptive optics technology has revolutionized real-time correction of wavefront aberrations. Optofluidic devices offer an opportunity to produce flexible refractive lenses for wavefront correction. Fluidic lenses are superior to their solid counterparts in their ability to produce tunable optical systems that, when synchronized, yield real-time variable systems with no moving parts. We have developed optofluidic lenses for applications in applied optical devices as well as ophthalmic optical devices. The first half of this dissertation discusses the production of fluidic lenses as optical devices, along with the design and testing of various fluidic systems made with these components. We begin with the creation of spherical (defocus) singlet fluidic lenses. We then produce zoom optical systems with no moving parts by synchronizing combinations of these fluidic spherical lenses. The variable-power zoom system incorporates two synchronized singlet fluidic lenses; the coupled device has no moving parts and has produced a magnification range of 0.1x to 10x, or a 20x magnification range. The following chapter focuses on achromatic lens designs. We offer an analysis of a hybrid diffractive-refractive achromat that provides discrete achromatized variable focal lengths, and, by synchronizing the two membrane surfaces of the fluidic achromat, we develop a design for a fully optofluidic achromatic lens. The second half of this dissertation discusses optofluidic technology in ophthalmic applications. We begin with an introduction to an optofluidic phoropter system. A fluidic phoropter is designed through the combination of a defocus lens with two cylindrical fluidic lenses that are oriented 45° relative to each other. 
Here we discuss the design of the fluidic cylindrical lenses coupled with the previously discussed defocus singlet lens. We then couple this optofluidic phoropter with relay optics and Shack-Hartmann wavefront sensing technology to produce an auto-phoropter device. The auto-phoropter system combines a Shack-Hartmann wavefront sensor, configured as a refractometer, with the compact refractive fluidic lens phoropter. This combination allows for the identification and control of ophthalmic cylinder, cylinder axis, and refractive error. The closed-loop system of the fluidic phoropter with the refractometer enables the creation of our see-through auto-phoropter system. The design and testing of several generations of transmissive see-through auto-phoropter devices are presented in this section.

  2. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals

    PubMed Central

    Matt, Dominik T.

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a truly holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows by reducing the complexity of the system. PMID:29065578

  3. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals.

    PubMed

    Arcidiacono, Gabriele; Matt, Dominik T; Rauch, Erwin

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a truly holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows by reducing the complexity of the system.

  4. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals

    PubMed

    Arcidiacono, Gabriele; Matt, Dominik T.; Rauch, Erwin

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a truly holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows by reducing the complexity of the system. © 2017 Gabriele Arcidiacono et al.

  5. A Rigorous Framework for Optimization of Expensive Functions by Surrogates

    NASA Technical Reports Server (NTRS)

    Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
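
The surrogate-management pattern such frameworks formalize — optimize a cheap approximation, re-evaluate the expensive function at the candidate, then accept or shrink the search region — can be sketched in one dimension. The toy objective below stands in for an expensive simulation, and the accept/shrink rule is a simplification of the framework's convergence machinery:

```python
def expensive_f(x):
    """Toy stand-in for an expensive objective (e.g., a long simulation)."""
    return (x - 1.7) ** 2 + 0.3

def surrogate_step(center, delta):
    """Fit a quadratic surrogate through three samples and minimize it,
    clipped to the trust region [center - delta, center + delta]."""
    ys = [expensive_f(center - delta), expensive_f(center),
          expensive_f(center + delta)]
    g = (ys[2] - ys[0]) / (2 * delta)              # surrogate gradient
    h = (ys[2] - 2 * ys[1] + ys[0]) / delta ** 2   # surrogate curvature
    step = -g / h if h > 0 else (-delta if g > 0 else delta)
    return center + max(-delta, min(delta, step))

x, delta = 0.0, 1.0
for _ in range(20):
    cand = surrogate_step(x, delta)
    if expensive_f(cand) < expensive_f(x):
        x, delta = cand, delta * 1.5   # success: accept point, expand region
    else:
        delta *= 0.5                    # failure: keep point, shrink region
```

The loop converges to the minimizer at x = 1.7 using only a handful of "expensive" evaluations per iteration; a derivative-free framework of the kind described replaces the quadratic fit with richer surrogate models and a rigorous pattern-search safeguard.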

  6. Improved Broadband Liner Optimization Applied to the Advanced Noise Control Fan

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.; Sutliff, Daniel L.; Ayle, Earl; Ichihashi, Fumitaka

    2014-01-01

    The broadband component of fan noise has grown in relevance with the utilization of increased bypass ratio and advanced fan designs. Thus, while the attenuation of fan tones remains paramount, the ability to simultaneously reduce broadband fan noise levels has become more desirable. This paper describes improvements to a previously established broadband acoustic liner optimization process using the Advanced Noise Control Fan rig as a demonstrator. Specifically, in-duct attenuation predictions with a statistical source model are used to obtain optimum impedance spectra over the conditions of interest. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners aimed at producing impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increased weighting to specific frequencies and/or operating conditions. Constant-depth, double-degree of freedom and variable-depth, multi-degree of freedom designs are carried through design, fabrication, and testing to validate the efficacy of the design process. Results illustrate the value of the design process in concurrently evaluating the relative costs/benefits of these liner designs. This study also provides an application for demonstrating the integrated use of duct acoustic propagation/radiation and liner modeling tools in the design and evaluation of novel broadband liner concepts for complex engine configurations.

  7. Analytical and numerical study on cooling flow field designs performance of PEM fuel cell with variable heat flux

    NASA Astrophysics Data System (ADS)

    Afshari, Ebrahim; Ziaei-Rad, Masoud; Jahantigh, Nabi

    2016-06-01

    In PEM fuel cells, during the electrochemical generation of electricity, more than half of the chemical energy of hydrogen is converted to heat. This heat of reaction, if not exhausted properly, would impair the performance and durability of the cell. In general, large-scale PEM fuel cells are cooled by liquid water that circulates through coolant flow channels formed in bipolar plates or in dedicated cooling plates. In this paper, a numerical method is presented to study the cooling and temperature distribution of a polymer membrane fuel cell stack. The heat flux on the cooling plate is variable. A three-dimensional model of fluid flow and heat transfer in cooling plates with a 15 cm × 15 cm square area is considered, and the performances of four different coolant flow field designs, a parallel field and serpentine fields, are compared in terms of maximum surface temperature, temperature uniformity and pressure drop characteristics. By comparing the results for the two cases, constant and variable heat flux, it is observed that applying a constant heat flux instead of the variable heat flux that actually occurs in fuel cells is not an accurate assumption. The numerical results indicate that the straight flow field model has nearly the same temperature uniformity index and temperature difference as the serpentine models, while its pressure drop is lower than that of all the serpentine models. Another important advantage of this model is that it is much easier to design and build than the serpentine models.

  8. Design of a zoom lens without motorized optical elements

    NASA Astrophysics Data System (ADS)

    Peng, Runling; Chen, Jiabi; Zhu, Cheng; Zhuang, Songlin

    2007-05-01

    A novel design of a zoom lens system without motorized movements is proposed. The lens system consists of a fixed lens and two double-liquid variable-focus lenses. The liquid lenses, made out of two immiscible liquids, are based on the principle of electrowetting: an effect controlling the wetting properties of a liquid on a solid by modifying the applied voltage at the solid-liquid interface. The structure and principle of the lens system are introduced in this paper. Detailed calculations and simulation examples are presented to show that this zoom lens system appears viable as the next-generation zoom lens.
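
The electrowetting effect the liquid lenses rely on is commonly described by the Young-Lippmann equation, cos θ(V) = cos θ0 + ε0 εr V² / (2 γ d). The sketch below evaluates it with illustrative parameter values (assumptions for demonstration, not the paper's device data):

```python
import math

# Illustrative parameters (assumptions, not measured device values)
EPS0 = 8.854e-12               # vacuum permittivity, F/m
EPS_R = 3.0                    # dielectric constant of the insulating layer
D = 1e-6                       # dielectric layer thickness, m
GAMMA = 0.04                   # liquid-liquid interfacial tension, N/m
THETA0 = math.radians(140.0)   # contact angle at zero applied voltage

def contact_angle_deg(v):
    """Young-Lippmann contact angle (degrees) at applied voltage v (volts)."""
    c = math.cos(THETA0) + EPS0 * EPS_R * v ** 2 / (2 * GAMMA * D)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))
```

Increasing the voltage lowers the contact angle, which reshapes the curvature of the liquid-liquid meniscus and therefore tunes the focal length without any mechanical motion.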

  9. Design of a zoom lens without motorized optical elements.

    PubMed

    Peng, Runling; Chen, Jiabi; Zhu, Cheng; Zhuang, Songlin

    2007-05-28

    A novel design of a zoom lens system without motorized movements is proposed. The lens system consists of a fixed lens and two double-liquid variable-focus lenses. The liquid lenses, made out of two immiscible liquids, are based on the principle of electrowetting: an effect controlling the wetting properties of a liquid on a solid by modifying the applied voltage at the solid-liquid interface. The structure and principle of the lens system are introduced in this paper. Detailed calculations and simulation examples are presented to show that this zoom lens system appears viable as the next-generation zoom lens.

  10. Proceedings of the Conference on the Design of Experiments in Army Research, Development, and Testing (24th) Held at Mathematics Research Center, University of Wisconsin, Madison, Wisconsin on 4-6 October 1978

    DTIC Science & Technology

    1979-06-01

    study k variables, where k+1 is a multiple of four. In Reference 2, Box and Hunter give the following definition of resolution III designs: "No main... study such a class of generators and show that in a strong sense the combined generator does offer improvement. Our approach applies results from ma...required to fire a group of rounds has been as great as four hours. Test conditions such as tube droop, cant, ambient environmental conditions and

  11. Random Access Frame (RAF) System Neutral Buoyancy Evaluations

    NASA Technical Reports Server (NTRS)

    Howe, A. Scott; Polit-Casillas, Raul; Akin, David L.; McBryan, Katherine; Carlsen, Christopher

    2015-01-01

    The Random Access Frame (RAF) concept is a system for organizing internal layouts of space habitats, vehicles, and outposts. The RAF system is designed as a more efficient improvement over the current International Standard Payload Rack (ISPR) used on the International Space Station (ISS), which was originally designed to allow for swapping and resupply by the Space Shuttle. The RAF system is intended to be applied in variable gravity or microgravity environments. This paper discusses evaluations and results of testing the RAF system in a neutral buoyancy facility simulating low levels of gravity that might be encountered in a deep space environment.

  12. Response surface modeling of boron adsorption from aqueous solution by vermiculite using different adsorption agents: Box-Behnken experimental design.

    PubMed

    Demirçivi, Pelin; Saygılı, Gülhayat Nasün

    2017-07-01

    In this study, a different method was applied for boron removal by using vermiculite as the adsorbent. Vermiculite, which was used in the experiments, was not modified with adsorption agents before boron adsorption using a separate process. Hexadecyltrimethylammonium bromide (HDTMA) and gallic acid (GA) were used as adsorption agents for vermiculite by maintaining the solid/liquid ratio at 12.5 g/L. HDTMA/GA concentration, contact time, pH, initial boron concentration, inert electrolyte and temperature effects on boron adsorption were analyzed. A three-factor, three-level Box-Behnken design model combined with the response surface method (RSM) was employed to examine and optimize process variables for boron adsorption from aqueous solution by vermiculite using HDTMA and GA. Solution pH (2-12), temperature (25-60 °C) and initial boron concentration (50-8,000 mg/L) were chosen as independent variables and coded x_1, x_2 and x_3 at three levels (-1, 0 and 1). Analysis of variance was used to test the significance of the variables and their interactions with a 95% confidence limit (α = 0.05). According to the regression coefficients, a second-order empirical equation was evaluated between the adsorption capacity (q_i) and the coded variables tested (x_i). Optimum values of the variables were also evaluated for maximum boron adsorption by vermiculite-HDTMA (HDTMA-Verm) and vermiculite-GA (GA-Verm).
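
The three-factor Box-Behnken design used here has a simple generic structure: all ±1 combinations of each pair of factors with the remaining factor at its midpoint, plus replicated center runs. A sketch of the coded design matrix (the center-run count below is an assumption; the columns correspond only loosely to the paper's x_1, x_2, x_3 coding):

```python
from itertools import combinations

def box_behnken(k=3, center_runs=3):
    """Box-Behnken design in coded units: +/-1 factorial points for each
    pair of factors with all other factors at 0, plus center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([0] * k for _ in range(center_runs))
    return runs

design = box_behnken()   # 12 edge runs + 3 center runs = 15 runs
```

Avoiding the corner points of the cube keeps all runs away from simultaneous factor extremes while still supporting a full second-order (quadratic) response surface model.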

  13. Simple uncertainty propagation for early design phase aircraft sizing

    NASA Astrophysics Data System (ADS)

    Lenz, Annelise

    Many designers and systems analysts are aware of the uncertainty inherent in their aircraft sizing studies; however, few incorporate methods to address and quantify this uncertainty. Many aircraft design studies use semi-empirical predictors based on a historical database and contain uncertainty -- a portion of which can be measured and quantified. In cases where historical information is not available, surrogate models built from higher-fidelity analyses often provide predictors for design studies where the computational cost of directly using the high-fidelity analyses is prohibitive. These surrogate models contain uncertainty, some of which is quantifiable. However, rather than quantifying this uncertainty, many designers merely include a safety factor or design margin in the constraints to account for the variability between the predicted and actual results. This can become problematic if a designer does not estimate the amount of variability correctly, which then can result in either an "over-designed" or "under-designed" aircraft. "Under-designed" and some "over-designed" aircraft will likely require design changes late in the process and will ultimately require more time and money to create; other "over-designed" aircraft concepts may not require design changes, but could end up being more costly than necessary. Including and propagating uncertainty early in the design phase so designers can quantify some of the errors in the predictors could help mitigate the extent of this additional cost. The method proposed here seeks to provide a systematic approach for characterizing a portion of the uncertainties that designers are aware of and propagating it throughout the design process in a procedure that is easy to understand and implement. 
Using Monte Carlo simulations that sample from quantified distributions will allow a systems analyst to use a carpet plot-like approach to make statements like: "The aircraft is 'P'% likely to weigh 'X' lbs or less, given the uncertainties quantified" without requiring the systems analyst to have substantial knowledge of probabilistic methods. A semi-empirical sizing study of a small single-engine aircraft serves as an example of an initial version of this simple uncertainty propagation. The same approach is also applied to a variable-fidelity concept study using a NASA-developed transonic Hybrid Wing Body aircraft.
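
The kind of probabilistic statement quoted in the abstract can be produced by sampling the quantified input distributions and reading off a percentile. The power-law weight predictor and the uncertainty magnitudes below are illustrative assumptions, not the study's semi-empirical model:

```python
import random

def empty_weight(gross_w, a, b):
    """Hypothetical semi-empirical predictor We = a * Wg**b (illustrative)."""
    return a * gross_w ** b

def weight_percentile(p=0.9, gross_w=2400.0, reps=50000, seed=42):
    """Sample the (assumed) coefficient uncertainty and return the empty
    weight not exceeded with probability p."""
    random.seed(seed)
    samples = sorted(empty_weight(gross_w,
                                  random.gauss(0.6, 0.03),    # coefficient a
                                  random.gauss(0.92, 0.005))  # exponent b
                     for _ in range(reps))
    return samples[int(p * reps)]

w90 = weight_percentile(0.9)   # "90% likely to weigh w90 lbs or less"
```

This is the carpet-plot-style readout described above: the analyst quotes a weight and a probability instead of a single deterministic estimate padded with an ad hoc design margin.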

  14. Automated predesign of aircraft

    NASA Technical Reports Server (NTRS)

    Poe, C. C., Jr.; Kruse, G. S.; Tanner, C. J.; Wilson, P. J.

    1978-01-01

    Program uses multistation structural-synthesis to size and design box-beam structures for transport aircraft. Program optimizes static strength and scales up to satisfy fatigue and fracture criteria. It has multimaterial capability and library of materials properties, including advanced composites. Program can be used to evaluate impact on weight of variables such as materials, types of construction, structural configurations, minimum gage limits, applied loads, fatigue lives, crack-growth lives, initial crack sizes, and residual strengths.

  15. Does the central limit theorem always apply to phase noise? Some implications for radar problems

    NASA Astrophysics Data System (ADS)

    Gray, John E.; Addison, Stephen R.

    2017-05-01

    The phase noise problem, or Rayleigh problem, occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the summation of identically distributed random variables has a characteristic function (CF) that is the product of the individual CFs. This product CF can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof, as discussed in Papoulis, to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
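
The breakdown of Gaussian assumptions for few superposed components is easy to demonstrate numerically: the real part of a single uniform-phase unit phasor follows an arcsine law (excess kurtosis -1.5), and normality is only approached as components accumulate. This is a generic Monte Carlo sketch, not the paper's characteristic-function analysis:

```python
import math
import random

def phasor_sum_real(n_phasors, reps=30000, seed=7):
    """Samples of the real part of a sum of n unit phasors with
    independent, uniformly distributed phases."""
    random.seed(seed)
    return [sum(math.cos(random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_phasors))
            for _ in range(reps)]

def excess_kurtosis(xs):
    """Sample excess kurtosis (0 for a Gaussian)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((v - m) ** 2 for v in xs) / n
    m4 = sum((v - m) ** 4 for v in xs) / n
    return m4 / m2 ** 2 - 3.0

k1 = excess_kurtosis(phasor_sum_real(1))    # arcsine law: about -1.5
k20 = excess_kurtosis(phasor_sum_real(20))  # near Gaussian: about 0
```

Since excess kurtosis of a sum of n i.i.d. components scales as 1/n, a receiver that integrates only a handful of returns can remain measurably non-Gaussian, which is the practical point of the abstract.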

  16. Hierarchical Bayesian models to assess between- and within-batch variability of pathogen contamination in food.

    PubMed

    Commeau, Natalie; Cornu, Marie; Albert, Isabelle; Denis, Jean-Baptiste; Parent, Eric

    2012-03-01

    Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure, comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. Models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon, and contamination by the same microorganism on cold-smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level, that is, between-batch variability is relatively strong, whereas in the second, a structure also exists but is less marked. © 2012 Society for Risk Analysis.
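
    The between-/within-batch decomposition at issue can be sketched with a frequentist method-of-moments analogue of the hierarchical model (synthetic data; the paper's actual fit is Bayesian):

```python
# Hypothetical sketch: simulate log-contamination with batch random effects
# and recover the two variance components from a balanced one-way layout.
# The ratio of the components is what drives sampling-plan performance.
import numpy as np

rng = np.random.default_rng(42)
n_batches, n_per_batch = 200, 10
sigma_between, sigma_within = 1.0, 0.5

batch_means = rng.normal(0.0, sigma_between, size=n_batches)
data = rng.normal(batch_means[:, None], sigma_within,
                  size=(n_batches, n_per_batch))

# Balanced one-way ANOVA estimators of the variance components.
ms_within = data.var(axis=1, ddof=1).mean()
ms_between = n_per_batch * data.mean(axis=1).var(ddof=1)
var_within_hat = ms_within                                # approx 0.25
var_between_hat = (ms_between - ms_within) / n_per_batch  # approx 1.0

ratio = var_between_hat / var_within_hat
```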

  17. Applied Music Teaching Behavior as a Function of Selected Personality Variables.

    ERIC Educational Resources Information Center

    Schmidt, Charles P.

    1989-01-01

    Investigates the relationships among applied music teaching behaviors and personality variables as measured by the Myers-Briggs Type Indicator (MBTI). Suggests that personality variables may be important factors underlying four applied music teaching behaviors: approvals, rate of reinforcement, teacher model/performance, and pace. (LS)

  18. [Development and Effects of Assertiveness Training applying Dongsasub Training for Nursing Students in Clinical Practice].

    PubMed

    Kim, Myoungsuk

    2016-08-01

    This study was conducted to develop assertiveness training applying Dongsasub training for junior nursing students, and to verify effectiveness of the training on assertiveness behavior, self-esteem, clinical practice stress, and clinical competence. The study design was a non-equivalent control group non-synchronized design. Participants were 63 nursing students in clinical training (31 students in the experimental group and 32 students in the control group). The assertiveness training applying Dongsasub training consisted of four sessions. Outcome variables included assertiveness behavior, self-esteem, clinical practice stress, and clinical competence. Data were analyzed using Chi-square, Fisher's exact test and independent samples t-test with SPSS/WIN 21.0. Scores of assertiveness behavior (t=-2.49, p=.015), self-esteem (t=-4.80, p<.001) and clinical competence (t=-2.33, p=.023) were significantly higher and clinical practice stress (t=4.22, p<.001) was significantly lower in the experimental group compared to the control group. Results indicate that the assertiveness training applying Dongsasub training can be used as a nursing intervention to lower clinical practice stress and improve the clinical competence of nursing students.

  19. An Optimization-Based Approach to Determine Requirements and Aircraft Design under Multi-domain Uncertainties

    NASA Astrophysics Data System (ADS)

    Govindaraju, Parithi

    Determining the optimal requirements for and design variable values of new systems, which operate along with existing systems to provide a set of overarching capabilities, as a single task is challenging due to the highly interconnected effects that setting requirements on a new system's design can have on how an operator uses this newly designed system. This task of determining the requirements and the design variable values becomes even more difficult because of the presence of uncertainties in the new system design and in the operational environment. This research proposed and investigated aspects of a framework that generates optimum design requirements of new, yet-to-be-designed systems that, when operating alongside other systems, will optimize fleet-level objectives while considering the effects of various uncertainties. Specifically, this research effort addresses the issues of uncertainty in the design of the new system through reliability-based design optimization methods, and uncertainty in the operations of the fleet through descriptive sampling methods and robust optimization formulations. In this context, fleet-level performance metrics result from using the new system alongside other systems to accomplish an overarching objective or mission. This approach treats the design requirements of a new system as decision variables in an optimization problem formulation that a user in the position of making an acquisition decision could solve. This solution would indicate the best new system requirements-and an associated description of the best possible design variable values for that new system-to optimize the fleet-level performance metric(s).
Using a problem motivated by recorded operations of the United States Air Force Air Mobility Command for illustration, the approach is demonstrated first for a simplified problem that only considers demand uncertainties in the service network and the proposed methodology is used to identify the optimal design requirements and optimal aircraft sizing variables of new, yet-to-be-introduced aircraft. With this new aircraft serving alongside other existing aircraft, the fleet of aircraft satisfy the desired demand for cargo transportation, while maximizing fleet productivity and minimizing fuel consumption via a multi-objective problem formulation. The approach is then extended to handle uncertainties in both the design of the new system and in the operations of the fleet. The propagation of uncertainties associated with the conceptual design of the new aircraft to the uncertainties associated with the subsequent operations of the new and existing aircraft in the fleet presents some unique challenges. A computationally tractable hybrid robust counterpart formulation efficiently handles the confluence of the two types of domain-specific uncertainties. This hybrid formulation is tested on a larger route network problem to demonstrate the scalability of the approach. Following the presentation of the results obtained, a summary discussion indicates how decision-makers might use these results to set requirements for new aircraft that meet operational needs while balancing the environmental impact of the fleet with fleet-level performance. Comparing the solutions from the uncertainty-based and deterministic formulations via a posteriori analysis demonstrates the efficacy of the robust and reliability-based optimization formulations in addressing the different domain-specific uncertainties. 
Results suggest that the aircraft design requirements and design description determined through the hybrid robust counterpart formulation approach differ from solutions obtained from the simplistic deterministic approach, and lead to greater fleet-level fuel savings when subjected to real-world uncertain scenarios (more robust to uncertainty). The research, though applied to a specific air cargo application, is technically agnostic in nature and can be applied to other facets of policy and acquisition management, to explore capability trade spaces for different vehicle systems, mitigate risks, define policy and potentially generate better returns on investment. Other domains relevant to policy and acquisition decisions could utilize the problem formulation and solution approach proposed in this dissertation provided that the problem can be split into a non-linear programming problem to describe the new system sizing and the fleet operations problem can be posed as a linear/integer programming problem.
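
    The fleet-operations subproblem described above, posed as a linear program, can be sketched on a deliberately tiny invented instance (all capacities, fuel burns and demands below are hypothetical), assuming SciPy is available:

```python
# Toy fleet-assignment LP: choose trips per aircraft type and route to meet
# cargo demand at minimum fuel. Two types (existing, new) and two routes.
import numpy as np
from scipy.optimize import linprog

cap = np.array([20.0, 50.0])          # tons per trip: existing, new aircraft
fuel = np.array([[5.0, 8.0],          # fuel per trip, type x route
                 [9.0, 13.0]])
demand = np.array([400.0, 600.0])     # tons required on each route
max_trips = np.array([40.0, 25.0])    # utilization limit per type

# Decision vector x = [x00, x01, x10, x11] (type, route), trips >= 0.
c = fuel.ravel()
# Demand rows: -sum_i cap_i * x_ij <= -demand_j  (capacity >= demand)
A_demand = np.array([[-cap[0], 0.0, -cap[1], 0.0],
                     [0.0, -cap[0], 0.0, -cap[1]]])
# Utilization rows: sum_j x_ij <= max_trips_i
A_util = np.array([[1.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
res = linprog(c,
              A_ub=np.vstack([A_demand, A_util]),
              b_ub=np.concatenate([-demand, max_trips]),
              bounds=[(0, None)] * 4, method="highs")
total_fuel = res.fun   # the new type is cheaper per ton, so it serves all demand
```

In the dissertation's framework this LP sits inside an outer loop over the new aircraft's sizing variables, with the uncertain demand handled by the robust counterpart rather than the fixed vector used here.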

  20. Statokinesigram normalization method.

    PubMed

    de Oliveira, José Magalhães

    2017-02-01

    Stabilometry is a technique that aims to study the body sway of human subjects, employing a force platform. The signal obtained from this technique refers to the position of the foot base ground-reaction vector, known as the center of pressure (CoP). The parameters calculated from the signal are used to quantify the displacement of the CoP over time; there is large variability, both between and within subjects, which prevents the definition of normative values. The intersubject variability is related to differences between subjects in terms of their anthropometry, in conjunction with their muscle activation patterns (biomechanics); the intrasubject variability can be caused by a learning effect or fatigue. Age and foot placement on the platform are also known to influence variability. Normalization is the main method used to decrease this variability and to bring distributions of adjusted values into alignment. In 1996, O'Malley proposed three normalization techniques to eliminate the effect of age and anthropometric factors from temporal-distance parameters of gait. These techniques were adopted by some authors to normalize the stabilometric signal. This paper proposes a new method of normalization of stabilometric signals to be applied in balance studies. The method was applied to a data set collected in a previous study, and the results of normalized and nonnormalized signals were compared. The results showed that the new method, if used in a well-designed experiment, can eliminate undesirable correlations between the analyzed parameters and the subjects' characteristics and show only the experimental conditions' effects.
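
    A minimal sketch of regression-based normalization in the spirit of O'Malley's techniques (synthetic data, not the paper's exact method): detrend a parameter against a subject characteristic so the normalized values no longer correlate with it.

```python
# Divide each subject's sway parameter by the value predicted from a
# regression on a subject characteristic (here, height), removing the
# anthropometric trend from the normalized parameter.
import numpy as np

rng = np.random.default_rng(7)
height = rng.uniform(1.5, 2.0, size=300)            # subject characteristic
sway = 2.0 * height + rng.normal(0.0, 0.1, 300)     # parameter scales with it

slope, intercept = np.polyfit(height, sway, 1)
predicted = slope * height + intercept
sway_norm = sway / predicted          # dimensionless, detrended parameter

r_before = np.corrcoef(height, sway)[0, 1]          # strong correlation
r_after = np.corrcoef(height, sway_norm)[0, 1]      # near zero
```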

  1. Zoning method for environmental engineering geological patterns in underground coal mining areas.

    PubMed

    Liu, Shiliang; Li, Wenping; Wang, Qiqing

    2018-09-01

    Environmental engineering geological patterns (EEGPs) are used to express the trend and intensity of eco-geological environmental change caused by mining in underground coal mining areas, a complex process controlled by multiple factors. A new zoning method for EEGPs was developed based on the variable-weight theory (VWT), in which the weights of factors vary with their values. The method was applied to the Yushenfu mining area, Shaanxi, China. First, the mechanism of the EEGPs caused by mining was elucidated, and four types of EEGPs were proposed. Subsequently, 13 key control factors were selected from mining conditions, lithosphere, hydrosphere, ecosphere, and climatic conditions; their thematic maps were constructed using ArcGIS software and remote-sensing technologies. Then, a stimulation-punishment variable-weight model, derived from the partition of the basic evaluation units of the study area, the construction of the partition state-variable-weight vector, and the determination of the variable-weight intervals, was built to calculate the variable weights of each factor. On this basis, a zoning mathematical model of EEGPs was established, and the zoning results were analyzed. For comparison, the traditional constant-weight theory (CWT) was also applied to divide the EEGPs. Finally, the zoning results obtained using VWT and CWT were compared. Verification by field investigation indicates that VWT is more accurate and reliable than CWT. The zoning results are consistent with the actual situation and provide a key basis for planning and design toward the rational development of coal resources and the protection of the eco-geological environment. Copyright © 2018 Elsevier B.V. All rights reserved.
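
    The contrast between constant and variable weighting can be sketched with invented numbers (three factors rather than the paper's 13; the threshold and boost are hypothetical): a factor's weight is increased when its normalized state falls into a "punishment" range, so one very poor factor can dominate the zoning score, which constant weighting cannot capture.

```python
# Variable-weight vs constant-weight scoring sketch. States are normalized
# factor values in [0, 1]; weights are boosted for factors in a poor state.
import numpy as np

base_w = np.array([0.4, 0.3, 0.3])            # constant weights, sum to 1

def variable_weights(states, punish_below=0.3, boost=2.0):
    """State-variable-weight vector: punish factors in a poor state."""
    factors = np.where(states < punish_below, boost, 1.0)
    w = base_w * factors
    return w / w.sum()                        # renormalize to sum to 1

good = np.array([0.8, 0.7, 0.9])
poor_last = np.array([0.8, 0.7, 0.1])         # one factor in a poor state

w_good = variable_weights(good)               # no punishment: equals base_w
score_cw = float(base_w @ poor_last)                       # constant weight
score_vw = float(variable_weights(poor_last) @ poor_last)  # variable weight
# The variable-weight score is pulled down by the single poor factor.
```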

  2. Enabling intelligent copernicus services for carbon and water balance modeling of boreal forest ecosystems - North State

    NASA Astrophysics Data System (ADS)

    Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi

    2015-04-01

    The objective of project North State, funded by Framework Program 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe, to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated from remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the objective of fully automatic estimation. North State has two sites for intensive studies, in southern and northern Finland, respectively, one in Iceland, and one in the Komi Republic of Russia. Additionally, the model input variables will be estimated, and the models applied, over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to earth observation variables, and flux tower measurements are also applied in the model assessment. In the paper, results from hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented, together with an example of a fully automatic land cover class prediction.

  3. Bio-inspired online variable recruitment control of fluidic artificial muscles

    NASA Astrophysics Data System (ADS)

    Jenkins, Tyler E.; Chapman, Edward M.; Bryant, Matthew

    2016-12-01

    This paper details the creation of a hybrid variable recruitment control scheme for fluidic artificial muscle (FAM) actuators with an emphasis on maximizing system efficiency and switching control performance. Variable recruitment is the process of altering a system’s active number of actuators, allowing operation in distinct force regimes. Previously, FAM variable recruitment was only quantified with offline, manual valve switching; this study addresses the creation and characterization of novel, on-line FAM switching control algorithms. The bio-inspired algorithms are implemented in conjunction with a PID and model-based controller, and applied to a simulated plant model. Variable recruitment transition effects and chatter rejection are explored via a sensitivity analysis, allowing a system designer to weigh tradeoffs in actuator modeling, algorithm choice, and necessary hardware. Variable recruitment is further developed through simulation of a robotic arm tracking a variety of spline position inputs, requiring several levels of actuator recruitment. Switching controller performance is quantified and compared with baseline systems lacking variable recruitment. The work extends current variable recruitment knowledge by creating novel online variable recruitment control schemes, and exploring how online actuator recruitment affects system efficiency and control performance. Key topics associated with implementing a variable recruitment scheme, including the effects of modeling inaccuracies, hardware considerations, and switching transition concerns are also addressed.

  4. A Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) for In-Situ Mars Surface Sample Analysis

    NASA Technical Reports Server (NTRS)

    Edmunson, J.; Gaskin, J. A.; Jerman, G. A.; Harvey, R. P.; Doloboff, I. J.; Neidholdt, E. L.

    2016-01-01

    The Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) project, funded by the NASA Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) Research Opportunities in Space and Earth Sciences (ROSES), will build upon previous miniaturized SEM designs and recent advancements in variable pressure SEMs to design and build a SEM to complete analyses of samples on the surface of Mars using the atmosphere as an imaging medium. This project is a collaboration between NASA Marshall Space Flight Center (MSFC), the Jet Propulsion Laboratory (JPL), electron gun and optics manufacturer Applied Physics Technologies, and small vacuum system manufacturer Creare. Dr. Ralph Harvey and environmental SEM (ESEM) inventor Dr. Gerry Danilatos serve as advisors to the team. Variable pressure SEMs allow for fine (nm-scale) resolution imaging and micron-scale chemical study of materials without sample preparation (e.g., carbon or gold coating). Charging of a sample is reduced or eliminated by the gas surrounding the sample. It is this property of ESEMs that makes them ideal for locations where sample preparation is not yet feasible, such as the surface of Mars. In addition, the lack of sample preparation needed here will simplify the sample acquisition process and allow caching of the samples for future complementary payload use.

  5. Development and Validation of an Interactive Liner Design and Impedance Modeling Tool

    NASA Technical Reports Server (NTRS)

    Howerton, Brian M.; Jones, Michael G.; Buckley, James L.

    2012-01-01

    The Interactive Liner Impedance Analysis and Design (ILIAD) tool is a LabVIEW-based software package used to design the composite surface impedance of a series of small-diameter quarter-wavelength resonators incorporating variable depth and sharp bends. Such structures are useful for packaging broadband acoustic liners into constrained spaces for turbofan engine noise control applications. ILIAD's graphical user interface allows the acoustic channel geometry to be drawn in the liner volume while the surface impedance and absorption coefficient calculations are updated in real-time. A one-dimensional transmission line model serves as the basis for the impedance calculation and can be applied to many liner configurations. Experimentally, tonal and broadband acoustic data were acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3000 Hz at 120 and 140 dB SPL. Normalized impedance spectra were measured using the Two-Microphone Method for the various combinations of channel configurations. Comparisons between the computed and measured impedances show excellent agreement for broadband liners comprised of multiple, variable-depth channels. The software can be used to design arrays of resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.
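
    The one-dimensional transmission-line calculation for a single straight channel can be sketched as follows. This is a simplification of the tool's model: losses are collapsed into an assumed constant resistance term `r` (a stand-in for viscous losses), whereas ILIAD models losses and bends in detail.

```python
# Rigidly terminated channel of depth L: normalized cavity reactance is
# -cot(kL), so a quarter-wavelength channel is resonant (reactance = 0)
# and, with resistance r = 1, absorbs perfectly at normal incidence.
import numpy as np

C0 = 343.0                      # speed of sound, m/s

def surface_impedance(freq_hz, depth_m, r=1.0):
    """Normalized impedance: assumed resistance r plus cavity reactance."""
    k = 2.0 * np.pi * freq_hz / C0
    return r - 1j / np.tan(k * depth_m)     # z = r - i cot(kL)

def absorption(z):
    """Normal-incidence absorption coefficient from normalized impedance."""
    refl = (z - 1.0) / (z + 1.0)
    return 1.0 - np.abs(refl) ** 2

depth = C0 / (4.0 * 1000.0)     # quarter wavelength at 1 kHz, about 86 mm
alpha_res = absorption(surface_impedance(1000.0, depth))   # at resonance
alpha_off = absorption(surface_impedance(500.0, depth))    # off resonance
```

Summing the admittances of several such channels of different depths is the basic route to the broadband composite surfaces the abstract describes.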

  6. Partial gravity habitat study

    NASA Technical Reports Server (NTRS)

    Capps, Stephen; Lorandos, Jason; Akhidime, Eval; Bunch, Michael; Lund, Denise; Moore, Nathan; Murakawa, Kiosuke

    1989-01-01

    The purpose of this study is to investigate comprehensive design requirements associated with designing habitats for humans in a partial gravity environment, then to apply them to a lunar base design. Other potential sites for application include planetary surfaces such as Mars, variable-gravity research facilities, and a rotating spacecraft. Design requirements for partial gravity environments include locomotion changes in less than normal earth gravity; facility design issues, such as interior configuration, module diameter, and geometry; and volumetric requirements based on the previous as well as psychological issues involved in prolonged isolation. For application to a lunar base, it is necessary to study the exterior architecture and configuration to ensure optimum circulation patterns while providing dual egress; radiation protection issues are addressed to provide a safe and healthy environment for the crew; and finally, the overall site is studied to locate all associated facilities in context with the habitat. Mission planning is not the purpose of this study; therefore, a Lockheed scenario is used as an outline for the lunar base application, which is then modified to meet the project needs. The goal of this report is to formulate facts on human reactions to partial gravity environments, derive design requirements based on these facts, and apply the requirements to a partial gravity situation which, for this study, was a lunar base.

  7. Improved Data Acquisition Methods for Uninterrupted Signal Monitoring and Ultra-Fast Plasma Diagnostics in LHD

    NASA Astrophysics Data System (ADS)

    Nakanishi, Hideya; Imazu, Setsuo; Ohsuna, Masaki; Kojima, Mamoru; Nonomura, Miki; Shoji, Mamoru; Emoto, Masahiko; Yoshida, Masanobu; Iwata, Chie; Miyake, Hitoshi; Nagayama, Yoshio; Kawahata, Kazuo

    To deal with endless data streams acquired in LHD steady-state experiments, the LHD data acquisition system was designed with a simple concept that divides a long pulse into a consecutive series of 10-s “subshots”. The latest digitizers, which apply high-speed PCI-Express technology, however, output nonstop gigabyte-per-second data streams for which the 10-s rule would produce impractically large subshots; these digitizers need shorter subshot intervals of less than 10 s. In contrast, steady-state fusion plants need uninterrupted monitoring of the environment and device soundness, and adopt longer subshot lengths of either 10 min or 1 day. To cope with both uninterrupted monitoring and ultra-fast diagnostics, the ability to vary the subshot length according to the type of operation is required. In this study, a design modification that enables variable subshot lengths was implemented and its practical effectiveness in LHD was verified.

  8. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations are necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. 
Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.
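
    The role of such error-variance indices in network design can be illustrated with a minimal two-sample kriging computation (our example, not the paper's factorization scheme): the estimation error variance at an unsampled point shrinks as sample spacing shrinks, and this is the quantity a designer trades against sampling cost.

```python
# Simple-kriging error variance at the midpoint between two samples under a
# unit-sill exponential covariance model, for two candidate sample spacings.
import numpy as np

def midpoint_error_variance(spacing, corr_length=2.0):
    cov = lambda h: np.exp(-np.abs(h) / corr_length)   # unit-sill model
    C = np.array([[cov(0.0), cov(spacing)],
                  [cov(spacing), cov(0.0)]])           # sample-to-sample
    c = np.array([cov(spacing / 2.0)] * 2)             # sample-to-target
    w = np.linalg.solve(C, c)                          # kriging weights
    return 1.0 - w @ c                                 # error variance

dense = midpoint_error_variance(1.0)     # closer samples, lower variance
sparse = midpoint_error_variance(4.0)    # wider spacing, higher variance
```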

  9. Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structural element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis, and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.

  10. Tobacco smoking and depression: time to move on to a new research paradigm in medicine?

    PubMed

    de Jonge, Peter; Bos, Elisabeth Henriette

    2013-05-24

    A recent paper published in BMC Cardiovascular Disorders reported on a study into whether tobacco smoking may serve as a risk factor for depression in patients with heart disease. In the current paper, we discuss several limitations of that study, of which many apply not just to the study itself but to the nomothetic research design that was used. Particularly when bidirectionality between variables is expected, fluctuation in variables over time takes place, and/or inter-individual differences are considerable, a nomothetic research approach does not seem appropriate, and may lead to false conclusions. As an alternative, we describe an idiographic approach in which individuals are followed up over time using many repeated measurements, and from which individual models are estimated. Such intensive time-series studies are not common in medicine, but are well described in the fields of econometrics and meteorology. Combining idiographic research designs with more traditional nomothetic designs may lead to research findings that are not only useful for society but also valid in individuals. See related research article here http://www.biomedcentral.com/1471-2261/13/35.

  11. Tobacco smoking and depression: time to move on to a new research paradigm in medicine?

    PubMed Central

    2013-01-01

    A recent paper published in BMC Cardiovascular Disorders reported on a study into whether tobacco smoking may serve as a risk factor for depression in patients with heart disease. In the current paper, we discuss several limitations of that study, of which many apply not just to the study itself but to the nomothetic research design that was used. Particularly when bidirectionality between variables is expected, fluctuation in variables over time takes place, and/or inter-individual differences are considerable, a nomothetic research approach does not seem appropriate, and may lead to false conclusions. As an alternative, we describe an idiographic approach in which individuals are followed up over time using many repeated measurements, and from which individual models are estimated. Such intensive time-series studies are not common in medicine, but are well described in the fields of econometrics and meteorology. Combining idiographic research designs with more traditional nomothetic designs may lead to research findings that are not only useful for society but also valid in individuals. See related research article here http://www.biomedcentral.com/1471-2261/13/35. PMID:23705867

  12. A method for obtaining reduced-order control laws for high-order systems using optimization techniques

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1981-01-01

    A method of synthesizing reduced-order optimal feedback control laws for a high-order system is developed. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs. An analogy with the linear quadratic Gaussian solution is utilized to select a set of design variables and their initial values. To improve the stability margins of the system, an input-noise adjustment procedure is used in the design algorithm. The method is applied to the synthesis of an active flutter-suppression control law for a wind tunnel model of an aeroelastic wing. The reduced-order controller is compared with the corresponding full-order controller and found to provide nearly optimal performance. The performance of the present method appeared to be superior to that of two other control law order-reduction methods. It is concluded that by using the present algorithm, nearly optimal low-order control laws with good stability margins can be synthesized.
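
    The synthesis idea (a general-purpose optimizer searching low-order gains, scoring each candidate by weighted steady-state mean-square response plus control effort) can be sketched on a toy discrete-time system. All matrices below are invented for illustration, the controller is reduced to a single static output-feedback gain, and SciPy is assumed available.

```python
# Score a candidate gain k by the steady-state covariance of the
# noise-driven closed loop (discrete Lyapunov equation), then search k
# with a scalar optimizer. Unstable gains are penalized.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov
from scipy.optimize import minimize_scalar

A = np.array([[0.99, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])      # only the second state is measured
W = 0.01 * np.eye(2)            # process-noise covariance
Q, R = np.eye(2), 0.1           # response and control weights

def cost(k):
    Acl = A - B @ (k * C)                       # u = -k * y
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return 1e6                              # penalize unstable gains
    P = solve_discrete_lyapunov(Acl, W)         # steady-state covariance
    u_var = k * k * float((C @ P @ C.T)[0, 0])  # mean-square control input
    return float(np.trace(Q @ P)) + R * u_var

res = minimize_scalar(cost, bounds=(0.0, 1.7), method="bounded")
k_opt = res.x      # interior optimum: some feedback beats open loop
```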

  13. Applying Service-Oriented Architecture to Archiving Data in Control and Monitoring Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogiec, J. M.; Trombly-Freytag, K.

    Current trends in the architecture of software systems focus our attention on building systems using a set of loosely coupled components, each providing a specific functionality known as a service. It is not much different in control and monitoring systems, where a functionally distinct sub-system can be identified and independently designed, implemented, deployed and maintained. One functionality that lends itself perfectly to becoming a service is archiving the history of the system state. The design of such a service and our experience of using it are the topic of this article. The service is built with responsibility segregation in mind; therefore, it provides for reducing data processing on the data viewer side and separation of data access and modification operations. The service architecture and the details concerning its data store design are discussed. An implementation of a service client capable of archiving EPICS process variables (PV) and LabVIEW shared variables is presented. Data access tools, including a browser-based data viewer and a mobile viewer, are also presented.

  14. Optimization of microwave-assisted extraction (MAP) for ginseng components by response surface methodology.

    PubMed

    Kwon, Joong-Ho; Bélanger, Jacqueline M R; Paré, J R Jocelyn

    2003-03-26

    Response surface methodology (RSM) was applied to predict optimum conditions for microwave-assisted extraction (MAP technology) of saponin components from ginseng roots. A central composite design was used to monitor the effect of ethanol concentration (30-90%, X(1)) and extraction time (30-270 s, X(2)) on dependent variables, such as total extract yield (Y(1)), crude saponin content (Y(2)), and saponin ratio (Y(3)), under atmospheric pressure conditions when focused microwaves were applied at an emission frequency of 2450 MHz. In MAP under pre-established conditions, correlation coefficients (R²) of the models for total extract yield and crude saponin were 0.9841 (p < 0.001) and 0.9704 (p < 0.01). Optimum extraction conditions were predicted as 52.6% ethanol and 224.7 s for extract yield, and 77.3% ethanol and 295.1 s for crude saponins, respectively. Estimated maximum values at the predicted optimum conditions were in good agreement with experimental values.
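
    The RSM workflow (designed points, quadratic fit, stationary-point optimum) can be sketched on a synthetic response with an invented optimum, not the ginseng data:

```python
# Fit a full second-order polynomial in two coded factors by least squares,
# then locate the stationary point of the fitted surface as the predicted
# optimum, as RSM does.
import numpy as np

rng = np.random.default_rng(3)
# 3x3 factorial-style design points in coded units (a stand-in for the
# central composite design).
x1, x2 = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
x1, x2 = x1.ravel(), x2.ravel()
# Synthetic response with a known interior maximum at (0.4, -0.2).
y = 10 - 2*(x1 - 0.4)**2 - 3*(x2 + 0.2)**2 + rng.normal(0, 0.05, x1.size)

# Design matrix for the full quadratic model.
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
b = np.linalg.lstsq(X, y, rcond=None)[0]   # b0, b1, b2, b12, b11, b22

# Stationary point: solve grad = 0 for the fitted quadratic.
H = np.array([[2*b[4], b[3]],
              [b[3], 2*b[5]]])
opt = np.linalg.solve(H, -b[1:3])          # recovers roughly (0.4, -0.2)
```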

  15. Thermal sensation prediction by soft computing methodology.

    PubMed

    Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor

    2016-12-01

    Thermal comfort in open urban areas is a very important factor from an environmental point of view. Demands for suitable thermal comfort therefore need to be fulfilled during urban planning and design. Thermal comfort can be modeled based on climatic parameters and other factors. These factors vary throughout the year and the day, so there is a need to establish an algorithm for thermal comfort prediction from the input variables. The prediction results could be used for planning the times of usage of urban areas. Since this is a highly nonlinear task, a soft computing methodology was applied in this investigation to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) for forecasting of physiological equivalent temperature (PET) values. Temperature, pressure, wind speed and irradiance were used as inputs. The prediction results are compared with some benchmark models. Based on the results, the ELM can be used effectively in forecasting of PET. Copyright © 2016 Elsevier Ltd. All rights reserved.
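
    An extreme learning machine can be sketched in a few lines (a generic configuration, with synthetic data standing in for the four meteorological inputs and PET; not the authors' setup): the hidden layer is random and fixed, and only the output weights are learned, by least squares.

```python
# Minimal ELM regressor: random input weights and biases, tanh hidden layer,
# output weights from a single linear least-squares solve.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 4))    # stand-ins for temp/pressure/wind/irradiance
y = X @ np.array([3.0, -1.0, -2.0, 1.5]) + np.sin(2 * X[:, 0])  # synthetic "PET"

n_hidden = 50
W = rng.normal(size=(4, n_hidden))                    # random input weights
b = rng.normal(size=n_hidden)                         # random biases
H = np.tanh(X[:400] @ W + b)                          # hidden activations
beta = np.linalg.lstsq(H, y[:400], rcond=None)[0]     # the only learned step

y_pred = np.tanh(X[400:] @ W + b) @ beta              # held-out predictions
rmse = np.sqrt(np.mean((y_pred - y[400:]) ** 2))      # well below target spread
```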

  16. Technical support for creating an artificial intelligence system for feature extraction and experimental design

    NASA Technical Reports Server (NTRS)

    Glick, B. J.

    1985-01-01

    Techniques for classifying objects into groups or classes go under many different names including, most commonly, cluster analysis. Mathematically, the general problem is to find a best mapping of objects into an index set consisting of class identifiers. When an a priori grouping of objects exists, the process of deriving the classification rules from samples of classified objects is known as discrimination. When such rules are applied to objects of unknown class, the process is denoted classification. The specific problem addressed involves the group classification of a set of objects that are each associated with a series of measurements (ratio, interval, ordinal, or nominal levels of measurement). Each measurement produces one variable in a multidimensional variable space. Cluster analysis techniques are reviewed and methods for including geographic location, distance measures, and spatial pattern (distribution) as parameters in clustering are examined. For the case of patterning, measures of spatial autocorrelation are discussed in terms of the kind of data (nominal, ordinal, or interval scaled) to which they may be applied.
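    As an illustration of the spatial autocorrelation measures discussed for interval-scaled data, here is a sketch of Moran's I on a toy one-dimensional lattice; the weight matrix and values are hypothetical:

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation for interval-scaled data.

    values  : 1-D array of observations, one per spatial unit
    weights : (n, n) spatial weight matrix, weights[i, j] > 0 if units
              i and j are neighbours (diagonal zero)
    """
    z = values - values.mean()
    n = len(values)
    s0 = weights.sum()
    return (n / s0) * (z @ weights @ z) / (z @ z)

# Toy example: four cells on a line, adjacent cells as neighbours.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

clustered = np.array([1.0, 1.0, 5.0, 5.0])    # similar neighbours
alternating = np.array([1.0, 5.0, 1.0, 5.0])  # dissimilar neighbours
print(morans_i(clustered, w))    # positive: values cluster spatially
print(morans_i(alternating, w))  # negative: values alternate
```

    Positive I indicates spatial clustering, negative I spatial dispersion; this is the interval-scaled case, while nominal data would call for join-count statistics instead.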

  17. Electro-responsive polyelectrolyte-coated surfaces.

    PubMed

    Sénéchal, V; Saadaoui, H; Rodriguez-Hernandez, J; Drummond, C

    2017-07-01

    The anchoring of polymer chains at solid surfaces is an efficient way to modify interfacial properties like the stability and rheology of colloidal dispersions, lubrication and biocompatibility. Polyelectrolytes are good candidates for the building of smart materials, as the polyion chain conformation can often be tuned by manipulation of different physico-chemical variables. However, achieving efficient and reversible control of this process represents an important technological challenge. In this regard, the application of an external electrical stimulus on polyelectrolytes seems to be a convenient control strategy, for several reasons. First, it is relatively easy to apply an electric field to the material with adequate spatiotemporal control. In addition, in contrast to chemically induced changes, the molecular response to a changing electric field occurs relatively quickly. If the system is properly designed, this response can then be used to control the magnitude of surface properties. In this work we discuss the effect of an external electric field on the adhesion and lubrication properties of several polyelectrolyte-coated surfaces. The influence of the applied field is investigated at different pH and salt conditions, as the polyelectrolyte conformation is sensitive to these variables. We show that it is possible to fine tune friction and adhesion using relatively low applied fields.

  18. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research

    PubMed Central

    Bamberger, Katharine T.

    2015-01-01

    The use of intensive longitudinal methods (ILM)—rapid in situ assessment at micro timescales—can be overlaid on RCTs and other study designs in applied family research. Especially when done as part of a multiple timescale design (in bursts over macro timescales), ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based (rather than family-based) interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM. PMID:26541560

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
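    A bare-bones particle swarm optimizer over three bounded design variables can be sketched as below. The objective is a simple stand-in, since the paper's torque-density evaluation through the MEC model is not reproduced here; the bounds, swarm size, and coefficients are likewise assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in objective: in the paper this would be (negative) torque density
# evaluated through the magnetic equivalent circuit; here a test function
# with a known minimum at x = 0.3 in every dimension.
def objective(x):
    return np.sum((x - 0.3) ** 2, axis=1)

# Three design variables (cf. stator pole length, magnet length, rotor
# thickness), each normalized to [0, 1].
n_particles, n_dims, n_iters = 30, 3, 100
pos = rng.uniform(0, 1, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()                    # each particle's best position
pbest_val = objective(pos)
gbest = pbest[pbest_val.argmin()].copy()  # swarm's best position

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)    # keep particles inside the bounds
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # converges toward [0.3, 0.3, 0.3]
```

    The appeal noted in the abstract is that each `objective` call is an analytical MEC evaluation rather than a finite element solve, so thousands of swarm evaluations remain cheap.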

  20. Age adjustment in ecological studies: using a study on arsenic ingestion and bladder cancer as an example.

    PubMed

    Guo, How-Ran

    2011-10-20

    Despite its limitations, ecological study design is widely applied in epidemiology. In most cases, adjustment for age is necessary, but different methods may lead to different conclusions. To compare three methods of age adjustment, a study on the associations between arsenic in drinking water and incidence of bladder cancer in 243 townships in Taiwan was used as an example. A total of 3068 cases of bladder cancer, including 2276 men and 792 women, were identified in the study townships during a ten-year study period. Three methods were applied to analyze the same data set. The first (Direct Method) applied direct standardization to obtain the standardized incidence rate and then used it as the dependent variable in the regression analysis. The second (Indirect Method) applied indirect standardization to obtain the standardized incidence ratio and used it as the dependent variable instead. The third (Variable Method) used the proportions of residents in different age groups as part of the independent variables in the multiple regression models. All three methods showed a statistically significant positive association between arsenic exposure above 0.64 mg/L and incidence of bladder cancer in men and women, but different results were observed for the other exposure categories. In addition, the risk estimates obtained by the different methods for the same exposure category all differed. Using an empirical example, the current study confirmed the argument made previously by other researchers that, whereas the three methods of age adjustment may lead to different conclusions, only the third approach can obtain unbiased estimates of the risks. The third method can also generate estimates of the risk associated with each age group, which the other two are unable to evaluate directly.
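    The Direct Method's first step, computing an age-standardized rate as a weighted sum of age-specific rates, can be illustrated with hypothetical numbers; the weights, age groups, and rates below are invented, not the study's:

```python
import numpy as np

# Hypothetical data: two townships, three age groups.
age_std_pop = np.array([0.40, 0.40, 0.20])  # standard-population weights
rates = np.array([[5.0, 20.0, 80.0],        # cases per 100 000 by age group
                  [4.0, 18.0, 75.0]])       # one row per township

# Direct standardization: weight each township's age-specific rates by
# the same standard population, yielding one comparable rate per township.
std_rates = rates @ age_std_pop
print(std_rates)  # [26.0, 23.8] cases per 100 000
```

    These standardized rates would then serve as the dependent variable in the regression, which is exactly where, per the abstract, bias can enter relative to the Variable Method.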

  1. Designing flexible engineering systems utilizing embedded architecture options

    NASA Astrophysics Data System (ADS)

    Pierce, Jeff G.

    This dissertation develops and applies an integrated framework for embedding flexibility in an engineered system architecture. Systems are constantly faced with unpredictability in the operational environment, threats from competing systems, obsolescence of technology, and general uncertainty in future system demands. Current systems engineering and risk management practices have focused almost exclusively on mitigating or preventing the negative consequences of uncertainty. This research recognizes that high uncertainty also presents an opportunity to design systems that can flexibly respond to changing requirements and capture additional value throughout the design life. However, there is no formalized approach to designing appropriately flexible systems. This research develops a three-stage integrated flexibility framework based on the concept of architecture options embedded in the system design. Stage One defines an eight-step systems engineering process to identify candidate architecture options. This process encapsulates the operational uncertainty through scenario development, traces new functional requirements to the affected design variables, and clusters the variables most sensitive to change. The resulting clusters can generate insight into the most promising regions of the architecture in which to embed flexibility in the form of architecture options. Stage Two develops a quantitative option valuation technique, grounded in real options theory, which is able to value embedded architecture options that exhibit variable expiration behavior. Stage Three proposes a portfolio optimization algorithm, for both discrete and continuous options, to select the optimal subset of architecture options, subject to budget and risk constraints. Finally, the feasibility, extensibility and limitations of the framework are assessed by its application to a reconnaissance satellite system development problem.
Detailed technical data, performance models, and cost estimates were compiled for the Tactical Imaging Constellation Architecture Study and leveraged to complete a realistic proof-of-concept.

  2. A Novel Variable-Focus Lens for HFGW

    NASA Astrophysics Data System (ADS)

    Woods, R. Clive

    2006-01-01

    Li and Torr published calculations claiming to show that gravitational waves (GWs) propagate inside superconductors with a phase velocity reduction (compared to free space) by a factor n ~ 300× and a wavenumber increase by a factor n. This gives major opportunities for designing future GW components able to focus, refract, reflect, and otherwise manipulate gravitational waves for efficient coupling to detectors, transmitters, generators, resonant chambers, and other sensors. To exploit this result, a novel type of HFGW lens design is proposed here using a magnetic field to adjust the focal length in an infinitely-variable manner. Type-II superconductors do not always completely expel large magnetic fields; above their lower critical field they allow vortices of magnetic flux to channel the magnetic field through the material. Within these vortices, the superconductor is magnetically quenched and so behaves as a non-superconductor. Varying the applied magnetic field varies the proportion of material that is quenched. This subsequently affects GW propagation behavior through a type II superconductor. Therefore, using a suitable non-uniform magnetic field, the GW optical path length may be arranged to vary in a technologically useful manner. A GW lens may be designed with focal length dependent upon the applied magnetic field. Such a lens would be invaluable in the design of advanced GW optics since focusing will be achieved electrically with no moving parts; for this reason it would be unparalleled in conventional optics. Since, therefore, variations in n (due to calculation error limits) can be compensated electrically, successful demonstration of this device would confirm the Li and Torr prediction much more easily than directly using a fixed lens structure. The device would also enable fast auto-focusing, zooming, and imaging tomography using electronic servos following development of the necessary HFGW detectors.

  3. First-Order Model Management With Variable-Fidelity Physics Applied to Multi-Element Airfoil Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.

    2000-01-01

    First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or a suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.

  4. Detrended fluctuation analysis as a regression framework: Estimating dependence at different scales

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2015-02-01

    We propose a framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and it is designed to estimate regression parameters at different scales and under potential nonstationarity and power-law correlations. The former feature allows for distinguishing between effects for a pair of variables from different temporal perspectives. The latter ones make the method a significant improvement over the standard least squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied on selected examples from physics, finance, environmental science, and epidemiology. For most of the studied cases, the relationship between variables of interest varies strongly across scales.
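    A basic DFA-1 computation, on which the proposed regression framework builds, can be sketched as follows; white noise is used as input so the expected scaling exponent (about 0.5) is known in advance:

```python
import numpy as np

def dfa(x, scales):
    """DFA-1: fluctuation function F(s) over a set of window sizes s."""
    profile = np.cumsum(x - np.mean(x))  # integrated (profile) series
    F = []
    for s in scales:
        n_win = len(profile) // s
        ms = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            # Remove the linear trend fitted within each window.
            coef = np.polyfit(t, seg, 1)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(2)
x = rng.normal(size=4096)                 # uncorrelated (white) noise
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)

# The scaling exponent alpha is the slope of log F(s) vs log s;
# for uncorrelated noise alpha should be near 0.5.
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(alpha)
```

    The paper's framework replaces the variances inside each window with detrended covariances between two series, giving scale-dependent regression coefficients; that extension is not shown here.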

  5. Application of Quality by Design (QbD) Principles to Extractables/Leachables Assessment. Establishing a Design Space for Terminally Sterilized Aqueous Drug Products Stored in a Plastic Packaging System.

    PubMed

    Jenke, Dennis

    2010-01-01

    The concept of quality by design (QbD) reflects the current global regulatory thinking related to pharmaceutical products. A cornerstone of the QbD paradigm is the concept of a design space, where the design space is a multidimensional combination of input variables and process parameters that have been demonstrated to provide the assurance of product quality. If a design space can be established for a pharmaceutical process or product, then operation within the design space confirms that the product or process output possesses the required quality attributes. This concept of design space can be applied to the safety (leachables) assessment of drug products manufactured and stored in packaging systems. Critical variables in such a design space would include those variables that affect the interaction of the drug product and its packaging, including (a) composition of the drug product, (b) composition of the packaging system, (c) configuration of the packaging system, and (d) the conditions of contact. This paper proposes and justifies such a leachables design space for aqueous drug products packaged in a specific plastic packaging system. Such a design space has the following boundaries: aqueous drug products with a pH in the range of 2 to 8 that contain no polarity-impacting agents such as organic solubilizers and stabilizers (addressing variable a); packaging systems manufactured from materials that meet the system's existing material specifications (addressing variable b); nominal fill volumes from 50 to 1000 mL (addressing variable c); and products subjected to terminal sterilization and then stored at room temperature for a period of up to 24 months (addressing variable d). The ramification of such a design space is that any drug product that falls within these boundaries is deemed to be compatible with the packaging system, from the perspective of safety, without the requirement of supporting drug product testing.
When drug products are packaged in plastic container systems, substances may leach from the container and accumulate in the product. It is necessary that the drug product's vendor demonstrate that any such leaching does not occur to the extent that the leached substances adversely affect the product's safety and/or efficacy. One method for accomplishing this objective is via analysis of the drug product to identify and quantify the leached substances. When a particular packaging system is utilized for multiple drug products, one reaches the point, after testing numerous drug products, where the leaching properties of the packaging system are well known and readily predictable. In such a case, testing of additional products in the same packaging system produces no new information and thus becomes redundant and unnecessary. The quality by design (QbD) principle can be simply stated as follows: once a system has been tested to the extent that the test results are predictable, further testing can be replaced by establishing that the system was operating within a defined design space. The purpose of this paper is to demonstrate the application of QbD principles to a packaging system that has been utilized with over 12 parenteral drug products. The paper concludes that the leachables profile of all drug products that fit a certain description (the design space) is known and predictable.

  6. Cooling of Gas Turbines, IV - Calculated Temperature Distribution in the Trailing Part of a Turbine Blade Using Direct Liquid Cooling

    NASA Technical Reports Server (NTRS)

    Brown, W. Byron; Monroe, William R.

    1947-01-01

    A theoretical analysis of the temperature distribution through the trailing portion of a blade near the coolant passages of liquid-cooled gas turbines was made. The analysis was applied to obtain the hot-spot temperatures at the trailing edge and the influence of design variables. The effective gas temperature was varied from 2000 degrees to 5000 degrees F in each investigation.

  7. Computer program for simulation of variable recharge with the U. S. Geological Survey modular finite-difference ground-water flow model (MODFLOW)

    USGS Publications Warehouse

    Kontis, A.L.

    2001-01-01

    The Variable-Recharge Package is a computerized method designed for use with the U.S. Geological Survey three-dimensional finite-difference ground-water flow model (MODFLOW-88) to simulate areal recharge to an aquifer. It is suitable for simulations of aquifers in which the relation between ground-water levels and land surface can affect the amount and distribution of recharge. The method is based on the premise that recharge to an aquifer cannot occur where the water level is at or above land surface. Consequently, recharge will vary spatially in simulations in which the Variable-Recharge Package is applied, if the water levels are sufficiently high. The input data required by the program for each model cell that can potentially receive recharge include the average land-surface elevation and a quantity termed "water available for recharge," which is equal to precipitation minus evapotranspiration. The Variable-Recharge Package also can be used to simulate recharge to a valley-fill aquifer in which the valley fill and the adjoining uplands are explicitly simulated. Valley-fill aquifers, which are the most common type of aquifer in the glaciated northeastern United States, receive much of their recharge from upland sources as channeled and(or) unchanneled surface runoff and as lateral ground-water flow. Surface runoff in the uplands is generated in the model when the applied water available for recharge is rejected because simulated water levels are at or above land surface. The surface runoff can be distributed to other parts of the model by (1) applying the amount of the surface runoff that flows to upland streams (channeled runoff) to explicitly simulated streams that flow onto the valley floor, and(or) (2) applying the amount that flows downslope toward the valley-fill aquifer (unchanneled runoff) to specified model cells, typically those near the valley wall.
An example model of an idealized valley-fill aquifer is presented to demonstrate application of the method and the type of information that can be derived from its use. Documentation of the Variable-Recharge Package is provided in the appendixes and includes listings of model code and of program variables. Comment statements in the program listings provide a narrative of the code. Input-data instructions and printed model output for the package are included.
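    The package's core rule (recharge is rejected wherever the simulated water level reaches land surface, and the rejected water becomes surface runoff to be routed elsewhere) can be sketched with hypothetical per-cell numbers:

```python
import numpy as np

# Hypothetical model cells: elevations and heads in metres,
# water available for recharge in metres per day.
land_surface = np.array([100.0, 101.0, 99.5, 102.0])  # land-surface elevation
head = np.array([98.0, 101.5, 99.5, 100.0])           # simulated water level
available = np.full(4, 0.002)                         # precip minus ET

# Recharge only where the water table is below land surface;
# cells at or above land surface reject their share entirely.
accepts = head < land_surface
recharge = np.where(accepts, available, 0.0)

# The rejected water becomes surface runoff, to be applied to streams
# or to downslope (e.g. valley-wall) cells in a later step.
runoff = available.sum() - recharge.sum()
print(recharge, runoff)
```

    Cells 2 and 3 (head at or above land surface) reject their water, so `runoff` here is 0.004 m/day awaiting redistribution.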

  8. Automated airplane surface generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.E.; Cordero, Y.; Jones, W.

    1996-12-31

    An efficient methodology and software are presented for defining a class of airplane configurations. A small set of engineering design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. Wing, canard, and tail surface grids are generated by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage is described by an algebraic function with four design parameters. The computed surface grids are suitable for a wide range of computational fluid dynamics simulations and configuration optimizations. Both batch and interactive software are discussed for applying the methodology.

  9. Optimal design application on the advanced aeroelastic rotor blade

    NASA Technical Reports Server (NTRS)

    Wei, F. S.; Jones, R.

    1985-01-01

    The vibration and performance optimization procedure using regression analysis was successfully applied to an advanced aeroelastic blade design study. The major advantage of this regression technique is that multiple optimizations can be performed to evaluate the effects of various objective functions and constraint functions. The data bases obtained from the rotorcraft flight simulation program C81 and Myklestad mode shape program are analytically determined as a function of each design variable. This approach has been verified for various blade radial ballast weight locations and blade planforms. This method can also be utilized to ascertain the effect of a particular cost function which is composed of several objective functions with different weighting factors for various mission requirements without any additional effort.

  10. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
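    The residual quantization structure itself (each stage quantizes the residual left by the previous stage, so codebook sizes multiply while storage only adds) can be sketched with plain k-means codebooks. This illustrates the structure only, not the paper's entropy-constrained design algorithm; the data and stage sizes are arbitrary:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means, used here as a stand-in codebook trainer."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 4))  # stand-in training vectors

# Residual VQ: three stages of 8 codewords each.
residual, stages = X.copy(), []
for _ in range(3):
    cb = kmeans(residual, 8)
    idx = np.argmin(((residual[:, None] - cb) ** 2).sum(-1), axis=1)
    residual = residual - cb[idx]   # pass the residual to the next stage
    stages.append(cb)

mse = np.mean(residual ** 2)
print(mse)  # distortion after 3 stages; 8^3 = 512 effective codewords
```

    Three stages of 8 codewords give 512 effective codewords at the search and storage cost of 24, which is the memory/computation advantage the abstract cites.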

  11. Designing nacre-like materials for simultaneous stiffness, strength and toughness: Optimum materials, composition, microstructure and size

    NASA Astrophysics Data System (ADS)

    Barthelat, Francois

    2014-12-01

    Nacre, bone and spider silk are staggered composites where inclusions of high aspect ratio reinforce a softer matrix. Such staggered composites have emerged through natural selection as the best configuration to produce stiffness, strength and toughness simultaneously. As a result, these remarkable materials are increasingly serving as models for synthetic composites with unusual and attractive performance. While several models have been developed to predict basic properties for biological and bio-inspired staggered composites, the designer is still left to struggle with finding optimum parameters. Unresolved issues include choosing optimum properties for inclusions and matrix, and resolving the contradictory effects of certain design variables. Here we overcome these difficulties with a multi-objective optimization for simultaneous high stiffness, strength and energy absorption in staggered composites. Our optimization scheme includes material properties for inclusions and matrix as design variables. This process reveals new guidelines, for example that the staggered microstructure is only advantageous if the tablets are at least five times stronger than the interfaces, and only if high volume concentrations of tablets are used. We finally compile the results into a step-by-step optimization procedure which can be applied to the design of any type of high-performance staggered composite and at any length scale. The procedure produces optimum designs which are consistent with the materials and microstructure of natural nacre, confirming that this natural material is indeed optimized for mechanical performance.

  12. Design of a rib impactor equipment

    NASA Astrophysics Data System (ADS)

    Torres, C. R.; García, G.; Aguilar, L. A.; Martínez, L.

    2017-01-01

    The human ribs must be analyzed as long and curved bones, due to their physiology. To develop experimental equipment that simulates the loads applied to the rib at the moment of a frontal automobile collision with a seat belt, a methodology was applied that consisted of identifying the needs and the variables that guided the design of a 3D model; from this, the technique of fused deposition modeling was used to produce the equipment pieces. The supports that hold the rib ends were designed with two and three degrees of freedom, which allows simulation of the rib's movement with the spine and the breastbone during breathing. To simulate the seat belt, two loads were applied over the front part of the rib from the sagittal and lateral planes respectively, through a displacement imposed by a linear actuator at a speed of 4 mm/s. The outcome is a design for equipment able to obtain the load parameters required to generate fractures in rib specimens. The equipment may be used for the study of specimens with geometries close to that of the reference rib.

  13. Comment on 'Amplification of endpoint structure for new particle mass measurement at the LHC'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barr, A. J.; Gwenlan, C.; Young, C. J. S.

    2011-06-01

    We present a comment on the kinematic variable m{sub CT2} recently proposed in Won Sang Cho, Jihn E. Kim, and Ji-Hun Kim, Phys. Rev. D 81, 095010 (2010). The variable is designed to be applied to models such as R-parity conserving supersymmetry (SUSY) when there is pair production of new heavy particles, each of which decays to a single massless visible and a massive invisible component. It was proposed by Cho, Kim, and Kim that a measurement of the peak of the m{sub CT2} distribution could be used to precisely constrain the masses of the SUSY particles. We show that, for an example characterized by direct squark decays, when standard model backgrounds are included in simulations, the sensitivity to the SUSY particle masses is more seriously impacted for m{sub CT2} than for other previously proposed variables.

  14. Diversified models for portfolio selection based on uncertain semivariance

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since the financial markets are complex, sometimes the future security returns are represented mainly based on experts' estimations due to lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subjectively, based on experts' estimations, and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve the models. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
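    The downside-semivariance objective can be illustrated with a sample-based stand-in for the paper's uncertain variables; the asset parameters and trade-off weight below are hypothetical, and a simple grid search replaces the paper's 99-method/genetic-algorithm hybrid:

```python
import numpy as np

def semivariance(returns):
    """Downside semivariance: mean squared shortfall below the mean."""
    mu = returns.mean()
    downside = np.minimum(returns - mu, 0.0)
    return np.mean(downside ** 2)

rng = np.random.default_rng(3)
# Three hypothetical assets, represented here by return samples.
R = np.column_stack([rng.normal(0.05, 0.10, 10000),
                     rng.normal(0.08, 0.20, 10000),
                     rng.normal(0.03, 0.05, 10000)])

# Mean-semivariance trade-off over a grid of long-only weight vectors.
best = None
for w1 in np.linspace(0, 1, 21):
    for w2 in np.linspace(0, 1 - w1, 21):
        w = np.array([w1, w2, 1 - w1 - w2])
        port = R @ w
        # Scalarized objective; the penalty weight 5.0 is arbitrary.
        score = port.mean() - 5.0 * semivariance(port)
        if best is None or score > best[0]:
            best = (score, w)
print(best[1])  # weights of the best portfolio found
```

    Unlike variance, semivariance penalizes only returns below the mean, which is why it is preferred when return distributions are asymmetric.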

  15. A Practical Framework Toward Prediction of Breaking Force and Disintegration of Tablet Formulations Using Machine Learning Tools.

    PubMed

    Akseli, Ilgaz; Xie, Jingjin; Schultz, Leon; Ladyzhynsky, Nadia; Bramante, Tommasina; He, Xiaorong; Deanne, Rich; Horspool, Keith R; Schwabe, Robert

    2017-01-01

    Enabling the paradigm of quality by design requires the ability to quantitatively correlate material properties and process variables to measurable product performance attributes. Conventional, quality-by-test methods for determining tablet breaking force and disintegration time usually involve destructive tests, which consume a significant amount of time and labor and provide limited information. Recent advances in material characterization, statistical analysis, and machine learning have provided multiple tools that have the potential to develop nondestructive, fast, and accurate approaches in drug product development. In this work, a methodology to predict the breaking force and disintegration time of tablet formulations using nondestructive ultrasonics and machine learning tools was developed. The input variables to the model include intrinsic properties of the formulation and extrinsic process variables influencing the tablet during manufacturing. The model has been applied to predict breaking force and disintegration time using small quantities of active pharmaceutical ingredient and prototype formulation designs. The novel approach presented is a step forward toward rational design of a robust drug product based on insight into the performance of common materials during formulation and process development. It may also help expedite the drug product development timeline and reduce active pharmaceutical ingredient usage while improving efficiency of the overall process. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  16. L-asparaginase production by mangrove derived Bacillus cereus MAB5: Optimization by response surface methodology.

    PubMed

    Thenmozhi, C; Sankar, R; Karuppiah, V; Sampathkumar, P

    2011-06-01

    To isolate marine bacteria and statistically optimize them for maximum asparaginase production. In the present study, statistically based experimental designs were applied to maximize the production of L-asparaginase from a bacterial strain of Bacillus cereus (B. cereus) MAB5 (HQ675025), isolated from mangrove rhizosphere sediment and identified by 16S rDNA sequencing. A Plackett-Burman design was used to identify the interactive effect of eight variables, viz. yeast extract, soyabean meal, glucose, magnesium sulphate, KH(2)PO(4), wood chips, asparagine and sodium chloride. All the variables were treated as numerical factors and investigated at two widely spaced intervals designated as -1 (low level) and +1 (high level). The effect of each individual parameter on L-asparaginase production was calculated. Soyabean meal, asparagine, wood chips and sodium chloride were found to be significant among the eight variables. The maximum amount of L-asparaginase (51.54 IU/mL) was produced from the optimized medium containing soyabean meal (6.2828 g/L), asparagine (5.5 g/L), wood chips (1.3838 g/L) and NaCl (4.5354 g/L). The study revealed that it is feasible to produce the maximum amount of L-asparaginase from B. cereus MAB5 for the treatment of various infections and diseases. Copyright © 2011 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
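
    The screening step described above can be sketched numerically. The following is a minimal illustration (not the authors' code) of a standard 12-run Plackett-Burman two-level design and main-effect estimation; the response values are simulated stand-ins, not the paper's data.

```python
import numpy as np

# Standard 12-run Plackett-Burman generator (Plackett & Burman, 1946):
# rows 1-11 are cyclic shifts of this row; row 12 is all -1.
generator = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

rows = [np.roll(generator, i) for i in range(11)]
rows.append(-np.ones(11, dtype=int))
X = np.array(rows)          # 12 runs x 11 two-level factors

# Columns are balanced and mutually orthogonal (X.T @ X = 12*I),
# so each main effect is estimated independently of the others.
def main_effects(X, y):
    # effect of factor j = mean(y at +1) - mean(y at -1) = (X[:, j] @ y) / 6
    return X.T @ y / 6.0

# Hypothetical enzyme-activity responses for the 12 runs (illustrative only):
# factors 1 and 6 are "active", the rest contribute only noise.
rng = np.random.default_rng(0)
y = 30 + 8 * X[:, 1] + 5 * X[:, 6] + rng.normal(0, 1, 12)
effects = main_effects(X, y)   # a large |effect| flags a significant factor
```

    In a real screening study, effects would be compared against an error estimate (e.g. via a normal probability plot) before declaring factors significant.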

  17. Mastoid vibration affects dynamic postural control during gait in healthy older adults

    NASA Astrophysics Data System (ADS)

    Chien, Jung Hung; Mukherjee, Mukul; Kent, Jenny; Stergiou, Nicholas

    2017-01-01

    Vestibular disorders are difficult to diagnose early due to the lack of a systematic assessment. Our previous work developed a reliable experimental design and showed promising results indicating that vestibular sensory input during walking can be manipulated through mastoid vibration (MV), with changes occurring in the direction of motion. In the present paper, we extended this work to older adults and investigated how manipulating sensory input through MV could affect dynamic postural control during walking. Three levels of MV (none, unilateral, and bilateral), applied via vibrating elements placed on the mastoid processes, were combined with the Locomotor Sensory Organization Test (LSOT) paradigm to challenge the visual and somatosensory systems. We hypothesized that the MV would affect sway variability during walking in older adults. Our results revealed that MV not only significantly increased the amount of sway variability but also decreased the temporal structure of sway variability, although only in the anterior-posterior direction. Importantly, the bilateral MV stimulation generally produced larger effects than the unilateral. This is an important finding that confirmed our experimental design, and the results produced could guide more reliable screening of vestibular system deterioration.

  18. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    PubMed

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    In the first part of this study, Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model predictions for nasal spray product performance design of experiment (DOE) models, under the initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that the design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
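
    The error-propagation idea can be sketched as follows: perturb both the input settings and the measured responses, refit the DOE model many times, and read coefficient uncertainty off the spread of the fitted coefficients. This is an illustrative sketch with a hypothetical two-factor linear model, not the paper's nasal spray model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-factor, 9-run design in coded units, with a "true" model.
x1, x2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
X = np.column_stack([np.ones(9), x1.ravel(), x2.ravel()])
beta_true = np.array([50.0, 4.0, -2.5])

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Propagate BOTH input-variable noise and response-measurement noise
# through the fit, then summarize the coefficient spread.
n_sim, sd_x, sd_y = 2000, 0.02, 0.5
coefs = np.empty((n_sim, 3))
for i in range(n_sim):
    X_noisy = X.copy()
    X_noisy[:, 1:] += rng.normal(0, sd_x, (9, 2))     # setpoint uncertainty
    y = X_noisy @ beta_true + rng.normal(0, sd_y, 9)  # measurement error
    coefs[i] = fit(X, y)   # the analyst fits against the *nominal* design

coef_sd = coefs.std(axis=0)   # Monte Carlo estimate of coefficient uncertainty
```

    Comparing `coef_sd` against the standard errors reported by an ordinary regression fit is the kind of contrast the abstract draws.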

  19. Differential Flatness and Cooperative Tracking in the Lorenz System

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.

    2002-01-01

    In this paper the control of the Lorenz system for both stabilization and tracking problems is studied via feedback linearization and differential flatness. By using the Rayleigh number, the only physically tunable variable, as the control, a barrier in the controllability of the system is incidentally imposed. This is reflected in the appearance of a singularity in the state transformation. Composite controllers that overcome this difficulty are designed and evaluated. The transition through the manifold defined by such a singularity is achieved by inducing a chaotic response within a boundary layer that contains it. Outside this region, a conventional nonlinear feedback control is applied. In this fashion, the authority of the control is enlarged to the whole state space and the need for high control efforts is mitigated. In addition, the differential parametrization of the problem is used to track nonlinear functions of one state variable (single tracking) as well as several state variables (cooperative tracking). Control tasks that lead to integrable and non-integrable differential equations for the nominal flat output in steady state are considered. In particular, a novel numerical strategy to deal with the non-integrable case is proposed. Numerical results validate the control design very well.
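
    For context, the plant being controlled is the classic Lorenz system with the Rayleigh number r as the input. The sketch below (not the paper's flatness-based controller) integrates the open-loop dynamics with RK4 at a constant r; for r < 1 the origin is globally stable, which is the regime a stabilizing controller would drive the system toward.

```python
import numpy as np

def lorenz(state, r, sigma=10.0, b=8.0 / 3.0):
    # Classic Lorenz equations; the Rayleigh number r is the control input.
    x, y, z = state
    return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])

def rk4_step(f, state, r, dt):
    k1 = f(state, r)
    k2 = f(state + 0.5 * dt * k1, r)
    k3 = f(state + 0.5 * dt * k2, r)
    k4 = f(state + dt * k3, r)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# With constant r < 1 the origin is globally stable, so the open-loop
# trajectory settles instead of behaving chaotically (chaos needs r > ~24.74).
state = np.array([1.0, 1.0, 1.0])
for _ in range(20000):
    state = rk4_step(lorenz, state, r=0.5, dt=0.001)
```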

  20. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
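
    The resampling scheme can be sketched directly from the description: seeds are resampled with replacement, then each node's recruits are resampled with replacement, recursively down the tree. The toy recruitment trees below are hypothetical, not the Ukraine data.

```python
import random

# Recruitment trees: node -> list of recruits (hypothetical toy data).
trees = {
    "seed1": {"seed1": ["a", "b"], "a": ["c"], "b": [], "c": []},
    "seed2": {"seed2": ["d"], "d": ["e", "f"], "e": [], "f": []},
}

def resample_subtree(tree, node, rng):
    """Recursively resample a node's recruits with replacement (tree bootstrap)."""
    kids = tree.get(node, [])
    sampled = [rng.choice(kids) for _ in kids] if kids else []
    nodes = [node]
    for k in sampled:
        nodes.extend(resample_subtree(tree, k, rng))
    return nodes

def tree_bootstrap(trees, rng):
    # First resample seeds with replacement, then resample within each tree.
    seeds = list(trees)
    resampled = [rng.choice(seeds) for _ in seeds]
    sample = []
    for s in resampled:
        sample.extend(resample_subtree(trees[s], s, rng))
    return sample

rng = random.Random(1)
replicates = [tree_bootstrap(trees, rng) for _ in range(500)]
# The spread, across replicates, of any statistic computed on a replicate
# estimates the RDS sampling variability; note it uses only tree structure.
```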

  1. Microwave blackbodies for spaceborne receivers

    NASA Technical Reports Server (NTRS)

    Stacey, J. M.

    1985-01-01

    The properties of microwave blackbody targets are explained as they apply to the calibration of spaceborne receivers. Also described are several practicable blackbody targets used to test and calibrate receivers in the laboratory and in the thermal vacuum chamber. Problems with the precision and accuracy of blackbody targets, and blackbody target design concepts that overcome some of the accuracy limitations present in existing target designs, are presented. The principle of the Brewster angle blackbody target is described, where the blackbody is applied as a fixed-temperature test target in the laboratory and as a variable-temperature target in the thermal vacuum chamber. The reflectivity of a Brewster angle target is measured in the laboratory, and from this measurement the emissivity of the target is calculated. Radiatively cooled thermal suspensions are discussed as coolants of blackbody targets and waveguide terminations that function as calibration devices in spaceborne receivers. Examples are given for the design of radiatively cooled thermal suspensions. Corrugated-horn antennas used to observe the cosmic background and to provide a cold-calibration source for spaceborne receivers are described.

  2. Reliability considerations for the total strain range version of strainrange partitioning

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y. T.

    1984-01-01

    A proposed total strainrange version of strainrange partitioning (SRP), intended to enhance the manner in which SRP is applied to life prediction, is considered with emphasis on how advanced reliability technology can be applied to perform risk analysis and to derive safety check expressions. Uncertainties existing in the design factors associated with life prediction of a component which experiences the combined effects of creep and fatigue can be identified. Examples illustrate how reliability analyses of such a component can be performed when all design factors in the SRP model are random variables reflecting these uncertainties. The Rackwitz-Fiessler and Wu algorithms are used, and estimates of the safety index and the probability of failure are demonstrated for a SRP problem. Methods of analysis of creep-fatigue data, with emphasis on procedures for producing synoptic statistics, are presented. The importance of the contribution of the uncertainties associated with small sample sizes (fatigue data) to risk estimates is discussed. The procedure for deriving a safety check expression for possible use in a design criteria document is presented.
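
    The relationship between the safety index and the probability of failure mentioned above can be illustrated with a minimal first-order example. The limit state and the normal design-factor parameters below are hypothetical stand-ins, not the SRP model's actual factors; for normal variables the first-order result is exact, so a Monte Carlo check should agree.

```python
import math
import random

# Minimal reliability sketch for a limit state g = R - S
# (capacity minus demand), with hypothetical normal design factors.
mu_R, sd_R = 120.0, 10.0    # e.g., life capacity (illustrative values)
mu_S, sd_S = 80.0, 12.0     # e.g., applied damage/demand (illustrative)

# Safety index beta and probability of failure Pf = Phi(-beta);
# exact for normally distributed R and S.
beta = (mu_R - mu_S) / math.hypot(sd_R, sd_S)
Pf = 0.5 * math.erfc(beta / math.sqrt(2))

# Monte Carlo cross-check of the same limit state.
rng = random.Random(7)
n = 200_000
fails = sum(rng.gauss(mu_R, sd_R) - rng.gauss(mu_S, sd_S) < 0 for _ in range(n))
Pf_mc = fails / n
```

    For non-normal design factors, the Rackwitz-Fiessler algorithm cited in the abstract replaces each variable with an equivalent normal at the design point before computing beta.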

  3. Multiobjective robust design of the double wishbone suspension system based on particle swarm optimization.

    PubMed

    Cheng, Xianfu; Lin, Yuqun

    2014-01-01

    The performance of the suspension system is one of the most important factors in vehicle design. For the double wishbone suspension system, conventional deterministic optimization does not consider any deviations of the design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established in the software ADAMS. Sensitivity analysis is utilized to determine the main design variables. Then, the simulation experiment is arranged and a Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization (PSO) method is applied, and a tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
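
    A plain PSO of the kind referenced above can be sketched in a few lines. The objective below is a hypothetical mean-plus-deviation tradeoff standing in for the Kriging-fitted quality characteristics; it is not the suspension model.

```python
import numpy as np

rng = np.random.default_rng(3)

def robust_objective(x):
    # Hypothetical stand-in for the robust criterion: mean performance
    # plus a weighted penalty on its deviation (the mean/deviation tradeoff).
    mean = np.sum((x - 1.0) ** 2, axis=-1)
    deviation = 0.5 * np.sum(x ** 2, axis=-1)
    return mean + 0.3 * deviation

# Plain ("simple") PSO: inertia + cognitive + social velocity update.
n, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), robust_objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = robust_objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]
# gbest approaches the analytic robust optimum x_i = 1/1.15 per coordinate.
```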

  4. Spatial analysis of agri-environmental policy uptake and expenditure in Scotland.

    PubMed

    Yang, Anastasia L; Rounsevell, Mark D A; Wilson, Ronald M; Haggett, Claire

    2014-01-15

    Agri-environment is one of the most widely supported rural development policy measures in Scotland in terms of number of participants and expenditure. It comprises 69 management options and sub-options that are delivered primarily through the competitive 'Rural Priorities' scheme. Understanding the spatial determinants of uptake and expenditure would assist policy-makers in guiding future policy targeting efforts for the rural environment. This study is unique in examining the spatial dependency and determinants of Scotland's agri-environmental policy (AEP) measures and categorised option uptake and payments at the parish level. Spatial econometrics is applied to test the influence of 40 explanatory variables covering farming characteristics, land capability, designated sites, accessibility and population. Results identified spatial dependency for each of the dependent variables, which supported the use of spatially-explicit models. The goodness of fit of the spatial models was better than that of the aspatial regression models. There was also notable improvement in the models for participation compared with the models for expenditure. Furthermore, a range of expected explanatory variables were found to be significant and varied according to the dependent variable used. The majority of models for both payment and uptake showed a significant positive relationship with SSSIs (Sites of Special Scientific Interest), which are designated sites prioritised in Scottish policy. These results indicate that environmental targeting efforts by the government for AEP uptake in designated sites can be effective. However, habitats outside of SSSIs, termed here the 'wider countryside', may not be sufficiently competitive to receive funding in the current policy system. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Structural reliability assessment of the Oman India Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Sharif, A.M.; Preston, R.

    1996-12-31

    Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use, since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.

  6. Application of shape memory alloy (SMA) spars for aircraft maneuver enhancement

    NASA Astrophysics Data System (ADS)

    Nam, Changho; Chattopadhyay, Aditi; Kim, Youdan

    2002-07-01

    Modern combat aircraft are required to achieve aggressive maneuverability and high agility performance, while maintaining handling qualities over a wide range of flight conditions. Recently, a new adaptive-structural concept called the variable stiffness spar has been proposed in order to increase the maneuverability of flexible aircraft. The variable stiffness spar controls wing torsional stiffness to enhance roll performance throughout the complete flight envelope. However, it requires a mechanical actuation system to rotate the spar during flight, which may cause an additional weight increase. In this paper, we apply Shape Memory Alloy (SMA) spars for aeroelastic performance enhancement. In order to explore the potential of the SMA spar design, the roll performance of composite smart wings is investigated using ASTROS. A parametric study is conducted to investigate the SMA spar effects by changing the spar locations and geometry. The results show that with activation of the SMA spar, the roll effectiveness can be increased by up to 61% compared with the baseline model.

  7. Finite Element Analysis in the Estimation of Air-Gap Torque and Surface Temperature of Induction Machine

    NASA Astrophysics Data System (ADS)

    Ravi Kumar, J.; Banakara, Basavaraja

    2017-08-01

    This paper presents the electromagnetic and thermal behavior of an Induction Motor (IM) through modeling and analysis applying multiphysics coupled Finite Element Analysis (FEA). Prediction of the magnetic flux, electromagnetic torque, stator and rotor losses, and temperature distribution inside an operating electric motor is among the most important issues during its design. Prediction and estimation of these parameters allows design engineers to assess the capability of the machine for the proposed load, its temperature rating, and the application for which it is being designed, ensuring normal motor operation at rated conditions. In this work, multiphysics coupled electromagnetic-thermal modeling and analysis of an induction motor at rated and high frequency has been carried out applying Arkkio's torque method. COMSOL Multiphysics software is used for modeling and finite element analysis of the IM. Transient electromagnetic torque, magnetic field distribution, and speed-torque characteristics of the IM were plotted and studied at different frequencies. This proposed work helps in the design and prediction of accurate performance of induction motors specific to various industrial drive applications. The results obtained are also validated with experimental analysis. The main purpose of this model is to use it as an integral part of a design aiming at system optimization of a Variable Speed Drive (VSD) and its components using coupled simulations.

  8. Utilization of a modified special-cubic design and an electronic tongue for bitterness masking formulation optimization.

    PubMed

    Li, Lianli; Naini, Venkatesh; Ahmed, Salah U

    2007-10-01

    A unique modification of the simplex design was applied to an electronic tongue (E-Tongue) analysis in bitterness masking formulation optimization. Three formulation variables were evaluated in the simplex design, i.e. the concentrations of two taste masking polymers, Amberlite and Carbopol, and the pH of the granulating fluid. The response of the design was a bitterness distance measured using an E-Tongue applying principal component analysis, which represents the taste masking efficiency of the formulation. The smaller the distance, the better the bitterness masking effect. Contour plots and polynomial equations of the bitterness distance response were generated as a function of formulation composition and pH. It was found that interactions between polymer and pH reduced the bitterness of the formulation, attributed to the pH-dependent ionization and complexation properties of the ionic polymers, thus keeping the drug out of solution and unavailable to bitterness perception. At pH 4.9 and an Amberlite/Carbopol ratio of 1.4:1 (w/w), the optimal taste masking formulation was achieved, in agreement with human gustatory sensation study results. Therefore, adopting a modified simplex experimental design on a response measured using an E-Tongue provided an efficient approach to taste masking formulation optimization using ionic binding polymers. (c) 2007 Wiley-Liss, Inc.

  9. Process for applying control variables having fractal structures

    DOEpatents

    Bullock, IV, Jonathan S.; Lawson, Roger L.

    1996-01-01

    A process and apparatus for the application of a control variable having a fractal structure to a body or process. The process of the present invention comprises the steps of generating a control variable having a fractal structure and applying the control variable to a body or process reacting in accordance with the control variable. The process is applicable to electroforming where first, second and successive pulsed-currents are applied to cause the deposition of material onto a substrate, such that the first pulsed-current, the second pulsed-current, and successive pulsed currents form a fractal pulsed-current waveform.
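
    One simple way to realize a pulsed waveform with fractal structure is Cantor-style recursion, where every 'on' interval is subdivided into on/off/on at the next finer scale. This sketch is illustrative only; the patent does not specify this particular construction.

```python
def fractal_pulse(level, on=1.0, off=0.0):
    """Cantor-like pulsed waveform: each 'on' segment is recursively
    replaced by on/off/on, giving self-similar structure at every scale."""
    if level == 0:
        return [on]
    sub = fractal_pulse(level - 1, on, off)
    gap = [off] * len(sub)
    return sub + gap + sub

# Three recursion levels -> 27 time slots whose on/off pattern repeats
# the same structure at scales 1, 3, and 9 (a fractal control variable).
waveform = fractal_pulse(3)
```

    Scaling each slot's amplitude to a current value would yield a first, second, and successive pulsed-current train forming a fractal waveform in the sense described above.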

  10. Process for applying control variables having fractal structures

    DOEpatents

    Bullock, J.S. IV; Lawson, R.L.

    1996-01-23

    A process and apparatus are disclosed for the application of a control variable having a fractal structure to a body or process. The process of the present invention comprises the steps of generating a control variable having a fractal structure and applying the control variable to a body or process reacting in accordance with the control variable. The process is applicable to electroforming where first, second and successive pulsed-currents are applied to cause the deposition of material onto a substrate, such that the first pulsed-current, the second pulsed-current, and successive pulsed currents form a fractal pulsed-current waveform. 3 figs.

  11. Melt Flow Control in the Directional Solidification of Binary Alloys

    NASA Technical Reports Server (NTRS)

    Zabaras, Nicholas

    2003-01-01

    Our main project objectives are to develop computational techniques based on inverse problem theory that can be used to design directional solidification processes that lead to desired temperature gradient and growth conditions at the freezing front at various levels of gravity. It is known that control of these conditions plays a significant role in the selection of the form and scale of the obtained solidification microstructures. Emphasis is given to the control of the effects of various melt flow mechanisms on the conditions local to the solidification front. The thermal boundary conditions (furnace design) as well as the magnitude and direction of an externally applied magnetic field are the main design variables. We will highlight computational design models for sharp-front solidification models and briefly discuss work in progress toward the development of design techniques for multi-phase volume-averaging based solidification models.

  12. Optical design and simulation of a new coherence beamline at NSLS-II

    NASA Astrophysics Data System (ADS)

    Williams, Garth J.; Chubar, Oleg; Berman, Lonny; Chu, Yong S.; Robinson, Ian K.

    2017-08-01

    We will discuss the optical design for a proposed beamline at NSLS-II, a late-third generation storage ring source, designed to exploit the spatial coherence of the X-rays to extract high-resolution spatial information from ordered and disordered materials through Coherent Diffractive Imaging (CDI), executed in the Bragg- and forward-scattering geometries. This technique offers a powerful tool to image sub-10 nm spatial features and, within ordered materials, sub-Angstrom mapping of deformation fields. Driven by the opportunity to apply CDI to a wide range of samples, with sizes ranging from sub-micron to tens-of-microns, two optical designs have been proposed and simulated under a wide variety of optical configurations using the software package Synchrotron Radiation Workshop. The designs, their goals, and the results of the simulation (including NSLS-II ring and undulator source parameters) of the beamline performance as a function of its variable optical components are described.

  13. Relationship between body composition and postural control in prepubertal overweight/obese children: A cross-sectional study.

    PubMed

    Villarrasa-Sapiña, Israel; Álvarez-Pitti, Julio; Cabeza-Ruiz, Ruth; Redón, Pau; Lurbe, Empar; García-Massó, Xavier

    2018-02-01

    Excess body weight during childhood causes reduced motor functionality and problems in postural control, a negative influence that has been reported in the literature. Nevertheless, no information regarding the effect of body composition on the postural control of overweight and obese children is available. The objective of this study was therefore to establish these relationships. A cross-sectional design was used to establish relationships between body composition and postural control variables obtained in bipedal eyes-open and eyes-closed conditions in twenty-two children. Centre of pressure signals were analysed in the temporal and frequency domains. Pearson correlations were applied to establish relationships between variables. Principal component analysis was applied to the body composition variables to avoid potential multicollinearity in the regression models. These principal components were used to perform a multiple linear regression analysis, from which regression models were obtained to predict postural control. Height and leg mass were the body composition variables that showed the highest correlation with postural control. Multiple regression models were also obtained, and several of these models showed a higher correlation coefficient in predicting postural control than the simple correlations. These models revealed that leg and trunk mass were good predictors of postural control. More equations were found in the eyes-open than in the eyes-closed condition. Body weight and height are negatively correlated with postural control. However, leg and trunk mass are better postural control predictors than arm or body mass. Finally, body composition variables are more useful in predicting postural control when the eyes are open. Copyright © 2017 Elsevier Ltd. All rights reserved.
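
    The analysis pipeline described (PCA on collinear body-composition variables, then regression on component scores) can be sketched as follows, using simulated stand-in data rather than the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical body-composition predictors with strong collinearity
# (leg mass, trunk mass, height all scale with overall body size), and a
# postural-control outcome such as a centre-of-pressure sway measure.
n = 22
size = rng.normal(0, 1, n)
body = np.column_stack([
    size + rng.normal(0, 0.2, n),   # leg mass (standardised, simulated)
    size + rng.normal(0, 0.2, n),   # trunk mass
    size + rng.normal(0, 0.3, n),   # height
])
sway = 2.0 * size + rng.normal(0, 0.5, n)

# PCA on the predictors: orthogonal components remove multicollinearity.
Xc = body - body.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                 # principal-component scores

# Regress the outcome on the leading components instead of raw variables.
k = 2
A = np.column_stack([np.ones(n), scores[:, :k]])
coef, *_ = np.linalg.lstsq(A, sway, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((sway - pred) ** 2) / np.sum((sway - sway.mean()) ** 2)
```

    Because the component scores are mutually orthogonal, their regression coefficients are estimated independently, which is the multicollinearity protection the abstract refers to.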

  14. Translational applications of evaluating physiologic variability in human endotoxemia

    PubMed Central

    Scheff, Jeremy D.; Mavroudis, Panteleimon D.; Calvano, Steve E.; Androulakis, Ioannis P.

    2012-01-01

    Dysregulation of the inflammatory response is a critical component of many clinically challenging disorders such as sepsis. Inflammation is a biological process designed to lead to healing and recovery, ultimately restoring homeostasis; however, the failure to fully achieve those beneficial results can leave a patient in a dangerous persistent inflammatory state. One of the primary challenges in developing novel therapies in this area is that inflammation is comprised of a complex network of interacting pathways. Here, we discuss our approaches towards addressing this problem through computational systems biology, with a particular focus on how the presence of biological rhythms and the disruption of these rhythms in inflammation may be applied in a translational context. By leveraging the information content embedded in physiologic variability, ranging in scale from oscillations in autonomic activity driving short-term heart rate variability (HRV) to circadian rhythms in immunomodulatory hormones, there is significant potential to gain insight into the underlying physiology. PMID:23203205

  15. Variable context Markov chains for HIV protease cleavage site prediction.

    PubMed

    Oğul, Hasan

    2009-06-01

    Deciphering the knowledge of HIV protease specificity and developing computational tools for detecting its cleavage sites in protein polypeptide chains are very desirable for designing efficient and specific chemical inhibitors to prevent acquired immunodeficiency syndrome. In this study, we developed a generative model based on a generalization of variable order Markov chains (VOMC) for peptide sequences and adapted the model for prediction of their cleavability by certain proteases. The new method, called variable context Markov chains (VCMC), attempts to identify context equivalence based on the evolutionary similarities between individual amino acids. It was applied to the HIV-1 protease cleavage site prediction problem and shown to outperform existing methods in terms of prediction accuracy on a common dataset. In general, the method is a promising tool for prediction of cleavage sites of all proteases, and its use is encouraged for any kind of peptide classification problem as well.
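
    The variable-order backbone of the method can be sketched as follows. This plain variable-order Markov model backs off to the longest previously seen context; the VCMC's additional merging of contexts by amino-acid similarity is not implemented here, and the peptides are toy examples.

```python
from collections import defaultdict

def train_vomc(sequences, max_order=3):
    """Count next-symbol occurrences for every context up to max_order.
    (The paper's VCMC additionally merges contexts by evolutionary
    amino-acid similarity; this sketch is a plain variable-order model.)"""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i, sym in enumerate(seq):
            for k in range(min(i, max_order) + 1):
                counts[seq[i - k:i]][sym] += 1
    return counts

def prob(counts, context, sym, max_order=3):
    # Back off to the longest suffix of `context` seen in training.
    for k in range(min(len(context), max_order), -1, -1):
        ctx = context[len(context) - k:]
        if ctx in counts:
            total = sum(counts[ctx].values())
            return counts[ctx][sym] / total
    return 0.0

model = train_vomc(["SQNYPIVQ", "SQNYPIVR", "ARVLAEAM"])  # toy peptides
p = prob(model, "SQNYPIV", "Q")   # context "PIV" was followed by Q once, R once
```

    Classifying a candidate site then amounts to comparing sequence likelihoods under cleaved versus non-cleaved models.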

  16. Overtone Mobility Spectrometry (Part 2): Theoretical Considerations of Resolving Power

    PubMed Central

    Valentine, Stephen J.; Stokes, Sarah T.; Kurulugama, Ruwan T.; Nachtigall, Fabiane M.; Clemmer, David E.

    2009-01-01

    The transport of ions through multiple drift regions is modeled in order to develop an equation that is useful for understanding the resolving power of the overtone mobility spectrometry (OMS) technique. It is found that resolving power is influenced by a number of experimental variables, including those that define ion mobility spectrometry (IMS) resolving power: drift field (E), drift region length (L), and buffer gas temperature (T). However, unlike IMS, the resolving power of OMS is also influenced by the number of drift regions (n), the harmonic frequency value (m), and the phase number (ϕ) of the applied drift field. The OMS resolving power dependence upon the new OMS variables (n, m, and ϕ) scales differently than the square-root dependence of the E, L, and T variables in IMS. The results provide insight into optimal instrument design and operation. PMID:19230705
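
    For reference, the conventional diffusion-limited IMS resolving power that the OMS expression generalizes is R = sqrt(LEze/(16 kB T ln 2)) (Revercomb and Mason). The snippet below evaluates that baseline only; the paper's OMS equation involving n, m, and ϕ is not reproduced here.

```python
import math

# Diffusion-limited IMS resolving power:
#   R = t / Δt = sqrt(L * E * z * e / (16 * kB * T * ln 2))
e = 1.602176634e-19      # elementary charge, C
kB = 1.380649e-23        # Boltzmann constant, J/K

def ims_resolving_power(L, E, T, z=1):
    """L in m, E in V/m, T in K, charge state z."""
    return math.sqrt(L * E * z * e / (16 * kB * T * math.log(2)))

# ~1 m drift tube at 100 V/cm and room temperature: R on the order of 10^2,
# showing the square-root dependence on E, L (and inverse dependence on T).
R = ims_resolving_power(L=1.0, E=1.0e4, T=300.0)
```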

  17. A study protocol to evaluate the relationship between outdoor air pollution and pregnancy outcomes

    PubMed Central

    2010-01-01

    Background The present study protocol is designed to assess the relationship between outdoor air pollution and low birth weight and preterm birth outcomes by performing a semi-ecological analysis. Semi-ecological design studies are widely used to assess the effects of air pollution in humans. In this type of analysis, health outcomes and covariates are measured in individuals, and exposure assignments are usually based on air quality monitoring stations. Estimating individual exposures is therefore one of the major challenges when investigating these relationships with a semi-ecologic design. Methods/Design A semi-ecologic study consisting of a retrospective cohort study with ecologic assignment of exposure is applied. Health outcomes and covariates are collected at a Primary Health Care Center. Data from the pregnancy registry, clinical records and a specific questionnaire administered orally to the mothers of children born in the period 2007-2010 in the Portuguese Alentejo Litoral region are collected by the research team. Outdoor air pollution data are collected with a lichen diversity biomonitoring program, and individual pregnancy exposures are assessed with spatial geostatistical simulation, which provides the basis for uncertainty analysis of individual exposures. Awareness of outdoor air pollution uncertainty will improve the validity of individual exposure assignments for further statistical analysis with multivariate regression models. Discussion Exposure misclassification is an issue of concern in semi-ecological designs. In this study, personal exposures are assigned to each pregnant woman using geocoded address data. A stochastic simulation method is applied to the lichen diversity index values measured at biomonitoring survey locations, in order to assess the spatial uncertainty of the lichen diversity index at each geocoded address. These methods assume a model for the spatial autocorrelation of exposure and provide a distribution of exposures at each study location. We believe that the variability of simulated exposure values at geocoded addresses will improve knowledge of the variability of exposures, thereby improving the validity of the individual exposures input into subsequent statistical analysis. PMID:20950449

  18. Predictive models of lyophilization process for development, scale-up/tech transfer and manufacturing.

    PubMed

    Zhu, Tong; Moussa, Ehab M; Witting, Madeleine; Zhou, Deliang; Sinha, Kushal; Hirth, Mario; Gastens, Martin; Shang, Sherwin; Nere, Nandkishor; Somashekar, Shubha Chetan; Alexeenko, Alina; Jameel, Feroz

    2018-07-01

    Scale-up and technology transfer of lyophilization processes remains a challenge that requires thorough characterization of the laboratory and larger scale lyophilizers. In this study, computational fluid dynamics (CFD) was employed to develop computer-based models of both laboratory and manufacturing scale lyophilizers in order to understand the differences in equipment performance arising from distinct designs. CFD coupled with steady state heat and mass transfer modeling of the vial were then utilized to study and predict independent variables such as shelf temperature and chamber pressure, and response variables such as product resistance, product temperature and primary drying time for a given formulation. The models were then verified experimentally for the different lyophilizers. Additionally, the models were applied to create and evaluate a design space for a lyophilized product in order to provide justification for the flexibility to operate within a certain range of process parameters without the need for validation. Published by Elsevier B.V.

  19. Process, mechanism, and explanation related to externalizing behavior in developmental psychopathology.

    PubMed

    Hinshaw, Stephen P

    2002-10-01

    Advances in conceptualization and statistical modeling, on the one hand, and enhanced appreciation of transactional pathways, gene-environment correlations and interactions, and moderator and mediator variables, on the other, have heightened awareness of the need to consider factors and processes that explain the development and maintenance of psychopathology. With a focus on attentional problems, impulsivity, and disruptive behavior patterns, I address the kinds of conceptual approaches most likely to lead to advances regarding explanatory models in the field. Findings from my own research program on processes and mechanisms reveal both promise and limitations. Progress will emanate from use of genetically informative designs, blends of variable and person-centered research, explicit testing of developmental processes, systematic approaches to moderation and mediation, exploitation of "natural experiments," and the conduct of prevention and intervention trials designed to accentuate explanation as well as outcome. In all, breakthroughs will occur only with advances in translational research-linking basic and applied science-and with the further development of transactional, systemic approaches to explanation.

  20. Parametric Cost Models for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney

    2010-01-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data, applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; that technology development as a function of time reduces cost at the rate of 50% per 17 years; that it costs less per square meter of collecting aperture to build a large telescope than a small one; and that increasing mass reduces cost.
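A single-variable CER of the general form described above can be sketched as follows. The 50%-per-17-years technology factor is taken from the abstract, but the coefficient and the diameter exponent are placeholders, not fitted values from the paper.

```python
# Illustrative single-variable cost estimating relationship (CER):
# cost scales as a power law in aperture diameter D, and technology
# maturation halves cost every 17 years.  A and b are assumed values.
def telescope_cost(D_m, years_since_ref, A=100.0, b=1.7):
    return A * D_m**b * 0.5 ** (years_since_ref / 17.0)

c_small = telescope_cost(1.0, 0)
c_large = telescope_cost(4.0, 0)
# with b < 2, cost per m^2 of collecting aperture falls with size
print(c_small / (3.14159 * 0.5**2), c_large / (3.14159 * 2.0**2))
```

Any exponent b below 2 reproduces the qualitative finding that larger apertures are cheaper per square meter of collecting area.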

  1. The effect of workstation and task variables on forces applied during simulated meat cutting.

    PubMed

    McGorry, Raymond W; Dempsey, Patrick G; O'Brien, Niall V

    2004-12-01

    The purpose of the study was to investigate factors related to force and postural exposure during a simulated meat cutting task. The hypothesis was that workstation, tool and task variables would affect the dependent kinetic variables of gripping force, cutting moment and the dependent kinematic variables of elbow elevation and wrist angular displacement in the flexion/extension and radial/ulnar deviation planes. To evaluate this hypothesis a 3 x 3 x 2 x 2 x 2 (surface orientation by surface height by blade angle by cut complexity by work pace) within-subject factorial design was conducted with 12 participants. The results indicated that the variables can act and interact to modify the kinematics and kinetics of a cutting task. Participants used greater grip force and cutting moment when working at a pace based on productivity. The interactions of the work surface height and orientation indicated that the use of an adjustable workstation could minimize wrist deviation from neutral and improve shoulder posture during cutting operations. Angling the knife blade also interacted with workstation variables to improve wrist and upper extremity posture, but this benefit must be weighed against the potential for small increases in force exposure.
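The 3 x 3 x 2 x 2 x 2 within-subject factorial described above can be enumerated directly; the factor level labels below are illustrative, not taken from the paper.

```python
from itertools import product

# Enumerate every condition of a 3 x 3 x 2 x 2 x 2 factorial design.
surface_orientation = ["flat", "tilted", "vertical"]
surface_height = ["low", "mid", "high"]
blade_angle = ["straight", "angled"]
cut_complexity = ["simple", "complex"]
work_pace = ["self-paced", "productivity-paced"]

runs = list(product(surface_orientation, surface_height,
                    blade_angle, cut_complexity, work_pace))
print(len(runs))  # 3*3*2*2*2 = 72 conditions per participant
```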

  2. Scramjet Fuel Injection Array Optimization Utilizing Mixed Variable Pattern Search With Kriging Surrogates

    DTIC Science & Technology

    2008-03-01

    injector configurations for Scramjet applications." International Journal of Heat and Mass Transfer 49: 3634-3644 (2006). 8. Anderson, C.D... "Experimental Attainment of Optimal Conditions," Journal of the Royal Statistical Society, B(13): 1-38, 1951. 19. Brewer, K.M. Exergy Methods for the Mission...second applies MVPS to a new scramjet design in support of the Hypersonic International Flight Research Experimentation (HIFiRE). The results

  3. Theoretical and experimental investigations of thermal conditions of household biogas plant

    NASA Astrophysics Data System (ADS)

    Zhelykh, Vasil; Furdas, Yura; Dzeryn, Oleksandra

    2016-06-01

    The construction of a domestic continuous bioreactor is proposed. Thermal modes of a household biogas plant were modeled using graph theory. A correction factor was determined, taking into account the influence of the relevant variables on its value. The system of balance equations for the desired thermal conditions in the bioreactor is presented, along with graphical and analytical tools that can be applied in the design of domestic biogas plants for organic waste recycling.

  4. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed in two steps. First, starting design variables are computed using a least squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.

  5. Evaluating sex as a biological variable in preclinical research: the devil in the details.

    PubMed

    Tannenbaum, Cara; Schwarz, Jaclyn M; Clayton, Janine A; de Vries, Geert J; Sullivan, Casey

    2016-01-01

    Translating policy into action is a complex task, with much debate surrounding the process whereby US and Canadian health funding agencies intend to integrate sex and gender science as an integral component of methodological rigor and reporting in health research. Effective January 25, 2016, the US National Institutes of Health implemented a policy that expects scientists to account for the possible role of sex as a biological variable (SABV) in vertebrate animal and human studies. Applicants for NIH-funded research and career development awards will be asked to explain how they plan to factor consideration of SABV into their research design, analysis, and reporting; strong justification will be required for proposing single-sex studies. The Canadian Institutes of Health Research is revising their peer review accreditation process to ensure that peer reviewers are skilled in applying a critical lens to protocols that should be incorporating sex and gender science. The current paper outlines the components that peer reviewers in North America will be asked to assess when considering whether SABV is appropriately integrated into research designs, analyses, and reporting. Consensus argues against narrowly defining rules of engagement in applying SABV, with criteria provided for reviewers as guidance only. Scores will not be given for each criterion; applications will be judged on the overall merit of scientific innovation, rigor, reproducibility, and potential impact.

  6. Optimizing tuning masses for helicopter rotor blade vibration reduction including computed airloads and comparison with test data

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Walsh, Joanne L.; Wilbur, Matthew L.

    1992-01-01

    The development and validation of an optimization procedure to systematically place tuning masses along a rotor blade span to minimize vibratory loads are described. The masses and their corresponding locations are the design variables that are manipulated to reduce the harmonics of hub shear for a four-bladed rotor system without adding a large mass penalty. The procedure incorporates a comprehensive helicopter analysis to calculate the airloads. Predicting changes in airloads due to changes in design variables is an important feature of this research. The procedure was applied to a one-sixth, Mach-scaled rotor blade model to place three masses and then again to place six masses. In both cases the added mass was able to achieve significant reductions in the hub shear. In addition, the procedure was applied to place a single mass of fixed value on a blade model to reduce the hub shear for three flight conditions. The analytical results were compared to experimental data from a wind tunnel test performed in the Langley Transonic Dynamics Tunnel. The correlation of the mass location was good and the trend of the mass location with respect to flight speed was predicted fairly well. However, it was noted that the analysis was not entirely successful at predicting the absolute magnitudes of the fixed system loads.

  7. Experimental design approach applied to the elimination of crystal violet in water by electrocoagulation with Fe or Al electrodes.

    PubMed

    Durango-Usuga, Paula; Guzmán-Duque, Fernando; Mosteo, Rosa; Vazquez, Mario V; Peñuela, Gustavo; Torres-Palma, Ricardo A

    2010-07-15

    An experimental design methodology was applied to evaluate the decolourization of crystal violet (CV) dye by electrocoagulation using iron or aluminium electrodes. The effects and interactions of four parameters, initial pH (3-9), current density (6-28 A m(-2)), substrate concentration (50-200 mg L(-1)) and supporting electrolyte concentration (284-1420 mg L(-1) of Na(2)SO(4)), were optimized and evaluated. Although the results using iron anodes were better than for aluminium, the effects and interactions of the studied parameters were quite similar. With a confidence level of 95%, initial pH and supporting electrolyte concentration showed limited effects on the removal rate of CV, whereas current density, pollutant concentration and the interaction of both were significant. Reduced models taking into account significant variables and interactions between variables showed good correlations with the experimental results. Under optimal conditions, almost complete removal of CV and chemical oxygen demand was obtained after electrocoagulation for 5 and 30 min, using iron and aluminium electrodes, respectively. These results indicate that electrocoagulation with iron anodes is a rapid, economical and effective alternative for the complete removal of CV from water. The evolution of pH and of residual iron or aluminium concentrations in solution is also discussed. 2010 Elsevier B.V. All rights reserved.

  8. The role of chemometrics in single and sequential extraction assays: a review. Part II. Cluster analysis, multiple linear regression, mixture resolution, experimental design and other techniques.

    PubMed

    Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo

    2011-03-04

    Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Microwave assisted extraction of iodine and bromine from edible seaweed for inductively coupled plasma-mass spectrometry determination.

    PubMed

    Romarís-Hortas, Vanessa; Moreda-Piñeiro, Antonio; Bermejo-Barrera, Pilar

    2009-08-15

    The feasibility of microwave energy to assist the solubilisation of edible seaweed samples by tetramethylammonium hydroxide (TMAH) has been investigated to extract iodine and bromine. Inductively coupled plasma-mass spectrometry (ICP-MS) has been used as a multi-element detector. Variables affecting the microwave assisted extraction/solubilisation (temperature, TMAH volume, ramp time and hold time) were first screened by applying a fractional factorial design (2(5-1)+2), resolution V, with 2 centre points. When extracting both halogens, results showed statistical significance (confidence interval of 95%) for TMAH volume and temperature, and also for the second-order interaction between these variables. Therefore, these two variables were finally optimized by a 2(2)+star orthogonal central composite design with 5 centre points and 2 replicates, and optimum values of 200 degrees C and 10 mL for temperature and TMAH volume, respectively, were found. The extraction time (ramp and hold times) was found statistically non-significant, and values of 10 and 5 min were chosen for the ramp time and the hold time, respectively. This allows a fast microwave heating cycle. Repeatability of the overall procedure was found to be 6% for both elements, while limits of detection of 24.6 and 19.9 ng g(-1) were established for iodine and bromine, respectively. Accuracy of the method was assessed by analyzing the NIES-09 (Sargasso, Sargassum fulvellum) certified reference material (CRM), and the iodine and bromine concentrations found were in good agreement with the indicative values for this CRM. Finally, the method was applied to several edible dried and canned seaweed samples.

  10. Heat And Mass Transfer Analysis of a Film Evaporative MEMS Tunable Array

    NASA Astrophysics Data System (ADS)

    O'Neill, William J.

    This thesis details the heat and mass transfer analysis of a MEMS microthruster designed to provide propulsion, attitude control, and thermal control capabilities to a CubeSat. The thruster retains water as a propellant and applies resistive heating to raise the temperature of the liquid-vapor interface, either increasing evaporation or inducing boiling, in order to regulate mass flow. The resulting vapor is then expanded through a diverging nozzle to produce thrust. Because of the low operating pressure and small length scale of this thruster, non-continuum gas flow was modeled using the Direct Simulation Monte Carlo method. Continuum fluid/thermal simulations using COMSOL Multiphysics were applied to model heat and mass transfer in the solid and liquid portions of the thruster. The two methods were coupled through variables at the liquid-vapor interface and solved iteratively by the bisection method. The simulations presented in this thesis confirm the thermal valving concept. It is shown that when power is applied to the thruster there is a nearly linear increase in mass flow and thrust; thus, mass flow can be regulated by regulating the applied power. This concept can also be used as a thermal control device for spacecraft.

  11. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search for the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
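For a small candidate set, the maximal information criterion above can be evaluated by brute force: pick the k well locations that maximize the sum of squared sensitivities. The GA replaces this enumeration when the candidate set is large. The Jacobian here is random, purely for illustration.

```python
import numpy as np
from itertools import combinations

# Toy design criterion: rows of J are sensitivities of observed heads
# to the unknown conductivity parameters at 8 candidate well locations.
rng = np.random.default_rng(1)
J = rng.standard_normal((8, 3))   # 8 candidate wells x 3 parameters (made up)
k = 3                             # number of wells to place

# Choose the subset of k wells maximizing the sum of squared sensitivities.
best = max(combinations(range(8), k),
           key=lambda idx: np.sum(J[list(idx)] ** 2))
print(best, np.sum(J[list(best)] ** 2))
```

The combinatorial blow-up is clear: 8-choose-3 is only 56 designs, but a realistic grid of candidates quickly makes exhaustive search infeasible, which motivates the GA and the POD-reduced model.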

  12. Analytical Model-Based Design Optimization of a Transverse Flux Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz

    This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
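A minimal PSO of the kind used above can be sketched as follows. This is a generic implementation on a toy objective, not the paper's MEC-based torque-density objective; the inertia and acceleration coefficients are common textbook defaults.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimizes f over box bounds)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull (own best) + social pull (swarm best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # respect geometric constraints
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# In the paper's setting, x would be (stator pole length, magnet length,
# rotor thickness) and f the negated MEC torque density.  Toy bowl here:
xbest, fbest = pso(lambda p: np.sum((p - 0.3) ** 2), ([0, 0, 0], [1, 1, 1]))
print(xbest, fbest)
```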

  13. A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    1998-01-01

    This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis, and (3) optimization. The product and process definitions are part of input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depend largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.

  14. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
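The correction described above can be demonstrated on simulated data: the naive slope from an error-prone risk factor is divided by the reliability ratio estimated from a replicate measurement. The data below are simulated, not from the insulin example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
x_true = rng.standard_normal(n)
x1 = x_true + rng.standard_normal(n)   # error-prone main-study measurement
x2 = x_true + rng.standard_normal(n)   # replicate from the reliability study
y = 2.0 * x_true + 0.5 * rng.standard_normal(n)   # true slope is 2.0

# Naive slope is attenuated toward zero (regression dilution bias).
b_naive = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)
# Reliability ratio ~ var(true) / var(observed), from the replicate pair.
reliability = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)
b_corrected = b_naive / reliability
print(b_naive, b_corrected)   # ~1.0 (diluted) vs ~2.0 (corrected)
```

Here the measurement error variance equals the true-exposure variance, so the reliability ratio is about 0.5 and the naive slope is attenuated by roughly half.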

  15. The performance of a centrifugal compressor with high inlet prewhirl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitfield, A.; Abdullah, A.H.

    1998-07-01

    The performance requirements of centrifugal compressors usually include a broad operating range between surge and choke. This becomes increasingly difficult to achieve as increased pressure ratio is demanded. In order to suppress the tendency to surge and extend the operating range at low flow rates, inlet swirl is often considered through the application of inlet guide vanes. To generate high inlet swirl angles efficiently, an inlet volute has been applied as the swirl generator, and a variable geometry design developed in order to provide zero swirl. The variable geometry approach can be applied to increase the swirl progressively or to switch rapidly from zero swirl to maximum swirl. The variable geometry volute and the swirl conditions generated are described. The performance of a small centrifugal compressor is presented for a wide range of inlet swirl angles. In addition to the basic performance characteristics of the compressor, the onsets of flow reversals at impeller inlet are presented, together with the development of pressure pulsations, in the inlet and discharge ducts, through to full surge. The flow rate at which surge occurred was shown, by the shift of the peak pressure condition and by the measurement of the pressure pulsations, to be reduced by over 40%.

  16. Applying ILAMB to data from several generations of the Community Land Model to assess the relative contribution of model improvements and forcing uncertainty to model-data agreement

    NASA Astrophysics Data System (ADS)

    Lawrence, D. M.; Fisher, R.; Koven, C.; Oleson, K. W.; Swenson, S. C.; Hoffman, F. M.; Randerson, J. T.; Collier, N.; Mu, M.

    2017-12-01

    The International Land Model Benchmarking (ILAMB) project is a model-data intercomparison and integration project designed to assess and help improve land models. The current package includes assessment of more than 25 land variables across more than 60 global, regional, and site-level (e.g., FLUXNET) datasets. ILAMB employs a broad range of metrics including RMSE, mean error, spatial distributions, interannual variability, and functional relationships. Here, we apply ILAMB for the purpose of assessment of several generations of the Community Land Model (CLM4, CLM4.5, and CLM5). Encouragingly, CLM5, which is the result of model development over the last several years by more than 50 researchers from 15 different institutions, shows broad improvements across many ILAMB metrics including LAI, GPP, vegetation carbon stocks, and the historical net ecosystem carbon balance among others. We will also show that considerable uncertainty arises from the historical climate forcing data used (GSWP3v1 and CRUNCEPv7). ILAMB score variations due to forcing data can be as large for many variables as that due to model structural differences. Strengths and weaknesses and persistent biases across model generations will also be presented.
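The benchmarking metrics mentioned above (RMSE, mean bias, relative-error scores) can be sketched for a single variable as follows. The scoring form here is a simplification assumed for illustration, not the exact ILAMB definition.

```python
import numpy as np

def score(model, obs):
    """Toy benchmarking metrics: RMSE, mean bias, and a unitless score
    in (0, 1] where 1 means a perfect match (simplified ILAMB-style)."""
    err = model - obs
    rmse = np.sqrt(np.mean(err ** 2))
    bias = np.mean(err)
    crms = np.sqrt(np.mean((err - bias) ** 2))   # centralized RMSE
    s = np.exp(-crms / np.std(obs))              # relative-error score
    return rmse, bias, s

# A model that is a constant offset from the observations: nonzero bias,
# but a perfect score once the bias is removed.
obs = np.sin(np.linspace(0, 2 * np.pi, 100))
rmse, bias, s = score(obs + 0.1, obs)
print(rmse, bias, s)
```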

  17. Final Technical Report: Distributed Controls for High Penetrations of Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Neely, Jason C.; Rashkin, Lee J.

    2015-12-01

    The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; the Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., two-area, three-area, and four-area power systems). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data, as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderate sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate its utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.

  18. An interactive tool for outdoor computer controlled cultivation of microalgae in a tubular photobioreactor system.

    PubMed

    Dormido, Raquel; Sánchez, José; Duro, Natividad; Dormido-Canto, Sebastián; Guinaldo, María; Dormido, Sebastián

    2014-03-06

    This paper describes an interactive virtual laboratory for experimenting with an outdoor tubular photobioreactor (henceforth PBR for short). This virtual laboratory makes it possible to: (a) accurately reproduce the structure of a real plant (the PBR designed and built by the Department of Chemical Engineering of the University of Almería, Spain); (b) simulate a generic tubular PBR by changing the PBR geometry; (c) simulate the effects of changing different operating parameters, such as the conditions of the culture (pH, biomass concentration, dissolved O2, injected CO2, etc.); (d) simulate the PBR in its environmental context, including changing the geographic location of the system or the solar irradiation profile; (e) apply different control strategies to adjust variables such as the CO2 injection, culture circulation rate, or culture temperature in order to maximize biomass production; (f) simulate the harvesting. In this way, users can learn in an intuitive way how productivity is affected by any change in the design, and how to manipulate the variables essential for microalgae growth so as to design an optimal PBR. The simulator has been developed with Easy Java Simulations, a freeware open-source tool developed in Java, specifically designed for the creation of interactive dynamic simulations.

  19. An Interactive Tool for Outdoor Computer Controlled Cultivation of Microalgae in a Tubular Photobioreactor System

    PubMed Central

    Dormido, Raquel; Sánchez, José; Duro, Natividad; Dormido-Canto, Sebastián; Guinaldo, María; Dormido, Sebastián

    2014-01-01

    This paper describes an interactive virtual laboratory for experimenting with an outdoor tubular photobioreactor (henceforth PBR for short). This virtual laboratory makes it possible to: (a) accurately reproduce the structure of a real plant (the PBR designed and built by the Department of Chemical Engineering of the University of Almería, Spain); (b) simulate a generic tubular PBR by changing the PBR geometry; (c) simulate the effects of changing different operating parameters, such as the conditions of the culture (pH, biomass concentration, dissolved O2, injected CO2, etc.); (d) simulate the PBR in its environmental context, including changing the geographic location of the system or the solar irradiation profile; (e) apply different control strategies to adjust variables such as the CO2 injection, culture circulation rate, or culture temperature in order to maximize biomass production; (f) simulate the harvesting. In this way, users can learn in an intuitive way how productivity is affected by any change in the design, and how to manipulate the variables essential for microalgae growth so as to design an optimal PBR. The simulator has been developed with Easy Java Simulations, a freeware open-source tool developed in Java, specifically designed for the creation of interactive dynamic simulations. PMID:24662450

  20. Linear Quadratic Tracking Design for a Generic Transport Aircraft with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Frost, Susan A.; Taylor, Brian R.

    2011-01-01

    When designing control laws for systems with constraints added to the tracking performance, control allocation methods can be utilized. Control allocation methods are used when there are more command inputs than controlled variables. Constraints that require allocators include surface saturation limits, structural load limits, drag reduction constraints, and actuator failures. Most transport aircraft have many actuated surfaces compared to the three controlled variables (such as angle of attack, roll rate, and angle of sideslip). To distribute the control effort among the redundant set of actuators, either a fixed mixer approach or online control allocation techniques can be utilized. The benefit of an online allocator is that constraints can be considered in the design, whereas a fixed mixer cannot. However, an online control allocator has the disadvantage of not guaranteeing a surface schedule, which can produce ill-defined loads on the aircraft. This load uncertainty and complexity has prevented some controller designs from using advanced allocation techniques. This paper considers actuator redundancy management for a class of over-actuated systems with real-time structural load limits, using linear quadratic tracking applied to the generic transport model. A roll maneuver example with an artificial load limit constraint is shown and compared to the same maneuver without the load limitation.
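Constrained control allocation of the kind discussed above can be posed as a bounded least-squares problem: distribute a commanded 3-axis moment v over redundant surfaces u while respecting deflection limits. The effectiveness matrix B and the limits below are made up for illustration; structural load limits would tighten the bounds further.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Illustrative allocation: 5 redundant surfaces, 3 controlled axes.
# Solve  min ||B u - v||  subject to  -limits <= u <= limits.
B = np.array([[1.0, 1.0, 0.2, 0.0, 0.0],    # roll effectiveness (assumed)
              [0.0, 0.3, 1.0, 1.0, 0.0],    # pitch effectiveness (assumed)
              [0.2, 0.0, 0.0, 0.3, 1.0]])   # yaw effectiveness (assumed)
v = np.array([0.5, -0.4, 0.2])              # commanded moments (assumed)
limits = 0.6                                # +/- deflection limit, rad (assumed)

res = lsq_linear(B, v, bounds=(-limits, limits))
u = res.x
print(u, B @ u - v)   # allocated commands and residual
```

When the command is achievable within the bounds, the residual is essentially zero; saturating a surface (or adding a load constraint) forces the allocator to trade off tracking error instead.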

  1. Development of a Multifidelity Approach to Acoustic Liner Impedance Eduction

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Jones, Michael G.

    2017-01-01

    The use of acoustic liners has proven to be extremely effective in reducing aircraft engine fan noise transmission/radiation. However, the introduction of advanced fan designs and shorter engine nacelles has highlighted a need for novel acoustic liner designs that provide increased fan noise reduction over a broader frequency range. To achieve aggressive noise reduction goals, advanced broadband liner designs, such as zone liners and variable impedance liners, will likely depart from conventional uniform impedance configurations. Therefore, educing the impedance of these axial- and/or spanwise-variable impedance liners will require models that account for three-dimensional effects, thereby increasing computational expense. Thus, it would seem advantageous to investigate the use of multifidelity modeling approaches to impedance eduction for these advanced designs. This paper describes an extension of the use of the CDUCT-LaRC code to acoustic liner impedance eduction. The proposed approach is applied to a hardwall insert and conventional liner using simulated data. Educed values compare well with those educed using two extensively tested and validated approaches. The results are very promising and provide justification to further pursue the complementary use of CDUCT-LaRC with the currently used finite element codes to increase the efficiency of the eduction process for configurations involving three-dimensional effects.

  2. Multiswitching combination synchronisation of non-identical fractional-order chaotic systems

    NASA Astrophysics Data System (ADS)

    Bhat, Muzaffar Ahmad; Khan, Ayub

    2018-06-01

    In this paper, multiswitching combination synchronisation (MSCS) scheme has been investigated in a class of three non-identical fractional-order chaotic systems. The fractional-order Lorenz and Chen systems are taken as the drive systems. The combination of multidrive systems is then synchronised with the fractional-order Lü chaotic system. In MSCS, the state variables of the two drive systems synchronise with different state variables of the response system, simultaneously. Based on the stability of fractional-order chaotic systems, the MSCS of three fractional-order non-identical systems has been investigated. For the synchronisation of three non-identical fractional-order chaotic systems, suitable controllers have been designed. Theoretical analysis and numerical results are presented to demonstrate the validity and feasibility of the applied method.

  3. An investigation of constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function- and subroutine-calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane.
The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
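The multi-directional behavior of a propagated constraint can be sketched in a few lines: the relation F = m * a is stored once, and whichever variable is missing is solved for as the others become known. This is a toy stand-in for the approach described above, not Rubber Airplane's actual classes; all names are illustrative:

```python
# Toy multi-directional constraint network in the spirit of constraint
# propagation (illustrative names, not Rubber Airplane's actual classes).

class Cell:
    """A variable that remembers which constraints mention it."""
    def __init__(self, name):
        self.name, self.value, self.constraints = name, None, []

    def set(self, value):
        self.value = value
        for c in self.constraints:
            c.propagate()

class ProductConstraint:
    """Enforces out = a * b in whichever direction has one unknown."""
    def __init__(self, a, b, out):
        self.a, self.b, self.out = a, b, out
        for cell in (a, b, out):
            cell.constraints.append(self)

    def propagate(self):
        a, b, out = self.a, self.b, self.out
        if a.value is not None and b.value is not None and out.value is None:
            out.value = a.value * b.value        # forward direction
        elif out.value is not None and a.value is not None and b.value is None:
            b.value = out.value / a.value        # solve for second factor
        elif out.value is not None and b.value is not None and a.value is None:
            a.value = out.value / b.value        # solve for first factor

m, acc, F = Cell("m"), Cell("acc"), Cell("F")
ProductConstraint(m, acc, F)
F.set(10.0)   # declaring F first...
m.set(2.0)    # ...then m; propagation deduces acc = F / m = 5.0
```

A full system would re-propagate when `propagate` assigns a value directly; this sketch omits that cascade for brevity.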

  4. An application of software design and documentation language. [Galileo spacecraft command and data subsystem

    NASA Technical Reports Server (NTRS)

    Callender, E. D.; Clarkson, T. B.; Frasier, C. E.

    1980-01-01

The software design and documentation language (SDDL) is a general purpose processor to support a language for the description of any system, structure, concept, or procedure that may be presented from the viewpoint of a collection of hierarchical entities linked together by means of binary connections. The language comprises a set of rules of syntax, primitive construct classes (module, block, and module invocation), and language control directives. The result is a language with a fixed grammar, variable alphabet and punctuation, and an extendable vocabulary. The application of SDDL to the detailed software design of the Command Data Subsystem for the Galileo Spacecraft is discussed. A set of constructs was developed and applied. These constructs are evaluated and examples of their application are considered.

  5. Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions

    NASA Astrophysics Data System (ADS)

    Alemany, Kristina

    Electric propulsion has recently become a viable technology for spacecraft, enabling shorter flight times, fewer required planetary gravity assists, larger payloads, and/or smaller launch vehicles. With the maturation of this technology, however, comes a new set of challenges in the area of trajectory design. Because low-thrust trajectory optimization has historically required long run-times and significant user-manipulation, mission design has relied on expert-based knowledge for selecting departure and arrival dates, times of flight, and/or target bodies and gravitational swing-bys. These choices are generally based on known configurations that have worked well in previous analyses or simply on trial and error. At the conceptual design level, however, the ability to explore the full extent of the design space is imperative to locating the best solutions in terms of mass and/or flight times. Beginning in 2005, the Global Trajectory Optimization Competition posed a series of difficult mission design problems, all requiring low-thrust propulsion and visiting one or more asteroids. These problems all had large ranges on the continuous variables---launch date, time of flight, and asteroid stay times (when applicable)---as well as being characterized by millions or even billions of possible asteroid sequences. Even with recent advances in low-thrust trajectory optimization, full enumeration of these problems was not possible within the stringent time limits of the competition. This investigation develops a systematic methodology for determining a broad suite of good solutions to the combinatorial, low-thrust, asteroid tour problem. The target application is for conceptual design, where broad exploration of the design space is critical, with the goal being to rapidly identify a reasonable number of promising solutions for future analysis. The proposed methodology has two steps. 
The first step applies a three-level heuristic sequence developed from the physics of the problem, which allows for efficient pruning of the design space. The second step applies a global optimization scheme to locate a broad suite of good solutions to the reduced problem. The global optimization scheme developed combines a novel branch-and-bound algorithm with a genetic algorithm and an industry-standard low-thrust trajectory optimization program to solve for the following design variables: asteroid sequence, launch date, times of flight, and asteroid stay times. The methodology is developed based on a small sample problem, which is enumerated and solved so that all possible discretized solutions are known. The methodology is then validated by applying it to a larger intermediate sample problem, which also has a known solution. Next, the methodology is applied to several larger combinatorial asteroid rendezvous problems, using previously identified good solutions as validation benchmarks. These problems include the 2nd and 3rd Global Trajectory Optimization Competition problems. The methodology is shown to be capable of achieving a 6-7 order-of-magnitude reduction in the number of asteroid sequences that require low-thrust optimization, compared to the number of sequences in the original problem. More than 70% of the previously known good solutions are identified, along with several new solutions that were not previously reported by any of the competitors. Overall, the methodology developed in this investigation provides an organized search technique for the low-thrust mission design of asteroid rendezvous problems.
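The pruning idea in the branch-and-bound step can be illustrated on a toy sequence problem (invented costs, not actual transfer data): any partial sequence whose accumulated cost plus an optimistic per-leg bound already exceeds the best known complete tour is discarded without enumerating its completions:

```python
# Toy branch-and-bound over target sequences, a stand-in for the
# combinatorial asteroid-tour problem; the cost matrix is made up.
COST = [[0, 4, 2, 7, 5],
        [4, 0, 3, 6, 8],
        [2, 3, 0, 5, 4],
        [7, 6, 5, 0, 2],
        [5, 8, 4, 2, 0]]
MIN_LEG = min(c for row in COST for c in row if c > 0)  # optimistic per-leg bound

def branch_and_bound(seq, remaining, cost, best):
    """Return the cheapest tour cost from seq visiting all remaining bodies."""
    if not remaining:
        return min(best, cost)
    # lower bound: every remaining leg costs at least MIN_LEG
    if cost + MIN_LEG * len(remaining) >= best:
        return best                      # prune this whole subtree
    for nxt in list(remaining):
        best = branch_and_bound(seq + [nxt], remaining - {nxt},
                                cost + COST[seq[-1]][nxt], best)
    return best

best = branch_and_bound([0], {1, 2, 3, 4}, 0, float("inf"))
```

The bound here is deliberately crude; the methodology described above derives much tighter, physics-based pruning rules, which is what makes a 6-7 order-of-magnitude reduction possible.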

  6. VaST: A variability search toolkit

    NASA Astrophysics Data System (ADS)

    Sokolovsky, K. V.; Lebedev, A. A.

    2018-01-01

    Variability Search Toolkit (VaST) is a software package designed to find variable objects in a series of sky images. It can be run from a script or interactively using its graphical interface. VaST relies on source list matching as opposed to image subtraction. SExtractor is used to generate source lists and perform aperture or PSF-fitting photometry (with PSFEx). Variability indices that characterize scatter and smoothness of a lightcurve are computed for all objects. Candidate variables are identified as objects having high variability index values compared to other objects of similar brightness. The two distinguishing features of VaST are its ability to perform accurate aperture photometry of images obtained with non-linear detectors and handle complex image distortions. The software has been successfully applied to images obtained with telescopes ranging from 0.08 to 2.5 m in diameter equipped with a variety of detectors including CCD, CMOS, MIC and photographic plates. About 1800 variable stars have been discovered with VaST. It is used as a transient detection engine in the New Milky Way (NMW) nova patrol. The code is written in C and can be easily compiled on the majority of UNIX-like systems. VaST is free software available at http://scan.sai.msu.ru/vast/.
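A minimal example of a scatter-based variability index in the spirit of those computed by VaST (this particular form is a generic reduced chi-squared, not necessarily one of VaST's indices): objects whose lightcurve scatter greatly exceeds the photometric error become candidate variables:

```python
# Generic reduced chi-squared of a lightcurve about its mean -- one simple
# scatter-based variability index (illustrative; VaST computes a suite of
# such indices and compares them across objects of similar brightness).

def reduced_chi2(mags, sigma):
    """Scatter of the magnitudes relative to the assumed photometric error."""
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / (sigma ** 2 * (len(mags) - 1))

sigma = 0.02                                    # per-point error (mag), assumed
constant = [12.01, 11.99, 12.00, 12.02, 11.98]  # scatter comparable to sigma
variable = [12.0, 12.4, 11.6, 12.5, 11.5]       # scatter far exceeding sigma
# reduced_chi2 is near 1 for the constant star and enormous for the variable one
```

In practice the threshold is set relative to other objects of similar brightness, since photometric errors grow toward fainter magnitudes.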

  7. On the measurement of stability in over-time data.

    PubMed

    Kenny, D A; Campbell, D T

    1989-06-01

    In this article, autoregressive models and growth curve models are compared. Autoregressive models are useful because they allow for random change, permit scores to increase or decrease, and do not require strong assumptions about the level of measurement. Three previously presented designs for estimating stability are described: (a) time-series, (b) simplex, and (c) two-wave, one-factor methods. A two-wave, multiple-factor model also is presented, in which the variables are assumed to be caused by a set of latent variables. The factor structure does not change over time and so the synchronous relationships are temporally invariant. The factors do not cause each other and have the same stability. The parameters of the model are the factor loading structure, each variable's reliability, and the stability of the factors. We apply the model to two data sets. For eight cognitive skill variables measured at four times, the 2-year stability is estimated to be .92 and the 6-year stability is .83. For nine personality variables, the 3-year stability is .68. We speculate that for many variables there are two components: one component that changes very slowly (the trait component) and another that changes very rapidly (the state component); thus each variable is a mixture of trait and state. Circumstantial evidence supporting this view is presented.
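The attenuation logic behind such stability estimates can be shown with the textbook correction-for-attenuation formula (a simplification consistent with the two-wave, one-factor idea, not the authors' full multiple-factor model): the observed cross-wave correlation is divided by the square roots of the two reliabilities:

```python
# Textbook correction for attenuation; the numbers are invented examples,
# not the cognitive-skill or personality estimates reported above.
import math

def latent_stability(r12, rel1, rel2):
    """Disattenuated stability of the latent factor between two waves."""
    return r12 / math.sqrt(rel1 * rel2)

# invented example: observed cross-wave r = .74, wave reliabilities .85 and .80
s = latent_stability(0.74, 0.85, 0.80)   # larger than the observed .74
```

Because measurement error attenuates the observed correlation, the latent stability recovered this way is always at least as large as the raw cross-wave correlation.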

  8. Network-based regularization for matched case-control analysis of high-dimensional DNA methylation data.

    PubMed

    Sun, Hokeun; Wang, Shuang

    2013-05-30

Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially epigenetic studies of DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed a penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the matched designs popularly applied in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression that ignores matching is known to introduce serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study of hepatocellular carcinoma (HCC), in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients using the Illumina Infinium HumanMethylation27 BeadChip. Several new CpG sites and genes known to be related to HCC were identified but were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.

  9. Reasoning and Action: Implementation of a Decision-Making Program in Sport.

    PubMed

    Gil-Arias, Alexander; Moreno, M Perla; García-Mas, Alex; Moreno, Alberto; García-González, Luíz; Del Villar, Fernando

    2016-09-20

The objective of this study was to apply a decision training programme, based on the use of video-feedback and questioning in real game time, in order to improve decision-making in volleyball attack actions. A three-phase quasi-experimental design was implemented: Phase A (pre-test), Phase B (intervention) and Phase C (retention). The sample was made up of 8 female Under-16 volleyball players, who were divided into two groups: an experimental group (n = 4) and a control group (n = 4). The independent variable was the decision training program, which was applied for 11 weeks in a training context, more specifically in a 6x6 game situation. The player had to analyze the reasons for and causes of the decision taken. The dependent variable was decision-making, which was assessed through systematic observation using the "Game Performance Assessment Instrument" (GPAI) (Oslin, Mitchell, & Griffin, 1998). Results showed that, after applying the decision training program, the experimental group showed a significantly higher average percentage of successful decisions than the control group, F(1, 6) = 11.26, p = .015, ηp² = .652, 95% CI [.056, .360]. These results highlight the need to complement the training process with cognitive tools such as video-feedback and questioning in order to improve athletes' decision-making.

  10. Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.

    PubMed

    Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H

    2016-06-01

Background: Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding, either at the study design phase or the data analysis phase. Aim of the Review: To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for their proper application in pharmacoepidemiology. Methods/Results: Methods to control for unmeasured confounding in the design phase of a study are case-only designs (e.g., case-crossover, case-time-control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include the negative control method, the perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy. The latter group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion: As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for the interpretation of study results.

  11. Design Optimization and In Vitro-In Vivo Evaluation of Orally Dissolving Strips of Clobazam

    PubMed Central

    Bala, Rajni; Khanna, Sushil; Pawar, Pravin

    2014-01-01

Clobazam orally dissolving strips were prepared by the solvent casting method. A full 3² factorial design was applied for optimization, using different concentrations of film-forming polymer and disintegrating agent as independent variables and disintegration time, % cumulative drug release, and tensile strength as dependent variables. In addition, the prepared films were evaluated for surface pH, folding endurance, and content uniformity. The optimized film formulation showing the maximum in vitro drug release, satisfactory in vitro disintegration time, and tensile strength was selected for a bioavailability study and compared with a reference marketed product (Frisium 5 tablets) in rabbits. Formulation F6 was selected by the Design-Expert software; it exhibited a disintegration time of 24 sec, a tensile strength of 2.85 N/cm², and 96.6% in vitro drug release. Statistical evaluation revealed no significant difference between the bioavailability parameters of the test film (F6) and the reference product. The mean ratio values (test/reference) of Cmax (95.87%), tmax (71.42%), AUC0-t (98.125%), and AUC0-∞ (99.213%) indicated that the two formulae exhibited comparable plasma level-time profiles. PMID:25328709
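A full 3² factorial design simply enumerates every combination of the two factors at three levels each; the factor levels below are invented, since the abstract does not state them:

```python
# Enumerating a full 3^2 factorial design; the two factors and their levels
# are invented stand-ins for the polymer and disintegrant concentrations
# varied in the study.
from itertools import product

polymer_levels = [2.0, 3.0, 4.0]        # % w/v, assumed values
disintegrant_levels = [0.5, 1.0, 1.5]   # % w/v, assumed values

design = list(product(polymer_levels, disintegrant_levels))
# 9 runs (F1..F9); each row is one film formulation to prepare and evaluate
```

Each of the nine runs is measured for the dependent variables, and a response-surface model fitted over this grid is what software such as Design-Expert uses to pick the optimum.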

  12. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    PubMed

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, the physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. In addition, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, requiring a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address both of these issues: (1) intercurrent SCEs and (2) response variable ascertainments that use measurements of differing precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
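The pairwise scoring idea behind such a U-statistic can be sketched as follows (a hedged illustration with invented data and tie rules, not the paper's exact statistic): pairs of subjects are compared first on the clinical event, and only event-free pairs are compared on the continuous change:

```python
# Hierarchical pairwise scoring in the spirit of a U-statistic endpoint:
# an SCE (e.g. death) trumps any continuous measurement, and a missing
# continuous value scores the pair as a tie. Data are invented.

def pair_score(a, b):
    """+1 if subject a fares better than b, -1 if worse, 0 if tied.
    a and b are (had_sce, change) tuples; change is None when unmeasurable."""
    if a[0] != b[0]:
        return -1 if a[0] else 1           # the event-free subject wins
    if a[0]:                               # both had the event: tie
        return 0
    if a[1] is None or b[1] is None:       # continuous value unavailable
        return 0
    return (a[1] > b[1]) - (a[1] < b[1])   # larger improvement wins

treated = [(False, 5.2), (False, 1.0), (True, None)]
control = [(False, 0.5), (True, None), (True, None)]
u = sum(pair_score(t, c) for t in treated for c in control)
```

Summing over all treated/control pairs gives the score statistic; its sign and magnitude summarize the treatment effect without ever needing the precluded follow-up measurements.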

  13. A new polytopic approach for the unknown input functional observer design

    NASA Astrophysics Data System (ADS)

    Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed

    2018-03-01

In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and the H∞ attenuation, linear matrix inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, as in the linear case, is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables; full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and are represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.

  14. Moderately reverberant learning ultrasonic pinch panel.

    PubMed

    Nikolovski, Jean-Pierre

    2013-10-01

Tactile sensing is widely used in human-computer interfaces. However, mechanical integration of touch technologies is often perceived as difficult by engineers because it often limits the freedom of style or form factor requested by designers. Recent work in active ultrasonic touch technologies has made it possible to transform thin glass plates, metallic sheets, or plastic shells into interactive surfaces. The method is based on a learning process of touch-induced, amplitude-disturbed diffraction patterns. This paper proposes, first, an evolution of the design using multiple dipole transducers, which improves touch sensitivity or maximum panel size by a factor of ten and improves robustness and usability in moderately reverberant panels; and, second, a set of acoustic variables in the signal processing for evaluating sensitivity and radiating features. For proof-of-concept purposes, the design and process are applied to 3.2- and 6-mm-thick glass plates with variable damping conditions. Transducers are bonded to only one short side of the rectangular substrates. Measurements show that the highly sensitive free lateral sides are perfectly adapted for pinch-touch and pinch-slide interactions. The advantage of relative versus absolute touch disturbance measurement is discussed, together with tolerance to abutting contaminants.

  15. Immunization in pregnancy clinical research in low- and middle-income countries - Study design, regulatory and safety considerations.

    PubMed

    Kochhar, Sonali; Bonhoeffer, Jan; Jones, Christine E; Muñoz, Flor M; Honrado, Angel; Bauwens, Jorgen; Sobanjo-Ter Meulen, Ajoke; Hirschfeld, Steven

    2017-12-04

Immunization of pregnant women is a promising public health strategy to reduce morbidity and mortality among both mothers and their infants. Establishing the safety and efficacy of vaccines generally uses a hybrid design between a conventional interventional study and an observational study, requiring the enrollment of thousands of study participants to detect an unknown number of uncommon events. Historically, enrollment of pregnant women in clinical research studies encountered many barriers based on risk aversion, lack of knowledge, and regulatory ambiguity. Conducting research enrolling pregnant women in low- and middle-income countries can involve additional factors, such as limited availability of baseline epidemiologic data on disease burden and on maternal and neonatal outcomes during and after pregnancy; challenges in recruiting and retaining pregnant women in research studies; variability in applying and interpreting assessment methods; and variability in locally acceptable and available infrastructure. Measures to address these challenges include adjusting the study design; tailoring recruitment, the consent process, retention strategies, and operational and logistical processes; and using definitions and data collection methods that align with global efforts. Copyright © 2017. Published by Elsevier Ltd.

  16. Single-Vector Calibration of Wind-Tunnel Force Balances

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2003-01-01

An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind-tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load and would have no response to other components of load. This is not entirely possible, even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the data needed to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in an even more complex system that degrades load application quality.
The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process, covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analysis of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.

  17. Mixed-strain housing for female C57BL/6, DBA/2, and BALB/c mice: validating a split-plot design that promotes refinement and reduction.

    PubMed

    Walker, Michael; Fureix, Carole; Palme, Rupert; Newman, Jonathan A; Ahloy Dallaire, Jamie; Mason, Georgia

    2016-01-27

Inefficient experimental designs are common in animal-based biomedical research, wasting resources and potentially leading to unreplicable results. Here we illustrate the intrinsic statistical power of split-plot designs, wherein three or more sub-units (e.g. individual subjects) differing in a variable of interest (e.g. genotype) share an experimental unit (e.g. a cage or litter) to which a treatment is applied (e.g. a drug, diet, or cage manipulation). We also empirically validate one example of such a design, mixing different mouse strains -- C57BL/6, DBA/2, and BALB/c -- within cages varying in degree of enrichment. As well as boosting statistical power, no other manipulations are needed for individual identification if co-housed strains are differentially pigmented, so also sparing mice from stressful marking procedures. The validation involved housing 240 females from weaning to 5 months of age in single- or mixed-strain trios, in cages allocated to enriched or standard treatments. Mice were screened for a range of 26 commonly-measured behavioural, physiological and haematological variables. Living in mixed-strain trios did not compromise mouse welfare (assessed via corticosterone metabolite output, stereotypic behaviour, signs of aggression, and other variables). It also did not alter the direction or magnitude of any strain- or enrichment-typical difference across the 26 measured variables, or increase variance in the data: indeed variance was significantly decreased by mixed-strain housing. Furthermore, using Monte Carlo simulations to quantify the statistical power benefits of this approach over a conventional design demonstrated that for our effect sizes, the split-plot design would require significantly fewer mice (under half in most cases) to achieve a power of 80%. Mixed-strain housing allows several strains to be tested at once, and potentially refines traditional marking practices for research mice.
Furthermore, it dramatically illustrates the enhanced statistical power of split-plot designs, allowing many fewer animals to be used. More powerful designs can also increase the chances of replicable findings, and increase the ability of small-scale studies to yield significant results. Using mixed-strain housing for female C57BL/6, DBA/2 and BALB/c mice is therefore an effective, efficient way to promote both refinement and the reduction of animal-use in research.
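The power advantage that the Monte Carlo simulations quantify comes from the cage effect cancelling out of within-cage comparisons. A hedged sketch of that mechanism (invented variance components and a simple difference-of-means analysis rather than a full mixed-model fit):

```python
# Monte Carlo sketch of why split-plot (mixed-strain) housing gains power:
# a shared cage effect cancels out of within-cage strain differences but
# inflates the error of a between-cage comparison. All effect sizes and
# variance components below are invented for illustration.
import random
import statistics

def strain_effect_estimate(within_cage, n_cages=20, effect=0.5,
                           cage_sd=1.0, noise_sd=0.5):
    """One simulated experiment; returns the estimated strain effect."""
    if within_cage:
        # split-plot: both strains share each cage, so the cage effect
        # cancels out of the within-cage difference
        diffs = []
        for _ in range(n_cages):
            cage = random.gauss(0, cage_sd)
            a = cage + effect + random.gauss(0, noise_sd)
            b = cage + random.gauss(0, noise_sd)
            diffs.append(a - b)
        return statistics.mean(diffs)
    # conventional design: one strain per cage; cage noise stays in the error
    a = [random.gauss(0, cage_sd) + effect + random.gauss(0, noise_sd)
         for _ in range(n_cages // 2)]
    b = [random.gauss(0, cage_sd) + random.gauss(0, noise_sd)
         for _ in range(n_cages // 2)]
    return statistics.mean(a) - statistics.mean(b)

random.seed(1)
REPS = 1000
sd_split = statistics.stdev(strain_effect_estimate(True) for _ in range(REPS))
sd_conv = statistics.stdev(strain_effect_estimate(False) for _ in range(REPS))
# the split-plot estimator is far less variable, so fewer animals
# are needed for the same power
```

With these invented variance components, the split-plot estimator's standard error is roughly a third of the conventional one, which is the same mechanism that let the study halve the number of mice.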

  18. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  19. Application of multivariable search techniques to the optimization of airfoils in a low speed nonlinear inviscid flow field

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1975-01-01

Multivariable search techniques are applied to a particular class of airfoil optimization problems: the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. The seven airfoil design variables define perturbations to the profile of an existing NACA airfoil. The relative efficiency of the techniques is compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free-stream Mach numbers.
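
A minimal sketch of the two families of techniques the study compares, one-parameter-at-a-time perturbation and random search, applied to a hypothetical seven-variable quadratic objective standing in for the airfoil merit function (not the AESOP code itself):

```python
import random

def coordinate_search(f, x, step=0.5, iters=60):
    """One-parameter-at-a-time perturbation: try +/- step on each variable in
    turn, keeping improvements, and halve the step when nothing helps."""
    fx = f(x)
    for _ in range(iters):
        improved = False
        for j in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[j] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def random_search(f, x, spread=0.5, iters=400, seed=0):
    """Randomized procedure: perturb all variables at once, keep improvements."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        y = [xi + rng.uniform(-spread, spread) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx
```

On smooth objectives like this, both reach the optimum neighborhood; the study's point is that such simple schemes remain competitive with organized searches on its airfoil problems.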

  20. Double degree master program: Optical Design

    NASA Astrophysics Data System (ADS)

    Bakholdin, Alexey; Kujawinska, Malgorzata; Livshits, Irina; Styk, Adam; Voznesenskaya, Anna; Ezhova, Kseniia; Ermolayeva, Elena; Ivanova, Tatiana; Romanova, Galina; Tolstoba, Nadezhda

    2015-10-01

Modern trends in higher education require master programs whose learning outcomes correspond to rapidly changing job market needs. ITMO University, represented by the Applied and Computer Optics Department and the Optical Design and Testing Laboratory, jointly with Warsaw University of Technology, represented by the Institute of Micromechanics and Photonics at the Faculty of Mechatronics, has developed a novel international double-degree master program, "Optical Design", combining the expertise of both universities, including experienced teaching staff, educational technologies, and experimental resources. The program targets research and professional activities in high-tech fields connected with optical and optoelectronic devices, optical engineering, numerical methods, and computer technologies. It covers the design of optical systems of various types, assemblies, and layouts using computer modeling; investigation of light distribution phenomena; image modeling and formation; and the development of optical methods for image analysis and optical metrology, including optical testing, materials characterization, NDT, and industrial control and monitoring. The goal of the program is to train graduates capable of solving a wide range of research and engineering tasks in optical design and metrology that lead to modern manufacturing and innovation. The variable program structure provides flexibility, adaptation to current job market demands, and personal learning paths for each student; in addition, a considerable proportion of internship and research work expands practical skills. The paper presents special features of the "Optical Design" program, which implements the best practices of both universities, together with the challenges and lessons learnt during its realization.

  1. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

Machine learning methods have gained popularity and practicality for identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions remains a challenge because the input has a large number of features relative to the sample size, the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is important to perform variable selection before searching for epistasis. Many methods have been proposed and evaluated for feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach, which selects the "union" of the features chosen by the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, chose the top-performing methods, and took the union of the resulting variables (based on a user-defined percentage of variants selected from each method) forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches such as MDR may work best under one simulation criterion for high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other criteria. A collective approach thus proves more beneficial for selecting variables with epistatic effects, even in low effect size datasets and across different genetic architectures. Following this, we applied the proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). Through simulation studies we showed that selecting variables with a collective feature selection approach recovers true-positive epistatic variables more frequently than applying any single feature selection method, and we demonstrated its effectiveness along with a comparison of many methods in our simulation analysis. We also applied the method to identify non-linear networks associated with obesity.
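
A minimal sketch of the union step of collective feature selection, assuming hypothetical per-method importance scores (in the paper these would come from methods such as MDR, Ranger, or gradient boosting):

```python
def collective_select(scores_by_method, top_frac=0.01):
    """Take the union of the top `top_frac` features ranked by each method.

    scores_by_method: {method_name: {feature_name: importance_score}}
    Returns the set of features selected by at least one method.
    """
    selected = set()
    for scores in scores_by_method.values():
        k = max(1, int(len(scores) * top_frac))  # at least one feature per method
        ranked = sorted(scores, key=scores.get, reverse=True)
        selected.update(ranked[:k])
    return selected
```

Because each method favors different genetic architectures, the union tends to retain true epistatic variants that any single ranking would miss, at the cost of a somewhat larger downstream search space.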

  2. Changing space and sound: Parametric design and variable acoustics

    NASA Astrophysics Data System (ADS)

    Norton, Christopher William

This thesis examines the potential for parametric design software to create performance-based designs using acoustic metrics as the design criteria. A former soundstage at the University of Southern California used by the Thornton School of Music serves as a case study of a multiuse space for orchestral, percussion, master class, and recital use. The criteria for each programmatic use include reverberation time, bass ratio, and the early energy ratios of the clarity index and objective support. Using a panelized ceiling as a design element to vary the parameters of volume, panel orientation, and type of absorptive material, the relationships between these parameters and the design criteria are explored. These relationships and the subsequently derived equations are applied in the Grasshopper parametric modeling software for Rhino 3D (a NURBS modeler). Using the target reverberation time and bass ratio for each programmatic use as input to the parametric model, Grasshopper's evolutionary optimization solver, Galapagos, is run to identify the optimum ceiling geometry and material distribution.
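
Two of the acoustic criteria named above have standard closed forms that a parametric model can evaluate directly; a sketch (metric Sabine formula, with band reverberation times assumed available, not the thesis's own derived equations):

```python
def sabine_rt60(volume_m3, absorption_sabins):
    """Sabine reverberation time RT60 = 0.161 * V / A (metric units):
    V is room volume in m^3, A is total absorption in metric sabins."""
    return 0.161 * volume_m3 / absorption_sabins

def bass_ratio(rt_by_band):
    """Bass ratio: low-frequency RT (125 + 250 Hz) over mid-frequency RT
    (500 + 1000 Hz); values above 1 indicate 'warmth'."""
    return (rt_by_band[125] + rt_by_band[250]) / (rt_by_band[500] + rt_by_band[1000])
```

Varying ceiling height (volume V) and absorptive material (A, per band) in the parametric model shifts RT60 and the bass ratio, which is exactly the coupling the optimization exploits.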

  3. Optimisation of nano-silica modified self-compacting high-volume fly ash mortar

    NASA Astrophysics Data System (ADS)

    Achara, Bitrus Emmanuel; Mohammed, Bashar S.; Fadhil Nuruddin, Muhd

    2017-05-01

The effects of nano-silica amount and superplasticizer (SP) dosage on the compressive strength, porosity, and slump flow of high-volume fly ash self-consolidating mortar were investigated. A multiobjective optimisation technique using Design-Expert software was applied to obtain a solution, based on a desirability function, that simultaneously optimises the variables and the responses. A desirability of 0.811 gave the optimised solution. The experimental and predicted results showed minimal errors in all the measured responses.
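
The desirability approach used by Design-Expert can be sketched as follows: each response is mapped onto [0, 1] and the overall desirability is the geometric mean of the individual values (a simplified Derringer-Suich form with linear ramps; the software's actual weighting options are richer):

```python
def desirability(value, low, high, maximize=True):
    """Linear Derringer-Suich desirability: 0 at the worst bound, 1 at the best,
    clamped to [0, 1] outside the range."""
    d = (value - low) / (high - low)
    d = d if maximize else 1.0 - d
    return max(0.0, min(1.0, d))

def overall_desirability(ds):
    """Geometric mean of individual desirabilities; any d = 0 vetoes the design."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

A solver then searches the factor space (here nano-silica amount and SP dosage) for the settings maximizing the overall desirability, e.g. the reported 0.811.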

  4. Seabird aggregative patterns: a new tool for offshore wind energy risk assessment.

    PubMed

    Christel, Isadora; Certain, Grégoire; Cama, Albert; Vieites, David R; Ferrer, Xavier

    2013-01-15

The emerging development of offshore wind energy has raised public concern over its impact on seabird communities, and there is a need for an adequate methodology to determine its potential impacts on seabirds. Environmental Impact Assessments (EIAs) mostly rely on a succession of plain density maps without integrated interpretation of seabird spatio-temporal variability. Using Taylor's power law coupled with mixed effect models, the spatio-temporal variability of species' distributions can be synthesized into a measure of the aggregation levels of individuals over time and space. Applying the method to a seabird aerial survey in the Ebro Delta, NW Mediterranean Sea, we were able to make an explicit distinction between transitional and feeding areas to define and map the potential impacts of an offshore wind farm project. We use the Ebro Delta study case to discuss the advantages of potential impact maps over density maps, and to illustrate how these potential impact maps can inform concern levels, optimal EIA design, and monitoring in the assessment of local offshore wind energy projects. Copyright © 2012 Elsevier Ltd. All rights reserved.
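
Taylor's power law relates the variance and mean of counts across sites, variance = a * mean^b, where the exponent b measures aggregation (b ≈ 1 for a random/Poisson pattern, b > 1 for increasingly aggregated individuals). A sketch of fitting b by log-log regression (the paper couples this with mixed effect models, which are omitted here):

```python
import math

def taylor_power_law(samples_by_site):
    """Fit log(variance) = log(a) + b * log(mean) over sites by ordinary least
    squares and return the aggregation exponent b."""
    xs, ys = [], []
    for counts in samples_by_site:
        n = len(counts)
        m = sum(counts) / n
        v = sum((c - m) ** 2 for c in counts) / (n - 1)  # sample variance
        if m > 0 and v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

High b in an area flags persistent aggregations (e.g. feeding grounds), which is what distinguishes them from transitional areas in the impact maps.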

  5. Identification of material constants for piezoelectric transformers by three-dimensional, finite-element method and a design-sensitivity method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2003-08-01

In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of the calculation is confirmed experimentally. The effects of material coefficients on piezoelectric transformers are investigated numerically. Six material-coefficient variables were selected, and a design-sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent-circuit method.

  6. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA), giving the methods PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in pharmaceutical dosage form via handling of the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs; fifteen mixtures were used as a calibration set and the other ten as a validation set to test the prediction ability of the suggested methods. The validity of the proposed methods was further assessed using the standard addition technique.
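
A minimal pure-Python sketch of single-response PLS (PLS-1) via sequential deflation, the traditional model named above (no mean-centering and hypothetical data; production chemometrics would center the spectra and cross-validate the number of latent variables):

```python
def pls1_fit(X, y, ncomp):
    """Fit a PLS-1 model by NIPALS-style sequential deflation.
    X: list of sample rows (absorbances), y: list of concentrations."""
    X = [row[:] for row in X]
    y = y[:]
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(len(X[0]))]
        norm = sum(v * v for v in w) ** 0.5
        if norm < 1e-12:          # y fully explained; stop early
            break
        w = [v / norm for v in w]
        t = [sum(X[i][j] * w[j] for j in range(len(w))) for i in range(len(X))]
        tt = sum(v * v for v in t)
        p = [sum(X[i][j] * t[i] for i in range(len(X))) / tt for j in range(len(w))]
        q = sum(y[i] * t[i] for i in range(len(X))) / tt
        for i in range(len(X)):   # deflate X and y
            for j in range(len(w)):
                X[i][j] -= t[i] * p[j]
            y[i] -= q * t[i]
        W.append(w); P.append(p); Q.append(q)
    return W, P, Q

def pls1_predict(x, model):
    """Predict y for one sample by replaying the score/deflation sequence."""
    W, P, Q = model
    x = x[:]
    yhat = 0.0
    for w, p, q in zip(W, P, Q):
        t = sum(xj * wj for xj, wj in zip(x, w))
        yhat += q * t
        x = [xj - t * pj for xj, pj in zip(x, p)]
    return yhat
```

With enough components the model reproduces an exact linear calibration; in practice the number of components is chosen from the calibration/validation split described in the abstract.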

  7. Structural Optimization of a Knuckle with Consideration of Stiffness and Durability Requirements

    PubMed Central

    Kim, Geun-Yeon

    2014-01-01

The automobile knuckle is connected to parts of the steering and suspension systems and, through its attachment to the wheel, is used for adjusting the direction of rotation. This study replaces the existing GCD450 material with Al6082M and recommends a lightweight design of the knuckle, obtained with an optimal design technique, for installation in small cars. Six shape design variables were selected for the optimization of the knuckle, and criteria relevant to stiffness and durability were treated as the design requirements during the optimization process. A metamodel-based optimization method using kriging interpolation was applied. The result shows that all constraints on stiffness and durability are satisfied using Al6082M, while the weight of the knuckle is reduced by 60% compared to that of the existing GCD450 part. PMID:24995359
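
Metamodel-based optimization with kriging can be sketched in one dimension: fit an interpolator to a few expensive evaluations (e.g. finite element runs), then search the cheap surrogate (a zero-mean "simple kriging" form with a Gaussian correlation; the study's implementation is more general):

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kriging(xs, ys, theta=10.0):
    """Zero-mean simple kriging with Gaussian correlation exp(-theta*(a-b)^2);
    returns a predictor that interpolates the training points exactly."""
    corr = lambda a, b: math.exp(-theta * (a - b) ** 2)
    R = [[corr(a, b) for b in xs] for a in xs]
    w = solve(R, ys)
    return lambda x: sum(wi * corr(x, xi) for wi, xi in zip(w, xs))
```

The optimizer then evaluates the surrogate thousands of times (grid search below; the paper's setting would use a constrained optimizer over six shape variables).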

  8. Improved specificity of TALE-based genome editing using an expanded RVD repertoire.

    PubMed

    Miller, Jeffrey C; Zhang, Lei; Xia, Danny F; Campo, John J; Ankoudinova, Irina V; Guschin, Dmitry Y; Babiarz, Joshua E; Meng, Xiangdong; Hinkley, Sarah J; Lam, Stephen C; Paschon, David E; Vincent, Anna I; Dulay, Gladys P; Barlow, Kyle A; Shivak, David A; Leung, Elo; Kim, Jinwon D; Amora, Rainier; Urnov, Fyodor D; Gregory, Philip D; Rebar, Edward J

    2015-05-01

    Transcription activator-like effector (TALE) proteins have gained broad appeal as a platform for targeted DNA recognition, largely owing to their simple rules for design. These rules relate the base specified by a single TALE repeat to the identity of two key residues (the repeat variable diresidue, or RVD) and enable design for new sequence targets via modular shuffling of these units. A key limitation of these rules is that their simplicity precludes options for improving designs that are insufficiently active or specific. Here we address this limitation by developing an expanded set of RVDs and applying them to improve the performance of previously described TALEs. As an extreme example, total conversion of a TALE nuclease to new RVDs substantially reduced off-target cleavage in cellular studies. By providing new RVDs and design strategies, these studies establish options for developing improved TALEs for broader application across medicine and biotechnology.

  9. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed Sensing (CS) has the potential to provide both. However, the randomness in a CS under-sampling trajectory designed with the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Accurate parameter estimation with the VD scheme therefore usually needs multiple adjustments of the parameters of the Probability Density Function (PDF), and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, an under-sampling trajectory design that is robust both to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
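
The proposed strategy, fully sampling a low-frequency band and applying random (variable-density-style) sampling only to the high frequencies, can be sketched for a 1D Cartesian trajectory (the parameter values here are illustrative, not the paper's):

```python
import random

def under_sampling_mask(n, center_frac=0.125, p_high=0.25, seed=0):
    """Binary k-space sampling mask of length n: the central band (a fraction
    center_frac of lines around the k-space centre) is fully sampled, while
    each high-frequency line is kept with probability p_high."""
    rng = random.Random(seed)
    c0 = int(n * (0.5 - center_frac / 2))
    c1 = int(n * (0.5 + center_frac / 2))
    return [1 if c0 <= k < c1 else int(rng.random() < p_high) for k in range(n)]
```

Because the contrast-bearing low frequencies are deterministic, repeated reconstructions no longer depend on the random draw there, which is the robustness the paper targets.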

  10. Assessment and Calibration of Terrestrial Water Storage in North America with GRACE Level-1B Inter-satellite Residuals

    NASA Astrophysics Data System (ADS)

    Loomis, B.; Luthcke, S. B.

    2016-12-01

The global time-variable gravity products from GRACE continue to provide unique and important measurements of vertically integrated terrestrial water storage (TWS). Despite substantial improvements in recent years to the quality of the GRACE solutions and analysis techniques, significant disagreements can still exist between various approaches to compute basin scale TWS. Applying the GRACE spherical harmonic solutions to TWS analysis requires the selection, design, and implementation of one of a wide variety of available filters. It is common to then estimate and apply a set of scale factors to these filtered solutions in an attempt to restore lost signal. The advent of global mascon solutions, such as those produced by our group at NASA GSFC, is an important advancement in time-variable gravity estimation. This method applies data-driven regularization at the normal equation level, resulting in improved estimates of regional TWS. Though mascons are a valuable product, the design of the constraint matrix, the global minimization of observation residuals, and the arc-specific parameters all introduce the possibility that localized basin scale signals are not perfectly recovered. The precise inter-satellite ranging instrument provides the primary observation set for the GRACE gravity solutions. Recently, we have developed an approach to analyze and calibrate basin scale TWS estimates directly from the inter-satellite observation residuals. To summarize, we compute the range-acceleration residuals for two different forward models by executing separate runs of our Level-1B processing system. We then quantify the linear relationship that exists between the modeled mass and the residual differences, defining a simple differential correction procedure that is applied to the modeled signals.
This new calibration procedure does not require the computationally expensive formation and inversion of normal equations, and it eliminates any influence the solution technique may have on the determined regional time series of TWS. We apply this calibration approach to sixteen drainage basins that cover North America and present new measurements of TWS determined directly from the Level-1B range-acceleration residuals. Lastly, we compare these new solutions to other GRACE solutions and independent datasets.
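
The calibration step, quantifying a linear relationship between modeled mass and residual differences, can be caricatured as a regression through the origin (a heavily simplified sketch with made-up numbers; the actual Level-1B procedure involves arc-specific processing not shown here):

```python
def calibration_scale(modeled, residual_diffs):
    """Least-squares scale s minimizing sum((residual_diffs - s * modeled)^2),
    i.e. a regression through the origin; s * modeled is the calibrated series."""
    num = sum(m * r for m, r in zip(modeled, residual_diffs))
    den = sum(m * m for m in modeled)
    return num / den
```

The appeal of such a differential correction is exactly what the abstract states: it avoids forming and inverting normal equations while tying the basin time series directly to the range-acceleration residuals.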

  11. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
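
A sketch of the ingredients discussed above: the one-way ANOVA estimator of the ICC, a large-sample (Smith-type) normal-approximation interval, and the ad hoc division of clusters into sub-clusters. The formulas are simplified to equal cluster sizes, and the variance expression is one common large-sample form; the paper's exact expressions may differ:

```python
def icc_anova(clusters):
    """One-way ANOVA (moment) estimator of the ICC for equal-size clusters."""
    k = len(clusters)
    m = len(clusters[0])
    grand = sum(sum(c) for c in clusters) / (k * m)
    msb = m * sum((sum(c) / m - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum((x - sum(c) / m) ** 2 for c in clusters for x in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

def icc_ci_normal(rho, k, m, z=1.96):
    """Normal-approximation CI using a large-sample variance of the ANOVA
    estimator: 2(1-rho)^2 (1+(m-1)rho)^2 / (m(m-1)(k-1))."""
    var = 2 * (1 - rho) ** 2 * (1 + (m - 1) * rho) ** 2 / (m * (m - 1) * (k - 1))
    se = var ** 0.5
    return rho - z * se, rho + z * se

def split_into_subclusters(clusters, size=5):
    """Ad hoc approach: divide each cluster into sub-clusters of `size`,
    increasing k so the large-sample interval behaves better."""
    return [c[i:i + size] for c in clusters for i in range(0, len(c) - size + 1, size)]
```

With few large clusters, k - 1 is tiny and the interval is unusable; splitting into sub-clusters of 5 inflates k, which is the mechanism behind the paper's recommendation.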

  12. Assessing ecological integrity of Ozark rivers to determine suitability for protective status

    USGS Publications Warehouse

    Radwell, A.J.; Kwak, T.J.

    2005-01-01

Preservation of extraordinary natural resources, protection of water quality, and restoration of impaired waters require a strategy to identify and protect least-disturbed streams and rivers. We applied two objective, quantitative methods to determine the stream ecological integrity of headwater reaches of 10 Ozark rivers, 5 with federal Wild and Scenic River protective status. Thirty-four variables representing macroinvertebrate and fish assemblage characteristics, in-stream habitat, riparian vegetation, water quality, and watershed attributes were quantified for each river and analyzed using two multivariate approaches. The first approach, cluster and discriminant analyses, identified two groups of rivers, with only one variable (% forested watershed) reliably distinguishing the groups. Our second approach employed ordinal scaling to compare variables for each river to conceptually ideal conditions, developed as a composite of optimal attributes among the 10 rivers. The composite distance of each river from ideal was then calculated using a unidimensional ranking technique. Two rivers without Wild and Scenic River designation ranked highest relative to ideal (highest ecological integrity), and two others, also without designation, ranked most distant from ideal (lowest ecological integrity). Fish density, number of intolerant fish species, and invertebrate density were influential biotic variables for scaling. Contributing physical variables included riparian forest cover, water nitrate concentration, water turbidity, percentage of forested watershed, percentage of private land ownership, and road density. These methods provide a framework for refinement and application in other regions to facilitate the process of establishing least-disturbed reference conditions and identifying rivers for protection and restoration. © 2005 Springer Science+Business Media, Inc.
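
The ordinal-scaling approach can be sketched as follows: rescale each variable onto [0, 1] with its preferred direction, define the composite ideal as the best observed value of every variable, and rank rivers by their distance from that ideal (variable names and values below are hypothetical):

```python
def rank_by_distance_from_ideal(rivers, higher_is_better):
    """rivers: {name: {variable: value}}; higher_is_better: {variable: bool}.
    Returns river names ordered from closest to the composite ideal (highest
    integrity) to most distant (lowest integrity)."""
    names = list(rivers)
    scaled = {r: [] for r in names}
    for v, better_high in higher_is_better.items():
        vals = [rivers[r][v] for r in names]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        for r in names:
            s = (rivers[r][v] - lo) / span
            scaled[r].append(s if better_high else 1.0 - s)
    # the ideal is 1.0 on every rescaled axis; use Euclidean distance from it
    dist = {r: sum((1.0 - s) ** 2 for s in scaled[r]) ** 0.5 for r in names}
    return sorted(names, key=dist.get)
```

Rescaling makes fish densities and, say, nitrate concentrations commensurable, so a single composite distance can order the 10 rivers.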

  13. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraints, designers adopt deployable configurations on the spacecraft, which result in an interesting and difficult optimization problem. The resulting problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, design optimization systems are rarely integrated with operational optimization and with maximizing the utility of the spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO because the model predictive controller developed in this dissertation guarantees that the behavior designed on the ground is achieved in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique called the "Attitude Sphere" is extended and merged with additional engineering tools such as OpenGL, whose graphics acceleration facilitates accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of the effort is extended from single-discipline optimization to multidisciplinary optimization, including the design and operation of the EPS, the Attitude Determination and Control System (ADCS), and the communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue, controllers for ADCS operations based on Model Predictive Control, which is effective for constraint handling, were developed and implemented. All the suggested design and operation methodologies are applied to the mission "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness of the methodology in enhancing CADRE's capabilities and its applicability to a variety of missions.

  14. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models

    PubMed Central

    Li, Xiaohong; Zhang, Yuyan

    2018-01-01

The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence (reflux) time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. The optimal extraction conditions are found to be ammonia concentration 0.595%, ethanol concentration 58.45%, reflux time 2.5 h, and liquid-solid ratio 11.065:1. Under these conditions, the model predicts 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of a genetic algorithm with artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra. PMID:29887907

  15. Optimization of Bioactive Ingredient Extraction from Chinese Herbal Medicine Glycyrrhiza glabra: A Comparative Study of Three Optimization Models.

    PubMed

    Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan

    2018-01-01

The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, circumfluence (reflux) time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. The optimal extraction conditions are found to be ammonia concentration 0.595%, ethanol concentration 58.45%, reflux time 2.5 h, and liquid-solid ratio 11.065:1. Under these conditions, the model predicts 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of a genetic algorithm with artificial neural networks provides a more reliable and more accurate strategy for the design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra.
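
The four-factor, five-level central composite design mentioned above is easy to generate: 2^4 factorial points, 2*4 axial points at +/-alpha, and centre points. With alpha = 2 and one centre point this gives exactly the 25 runs reported (a generic CCD sketch in coded units, not the authors' run order):

```python
from itertools import product

def central_composite(k, alpha=2.0, n_center=1):
    """Central composite design in coded units for k factors: full 2^k
    factorial at +/-1, 2k axial (star) points at +/-alpha, plus centre points."""
    pts = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    for j in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[j] = a
            pts.append(pt)
    pts += [[0.0] * k for _ in range(n_center)]
    return pts
```

Each coded level is then mapped linearly onto the physical range of its factor (e.g. ammonia concentration), giving the five levels per factor used in the experiments.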

  16. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
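
Monte Carlo reliability analysis of a strength-minus-load limit state can be sketched as follows, with normal strength R and load S standing in for the buckling strength and applied load (the report's response-surface model and FORM step are omitted; distribution parameters below are illustrative):

```python
import random
from statistics import NormalDist

def mc_reliability_index(mu_r, sig_r, mu_s, sig_s, n=200_000, seed=1):
    """Estimate Pf = P(R - S < 0) by Monte Carlo sampling and convert it to a
    reliability index beta = -Phi^{-1}(Pf)."""
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sig_r) - rng.gauss(mu_s, sig_s) < 0 for _ in range(n))
    pf = fails / n
    return pf, -NormalDist().inv_cdf(pf)
```

For two normals the exact index is (mu_r - mu_s) / sqrt(sig_r^2 + sig_s^2), which provides a check on the sampled estimate; increasing the load's coefficient of variation lowers beta, mirroring the sensitivity result in the abstract.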

  17. Carbaryl washoff from soybean plants.

    PubMed

    Willis, G H; Smith, S; McDowell, L L; Southwick, L M

    1996-08-01

Both the efficacy and fate of most foliar-applied pesticides may be affected by weather variables, especially rain. A multiple-intensity rainfall simulator was used to determine the effects of rainfall intensity and amount on concentrations of carbaryl (Sevin® XLS Plus) washed from soybean plants. Two hours after carbaryl was applied at 1.12 kg/ha, 25 mm of rain was applied at intensities of 13.0, 27.4, 53.8, or 105.1 mm/h. About 67% of the carbaryl on the plants was washed off by 25 mm of rain. Rainfall intensity affected carbaryl concentrations in washoff; higher concentrations occurred at lower intensities. Even though the experimental conditions were designed for "worst-case" conditions, washoff patterns suggested improved carbaryl rainfastness when compared to carbaryl (formulated as a wettable powder) washoff from cotton plants in earlier studies. Rainfall amount had a greater effect on carbaryl concentrations in washoff than rainfall intensity.

  18. Determining anthropometric variables as a tool in the preparation of furniture and designs of interior spaces: the case of children 6 to 11 years old of Vicosa, State of Minas Gerais, Brazil.

    PubMed

    Zanuncio, Sharinna Venturim; Mafra, Simone Caldas Tavares; Antônio, Carlos Emílio Barbosa; Filho, Jugurta Lisboa; Vidigal Guimarães, Elza Maria; da Silva, Vania Eugênia; de Souza, Amaury Paulo; Minette, Luciano José

    2012-01-01

Adequate facilities and furnishings for individuals in different age groups are important for ensuring functionality and allowing the full development of daily activities. For this to occur efficiently, ergonomics must be applied, as it can ensure greater comfort and safety for the end users of products and spaces. The present study aimed to measure the body dimensions of a representative sample of children aged 6 to 11 years (children of graduate and postgraduate students, faculty, and staff of the Federal University of Vicosa, as well as residents of the city of Vicosa, State of Minas Gerais, Brazil, coming from different municipalities of the state) in order to organize a database that provides the furniture industry with anthropometric variables better suited to designing products both for leisure activities and for the school sector. The research followed the methodology proposed by Panero and Zelnik, based on samples distributed in six age groups and on the measurement of 10 variables. Applying this methodology in the field made it possible to compare the observed data with the tables of the aforementioned authors. The main results revealed significant variation in the 10 variables analyzed, and it is believed that this variation could lead to flaws in the design of products based on those authors' data. The study provides data on Vicosa that are considered more appropriate for the design of products and environments for the studied population, taking into account age and region (State of Minas Gerais, Brazil), and it is believed that studies of this nature may in the future be extended to the Brazilian population as a whole.

  19. Design and validation of a questionnaire to measure the attitudes of hospital staff concerning pandemic influenza.

    PubMed

    Naghavi, Seyed Hamid Reza; Shabestari, Omid; Roudsari, Abdul V; Harrison, John

    2012-03-01

    When pandemics lead to a higher workload in the healthcare sector, the attitude of healthcare staff and, more importantly, the ability to predict the rate of absence due to sickness are crucial factors in emergency preparedness and resource allocation. The aim of this study was to design and validate a questionnaire to measure the attitude of hospital staff toward work attendance during an influenza pandemic. An online questionnaire was designed and electronically distributed to the staff of a teaching medical institution in the United Kingdom. The questionnaire was designed de novo following discussions with colleagues at Imperial College and with reference to the literature on the severe acute respiratory syndrome (SARS) epidemic. The questionnaire included 15 independent fact variables and 33 dependent measure variables. A total of 367 responses were received in this survey. The data from the measurement variables were not normally distributed. Three different methods (standardized residuals, Mahalanobis distance and Cook's distance) were used to identify the outliers. In all, 19 respondents (5.17%) were identified as outliers and were excluded. The responses to this questionnaire had a wide range of missing data, from 1 to 74 cases in the measured variables. To improve the quality of the data, missing value analysis, using Expectation Maximization Algorithm (EMA) with a non-normal distribution model, was applied to the responses. The collected data were checked for homoscedasticity and multicollinearity of the variables. These tests suggested that some of the questions should be merged. In the last step, the reliability of the questionnaire was evaluated. This process showed that three questions reduced the reliability of the questionnaire. Removing those questions helped to achieve the desired level of reliability. 
With the changes proposed in this article, the questionnaire for measuring staff attitudes concerning pandemic influenza can be converted to a standardized and validated questionnaire to properly measure the expectations and attendance of healthcare staff in the event of pandemic flu. Copyright © 2011 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
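One of the three outlier screens mentioned in this abstract, the Mahalanobis distance, can be sketched on simulated survey scores. The data, the shift used to plant aberrant respondents, and the chi-square cutoff are all illustrative assumptions, not details from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(360, 5))   # simulated attitude-scale scores
X[:10] += 6.0                             # plant 10 aberrant respondents

mu = X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
# Squared Mahalanobis distance of each respondent from the multivariate mean.
d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

cutoff = stats.chi2.ppf(0.999, df=X.shape[1])   # 99.9% chi-square quantile
outliers = np.flatnonzero(d2 > cutoff)
```

In practice the three screens (standardized residuals, Mahalanobis distance, Cook's distance) flag overlapping but not identical cases, which is why the study combined them.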

  20. Packet Randomized Experiments for Eliminating Classes of Confounders

    PubMed Central

    Pavela, Greg; Wiener, Howard; Fontaine, Kevin R.; Fields, David A.; Voss, Jameson D.; Allison, David B.

    2014-01-01

    Background Although randomization is considered essential for causal inference, it is often not possible to randomize in nutrition and obesity research. To address this, we develop a framework for an experimental design—packet randomized experiments (PREs), which improves causal inferences when randomization on a single treatment variable is not possible. This situation arises when subjects are randomly assigned to a condition (such as a new roommate) which varies in one characteristic of interest (such as weight), but also varies across many others. There has been no general discussion of this experimental design, including its strengths, limitations, and statistical properties. As such, researchers are left to develop and apply PREs on an ad hoc basis, limiting its potential to improve causal inferences among nutrition and obesity researchers. Methods We introduce PREs as an intermediary design between randomized controlled trials and observational studies. We review previous research that used the PRE design and describe its application in obesity-related research, including random roommate assignments, heterochronic parabiosis, and the quasi-random assignment of subjects to geographic areas. We then provide a statistical framework to control for potential packet-level confounders not accounted for by randomization. Results PREs have successfully been used to improve causal estimates of the effect of roommates, altitude, and breastfeeding on weight outcomes. When certain assumptions are met, PREs can asymptotically control for packet-level characteristics. This has the potential to statistically estimate the effect of a single treatment even when randomization to a single treatment did not occur. Conclusions Applying PREs to obesity-related research will improve decisions about clinical, public health, and policy actions insofar as it offers researchers new insight into cause and effect relationships among variables. PMID:25444088

  1. Optimum design of bolted composite lap joints under mechanical and thermal loading

    NASA Astrophysics Data System (ADS)

    Kradinov, Vladimir Yurievich

A new approach is developed for the analysis and design of mechanically fastened composite lap joints under mechanical and thermal loading. Based on a combined complex potential and variational formulation, the solution method satisfies the equilibrium equations exactly, while the boundary conditions are satisfied by minimizing the total potential. This approach is capable of modeling finite laminate planform dimensions, uniform and variable laminate thickness, laminate lay-up, interaction among bolts, bolt torque, bolt flexibility, bolt size, bolt-hole clearance and interference, insert dimensions and insert material properties. Compared with finite element analysis, the robustness of the method does not decrease when modeling the interaction of many bolts; the method is also more suitable for parametric study and design optimization. The Genetic Algorithm (GA), a powerful optimization technique for functions with multiple extrema in multidimensional search spaces, is applied in conjunction with the complex potential and variational formulation to achieve optimum designs of bolted composite lap joints. The objective of the optimization is to obtain a design that ensures the highest strength of the joint. The fitness function for the GA optimization is based on the average stress failure criterion, which predicts net-section, shear-out, and bearing failure modes in bolted lap joints. The criterion accounts for the stress distribution in the thickness direction at the bolt location through a beam-on-elastic-foundation formulation.
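The GA loop used in work of this kind can be sketched with a toy surrogate in place of the joint strength analysis. The two design variables, their bounds, and the peaked "strength" function below are invented stand-ins for the bolt-layout variables and the average stress failure criterion.

```python
import random

random.seed(42)

# Hypothetical surrogate: joint "strength" versus two normalized bolt
# spacing ratios in [0, 1]; the true optimum is at (0.6, 0.4).
def strength(x, y):
    return 1.0 - (x - 0.6) ** 2 - (y - 0.4) ** 2

def ga(pop_size=40, gens=60, pm=0.2):
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: strength(*p), reverse=True)
        elite = pop[: pop_size // 2]            # keep the fittest half
        children = []
        while len(elite) + len(children) < pop_size:
            (x1, y1), (x2, y2) = random.sample(elite, 2)
            x, y = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # averaging crossover
            if random.random() < pm:                  # Gaussian mutation
                x = min(1.0, max(0.0, x + random.gauss(0.0, 0.1)))
                y = min(1.0, max(0.0, y + random.gauss(0.0, 0.1)))
            children.append((x, y))
        pop = elite + children
    return max(pop, key=lambda p: strength(*p))

best = ga()
```

Because the elite survive unchanged between generations, the best design never regresses; in the actual method, `strength` would be replaced by the fitness derived from the average stress failure criterion.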

  2. 13 CFR 120.214 - What conditions apply for variable interest rates?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... interest rates? 120.214 Section 120.214 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Policies Specific to 7(a) Loans Maturities; Interest Rates; Loan and Guarantee Amounts § 120.214 What conditions apply for variable interest rates? A Lender may use a variable rate of interest...

  3. Is Solar Variability Reflected in the Nile River?

    NASA Technical Reports Server (NTRS)

    Ruzmaikin, Alexander; Feynman, Joan; Yung, Yuk L.

    2006-01-01

We investigate the possibility that solar variability influences North African climate by using annual records of the water level of the Nile collected in 622-1470 A.D. The time series of these records are nonstationary, in that the amplitudes and frequencies of the quasi-periodic variations are time-dependent. We apply the Empirical Mode Decomposition technique especially designed to deal with such time series. We identify two characteristic timescales in the records that may be linked to solar variability: a period of about 88 years and one exceeding 200 years. We show that these timescales are present in the number of auroras reported per decade in the Northern Hemisphere at the same time. The 11-year cycle is seen in the Nile's high-water level variations, but it is damped in the low-water anomalies. We suggest a possible physical link between solar variability and the low-frequency variations of the Nile water level. This link involves the influence of solar variability on the atmospheric Northern Annular Mode and on its North Atlantic Ocean and Indian Ocean patterns that affect the rainfall over the sources of the Nile in eastern equatorial Africa.

  4. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach breaks down with the problem of size, a combinatorial explosion in experimentation and model building with the number of variables, and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
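The combinatorial explosion referred to in this abstract is easy to quantify: the run counts of a three-level full factorial and of a central composite design (a classical response-surface design) grow with the number of variables as below. These are the standard textbook counts, not figures specific to the HSCT study.

```python
# Runs required by a three-level full factorial (levels**k) versus a
# central composite design (2**k + 2*k + 1) as the variable count k grows.
def full_factorial_runs(k, levels=3):
    return levels ** k

def ccd_runs(k):
    return 2 ** k + 2 * k + 1

table = {k: (full_factorial_runs(k), ccd_runs(k)) for k in (2, 5, 10, 15)}
```

Even the cheaper central composite design needs tens of thousands of analyses at 15 variables, which is the "problem of size" the paper discusses.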

  5. Optimization of Turbine Blade Design for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1998-01-01

To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations of flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained, and the performances of two different design functions for radial-basis networks are compared: one based on an accuracy requirement, the other on a limit on network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations were used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of the two network types, a radial-basis network and a back-propagation network, depends on the amount of input data, the radial-basis network requires fewer training iterations than the back-propagation network.
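The comparison described here, radial-basis approximation versus a polynomial response surface, can be sketched on a one-dimensional stand-in for the airfoil data. The test function, Gaussian kernel width, and polynomial degree are arbitrary choices for illustration, not the report's settings.

```python
import numpy as np

# Synthetic "lift versus angle of attack" curve standing in for CFD samples.
x_train = np.linspace(-5.0, 15.0, 40)
y_train = np.sin(0.3 * x_train) + 0.05 * x_train

def rbf_predict(x_new, centers, values, width=1.0):
    # Gaussian radial-basis interpolation with a small ridge for conditioning.
    phi = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / width) ** 2)
    w = np.linalg.solve(phi(centers, centers) + 1e-8 * np.eye(len(centers)),
                        values)
    return phi(x_new, centers) @ w

def poly_predict(x_new, x, y, deg=2):
    # Quadratic response surface fit by least squares.
    return np.polyval(np.polyfit(x, y, deg), x_new)

x_test = np.linspace(-4.0, 14.0, 101)
y_true = np.sin(0.3 * x_test) + 0.05 * x_test
err_rbf = np.max(np.abs(rbf_predict(x_test, x_train, y_train) - y_true))
err_poly = np.max(np.abs(poly_predict(x_test, x_train, y_train) - y_true))
```

With plentiful samples the radial-basis surrogate tracks the curve far more closely than the low-order polynomial, mirroring the large-data-set finding above; with very few samples the polynomial's rigidity can become an advantage.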

  6. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
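The adaptive cluster sampling idea in this abstract, expanding the sample around units where the condition of interest is observed, can be sketched on a toy grid population. The grid, cluster placement, and initial sample size are invented for illustration.

```python
import random

random.seed(7)

# Toy population: counts of the rare characteristic on a 10x10 grid,
# clustered in one corner (as rare, hidden populations often are).
grid = {(r, c): 0 for r in range(10) for c in range(10)}
for cell in [(1, 1), (1, 2), (2, 1), (2, 2), (3, 2)]:
    grid[cell] = random.randint(3, 9)

def neighbors(cell):
    r, c = cell
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (r + dr, c + dc) in grid]

def adaptive_cluster_sample(initial_n=15):
    # Phase 1: simple random sample of cells.
    sample = set(random.sample(list(grid), initial_n))
    # Phase 2: whenever a sampled cell meets the condition (count > 0),
    # add all of its neighbors, and keep expanding through positives.
    frontier = [cell for cell in sample if grid[cell] > 0]
    while frontier:
        cell = frontier.pop()
        for nb in neighbors(cell):
            if nb not in sample:
                sample.add(nb)
                if grid[nb] > 0:
                    frontier.append(nb)
    return sample

s = adaptive_cluster_sample()
```

Unbiased estimation afterwards requires weights that account for the adaptive inclusion rule (e.g., Horvitz-Thompson type estimators), which is the inferential machinery the article surveys.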

  7. Reduction of Sample Size Requirements by Bilateral Versus Unilateral Research Designs in Animal Models for Cartilage Tissue Engineering

    PubMed Central

    Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali

    2013-01-01

    Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering. PMID:23510128
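The sample-size arithmetic behind this comparison can be sketched with the standard normal-approximation formula for two-group mean comparisons at α=0.05 and power 0.80, using the reported 14%CV and 26%CV. The assumed mean score and detectable difference below are illustrative, not the study's values.

```python
import math
from statistics import NormalDist

def joints_per_group(sd, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for detecting a mean
    difference delta between two independent groups (two-sided alpha)."""
    z = NormalDist().inv_cdf
    n = 2.0 * (z(1.0 - alpha / 2.0) + z(power)) ** 2 * (sd / delta) ** 2
    return math.ceil(n)

# Hypothetical mean score of 20 points and target difference of 2 points;
# standard deviations follow from the reported coefficients of variation.
n_bilateral = joints_per_group(sd=0.14 * 20.0, delta=2.0)
n_unilateral = joints_per_group(sd=0.26 * 20.0, delta=2.0)
```

Because required n scales with the square of the standard deviation, the 1.9-fold increase in %CV under the unilateral design translates into roughly a 3.4-fold increase in required joints, consistent with the up-to-3.9-fold figure reported above.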

  8. Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2004-06-01

    Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
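The MCS-on-RSM step this abstract describes, propagating parameter randomness through a cheap response surface and scoring robustness, can be sketched as follows. The quadratic "die stress" surface, the scatter magnitudes, and the mean-plus-three-sigma metric are invented stand-ins for the paper's forging models.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical quadratic response surface: die stress (MPa) as a function of
# billet temperature deviation t and friction factor deviation f.
def die_stress(t, f):
    return 400.0 + 12.0 * t + 30.0 * f + 4.0 * t * f + 6.0 * f ** 2

def robust_cost(t0, f0, n=50_000):
    # Monte Carlo Simulation on the response surface around a setting (t0, f0).
    t = rng.normal(t0, 0.5, n)      # temperature control scatter
    f = rng.normal(f0, 0.2, n)      # friction scatter
    s = die_stress(t, f)
    return s.mean() + 3.0 * s.std() # mean + 3 sigma robustness metric

cost_nominal = robust_cost(0.0, 0.0)   # well-centered process setting
cost_detuned = robust_cost(1.0, 0.5)   # biased setting, for comparison
```

A robust optimizer would search over `(t0, f0)` (and, in the paper, the preform shape parameters) to minimize such a metric rather than the deterministic stress alone.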

  9. NESSUS (Numerical Evaluation of Stochastic Structures Under Stress)/EXPERT: Bridging the gap between artificial intelligence and FORTRAN

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Palmer, Karol K.

    1988-01-01

    The development of a probabilistic structural analysis methodology (PSAM) is described. In the near-term, the methodology will be applied to designing critical components of the next generation space shuttle main engine. In the long-term, PSAM will be applied very broadly, providing designers with a new technology for more effective design of structures whose character and performance are significantly affected by random variables. The software under development to implement the ideas developed in PSAM resembles, in many ways, conventional deterministic structural analysis code. However, several additional capabilities regarding the probabilistic analysis makes the input data requirements and the resulting output even more complex. As a result, an intelligent front- and back-end to the code is being developed to assist the design engineer in providing the input data in a correct and appropriate manner. The type of knowledge that this entails is, in general, heuristically-based, allowing the fairly well-understood technology of production rules to apply with little difficulty. However, the PSAM code, called NESSUS, is written in FORTRAN-77 and runs on a DEC VAX. Thus, the associated expert system, called NESSUS/EXPERT, must run on a DEC VAX as well, and integrate effectively and efficiently with the existing FORTRAN code. This paper discusses the process undergone to select a suitable tool, identify an appropriate division between the functions that should be performed in FORTRAN and those that should be performed by production rules, and how integration of the conventional and AI technologies was achieved.

  10. A Survey of Phase Variable Candidates of Human Locomotion

    PubMed Central

    Villarreal, Dario J.; Gregg, Robert D.

    2014-01-01

    Studies show that the human nervous system is able to parameterize gait cycle phase using sensory feedback. In the field of bipedal robots, the concept of a phase variable has been successfully used to mimic this behavior by parameterizing the gait cycle in a time-independent manner. This approach has been applied to control a powered transfemoral prosthetic leg, but the proposed phase variable was limited to the stance period of the prosthesis only. In order to achieve a more robust controller, we attempt to find a new phase variable that fully parameterizes the gait cycle of a prosthetic leg. The angle with respect to a global reference frame at the hip is able to monotonically parameterize both the stance and swing periods of the gait cycle. This survey looks at multiple phase variable candidates involving the hip angle with respect to a global reference frame across multiple tasks including level-ground walking, running, and stair negotiation. In particular, we propose a novel phase variable candidate that monotonically parameterizes the whole gait cycle across all tasks, and does so particularly well across level-ground walking. In addition to furthering the design of robust robotic prosthetic leg controllers, this survey could help neuroscientists and physicians study human locomotion across tasks from a time-independent perspective. PMID:25570873
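The idea of a monotonic phase variable built from the global hip angle can be illustrated with a toy sinusoidal hip trajectory. The phase-portrait construction below (angle versus scaled angular velocity) is one common way to obtain a monotonic phase; the trajectory is invented for illustration, not gait data.

```python
import math

def hip_angle(t):
    """Toy global hip angle (degrees) over one normalized gait cycle t in [0, 1)."""
    return 30.0 * math.cos(2.0 * math.pi * t)

def hip_velocity(t, h=1e-5):
    # Central finite difference of the hip angle.
    return (hip_angle(t + h) - hip_angle(t - h)) / (2.0 * h)

def phase(t, omega=2.0 * math.pi):
    # Phase from the portrait of angle vs. velocity; dividing the velocity
    # by omega makes the portrait circular so the phase advances uniformly.
    p = math.atan2(-hip_velocity(t) / omega, hip_angle(t)) / (2.0 * math.pi)
    return p % 1.0

samples = [phase(t / 100.0) for t in range(100)]
```

For this idealized trajectory the computed phase increases monotonically from 0 to 1 over the cycle, which is exactly the property sought in a phase variable spanning both stance and swing; real hip trajectories require detrending and scaling before the portrait is this clean.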

  11. Group Variable Selection Via Convex Log-Exp-Sum Penalty with Application to a Breast Cancer Survivor Study

    PubMed Central

    Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace

    2017-01-01

Summary In many scientific and engineering applications, covariates are naturally grouped. When group structures are available among covariates, people are usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some fail to conduct within-group selection. Others are able to conduct both group and within-group selection, but the corresponding objective functions are non-convex, and such non-convexity may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum (LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within the group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the quality of life for breast cancer survivors. PMID:25257196

  12. A study protocol to evaluate the relationship between outdoor air pollution and pregnancy outcomes.

    PubMed

    Ribeiro, Manuel C; Pereira, Maria J; Soares, Amílcar; Branquinho, Cristina; Augusto, Sofia; Llop, Esteve; Fonseca, Susana; Nave, Joaquim G; Tavares, António B; Dias, Carlos M; Silva, Ana; Selemane, Ismael; de Toro, Joaquin; Santos, Mário J; Santos, Fernanda

    2010-10-15

The present study protocol is designed to assess the relationship between outdoor air pollution and low birth weight and preterm birth outcomes by performing a semi-ecological analysis. Semi-ecological design studies are widely used to assess the effects of air pollution on humans. In this type of analysis, health outcomes and covariates are measured in individuals, and exposure assignments are usually based on air quality monitoring stations. Therefore, estimating individual exposures is one of the major challenges when investigating these relationships with a semi-ecologic design. A semi-ecologic study consisting of a retrospective cohort study with ecologic assignment of exposure is applied. Health outcomes and covariates are collected at a Primary Health Care Center. Data from the pregnancy registry, clinical records and a specific questionnaire administered orally to the mothers of children born in the period 2007-2010 in the Portuguese Alentejo Litoral region are collected by the research team. Outdoor air pollution data are collected with a lichen diversity biomonitoring program, and individual pregnancy exposures are assessed with spatial geostatistical simulation, which provides the basis for uncertainty analysis of individual exposures. Awareness of outdoor air pollution uncertainty will improve the validity of individual exposure assignments for further statistical analysis with multivariate regression models. Exposure misclassification is an issue of concern in semi-ecological designs. In this study, personal exposures are assigned to each pregnant woman using geocoded address data. A stochastic simulation method is applied to the lichen diversity index values measured at biomonitoring survey locations, in order to assess the spatial uncertainty of the lichen diversity index at each geocoded address. These methods assume a model for the spatial autocorrelation of exposure and provide a distribution of exposures at each study location.
We believe that variability of simulated exposure values at geocoded addresses will improve knowledge on variability of exposures, improving therefore validity of individual exposures to input in posterior statistical analysis.

  13. Preparation of Salicylic Acid Loaded Nanostructured Lipid Carriers Using Box-Behnken Design: Optimization, Characterization and Physicochemical Stability.

    PubMed

    Pantub, Ketrawee; Wongtrakul, Paveena; Janwitayanuchit, Wicharn

    2017-01-01

Salicylic acid-loaded nanostructured lipid carriers (NLCs-SA) were developed and optimized using design of experiment (DOE) methods. A 3-factor, 3-level Box-Behnken experimental design was applied to optimize nanostructured lipid carriers prepared by an emulsification method. The independent variables were total lipid concentration (X1), stearic acid to Lexol® GT-865 ratio (X2) and Tween® 80 concentration (X3), while particle size was the dependent variable (Y). The Box-Behnken design generated 15 runs, with the response optimizer set to minimize particle size. The optimized formulation consisted of 10% total lipid, a mixture of stearic acid and capric/caprylic triglyceride at a 4:1 ratio, and 25% Tween® 80; this formulation was used to prepare both salicylic acid-loaded and unloaded carriers. Twenty-four hours after preparation, the particle sizes of the loaded and unloaded carriers were 189.62±1.82 nm and 369.00±3.37 nm, respectively. Response surface analysis revealed that the amount of total lipid is the main factor affecting the particle size of the lipid carriers. In addition, the stability studies showed a significant change in particle size over time. Compared with the unloaded nanoparticles, the addition of salicylic acid to the particles resulted in a physically stable dispersion; after 30 days, sedimentation of the unloaded lipid carriers was clearly observed. Absolute values of the zeta potential of both systems were in the range of 3 to 18 mV, since the non-ionic surfactant Tween® 80, which provides a steric barrier, was used. Differential thermograms indicated a shift of the endothermic peak from 55°C for the α-crystal form in freshly prepared samples to 60°C for the β´-crystal form in stored samples. The presence of capric/caprylic triglyceride oil was found to enhance encapsulation efficiency up to 80% and to improve the stability of the particles.
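For reference, the 15-run layout used above is the standard three-factor Box-Behnken design: 12 edge midpoints of the cube plus three center points. A generic generator in coded (-1, 0, +1) units can be sketched as:

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Coded Box-Behnken design: a +/-1 two-level factorial on every pair of
    factors with the remaining factors held at 0, plus center-point runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * k for _ in range(center_runs)])
    return runs

design = box_behnken(3)   # 3 factors -> 12 edge runs + 3 centers = 15 runs
```

Each coded row is then mapped to physical levels (here, total lipid, lipid ratio, and surfactant concentration) before the emulsification runs.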

  14. Are your covariates under control? How normalization can re-introduce covariate effects.

    PubMed

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
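A minimal simulation of the two orderings can be sketched as follows; the data-generating model (a skewed, covariate-dependent outcome) is invented for illustration. By construction, ordinary-least-squares residuals are exactly uncorrelated with the regressor in-sample, so transforming first and adjusting afterwards leaves the covariate uncorrelated, whereas applying the nonlinear rank-based INT to already-adjusted residuals can re-introduce correlation.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(11)

def rank_int(x):
    """Rank-based inverse normal transformation with the Blom offset."""
    r = rankdata(x)
    return norm.ppf((r - 0.375) / (len(x) + 0.25))

def ols_residuals(dep, reg):
    # Residuals from a simple linear regression of dep on reg.
    return dep - np.polyval(np.polyfit(reg, dep, 1), reg)

n = 5000
covariate = rng.normal(size=n)
y = np.exp(covariate + rng.normal(size=n))   # skewed, covariate-dependent outcome

# Order 1 (problematic): adjust for the covariate, then INT the residuals.
z1 = rank_int(ols_residuals(y, covariate))
r1 = np.corrcoef(z1, covariate)[0, 1]

# Order 2 (recommended): INT the dependent variable, then adjust.
r2 = np.corrcoef(ols_residuals(rank_int(y), covariate), covariate)[0, 1]
```

Here `r2` is zero up to floating-point error, while `r1` is visibly nonzero, which is the re-introduced covariate effect the study warns about.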

  15. Genetic reinforcement learning through symbiotic evolution for fuzzy controller design.

    PubMed

    Juang, C F; Lin, J Y; Lin, C T

    2000-01-01

An efficient genetic reinforcement learning algorithm for designing fuzzy controllers is proposed in this paper. The genetic algorithm (GA) adopted in this paper is based upon symbiotic evolution which, when applied to fuzzy controller design, complements the local mapping property of a fuzzy rule. Using this Symbiotic-Evolution-based Fuzzy Controller (SEFC) design method, the number of control trials, as well as consumed CPU time, are considerably reduced when compared to traditional GA-based fuzzy controller design methods and other types of genetic reinforcement learning schemes. Moreover, unlike traditional fuzzy controllers, which partition the input space into a grid, SEFC partitions the input space in a flexible way, thus creating fewer fuzzy rules. In SEFC, different types of fuzzy rules whose consequent parts are singletons, fuzzy sets, or linear equations (TSK-type fuzzy rules) are allowed. Further, the free parameters (e.g., centers and widths of membership functions) and fuzzy rules are all tuned automatically. For TSK-type fuzzy rules in particular, the proposed learning algorithm selects only the significant input variables to participate in the consequent of a rule. The proposed SEFC design method has been applied to different simulated control problems, including the cart-pole balancing system, a magnetic levitation system, and a water bath temperature control system. On these control problems, and in comparisons with some traditional GA-based fuzzy systems, the proposed SEFC has been verified to be efficient and superior.

  16. Process optimization for the preparation of oligomycin-loaded folate-conjugated chitosan nanoparticles as a tumor-targeted drug delivery system using a two-level factorial design method.

    PubMed

    Zu, Yuangang; Zhao, Qi; Zhao, Xiuhua; Zu, Shuchong; Meng, Li

    2011-01-01

    Oligomycin-A (Oli-A), an anticancer drug, was loaded to the folate (FA)-conjugated chitosan as a tumor-targeted drug delivery system for the purpose of overcoming the nonspecific targeting characteristics and the hydrophobicity of the compound. The two-level factorial design (2-LFD) was applied to modeling the preparation process, which was composed of five independent variables, namely FA-conjugated chitosan (FA-CS) concentration, Oli-A concentration, sodium tripolyphosphate (TPP) concentration, the mass ratio of FA-CS to TPP, and crosslinking time. The mean particle size (MPS) and the drug loading rate (DLR) of the resulting Oli-loaded FA-CS nanoparticles (FA-Oli-CSNPs) were used as response variables. The interactive effects of the five independent variables on the response variables were studied. The characteristics of the nanoparticles, such as amount of FA conjugation, drug entrapment rate (DER), DLR, surface morphology, and release kinetics properties in vitro were investigated. The FA-Oli-CSNPs with MPS of 182.6 nm, DER of 17.3%, DLR of 58.5%, and zeta potential (ZP) of 24.6 mV were obtained under optimum conditions. The amount of FA conjugation was 45.9 mg/g chitosan. The FA-Oli-CSNPs showed sustained-release characteristics for 576 hours in vitro. The results indicated that FA-Oli-CSNPs obtained as a targeted drug delivery system could be effective in the therapy of leukemia in the future.
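
    The screening logic of a two-level factorial design can be sketched generically. A full 2^5 design is used for simplicity, and the factor names, coded coefficients, and noise level below are hypothetical, not the paper's data:

```python
import itertools
import numpy as np

factors = ["FA_CS_conc", "Oli_conc", "TPP_conc", "mass_ratio", "xlink_time"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # 32 runs

rng = np.random.default_rng(2)
true_coefs = np.array([8.0, -3.0, 5.0, 0.5, 1.5])   # invented, in coded units
response = 180 + design @ true_coefs + rng.normal(0, 1.0, size=len(design))

# main effect of factor j = mean response at +1 minus mean at -1 (= 2 x coef)
main_effects = np.array([
    response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    for j in range(len(factors))
])
for name, eff in zip(factors, main_effects):
    print(f"{name}: {eff:+.2f}")
```

    Because the ±1 columns are mutually orthogonal, each main effect can be estimated by this simple contrast, which is what makes two-level designs attractive for screening five process variables at once.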

  17. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
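
    The idea of cost- and variance-driven phase-two selection probabilities can be illustrated with a Neyman-allocation-style heuristic. This is a generic sketch, not the paper's exact optimality result, and the strata values are invented:

```python
import numpy as np

def phase2_probs(cond_sd, cost, budget_fraction):
    # selection probability proportional to sqrt(Var(Y|W) / cost),
    # scaled so the expected phase-two sampling fraction meets the budget
    raw = np.sqrt(np.asarray(cond_sd, float) ** 2 / np.asarray(cost, float))
    return np.clip(raw / raw.mean() * budget_fraction, 0.0, 1.0)

cond_sd = np.array([1.0, 2.0, 4.0])   # SD of Y given W, per stratum (invented)
cost = np.array([1.0, 1.0, 1.0])      # equal measurement cost per stratum
probs = phase2_probs(cond_sd, cost, budget_fraction=0.3)
print(probs)
```

    Strata of W where Y is more variable (relative to cost) are sampled more heavily, which is the mechanism behind the efficiency gains reported in the abstract.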

  18. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  19. Software integration for automated stability analysis and design optimization of a bearingless rotor blade

    NASA Astrophysics Data System (ADS)

    Gunduz, Mustafa Emre

    Many government agencies and corporations around the world have found the unique capabilities of rotorcraft indispensable. Incorporating such capabilities into rotorcraft design poses extra challenges because it is a complicated multidisciplinary process. The concept of applying several disciplines to the design and optimization processes may not be new, but it does not currently seem to be widely accepted in industry. The reason for this might be the lack of well-known tools for realizing a complete multidisciplinary design and analysis of a product. This study aims to propose a method that enables engineers in some design disciplines to perform a fairly detailed analysis and optimization of a design using commercially available software as well as codes developed at Georgia Tech. The ultimate goal is that, once the system is set up properly, the CAD model of the design, including all subsystems, will be automatically updated as soon as a new part or assembly is added to the design, or when an analysis and/or an optimization is performed and the geometry needs to be modified. Designers and engineers will be involved in only checking the latest design for errors or adding/removing features. Such a design process will take dramatically less time to complete; therefore, it should reduce development time and costs. The optimization method is demonstrated on an existing helicopter rotor originally designed in the 1960s. The rotor is already an effective design with novel features. However, application of the optimization principles together with high-speed computing resulted in an even better design. The objective function to be minimized is related to the vibrations of the rotor system under gusty wind conditions. The design parameters are all continuous variables. Optimization is performed in a number of steps. First, the most crucial design variables of the objective function are identified. 
With these variables, the Latin Hypercube Sampling method is used to probe the design space for several local minima and maxima. After analysis of numerous samples, an optimum configuration of the design that is more stable than the initial design is reached. The above process requires several software tools: CATIA as the CAD tool, ANSYS as the FEA tool, VABS for obtaining the cross-sectional structural properties, and DYMORE for the frequency and dynamic analysis of the rotor. MATLAB codes are also employed to generate input files and read output files of DYMORE. All these tools are connected using ModelCenter.
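
    The Latin Hypercube step mentioned above can be sketched with a minimal stratified sampler. The dimension and sample counts are arbitrary, and the rotor design variables themselves are not modeled here:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    # one point per equal-probability stratum in each dimension,
    # with strata randomly paired across dimensions
    u = rng.uniform(size=(n_samples, n_dims))
    out = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        out[:, j] = (rng.permutation(n_samples) + u[:, j]) / n_samples
    return out

rng = np.random.default_rng(3)
pts = latin_hypercube(10, 3, rng)   # e.g. 10 probes over 3 design variables
print(np.sort((pts * 10).astype(int), axis=0)[:, 0])   # one sample per decile
```

    Unlike plain random sampling, every one-dimensional projection of the sample covers all strata exactly once, which is why LHS is favored for probing a design space with few expensive analyses.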

  20. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1994-01-01

    The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The usage of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e. three dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.
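
    The contrast between finite-difference sensitivities and a differencing-free alternative can be illustrated on a toy response. The thin-airfoil-style function below is a stand-in, not the report's flowfield model, and the complex-step method is used here only as a convenient machine-precision reference, not as the report's quasi-analytical formulation:

```python
import numpy as np

def lift_coeff(alpha, thickness):
    # toy thin-airfoil-style response (stand-in, not the report's flow model)
    return 2 * np.pi * alpha * (1.0 + 0.77 * thickness)

def finite_diff(f, x, h=1e-6):
    # central finite difference, subject to step-size/cancellation trade-offs
    return (f(x + h) - f(x - h)) / (2 * h)

def complex_step(f, x, h=1e-30):
    # derivative via a tiny imaginary perturbation: no subtractive cancellation
    return np.imag(f(x + 1j * h)) / h

alpha, t = 0.05, 0.12
d_fd = finite_diff(lambda a: lift_coeff(a, t), alpha)
d_cs = complex_step(lambda a: lift_coeff(a, t), alpha)
print(d_fd, d_cs)
```

    For an expensive flow solver, the finite-difference route additionally costs two solutions per design variable, which is the efficiency argument the report makes for the quasi-analytical approach.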

  1. Quantitative Analysis Of User Interfaces For Large Electronic Home Appliances And Mobile Devices Based On Lifestyle Categorization Of Older Users.

    PubMed

    Shin, Wonkyoung; Park, Minyong

    2017-01-01

    Background/Study Context: The increasing longevity and health of older users as well as aging populations have created the need to develop senior-oriented product interfaces. This study aims to find user interface (UI) priorities according to older user groups based on their lifestyle and develop quality of UI (QUI) models for large electronic home appliances and mobile products. A segmentation table designed to show how older users can be categorized was created through a review of the literature to survey 252 subjects with a questionnaire. Factor analysis was performed to extract six preliminary lifestyle factors, which were then used for subsequent cluster analysis. The analysis resulted in four groups. Cross-analysis was carried out to investigate which characteristics were included in the groups. Analysis of variance was then applied to investigate the differences in the UI priorities among the user groups for various electronic devices. Finally, QUI models were developed and applied to those electronic devices. Differences in UI priorities were found according to the four lifestyles ("money-oriented," "innovation-oriented," "stability- and simplicity-oriented," and "innovation- and intellectual-oriented"). Twelve QUI models were developed for four different lifestyle groups associated with different products. Three washers and three smartphones were used as an example for testing the QUI models. The UI differences of the older user groups by the segmentation in this study using several key (i.e., demographic, socioeconomic, and physical-cognitive) variables are distinct from earlier studies based on a single variable. The differences in responses clearly indicate the benefits of integrating various factors of older users, rather than a single variable, in order to design and develop more innovative and better consumer products in the future. 
The results of this study showed that older users with a potentially high buying power in the future are likely to have higher satisfaction when selecting products customized for their lifestyle. Designers could also use the results of UI evaluation for older users based on their lifestyle before developing products through QUI modeling. This approach would save time and costs.

  2. Optimal design of earth-moving machine elements with cusp catastrophe theory application

    NASA Astrophysics Data System (ADS)

    Pitukhin, A. V.; Skobtsov, I. G.

    2017-10-01

    This paper deals with the optimal design problem solution for the operator of an earth-moving machine with a roll-over protective structure (ROPS) in terms of the catastrophe theory. A brief description of the catastrophe theory is presented, the cusp catastrophe is considered, and control parameters are viewed as Gaussian stochastic quantities in the first part of the paper. The statement of the optimal design problem is given in the second part of the paper. It includes the choice of the objective function and independent design variables, and the establishment of system limits. The objective function is determined as the mean total cost, which includes the initial cost and the cost of failure according to the cusp catastrophe probability. An algorithm of the random search method with interval reduction subject to side and functional constraints is given in the last part of the paper. This approach to solving the optimal design problem can be applied to choose rational ROPS parameters, which will increase safety and reduce production and exploitation expenses.
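
    The random search with interval reduction described in the last part can be sketched as follows. The smooth quadratic standing in for the mean total cost, the box bounds, and all tuning constants are illustrative assumptions:

```python
import numpy as np

def random_search(objective, lo, hi, rng, rounds=6, n_per_round=200, shrink=0.5):
    # sample the current box, keep the best point, shrink the box around it
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    best_x, best_f = None, np.inf
    for _ in range(rounds):
        X = rng.uniform(lo, hi, size=(n_per_round, len(lo)))
        f = np.apply_along_axis(objective, 1, X)
        i = int(np.argmin(f))
        if f[i] < best_f:
            best_x, best_f = X[i], f[i]
        half = (hi - lo) * shrink / 2          # interval reduction step
        lo = np.maximum(lo, best_x - half)
        hi = np.minimum(hi, best_x + half)
    return best_x, best_f

# invented smooth cost surface standing in for mean total (initial + failure) cost
cost = lambda x: (x[0] - 1.2) ** 2 + 3 * (x[1] + 0.5) ** 2 + 10.0
rng = np.random.default_rng(5)
x_opt, f_opt = random_search(cost, [-5, -5], [5, 5], rng)
print(x_opt, f_opt)
```

    Side constraints enter naturally as the box bounds; functional constraints would be handled by rejecting or penalizing infeasible samples inside the loop.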

  3. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    PubMed

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of study designs used, statistical tests applied and the use of statistical analysis programmes helps determine research activity profile and trends in the country. In this descriptive study, all original articles published by Journal of Pakistan Medical Association (JPMA) and Journal of the College of Physicians and Surgeons Pakistan (JCPSP), in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. Results of this study indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo a previously published assessment of these two journals for the year 2014.

  4. A Mixed Integer Efficient Global Optimization Framework: Applied to the Simultaneous Aircraft Design, Airline Allocation and Revenue Management Problem

    NASA Astrophysics Data System (ADS)

    Roy, Satadru

    Traditional approaches to design and optimize a new system often use a system-centric objective and do not take into consideration how the operator will use this new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that could simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, a class that is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. 
This approach then demonstrates its ability to solve an 11 route airline network problem consisting of 94 decision variables including 33 integer and 61 continuous type variables. This application problem is a representation of an interacting group of systems and provides key challenges to the optimization framework to solve the MINLP problem, as reflected by the presence of a moderate number of integer and continuous type design variables and expensive analysis tool. The result indicates simultaneously solving the subspaces could lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems including categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.
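
    The efficient-global-optimization ingredient of such a framework selects new samples by the expected improvement of a surrogate prediction. Below is a minimal generic EI for minimization (not the dissertation's mixed-integer extension), with invented candidate predictions:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # EI (for minimization) of surrogate predictions with mean mu and SD sigma
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # zero-variance points can improve only deterministically
    return np.where(sigma > 0, ei, np.maximum(f_best - mu, 0.0))

# two candidates: promising but uncertain vs. slightly better but certain
ei = expected_improvement(mu=[0.5, 0.9], sigma=[0.3, 0.0], f_best=1.0)
print(ei)
```

    EI naturally trades off exploitation (low predicted mean) against exploration (high predictive uncertainty), which is what lets the framework spend expensive analyses sparingly.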

  5. Case studies of conservation plans that incorporate geodiversity.

    PubMed

    Anderson, M G; Comer, P J; Beier, P; Lawler, J J; Schloss, C A; Buttrick, S; Albano, C M; Faith, D P

    2015-06-01

    Geodiversity has been used as a surrogate for biodiversity when species locations are unknown, and this utility can be extended to situations where species locations are in flux. Recently, scientists have designed conservation networks that aim to explicitly represent the range of geophysical environments, identifying a network of physical stages that could sustain biodiversity while allowing for change in species composition in response to climate change. Because there is no standard approach to designing such networks, we compiled 8 case studies illustrating a variety of ways scientists have approached the challenge. These studies show how geodiversity has been partitioned and used to develop site portfolios and connectivity designs; how geodiversity-based portfolios compare with those derived from species and communities; and how the selection and combination of variables influences the results. Collectively, they suggest 4 key steps when using geodiversity to augment traditional biodiversity-based conservation planning: create land units from species-relevant variables combined in an ecologically meaningful way; represent land units in a logical spatial configuration and integrate with species locations when possible; apply selection criteria to individual sites to ensure they are appropriate for conservation; and develop connectivity among sites to maintain movements and processes. With these considerations, conservationists can design more effective site portfolios to ensure the lasting conservation of biodiversity under a changing climate. © 2015 Society for Conservation Biology.

  6. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
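
    The alternating fix-and-optimize idea of DDIA can be sketched on a toy separable cost. The link economics below are invented, and real CNDP/DNDP subproblems would involve equilibrium traffic assignment rather than closed-form solves:

```python
import numpy as np

a = np.array([40.0, 10.0, 2.0])   # hypothetical link demand weights
build_cost, cap_cost = 2.0, 0.8

def total_cost(d, c):
    # build cost + congestion (reduced by expansion d) + capacity cost
    return np.sum(build_cost * d + a * (1 - 0.5 * d) / (1 + c) + cap_cost * c)

d = np.zeros_like(a)              # discrete: expand the link or not
c = np.zeros_like(a)              # continuous: added capacity
for _ in range(20):               # DDIA-style alternation
    # fix d, solve the continuous subproblem in closed form (CNDP stand-in)
    eff = a * (1 - 0.5 * d)
    c = np.maximum(np.sqrt(eff / cap_cost) - 1, 0.0)
    # fix c, solve the discrete subproblem by thresholding (DNDP stand-in)
    d_new = (0.5 * a / (1 + c) > build_cost).astype(float)
    if np.array_equal(d_new, d):
        break                     # fixed point reached
    d = d_new
print(d, c, total_cost(d, c))
```

    As the abstract notes, such alternation converges to a fixed point that may only be a local optimum, so the starting values of the fixed group matter.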

  7. Ultrasound-assisted magnetic dispersive solid-phase microextraction: A novel approach for the rapid and efficient microextraction of naproxen and ibuprofen employing experimental design with high-performance liquid chromatography.

    PubMed

    Ghorbani, Mahdi; Chamsaz, Mahmoud; Rounaghi, Gholam Hossein

    2016-03-01

    A simple, rapid, and sensitive method for the determination of naproxen and ibuprofen in complex biological and water matrices (cow milk, human urine, river, and well water samples) has been developed using ultrasound-assisted magnetic dispersive solid-phase microextraction. Magnetic ethylendiamine-functionalized graphene oxide nanocomposite was synthesized and used as a novel adsorbent for the microextraction process and showed great adsorptive ability toward these analytes. Different parameters affecting the microextraction were optimized with the aid of the experimental design approach. A Plackett-Burman screening design was used to study the main variables affecting the microextraction process, and the Box-Behnken optimization design was used to optimize the previously selected variables for extraction of naproxen and ibuprofen. The optimized technique provides good repeatability (relative standard deviations of 3.1 and 3.3% for intraday precision and 5.6 and 6.1% for interday precision), linearity (0.1-500 and 0.3-650 ng/mL), low limits of detection (0.03 and 0.1 ng/mL), and a high enrichment factor (168 and 146) for naproxen and ibuprofen, respectively. The proposed method can be successfully applied in routine analysis for determination of naproxen and ibuprofen in cow milk, human urine, and real water samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
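
    Two-level screening matrices like the Plackett-Burman design used here can be generated from a Hadamard matrix when the run count is a power of two. An 8-run, 7-factor example follows; the paper's actual factor assignments are not reproduced:

```python
import numpy as np
from scipy.linalg import hadamard

# for run counts that are powers of two, the Plackett-Burman design
# coincides with a Hadamard-matrix fraction
H = hadamard(8)
design = H[:, 1:]          # drop the all-ones column; rows = runs
print(design.shape)        # 8 runs, up to 7 factors at +/-1 levels

# orthogonality: every pair of factor columns is uncorrelated
print(design.T @ design)
```

    Orthogonal columns let main effects of many variables be screened with very few runs, after which a response-surface design such as Box-Behnken refines only the variables that survived screening.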

  8. Using generalized additive (mixed) models to analyze single case designs.

    PubMed

    Shadish, William R; Zuur, Alain F; Sullivan, Kristynn J

    2014-04-01

    This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear and nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  9. Fuel-optimal, low-thrust transfers between libration point orbits

    NASA Astrophysics Data System (ADS)

    Stuart, Jeffrey R.

    Mission design requires the efficient management of spacecraft fuel to reduce mission cost, increase payload mass, and extend mission life. High efficiency, low-thrust propulsion devices potentially offer significant propellant reductions. Periodic orbits that exist in a multi-body regime and low-thrust transfers between these orbits can be applied in many potential mission scenarios, including scientific observation and communications missions as well as cargo transport. In light of the recent discovery of water ice in lunar craters, libration point orbits that support human missions within the Earth-Moon region are of particular interest. This investigation considers orbit transfer trajectories generated by a variable specific impulse, low-thrust engine with a primer-vector-based, fuel-optimizing transfer strategy. A multiple shooting procedure with analytical gradients yields rapid solutions and serves as the basis for an investigation into the trade space between flight time and consumption of fuel mass. Path and performance constraints can be included at node points along any thrust arc. Integration of invariant manifolds into the design strategy may also yield improved performance and greater fuel savings. The resultant transfers offer insight into the performance of the variable specific impulse engine and suggest novel implementations of conventional impulsive thrusters. Transfers incorporating invariant manifolds demonstrate the fuel savings and expand the mission design capabilities that are gained by exploiting system symmetry. A number of design applications are generated.

  10. Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.

    2001-01-01

    Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system and multiple subsystems level optimizations. The system level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternately, the system level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials equally matched the number of degrees of freedom of the problem. Here it was desired to see if it is possible to only match a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO. 
In previous work, parallel processing was applied to the subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run both on a network of Sun workstations using the MPICH implementation of the Message Passing Interface (MPI) and on the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives and that this gives a large increase in the efficiency of the DMSO algorithm.

  11. Guidelines for the design and statistical analysis of experiments in papers submitted to ATLA.

    PubMed

    Festing, M F

    2001-01-01

    In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to answer a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised, and randomised block designs. These are quite common with in vitro experiments, which are often replicated in time. Some experiments involve a single independent (treatment) variable, while other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equal variances are approximately valid. The statistical analyses of data from a completely randomised design, and from a randomised-block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.
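
    The randomised-block analysis discussed above can be sketched numerically. The 4 x 5 layout and the treatment, block, and noise magnitudes below are simulated, invented values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
treat = np.array([0.0, 0.0, 1.5, 3.0])[:, None]   # invented treatment effects
block = rng.normal(0, 1.0, size=(1, 5))           # invented block (e.g. day) effects
y = 10 + treat + block + rng.normal(0, 0.5, size=(4, 5))

grand = y.mean()
ss_treat = 5 * ((y.mean(axis=1) - grand) ** 2).sum()   # between treatments
ss_block = 4 * ((y.mean(axis=0) - grand) ** 2).sum()   # between blocks
ss_total = ((y - grand) ** 2).sum()
ss_resid = ss_total - ss_treat - ss_block              # residual

df_t, df_b = 3, 4
df_r = df_t * df_b
F = (ss_treat / df_t) / (ss_resid / df_r)
p = stats.f.sf(F, df_t, df_r)
print(f"F({df_t},{df_r}) = {F:.2f}, p = {p:.4g}")
```

    Removing the block sum of squares from the residual is what gives the randomised-block design its power advantage over a completely randomised design when the blocks (e.g. experiments replicated in time) differ systematically.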

  12. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
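    The MVFOSM step described above can be sketched numerically: each input's contribution to the output variance is its squared first-order sensitivity times its variance, evaluated at the input means. The relation below is a simplified Meyer-Peter and Müller-type bed load formula, and the input means and coefficients of variation are illustrative assumptions, not values from the Costa Rica dataset:

```python
import math

G, R = 9.81, 1.65  # gravity (m/s^2), submerged specific gravity of quartz

def qb(S, h, d):
    # simplified Meyer-Peter & Mueller-type bed load rate (m^2/s), illustrative
    shields = h * S / (R * d)
    excess = max(shields - 0.047, 0.0)
    return 8.0 * math.sqrt(R * G * d**3) * excess**1.5

means = {"S": 0.01, "h": 0.8, "d": 0.02}   # slope (-), depth (m), grain size (m)
cvs   = {"S": 0.10, "h": 0.10, "d": 0.30}  # assumed coefficients of variation

def sensitivity(name, rel=1e-6):
    # central-difference estimate of dqb/dx evaluated at the means
    hi, lo = dict(means), dict(means)
    step = means[name] * rel
    hi[name] += step
    lo[name] -= step
    return (qb(**hi) - qb(**lo)) / (2 * step)

# MVFOSM: Var[qb] ~= sum_i (dqb/dx_i)^2 * Var[x_i]
contrib = {n: (sensitivity(n) * cvs[n] * means[n]) ** 2 for n in means}
total_var = sum(contrib.values())
ranking = sorted(contrib, key=contrib.get, reverse=True)  # most influential first
```

The ranking plays the role described in the abstract: under the assumed input statistics, it tells a field campaign which variable's uncertainty dominates the transport estimate.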

  13. Optimization of a GO2/GH2 Swirl Coaxial Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    1999-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub o), combustor length, L(sub comb), and full cone swirl angle, theta, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.

  14. Enhanced α-amylase production by a marine protist, Ulkenia sp. using response surface methodology and genetic algorithm.

    PubMed

    Shirodkar, Priyanka V; Muraleedharan, Usha Devi

    2017-11-26

    Amylases are a group of enzymes with a wide variety of industrial applications. Enhancement of α-amylase production from marine protists (thraustochytrids) has been attempted for the first time by applying statistically based experimental designs using response surface methodology (RSM) and a genetic algorithm (GA) for optimization of the most influential process variables. A full factorial central composite experimental design was used to study the cumulative interactive effect of the nutritional components glucose, corn starch, and yeast extract. RSM was performed on two objectives, namely growth of Ulkenia sp. AH-2 (ATCC® PRA-296) and α-amylase activity. When GA was applied to maximize the enzyme activity, the optimal α-amylase activity was found to be 71.20 U/mL, close to that obtained by RSM (71.93 U/mL), both of which were in agreement with the predicted value of 72.37 U/mL. Optimal growth at the optimized process variables was found to be 1.89 (A660nm). The optimized medium increased α-amylase production by 1.2-fold.

  15. The influence of the Inquiry Institute on elementary teachers' perceptions of inquiry learning in the science classroom

    NASA Astrophysics Data System (ADS)

    Williams-Rossi, Dara

    Despite the positive outcomes for inquiry-based science education and recommendations from national and state standards, many teachers continue to rely upon more traditional methods of instruction. This causal-comparative study was designed to determine the effects of the Inquiry Institute, a professional development program intended to strengthen science teachers' pedagogical knowledge and provide practice with inquiry methods based on a constructivist approach. The study examines a cause-and-effect relationship across three levels of the independent variable---length of participation in the Inquiry Institute (zero, three, or six days)---to determine whether or not the three groups differ on the dependent variables---beliefs, implementation, and barriers. Quantitative data were collected with the Science Inquiry Survey, a researcher-developed instrument designed to also ascertain qualitative information with the use of open-ended survey items. One-way ANOVAs were applied to the data to test for a significant difference in the means of the three groups. The findings of this study indicate that lengthier professional development in the Inquiry Institute holds the most benefits for the participants.

  16. Designing hydrological and financial instruments for small scale farmers in Sub-Saharan Africa: A socio-hydrological analysis

    NASA Astrophysics Data System (ADS)

    Moshtaghi, M.; Pande, S.; Savenije, H. H. G.; den Besten, N. I.

    2016-12-01

    Eighty percent of the farmland in Sub-Saharan Africa is managed by smallholders, who are often economically stressed, with low incomes resulting from poor crop yields. Smallholders' well-being, which matters in its own right, often suffers from hydro-climatic variability and from fluctuations in the prices of inputs (seeds, fertilizer) and outputs (crops). Appropriately designed insurance can help secure their well-being and food security across the continent if it addresses the specific requirements of smallholders in each region. In this research, we apply a recently developed socio-hydrologic model that interprets a small-scale farm system as a coupled system of six variables: soil moisture, soil fertility, capital, livestock, fodder, and labor availability. Using datasets of potential evaporation, rainfall, land cover, and related variables, we compare the application of yield index insurance, weather index insurance, and biomass index insurance to highlight the importance of the interplay between fertilizer and water availability for food security, and to determine which type of regional insurance works best in a given area.

  17. Conducting-polymer-driven actively shaped propellers and screws

    NASA Astrophysics Data System (ADS)

    Madden, John D.; Schmid, Bryan; Lafontaine, Serge R.; Madden, Peter G. A.; Hover, Franz S.; McLetchie, Karl; Hunter, Ian W.

    2003-07-01

    Conducting polymer actuators are employed to create actively shaped hydrodynamic foils. The active foils are designed to allow control over camber, much like the ailerons of an airplane wing. Control of camber promises to enable variable thrust in propellers and screws, increased maneuverability, and improved stealth. The design and fabrication of the active foils are presented, the forces are measured, and operation is demonstrated both in still air and water. The foils have a "wing" span of 240 mm and an average chord length (width) of 70 mm. The trailing 30 mm of the foil is composed of a thin polypyrrole actuator that curls chordwise to achieve variable camber. The actuator consists of two 30 μm thick sheets of hexafluorophosphate-doped polypyrrole separated from each other by a gel electrolyte. A polymer layer encapsulates the entire structure. Potentials are applied between the polymer layers to induce reversible bending of approximately 35 degrees, generating forces of 0.15 N. These forces and displacements are expected to enable operation in water at flow rates of > 1 m/s and ~ 30 m/s in air.

  18. Response surface methodological approach for the decolorization of simulated dye effluent using Aspergillus fumigatus fresenius.

    PubMed

    Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj

    2009-01-30

    The aim of this research was to investigate the effect of temperature, pH, and initial dye concentration on the decolorization of the diazo dye Acid Red 151 (AR 151) from simulated dye solution using a fungal isolate, Aspergillus fumigatus fresenius. A central composite design matrix and response surface methodology (RSM) were applied to design the experiments and evaluate the interactive effects of the three most important operating variables: temperature (25-35 degrees C), pH (4.0-7.0), and initial dye concentration (100-200 mg/L) on the biodegradation of AR 151. A total of 20 experiments were conducted towards the construction of a quadratic model. A very high regression coefficient between the variables and the response (R(2)=0.9934) indicated an excellent fit of the experimental data by the second-order polynomial regression model. The RSM indicated that an initial dye concentration of 150 mg/L, pH 5.5, and a temperature of 30 degrees C were optimal for maximum decolorization of AR 151 in simulated dye solution, and 84.8% decolorization was observed at optimum growth conditions.
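    The quadratic modelling and optimum-location steps of RSM can be sketched as follows. The synthetic response surface below is illustrative (its coefficients are not the fitted model from this study), but the machinery (a least-squares fit of a second-order polynomial in coded variables, followed by solving for the stationary point) is the standard RSM procedure:

```python
import numpy as np

# coded factor levels (-1, 0, +1) for temperature, pH, dye concentration
levels = [-1, 0, 1]
pts = [(t, p, c) for t in levels for p in levels for c in levels]

def response(x):
    # synthetic "% decolorization" surface with an interior optimum (illustrative)
    t, p, c = x
    return 84.8 - 5 * t**2 - 4 * p**2 - 3 * c**2 + 0.5 * t * p

# design matrix for a full second-order (quadratic) model
X = np.array([[1, t, p, c, t*t, p*p, c*c, t*p, t*c, p*c] for (t, p, c) in pts])
y = np.array([response(x) for x in pts])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# stationary point: solve grad(beta0 + b.x + x'Ax) = b + 2Ax = 0
A = np.array([[beta[4],     beta[7] / 2, beta[8] / 2],
              [beta[7] / 2, beta[5],     beta[9] / 2],
              [beta[8] / 2, beta[9] / 2, beta[6]]])
b = beta[1:4]
x_opt = np.linalg.solve(-2 * A, b)   # coded coordinates of the optimum
```

Here the recovered optimum sits at the centre of the coded region, i.e. the mid-level of each factor, mirroring how the study's optimum (30 degrees C, pH 5.5, 150 mg/L) lies at the centre of the tested ranges.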

  19. Formulation and Optimization of Multiparticulate Drug Delivery System Approach for High Drug Loading.

    PubMed

    Shah, Neha; Mehta, Tejal; Gohel, Mukesh

    2017-08-01

    The aim of the present work was to develop and optimize a multiparticulate formulation, viz. pellets of naproxen, by employing QbD and risk assessment approaches. A mixture design with extreme vertices was applied to the formulation, with high drug loading (about 90%) and extrusion-spheronization as the manufacturing process for the pellets. Independent variables chosen were the levels of microcrystalline cellulose (MCC, X1), polyvinylpyrrolidone K-90 (PVP K-90, X2), croscarmellose sodium (CCS, X3), and polacrilin potassium (PP, X4). Dependent variables considered were disintegration time (DT, Y1), sphericity (Y2), and percent drug release (Y3). The formulation was optimized based on the batches generated by MiniTab 17 software. The batch with maximum composite desirability (0.98) proved to be optimal. From the evaluation of the design batches, it was observed that even low variations in the excipients affect the pelletization properties of the blend and the final drug release. In conclusion, pellets with high drug loading can be effectively manufactured and optimized systematically using the QbD approach.

  20. Geographic profiling and animal foraging.

    PubMed

    Le Comber, Steven C; Nicholls, Barry; Rossmo, D Kim; Racey, Paul A

    2006-05-21

    Geographic profiling was originally developed as a statistical tool for use in criminal cases, particularly those involving serial killers and rapists. It is designed to help police forces prioritize lists of suspects by using the location of crime scenes to identify the areas in which the criminal is most likely to live. Two important concepts are the buffer zone (criminals are less likely to commit crimes in the immediate vicinity of their home) and distance decay (criminals commit fewer crimes as the distance from their home increases). In this study, we show how the techniques of geographic profiling may be applied to animal data, using as an example foraging patterns in two sympatric colonies of pipistrelle bats, Pipistrellus pipistrellus and P. pygmaeus, in the northeast of Scotland. We show that if model variables are fitted to known roost locations, these variables may be used as numerical descriptors of foraging patterns. We go on to show that these variables can be used to differentiate patterns of foraging in these two species.
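    The buffer-zone and distance-decay concepts combine into a score surface whose peaks mark the most probable anchor point (home, or in the foraging analogue, roost). A minimal sketch of a Rossmo-style scoring function follows; the exponents, buffer radius, and site coordinates are illustrative assumptions, not fitted values from the bat data:

```python
import math

def rossmo_score(x, y, sites, f=1.2, g=1.2, buffer_radius=1.0, phi=0.5):
    """Rossmo-style geoprofile score: distance decay outside a buffer zone
    around each observed site, and an inverted term inside the buffer."""
    score = 0.0
    for (sx, sy) in sites:
        d = math.hypot(x - sx, y - sy)
        if d > buffer_radius:
            score += phi / d**f                        # distance decay
        else:                                          # buffer zone
            score += (1 - phi) * buffer_radius**(g - f) / (2 * buffer_radius - d)**g
    return score

# illustrative foraging-site coordinates (arbitrary units)
sites = [(2.0, 1.0), (4.0, 3.0), (1.0, 4.0)]

# evaluate the score on a coarse grid; the best cell is the predicted anchor
grid = [(i * 0.5, j * 0.5) for i in range(11) for j in range(11)]
anchor = max(grid, key=lambda p: rossmo_score(p[0], p[1], sites))
```

Fitting f, g, buffer_radius, and phi to known roost locations is what turns these parameters into the numerical descriptors of foraging pattern described in the abstract.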

  1. The interaction rainfall vs. weight as determinant of total mercury concentration in fish from a tropical estuary.

    PubMed

    Barletta, M; Lucena, L R R; Costa, M F; Barbosa-Cintra, S C T; Cysneiros, F J A

    2012-08-01

    Mercury loads in tropical estuaries are largely controlled by the rainfall regime, which may cause biodilution due to increased amounts of organic matter (both living and non-living) in the system. Top predators, such as Trichiurus lepturus, reflect the changing mercury bioavailability in their muscle tissues. In this work, two variables [fish weight (g) and monthly total rainfall (mm)] are presented as important predictors of total mercury concentration (T-Hg) in fish muscle. These explanatory variables were identified by a Weibull regression model, which best fit the dataset. A predictive model using readily available variables such as rainfall is important and can be applied for human and ecological health assessments and decisions. The main contribution will be to further protect vulnerable groups such as pregnant women and children. Nature conservation directives could also improve by considering monitoring sample designs that include this hypothesis, helping to establish complete and detailed mercury contamination scenarios.

  2. An ANOVA approach for statistical comparisons of brain networks.

    PubMed

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep in the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks, or social networks.
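    The authors' ANOVA-style statistic is their own construction; as a generic illustration of nonparametric network comparison, the sketch below permutes group labels and compares the observed distance between group-mean adjacency matrices against its permutation distribution (all networks here are random toy data, not fMRI-derived):

```python
import random

random.seed(1)
N_NODES, N_PER_GROUP = 8, 12

def random_network(p):
    # symmetric 0/1 adjacency matrix with edge probability p, no self-loops
    A = [[0] * N_NODES for _ in range(N_NODES)]
    for i in range(N_NODES):
        for j in range(i + 1, N_NODES):
            A[i][j] = A[j][i] = 1 if random.random() < p else 0
    return A

group1 = [random_network(0.2) for _ in range(N_PER_GROUP)]  # sparse group
group2 = [random_network(0.6) for _ in range(N_PER_GROUP)]  # dense group

def group_distance(g1, g2):
    # squared distance between the two group-mean adjacency matrices
    def mean_adj(g):
        return [[sum(A[i][j] for A in g) / len(g) for j in range(N_NODES)]
                for i in range(N_NODES)]
    m1, m2 = mean_adj(g1), mean_adj(g2)
    return sum((m1[i][j] - m2[i][j]) ** 2
               for i in range(N_NODES) for j in range(N_NODES))

observed = group_distance(group1, group2)
pooled = group1 + group2
exceed = 0
N_PERM = 200
for _ in range(N_PERM):  # permute group labels under the null of no difference
    random.shuffle(pooled)
    if group_distance(pooled[:N_PER_GROUP], pooled[N_PER_GROUP:]) >= observed:
        exceed += 1
p_value = (exceed + 1) / (N_PERM + 1)
```

Detection (are the groups different?) is answered by the p-value; the identification problem (which subnetwork differs?) requires localizing the statistic, as the paper's procedure does.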

  3. Antigenic variability: Obstacles on the road to vaccines against traditionally difficult targets.

    PubMed

    Servín-Blanco, R; Zamora-Alvarado, R; Gevorkian, G; Manoutcharian, K

    2016-10-02

    Despite the impressive impact of vaccines on public health, the success of vaccines targeting many important pathogens and cancers has to date been limited. The burden of infectious diseases today is mainly caused by antigenically variable pathogens (AVPs), which escape immune responses induced by prior infection or vaccination through changes in molecular structures recognized by antibodies or T cells. Extensive genetic and antigenic variability is the major obstacle for the development of new or improved vaccines against "difficult" targets. Alternative, qualitatively new approaches leading to the generation of disease- and patient-specific vaccine immunogens that incorporate complex permanently changing epitope landscapes of intended targets accompanied by appropriate immunomodulators are urgently needed. In this review, we highlight some of the most critical common issues related to the development of vaccines against many pathogens and cancers that escape protective immune responses owing to antigenic variation, and discuss recent efforts to overcome the obstacles by applying alternative approaches for the rational design of new types of immunogens.

  4. Self-Determination Theory Applied to Health Contexts: A Meta-Analysis.

    PubMed

    Ng, Johan Y Y; Ntoumanis, Nikos; Thøgersen-Ntoumani, Cecilie; Deci, Edward L; Ryan, Richard M; Duda, Joan L; Williams, Geoffrey C

    2012-07-01

    Behavior change is more effective and lasting when patients are autonomously motivated. To examine this idea, we identified 184 independent data sets from studies that utilized self-determination theory (SDT; Deci & Ryan, 2000) in health care and health promotion contexts. A meta-analysis evaluated relations between the SDT-based constructs of practitioner support for patient autonomy and patients' experience of psychological need satisfaction, as well as relations between these SDT constructs and indices of mental and physical health. Results showed the expected relations among the SDT variables, as well as positive relations of psychological need satisfaction and autonomous motivation to beneficial health outcomes. Several variables (e.g., participants' age, study design) were tested as potential moderators when effect sizes were heterogeneous. Finally, we used path analyses of the meta-analyzed correlations to test the interrelations among the SDT variables. Results suggested that SDT is a viable conceptual framework to study antecedents and outcomes of motivation for health-related behaviors.

  5. New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise

    NASA Astrophysics Data System (ADS)

    Pal'a, Jozef; Ušák, Elemír

    2016-03-01

    A new method of magnetic Barkhausen noise (MBN) measurement, together with optimized processing of the measured data, was tested for the non-destructive evaluation of ferromagnetic materials. Using this method, we investigated whether the sensitivity and stability of measurement results can be enhanced by replacing the traditional MBN parameter (root mean square) with a new parameter. In the tested method, a complete set of MBN signals from minor hysteresis loops is measured. The MBN data are then collected into suitably designed matrices, and the MBN parameters with maximum sensitivity to the evaluated variable are sought. The method was verified on plastically deformed steel samples. The proposed measuring method and data processing were shown to improve sensitivity to the evaluated variable compared with the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes in the applied field amplitude while being noticeably more sensitive to the evaluated variable.

  6. Generalized Scaling and the Master Variable for Brownian Magnetic Nanoparticle Dynamics

    PubMed Central

    Reeves, Daniel B.; Shi, Yipeng; Weaver, John B.

    2016-01-01

    Understanding the dynamics of magnetic particles can help to advance several biomedical nanotechnologies. Previously, scaling relationships have been used in magnetic spectroscopy of nanoparticle Brownian motion (MSB) to measure biologically relevant properties (e.g., temperature, viscosity, bound state) surrounding nanoparticles in vivo. Those scaling relationships can be generalized with the introduction of a master variable found from non-dimensionalizing the dynamical Langevin equation. The variable encapsulates the dynamical variables of the surroundings and additionally includes the particles’ size distribution and moment and the applied field’s amplitude and frequency. From an applied perspective, the master variable allows tuning to an optimal MSB biosensing sensitivity range by manipulating both frequency and field amplitude. Calculation of magnetization harmonics in an oscillating applied field is also possible with an approximate closed-form solution in terms of the master variable and a single free parameter. PMID:26959493

  7. On the Asymptotic Relative Efficiency of Planned Missingness Designs.

    PubMed

    Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D

    2016-03-01

    In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.

  8. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
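    The combinatorial search can be illustrated with a toy GA that picks k observation wells from a candidate set to maximize the sum of squared sensitivities. The sensitivity matrix below is random stand-in data, and because this unconstrained criterion is additive over wells, the exact optimum (the top-k rows) is available to check against; the paper's constrained, PDE-derived setting is where the GA and POD reduction earn their keep:

```python
import random

random.seed(7)
N_CANDIDATES, K, N_PARAMS = 30, 5, 4

# toy sensitivity matrix: S[i][j] = sensitivity of the head observation at
# candidate well i to unknown pumping parameter j (random stand-in data)
S = [[random.gauss(0, 1) for _ in range(N_PARAMS)] for _ in range(N_CANDIDATES)]

def fitness(wells):
    # maximal-information criterion: sum of squared sensitivities
    return sum(S[i][j] ** 2 for i in wells for j in range(N_PARAMS))

def ga(pop_size=40, gens=60, p_mutate=0.3):
    pop = [tuple(sorted(random.sample(range(N_CANDIDATES), K)))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:pop_size // 2]                      # elitist selection
        while len(nxt) < pop_size:
            a, b = random.sample(nxt[:10], 2)
            child = set(random.sample(sorted(set(a) | set(b)), K))  # crossover
            if random.random() < p_mutate:             # mutation: swap one well
                child.discard(random.choice(sorted(child)))
                child.add(random.choice(
                    [i for i in range(N_CANDIDATES) if i not in child]))
            nxt.append(tuple(sorted(child)))
        pop = nxt
    return max(pop, key=fitness)

best = ga()
```

In the full problem each fitness evaluation requires a groundwater-model solve, which is what motivates replacing the model with its POD-reduced surrogate inside this loop.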

  9. Molecular system identification for enzyme directed evolution and design

    NASA Astrophysics Data System (ADS)

    Guan, Xiangying; Chakrabarti, Raj

    2017-09-01

    The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.

  10. Human Tolerance to Rapidly Applied Accelerations: A Summary of the Literature

    NASA Technical Reports Server (NTRS)

    Eiband, A. Martin

    1959-01-01

    The literature is surveyed to determine human tolerance to rapidly applied accelerations. Pertinent human and animal experiments applicable to space flight and to crash impact forces are analyzed and discussed. These data are compared and presented on the basis of a trapezoidal pulse. The effects of body restraint and of acceleration direction, onset rate, and plateau duration on the maximum tolerable and survivable rapidly applied accelerations are shown. Results of the survey indicate that adequate torso and extremity restraint is the primary variable in tolerance to rapidly applied accelerations. The harness, or restraint system, must be arranged to transmit the major portion of the accelerating force directly to the pelvic structure and not via the vertebral column. When the conditions of adequate restraint have been met, then the other variables, direction, magnitude, and onset rate of rapidly applied accelerations, govern maximum tolerance and injury limits. The results also indicate that adequately stressed aft-faced passenger seats offer maximum complete body support with minimum objectionable harnessing. Such a seat, whether designed for 20-, 30-, or 40-G dynamic loading, would include lap strap, chest (axillary) strap, and winged-back seat to increase headward and lateral G protection, full-height integral head rest, arm rests (load-bearing) with recessed hand-holds and provisions to prevent arms from slipping either laterally or beyond the seat back, and leg support to keep the legs from being wedged under the seat. For crew members and others whose duties require forward-facing seats, maximum complete body support requires lap, shoulder, and thigh straps, lap-belt tie-down strap, and full-height seat back with integral head support.

  11. A split-cavity design for the incorporation of a DC bias in a 3D microwave cavity

    NASA Astrophysics Data System (ADS)

    Cohen, Martijn A.; Yuan, Mingyun; de Jong, Bas W. A.; Beukers, Ewout; Bosman, Sal J.; Steele, Gary A.

    2017-04-01

    We report on a technique for applying a DC bias in a 3D microwave cavity. We achieve this by isolating the two halves of the cavity with a dielectric and directly using them as DC electrodes. As a proof of concept, we embed a variable capacitance diode in the cavity and tune the resonant frequency with a DC voltage, demonstrating the incorporation of a DC bias into the 3D cavity with no measurable change in its quality factor at room temperature. We also characterize the architecture at millikelvin temperatures and show that the split-cavity design maintains a quality factor Qi ~ 8.8 × 10^5, making it promising for future quantum applications.

  12. Plug nozzles: The ultimate customer driven propulsion system

    NASA Technical Reports Server (NTRS)

    Aukerman, Carl A.

    1991-01-01

    This paper presents the results of a study applying the plug cluster nozzle concept to the propulsion system for a typical lunar excursion vehicle. Primary attention for the design criteria is given to user defined factors such as reliability, low volume, and ease of propulsion system development. Total thrust and specific impulse are held constant in the study while other parameters are explored to minimize the design chamber pressure. A brief history of the plug nozzle concept is included to point out the advanced level of technology of the concept and the feasibility of exploiting the variables considered in this study. The plug cluster concept looks very promising as a candidate for consideration for the ultimate customer driven propulsion system.

  13. A new surface-potential-based compact model for the MoS2 field effect transistors in active matrix display applications

    NASA Astrophysics Data System (ADS)

    Cao, Jingchen; Peng, Songang; Liu, Wei; Wu, Quantan; Li, Ling; Geng, Di; Yang, Guanhua; Ji, Zhouyu; Lu, Nianduan; Liu, Ming

    2018-02-01

    We present a continuous surface-potential-based compact model for molybdenum disulfide (MoS2) field effect transistors based on the multiple trapping release theory and the variable-range hopping theory. We also built contact resistance and velocity saturation models based on the analytical surface potential. This model is verified with experimental data and is able to accurately predict the temperature dependent behavior of the MoS2 field effect transistor. Our compact model is coded in Verilog-A, which can be implemented in a computer-aided design environment. Finally, we carried out an active matrix display simulation, which suggested that the proposed model can be successfully applied to circuit design.

  14. Effects of sources of variability on sample sizes required for RCTs, applied to trials of lipid-altering therapies on carotid artery intima-media thickness.

    PubMed

    Gould, A Lawrence; Koglin, Joerg; Bain, Raymond P; Pinto, Cathy-Anne; Mitchel, Yale B; Pasternak, Richard C; Sapre, Aditi

    2009-08-01

    Studies measuring progression of carotid artery intima-media thickness (cIMT) have been used to estimate the effect of lipid-modifying therapies on cardiovascular event risk. The likelihood that future cIMT clinical trials will detect a true treatment effect is estimated by leveraging results from prior studies. The present analyses assess the impact of between- and within-study variability, based on currently published data from prior clinical studies, on the likelihood that ongoing or future cIMT trials will detect the true treatment effect of lipid-modifying therapies. Published data from six contemporary cIMT studies (ASAP, ARBITER 2, RADIANCE 1, RADIANCE 2, ENHANCE, and METEOR), including data from a total of 3563 patients, were examined. Bayesian and frequentist methods were used to assess the impact of between-study variability on the likelihood of detecting true treatment effects on 1-year cIMT progression/regression and to provide a sample size estimate that would specifically compensate for the effect of between-study variability. In addition to the well-described within-study variability, there is considerable between-study variability associated with the measurement of annualized change in cIMT. Accounting for the additional between-study variability decreases the power of existing study designs. To account for the added between-study variability, future cIMT studies would likely require a large increase in sample size in order to have a substantial probability (≥90%) of 90% power to detect a true treatment effect. Limitation: analyses are based on study-level data. Future meta-analyses incorporating patient-level data would be useful for confirmation.
Due to substantial within- and between-study variability in the measure of 1-year change of cIMT, as well as uncertainty about progression rates in contemporary populations, future study designs evaluating the effect of new lipid-modifying therapies on atherosclerotic disease progression are likely to be challenged by large sample sizes in order to demonstrate a true treatment effect.
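    The power penalty from between-study variability described above can be sketched numerically. In the minimal illustration below, the effective standard deviation is inflated by a between-study component and the per-arm sample size is recomputed from the standard two-sample normal approximation; all effect sizes and standard deviations are invented for illustration and are not taken from the paper.

```python
import math
from scipy.stats import norm

def required_n_per_arm(delta, sd_within, sd_between, alpha=0.05, power=0.90):
    """Two-sample normal approximation: per-arm sample size needed to detect
    a mean difference `delta` in annualized cIMT change when the effective
    standard deviation includes a between-study component.
    All numeric inputs used below are illustrative, not from the paper."""
    sd_eff = math.sqrt(sd_within**2 + sd_between**2)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd_eff / delta) ** 2)

# Adding a between-study component inflates the required sample size:
n_within_only = required_n_per_arm(0.01, 0.05, 0.0)
n_with_between = required_n_per_arm(0.01, 0.05, 0.03)
```

With these illustrative numbers the between-study component raises the per-arm requirement by roughly a third, which mirrors the paper's qualitative conclusion that existing designs lose power once between-study variability is accounted for.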

  15. Optimization of the Critical Parameters of the Spherical Agglomeration Crystallization Method by the Application of the Quality by Design Approach.

    PubMed

    Gyulai, Orsolya; Kovács, Anita; Sovány, Tamás; Csóka, Ildikó; Aigner, Zoltán

    2018-04-20

    This research work presents the use of the Quality by Design (QbD) concept for optimization of the spherical agglomeration crystallization method in the case of the active agent, ambroxol hydrochloride (AMB HCl). AMB HCl spherical crystals were formulated by the spherical agglomeration method, which was applied as an antisolvent technique. Spherical crystals have good flow properties, which makes the direct compression tableting method applicable. This means that the amount of additives used can be reduced and smaller tablets can be formed. For the risk assessment, LeanQbD Software was used. According to its results, four independent variables (mixing type and time, dT (temperature difference between solvent and antisolvent), and composition (solvent/antisolvent volume ratio)) and three dependent variables (mean particle size, aspect ratio, and roundness) were selected. Based on these, a 2-3 mixed-level factorial design was constructed, crystallization was accomplished, and the results were evaluated using the Statistica for Windows 13 program. Product assays revealed improvements in mean particle size (from ~13 to ~200 µm), roundness (from ~2.4 to ~1.5), aspect ratio (from ~1.7 to ~1.4), and flow properties, while polymorphic transitions were avoided.
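    A mixed-level full factorial of the kind described above can be enumerated directly. The sketch below uses one qualitative factor at 2 levels and three quantitative factors at 3 levels; the factor names match the abstract, but every level value is hypothetical (the study's actual settings are not reproduced here).

```python
from itertools import product

# Hypothetical factor levels for a mixed 2- and 3-level full factorial:
# one qualitative factor at 2 levels, three quantitative factors at 3 levels.
factors = {
    "mixing_type": ["magnetic", "mechanical"],       # qualitative, 2 levels
    "mixing_time_min": [5, 10, 15],                  # 3 levels
    "dT_celsius": [0, 10, 20],                       # 3 levels
    "solvent_antisolvent_ratio": [0.5, 1.0, 1.5],    # 3 levels
}

# One run per combination of levels: 2 * 3 * 3 * 3 = 54 runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

Each element of `runs` is one crystallization experiment; the measured responses (mean particle size, aspect ratio, roundness) would then be regressed against these settings.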

  16. Magnetic hydroxyapatite nanoparticles: an efficient adsorbent for the separation and removal of nitrate and nitrite ions from environmental samples.

    PubMed

    Ghasemi, Ensieh; Sillanpää, Mika

    2015-01-01

    A novel type of magnetic nanosorbent, hydroxyapatite-coated Fe2O3 nanoparticles was synthesized and used for the adsorption and removal of nitrite and nitrate ions from environmental samples. The properties of synthesized magnetic nanoparticles were characterized by scanning electron microscopy, Fourier transform infrared spectroscopy, and X-ray powder diffraction. After the adsorption process, the separation of γ-Fe2O3@hydroxyapatite nanoparticles from the aqueous solution was simply achieved by applying an external magnetic field. The effects of different variables on the adsorption efficiency were studied simultaneously using an experimental design. The variables of interest were amount of magnetic hydroxyapatite nanoparticles, sample volume, pH, stirring rate, adsorption time, and temperature. The experimental parameters were optimized using a Box-Behnken design and response surface methodology after a Plackett-Burman screening design. Under the optimum conditions, the adsorption efficiencies of magnetic hydroxyapatite nanoparticles adsorbents toward NO3(-) and NO2(-) ions (100 mg/L) were in the range of 93-101%. The results revealed that the magnetic hydroxyapatite nanoparticles adsorbent could be used as a simple, efficient, and cost-effective material for the removal of nitrate and nitrite ions from environmental water and soil samples. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    PubMed Central

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359
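    The VNS component of the hybrids above follows a standard pattern: try increasingly strong perturbation neighborhoods, and restart from the first neighborhood whenever an improvement is found. The following is a generic sketch of that framework on a toy continuous objective, not the paper's GA-VNS/VNS-SA code or its batch-scheduling encoding.

```python
import random

def vns_minimize(objective, x0, neighborhoods, max_iters=200, seed=0):
    """Basic variable neighborhood search skeleton. `neighborhoods` is a
    list of functions, each returning a random neighbor of a solution at
    increasing perturbation strength."""
    rng = random.Random(seed)
    best, best_val = x0, objective(x0)
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best, rng)
            val = objective(candidate)
            if val < best_val:   # improvement: restart from first neighborhood
                best, best_val = candidate, val
                k = 0
            else:                # no improvement: widen the neighborhood
                k += 1
    return best, best_val

# Toy usage: minimize a 1-D quadratic with two perturbation scales.
nbhds = [lambda x, r: x + r.uniform(-0.1, 0.1),
         lambda x, r: x + r.uniform(-1.0, 1.0)]
sol, val = vns_minimize(lambda x: (x - 3.0) ** 2, 0.0, nbhds)
```

In the scheduling setting the solution would instead be a batch assignment and the neighborhoods would be moves such as swapping or reinserting jobs between batches.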

  18. Ongoing Analysis of Rocket Based Combined Cycle Engines by the Applied Fluid Dynamics Analysis Group at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph; Holt, James B.; Canabal, Francisco

    1999-01-01

    This paper presents the status of analyses on three Rocket Based Combined Cycle configurations underway in the Applied Fluid Dynamics Analysis Group (TD64). TD64 is performing computational fluid dynamics analysis on a Penn State RBCC test rig, the proposed Draco axisymmetric RBCC engine and the Trailblazer engine. The intent of the analysis on the Penn State test rig is to benchmark the Finite Difference Navier Stokes code for ejector mode fluid dynamics. The Draco engine analysis is a trade study to determine the ejector mode performance as a function of three engine design variables. The Trailblazer analysis is to evaluate the nozzle performance in scramjet mode. Results to date of each analysis are presented.

  19. Ongoing Analyses of Rocket Based Combined Cycle Engines by the Applied Fluid Dynamics Analysis Group at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.; Holt, James B.; Canabal, Francisco

    2001-01-01

    This paper presents the status of analyses on three Rocket Based Combined Cycle (RBCC) configurations underway in the Applied Fluid Dynamics Analysis Group (TD64). TD64 is performing computational fluid dynamics (CFD) analysis on a Penn State RBCC test rig, the proposed Draco axisymmetric RBCC engine and the Trailblazer engine. The intent of the analysis on the Penn State test rig is to benchmark the Finite Difference Navier Stokes (FDNS) code for ejector mode fluid dynamics. The Draco analysis was a trade study to determine the ejector mode performance as a function of three engine design variables. The Trailblazer analysis is to evaluate the nozzle performance in scramjet mode. Results to date of each analysis are presented.

  20. High-temperature optical fiber instrumentation for gas flow monitoring in gas turbine engines

    NASA Astrophysics Data System (ADS)

    Roberts, Adrian; May, Russell G.; Pickrell, Gary R.; Wang, Anbo

    2002-02-01

    In the design and testing of gas turbine engines, real-time data about such physical variables as temperature, pressure, and acoustics are of critical importance. The high-temperature environment experienced in the engines makes conventional electronic sensor devices difficult to apply. Therefore, there is a need for innovative sensors that can reliably operate under high-temperature conditions with the desired resolution and frequency response. A fiber optic high-temperature sensor system for dynamic pressure measurement is presented in this paper. This sensor is based on a new sensor technology, the self-calibrated interferometric/intensity-based (SCIIB) sensor, recently developed at Virginia Tech. State-of-the-art digital signal processing (DSP) methods are applied to process the signal from the sensor to acquire high-speed frequency response.

  1. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    PubMed

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "-", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. 
Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them respectively. To further enhance the efficiency of CCF, a new local search scheme is designed to improve the solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is very competitive when compared with those of the state-of-the-art LSGO algorithms.
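    The grouping step described above can be illustrated with a small sketch. Assuming the white-box objective has already been reduced to a sum of terms, each listing the variables it couples through nonseparable operations (a simplification of FBG, which works on the full formula tree), a union-find pass recovers the independent subcomponents:

```python
def formula_groups(n_vars, nonseparable_terms):
    """Merge variables that co-occur in a nonseparable term; additive
    separation between terms leaves groups independent. A simplified
    sketch of the FBG idea, not the paper's implementation."""
    parent = list(range(n_vars))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for term in nonseparable_terms:
        for v in term[1:]:
            union(term[0], v)

    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# f(x) = x0*x1 + sin(x2*x3) + x4**2 -> three independent subcomponents
print(formula_groups(5, [[0, 1], [2, 3]]))  # → [[0, 1], [2, 3], [4]]
```

Each returned group would then be handed to its own subpopulation in the cooperative coevolution framework.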

  2. A novel variable baseline visibility detection system and its measurement method

    NASA Astrophysics Data System (ADS)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long; Zhang, Guizhong; Yao, JianQuan

    2017-10-01

    As an important meteorological observation instrument, the visibility meter helps ensure the safety of traffic operations. However, due to optical system contamination as well as sampling error, the accuracy and stability of such equipment are difficult to maintain in low-visibility environments. To address this issue, a novel measurement instrument was designed based upon multiple baselines; it essentially acts as an atmospheric transmission meter with a movable optical receiver, applying a weighted least squares method to process the signal. Theoretical analysis and experiments in a real atmospheric environment support this technique.
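    The multi-baseline idea lends itself to a small sketch: under the Beer-Lambert law the received intensity decays as I(d) = I0·exp(-σd), so a weighted least-squares line fit of log-intensity against baseline distance estimates the extinction coefficient σ, and visibility follows from the Koschmieder relation (V = 3.912/σ for a 5% contrast threshold). The weighting scheme below is a generic one, not the paper's.

```python
import numpy as np

def extinction_from_baselines(distances_m, intensities, weights=None):
    """Fit ln(I) = ln(I0) - sigma*d by weighted least squares over several
    baseline distances (a sketch of the multi-baseline transmissometer
    idea; the paper's exact weighting scheme is not reproduced here)."""
    d = np.asarray(distances_m, dtype=float)
    y = np.log(np.asarray(intensities, dtype=float))
    w = np.ones_like(d) if weights is None else np.asarray(weights, float)
    A = np.column_stack([np.ones_like(d), -d])   # columns: [ln(I0), sigma]
    W = np.diag(np.sqrt(w))                      # sqrt-weights for WLS
    coef, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
    ln_i0, sigma = coef
    return sigma

def visibility_m(sigma):
    """Koschmieder relation for meteorological optical range (5% contrast)."""
    return 3.912 / sigma
```

Moving the receiver supplies the several (d, I) pairs; downweighting contaminated or noisy baselines is what makes the fit robust at low visibility.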

  3. Two-range magnetoelectric sensor

    NASA Astrophysics Data System (ADS)

    Bichurin, M.; Petrov, V.; Leontyev, V.; Saplev, A.

    2017-01-01

    In this study, we present a two-range magnetoelectric (ME) sensor design comprising a permendur (Fe-Co-V alloy), nickel, and lead zirconate titanate (PZT) laminate composite. A systematic study was conducted to clarify the contribution of the magnetostrictive layer variables to the ME response over the applied range of magnetic bias field. The two-range behavior was characterized by an opposite sign of the ME response when the magnetic dc bias lies in different sub-ranges. The ME coefficient as a function of magnetic bias field was found to depend on the laminate composite structure.

  4. Orbiting passive microwave sensor simulation applied to soil moisture estimation

    NASA Technical Reports Server (NTRS)

    Newton, R. W. (Principal Investigator); Clark, B. V.; Pitchford, W. M.; Paris, J. F.

    1979-01-01

    A sensor/scene simulation program was developed and used to determine the effects of scene heterogeneity, resolution, frequency, look angle, and surface and temperature relations on the performance of a spaceborne passive microwave system designed to estimate soil water information. The ground scene is based on classified LANDSAT images, which provide realistic ground classes as well as geometries. It was determined that the average sensitivity of antenna temperature to soil moisture improves as the antenna footprint size increases. Also, the precision (or variability) of the sensitivity changes as a function of resolution.

  5. Sexual cognitive predictors of sexual communication in junior college adolescents: medical student perspectives.

    PubMed

    Lou, Jiunn-Horng; Chen, Sheng-Hwang; Yu, Hsing-Yi; Lin, Yen-Chin; Li, Ren-Hau

    2010-12-01

    Further understanding the relationship between sexual cognition and sexual communication in adolescents may facilitate sexual health promotion in this population. This study was designed to investigate associations between sexual cognitive variables and sexual communication in adolescents. This study used a cross-sectional research design with convenience sampling. Data were collected from one medical college in central Taiwan. A total of 900 questionnaires were dispatched, with 748 copies returned, giving a response rate of 83.1%. Structured questionnaires were designed to collect demographic data and measures of sexual self-concept, sexual risk cognition, sexual self-efficacy, and sexual communication. This study applied statistical methods including descriptive statistics, Pearson product-moment correlation, and multiple regression analysis. Major findings revealed that (a) adolescents talked about sexual activity and sexual issues with their parents at a moderate level (mean = 2.52, SD = 1.24), (b) all sexual cognitive variables (sexual self-concept, sexual risk cognition, and sexual self-efficacy) correlated positively with sexual communication, and (c) predictors of sexual communication were supported by demographic data (having heterosexual friends, satisfaction with heterosexual friends, and duration of relationships with heterosexual friends) and sexual cognitive variables, which together accounted for 62.0% of the variance. Study results can contribute to the development of safe sexual health programs and improve healthcare provider knowledge of sexual communication among adolescents. More sexual communication between adolescents and their parents is encouraged. Moreover, sexual health programs should place increased focus on adolescent sexual cognition to help encourage discussion between adolescents and their parents regarding sexual activity and issues.

  6. Optimization of a GO2/GH2 Impinging Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    2001-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. 
Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
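    The desirability-function machinery referenced above is a standard response-surface tool (Derringer-Suich style): each response is mapped onto [0, 1] and the composite is a weighted geometric mean, so increasing one variable's weight emphasizes it in the trade study. The following is a generic sketch of that tool, not method i itself.

```python
import numpy as np

def desirability_smaller_is_better(y, y_best, y_worst, s=1.0):
    """Desirability for a response to be minimized: 1 at y_best,
    0 at y_worst; the exponent s shapes the ramp between them."""
    d = (y_worst - y) / (y_worst - y_best)
    return np.clip(d, 0.0, 1.0) ** s

def composite_desirability(ds, weights=None):
    """Weighted geometric mean of individual desirabilities; any d = 0
    drives the composite to 0, enforcing all constraints at once."""
    ds = np.asarray(ds, float)
    w = np.ones_like(ds) if weights is None else np.asarray(weights, float)
    return float(np.prod(ds ** (w / w.sum())))
```

Raising the weight on, say, injector heat flux relative to performance reproduces the kind of trade study the abstract describes: the optimizer then maximizes the composite over the design variables.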

  7. Timing analysis by model checking

    NASA Technical Reports Server (NTRS)

    Naydich, Dimitri; Guaspari, David

    2000-01-01

    The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.

  8. CORSSTOL: Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity

    NASA Technical Reports Server (NTRS)

    Finckenor, J.; Bevill, M.

    1995-01-01

    Cylinder Optimization of Rings, Skin, and Stringers with Tolerance (CORSSTOL) sensitivity is a design optimization program incorporating a method to examine the effects of user-provided manufacturing tolerances on weight and failure. CORSSTOL gives designers a tool to determine tolerances based on need. This is a decisive way to choose the best design among several manufacturing methods with differing capabilities and costs. CORSSTOL initially optimizes a stringer-stiffened cylinder for weight without tolerances. The skin and stringer geometry are varied, subject to stress and buckling constraints. Then the same analysis and optimization routines are used to minimize the maximum material condition weight subject to the least favorable combination of tolerances. The adjusted optimum dimensions are provided with the weight and constraint sensitivities of each design variable. The designer can immediately identify critical tolerances. The safety of parts made out of tolerance can also be determined. During design and development of weight-critical systems, design/analysis tools that provide product-oriented results are of vital significance. The development of this program and methodology provides designers with an effective cost- and weight-saving design tool. The tolerance sensitivity method can be applied to any system defined by a set of deterministic equations.

  9. Maximum life spiral bevel reduction design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-01-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  10. Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.

    2003-01-01

    Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. However, these analyses at present are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined analysis of aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Out of the total 11 design parameters, 6 are considered as having probabilistic variation. 
The six parameters are space-to-chord ratio (SBYC), stagger angle (GAMA), elastic axis (ELAXS), Mach number (MACH), mass ratio (MASSR), and frequency ratio (WHWB). The cascade is considered to be in subsonic flow with Mach 0.7. The results of the probabilistic aeroelastic analysis are the probability density function of predicted aerodynamic damping and frequency for flutter and the response amplitudes for forced response.

  11. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional MC and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. 
The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
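    The non-stationary Monte Carlo idea can be sketched compactly: give each year of the design lifespan its own GEV parameters (below, a hypothetical linear trend in the location parameter) and sample many lifespan realizations to estimate the distribution of the lifespan maximum. All parameter values here are illustrative, not fitted to the study's watersheds.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

def lifespan_maxima(n_realizations, lifespan_years, shape, loc0, loc_trend, scale):
    """Sample lifespan maxima when each year of the design lifespan has its
    own GEV location parameter (illustrative linear trend). Note scipy's
    shape convention: c = -xi relative to the usual GEV parameterization."""
    years = np.arange(lifespan_years)
    locs = loc0 + loc_trend * years
    # One row per realization, one column per year of the lifespan.
    draws = genextreme.rvs(shape, loc=locs, scale=scale,
                           size=(n_realizations, lifespan_years),
                           random_state=rng)
    return draws.max(axis=1)

# With an upward trend in location, the design value for a given
# percentile exceeds the stationary estimate.
stationary = lifespan_maxima(2000, 25, -0.1, 100.0, 0.0, 15.0)
trended = lifespan_maxima(2000, 25, -0.1, 100.0, 0.5, 15.0)
```

Comparing percentiles of `stationary` and `trended` reproduces the qualitative result above: the non-stationary estimate depends on the design lifespan, while the stationary one does not.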

  12. Optimized Reduction of Unsteady Radial Forces in a Singlechannel Pump for Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang

    2016-11-01

    A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.

  13. Partial gravity habitat study: With application to lunar base design

    NASA Technical Reports Server (NTRS)

    Capps, Stephen; Lorandos, Jason; Akhidime, Eval; Bunch, Michael; Lund, Denise; Moore, Nathan; Murakawa, Kio; Bell, Larry; Trotti, Guillermo; Neubek, Deb

    1989-01-01

    Comprehensive design requirements associated with designing habitats for humans in a partial gravity environment were investigated and then applied to a lunar base design. Other potential sites for application include planetary surfaces such as Mars, variable gravity research facilities, or a rotating spacecraft. Design requirements for partial gravity environments include: (1) locomotion changes in less than normal Earth gravity; (2) facility design issues, such as interior configuration, module diameter and geometry; and (3) volumetric requirements based on the previous as well as psychological issues involved in prolonged isolation. For application to a Lunar Base, it was necessary to study the exterior architecture and configuration to ensure optimum circulation patterns while providing dual egress. Radiation protection issues were addressed to provide a safe and healthy environment for the crew, and finally, the overall site was studied to locate all associated facilities in context with the habitat. Mission planning was not the purpose of this study; therefore, a Lockheed scenario was used as an outline for the Lunar Base application, which was then modified to meet the project needs.

  14. Natural streamflow simulation for two largest river basins in Poland: a baseline for identification of flow alterations

    NASA Astrophysics Data System (ADS)

    Piniewski, Mikołaj

    2016-05-01

    The objective of this study was to apply a previously developed large-scale and high-resolution SWAT model of the Vistula and the Odra basins, calibrated with a focus on natural flow simulation, in order to assess the impact of three different dam reservoirs on streamflow using the Indicators of Hydrologic Alteration (IHA). A tailored spatial calibration approach was designed, in which calibration was focused on a large set of relatively small non-nested sub-catchments with semi-natural flow regime. These were classified into calibration clusters based on flow statistics similarity. After calibration and validation gave overall positive results, the calibrated parameter values were transferred to the remaining part of the basins using an approach based on hydrological similarity of donor and target catchments. The calibrated model was applied in three case studies with the purpose of assessing the effect of dam reservoirs (the Włocławek, Siemianówka and Czorsztyn Reservoirs) on streamflow alteration. Both the assessment based on gauged streamflow (Before-After design) and the one based on simulated natural streamflow showed large alterations in selected flow statistics related to magnitude, duration, high and low flow pulses and rate of change. Some benefits of using a large-scale and high-resolution hydrological model for the assessment of streamflow alteration include: (1) providing an alternative or complementary approach to the classical Before-After designs, (2) isolating the climate variability effect from the dam (or any other source of alteration) effect, and (3) providing a practical tool that can be applied at a range of spatial scales over a large area such as a country, in a uniform way. Thus, the presented approach can be applied for designing more natural flow regimes, which is crucial for river and floodplain ecosystem restoration in the context of the European Union's policy on environmental flows.

  15. Automated trajectory planning for multiple-flyby interplanetary missions

    NASA Astrophysics Data System (ADS)

    Englander, Jacob

    Many space mission planning problems may be formulated as hybrid optimal control problems (HOCP), i.e. problems that include both real-valued variables and categorical variables. In interplanetary trajectory design problems the categorical variables will typically specify the sequence of planets at which to perform flybys, and the real-valued variables will represent the launch date, flight times between planets, magnitudes and directions of thrust, flyby altitudes, etc. The contribution of this work is a framework for the autonomous optimization of multiple-flyby interplanetary trajectories. The trajectory design problem is converted into an HOCP with two nested loops: an "outer-loop" that finds the sequence of flybys and an "inner-loop" that optimizes the trajectory for each candidate flyby sequence. The problem of choosing a sequence of flybys is posed as an integer programming problem and solved using a genetic algorithm (GA). This is an especially difficult problem to solve because GAs normally operate on a fixed-length set of decision variables. Since in interplanetary trajectory design the number of flyby maneuvers is not known a priori, it was necessary to devise a method of parameterizing the problem such that the GA can evolve a variable-length sequence of flybys. A novel "null gene" transcription was developed to meet this need. Then, for each candidate sequence of flybys, a trajectory must be found that visits each of the flyby targets and arrives at the final destination while optimizing some cost metric, such as minimizing Δv or maximizing the final mass of the spacecraft. Three different classes of trajectory are described in this work, each of which required a different physical model and optimization method. The choice of a trajectory model and optimization method is especially challenging because of the nature of the hybrid optimal control problem.
Because the trajectory optimization problem is generated in real time by the outer-loop, the inner-loop optimization algorithm cannot require any a priori information and must always return a solution. In addition, the upper and lower bounds on each decision variable cannot be chosen a priori by the user because the user has no way to know what problem will be solved. Instead, a method of choosing upper and lower bounds via a set of simple rules was developed and used for all three types of trajectory optimization problem. Many optimization algorithms were tested and discarded until suitable algorithms were found for each type of trajectory. The first class of trajectories uses chemical propulsion and may only apply a Δv at the periapse of each flyby. These Multiple Gravity Assist (MGA) trajectories are optimized using a cooperative algorithm of Differential Evolution (DE) and Particle Swarm Optimization (PSO). The second class of trajectories, known as Multiple Gravity Assist with one Deep Space Maneuver (MGA-DSM), also uses chemical propulsion, but instead of maneuvering at the periapse of each flyby as in the MGA case, a maneuver is applied at a free point along each planet-to-planet arc, i.e. there is one maneuver for each pair of flybys. MGA-DSM trajectories are parameterized by more variables than MGA trajectories, and so the cooperative algorithm of DE and PSO that was used to optimize MGA trajectories was found to be less effective when applied to MGA-DSM. Instead, either PSO or DE alone was found to be more effective. The third class of trajectories addressed in this work are those using continuous-thrust propulsion.
Continuous-thrust trajectory optimization problems are more challenging than impulsive-thrust problems because the control variables are a continuous time series rather than a small set of parameters, and because the spacecraft does not follow a conic-section trajectory, leading to a large number of nonlinear constraints that must be satisfied to ensure that the spacecraft obeys the equations of motion. Many models and optimization algorithms were applied, including direct transcription with nonlinear programming (DTNLP), the inverse-polynomial shape-based method, and feasible region analysis. However, the only physical model and optimization method that proved reliable enough was the Sims-Flanagan transcription coupled with a nonlinear programming solver and the monotonic basin hopping (MBH) global search heuristic. The methods developed here are demonstrated by optimizing a set of example trajectories, including a recreation of the Cassini mission, a Galileo-like mission, and conceptual continuous-thrust missions to Jupiter, Mercury, and Uranus.

  16. Additional security features for optically variable foils

    NASA Astrophysics Data System (ADS)

    Marshall, Allan C.; Russo, Frank

    1998-04-01

    For thousands of years, man has exploited the attraction and radiance of pure gold to adorn articles of great significance. Today, designers decorate packaging with metallic gold foils to maintain the prestige of luxury items such as perfumes, chocolates, wine and whisky, and to add visible appeal and value to a wide range of products. However, today's products do not call for the hand-beaten gold leaf of the Ancient Egyptians; instead, a rapid production technology exists which makes use of accurately coated thin polymer films and vacuum-deposited metallic layers. Stamping foil technology is highly versatile since several different layers may be combined into one product, each providing a different function. Not only can a foil bring visual appeal to an article, it can provide physical and chemical resistance properties and also protect an article from human forms of interference, such as counterfeiting, copying or tampering. Stamping foils have proved to be a highly effective vehicle for applying optical devices to items requiring this type of protection. Credit cards, bank notes, personal identification documents and, more recently, high-value packaged items such as software and perfumes are protected by optically variable devices applied using stamping foil technology.

  17. Fuzzy logic control of rotating drum bioreactor for improved production of amylase and protease enzymes by Aspergillus oryzae in solid-state fermentation.

    PubMed

    Sukumprasertsri, Monton; Unrean, Pornkamol; Pimsamarn, Jindarat; Kitsubun, Panit; Tongta, Anan

    2013-03-01

    In this study, we compared the performance of two control systems, fuzzy logic control (FLC) and conventional control (CC). The control systems were applied for controlling temperature and substrate moisture content in a solid-state fermentation for the biosynthesis of amylase and protease enzymes by Aspergillus oryzae. The fermentation process was carried out in a 200 L rotating drum bioreactor. Three factors affecting temperature and moisture content in the solid-state fermentation were considered: inlet air velocity, speed of the rotating drum bioreactor, and spray water addition. The fuzzy logic control system was designed using four input variables: air velocity, substrate temperature, fermentation time, and rotation speed. The temperature was controlled by two variables, inlet air velocity and rotational speed of the bioreactor, while the moisture content was controlled by spray water. Experimental results confirmed that the FLC system could control the temperature and moisture content of the substrate more effectively than the CC system, resulting in increased enzyme production by A. oryzae. Thus, fuzzy logic control is a promising control system that can be applied for enhanced production of enzymes in solid-state fermentation.
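
    The fuzzy control idea can be sketched with a single input-output pair; the membership breakpoints and rule outputs below are invented for illustration, whereas the controller described above uses four inputs:

```python
# Minimal Sugeno-style fuzzy controller sketch for a substrate-temperature
# loop: two fuzzy rules map temperature to an inlet-air-velocity setpoint.
# All numbers are illustrative assumptions, not the paper's rule base.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def air_velocity_command(temp_c):
    """Weighted-average (Sugeno) defuzzification of two rules."""
    mu_ok  = tri(temp_c, 25.0, 30.0, 35.0)   # "temperature is OK"
    mu_hot = tri(temp_c, 30.0, 40.0, 50.0)   # "temperature is high"
    # Rule consequents (singletons): OK -> 0.5 m/s, high -> 2.0 m/s
    num = mu_ok * 0.5 + mu_hot * 2.0
    den = mu_ok + mu_hot
    return num / den if den > 0 else 0.5
```

Between the two rule peaks the command blends smoothly, which is the practical appeal of fuzzy control over hard thresholds.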

  18. Improved HPLC method with the aid of chemometric strategy: determination of loxoprofen in pharmaceutical formulation.

    PubMed

    Venkatesan, P; Janardhanan, V Sree; Muralidharan, C; Valliappan, K

    2012-06-01

    Loxoprofen is a nonsteroidal anti-inflammatory drug that acts by inhibiting the cyclo-oxygenase isoforms 1 and 2. In this study, an improved RP-HPLC method was developed for the quantification of loxoprofen in a pharmaceutical dosage form. For that purpose, an experimental design approach was employed. Three factors were selected as independent variables (organic modifier, pH of the mobile phase and flow rate) from the preliminary study, and three responses were selected as dependent variables (loxoprofen retention factor, resolution between loxoprofen and probenecid, and retention time of probenecid). For the method development and optimization step, Derringer's desirability function was applied to simultaneously optimize the three chosen responses. The procedure allowed deduction of the optimal conditions: acetonitrile:water (53:47, v/v), with the pH of the mobile phase adjusted to 2.9 with orthophosphoric acid. The separation was achieved in less than 4 minutes. The method was applied in the quality control of commercial tablets and showed good agreement between the experimental data and predicted values throughout the studied parameter space. The optimized assay conditions were validated according to International Conference on Harmonisation guidelines to confirm specificity, linearity, accuracy and precision.
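
    Derringer's desirability function combines several responses by mapping each onto [0, 1] and taking their geometric mean; a sketch with invented targets and bounds (not the study's values):

```python
# Sketch of Derringer's desirability approach: per-response desirabilities
# combined by geometric mean. Bounds, weights and example responses are
# illustrative assumptions.

def d_maximize(y, low, high, weight=1.0):
    """Desirability for a larger-the-better response (e.g. resolution)."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def d_minimize(y, low, high, weight=1.0):
    """Desirability for a smaller-the-better response (e.g. run time)."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return ((high - y) / (high - low)) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# e.g. a resolution to maximize and a retention time to minimize:
D = overall_desirability([d_maximize(2.5, 1.5, 3.0), d_minimize(4.0, 2.0, 6.0)])
```

The optimizer then searches the factor space for the settings that maximize D; any response with zero desirability drives D to zero, which is the point of the geometric mean.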

  19. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
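
    The SIMEX approach mentioned above can be sketched on synthetic data: extra measurement error is added at increasing multiples λ, the attenuated naive slope is recorded, and a quadratic in λ is extrapolated back to λ = -1. All numbers below are simulated purely to show the mechanics:

```python
# SIMEX sketch on synthetic data: classical measurement error attenuates the
# naive regression slope; extrapolating the slope-vs-lambda curve to
# lambda = -1 partially recovers the true effect. Everything here is a toy.
import random

random.seed(7)

n, sigma_u, true_beta = 2000, 1.0, 2.0
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [true_beta * x + random.gauss(0, 0.5) for x in x_true]
x_obs = [x + random.gauss(0, sigma_u) for x in x_true]  # error-prone exposure

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def mean_slope(lam, reps=40):
    """Average naive slope after adding extra error of variance lam*sigma_u**2."""
    total = 0.0
    for _ in range(reps):
        x_sim = [x + random.gauss(0, lam ** 0.5 * sigma_u) for x in x_obs]
        total += ols_slope(x_sim, y)
    return total / reps

s0 = ols_slope(x_obs, y)            # naive slope, attenuated toward zero
s1, s2 = mean_slope(1.0), mean_slope(2.0)
# Quadratic through (0, s0), (1, s1), (2, s2), evaluated at lambda = -1:
beta_simex = 3 * s0 - 3 * s1 + s2
```

With the error variance chosen here the naive slope sits near half the true value, and the extrapolated estimate moves much of the way back; the choice of extrapolant (quadratic vs. rational) governs the remaining bias.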

  20. Organisational determinants of production and efficiency in general practice: a population-based study.

    PubMed

    Olsen, Kim Rose; Gyrd-Hansen, Dorte; Sørensen, Torben Højmark; Kristensen, Troels; Vedsted, Peter; Street, Andrew

    2013-04-01

    A shortage of general practitioners (GPs) and an increased political focus on primary care have reinforced interest in efficiency analysis in the Danish primary care sector. This paper assesses the association between organisational factors of general practices and production and efficiency. We assume that production and efficiency can be modelled using a behavioural production function. We apply the Battese and Coelli (Empir Econ 20:325-332, 1995) estimator to decompose exogenous variables into those determining the production frontier and those determining the individual GP's distance to this frontier. Two different measures of practice outputs (number of office visits and total production) were applied and the results compared. The results indicate that nurses do not substitute for GPs in production. The production function exhibited constant returns to scale. The mean level of efficiency was between 0.79 and 0.84, and list size was the most important determinant of variation in efficiency levels. Nurses currently undertake other tasks than GPs, and larger practices do not lead to increased production per GP. However, a relative increase in list size increased efficiency. This indicates that organisational changes aiming to increase capacity in general practice should be carefully designed and tested.

  1. Definition study of a Variable Cycle Experimental Engine (VCEE) and associated test program and test plan

    NASA Technical Reports Server (NTRS)

    Allan, R. D.

    1978-01-01

    The Definition Study of a Variable Cycle Experimental Engine (VCEE) and Associated Test Program and Test Plan was initiated to identify the most cost-effective program for a follow-on to the AST Test Bed Program. The VCEE study defined various subscale VCEs based on different available core engine components, and a full-scale VCEE utilizing current technology. The cycles were selected, preliminary design was accomplished, and program plans and engineering costs were developed for several program options. In addition to the VCEE program plans and options, a limited effort was applied to identifying programs that could logically be accomplished on the AST Test Bed Program VCE to extend the usefulness of this test hardware. Component programs were provided that could be accomplished prior to the start of a VCEE program.

  2. Hierarchical Fuzzy Control Applied to Parallel Connected UPS Inverters Using Average Current Sharing Scheme

    NASA Astrophysics Data System (ADS)

    Singh, Santosh Kumar; Ghatak Choudhuri, Sumit

    2018-05-01

    Parallel connection of UPS inverters to enhance power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load. Interfacing of modules therefore requires an intensive design using a proper control strategy. The potential of human-intuitive Fuzzy Logic (FL) control with an imprecise system model is well known and can thus be utilised in parallel-connected UPS systems. A conventional FL controller is computationally intensive, especially with a higher number of input variables. This paper proposes the application of Hierarchical Fuzzy Logic control for a parallel-connected multi-modular inverter system, reducing the computational burden on the processor for a given switching frequency. Simulated results in the MATLAB environment and experimental verification using a Texas TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.

  3. A Randomized Clinical Trial Comparison Between Pivotal Response Treatment (PRT) and Structured Applied Behavior Analysis (ABA) Intervention for Children with Autism

    PubMed Central

    Mohammadzaheri, Fereshteh; Koegel, Lynn Kern; Rezaee, Mohammad; Rafiee, Seyed Majid

    2014-01-01

    Accumulating studies are documenting specific motivational variables that, when combined into a naturalistic teaching paradigm, can positively influence the effectiveness of interventions for children with autism spectrum disorder (ASD). The purpose of this study was to compare two ABA intervention procedures, a naturalistic approach, Pivotal Response Treatment (PRT) with a structured ABA approach in a school setting. A Randomized Clinical Trial design using two groups of children, matched according to age, sex and mean length of utterance was used to compare the interventions. The data showed that the PRT approach was significantly more effective in improving targeted and untargeted areas after three months of intervention. The results are discussed in terms of variables that produce more rapid improvements in communication for children with ASD. PMID:24840596

  4. SPACEBAR: Kinematic design by computer graphics

    NASA Technical Reports Server (NTRS)

    Ricci, R. J.

    1975-01-01

    The interactive graphics computer program SPACEBAR, conceived to reduce the time and complexity associated with the development of kinematic mechanisms on the design board, was described. This program allows the direct design and analysis of mechanisms right at the terminal screen. All input variables, including linkage geometry, stiffness, and applied loading conditions, can be fed into or changed at the terminal and may be displayed in three dimensions. All mechanism configurations can be cycled through their range of travel and viewed in their various geometric positions. Output data includes geometric positioning in orthogonal coordinates of each node point in the mechanism, velocity and acceleration of the node points, and internal loads and displacements of the node points and linkages. All analysis calculations take at most a few seconds to complete. Output data can be viewed at the scope and also printed at the discretion of the user.

  5. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
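
    The notion of probabilistic confidence can be sketched by Monte Carlo: if the true failure probability is uncertain (here assumed lognormal about the point estimate, an illustrative choice), the confidence is the probability that it does not exceed a target value:

```python
# Monte Carlo sketch of "probabilistic confidence" for an uncertain failure
# probability. The lognormal uncertainty model and all numbers are
# illustrative assumptions, not the paper's formulation.
import math
import random

random.seed(42)

def probabilistic_confidence(median_pf, log_sd, target_pf, trials=100_000):
    """P(true pf <= target_pf) when ln(pf) ~ Normal(ln(median_pf), log_sd**2)."""
    hits = sum(
        1 for _ in range(trials)
        if math.exp(random.gauss(math.log(median_pf), log_sd)) <= target_pf
    )
    return hits / trials

# An order-of-magnitude margin between the estimate and the target gives
# high, but not complete, confidence:
conf = probabilistic_confidence(1e-4, 1.0, 1e-3)
```

If the target equals the median estimate the confidence collapses to about one half, which is why estimation uncertainty, not just the point estimate, must enter risk-acceptance decisions.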

  6. A depictive neural model for the representation of motion verbs.

    PubMed

    Rao, Sunil; Aleksander, Igor

    2011-11-01

    In this paper, we present a depictive neural model for the representation of motion verb semantics in neural models of visual awareness. The problem of modelling motion verb representation is shown to be one of function application, mapping a set of given input variables defining the moving object and the path of motion to a defined output outcome in the motion recognition context. The particular function-applicative implementation and consequent recognition model design presented are seen as arising from a noun-adjective recognition model enabling the recognition of colour adjectives as applied to a set of shapes representing objects to be recognised. The presence of such a function application scheme and a separately implemented position identification and path labelling scheme are accordingly shown to be the primitives required to enable the design and construction of a composite depictive motion verb recognition scheme. Extensions to the presented design to enable the representation of transitive verbs are also discussed.

  7. Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J.-W.

    2001-01-01

    The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing with single-discipline analysis, the method as implemented here may not show a significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.

  8. Multivariable control altitude demonstration on the F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Dehoff, R. L.; Hackney, R. D.

    1979-01-01

    The F100 Multivariable Control Synthesis (MVCS) program was aimed at demonstrating the benefits of LQR synthesis theory in the design of a multivariable engine control system for operation throughout the flight envelope. The advantages of such procedures include: (1) enhanced performance from cross-coupled controls, (2) maximum use of engine variable geometry, and (3) a systematic design procedure that can be applied efficiently to new engine systems. The control system designed under the MVCS program for the Pratt & Whitney F100 turbofan engine is described. Basic components of the control include: (1) a reference value generator for deriving a desired equilibrium state and an approximate control vector, (2) a transition model to produce compatible reference point trajectories during gross transients, (3) gain schedules for producing feedback terms appropriate to the flight condition, and (4) integral switching logic to produce acceptable steady-state performance without exceeding engine operating limits.
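
    Item (3), gain scheduling, amounts to interpolating feedback gains designed at a few flight conditions; a sketch with an invented scheduling variable and gain values (a real schedule interpolates full gain matrices over Mach and altitude):

```python
# Gain-scheduling sketch: linearly interpolate a feedback gain between design
# points, clamping outside the design envelope. Design points are invented.

DESIGN_POINTS = [        # (scheduling variable, feedback gain)
    (0.3, 1.8),
    (0.6, 1.2),
    (0.9, 0.7),
]

def scheduled_gain(sched):
    """Piecewise-linear interpolation of the gain over the schedule."""
    pts = DESIGN_POINTS
    if sched <= pts[0][0]:
        return pts[0][1]
    if sched >= pts[-1][0]:
        return pts[-1][1]
    for (x0, k0), (x1, k1) in zip(pts, pts[1:]):
        if x0 <= sched <= x1:
            t = (sched - x0) / (x1 - x0)
            return k0 + t * (k1 - k0)
```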

  9. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  10. Control Design for an Advanced Geared Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes W.; Litt, Jonathan S.

    2017-01-01

    This paper describes the design process for the control system of an advanced geared turbofan engine. This process is applied to a simulation that is representative of a 30,000 pound-force thrust class concept engine with two main spools, ultra-high bypass ratio, and a variable area fan nozzle. Control system requirements constrain the non-linear engine model as it operates throughout its flight envelope of sea level to 40,000 feet and from 0 to 0.8 Mach. The purpose of this paper is to review the engine control design process for an advanced turbofan engine configuration. The control architecture selected for this project was developed from literature and reflects a configuration that utilizes a proportional integral controller with sets of limiters that enable the engine to operate safely throughout its flight envelope. Simulation results show the overall system meets performance requirements without exceeding operational limits.
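
    The "proportional integral controller with sets of limiters" architecture can be sketched as a main PI loop whose fuel-flow command is min/max-selected against limit regulators; the gains, limits, and plant interface below are invented for illustration:

```python
# Sketch of PI control with limit logic: the speed-tracking command is
# min-selected against a temperature-limit regulator, then clamped to the
# actuator range. All numbers are illustrative assumptions.

class PI:
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def update(self, error, dt):
        self.integral += self.ki * error * dt
        return self.kp * error + self.integral

def fuel_command(speed_err, t4_margin, dt, main, t4_limiter,
                 wf_min=0.1, wf_max=5.0):
    """Min-select against the temperature limiter, then clamp to actuator range."""
    wf = main.update(speed_err, dt)
    wf = min(wf, t4_limiter.update(t4_margin, dt))  # protect turbine temperature
    return max(wf_min, min(wf, wf_max))

# A large speed error with a tight temperature margin is limited:
wf = fuel_command(4.0, 1.0, 0.1, PI(1.0, 0.0), PI(1.0, 0.0))  # -> 1.0
```

A production implementation would add anti-windup on the inactive loops; the min/max selection itself is the classic mechanism for honoring limits without switching controllers.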

  11. Evaluation on Compressive Characteristics of Medical Stents Applied by Mesh Structures

    NASA Astrophysics Data System (ADS)

    Hirayama, Kazuki; He, Jianmei

    2017-11-01

    There are concerns about strength reduction and fatigue fracture due to stress concentration in currently used medical stents. To address these problems, meshed stents applying mesh structures are of interest for achieving long life and high strength in medical stents. The purpose of this study is to design basic mesh shapes to obtain three-dimensional (3D) meshed stent models for mechanical property evaluation. The influence of the introduced design variables on the compressive characteristics of meshed stent models is evaluated through finite element analysis using the ANSYS Workbench code. From the analytical results, the compressive stiffness changes periodically with compressive direction, so the average over directions is introduced as the mean compressive stiffness of the meshed stents. Secondly, the compressive flexibility of meshed stents can be improved by increasing the angle proportional to the arm length of the basic mesh shape. By increasing the number of basic mesh shapes arranged in the stent's circumferential direction, the compressive rigidity of the meshed stent tends to increase. Finally, reducing the mesh line width is found effective in improving the compressive flexibility of meshed stents.

  12. Effect of gamma irradiation and storage time on microbial growth and physicochemical characteristics of pumpkin (Cucurbita Moschata Duchesne ex Poiret) puree.

    PubMed

    Gliemmo, María F; Latorre, María E; Narvaiz, Patricia; Campos, Carmen A; Gerschenson, Lía N

    2014-01-01

    The effect of gamma irradiation (0-2 kGy) and storage time (0-28 days) on microbial growth and physicochemical characteristics of a packed pumpkin puree was studied. For that purpose, a factorial design was applied. The puree, which contained potassium sorbate, glucose and vanillin, was stored at 25°C. Gamma irradiation diminished and storage time increased microbial growth. A synergistic effect of both variables on microbial growth was observed. Storage time decreased the pH and color of the purees. Sorbate content decreased with storage time and gamma irradiation. Mathematical models of microbial growth generated by the factorial design allowed estimating that a puree absorbing 1.63 kGy would have a shelf-life of 4 days. In order to improve this time, some changes in the applied hurdles were assayed. These included a thermal treatment before irradiation, a reduction of the irradiation dose to 0.75 kGy and a decrease in storage temperature to 20°C. As a result, the shelf-life of the purees increased to 28 days.
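
    Using a fitted factorial-design model to estimate shelf-life can be sketched as follows; the first-order model form and every coefficient are invented here (the study's fitted models are not reproduced in the abstract):

```python
# Sketch: a factorial-design growth model (dose, time, interaction) solved
# for the storage time at which the count reaches a spoilage limit.
# Coefficients B0..B_INT are illustrative assumptions.

B0, B_DOSE, B_TIME, B_INT = 3.0, -1.2, 0.12, -0.03

def log_count(dose_kgy, days):
    """Modeled log10 CFU/g as a function of irradiation dose and storage time."""
    return B0 + B_DOSE * dose_kgy + B_TIME * days + B_INT * dose_kgy * days

def shelf_life(dose_kgy, limit=4.0):
    """Days until the model reaches the spoilage limit (None if it never does)."""
    slope = B_TIME + B_INT * dose_kgy   # growth rate shrinks with dose
    if slope <= 0:
        return None
    return max((limit - (B0 + B_DOSE * dose_kgy)) / slope, 0.0)
```

With these toy coefficients the dose both lowers the initial count and slows growth, so shelf-life lengthens with dose, the qualitative behavior reported above.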

  13. The state of RT-quantitative PCR: firsthand observations of implementation of minimum information for the publication of quantitative real-time PCR experiments (MIQE).

    PubMed

    Taylor, Sean C; Mrkusich, Eli M

    2014-01-01

    In the past decade, the techniques of quantitative PCR (qPCR) and reverse transcription (RT)-qPCR have become accessible to virtually all research labs, producing valuable data for peer-reviewed publications and supporting exciting research conclusions. However, the experimental design and validation processes applied to the associated projects are the result of historical biases adopted by individual labs that have evolved and changed since the inception of the techniques and associated technologies. This has resulted in wide variability in the quality, reproducibility and interpretability of published data as a direct result of how each lab has designed their RT-qPCR experiments. The 'minimum information for the publication of quantitative real-time PCR experiments' (MIQE) was published to provide the scientific community with a consistent workflow and key considerations to perform qPCR experiments. We use specific examples to highlight the serious negative ramifications for data quality when the MIQE guidelines are not applied and include a summary of good and poor practices for RT-qPCR. © 2013 S. Karger AG, Basel.
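
    One calculation for which MIQE-compliant reporting of efficiencies and reference genes matters is relative quantification by the widely used 2^-ΔΔCq method; a sketch with illustrative Cq values:

```python
# 2^-ddCq relative quantification sketch: normalize a target gene to a
# reference gene in treated vs. control samples. Cq values are illustrative;
# the method also assumes ~100% amplification efficiency for both assays.

def ddcq_fold_change(cq_target_treated, cq_ref_treated,
                     cq_target_control, cq_ref_control):
    dcq_treated = cq_target_treated - cq_ref_treated
    dcq_control = cq_target_control - cq_ref_control
    return 2.0 ** -(dcq_treated - dcq_control)

# A target that comes up 2 cycles earlier (relative to the reference) in the
# treated sample is ~4-fold up-regulated:
fold = ddcq_fold_change(22.0, 18.0, 24.0, 18.0)
```

MIQE asks that the validation behind each input here (reference-gene stability, efficiency, technical replicates) be reported, precisely because errors propagate exponentially through this formula.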

  14. Enhancement of docosahexaenoic acid production by Schizochytrium SW1 using response surface methodology

    NASA Astrophysics Data System (ADS)

    Nazir, Mohd Yusuf Mohd; Al-Shorgani, Najeeb Kaid Nasser; Kalil, Mohd Sahaid; Hamid, Aidil Abdul

    2015-09-01

    In this study, three factors (fructose concentration, agitation speed and monosodium glutamate (MSG) concentration) were optimized to enhance DHA production by Schizochytrium SW1 using response surface methodology (RSM). A central composite design was applied as the experimental design and analysis of variance (ANOVA) was used to analyze the data. The experiments were conducted using 500 mL flasks with a 100 mL working volume at 30°C for 96 hours. ANOVA revealed that the process was adequately represented by the quadratic model, which was significant (p<0.0001), and that two of the factors, namely agitation speed and MSG concentration, significantly affected DHA production (p<0.005). The level of influence of each variable and a quadratic polynomial equation for DHA production were obtained by multiple regression analysis. The estimated optimum conditions for maximizing DHA production by SW1 were 70 g/L fructose, 250 rpm agitation speed and 12 g/L MSG. Consequently, the quadratic model was validated by applying the estimated optimum conditions, which confirmed the model's validity; 52.86% DHA was produced.
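
    A fitted second-order RSM model is typically searched for its maximum over the coded design region; a sketch with invented coefficients (not the study's fitted model):

```python
# RSM sketch: evaluate an invented second-order polynomial in coded units
# (-1 .. +1) and locate its maximum by a coarse grid search.

def quadratic_response(x1, x2, x3):
    """Illustrative fitted model in coded units; coefficients are assumptions."""
    return (10.0 + 0.4 * x1 + 1.1 * x2 + 0.9 * x3
            - 0.8 * x1 * x1 - 1.5 * x2 * x2 - 1.2 * x3 * x3
            + 0.3 * x2 * x3)

def grid_optimum(step=0.25):
    """Best (response, point) over the coded cube [-1, 1]^3."""
    grid = [i * step for i in range(-4, 5)]
    return max((quadratic_response(a, b, c), (a, b, c))
               for a in grid for b in grid for c in grid)

y_max, (x1_opt, x2_opt, x3_opt) = grid_optimum()
```

In practice the stationary point is found analytically from the fitted coefficients, then decoded from coded units back to real factor levels (g/L, rpm); the grid search here just makes the idea concrete.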

  15. Evaluation of the Uncertainty in JP-7 Kinetics Models Applied to Scramjets

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    2017-01-01

    One of the challenges of designing and flying a scramjet-powered vehicle is the difficulty of preflight testing. Ground tests at realistic flight conditions introduce several sources of uncertainty to the flow that must be addressed. For example, the scales of the available facilities limit the size of vehicles that can be tested, and so performance metrics for larger flight vehicles must be extrapolated from ground tests at smaller scales. To create the correct flow enthalpy for higher Mach number flows, most tunnels use a heater that introduces vitiates into the flow. At these conditions, the effects of the vitiates on the combustion process are of particular interest to the engine designer, since the ground test results must be extrapolated to flight conditions. In this paper, the uncertainty of the cracked JP-7 chemical kinetics used in the modeling of a hydrocarbon-fueled scramjet was investigated. The factors identified as contributing to uncertainty in the combustion process were the level of flow vitiation, the uncertainty of the kinetic model coefficients and the variation of flow properties between ground testing and flight. The method employed was to run simulations of small unit problems and identify which variables were the principal sources of uncertainty for the mixture temperature. Then, using this resulting subset of all the variables, the effects of the uncertainty caused by the chemical kinetics on a representative scramjet flowpath for both vitiated (ground) and nonvitiated (flight) flows were investigated. The simulations showed that only a few of the kinetic rate equations contribute to the uncertainty in the unit problem results, and when applied to the representative scramjet flowpath, the resulting temperature variability was on the order of 100 K.
Both the vitiated and clean air results showed very similar levels of uncertainty, and the difference between the mean properties was generally within the predicted range of uncertainty.
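
    The uncertainty-propagation step can be sketched as Monte Carlo sampling of multiplicative perturbations on a few dominant rate coefficients, recording the spread of a scalar output; the model below is a toy saturating heat-release relation, not the paper's CFD flowpath:

```python
# Toy Monte Carlo sketch: lognormal perturbations on a few dominant rate
# coefficients propagated to an invented "exit temperature" model. All
# numbers and the model form are illustrative assumptions.
import random

random.seed(3)

def combustor_exit_temp(rate_factors, t_in=1200.0):
    """Temperature rise scales with an effective (geometric-mean) rate."""
    k_eff = 1.0
    for f in rate_factors:
        k_eff *= f
    k_eff **= 1.0 / len(rate_factors)
    return t_in + 900.0 * k_eff / (1.0 + k_eff)   # saturating, illustrative

def temperature_spread(log10_sd=0.15, n_rates=3, trials=5000):
    """Mean and standard deviation of exit temperature under rate uncertainty."""
    temps = []
    for _ in range(trials):
        factors = [10 ** random.gauss(0.0, log10_sd) for _ in range(n_rates)]
        temps.append(combustor_exit_temp(factors))
    mean = sum(temps) / len(temps)
    sd = (sum((t - mean) ** 2 for t in temps) / len(temps)) ** 0.5
    return mean, sd

mean_t, sd_t = temperature_spread()
```

Restricting the sampled set to the few rate equations that matter, as the unit-problem screening above suggests, is what keeps such a study affordable when each sample is a full flowpath simulation.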

  16. Optimization of Physical Conditions for the Aqueous Extraction of Antioxidant Compounds from Ginger (Zingiber officinale) Applying a Box-Behnken Design.

    PubMed

    Ramírez-Godínez, Juan; Jaimez-Ordaz, Judith; Castañeda-Ovando, Araceli; Añorve-Morga, Javier; Salazar-Pereda, Verónica; González-Olivares, Luis Guillermo; Contreras-López, Elizabeth

    2017-03-01

    Since ancient times, ginger (Zingiber officinale) has been widely used for culinary and medicinal purposes. This rhizome possesses several chemical constituents; most of them present antioxidant capacity due mainly to the presence of phenolic compounds. Thus, the physical conditions for the optimal extraction of antioxidant components of ginger were investigated by applying a Box-Behnken experimental design. Extracts of ginger were prepared using water as solvent in a conventional solid-liquid extraction. The analyzed variables were time (5, 15 and 25 min), temperature (20, 55 and 90 °C) and sample concentration (2, 6 and 10%). The antioxidant activity was measured using the 2,2-diphenyl-1-picrylhydrazyl method and a modified ferric reducing antioxidant power assay while total phenolics were measured by Folin & Ciocalteu's method. The suggested experimental design allowed the acquisition of aqueous extracts of ginger with diverse antioxidant activity (100-555 mg Trolox/100 g, 147-1237 mg Fe²⁺/100 g and 50-332 mg gallic acid/100 g). Temperature was the determining factor in the extraction of components with antioxidant activity, regardless of time and sample quantity. The optimal physical conditions that allowed the highest antioxidant activity were: 90 °C, 15 min and 2% of the sample. The correlation value between the antioxidant activity by ferric reducing antioxidant power assay and the content of total phenolics was R² = 0.83. The experimental design applied allowed the determination of the physical conditions under which ginger aqueous extracts liberate compounds with antioxidant activity. Most of them are of the phenolic type as it was demonstrated through the correlation established between different methods used to measure antioxidant capacity.
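    A three-factor Box-Behnken design like the one described above is easy to generate programmatically. The sketch below is plain Python with no external dependencies; the factor names and the number of centre replicates are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

def box_behnken(levels, n_center=3):
    """Generate a Box-Behnken design.

    levels: dict mapping factor name -> (low, mid, high).
    Each pair of factors is run at its four low/high combinations
    while every remaining factor is held at its midpoint; centre
    replicates are appended at the end.
    """
    names = list(levels)
    mid = {f: levels[f][1] for f in names}
    runs = []
    for a, b in combinations(names, 2):
        for la in (0, 2):          # low / high index for factor a
            for lb in (0, 2):      # low / high index for factor b
                run = dict(mid)
                run[a] = levels[a][la]
                run[b] = levels[b][lb]
                runs.append(run)
    runs += [dict(mid) for _ in range(n_center)]   # centre replicates
    return runs

# Factor levels quoted in the abstract: time (min), temperature (°C), sample (%)
design = box_behnken({"time": (5, 15, 25),
                      "temp": (20, 55, 90),
                      "conc": (2, 6, 10)})
print(len(design))   # 12 edge runs + 3 centre points = 15
```

    For three factors this yields the classic 15-run layout (12 edge midpoints plus centre replicates); the run order would normally be randomized before execution.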

  17. Standardizing paleoclimate variables for data-intensive science

    NASA Astrophysics Data System (ADS)

    Lockshin, S.; Morrill, C.; Gille, E.; Gross, W.; McNeill, S.; Shepherd, E.; Wahl, E. R.; Bauer, B.

    2017-12-01

    Paleoclimate data are extremely heterogeneous. Scientists routinely make hundreds of types of measurements on a variety of physical samples. This heterogeneity is one of the biggest barriers to developing and accessing exhaustive, standardized paleoclimate data products. Moreover, it hinders the use of paleo data outside of the paleoclimate specialist community. We present our progress on creating a set of standards for documenting paleoclimate variables at the World Data Service for Paleoclimatology (WDS-Paleo). The current WDS-Paleo nine-part variable naming scheme provides the foundation for this project. This framework is designed for use with all eighteen proxy and reconstruction data types archived by the WDS-Paleo. Under the guidance of advisory panels consisting of subject matter experts, we have generated controlled vocabularies for use within this framework that are specific to individual data types yet integrated across data types. These vocabularies are thorough, precise, standardized and extensible. We have applied these new controlled vocabularies to existing WDS-Paleo datasets, creating homogeneous variable metadata as well as enabling a new paleoclimate metadata search by variable that is integrated across data types. This work will allow for the reuse of studies in larger compilations to forward scientific discovery that would not be possible from any single study. It will also facilitate new, interdisciplinary uses for paleoclimate datasets.

  18. Variability of suspended-sediment concentration at tidal to annual time scales in San Francisco Bay, USA

    USGS Publications Warehouse

    Schoellhamer, D.H.

    2002-01-01

    Singular spectrum analysis for time series with missing data (SSAM) was used to reconstruct components of a 6-yr time series of suspended-sediment concentration (SSC) from San Francisco Bay. Data were collected every 15 min and the time series contained missing values that primarily were due to sensor fouling. SSAM was applied in a sequential manner to calculate reconstructed components with time scales of variability that ranged from tidal to annual. Physical processes that controlled SSC and their contribution to the total variance of SSC were (1) diurnal, semidiurnal, and other higher frequency tidal constituents (24%), (2) semimonthly tidal cycles (21%), (3) monthly tidal cycles (19%), (4) semiannual tidal cycles (12%), and (5) annual pulses of sediment caused by freshwater inflow, deposition, and subsequent wind-wave resuspension (13%). Of the total variance 89% was explained and subtidal variability (65%) was greater than tidal variability (24%). Processes at subtidal time scales accounted for more variance of SSC than processes at tidal time scales because sediment accumulated in the water column and the supply of easily erodible bed sediment increased during periods of increased subtidal energy. This large range of time scales that each contained significant variability of SSC and associated contaminants can confound design of sampling programs and interpretation of resulting data.
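    The core of singular spectrum analysis (embedding into a trajectory matrix, SVD, diagonal averaging) can be sketched in a few lines of NumPy. This minimal version does not handle missing data the way SSAM does, and the toy series, window length, and component count are illustrative assumptions.

```python
import numpy as np

def ssa_components(x, L, k=3):
    """Basic SSA: embed, decompose, and reconstruct the first k components.

    No gap handling -- the SSAM variant in the paper extends this
    to series with missing values.
    """
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column j is the window x[j:j+L]
    X = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(k):
        Xi = s[i] * np.outer(U[:, i], Vt[i])     # rank-1 piece
        # Diagonal (Hankel) averaging back to a series of length N
        rc = np.zeros(N)
        cnt = np.zeros(N)
        for r in range(L):
            for c in range(K):
                rc[r + c] += Xi[r, c]
                cnt[r + c] += 1
        comps.append(rc / cnt)
    return comps

# Toy series: slow trend plus a faster oscillation
t = np.arange(200)
x = 0.02 * t + np.sin(2 * np.pi * t / 12)
trend = ssa_components(x, L=48, k=1)[0]
```

    Summing all L reconstructed components recovers the original series exactly, which is the property that lets the variance of each time scale be partitioned as in the study.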

  19. The influence of worksite and employee variables on employee engagement in telephonic health coaching programs: a retrospective multivariate analysis.

    PubMed

    Grossmeier, Jessica

    2013-01-01

    This study assessed 11 determinants of health coaching program participation. A cross-sectional study design used secondary data to assess the role of six employee-level and five worksite-level variables on telephone-based coaching enrollment, active participation, and completion. Data was provided by a national provider of worksite health promotion program services for employers. A random sample of 34,291 employees from 52 companies was selected for inclusion in the study. Survey-based measures included age, gender, job type, health risk status, tobacco risk, social support, financial incentives, comprehensive communications, senior leadership support, cultural support, and comprehensive program design. Gender-stratified multivariate logistic regression models were applied using backwards elimination procedures to yield parsimonious prediction models for each of the dependent variables. Employees were more likely to enroll in coaching programs if they were older, female, and in poorer health, and if they were at worksites with fewer environmental supports for health, clear financial incentives for participation in coaching, more comprehensive communications, and more comprehensive programs. Once employees were enrolled, program completion was greater among those who were older, did not use tobacco, worked at a company with strong communications, and had fewer environmental supports for health. Both worksite-level and employee-level factors have significant influences on health coaching engagement, and there are gender differences in the strength of these predictors.

  20. Tracking control of air-breathing hypersonic vehicles with non-affine dynamics via improved neural back-stepping design.

    PubMed

    Bu, Xiangwei; He, Guangjun; Wang, Ke

    2018-04-01

    This study considers the design of a new back-stepping control approach for air-breathing hypersonic vehicle (AHV) non-affine models via neural approximation. The AHV's non-affine dynamics is decomposed into a velocity subsystem and an altitude subsystem, which are controlled separately, and robust adaptive tracking control laws are developed using improved back-stepping designs. Neural networks are applied to estimate the unknown non-affine dynamics, which endows the proposed controllers with satisfactory robustness against uncertainties. In comparison with existing control methodologies, the special contributions are that the non-affine issue is handled by constructing two low-pass filters based on model transformations, and that virtual controllers are treated as intermediate variables so that they are no longer needed in the back-stepping designs. Lyapunov techniques are employed to show the uniform ultimate boundedness of all closed-loop signals. Finally, simulation results are presented to verify the tracking performance and superiorities of the investigated control strategy. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.

  2. Optimal design approach for heating irregular-shaped objects in three-dimensional radiant furnaces using a hybrid genetic algorithm-artificial neural network method

    NASA Astrophysics Data System (ADS)

    Darvishvand, Leila; Kamkari, Babak; Kowsary, Farshad

    2018-03-01

    In this article, a new hybrid method based on the combination of the genetic algorithm (GA) and artificial neural network (ANN) is developed to optimize the design of three-dimensional (3-D) radiant furnaces. A 3-D irregular shape design body (DB) heated inside a 3-D radiant furnace is considered as a case study. The uniform thermal conditions on the DB surfaces are obtained by minimizing an objective function. An ANN is developed to predict the objective function value which is trained through the data produced by applying the Monte Carlo method. The trained ANN is used in conjunction with the GA to find the optimal design variables. The results show that the computational time using the GA-ANN approach is significantly less than that of the conventional method. It is concluded that the integration of the ANN with GA is an efficient technique for optimization of the radiant furnaces.
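    The surrogate-plus-GA loop described above can be illustrated with a minimal real-coded genetic algorithm. In this sketch a simple analytic bowl stands in for the trained ANN surrogate, and the operator choices (elitist selection, blend crossover, Gaussian mutation) and all parameters are illustrative assumptions, not the authors' implementation.

```python
import random

def genetic_minimize(objective, bounds, pop_size=40, generations=60,
                     mut_rate=0.2, seed=42):
    """Minimal real-coded GA. `objective` stands in for the trained
    ANN surrogate that scores a candidate vector of design variables."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, d):
        lo, hi = bounds[d]
        return min(max(v, lo), hi)

    # Random initial population within the design-variable bounds
    pop = [[rng.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        elite = scored[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * a[d] + (1 - w) * b[d] for d in range(dim)]  # blend
            if rng.random() < mut_rate:          # occasional Gaussian mutation
                d = rng.randrange(dim)
                child[d] = clip(child[d] + rng.gauss(0, 0.1), d)
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Hypothetical surrogate: a smooth bowl with its optimum at (0.3, 0.7)
best = genetic_minimize(lambda v: (v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2,
                        bounds=[(0.0, 1.0), (0.0, 1.0)])
```

    The computational saving reported in the article comes from the surrogate: each `objective` call is a cheap network evaluation rather than a Monte Carlo radiation simulation.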

  3. Improved antimicrobial compound production by a new isolate Streptomyces hygroscopicus MTCC 4003 using Plackett-Burman design and response surface methodology.

    PubMed

    Singh, Neha; Rai, Vibhuti

    2012-01-01

    An active strain, isolated from soil of Chhattisgarh, India, showed broad-spectrum antimicrobial activity against various pathogenic bacteria and fungi in glucose soybean meal broth. The strain was characterized as Streptomyces hygroscopicus MTCC 4003 based on 16S rRNA sequencing at the Microbial Type Culture Collection (MTCC), IMTECH, Chandigarh, India. The purified antimicrobial compound was identified using infrared (IR), mass, ultraviolet (UV), and ¹H and ¹³C nuclear magnetic resonance (NMR) spectra. Plackett-Burman design (PBD) and response surface methodology (RSM) were used to optimize antibiotic production. Screening with the PBD showed that four medium components (soybean meal, glucose, CaCO3 and MgSO4) had a positive effect on antibiotic production. The individual and interaction effects of these selected variables were then determined by RSM using a central composite design (CCD). By applying this statistical design strategy, antibiotic production was improved nearly tenfold (412 mg/L) compared with the unoptimized production medium (37 mg/L).
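    A 12-run Plackett-Burman screening design of the kind used in this study can be constructed from the classic generator row by cyclic shifting. A minimal sketch follows; the generator row is the standard one from Plackett & Burman (1946), and assigning design columns to the actual medium components is up to the experimenter.

```python
def plackett_burman_12():
    """12-run Plackett-Burman design for up to 11 two-level factors.

    Rows are cyclic shifts of the classic N=12 generator, plus a
    final all-low run. Columns come out balanced and mutually
    orthogonal, which is what makes main-effect screening possible.
    """
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(11)]  # cyclic shifts
    rows.append([-1] * 11)                           # final all-low run
    return rows

design = plackett_burman_12()
```

    For a four-factor screen such as the one above, four columns carry the real factors (e.g. soybean meal, glucose, CaCO3, MgSO4) and the remaining columns serve as dummy factors for error estimation.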

  4. A novel program to design siRNAs simultaneously effective to highly variable virus genomes.

    PubMed

    Lee, Hui Sun; Ahn, Jeonghyun; Jun, Eun Jung; Yang, Sanghwa; Joo, Chul Hyun; Kim, Yoo Kyum; Lee, Heuiran

    2009-07-10

    A major concern of antiviral therapy using small interfering RNAs (siRNAs) targeting RNA viral genomes is their high sequence diversity and mutation rate due to genetic instability. To overcome this problem, it is indispensable to design siRNAs targeting highly conserved regions. We thus designed CAPSID (Convenient Application Program for siRNA Design), a novel bioinformatics program to identify siRNAs targeting highly conserved regions within RNA viral genomes. From a set of input RNAs of diverse sequences, CAPSID rapidly searches conserved patterns and suggests highly potent siRNA candidates in a hierarchical manner. To validate the usefulness of this novel program, we investigated the antiviral potency of universal siRNAs against various Human enterovirus B (HEB) serotypes. Assessment of antiviral efficacy using HeLa cells clearly demonstrates that HEB-specific siRNAs exhibit protective effects against all HEBs examined. These findings strongly indicate that CAPSID can be applied to select universal antiviral siRNAs against highly divergent viral genomes.
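    The conserved-region search at the heart of such a tool can be approximated by intersecting the k-mer sets of all input sequences. This is a toy sketch, not CAPSID's actual algorithm; 19 nt is a typical siRNA target length, and the example sequences are fabricated.

```python
def conserved_kmers(sequences, k=19):
    """Return every k-mer present in all input sequences.

    A crude stand-in for a conserved-region search: any k-mer shared
    by every genome variant is a candidate universal siRNA target.
    """
    def kmers(seq):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}
    common = kmers(sequences[0])
    for seq in sequences[1:]:
        common &= kmers(seq)
    return sorted(common)

core = "AUGGCUAGCUAGGCUAAUC"        # a shared 19-nt stretch (toy data)
variants = ["GG" + core + "AAA", core + "UUUU", "C" + core]
hits = conserved_kmers(variants)
```

    A real design pipeline would then rank the surviving candidates by silencing-efficacy rules (GC content, thermodynamic asymmetry, off-target filtering) rather than returning them all.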

  5. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
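    A minimal version of the test described above is straightforward with NumPy: re-order k−1 of the variables at random to build the null distribution of the largest eigenvalue of the correlation matrix. The toy data and permutation count below are illustrative.

```python
import numpy as np

def randomization_test(data, n_perm=999, seed=0):
    """Permutation test of overall association among k variables.

    Criterion: largest eigenvalue of the correlation matrix.
    Null distribution: independently re-order k-1 of the columns,
    destroying any association while preserving each marginal.
    """
    rng = np.random.default_rng(seed)

    def stat(m):
        return np.linalg.eigvalsh(np.corrcoef(m, rowvar=False)).max()

    observed = stat(data)
    exceed = 0
    for _ in range(n_perm):
        perm = data.copy()
        for j in range(1, data.shape[1]):     # leave column 0 fixed
            rng.shuffle(perm[:, j])
        if stat(perm) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# Strongly associated toy data should give a small p-value
rng = np.random.default_rng(1)
z = rng.normal(size=200)
data = np.column_stack([z + 0.1 * rng.normal(size=200) for _ in range(4)])
lam, p = randomization_test(data)
```

    Because the statistic and its null distribution are computed from the data themselves, no distributional assumptions are needed, which is the appeal of the randomization approach.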

  6. Knowledge Discovery for Transonic Regional-Jet Wing through Multidisciplinary Design Exploration

    NASA Astrophysics Data System (ADS)

    Chiba, Kazuhisa; Obayashi, Shigeru; Morino, Hiroyuki

    Data mining is an important facet of solving multi-objective optimization problems, because it is an effective way to discover design knowledge in the large data sets such problems generate. In the present study, data mining has been performed for a large-scale and real-world multidisciplinary design optimization (MDO) to provide knowledge regarding the design space. The MDO among aerodynamics, structures, and aeroelasticity of the regional-jet wing was carried out using high-fidelity evaluation models on the adaptive range multi-objective genetic algorithm. As a result, nine non-dominated solutions were generated and used for tradeoff analysis among three objectives. All solutions evaluated during the evolution were analyzed for the tradeoffs and influence of design variables using a self-organizing map to extract key features of the design space. Although the MDO results showed inverted gull-wings as non-dominated solutions, one of the key features found by data mining was a non-gull wing geometry. When this knowledge was applied to one optimum solution, the resulting design was found to have better performance compared with the original geometry designed in the conventional manner.

  7. LQTA-QSAR: a new 4D-QSAR methodology.

    PubMed

    Martins, João Paulo A; Barbosa, Euzébio G; Pasqualoto, Kerly F M; Ferreira, Márcia M C

    2009-06-01

    A novel 4D-QSAR approach which makes use of the molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package is presented in this study. This new methodology, named LQTA-QSAR (LQTA, Laboratório de Quimiometria Teórica e Aplicada), has a module (LQTAgrid) that calculates intermolecular interaction energies at each grid point considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables or descriptors employed in a QSAR analysis. The comparison of the proposed methodology to other 4D-QSAR and CoMFA formalisms was performed using a set of forty-seven glycogen phosphorylase b inhibitors (data set 1) and a set of forty-four MAP p38 kinase inhibitors (data set 2). The QSAR models for both data sets were built using the ordered predictor selection (OPS) algorithm for variable selection. Model validation was carried out applying y-randomization and leave-N-out cross-validation in addition to the external validation. PLS models for data sets 1 and 2 provided the following statistics: q² = 0.72, r² = 0.81 for 12 selected variables and 2 latent variables, and q² = 0.82, r² = 0.90 for 10 selected variables and 5 latent variables, respectively. Visualization of the descriptors in 3D space was successfully interpreted from the chemical point of view, supporting the applicability of this new approach in rational drug design.

  8. Determination of the dried product resistance variability and its influence on the product temperature in pharmaceutical freeze-drying.

    PubMed

    Scutellà, Bernadette; Trelea, Ioan Cristian; Bourlès, Erwan; Fonseca, Fernanda; Passot, Stephanie

    2018-07-01

    During the primary drying step of the freeze-drying process, mass transfer resistance strongly affects the product temperature, and consequently the final product quality. The main objective of this study was to evaluate the variability of the mass transfer resistance resulting from the dried product layer (Rp) in a manufacturing batch of vials, and its potential effect on the product temperature, from data obtained in a pilot scale freeze-dryer. Sublimation experiments were run at -25 °C and 10 Pa using two different freezing protocols: with spontaneous or controlled ice nucleation. Five repetitions of each condition were performed. Global (pressure rise test) and local (gravimetric) methods were applied as complementary approaches to estimate Rp. The global method made it possible to assess the variability of the evolution of Rp with dried layer thickness between experiments, whereas the local method captured Rp variability at a fixed time within the vial batch. A product temperature variability of approximately ±4.4 °C was defined for a product dried layer thickness of 5 mm. The present approach can be used to estimate the risk of process failure due to mass transfer variability when designing a freeze-drying cycle. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Development and validation of a building design waste reduction model.

    PubMed

    Llatas, C; Osmani, M

    2016-10-01

    Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied leading to several alternative low waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Aerodynamic design on high-speed trains

    NASA Astrophysics Data System (ADS)

    Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li

    2016-04-01

    Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes conception design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.

  11. Trust in automation: integrating empirical evidence on factors that influence trust.

    PubMed

    Hoff, Kevin Anthony; Bashir, Masooda

    2015-05-01

    We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge. Much of the existing research on factors that guide human-automation interaction is centered around trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of different automated systems in diverse experimental paradigms to identify factors that impact operators' trust. We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal. Additionally, a relationship between trust (or a trust-related behavior) and another variable had to be measured. Altogether, 101 papers containing 127 eligible studies were included in the review. Our analysis revealed three layers of variability in human-automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust. © 2014, Human Factors and Ergonomics Society.

  12. Animal escapology I: theoretical issues and emerging trends in escape trajectories

    PubMed Central

    Domenici, Paolo; Blagburn, Jonathan M.; Bacon, Jonathan P.

    2011-01-01

    Escape responses are used by many animal species as their main defence against predator attacks. Escape success is determined by a number of variables; among the most important are the directionality (the percentage of responses directed away from the threat) and the escape trajectories (ETs) measured relative to the threat. Although logic would suggest that animals should always turn away from a predator, work on various species shows that these away responses occur only approximately 50–90% of the time. A small proportion of towards responses may introduce some unpredictability and may be an adaptive feature of the escape system. Similar issues apply to ETs. Theoretically, an optimal ET can be modelled on the geometry of predator–prey encounters. However, unpredictability (and hence high variability) in trajectories may be necessary for preventing predators from learning a simple escape pattern. This review discusses the emerging trends in escape trajectories, as well as the modulating key factors, such as the surroundings and body design. The main ET patterns identified are: (1) high ET variability within a limited angular sector (mainly 90–180 deg away from the threat; this variability is in some cases based on multiple peaks of ETs), (2) ETs that allow sensory tracking of the threat and (3) ETs towards a shelter. These characteristic features are observed across various taxa and, therefore, their expression may be mainly related to taxon-independent animal design features and to the environmental context in which prey live – for example whether the immediate surroundings of the prey provide potential refuges. PMID:21753039

  13. Multidisciplinary optimization of a controlled space structure using 150 design variables

    NASA Technical Reports Server (NTRS)

    James, Benjamin B.

    1993-01-01

    A controls-structures interaction design method is presented. The method coordinates standard finite-element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structure and control system of a spacecraft. Global sensitivity equations are used to account for coupling between the disciplines. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Design problems using 15, 63, and 150 design variables to optimize truss member sizes and feedback gain values are solved and the results are presented. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporation of the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables.

  14. Selection of key ambient particulate variables for epidemiological studies - applying cluster and heatmap analyses as tools for data reduction.

    PubMed

    Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef

    2012-10-01

    The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses. 
Copyright © 2012 Elsevier B.V. All rights reserved.
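    A stripped-down version of this variable-reduction step can be sketched with a Spearman correlation matrix and single-linkage grouping at a chosen threshold. This is a minimal sketch under illustrative assumptions: the 0.9 cut-off is arbitrary, and the rank computation assumes no tied values (the study itself used agglomerative hierarchical clustering and heatmap analysis).

```python
import numpy as np

def spearman_matrix(X):
    """Spearman correlation = Pearson correlation of the ranks.

    Double argsort yields ranks 0..n-1; this assumes no ties.
    """
    ranks = X.argsort(axis=0).argsort(axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def correlation_groups(X, threshold=0.9):
    """Group columns of X whose |Spearman rho| exceeds the threshold,
    via single-linkage union-find; one representative per group could
    then be kept as a key exposure variable."""
    rho = spearman_matrix(X)
    n = rho.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(rho[i, j]) >= threshold:
                parent[find(j)] = find(i)   # merge the two clusters
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy data: columns 0 and 1 are monotonically related, 2 and 3 are noise
rng = np.random.default_rng(0)
a = rng.normal(size=100)
X = np.column_stack([a, a ** 3, rng.normal(size=100), rng.normal(size=100)])
groups = correlation_groups(X, threshold=0.9)
```

    Picking one variable per group is what reduces 96 candidate exposures to a tractable set without discarding independent information.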

  15. Genetic variability in sunflower (Helianthus annuus L.) and in the Helianthus genus as assessed by retrotransposon-based molecular markers.

    PubMed

    Vukich, M; Schulman, A H; Giordani, T; Natali, L; Kalendar, R; Cavallini, A

    2009-10-01

    The inter-retrotransposon amplified polymorphism (IRAP) protocol was applied for the first time within the genus Helianthus to assess intraspecific variability based on retrotransposon sequences among 36 wild accessions and 26 cultivars of Helianthus annuus L., and interspecific variability among 39 species of Helianthus. Two groups of LTRs, one belonging to a Copia-like retroelement and the other to a putative retrotransposon of unknown nature (SURE), were isolated and sequenced, and primers were designed to obtain IRAP fingerprints. The number of polymorphic bands in H. annuus wild accessions is as high as in Helianthus species. If we assume that a polymorphic band can be related to a retrotransposon insertion, this result suggests that retrotransposon activity continued after Helianthus speciation. Calculation of similarity indices from the binary matrices (Shannon's and Jaccard's indices) shows that variability is reduced among domesticated H. annuus. In contrast, similarity indices among Helianthus species were as large as those observed among wild H. annuus accessions, probably related to their scattered geographic distribution. Principal component analysis of IRAP fingerprints allows the distinction between perennial and annual Helianthus species, especially where the SURE element is concerned.
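    The Jaccard similarity used on such binary band matrices has a one-line definition: the number of bands shared by two profiles divided by the number of bands present in either. A minimal sketch with fabricated example profiles:

```python
def jaccard(a, b):
    """Jaccard similarity between two binary band profiles
    (1 = band present at a given position, 0 = absent)."""
    both = sum(x and y for x, y in zip(a, b))      # bands shared
    either = sum(x or y for x, y in zip(a, b))     # bands in either profile
    return both / either if either else 1.0

# Two toy IRAP fingerprints scored at four band positions
sim = jaccard([1, 1, 0, 1], [1, 0, 0, 1])          # 2 shared / 3 present
```

    Averaging this index over all pairs within a group (wild accessions, cultivars, or species) gives the kind of within-group similarity the authors compare.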

  16. Predicting the potential distribution of the amphibian pathogen Batrachochytrium dendrobatidis in East and Southeast Asia.

    PubMed

    Moriguchi, Sachiko; Tominaga, Atsushi; Irwin, Kelly J; Freake, Michael J; Suzuki, Kazutaka; Goka, Koichi

    2015-04-08

    Batrachochytrium dendrobatidis (Bd) is the pathogen responsible for chytridiomycosis, a disease that is associated with a worldwide amphibian population decline. In this study, we predicted the potential distribution of Bd in East and Southeast Asia based on limited occurrence data. Our goal was to design an effective survey area where efforts to detect the pathogen can be focused. We generated ecological niche models using the maximum-entropy approach, with alleviation of multicollinearity and spatial autocorrelation. We applied eigenvector-based spatial filters as independent variables, in addition to environmental variables, to resolve spatial autocorrelation, and compared the model's accuracy and the degree of spatial autocorrelation with those of a model estimated using only environmental variables. We were able to identify areas of high suitability for Bd with accuracy. Among the environmental variables, factors related to temperature and precipitation were more effective in predicting the potential distribution of Bd than factors related to land use and cover type. Our study successfully predicted the potential distribution of Bd in East and Southeast Asia. This information should now be used to prioritize survey areas and generate a surveillance program to detect the pathogen.

  17. Design approaches to experimental mediation

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  18. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.
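
    The statistical core of the measurement-of-mediation design discussed above is the indirect effect a·b: the a-path from the manipulated variable X to the mediator M, and the b-path from M to the outcome Y controlling for X. A minimal sketch with ordinary least squares on synthetic data (all names and the simulated effect sizes are illustrative):

```python
# Indirect effect a*b from two OLS regressions:
#   a-path: M = i1 + a*X          (simple regression)
#   b-path: Y = i2 + c'*X + b*M   (two-predictor regression)
import random

def simple_ols(x, m):
    """Slope of m regressed on x."""
    n = len(x)
    mx, mm = sum(x) / n, sum(m) / n
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxm / sxx

def two_predictor_ols(x, m, y):
    """Slopes (c', b) of y on x and m, via centered normal equations."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    cx = [v - mx for v in x]
    cm = [v - mm for v in m]
    cy = [v - my for v in y]
    sxx = sum(v * v for v in cx)
    smm = sum(v * v for v in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm ** 2
    c_prime = (sxy * smm - smy * sxm) / det
    b = (sxx * smy - sxm * sxy) / det
    return c_prime, b

random.seed(1)
x = [random.choice([0.0, 1.0]) for _ in range(2000)]  # randomized treatment
m = [0.5 * xi + random.gauss(0, 0.1) for xi in x]     # true a = 0.5
y = [0.7 * mi + random.gauss(0, 0.1) for mi in m]     # true b = 0.7

a = simple_ols(x, m)
_, b = two_predictor_ols(x, m, y)
print(round(a * b, 2))  # estimated indirect effect, close to 0.35
```

    Note that only X is randomized here; the b-path remains correlational, which is exactly the inferential gap that the manipulation-of-mediator designs described in the abstract are meant to close.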

  19. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus demonstrating the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four independent design-variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
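
    Latin hypercube sampling stratifies each input dimension so that every one-dimensional stratum receives exactly one sample, giving better space coverage than plain random sampling for the same budget. A minimal pure-Python sketch over four unit-scaled design variables (the dimension count follows the abstract; the sample size and function name are illustrative):

```python
# Latin hypercube sample: n points in [0,1)^d, exactly one point
# per stratum of width 1/n in every dimension.
import random

def latin_hypercube(n, d, rng=random):
    cols = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                   # random stratum order
        # one uniform draw inside each stratum
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))                   # n points, d coords each

random.seed(42)
pts = latin_hypercube(10, 4)
# Every dimension has exactly one sample in each of the 10 strata.
for dim in range(4):
    occupied = sorted(int(p[dim] * 10) for p in pts)
    assert occupied == list(range(10))
print(len(pts), len(pts[0]))  # 10 4
```

    In a workflow like the one described, each sampled point would be rescaled to the physical variable ranges, evaluated in the CFD solver, and the resulting forces fed to the Kriging interpolant.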

  20. Optimization of nanoparticles for cardiovascular tissue engineering.

    PubMed

    Izadifar, Mohammad; Kelly, Michael E; Haddadi, Azita; Chen, Xiongbiao

    2015-06-12

    Nano-particulate delivery systems have increasingly been playing important roles in cardiovascular tissue engineering. Properties of nanoparticles (e.g. size, polydispersity, loading capacity, zeta potential, morphology) are essential to system functions. Notably, these characteristics are regulated by fabrication variables, but in a complicated manner. This raises a great need to optimize fabrication process variables to ensure the desired nanoparticle characteristics. This paper presents a comprehensive experimental study on this matter, along with a novel method, the so-called Geno-Neural approach, to analyze, predict and optimize fabrication variables for desired nanoparticle characteristics. Specifically, ovalbumin was used as a protein model of growth factors used in cardiovascular tissue regeneration, and six fabrication variables were examined with regard to their influence on the characteristics of nanoparticles made from high molecular weight poly(lactide-co-glycolide). A six-factor, five-level central composite rotatable design was applied to conduct the experiments, and based on the experimental results, a geno-neural model was developed to determine the optimum fabrication conditions. For desired particle sizes of 150, 200, 250 and 300 nm, respectively, the optimum conditions to achieve a low polydispersity index, a more negative zeta potential and a higher loading capacity were identified based on the developed geno-neural model and then evaluated experimentally. The experimental results revealed that the polymer and external aqueous phase concentrations and their interactions with other fabrication variables were the most significant variables affecting the size, polydispersity index, zeta potential, loading capacity and initial burst release of the nanoparticles, while electron microscopy images of the nanoparticles showed spherical geometries with no sign of large pores or cracks on their surfaces. The release study revealed that the onset of the third phase of release can be affected by the polymer concentration. Circular dichroism spectroscopy indicated that ovalbumin structural integrity is preserved during the encapsulation process. Findings from this study would greatly contribute to the design of high molecular weight poly(lactide-co-glycolide) nanoparticles for prolonged release patterns in cardiovascular engineering.
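
    A central composite rotatable design of the kind used here combines a 2^k factorial core, 2k axial points at distance α = (2^k)^(1/4) from the center (the rotatability condition, which also yields five levels per factor), and replicated center points. A minimal sketch in coded units (the factor count follows the abstract; a full 2^6 factorial core is shown, though practical six-factor designs often use a fractional core):

```python
# Central composite rotatable design (CCRD) in coded units.
from itertools import product

def ccrd(k, n_center=1):
    alpha = (2 ** k) ** 0.25                  # rotatability: (2^k)^(1/4)
    factorial = [list(p) for p in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):                        # 2k star points on the axes
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center, alpha

points, alpha = ccrd(6)
print(len(points))      # 64 factorial + 12 axial + 1 center = 77
print(round(alpha, 3))  # 2.828
```

    Each factor thus takes the five coded levels -α, -1, 0, +1, +α, which are then mapped back to the physical ranges of the six fabrication variables.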

Top