Science.gov

Sample records for experimental design techniques

  1. Experimental-design techniques in reliability-growth assessment

    NASA Astrophysics Data System (ADS)

    Benski, H. C.; Cabau, Emmanuel

Several recent statistical methods, including a Bayesian technique, have been proposed to detect the presence of significant effects in unreplicated factorials. It is recognized that these techniques were developed for s-normally distributed responses, and this may or may not be the case for times between failures. In fact, for homogeneous Poisson processes (HPPs), these times are exponentially distributed. Still, response data transformations can be applied to these times so that, at least approximately, these procedures can be used. It was therefore considered important to determine how well these different techniques performed in terms of power. The results of an extensive Monte Carlo simulation are presented in which the power of these techniques is analyzed. The actual details of a fractional factorial design applied in the context of reliability growth are described. Finally, power comparison results are presented.
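As an illustrative sketch (not the paper's actual simulation), the power study can be set up in Python: exponential times between failures are generated from a 2^3 unreplicated factorial with one hypothetical active factor, log-transformed toward normality, and significant effects are flagged with Lenth's pseudo-standard-error method, a standard technique for unreplicated factorials. The effect size, number of factors, and critical multiplier are assumptions for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def lenth_pse(effects):
    # Lenth's pseudo standard error for unreplicated factorials
    s0 = 1.5 * np.median(np.abs(effects))
    trimmed = np.abs(effects)[np.abs(effects) < 2.5 * s0]
    return 1.5 * np.median(trimmed) if trimmed.size else s0

def detection_power(delta, k=3, n_sim=2000, crit=2.3):
    # Full 2^k design in -1/+1 coding; factor 1 is the only active effect
    X = np.array(list(product([-1.0, 1.0], repeat=k)))
    hits = 0
    for _ in range(n_sim):
        mean_tbf = np.exp(delta * X[:, 0])     # exponential mean shifts with the factor
        y = np.log(rng.exponential(mean_tbf))  # transform toward normality
        effects = X.T @ y / 2 ** (k - 1)       # effect estimates via contrasts
        if abs(effects[0]) > crit * lenth_pse(effects):
            hits += 1
    return hits / n_sim

print(detection_power(0.0), detection_power(2.5))  # null detection rate vs. power
```

Sweeping `delta` traces out a power curve, which is the kind of comparison the abstract describes across competing detection techniques.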

  2. Nonmedical influences on medical decision making: an experimental technique using videotapes, factorial design, and survey sampling.

    PubMed Central

    Feldman, H A; McKinlay, J B; Potter, D A; Freund, K M; Burns, R B; Moskowitz, M A; Kasten, L E

    1997-01-01

    OBJECTIVE: To study nonmedical influences on the doctor-patient interaction. A technique using simulated patients and "real" doctors is described. DATA SOURCES: A random sample of physicians, stratified on such characteristics as demographics, specialty, or experience, and selected from commercial and professional listings. STUDY DESIGN: A medical appointment is depicted on videotape by professional actors. The patient's presenting complaint (e.g., chest pain) allows a range of valid interpretation. Several alternative versions are taped, featuring the same script with patient-actors of different age, sex, race, or other characteristics. Fractional factorial design is used to select a balanced subset of patient characteristics, reducing costs without biasing the outcome. DATA COLLECTION: Each physician is shown one version of the videotape appointment and is asked to describe how he or she would diagnose or treat such a patient. PRINCIPAL FINDINGS: Two studies using this technique have been completed to date, one involving chest pain and dyspnea and the other involving breast cancer. The factorial design provided sufficient power, despite limited sample size, to demonstrate with statistical significance various influences of the experimental and stratification variables, including the patient's gender and age and the physician's experience. Persistent recruitment produced a high response rate, minimizing selection bias and enhancing validity. CONCLUSION: These techniques permit us to determine, with a degree of control unattainable in observational studies, whether medical decisions as described by actual physicians and drawn from a demographic or professional group of interest, are influenced by a prescribed set of nonmedical factors. PMID:9240285
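The balanced-subset idea can be sketched as follows; the attribute names are hypothetical stand-ins, and the defining relation I = ABCD is one standard choice for a resolution IV half fraction, not necessarily the exact design the authors used.

```python
from itertools import product

# Hypothetical binary patient attributes (illustrative, not the study's labels)
factors = ["older", "female", "minority", "blue_collar"]

full = list(product([0, 1], repeat=len(factors)))
# Half fraction defined by I = ABCD: keep runs with an even number of
# high levels. This resolution IV subset halves the number of videotape
# versions while keeping every attribute, and every attribute pair, balanced.
half = [run for run in full if sum(run) % 2 == 0]

for run in half:
    print(dict(zip(factors, run)))
print(len(half), "tape versions instead of", len(full))
```

Because the fraction is balanced, main effects of the patient characteristics remain estimable without confounding them with each other, which is what lets the study cut taping costs "without biasing the outcome."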

  3. Synthesis of designed materials by laser-based direct metal deposition technique: Experimental and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Qi, Huan

Direct metal deposition (DMD), a laser-cladding-based solid freeform fabrication technique, is capable of depositing multiple materials at desired compositions, which makes this technique a flexible method to fabricate heterogeneous components or functionally graded structures. The inherently rapid cooling rate associated with the laser cladding process enables extended solid solubility in nonequilibrium phases, offering the possibility of tailoring new materials with advanced properties. This technical advantage opens the area of synthesizing a new class of materials designed by the topology optimization method, which have performance-based material properties. For better understanding of the fundamental phenomena occurring in multi-material laser cladding with coaxial powder injection, a self-consistent 3-D transient model was developed. Physical phenomena including laser-powder interaction, heat transfer, melting, solidification, mass addition, liquid metal flow, and species transport were modeled and solved with a controlled-volume finite difference method. A level-set method was used to track the evolution of the liquid free surface. The distribution of species concentration in the cladding layer was obtained using a nonequilibrium partition coefficient model. Simulation results were compared with experimental observations and found to be in reasonable agreement. Multi-phase material microstructures which have negative coefficients of thermal expansion were studied for their DMD manufacturability. The pixel-based topology-optimal designs are boundary-smoothed by Bezier functions to facilitate toolpath design. It is found that the inevitable diffusion interface between different material phases degrades the negative thermal expansion property of the whole microstructure. A new design method is proposed for DMD manufacturing. Experimental approaches include identification of laser beam characteristics during different laser-powder-substrate interaction conditions, an

  4. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  5. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Technical Reports Server (NTRS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-01-01

    Creation of an antenna system that could autonomously adapt contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing, offset, cassegrain antenna system was retrofit with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  6. Axisymmetric and non-axisymmetric exhaust jet induced effects on a V/STOL vehicle design. Part 3: Experimental technique

    NASA Technical Reports Server (NTRS)

    Schnell, W. C.

    1982-01-01

The jet induced effects of several exhaust nozzle configurations (axisymmetric and vectoring/modulating variants) on the aeropropulsive performance of a twin-engine V/STOL fighter design were determined. A 1/8-scale model was tested in an 11 ft transonic tunnel at static conditions and over a range of Mach numbers from 0.4 to 1.4. The experimental aspects of the static and wind-on programs are discussed. Jet effects test techniques in general, flow-through balance calibrations and tare force corrections, ASME nozzle thrust and mass flow calibrations, and test problems and solutions are emphasized.

  7. Modern Experimental Techniques in Turbine Engine Testing

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Bruckner, R. J.; Bencic, T. J.; Braunscheidel, E. P.

    1996-01-01

The paper describes the application of two modern experimental techniques, thin-film thermocouples and pressure sensitive paint, to measurement in turbine engine components. A growing trend of using computational codes in turbomachinery design and development requires experimental techniques to refocus from overall performance testing to acquisition of detailed data on flow and heat transfer physics to validate these codes for design applications. The discussed experimental techniques satisfy this shift in focus. Both techniques are nonintrusive in practical terms. The thin-film thermocouple technique improves the accuracy of surface temperature and heat transfer measurements. The pressure sensitive paint technique supplies areal surface pressure data rather than discrete point values only. The paper summarizes our experience with these techniques and suggests improvements to ease their application in future turbomachinery research and code verification.

  8. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. PMID:25541547
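As one concrete example of "how to size an experiment," here is a minimal two-group sample-size calculation under a normal approximation; this is an illustrative textbook formula, not taken from the course materials described above.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Two-group comparison, normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2 per group,
    where delta is the smallest difference worth detecting."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

# Detecting a 5-unit difference with sd = 6 at 80% power, 5% two-sided alpha
print(n_per_group(delta=5.0, sd=6.0))  # -> 23 per group
```

Halving the detectable difference quadruples the required group size, a point such courses often use to motivate efficient (e.g., blocked or factorial) designs.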

  9. Experimental techniques for multiphase flows

    NASA Astrophysics Data System (ADS)

    Powell, Robert L.

    2008-04-01

This review discusses experimental techniques that provide an accurate spatial and temporal measurement of the fields used to describe multiphase systems for a wide range of concentrations, velocities, and chemical constituents. Five methods are discussed: magnetic resonance imaging (MRI), ultrasonic pulsed Doppler velocimetry (UPDV), electrical impedance tomography (EIT), x-ray radiography, and neutron radiography. All of the techniques are capable of measuring the distribution of solids in suspensions. The most versatile technique is MRI, which can be used for spatially resolved measurements of concentration, velocity, chemical constituents, and diffusivity. The ability to measure concentration allows for the study of sedimentation and shear-induced migration. One-dimensional and two-dimensional velocity profiles have been measured with suspensions, emulsions, and a range of other complex liquids. Chemical shift MRI can discriminate between different constituents in an emulsion, where diffusivity measurements allow the particle size to be determined. UPDV is an alternative technique for velocity measurement. There are some limitations regarding the ability to map complex flow fields as a result of the attenuation of the ultrasonic wave in concentrated systems that have high viscosities or where multiple scattering effects may be present. When combined with measurements of the pressure drop, both MRI and UPDV can provide local values of viscosity in pipe flow. EIT is a low-cost means of measuring concentration profiles and has been used to study shear-induced migration in pipe flow. Both x-ray and neutron radiography are used to image structures in flowing suspensions, but both require highly specialized facilities.

  10. Experimental and Quasi-Experimental Design.

    ERIC Educational Resources Information Center

    Cottrell, Edward B.

    With an emphasis on the problems of control of extraneous variables and threats to internal and external validity, the arrangement or design of experiments is discussed. The purpose of experimentation in an educational institution, and the principles governing true experimentation (randomization, replication, and control) are presented, as are…

  11. Model for Vaccine Design by Prediction of B-Epitopes of IEDB Given Perturbations in Peptide Sequence, In Vivo Process, Experimental Techniques, and Source or Host Organisms

    PubMed Central

    González-Díaz, Humberto; Pérez-Montoto, Lázaro G.; Ubeira, Florencio M.

    2014-01-01

Perturbation methods add variation terms to a known experimental solution of one problem to approach a solution for a related problem without a known exact solution. One problem of this type in immunology is the prediction of the possible action of an epitope of one peptide after a perturbation or variation in the structure of a known peptide and/or other boundary conditions (host organism, biological process, and experimental assay). However, to the best of our knowledge, there are no reports of general-purpose perturbation models to solve this problem. In a recent work, we introduced a new quantitative structure-property relationship theory for the study of perturbations in complex biomolecular systems. In this work, we developed the first model able to classify more than 200,000 cases of perturbations with accuracy, sensitivity, and specificity >90% in both training and validation series. The perturbations include structural changes in >50,000 peptides determined in experimental assays with boundary conditions involving >500 source organisms, >50 host organisms, >10 biological processes, and >30 experimental techniques. The model may be useful for the prediction of new epitopes or the optimization of known peptides towards computational vaccine design. PMID:24741624
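A toy sketch of the general idea, perturbation features (descriptor of the new case minus descriptor of the known reference case) fed to a classifier, is shown below. The descriptors, the outcome rule, and the plain gradient-descent logistic fit are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy descriptors for 1,000 reference/new peptide pairs (illustrative)
n = 1000
d_ref = rng.normal(size=(n, 3))            # descriptor of the known case
delta = rng.normal(size=(n, 3))            # structural/assay perturbation
d_new = d_ref + delta
# Toy outcome: the label flips when a weighted perturbation is positive
y = (delta[:, 0] + 0.5 * delta[:, 1] > 0).astype(float)

X = d_new - d_ref                          # perturbation features
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                       # plain logistic-regression fit
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / n
    b -= 0.1 * g.mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(((p > 0.5) == (y > 0.5)).mean())
print(acc)
```

The point of the construction is that the classifier learns from the *change* relative to a known case, so predictions for new peptides lean on the large set of already-measured boundary conditions.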

  12. Design and experimental demonstration of low-power CMOS magnetic cell manipulation platform using charge recycling technique

    NASA Astrophysics Data System (ADS)

    Niitsu, Kiichi; Yoshida, Kohei; Nakazato, Kazuo

    2016-03-01

We present the world’s first charge-recycling-based low-power technique for complementary metal-oxide-semiconductor (CMOS) magnetic cell manipulation. CMOS magnetic cell manipulation using magnetic beads is a promising tool for on-chip biomedical-analysis applications such as drug screening because CMOS can integrate control electronics and electrochemical sensors. However, conventional CMOS cell manipulation requires considerable power consumption. In this work, by concatenating multiple unit circuits and recycling electric charge among them, power consumption is reduced by a factor equal to the number of concatenated unit circuits (1/N). To verify the effectiveness, a test chip was fabricated in a 0.6-µm CMOS process. The chip successfully manipulates magnetic microbeads while achieving a 49% power reduction (from 51 to 26.2 mW). Even considering the additional series resistance of the concatenated inductors, a nearly theoretical power reduction effect is confirmed.
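A first-order back-of-the-envelope model of the claimed scaling is sketched below; the series-resistance penalty term and its value are assumptions chosen to illustrate why the measured 26.2 mW sits slightly above the ideal 1/N figure, not the paper's actual circuit model.

```python
def drive_power_mw(p_single_mw, n_units, r_series_penalty=0.0):
    """Illustrative first-order model: charge recycling across n
    concatenated unit circuits divides the drive power by n, while the
    extra series resistance of the stacked inductors claws back a small
    fraction of that saving for each added unit."""
    return (p_single_mw / n_units) * (1.0 + r_series_penalty * (n_units - 1))

print(drive_power_mw(51.0, 2))          # ideal 1/N recycling: 25.5 mW
print(drive_power_mw(51.0, 2, 0.027))   # with a small series-resistance penalty
```

With a penalty fraction of about 2.7% per added unit, the model lands near the reported 26.2 mW, consistent with the abstract's "nearly theoretical" reduction.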

  13. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  14. Experimental design of a waste glass study

    SciTech Connect

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150°C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.
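One common statistical optimal-design technique of this kind is D-optimal point selection by candidate exchange. The sketch below uses random four-component mixtures as illustrative candidates (not the actual CVS glass region or its constraints) and greedily picks a subset that maximizes det(X'X), the D-optimality criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_det_info(X):
    # log-determinant of the information matrix X'X (D-optimality score)
    sign, ld = np.linalg.slogdet(X.T @ X)
    return ld if sign > 0 else -np.inf

def d_optimal(candidates, n_runs, max_passes=20):
    """Greedy Fedorov-style exchange: starting from a random subset,
    swap design points for candidates whenever det(X'X) increases."""
    idx = list(rng.choice(len(candidates), n_runs, replace=False))
    best = log_det_info(candidates[idx])
    for _ in range(max_passes):
        improved = False
        for i in range(n_runs):
            for j in range(len(candidates)):
                trial = idx.copy()
                trial[i] = j
                ld = log_det_info(candidates[trial])
                if ld > best + 1e-9:
                    idx, best, improved = trial, ld, True
        if not improved:
            break
    return np.array(idx), best

# Candidate glass compositions: random 4-component mixtures (illustrative)
cands = rng.dirichlet(np.ones(4), size=100)
idx, ld = d_optimal(cands, n_runs=8)
print(cands[idx])
```

In a real mixture setting, the candidate list would be generated on the boundary and interior of the constrained composition region, exactly the subregions the CVS phases set out to cover.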

  15. Experimental Techniques for Thermodynamic Measurements of Ceramics

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra

    1999-01-01

Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed. These are gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.

  16. New experimental techniques for solar cells

    NASA Technical Reports Server (NTRS)

    Lenk, R.

    1993-01-01

    Solar cell capacitance has special importance for an array controlled by shunting. Experimental measurements of solar cell capacitance in the past have shown disagreements of orders of magnitude. Correct measurement technique depends on maintaining the excitation voltage less than the thermal voltage. Two different experimental methods are shown to match theory well, and two effective capacitances are defined for quantifying the effect of the solar cell capacitance on the shunting system.

  17. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
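The core Bayesian design calculation, scoring a candidate resolution peakshape by its expected information gain, can be sketched on a toy grid model. The Gaussian peakshape, noise level, and grids below are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

centres = np.linspace(-5, 5, 201)            # grid prior over peak position
prior = np.full(centres.size, 1.0 / centres.size)
x = np.linspace(-5, 5, 41)                   # measurement positions

def expected_gain(width, sigma=0.3, n_sim=300):
    """Monte Carlo estimate of the expected Shannon information gain
    (prior entropy minus posterior entropy) from one noisy scan taken
    with a Gaussian resolution peakshape of the given width."""
    M = np.exp(-0.5 * ((x[None, :] - centres[:, None]) / width) ** 2)
    h_prior = -np.sum(prior * np.log(prior))
    gains = np.empty(n_sim)
    for s in range(n_sim):
        y = M[rng.integers(centres.size)] + rng.normal(0.0, sigma, x.size)
        # grid posterior under Gaussian noise
        ll = -0.5 * np.sum((y[None, :] - M) ** 2, axis=1) / sigma**2
        post = np.exp(ll - ll.max()) * prior
        post /= post.sum()
        gains[s] = h_prior + np.sum(post * np.log(post + 1e-300))
    return gains.mean()

# A sharper resolution peakshape should be more informative about the centre
print(expected_gain(0.5), expected_gain(2.0))
```

Ranking candidate peakshapes by this expected gain is the general utility-based recipe for optimal experimental design that the Bayesian framework provides.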

  18. Linear-dichroic infrared spectroscopy—Validation and experimental design of the new orientation technique of solid samples as suspension in nematic liquid crystal

    NASA Astrophysics Data System (ADS)

    Ivanova, B. B.; Simeonov, V. D.; Arnaudov, M. G.; Tsalev, D. L.

    2007-05-01

A validation of a newly developed orientation method for solid samples as a suspension in a nematic liquid crystal (NLC), applied in linear-dichroic infrared (IR-LD) spectroscopy, has been carried out using the model system DL-isoleucine (DL-isoleu). Accuracy, precision, and the influence of the liquid crystal medium on the peak positions and integral absorbances of guest molecules are presented, and the experimental conditions have been optimized. An experimental design has also been studied for quantitative evaluation of the impact of four input factors on the spectroscopic signal at five different frequencies indicating important specificities of the system: the number of scans, the rubbing-out of the KBr pellets, the amount of the studied compound included in the liquid crystal medium, and the ratio of Lorentzian to Gaussian peak functions in the curve-fitting procedure.

  19. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims. PMID:16161223

  20. Teaching experimental design to biologists.

    PubMed

    Zolman, J F

    1999-12-01

    The teaching of research design and data analysis to our graduate students has been a persistent problem. A course is described in which students, early in their graduate training, obtain extensive practice in designing experiments and interpreting data. Lecture-discussions on the essentials of biostatistics are given, and then these essentials are repeatedly reviewed by illustrating their applications and misapplications in numerous research design problems. Students critique these designs and prepare similar problems for peer evaluation. In most problems the treatments are confounded by extraneous variables, proper controls may be absent, or data analysis may be incorrect. For each problem, students must decide whether the researchers' conclusions are valid and, if not, must identify a fatal experimental flaw. Students learn that an experiment is a well-conceived plan for data collection, analysis, and interpretation. They enjoy the interactive evaluations of research designs and appreciate the repetitive review of common flaws in different experiments. They also benefit from their practice in scientific writing and in critically evaluating their peers' designs. PMID:10644236

  1. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

If the scientist needs to contact the animal facility after a study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail but to provide minimal information on the animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in variation. Factors causing bias or changes in variation are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, weekends versus working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles which may be affected by husbandry procedures. Other issues to be considered are the consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, the olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of an investigator's experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including its capabilities for monitoring the biological and physicochemical environment. PMID:25541541

  2. Heat capacity measurements - Progress in experimental techniques

    NASA Astrophysics Data System (ADS)

    Lakshmikumar, S. T.; Gopal, E. S. R.

    1981-11-01

    The heat capacity of a substance is related to the structure and constitution of the material and its measurement is a standard technique of physical investigation. In this review, the classical methods are first analyzed briefly and their recent extensions are summarized. The merits and demerits of these methods are pointed out. The newer techniques such as the a.c. method, the relaxation method, the pulse methods, the laser flash calorimetry and other methods developed to extend the heat capacity measurements to newer classes of materials and to extreme conditions of sample geometry, pressure and temperature are comprehensively reviewed. Examples of recent work and details of the experimental systems are provided for each method. The introduction of automation in control systems for the monitoring of the experiments and for data processing is also discussed. Two hundred and eight references and 18 figures are used to illustrate the various techniques.

  3. Single-crystal nickel-based superalloys developed by numerical multi-criteria optimization techniques: design based on thermodynamic calculations and experimental validation

    NASA Astrophysics Data System (ADS)

    Rettig, Ralf; Ritter, Nils C.; Helmer, Harald E.; Neumeier, Steffen; Singer, Robert F.

    2015-04-01

A method is proposed for finding optimum alloy compositions by systematic exploration of large composition spaces, considering a large number of property requirements and constraints. It is based on a numerical multi-criteria global optimization algorithm (a multistart solver using Sequential Quadratic Programming), which delivers the exact optimum considering all constraints. The CALPHAD method is used to provide the thermodynamic equilibrium properties, and the creep strength of the alloys is predicted with a qualitative numerical model considering the solid-solution strengthening of the matrix by the elements Re, Mo, and W and the optimum morphology and fraction of the γ′-phase. The calculated alloy properties required as input for the optimization algorithm are provided via very fast Kriging surrogate models. This greatly reduces the total calculation time of the optimization to the order of minutes on a personal computer. The capability of the multi-criteria optimization method was experimentally verified with two new single-crystal superalloys, whose compositions were designed to reduce the content of expensive elements. One of the newly designed alloys, termed ERBO/13, possesses a creep strength only 14 K below that of CMSX-4 in the high-temperature/low-stress regime, although it is a Re-free alloy.
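A minimal sketch of the surrogate-plus-multistart-SQP pattern follows, assuming a toy two-variable "property" in place of the CALPHAD and creep models, a simple RBF interpolator standing in for the Kriging surrogate, and SciPy's SLSQP playing the role of the SQP solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Stand-in for an expensive property evaluation (illustrative)
def expensive_property(X):
    return np.sum((X - 0.3) ** 2, axis=-1)

X_train = rng.random((40, 2))
y_train = expensive_property(X_train)

def rbf(A, B, length=0.3):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * length**2))

# Fit a simple Kriging-style RBF interpolator once: the fast surrogate
weights = np.linalg.solve(rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train)),
                          y_train)

def surrogate(x):
    return float(rbf(np.atleast_2d(x), X_train) @ weights)

# Multistart SQP on the cheap surrogate with a composition-style constraint
best = None
for start in rng.random((10, 2)):
    res = minimize(surrogate, start, method="SLSQP",
                   bounds=[(0, 1), (0, 1)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: 0.9 - x.sum()}])
    if res.success and (best is None or res.fun < best.fun):
        best = res
print(best.x, best.fun)   # best of the ten local SQP solves
```

Because every solver call hits the surrogate rather than the expensive model, the whole multistart search stays cheap, which is the mechanism behind the "minutes on a personal computer" claim.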

  4. Design for reliability of BEoL and 3-D TSV structures - A joint effort of FEA and innovative experimental techniques

    NASA Astrophysics Data System (ADS)

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-01

    Copper-TSVs for 3D-IC-integration generate novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria for combined loading including residual stress, interface delamination, cracking and fatigue issues. So, the thermal expansion mismatch between copper and silicon leads to a stress situation in silicon surrounding the TSVs which is influencing the electron mobility and as a result the transient behavior of transistors. Furthermore, pumping and protrusion of copper is a challenge for Back-end of Line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend highly on the temperature dependent elastic-plastic behavior of the TSV-copper and the residual stresses determined by the electro deposition chemistry and annealing conditions. That's why the authors pushed combined simulative/experimental approaches to extract the Young's-modulus, initial yield stress and hardening coefficients in copper-TSVs from nanoindentation experiments, as well as the temperature dependent initial yield stress and hardening coefficients from bow measurements due to electroplated thin copper films on silicon under thermal cycling conditions. A FIB trench technique combined with digital image correlation is furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used accordingly to investigate the pumping and protrusion of copper-TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.

  5. Orbit determination error analysis and comparison of station-keeping costs for Lissajous and halo-type libration point orbits and sensitivity analysis using experimental design techniques

    NASA Technical Reports Server (NTRS)

    Gordon, Steven C.

    1993-01-01

Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce different error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may differ. Also, for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (the output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
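The screening idea, running a two-level design over the error sources and regressing the control cost on them to get an approximate input/output equation, can be sketched as follows. The error-source names and the stand-in cost function are hypothetical, chosen only to show the mechanics.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

# Hypothetical coded error sources (illustrative, not the study's inputs)
names = ["srp_err", "mass_err", "track_noise"]

X = np.array(list(product([-1.0, 1.0], repeat=3)))  # full 2^3 factorial

def control_cost(run):
    # Stand-in for a station-keeping simulation: cost dominated by
    # the first two error sources, plus a little simulation noise
    srp, mass, noise = run
    return 10.0 + 3.0 * srp + 1.5 * mass + 0.1 * noise + rng.normal(0, 0.2)

y = np.array([control_cost(r) for r in X])
# Least-squares fit yields the approximate input/output equation
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, c in zip(["intercept"] + names, coef):
    print(f"{name}: {c:+.2f}")
```

Ranking the fitted coefficients by magnitude identifies which error sources most impact the control cost, and the fitted linear model is exactly the approximate functional relationship the abstract mentions.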

  6. Design for reliability of BEoL and 3-D TSV structures – A joint effort of FEA and innovative experimental techniques

    SciTech Connect

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-19

    Copper TSVs for 3D-IC integration generate novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria for combined loading, including residual stress, interface delamination, cracking and fatigue issues. In particular, the thermal expansion mismatch between copper and silicon leads to a stress field in the silicon surrounding the TSVs that influences electron mobility and, as a result, the transient behavior of transistors. Furthermore, pumping and protrusion of copper challenge the back-end-of-line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend strongly on the temperature-dependent elastic-plastic behavior of the TSV copper and on the residual stresses determined by the electrodeposition chemistry and annealing conditions. The authors therefore pursued combined simulation/experimental approaches to extract the Young's modulus, initial yield stress and hardening coefficients of copper TSVs from nanoindentation experiments, as well as the temperature-dependent initial yield stress and hardening coefficients from bow measurements of electroplated thin copper films on silicon under thermal cycling conditions. A FIB trench technique combined with digital image correlation is furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used to investigate the pumping and protrusion of copper TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.

  7. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  8. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been applied to predicting the failure probability of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.
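
The "first two statistical moments from the expansion" step has a simple closed form in the one-dimensional Hermite case. This is a hedged sketch, not the paper's G-ANOVA code: for y = sum_i c_i He_i(xi) with xi ~ N(0,1), orthogonality gives E[y] = c_0 and Var[y] = sum_{i>=1} c_i^2 * i!.

```python
from math import factorial

# Moments of a 1-D Hermite polynomial chaos expansion from its coefficients.
def pce_moments(coeffs):
    mean = coeffs[0]                       # E[He_0] = 1, higher modes average to 0
    var = sum(c * c * factorial(i)         # <He_i^2> = i! for probabilists' Hermite
              for i, c in enumerate(coeffs) if i > 0)
    return mean, var

mean, var = pce_moments([1.0, 0.5, 0.2])
print(mean, var)  # mean 1.0; variance 0.5^2*1! + 0.2^2*2! = 0.33
```

A failure probability can then be approximated from these two moments, e.g. through a second-moment reliability index, which is in the spirit of the abstract's use of the moments.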

  9. Design techniques for multivariable flight control systems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Techniques which address the multi-input, closely coupled nature of advanced flight control applications and digital implementation issues are described and illustrated through flight control examples. The techniques described seek to exploit the advantages of traditional techniques in treating conventional feedback control design specifications and the simplicity of modern approaches for multivariable control system design.

  10. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...
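
The blocking idea the record refers to is the classical randomized complete block design (RCBD): every treatment appears once in each block, with treatment order freshly randomized within each block. A minimal illustrative sketch:

```python
import random

# Generate an RCBD layout: one complete, independently randomized replicate
# of the treatments per block (blocks absorb spatial heterogeneity).
def rcbd(treatments, n_blocks, seed=0):
    rng = random.Random(seed)
    layout = []
    for _ in range(n_blocks):
        order = list(treatments)
        rng.shuffle(order)
        layout.append(order)
    return layout

layout = rcbd(["A", "B", "C", "D"], n_blocks=3)
for block in layout:
    print(block)
```

Each printed row is one block; the block factor is later included in the analysis so that among-block variation does not inflate the treatment error term.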

  11. Telecommunications Systems Design Techniques Handbook

    NASA Technical Reports Server (NTRS)

    Edelson, R. E. (Editor)

    1972-01-01

    The Deep Space Network (DSN) increasingly supports deep space missions sponsored and managed by organizations without long experience in DSN design and operation. The document is intended as a textbook for those DSN users inexperienced in the design and specification of a DSN-compatible spacecraft telecommunications system. For experienced DSN users, the document provides a reference source of telecommunication information which summarizes knowledge previously available only in a multitude of sources. Extensive references are quoted for those who wish to explore specific areas more deeply.

  12. Quasi experimental designs in pharmacist intervention research.

    PubMed

    Krass, Ines

    2016-06-01

    Background In the field of pharmacist intervention research it is often difficult to conform to the rigorous requirements of the "true experimental" models, especially the requirement of randomization. When randomization is not feasible, a practice-based researcher can choose from a range of "quasi-experimental designs", i.e., non-randomised and at times non-controlled. Objective The aim of this article was to provide an overview of quasi-experimental designs, discuss their strengths and weaknesses, and investigate their application in pharmacist intervention research over the previous decade. Results In the literature, quasi-experimental studies may be classified into five broad categories: quasi-experimental designs without control groups; quasi-experimental designs that use control groups with no pre-test; quasi-experimental designs that use control groups and pre-tests; interrupted time series; and stepped wedge designs. Quasi-experimental study design has consistently featured in the evolution of pharmacist intervention research. The most commonly applied of all quasi-experimental designs in the practice-based research literature are the one-group pre-post-test design and the non-equivalent control group design, i.e., an untreated control group with dependent pre-tests and post-tests; these have been used to test the impact of pharmacist interventions in general medications management as well as in specific disease states. Conclusion Quasi-experimental studies have a role to play as proof of concept, in the pilot phases of interventions, when testing different intervention components, especially in complex interventions. They serve to develop an understanding of possible intervention effects: while in isolation they yield weak evidence of clinical efficacy, taken collectively they help build a body of evidence in support of the value of pharmacist interventions across different practice settings and countries. However, when a traditional RCT is not feasible for

  13. Experimental Investigation of Centrifugal Compressor Stabilization Techniques

    NASA Technical Reports Server (NTRS)

    Skoch, Gary J.

    2003-01-01

    Results from a series of experiments to investigate techniques for extending the stable flow range of a centrifugal compressor are reported. The research was conducted in a high-speed centrifugal compressor at the NASA Glenn Research Center. The stabilizing effect of steadily flowing air-streams injected into the vaneless region of a vane-island diffuser through the shroud surface is described. Parametric variations of injection angle, injection flow rate, number of injectors, injector spacing, and injection versus bleed were investigated for a range of impeller speeds and tip clearances. Both the compressor discharge and an external source were used for the injection air supply. The stabilizing effect of flow obstructions created by tubes that were inserted into the diffuser vaneless space through the shroud was also investigated. Tube immersion into the vaneless space was varied in the flow obstruction experiments. Results from testing done at impeller design speed and tip clearance are presented. Surge margin improved by 1.7 points using injection air that was supplied from within the compressor. Externally supplied injection air was used to return the compressor to stable operation after being throttled into surge. The tubes, which were capped to prevent mass flux, provided 9.3 points of additional surge margin over the baseline surge margin of 11.7 points.

  14. GCFR shielding design and supporting experimental programs

    SciTech Connect

    Perkins, R.G.; Hamilton, C.J.; Bartine, D.

    1980-05-01

    The shielding for the conceptual design of the gas-cooled fast breeder reactor (GCFR) is described, and the component exposure design criteria which determine the shield design are presented. The experimental programs for validating the GCFR shielding design methods and data (which have been in existence since 1976) are also discussed.

  15. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  16. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G(exp 2)) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  17. Experimental Design for the Evaluation of Detection Techniques of Hidden Corrosion Beneath the Thermal Protective System of the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Kemmerer, Catherine C.; Jacoby, Joseph A.; Lomness, Janice K.; Hintze, Paul E.; Russell, Richard W.

    2007-01-01

    The detection of corrosion beneath the Space Shuttle Orbiter thermal protective system is traditionally accomplished by removing the Reusable Surface Insulation tiles and performing a visual inspection of the aluminum substrate and corrosion protection system. This process is time consuming and has the potential to damage high-cost tiles. To evaluate non-intrusive NDE methods, a Proof of Concept (PoC) experiment was designed and test panels were manufactured. The objective of the test plan was three-fold: establish the ability to detect corrosion hidden from view by tiles; determine the key factors affecting detectability; and roughly quantify the detection threshold. The plan consisted of artificially inducing dimensionally controlled corrosion spots in two panels and rebonding tile over the spots to model the thermal protective system of the orbiter. The corrosion spot diameters ranged from 0.100" to 0.600" and the depths ranged from 0.003" to 0.020". One panel consisted of a complete factorial array of corrosion spots with and without tile coverage. The second panel consisted of randomized factorial points, replicated and hidden by tile. Conventional methods such as ultrasonic, infrared, eddy current and microwave methods have shortcomings: ultrasonics and IR cannot sufficiently penetrate the tiles, while eddy current and microwaves have inadequate resolution. As such, the panels were interrogated using Backscatter Radiography and Terahertz Imaging. The terahertz system successfully detected artificially induced corrosion spots under orbiter tile, and functional testing is in work in preparation for implementation.

  18. AN EXPERIMENTALLY ROBUST TECHNIQUE FOR HALO MEASUREMENT

    SciTech Connect

    Amundson, J.; Pellico, W.; Spentzouris, P.; Sullivan, T.; Spentzouris, Linda; /IIT, Chicago

    2006-03-01

    We propose a model-independent quantity, L/G, to characterize non-Gaussian tails in beam profiles observed with the Fermilab Booster Ion Profile Monitor. This quantity can be considered a measure of beam halo in the Booster. We use beam dynamics and detector simulations to demonstrate that L/G is superior to kurtosis as an experimental measurement of beam halo when realistic beam shapes, detector effects and uncertainties are taken into account. We include the rationale and method of calculation for L/G in addition to results of the experimental studies in the Booster where we show that L/G is a useful halo discriminator.
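
The baseline statistic that L/G is compared against can be sketched directly; the L/G quantity itself is defined from the Booster profile analysis and is not reproduced here. Excess kurtosis of a measured profile is the conventional, but less robust, tail indicator:

```python
# Excess kurtosis of a beam profile: m4/m2^2 - 3, zero for a Gaussian,
# positive for heavy (halo-like) tails. Toy integer profile for illustration.
def excess_kurtosis(samples):
    n = len(samples)
    mu = sum(samples) / n
    m2 = sum((x - mu) ** 2 for x in samples) / n
    m4 = sum((x - mu) ** 4 for x in samples) / n
    return m4 / (m2 * m2) - 3.0

k = excess_kurtosis([-2, -1, -1, 0, 0, 0, 0, 1, 1, 2])
print(k)  # ~ -0.5: this symmetric toy sample is slightly light-tailed
```

The fourth-moment dependence visible in the formula is exactly what makes kurtosis sensitive to detector noise and shape distortions, which is the weakness the abstract's L/G measure is designed to avoid.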

  19. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
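
A drastically simplified version of the discrimination idea can be sketched as follows. This is a hypothetical toy with fixed parameters and a grid of candidate designs, far simpler than the paper's sampling-based optimization: choose the retention lag at which two rival forgetting models disagree most.

```python
import math

# Two rival retention models (illustrative parameter values).
def power_model(t, a=0.9, b=0.4):
    return a * (t + 1) ** (-b)

def exp_model(t, a=0.9, b=0.15):
    return a * math.exp(-b * t)

# Crude design criterion: maximize the predicted-recall discrepancy.
candidate_lags = [1, 2, 5, 10, 20, 40]
best_lag = max(candidate_lags, key=lambda t: abs(power_model(t) - exp_model(t)))
print(best_lag)  # 20
```

Even this crude criterion shows the paper's point: short lags, where both models predict similar recall, are nearly uninformative, while an intermediate lag separates the power and exponential predictions most cleanly.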

  20. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism. PMID:27090148

  1. Helioseismology in a bottle: an experimental technique

    NASA Astrophysics Data System (ADS)

    Triana, S. A.; Zimmerman, D. S.; Nataf, H.; Thorette, A.; Cabanes, S.; Roux, P.; Lekic, V.; Lathrop, D. P.

    2013-12-01

    Measurement of the differential rotation of the Sun's interior is one of the great achievements of helioseismology, providing important constraints for stellar physics. The technique relies on observing and analyzing rotationally-induced splittings of p-modes in the star. Here we demonstrate the first use of the technique in a laboratory setting. We apply it in a spherical cavity with a spinning central core (spherical Couette flow) to determine the azimuthal velocity of the air filling the cavity. We excite a number of acoustic resonances (analogous to p-modes in the Sun) using a speaker and record the response with an array of small microphones and/or accelerometers on the outer sphere. Many observed acoustic modes show rotationally-induced splittings which allow us to perform an inversion to determine the air's azimuthal velocity as a function of both radius and latitude. We validate the method by comparing the velocity field obtained through inversion against the velocity profile measured with a calibrated hot film anemometer. The technique has great potential for laboratory setups involving rotating fluids in axisymmetric cavities, and we hope it will be especially useful in liquid metals. [Figure: acoustic spectra showing rotationally induced splittings, recorded by a microphone near the equator (top) and at high latitude (bottom); color indicates the core's rotation rate in Hz.]
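
The inversion step can be sketched in miniature. This is a hedged toy, not the authors' code: each observed splitting is modeled as a kernel-weighted average of the azimuthal flow rate, s_i = sum_j K[i][j] * omega[j], and with as many well-resolved splittings as flow zones the small linear system inverts directly.

```python
# Invert a 2x2 splitting-kernel system by Cramer's rule to recover the
# rotation rate in two flow zones from two observed mode splittings.
def invert_rotation(K, s):
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    w0 = (s[0] * K[1][1] - K[0][1] * s[1]) / det
    w1 = (K[0][0] * s[1] - s[0] * K[1][0]) / det
    return [w0, w1]

# Toy sensitivity kernels (inner zone, outer zone) and synthetic splittings
# generated from a "true" rotation profile omega = [1.0, 2.0] Hz.
K = [[0.7, 0.3], [0.2, 0.8]]
splittings = [0.7 * 1.0 + 0.3 * 2.0, 0.2 * 1.0 + 0.8 * 2.0]
omega = invert_rotation(K, splittings)
print(omega)  # recovers ~[1.0, 2.0]
```

With many modes and noisy data, the real problem is overdetermined and is solved by regularized least squares rather than direct inversion, but the kernel-average structure is the same.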

  2. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  3. Experimental and numerical techniques to assess catalysis

    NASA Astrophysics Data System (ADS)

    Herdrich, G.; Fertig, M.; Petkow, D.; Steinbeck, A.; Fasoulas, S.

    2012-01-01

    Catalytic heating can be a significant portion of the thermal load experienced by a body during re-entry. Under the auspices of the NATO Research and Technology Organisation Applied Vehicle Technologies Panel Task Group AVT-136 an assessment of the current state-of-the-art in the experimental characterization and numerical simulation of catalysis on high-temperature material surfaces has been conducted. This paper gives an extraction of the final report for this effort, showing the facilities and capabilities worldwide to assess catalysis data. A corresponding summary for the modeling activities is referenced in this article.

  4. GAS CURTAIN EXPERIMENTAL TECHNIQUE AND ANALYSIS METHODOLOGIES

    SciTech Connect

    J. R. KAMM; ET AL

    2001-01-01

    The qualitative and quantitative relationship of numerical simulation to the physical phenomena being modeled is of paramount importance in computational physics. If the phenomena are dominated by irregular (i.e., nonsmooth or disordered) behavior, then pointwise comparisons cannot be made and statistical measures are required. The problem we consider is the gas curtain Richtmyer-Meshkov (RM) instability experiments of Rightley et al. (13), which exhibit complicated, disordered motion. We examine four spectral analysis methods for quantifying the experimental data and computed results: Fourier analysis, structure functions, fractal analysis, and continuous wavelet transforms. We investigate the applicability of these methods for quantifying the details of fluid mixing.
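
The first of the four measures, Fourier analysis, reduces to computing a power spectrum of an interface signal. A minimal sketch with a pure-Python DFT (a toy signal stands in for the mixing-layer data):

```python
import math
import cmath

# One-sided power spectrum of a real signal via a direct DFT.
def power_spectrum(xs):
    n = len(xs)
    spec = []
    for k in range(n // 2 + 1):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(xs))
        spec.append(abs(s) ** 2 / n)
    return spec

n = 16
signal = [math.sin(2 * math.pi * 3 * i / n) for i in range(n)]
spec = power_spectrum(signal)
peak = max(range(len(spec)), key=spec.__getitem__)
print(peak)  # 3: the spectrum peaks at the signal's wavenumber
```

For disordered interfaces the interest is in how power is distributed across wavenumbers (the spectral slope), rather than a single peak; structure functions and wavelets probe the same scale content in physical space.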

  5. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities. PMID:23949665
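
One of the selection criteria, picking samples "typical of the initially surveyed population", can be sketched simply. This is a hedged illustration, not the microPITA implementation: rank community profiles by distance to the survey centroid and keep the closest k.

```python
# Select the k most "representative" samples: those nearest the centroid
# of all surveyed community profiles (rows are relative-abundance vectors).
def most_representative(profiles, k):
    dim = len(profiles[0])
    centroid = [sum(p[j] for p in profiles) / len(profiles) for j in range(dim)]

    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, centroid)) ** 0.5

    return sorted(range(len(profiles)), key=lambda i: dist(profiles[i]))[:k]

profiles = [[0.5, 0.5], [0.9, 0.1], [0.45, 0.55], [0.1, 0.9]]
chosen = most_representative(profiles, 2)
print(chosen)  # [0, 2]: the two central community profiles
```

The abstract's other criteria invert or modify this objective, e.g. deviant-community selection keeps the samples farthest from the centroid, and diversity maximization greedily spreads the selected set apart.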

  6. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis techniques are then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
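
The verification step described in the abstract, comparing analytic sensitivities against finite differences, can be sketched as below; the objective is a stand-in, not the report's aerodynamic model.

```python
# Check an analytic design sensitivity against a central finite-difference
# estimate; agreement to well below the O(h^2) truncation error validates
# the analytic (or semi-analytical) derivative.
def f(x):
    return x ** 3 + 2.0 * x

def dfdx_analytic(x):
    return 3.0 * x ** 2 + 2.0

def dfdx_finite_difference(x, h=1.0e-5):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 1.3
agree = abs(dfdx_analytic(x) - dfdx_finite_difference(x)) < 1.0e-8
print(agree)  # True
```

In the report's setting each finite-difference column requires extra flow solves, which is precisely why the semi-analytical route, verified once this way, yields the quoted computational savings.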

  7. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, nowadays experimental design is not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469]. PMID:19786177
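
The tutorial's central point, that one-variable-at-a-time optimization misses interactions a factorial design captures, can be shown in a few lines. This toy is illustrative, not one of the paper's three examples:

```python
import itertools

# Estimate main effects and the interaction from a 2^2 factorial.
# responses is keyed by (a, b) with coded levels -1/+1.
def factorial_effects(responses):
    def eff(sign):
        return sum(sign(a, b) * responses[(a, b)] for a, b in responses) / 2
    return {
        "A": eff(lambda a, b: a),
        "B": eff(lambda a, b: b),
        "AB": eff(lambda a, b: a * b),  # invisible to one-variable-at-a-time
    }

# Toy yield surface with a genuine interaction: y = 10 + a + 2b + 3ab
responses = {(a, b): 10 + a + 2 * b + 3 * a * b
             for a, b in itertools.product((-1, 1), repeat=2)}
print(factorial_effects(responses))  # {'A': 2.0, 'B': 4.0, 'AB': 6.0}
```

Here the interaction effect is the largest of the three, so any one-variable-at-a-time search would land far from the true optimum, which is exactly the argument the tutorial makes.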

  8. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    Multiobjective programming techniques are important in the design of complex structural systems whose quality depends generally on a number of different and often conflicting objective functions which cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the maximum computational effort, but it yielded better optimum solutions with proper balance of the various objective functions in all the cases.
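
The first method named, the global criterion method, can be sketched in a hedged toy form (not the paper's formulation): normalize each objective by its individual optimum and pick the candidate minimizing the summed squared deviation from those ideals.

```python
# Global criterion method over a discrete candidate set: minimize
# sum_i ((f_i(x) - f_i*) / f_i*)^2, where f_i* is objective i's own optimum.
def global_criterion(candidates, objectives):
    ideals = [min(f(x) for x in candidates) for f in objectives]

    def score(x):
        return sum(((f(x) - fi) / fi) ** 2 for f, fi in zip(objectives, ideals))

    return min(candidates, key=score)

# Toy beam sizing: mass grows like x while tip deflection falls like 1/x^2,
# so the two objectives pull the design in opposite directions.
candidates = [1.0, 1.5, 2.0, 2.5, 3.0]
best = global_criterion(candidates, [lambda x: x, lambda x: 16.0 / x ** 2])
print(best)  # 2.5: the compromise between light and stiff
```

The other listed methods (utility function, goal programming, game theory, and so on) differ mainly in how this scalarization of the conflicting objectives is constructed.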

  9. New experimental techniques with the split Hopkinson pressure bar

    SciTech Connect

    Frantz, C.E.; Follansbee, P.S.; Wright, W.J.

    1984-01-01

    The split Hopkinson pressure bar, or Kolsky bar, has provided for many years a technique for performing compression tests at strain rates approaching 10(exp 4)/s. At these strain rates, the small dimensions possible in a compression test specimen give an advantage over a dynamic tensile test by allowing the stress within the specimen to equilibrate within the shortest possible time. The maximum strain rates possible with this technique are limited by stress wave propagation in the elastic pressure bars as well as in the deforming specimen. This subject is reviewed in this paper, and it is emphasized that a slowly rising excitation is preferred to one that rises steeply. Experimental techniques for pulse shaping and a numerical procedure for correcting the raw data for wave dispersion in the pressure bars are presented. For tests at elevated temperature a bar mover apparatus has been developed which effectively brings the cold pressure bars into contact with the specimen, which is heated with a specially designed furnace, shortly before the pressure wave arrives. This procedure has been used successfully in tests at temperatures as high as 1000 C.
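
The data reduction behind such tests follows the standard one-wave Kolsky-bar relations; the sketch below uses textbook formulas with illustrative numbers that are not from the report.

```python
# One-wave Kolsky-bar reductions: specimen stress from the transmitted
# strain pulse, specimen strain rate from the reflected strain pulse.
def specimen_stress(E_bar, A_bar, A_spec, eps_transmitted):
    # sigma_s(t) = E * (A_bar / A_spec) * eps_t(t)
    return E_bar * (A_bar / A_spec) * eps_transmitted

def specimen_strain_rate(c0, L_spec, eps_reflected):
    # d(eps_s)/dt = -2 * c0 / L_spec * eps_r(t)
    return -2.0 * c0 / L_spec * eps_reflected

# Illustrative steel bars: E = 200 GPa, wave speed c0 = 5000 m/s,
# 2:1 bar-to-specimen area ratio, 5 mm specimen gauge length.
sigma = specimen_stress(200e9, 1.0, 0.5, 1.0e-3)     # Pa
rate = specimen_strain_rate(5000.0, 0.005, -1.0e-3)  # 1/s
print(sigma, rate)
```

These relations show directly why the specimen is kept short: the strain rate scales as c0/L_spec, and a short gauge length also shortens the time needed for stress equilibration, as the abstract notes.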

  10. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  11. FPGAs in Space Environment and Design Techniques

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Day, John H. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of Field Programmable Gate Arrays (FPGA) in the space environment and design techniques. Details are given on the effects of the space radiation environment, total radiation dose, single event upset, single event latchup, single event transient, antifuse technology and gate rupture, proton upsets and sensitivity, and loss of functionality.

  12. Austin chalk stimulation techniques and design

    SciTech Connect

    Parker, C.D.; Weber, D.; Garza, D.; Swaner, S.

    1982-01-01

    This study presents the completion and stimulation design techniques being used in the Austin Chalk Formation in the Giddings field and Gonzales County, Texas. As background, a history of the Giddings field and the development of the Austin Chalk is discussed. The main purpose is to consider the factors affecting fracture treatment design, including fracture height, pump rates, types of fracturing fluids, proppant concentrations, and leak-off controls, so as to ensure effective and successful stimulation treatments. Possible alternative design considerations for future fracture treatments are also discussed.

  13. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. 
This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units working towards a common goal.
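The selection rule described above can be sketched in a few lines: score each candidate experiment by the Shannon entropy of its predicted-outcome distribution and choose the maximizer. The candidate labels and distributions below are hypothetical stand-ins, and this sketch illustrates only the selection step, not the thesis's nested entropy sampling algorithm.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_experiment(candidates, predictive_dist):
    """Pick the experiment whose predicted-outcome distribution has
    maximum entropy, i.e. the one expected to be most informative."""
    return max(candidates, key=lambda e: entropy(predictive_dist(e)))

# Toy example: three candidate measurement settings, each with a
# predicted outcome distribution from the current (hypothetical) model.
dists = {
    "x1": [0.98, 0.01, 0.01],  # outcome nearly certain: uninformative
    "x2": [0.50, 0.30, 0.20],  # moderately uncertain
    "x3": [0.34, 0.33, 0.33],  # nearly uniform: most informative
}
best = select_experiment(list(dists), lambda e: dists[e])
print(best)  # x3
```

In a real inference loop the predictive distributions would be recomputed from the posterior after each measurement, so the chosen experiment changes as data accumulate.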

  14. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  15. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  16. Evaluation of Advanced Retrieval Techniques in an Experimental Online Catalog.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1992-01-01

    Discusses subject searching problems in online library catalogs; explains advanced information retrieval (IR) techniques; and describes experiments conducted on a test collection database, CHESHIRE (California Hybrid Extended SMART for Hypertext and Information Retrieval Experimentation), which was created to evaluate IR techniques in online…

  17. A Novel Technique for Experimental Flow Visualization of Mechanical Valves.

    PubMed

    Huang Zhang, Pablo S; Dalal, Alex R; Kresh, J Yasha; Laub, Glenn W

    2016-01-01

    The geometry of the hinge region in mechanical heart valves has been postulated to play an important role in the development of thromboembolic events (TEs). This study describes a novel technique developed to visualize washout characteristics in mechanical valve hinge areas. A dairy-based colloidal suspension (DBCS) was used as a high-contrast tracer. It was introduced directly into the hinge-containing sections of two commercially available valves mounted in laser-milled fluidic channels and subsequently washed out at several flow rates. Time-lapse images were analyzed to determine the average washout rate and generate intensity topography maps of the DBCS clearance. As flow increased, washout improved and clearance times were shorter in all cases. Significantly different washout time constants were observed between the valves, with one clearing on average more than 40% faster (p < 0.01). The topographic maps revealed that each valve had a characteristic pattern of washout. The technique proved reproducible, with a maximum recorded standard error of the mean (SEM) of ±3.9. Although the experimental washout dynamics have yet to be correlated with in vivo visualization studies, the methodology may help identify key flow features influencing TEs. This visualization methodology can be a useful tool to help evaluate stagnation zones in new and existing heart valve hinge designs. PMID:26554553

  18. An Experimental Study for Effectiveness of Super-Learning Technique at Elementary Level in Pakistan

    ERIC Educational Resources Information Center

    Shafqat, Hussain; Muhammad, Sarwar; Imran, Yousaf; Naemullah; Inamullah

    2010-01-01

    The objective of the study was to examine the effectiveness of the super-learning technique of teaching at the elementary level. The study was conducted with 8th grade students at a public sector school. A pre-test and post-test control group design was used. Experimental and control groups were formed randomly; the experimental group (N = 62),…

  19. Rational Experimental Design for Electrical Resistivity Imaging

    NASA Astrophysics Data System (ADS)

    Mitchell, V.; Pidlisecky, A.; Knight, R.

    2008-12-01

    Over the past several decades advances in the acquisition and processing of electrical resistivity data, through multi-channel acquisition systems and new inversion algorithms, have greatly increased the value of these data to near-surface environmental and hydrological problems. There has, however, been relatively little advancement in the design of actual surveys. Data acquisition still typically involves using a small number of traditional arrays (e.g. Wenner, Schlumberger) despite a demonstrated improvement in data quality from the use of non-standard arrays. While optimized experimental design has been widely studied in applied mathematics and the physical and biological sciences, it is rarely implemented for non-linear problems, such as electrical resistivity imaging (ERI). We focus specifically on using ERI in the field for monitoring changes in the subsurface electrical resistivity structure. For this application we seek an experimental design method that can be used in the field to modify the data acquisition scheme (spatial and temporal sampling) based on prior knowledge of the site and/or knowledge gained during the imaging experiment. Some recent studies have investigated optimized design of electrical resistivity surveys by linearizing the problem or with computationally-intensive search algorithms. We propose a method for rational experimental design based on the concept of informed imaging, the use of prior information regarding subsurface properties and processes to develop problem-specific data acquisition and inversion schemes. Specifically, we use realistic subsurface resistivity models to aid in choosing source configurations that maximize the information content of our data. Our approach is based on first assessing the current density within a region of interest, in order to provide sufficient energy to the region of interest to overcome a noise threshold, and then evaluating the direction of current vectors, in order to maximize the

  20. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is no single set of SERS conditions that is universal. This means that experimental optimisation for an optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.
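The design-of-experiments alternative to one-factor-at-a-time optimisation can be illustrated with a two-level full factorial design, which enumerates every combination of factor levels so that main effects and interactions can both be estimated. The factor names and levels below are hypothetical.

```python
from itertools import product

def full_factorial(factors):
    """Enumerate all combinations of factor levels (a full factorial
    design). `factors` maps factor name -> list of levels to test."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]

# Hypothetical SERS factors, each at two levels.
design = full_factorial({
    "salt_mM": [10, 50],      # aggregating-salt concentration
    "pH": [3, 7],
    "colloid_ratio": [1, 4],  # colloid-to-analyte volume ratio
})
print(len(design))  # 8 runs cover every combination of the three factors
```

Fractional designs cut the run count further by sacrificing some interaction estimates, which is what makes multivariate optimisation cheaper than it first appears.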

  1. Simulation as an aid to experimental design

    SciTech Connect

    Frazer, J.W.; Balaban, D.J.; Wang, J.L.

    1983-05-01

    Simulators of chemical reactions can aid the scientist in the design of experiments, and they are of great value when studying enzyme kinetics. One such simulator is a numerical ordinary differential equation solver which uses interactive graphics to provide the user with the capability to simulate an extremely wide range of enzyme reaction conditions for many types of single-substrate reactions. The concentration vs. time profiles of any subset or all nine states of a complex reaction can be displayed with and without simulated instrumental noise. Thus the user can estimate the practicality of any proposed experiment given known instrumental noise. The experimenter can readily determine which state provides the most information related to the proposed kinetic parameters and mechanism. A general discussion of the program, including the nondimensionalization of the set of differential equations, is included. Finally, several simulation examples are shown and the results discussed.
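The sort of simulation described here can be sketched with a forward-Euler integration of the single-substrate Michaelis-Menten rate law; the parameters, step size, and Gaussian noise model are illustrative assumptions, not those of the original program.

```python
import random

def simulate_mm(s0, vmax, km, dt=0.01, t_end=5.0, noise_sd=0.0):
    """Integrate ds/dt = -vmax * s / (km + s) with forward Euler and
    record (time, concentration) pairs, optionally adding Gaussian
    'instrument' noise to the recorded values."""
    s, t, profile = s0, 0.0, []
    while t <= t_end:
        profile.append((t, s + random.gauss(0.0, noise_sd)))
        s += dt * (-vmax * s / (km + s))
        t += dt
    return profile

clean = simulate_mm(s0=1.0, vmax=1.0, km=0.5)
noisy = simulate_mm(s0=1.0, vmax=1.0, km=0.5, noise_sd=0.05)
print(clean[-1][1] < clean[0][1])  # True: substrate is consumed over time
```

Overlaying the `clean` and `noisy` profiles is exactly the comparison that lets a user judge whether a proposed experiment would survive realistic instrumental noise.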

  2. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  3. Parameter estimation and optimal experimental design.

    PubMed

    Banga, Julio R; Balsa-Canto, Eva

    2008-01-01

    Mathematical models are central in systems biology and provide new ways to understand the function of biological systems, helping in the generation of novel and testable hypotheses and supporting a rational framework for possible ways of intervention, as in, e.g., genetic engineering, drug development or the treatment of diseases. Since the amount and quality of experimental 'omics' data continue to increase rapidly, there is great need for proper model-building methods which can handle this complexity. In the present chapter we review two key steps of the model building process, namely parameter estimation (model calibration) and optimal experimental design. Parameter estimation aims to find the unknown parameters of the model which give the best fit to a set of experimental data. Optimal experimental design aims to devise the dynamic experiments which provide the maximum information content for subsequent non-linear model identification, estimation and/or discrimination. We place emphasis on the need for robust global optimization methods for proper solution of these problems, and we present a motivating example considering a cell signalling model. PMID:18793133
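Parameter estimation as described here reduces to minimising a fitting cost over the unknown parameters. As a hedged toy illustration, with an exponential-decay model and a brute-force grid standing in for the robust global optimisers the authors advocate:

```python
import math

def sse(k, data):
    """Sum of squared errors between the model y = exp(-k * t) and data."""
    return sum((y - math.exp(-k * t)) ** 2 for t, y in data)

def fit_rate(data, k_grid):
    """Crude global search over a parameter grid; real problems need
    proper global optimisers, but the principle is the same."""
    return min(k_grid, key=lambda k: sse(k, data))

# Synthetic noise-free 'measurements' generated with a true rate of 0.7.
true_k = 0.7
data = [(t / 4, math.exp(-true_k * (t / 4))) for t in range(21)]
k_hat = fit_rate(data, [i / 100 for i in range(1, 201)])
print(k_hat)  # 0.7
```

With noisy data the cost surface typically develops multiple local minima, which is exactly why the chapter stresses global rather than local optimisation.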

  4. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small-animal experimental models are frequently used, since many questions about the biological and biochemical effects of ionizing radiation remain unsolved. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV linac. The rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. Comparisons between pencil beam and Monte Carlo techniques were used to evaluate the accuracy of the dose calculated with a commercial planning system. Challenges in this murine model are discussed.

  5. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    This paper unifies some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response. The techniques have the potential to reduce the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.

  6. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density and strength, and how under- and over-firing affect ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay-based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.

  7. Aeroshell Design Techniques for Aerocapture Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Dyke, R. Eric; Hrinda, Glenn A.

    2004-01-01

    A major goal of NASA's In-Space Propulsion Program is to shorten trip times for scientific planetary missions. To meet this challenge, arrival speeds will increase, requiring significant braking for orbit insertion and thus increased deceleration propellant mass that may exceed launch lift capabilities. A technology called aerocapture has been developed to expand the mission potential of exploratory probes destined for planets with suitable atmospheres. Aerocapture inserts a probe into planetary orbit via a single pass through the atmosphere, using the probe's aeroshell drag to reduce velocity. The benefit of an aerocapture maneuver is a large reduction in propellant mass that may result in smaller, less costly missions and reduced mission cruise times. The methodology used to design rigid aerocapture aeroshells will be presented, with an emphasis on a new systems tool under development. Current methods for fast, efficient evaluations of structural systems for exploratory vehicles to planets and moons within our solar system have been under development within NASA, with limited success. Many of the systems tools attempted have applied structural mass estimation based on historical data and curve-fitting techniques that are difficult and cumbersome to apply to new vehicle concepts and missions. The resulting vehicle aeroshell mass may be incorrectly estimated or carry high margins to account for uncertainty. This new tool will reduce the guesswork previously found in conceptual aeroshell mass estimations.

  8. Pipelining and dataflow techniques for designing supercomputers

    SciTech Connect

    Su, S.P.

    1982-01-01

    Extensive research has been conducted over the last two decades in developing supercomputers to meet the demand for high computational performance. This thesis investigates some pipelining and dataflow techniques for designing supercomputers. In the pipelining area, new techniques are developed for scheduling vector instructions in a multi-pipeline supercomputer and for constructing VLSI matrix arithmetic pipelines for large-scale matrix computations. In the dataflow area, a new approach is proposed to dispatch high-level functions for dependence-driven computations. A parallel task scheduling model is proposed for multi-pipeline vector supercomputers. This model can be applied to explore maximal concurrency in vector supercomputers with a structure generalized from the CRAY-1, CYBER-205, and TI-ASC. The optimization problem of simultaneously scheduling multiple pipelines is proved to be NP-complete. Thus, heuristic scheduling algorithms for some restricted classes of vector task systems are developed. Nearly optimal performance can be achieved with the proposed parallel pipeline scheduling method. Simulation results on randomly generated task systems are presented to verify the analytical performance bounds. For dependence-driven computations, a dataflow controller is used to perform run-time scheduling of compound functions. The scheduling problem is shown to be NP-complete. Several heuristic scheduling strategies are proposed based on the time and resource demands of compound functions.
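Heuristics of the flavour this abstract mentions can be illustrated with longest-processing-time-first (LPT) list scheduling, a classic greedy heuristic for the NP-complete problem of assigning tasks to parallel pipelines to minimise the makespan. The task times below are hypothetical, and this is not the thesis's specific algorithm.

```python
import heapq

def lpt_schedule(task_times, n_pipelines):
    """Greedy LPT list scheduling: sort tasks by decreasing time and
    always give the next task to the currently least-loaded pipeline."""
    loads = [(0, p) for p in range(n_pipelines)]
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_pipelines)}
    for i, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
        load, p = heapq.heappop(loads)      # least-loaded pipeline
        assignment[p].append(i)
        heapq.heappush(loads, (load + t, p))
    return assignment, max(load for load, _ in loads)

assignment, makespan = lpt_schedule([7, 5, 4, 3, 2, 2], 2)
print(makespan)  # 12 (e.g. pipeline loads of 12 and 11)
```

LPT is provably within 4/3 of the optimal makespan for identical machines, the kind of analytical performance bound the thesis's results resemble.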

  9. Automatic Molecular Design using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    Molecular nanotechnology is the precise, three-dimensional control of materials and devices at the atomic scale. An important part of nanotechnology is the design of molecules for specific purposes. This paper describes early results using genetic software techniques to automatically design molecules under the control of a fitness function. The fitness function must be capable of determining which of two arbitrary molecules is better for a specific task. The software begins by generating a population of random molecules. The population is then evolved towards greater fitness by randomly combining parts of the better individuals to create new molecules. These new molecules then replace some of the worst molecules in the population. The unique aspect of our approach is that we apply genetic crossover to molecules represented by graphs, i.e., sets of atoms and the bonds that connect them. We present evidence suggesting that crossover alone, operating on graphs, can evolve any possible molecule given an appropriate fitness function and a population containing both rings and chains. Prior work evolved strings or trees that were subsequently processed to generate molecular graphs. In principle, genetic graph software should be able to evolve other graph representable systems such as circuits, transportation networks, metabolic pathways, computer networks, etc.
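The evolutionary loop described above, with a random initial population, fitness-based selection, and crossover producing replacements for the worst individuals, can be sketched as follows. As a loud caveat: the paper's crossover operates on molecular graphs, while this stand-in evolves bit strings with a toy fitness (count of 1-bits) purely to show the loop's shape.

```python
import random

random.seed(0)  # deterministic toy run

def crossover(a, b):
    """Single-point crossover: splice complementary parts of two parents.
    (The paper splices graphs of atoms and bonds; bit strings here.)"""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(fitness, length=20, pop_size=40, generations=200):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        # Replace the worst half with offspring of the ten best.
        for i in range(pop_size // 2, pop_size):
            pop[i] = crossover(random.choice(pop[:10]),
                               random.choice(pop[:10]))
    return max(pop, key=fitness)

best = evolve(fitness=sum)
print(sum(best))  # crossover alone drives fitness close to the maximum of 20
```

With no mutation operator, an allele absent from the selected parents can never reappear, which is one reason the paper stresses that the initial population must contain both rings and chains.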

  10. Nonlinear potential analysis techniques for supersonic-hypersonic aerodynamic design

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Clever, W. C.

    1984-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a conceptual configuration design level of effort. Second-order small-disturbance and full-potential theories were utilized to meet this objective. Numerical codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes.

  11. An Experimental Investigation of a Technique for Predicting Gains from a Special Reading Program.

    ERIC Educational Resources Information Center

    Gill, Patrick Ralston

    This study was an experimental investigation designed to ascertain the effectiveness of a technique for predicting student success in a special reading program. The disparity between a student's score on a reading test taken silently and his score on an equivalent form which was read orally by the investigator as the student read it silently was…

  12. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect gaining importance is the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like the genetic algorithm (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of experimental design technique is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
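The multi-objective step can be made concrete with a Pareto-dominance filter, the ranking device a multi-objective genetic algorithm uses to keep designs that trade off several objective functions. The survey scores below (target resolution and negated cost, both to be maximised) are hypothetical.

```python
def dominates(a, b):
    """a dominates b if it is at least as good in every objective and
    strictly better in at least one (higher values are better here)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(designs):
    """Keep the non-dominated designs: the candidates a multi-objective
    genetic algorithm carries forward to the next generation."""
    return [d for d in designs
            if not any(dominates(other, d)
                       for other in designs if other != d)]

# Hypothetical candidate surveys scored as (resolution, -cost).
designs = [(0.9, -5.0), (0.7, -2.0), (0.6, -4.0), (0.9, -6.0)]
print(pareto_front(designs))  # [(0.9, -5.0), (0.7, -2.0)]
```

The surviving front exposes the resolution-versus-cost trade-off directly, instead of collapsing it into a single weighted score.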

  13. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it.
The overall solution

  14. Computational procedures for optimal experimental design in biological systems.

    PubMed

    Balsa-Canto, E; Alonso, A A; Banga, J R

    2008-07-01

    Mathematical models of complex biological systems, such as metabolic or cell-signalling pathways, usually consist of sets of nonlinear ordinary differential equations which depend on several non-measurable parameters that can be hopefully estimated by fitting the model to experimental data. However, the success of this fitting is largely conditioned by the quantity and quality of data. Optimal experimental design (OED) aims to design the scheme of actuations and measurements which will result in data sets with the maximum amount and/or quality of information for the subsequent model calibration. New methods and computational procedures for OED in the context of biological systems are presented. The OED problem is formulated as a general dynamic optimisation problem where the time-dependent stimuli profiles, the location of sampling times, the duration of the experiments and the initial conditions are regarded as design variables. Its solution is approached using the control vector parameterisation method. Since the resultant nonlinear optimisation problem is in most of the cases non-convex, the use of a robust global nonlinear programming solver is proposed. For the sake of comparing among different experimental schemes, a Monte-Carlo-based identifiability analysis is then suggested. The applicability and advantages of the proposed techniques are illustrated by considering an example related to a cell-signalling pathway. PMID:18681746

  15. [Design and experimentation of marine optical buoy].

    PubMed

    Yang, Yue-Zhong; Sun, Zhao-Hua; Cao, Wen-Xi; Li, Cai; Zhao, Jun; Zhou, Wen; Lu, Gui-Xin; Ke, Tian-Cun; Guo, Chao-Ying

    2009-02-01

    The marine optical buoy is of important value for the calibration and validation of ocean color remote sensing, scientific observation, coastal environment monitoring, etc. A marine optical buoy system was designed which consists of a main buoy and a slave buoy. The system can measure the distribution of irradiance and radiance over the sea surface, in the layer near the sea surface, and in the euphotic zone synchronously, while other parameters are also acquired, such as the spectral absorption and scattering coefficients of the water column and the velocity and direction of the wind. The buoy was positioned by GPS. A low-power integrated PC104 computer was used as the control core to collect data automatically. Data and commands were transmitted in real time over CDMA/GPRS wireless networks or via maritime satellite. Coastal sea trials demonstrated that the buoy has small pitch and roll rates in high sea state conditions and thus can meet the needs of underwater radiometric measurements, that data collection and remote transmission are reliable, and that the automatic anti-biofouling devices can ensure the optical sensors work effectively for periods of several months. PMID:19445253

  16. Experimental investigation of slope flows via image analysis techniques

    NASA Astrophysics Data System (ADS)

    Moroni, Monica; Giorgilli, Marco; Cenedese, Antonio

    2014-02-01

    A vessel filled with distilled water is used to simulate the local circulation in the surroundings of an urban area situated in a mountain valley. The purpose of this study is to establish whether the experimental setup is suitable for investigating katabatic and anabatic flows and their interaction with an urban heat island. Flow fields are derived by means of Feature Tracking, and temperature fields are measured directly with thermocouples. The technique yields high spatio-temporal resolution, providing robust statistics for characterizing the fluid-dynamic field. General qualitative comparisons are made with expectations from analytical models. The experimental setup appears suitable for reproducing the phenomena occurring in the atmospheric boundary layer.

  17. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.

  18. Video/Computer Techniques for Static and Dynamic Experimental Mechanics

    NASA Astrophysics Data System (ADS)

    Maddux, Gene E.

    1987-09-01

    Recent advances in video camera and processing technology, coupled with the development of relatively inexpensive but powerful mini- and micro-computers are providing new capabilities for the experimentalist. This paper will present an overview of current areas of application and an insight into the selection of video/computer systems. The application of optical techniques for most experimental mechanics efforts involves the generation of fringe patterns that can be related to the response of an object to some loading condition. The data reduction process may be characterized as a search for fringe position information. These techniques include methods such as holographic interferometry, speckle metrology, moire, and photoelasticity. Although considerable effort has been expended in developing specialized techniques to convert these patterns to useful engineering data, there are particular advantages to the video approach. Other optical techniques are used which do not produce fringe patterns. Among these is a relatively new area of video application; that of determining the time-history of the response of a structure to dynamic excitation. In particular, these systems have been used to perform modal surveys of large, flexible space structures which make the use of conventional test instrumentation difficult, if not impossible. Video recordings of discrete targets distributed on a vibrating structure can be processed to obtain displacement, velocity, and acceleration data.

  19. An infrared technique for evaluating turbine airfoil cooling designs

    SciTech Connect

    Sweeney, P.C.; Rhodes, J.F.

    2000-01-01

    An experimental approach is used to evaluate turbine airfoil cooling designs for advanced gas turbine engine applications by incorporating double-wall film-cooled design features into large-scale flat plate specimens. An infrared (IR) imaging system is used to make detailed, two-dimensional steady-state measurements of flat plate surface temperature with spatial resolution on the order of 0.4 mm. The technique employs a cooled zinc selenide window transparent to infrared radiation and calibrates the IR temperature readings to reference thermocouples embedded in each specimen, yielding a surface temperature measurement accuracy of ±4 °C. With minimal thermocouple installation required, the flat plate/IR approach is cost effective, essentially nonintrusive, and produces abundant results quickly. Design concepts can proceed from art to part to data in a manner consistent with aggressive development schedules. The infrared technique is demonstrated here by considering the effect of film hole injection angle for a staggered array of film cooling holes integrated with a highly effective internal cooling pattern. Heated free stream air and room temperature cooling air are used to produce a nominal temperature ratio of 2 over a range of blowing ratios from 0.7 to 1.5. Results were obtained at hole angles of 90 and 30 deg for two different hole spacings and are presented in terms of overall cooling effectiveness.
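The two quantities reported in the abstract have standard definitions, which can be sketched as follows. This is not the authors' code, and the temperatures, densities, and velocities below are invented for illustration only:

```python
# Standard film-cooling definitions: overall cooling effectiveness compares
# the wall temperature to the free-stream and coolant temperatures; the
# blowing ratio compares coolant to free-stream mass flux. All inputs here
# are hypothetical.

def cooling_effectiveness(t_gas, t_wall, t_coolant):
    # 1.0 if the wall sits at coolant temperature, 0.0 at gas temperature
    return (t_gas - t_wall) / (t_gas - t_coolant)

def blowing_ratio(rho_c, u_c, rho_inf, u_inf):
    # Coolant-to-free-stream mass flux ratio
    return (rho_c * u_c) / (rho_inf * u_inf)

# A nominal temperature ratio of 2 (e.g., 600 K free stream, 300 K coolant):
phi = cooling_effectiveness(t_gas=600.0, t_wall=420.0, t_coolant=300.0)
m = blowing_ratio(rho_c=1.2, u_c=35.0, rho_inf=0.6, u_inf=40.0)
print(round(phi, 2), round(m, 2))  # 0.6 1.75
```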

  20. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
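A highly simplified version of the idea can be sketched in a few lines. The assumptions are loud ones: a made-up one-dimensional experiment space and a Bernoulli outcome model stand in for the paper's parameterized experiments and model set, so this is a caricature of nested entropy sampling, not the authors' implementation:

```python
import math
import random

# Sketch: the relevance of an experiment setting x is the Shannon entropy of
# its predicted (binary) outcome distribution. As in nested sampling, a
# population of experiment samples is kept and the worst member is
# repeatedly replaced by a new sample that beats a rising entropy threshold.

def outcome_entropy(x):
    # Predicted probability of a binary outcome at setting x; entropy peaks
    # where the prediction is most uncertain (p = 0.5, here at x = 0).
    p = 1.0 / (1.0 + math.exp(-x))
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def nested_entropy_sampling(n=20, iters=150, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-5.0, 5.0) for _ in range(n)]
    for _ in range(iters):
        worst = min(population, key=outcome_entropy)
        threshold = outcome_entropy(worst)  # threshold rises monotonically
        population.remove(worst)
        while True:  # rejection-sample a replacement above the threshold
            candidate = rng.uniform(-5.0, 5.0)
            if outcome_entropy(candidate) > threshold:
                population.append(candidate)
                break
    return max(population, key=outcome_entropy)

best = nested_entropy_sampling()  # converges toward the most informative x
```

The rejection step is where a practical implementation would do something smarter than brute-force resampling; the point of the sketch is only the rising-threshold mechanism.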

  1. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  2. Circular machine design techniques and tools

    SciTech Connect

    Servranckx, R.V.; Brown, K.L.

    1986-04-01

    Some of the basic optics principles involved in the design of circular accelerators such as Alternating Gradient Synchrotrons, Storage and Collision Rings, and Pulse Stretcher Rings are outlined. Typical problems facing a designer are defined, and the main references and computational tools are reviewed that are presently available. Two particular classes of problems that occur typically in accelerator design are listed - global value problems, which affect the control of parameters which are characteristic of the complete closed circular machine, and local value problems. Basic mathematical formulae are given that are considered useful for a first draft of a design. The basic optics building blocks that can be used to formulate an initial machine design are introduced, giving only the elementary properties and transfer matrices only in one transverse plane. Solutions are presented for some first-order and second-order design problems. (LEW)
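The elementary one-plane building blocks mentioned above can be illustrated with thin-lens transfer matrices. This is a textbook sketch rather than material from the report, and the drift length and focal length are invented; a periodic cell is stable when the magnitude of the trace of its one-turn matrix is less than 2:

```python
# One transverse plane, thin-lens approximation: 2x2 transfer matrices for a
# drift and a quadrupole, composed into a FODO cell. Stability criterion for
# a periodic cell: |trace(M)| < 2. Element parameters are illustrative.

def drift(L):
    return [[1.0, L], [0.0, 1.0]]

def thin_quad(f):
    # f > 0 focusing, f < 0 defocusing (in this plane)
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def fodo_cell(L, f):
    # Beam passes through: focusing quad, drift, defocusing quad, drift
    M = thin_quad(f)
    for element in (drift(L), thin_quad(-f), drift(L)):
        M = matmul(element, M)  # left-multiply each subsequent element
    return M

M = fodo_cell(L=2.0, f=3.0)
trace = M[0][0] + M[1][1]       # thin-lens FODO: trace = 2 - L**2 / f**2
stable = abs(trace) < 2.0
```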

  3. Experimental Technique for Studying Aerosols of Lyophilized Bacteria

    PubMed Central

    Cox, Christopher S.; Derr, John S.; Flurie, Eugene G.; Roderick, Roger C.

    1970-01-01

    An experimental technique is presented for studying aerosols generated from lyophilized bacteria by using Escherichia coli B, Bacillus subtilis var. niger, Enterobacter aerogenes, and Pasteurella tularensis. An aerosol generator capable of creating fine particle aerosols of small quantities (10 mg) of lyophilized powder under controlled conditions of exposure to the atmosphere is described. The physical properties of the aerosols are investigated as to the distribution of number of aerosol particles with particle size as well as to the distribution of number of bacteria with particle size. Biologically unstable vegetative cells were quantitated physically by using 14C and Europium chelate stain as tracers, whereas the stable heat-shocked B. subtilis spores were assayed biologically. The physical persistence of the lyophilized B. subtilis aerosol is investigated as a function of size of spore-containing particles. The experimental result that physical persistence of the aerosol in a closed aerosol chamber increases as particle size is decreased is satisfactorily explained on the basis of electrostatic, gravitational, inertial, and diffusion forces operating to remove particles from the particular aerosol system. The net effect of these various forces is to provide, after a short time interval in the system (about 2 min), an aerosol of fine particles with enhanced physical stability. The dependence of physical stability of the aerosol on the species of organism and the nature of the suspending medium for lyophilization is indicated. Also, limitations and general applicability of both the technique and results are discussed. PMID:4992657
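One of the removal mechanisms invoked above, gravitational settling, scales strongly with particle size. The following back-of-envelope sketch uses the Stokes terminal velocity; the particle density and diameters are assumed values, not data from the paper:

```python
# Stokes-regime gravitational settling: v = rho_p * d**2 * g / (18 * mu).
# Particle density, air viscosity, and diameters are illustrative only.

def stokes_settling_velocity(d_m, rho_p=1000.0, mu=1.8e-5, g=9.81):
    # d_m: particle diameter in metres; returns terminal velocity in m/s
    return rho_p * d_m ** 2 * g / (18.0 * mu)

v_1um = stokes_settling_velocity(1e-6)    # fine particle
v_10um = stokes_settling_velocity(10e-6)  # coarser particle
# A tenfold larger diameter settles 100x faster, consistent with the
# observation that finer particles persist longer in a closed chamber.
```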

  4. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Levine, William S.; Barlow, Jewel

    1993-01-01

    This report summarizes the work that was done on the project from 1 Apr. 1992 to 31 Mar. 1993. The main goal of this research is to develop a practical tool for rotorcraft control system design based on interactive optimization tools (CONSOL-OPTCAD) and classical rotorcraft design considerations (ADOCS). This approach enables the designer to combine engineering intuition and experience with parametric optimization. The combination should make it possible to produce a better design faster than would be possible using either pure optimization or pure intuition and experience. We emphasize that the goal of this project is not to develop an algorithm. It is to develop a tool. We want to keep the human designer in the design process to take advantage of his or her experience and creativity. The role of the computer is to perform the calculation necessary to improve and to display the performance of the nominal design. Briefly, during the first year we have connected CONSOL-OPTCAD, an existing software package for optimizing parameters with respect to multiple performance criteria, to a simplified nonlinear simulation of the UH-60 rotorcraft. We have also created mathematical approximations to the Mil-specs for rotorcraft handling qualities and input them into CONSOL-OPTCAD. Finally, we have developed the additional software necessary to use CONSOL-OPTCAD for the design of rotorcraft controllers.

  5. Comparison of deaerator performance using experimental and numerical techniques

    NASA Astrophysics Data System (ADS)

    Majji, Sri Harsha

    A deaerator is a component of the integrated drive generator (IDG) that separates air from oil. The integrated drive generator is the main power-generation unit used in aircraft to generate electric power, and it must be cooled for maximum efficiency. Mobil Jet Oil II is used in these IDGs as a lubricant and coolant, so a deaerator is used to remove entrained air from the oil using the centrifugal principle. Air entrainment may be caused by the operation of vacuum and high-pressure pumps. In this study, the performance of the 75/90 IDG generic and A320 classic deaerators was evaluated using both experimental and numerical techniques. Experimental data were collected from a deaerator test rig, and numerical data were obtained from CFD simulations (performed in ANSYS CFX). The experimental and numerical results were compared, as were the 75/90 generic and A320 classic deaerators. A parametric study of deaerator flow separation and inner geometry was also performed, together with a comparison of different multiphase models and different meshes applied to the deaerator numerical test methodology.

  6. Implementation of high throughput experimentation techniques for kinetic reaction testing.

    PubMed

    Nagy, Anton J

    2012-02-01

    Successful implementation of high-throughput experimentation (HTE) tools has resulted in their increased acceptance as essential tools in chemical, petrochemical, and polymer R&D laboratories. This article provides a number of concrete examples of HTE systems that have been designed and successfully implemented in studies focused on deriving reaction kinetic data. The implementation of HTE tools for performing kinetic studies of both catalytic and non-catalytic systems results in significantly faster acquisition of the high-quality kinetic modeling data required to quantitatively predict the behavior of complex, multistep reactions. PMID:21902639
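As a minimal illustration of the kind of kinetic modeling the abstract refers to (not taken from the article), a first-order rate constant can be recovered from concentration-time data via the linearized form ln C = ln C0 - k t. The data below are synthetic:

```python
import math

# Fit a first-order rate constant k by least-squares regression of
# ln C versus t; the slope is -k. Reactor data here are synthetic.

def fit_first_order(times, concentrations):
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times, ys)) / \
            sum((x - mx) ** 2 for x in times)
    return -slope  # k

# Synthetic batch-reactor data with k = 0.5 1/min and C0 = 2.0 mol/L
t = [0.0, 1.0, 2.0, 3.0, 4.0]
c = [2.0 * math.exp(-0.5 * ti) for ti in t]
k = fit_first_order(t, c)
```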

  7. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  8. Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory

    2007-01-01

    NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.

  9. CMOS-array design-automation techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.; Lombardt, T.

    1979-01-01

    This thirty-four-page report discusses the design of a 4,096-bit complementary metal oxide semiconductor (CMOS) read-only memory (ROM). The CMOS ROM is either mask- or laser-programmable. The report is divided into six sections: section one describes the background of ROM chips; section two presents the design goals for the chip; section three discusses chip implementation and chip statistics; conclusions and recommendations are given in sections four through six.

  10. [Early Diagnosis of Osteoarthritis: Clinical Reality and Promising Experimental Techniques].

    PubMed

    Arnscheidt, C; Meder, A; Rolauffs, B

    2016-06-01

    It is considered that the structural damage in early osteoarthritis (OA) is potentially reversible. It is therefore particularly important for orthopaedic and trauma surgery to develop strategies and technologies for diagnosing early OA processes. This review presents 3 case reports to illustrate the current clinical diagnostic procedure for OA. Experimental techniques with translational character are discussed in the context of the detection of early degenerative processes relevant to OA. Non-invasive imaging methods such as quantitative MRI, ultrasound, optical coherence tomography (OCT), scintigraphy and diffraction-enhanced synchrotron imaging (DEI), as well as biochemical methods and proteomics, are considered. Early detection of OA with minimally invasive techniques, such as arthroscopy and the combination of arthroscopic techniques with indentation, spectrometry, and multiphoton microscopy, is also reviewed. In addition, a brief summary of macroscopic and histologic scores is presented. Finally, the spatial organisation of joint surface chondrocytes as an image-based biomarker is used to illustrate an early OA detection strategy that focuses on early changes in tissue architecture, potentially prior to damage. In summary, multiple translational techniques are able to detect early OA processes, but we do not know whether they truly represent the initial events. Moreover, at this point it is difficult to judge the future clinical relevance of these procedures and to compare their efficacy, as no comparative studies have been performed. However, the expected gain in knowledge will hopefully help us to attain a more comprehensive understanding of early OA and to develop novel methods for its early diagnosis, therapy, and prevention. Overall, the clinical diagnosis of early OA remains one of the greatest challenges of our field. We still face uncharted territory. PMID:26894867

  11. EXPERIMENTAL DESIGN: STATISTICAL CONSIDERATIONS AND ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this book chapter, information on how field experiments in invertebrate pathology are designed and the data collected, analyzed, and interpreted is presented. The practical and statistical issues that need to be considered and the rationale and assumptions behind different designs or procedures ...

  12. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Yudilevitch, Gil; Levine, William S.

    1994-01-01

    Over the last two and a half years we have been demonstrating a new methodology for the design of rotorcraft flight control systems (FCS) to meet handling qualities requirements. This method is based on multicriterion optimization as implemented in the optimization package CONSOL-OPTCAD (C-O). This package has been developed at the Institute for Systems Research (ISR) at the University of Maryland at College Park. This design methodology has been applied to the design of a FCS for the UH-60A helicopter in hover having the ADOCS control structure. The controller parameters have been optimized to meet the ADS-33C specifications. Furthermore, using this approach, an optimal (minimum control energy) controller has been obtained and trade-off studies have been performed.

  13. Nonlinear potential analysis techniques for supersonic-hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    Clever, W. C.; Shankar, V.

    1983-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Second-order small disturbance and full potential theory were utilized to meet this objective. Numerical pilot codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher-order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one. Computational times on the order of a minute per case were achieved for practical aircraft arrangements.

  14. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  15. Advanced experimental techniques for transonic wind tunnels - Final lecture

    NASA Technical Reports Server (NTRS)

    Kilgore, Robert A.

    1987-01-01

    A philosophy of experimental techniques is presented, suggesting that in order to be successful, one should like what one does, have the right tools, stick to the job, avoid diversions, work hard, interact with people, be informed, keep it simple, be self sufficient, and strive for perfection. Sources of information, such as bibliographies, newsletters, technical reports, and technical contacts and meetings are recommended. It is pointed out that adaptive-wall test sections eliminate or reduce wall interference effects, and magnetic suspension and balance systems eliminate support-interference effects, while the problem of flow quality remains with all wind tunnels. It is predicted that in the future it will be possible to obtain wind tunnel results at the proper Reynolds number, and the effects of flow unsteadiness, wall interference, and support interference will be eliminated or greatly reduced.

  16. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  17. A Novel Experimental Technique to Simulate Pillar Burst in Laboratory

    NASA Astrophysics Data System (ADS)

    He, M. C.; Zhao, F.; Cai, M.; Du, S.

    2015-09-01

    Pillar burst is one type of rockburst that occurs in underground mines. Simulating the stress change and obtaining insight into the pillar burst phenomenon under laboratory conditions are essential for studying rock behavior during pillar burst in situ. To study the failure mechanism, a novel experimental technique was proposed and a series of tests were conducted on granite specimens using a true-triaxial strainburst test system. Acoustic emission (AE) sensors were used to monitor the rock fracturing process. The damage evolution process was investigated using techniques such as observation of macro and micro fracture characteristics, AE energy evolution, b-value analysis, and fractal dimension analysis of cracks on fragments. The obtained results indicate that stepped loading and unloading simulated the pillar burst phenomenon well. The process is divided into four deformation stages: initial stress state, unloading step I, unloading step II, and final burst. It is observed that AE energy increases sharply at the initial stress state, accumulates slowly at unloading steps I and II, and increases dramatically at peak stress. Meanwhile, the mean b values fluctuate around 3.50 for the first three deformation stages and then decrease to 2.86 at the final stage, indicating the generation of a large number of macro fractures. Before the test, the fractal dimension values are discrete and mainly vary between 1.10 and 1.25, whereas after failure the values concentrate around 1.25-1.35.
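The b values quoted above are typically obtained from the magnitude distribution of AE events. The abstract does not state which estimator the authors used; the sketch below shows the standard Aki maximum-likelihood form, b = log10(e) / (mean(M) - Mc), applied to a synthetic toy catalog:

```python
import math

# Aki maximum-likelihood b-value estimate for an event catalog.
# Mc is the magnitude of completeness; only events with M >= Mc are used.
# The catalog below is synthetic, constructed so that b = 1 exactly.

def b_value(magnitudes, mc):
    events = [m for m in magnitudes if m >= mc]
    mean_m = sum(events) / len(events)
    return math.log10(math.e) / (mean_m - mc)

mc = 1.0
# Toy catalog with mean(M) - Mc = log10(e), which yields b = 1
mags = [mc + math.log10(math.e) + d for d in (-0.1, 0.0, 0.1)]
b = b_value(mags, mc)
```

A falling b value, as reported at the final burst stage, indicates a growing proportion of large-magnitude (macro-fracture) events.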

  18. Experimental techniques for studying the structure of foams and froths.

    PubMed

    Pugh, R J

    2005-06-30

    Several techniques are described in this review to study the structure and the stability of froths and foams. Image analysis proved useful for detecting structure changes in 2-D foams and has enabled the drainage process and the gradients in bubble size distribution to be determined. However, studies on 3-D foams require more complex techniques such as Multiple-Light Scattering Methods, Microphones and Optical Tomography. Under dynamic foaming conditions, the Foam Scan Column enables the water content of foams to be determined by conductivity analysis. It is clear that the same factors which play a role in foam stability (film thickness, elasticity, etc.) also have a decisive influence on the stability of isolated froth or foam films. Therefore, the experimental thin film balance (developed by Bulgarian researchers) to study the thinning of microfilms formed by a concave liquid drop suspended in a short vertical capillary tube has proved useful. Direct measurement of the thickness of the aqueous microfilm is made by a micro-reflectance method and can give fundamental information on drainage and thin film stability. It is also important to consider the influence of mineral particles on the stability of the froth, and it has been shown that particles of well-defined size and hydrophobicity can be introduced into the thin film, enabling stabilization/destabilization mechanisms to be proposed. It has also been shown that the dynamic and static stability can be increased by a reduction in particle size and an increase in particle concentration. PMID:15913531

  19. Skin Microbiome Surveys Are Strongly Influenced by Experimental Design.

    PubMed

    Meisel, Jacquelyn S; Hannigan, Geoffrey D; Tyldsley, Amanda S; SanMiguel, Adam J; Hodkinson, Brendan P; Zheng, Qi; Grice, Elizabeth A

    2016-05-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e., gastrointestinal) and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource and cost intensive, provides evidence of a community's functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This study highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039
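Two metrics commonly used when comparing community profiles from different sequencing protocols are within-sample Shannon diversity and between-sample Bray-Curtis dissimilarity. The sketch below is not the authors' pipeline, and the taxon abundance vectors are invented:

```python
import math

# Shannon diversity summarizes how evenly taxa are represented in one
# sample; Bray-Curtis dissimilarity compares two samples (0 = identical
# composition, 1 = no shared taxa). Counts below are hypothetical.

def shannon_diversity(counts):
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def bray_curtis(a, b):
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(a) + sum(b)
    return num / den

wms = [50, 30, 15, 5]  # e.g., WMS-derived taxon counts (hypothetical)
v4  = [70, 25, 5, 0]   # e.g., 16S V4-derived counts missing one taxon
bc = bray_curtis(wms, v4)
print(round(bc, 3))  # 0.2
```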

  20. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Guassian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  1. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  2. Creative Conceptual Design Based on Evolutionary DNA Computing Technique

    NASA Astrophysics Data System (ADS)

    Liu, Xiyu; Liu, Hong; Zheng, Yangyang

    Creative conceptual design is an important area in computer-aided innovation. Typical design methodology includes exploration and optimization by evolutionary techniques such as EC and swarm intelligence. Although there are many proposed algorithms and applications for creative design by these techniques, the computing models are implemented mostly on the traditional von Neumann architecture. On the other hand, the possibility of using DNA as a computing technique has aroused wide interest in recent years, owing to its huge built-in parallelism and its ability to solve NP-complete problems. This new computing technique is performed by biological operations on DNA molecules rather than on chips. The purpose of this paper is to propose a simulated evolutionary DNA computing model and to integrate DNA computing with creative conceptual design. The proposed technique is potentially applicable to large-scale, highly parallel design problems.

  3. Experimental measurements of the thermal conductivity of ash deposits: Part 1. Measurement technique

    SciTech Connect

    A. L. Robinson; S. G. Buckley; N. Yang; L. L. Baxter

    2000-04-01

    This paper describes a technique developed to make in situ, time-resolved measurements of the effective thermal conductivity of ash deposits formed under conditions that closely replicate those found in the convective pass of a commercial boiler. Since ash deposit thermal conductivity is thought to be strongly dependent on deposit microstructure, the technique is designed to minimize the disturbance of the natural deposit microstructure. Traditional techniques for measuring deposit thermal conductivity generally do not preserve the sample microstructure. Experiments are described that demonstrate the technique, quantify experimental uncertainty, and determine the thermal conductivity of highly porous, unsintered deposits. The average measured conductivity of loose, unsintered deposits is 0.14 ± 0.03 W/(m K), approximately midway between rational theoretical limits for deposit thermal conductivity.
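The relation underlying any such effective-conductivity measurement is one-dimensional Fourier conduction, k_eff = q'' L / dT for steady heat flux q'' through a layer of thickness L with temperature drop dT. The paper's apparatus is more involved; the numbers below are illustrative only, chosen to land at the order of the reported unsintered value:

```python
# 1-D Fourier's law rearranged for effective conductivity.
# q_flux in W/m^2, thickness in m, delta_t in K -> k in W/(m K).
# Input values are invented for illustration.

def effective_conductivity(q_flux, thickness, delta_t):
    return q_flux * thickness / delta_t

k = effective_conductivity(q_flux=7000.0, thickness=0.002, delta_t=100.0)
print(k)  # 0.14
```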

  4. Verification of Experimental Techniques for Flow Surface Determination

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.; Lerch, Bradley A.; Ellis, John R.; Robinson, David N.

    1996-01-01

    The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory. However, at elevated temperatures, material response can be highly time-dependent, which is beyond the realm of classical plasticity. Viscoplastic theories have been developed for just such conditions. In viscoplastic theories, the flow law is given in terms of inelastic strain rate rather than the inelastic strain increment used in time-independent plasticity. Thus, surfaces of constant inelastic strain rate or flow surfaces are to viscoplastic theories what yield surfaces are to classical plasticity. The purpose of the work reported herein was to validate experimental procedures for determining flow surfaces at elevated temperatures. Since experimental procedures for determining yield surfaces in axial/torsional stress space are well established, they were employed -- except inelastic strain rates were used rather than total inelastic strains. In yield-surface determinations, the use of small-offset definitions of yield minimizes the change of material state and allows multiple loadings to be applied to a single specimen. The key to the experiments reported here was precise, decoupled measurement of axial and torsional strain. With this requirement in mind, the performance of a high-temperature multi-axial extensometer was evaluated by comparing its results with strain gauge results at room temperature. Both the extensometer and strain gauges gave nearly identical yield surfaces (both initial and subsequent) for type 316 stainless steel (316 SS). The extensometer also successfully determined flow surfaces for 316 SS at 650 C. Furthermore, to judge the applicability of the technique for composite materials, yield surfaces were determined for unidirectional tungsten/Kanthal (Fe-Cr-Al).
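The small-offset definition mentioned above can be sketched numerically: inelastic strain is total strain minus the elastic part (stress divided by modulus), and yield is declared where it first exceeds a small offset. The stress-strain data and offset value below are synthetic, not the authors' reduction code:

```python
# Small-offset yield detection from uniaxial stress-strain pairs.
# offset = 10 microstrain is a typical small-offset definition; the data
# and modulus below are synthetic.

def offset_yield(strains, stresses, modulus, offset=10e-6):
    # Return the first stress at which inelastic strain exceeds the offset
    for eps, sig in zip(strains, stresses):
        inelastic = eps - sig / modulus
        if inelastic >= offset:
            return sig
    return None

E = 200e9  # Pa, elastic modulus from the initial loading slope
# Synthetic data: purely elastic to 200 MPa, then growing inelastic strain
strains  = [0.0005, 0.0010, 0.001020, 0.001080]
stresses = [100e6, 200e6, 201e6, 204e6]
sigma_y = offset_yield(strains, stresses, E)
```

For flow surfaces, the same logic is applied to inelastic strain *rate* rather than accumulated inelastic strain, as the abstract explains.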

  5. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  6. Experimental design for improved ceramic processing, emphasizing the Taguchi Method

    SciTech Connect

    Weiser, M.W. . Mechanical Engineering Dept.); Fong, K.B. )

    1993-12-01

Ceramic processing often requires substantial experimentation to produce acceptable product quality and performance. This is a consequence of ceramic processes depending upon a multitude of factors, some of which can be controlled and others that are beyond the control of the manufacturer. Statistical design of experiments is a procedure that allows quick, economical, and accurate evaluation of processes and products that depend upon several variables. Designed experiments are sets of tests in which the variables are adjusted methodically. A well-designed experiment yields unambiguous results at minimal cost. A poorly designed experiment may reveal little information of value even with complex analysis, wasting valuable time and resources. This article reviews the most common experimental designs, including both nonstatistical designs and the much more powerful statistical experimental designs. The Taguchi Method developed by Genichi Taguchi is discussed in some detail. The Taguchi Method, based upon fractional factorial experiments, is a powerful tool for optimizing product and process performance.
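The fractional-factorial machinery behind such designs is easy to sketch. The Python fragment below is a hypothetical illustration, not the article's example: it builds a resolution-IV 2^(4-1) design using the generator D = ABC and estimates main effects from a simulated response.

```python
import numpy as np

# Hypothetical 2^(4-1) fractional factorial: three base factors A, B, C at
# two coded levels (-1/+1), with the fourth factor aliased via D = ABC.
base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
d = (base[:, 0] * base[:, 1] * base[:, 2]).reshape(-1, 1)
design = np.hstack([base, d])            # 8 runs instead of the full 16

# Simulated responses from an assumed model y = 10 + 2A - 3C + noise.
rng = np.random.default_rng(0)
y = 10 + 2 * design[:, 0] - 3 * design[:, 2] + rng.normal(0, 0.1, 8)

# Main effect of each factor: mean response at +1 minus mean at -1.
effects = {name: y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i, name in enumerate("ABCD")}
```

With only half the runs of the full factorial, the large effects of A and C are still recovered; the price is that each main effect is aliased with a three-factor interaction.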

  7. Vibration control of piezoelectric smart structures based on system identification technique: Numerical simulation and experimental study

    NASA Astrophysics Data System (ADS)

    Dong, Xing-Jian; Meng, Guang; Peng, Juan-Chun

    2006-11-01

The aim of this study is to investigate the efficiency of a system identification technique known as observer/Kalman filter identification (OKID) in the numerical simulation and experimental study of active vibration control of piezoelectric smart structures. Based on the structure responses determined by the finite element method, an explicit state space model of the equivalent linear system is developed by employing the OKID approach. The linear quadratic Gaussian (LQG) algorithm is employed for controller design. The control law is then incorporated into the ANSYS finite element model to perform closed-loop simulations, so that the control law performance can be evaluated in the context of a finite element environment. Furthermore, a complete active vibration control system comprising the cantilever plate, the piezoelectric actuators, the accelerometers and the digital signal processor (DSP) board is set up for the experimental investigation. A state space model characterizing the dynamics of the physical system is developed from experimental results using the OKID approach for the purpose of control law design. The controller is then implemented on a floating point TMS320VC33 DSP. Numerical examples employing the proposed simulation method, together with the experimental results obtained with the active vibration control system, demonstrate the validity and efficiency of the OKID method in the application of active vibration control of piezoelectric smart structures.
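The LQR core of such an LQG design can be miniaturized. The sketch below is not the paper's OKID/ANSYS pipeline; it assumes an already-identified discrete-time state-space model (the matrices are invented for illustration) and computes the steady-state state-feedback gain by iterating the discrete Riccati equation.

```python
import numpy as np

# Assumed identified plant matrices (hypothetical, not from the paper).
A = np.array([[1.0, 0.1], [-0.5, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting

# Iterate the discrete Riccati difference equation to steady state.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
    P = Q + A.T @ P @ (A - B @ K)

# The LQR closed loop A - B K should have all eigenvalues inside the
# unit circle (discrete-time stability).
rho_cl = float(max(abs(np.linalg.eigvals(A - B @ K))))
```

In the paper's setting the (A, B) pair would come from the OKID identification step rather than being written down by hand, and a Kalman filter would supply the state estimate for the LQG controller.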

  8. Experimental Design for Composite Face Transplantation.

    PubMed

    Park, Jihoon; Yim, Sangjun; Eun, Seok-Chan

    2016-06-01

Face allotransplantation represents a novel frontier in the reconstruction of complex human facial defects. To develop more refined surgical techniques and achieve good outcomes, it is first imperative to establish a suitable animal model. A composite facial allograft model in swine is appealing because the facial anatomy, including the facial nerve and vascular anatomy, is similar to that of humans. Two operative teams worked simultaneously, one assigned to harvest the graft from the donor and the other to prepare the recipient, in an effort to shorten operative time. The flap was harvested with the common carotid artery and external jugular vein and transferred to the recipient. After insetting the maxilla, mandible, muscles, and skin, anastomoses of the external jugular vein, external carotid artery, and facial nerve were performed. The total mean transplantation time was 7 hours, and most allografts survived without vascular problems. The authors conclude that this model is well qualified to serve as a standard transplantation training model and as a basis for future research work. PMID:27244198

  9. Experimental investigation of design parameters on dry powder inhaler performance.

    PubMed

    Ngoc, Nguyen Thi Quynh; Chang, Lusi; Jia, Xinli; Lau, Raymond

    2013-11-30

The study aims to investigate the impact of various design parameters of a dry powder inhaler on the turbulence intensities generated and the performance of the dry powder inhaler. The flow fields and turbulence intensities in the dry powder inhaler are measured using particle image velocimetry (PIV) techniques. In vitro aerosolization and deposition of a blend of budesonide and lactose are measured using an Andersen Cascade Impactor. Design parameters such as inhaler grid hole diameter, grid voidage and chamber length are considered. The experimental results reveal that the hole diameter on the grid has negligible impact on the turbulence intensity generated in the chamber. On the other hand, hole diameters smaller than a critical size can lead to performance degradation due to excessive particle-grid collisions. An increase in grid voidage can improve the inhaler performance but the effect diminishes at high grid voidage. An increase in the chamber length can enhance the turbulence intensity generated but also increases the powder adhesion on the inhaler wall. PMID:24055597

  10. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    NASA Astrophysics Data System (ADS)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing possibility is to merge the two worlds. We believe that a tabletop form factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, and input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  11. An investigation of radiometer design using digital processing techniques

    NASA Technical Reports Server (NTRS)

    Lawrence, R. W.

    1981-01-01

The use of digital signal processing techniques in Dicke-switching radiometer design was investigated. The general approach was to develop an analytical model of the existing analog radiometer and identify factors which adversely affect its performance. A digital processor was then proposed to verify the feasibility of using digital techniques to minimize these adverse effects and improve radiometer performance. Analysis and preliminary test results comparing the digital and analog processing approaches in radiometer design are presented.
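The central digital operation in a Dicke-switched radiometer is synchronous detection. The sketch below is an illustration of that idea only; the sample rate, switch rate, noise level, and signal size are invented, not values from the report.

```python
import numpy as np

# The receiver output alternates between the antenna and a reference load at
# the switch rate; demodulating with the switch waveform and averaging
# recovers the small antenna/reference difference buried in receiver noise.
rng = np.random.default_rng(6)
fs, f_sw, seconds = 10_000, 50, 2.0               # sample / switch rates (Hz)
t = np.arange(int(fs * seconds)) / fs
square = np.sign(np.sin(2 * np.pi * f_sw * t))    # +1 antenna, -1 reference

delta_T = 0.1                                     # small signal to recover
v = delta_T * (square > 0) + rng.normal(0, 0.5, t.size)  # noisy receiver output

# Digital synchronous demodulation: multiply by the switch waveform, average.
recovered = 2.0 * np.mean(v * square)
```

The averaging suppresses the zero-mean receiver noise while the switching rejects slow gain drifts, which is the classic motivation for the Dicke scheme.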

  12. Advanced airfoil design empirically based transonic aircraft drag buildup technique

    NASA Technical Reports Server (NTRS)

    Morrison, W. D., Jr.

    1976-01-01

To systematically investigate the potential of advanced airfoils in advanced preliminary design studies, empirical relationships were derived, based on available wind tunnel test data, through which total drag is determined while recognizing all major aircraft geometric variables. This technique recognizes a single design lift coefficient and Mach number for each aircraft. Using this technique, drag polars are derived for all Mach numbers up to M_design + 0.05 and for lift coefficients from C_L,design - 0.40 to C_L,design + 0.20.

  13. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    PubMed

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

In vitro ligand binding studies were simultaneously optimized using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-site binding system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
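In miniature, D-optimal selection of measurement times amounts to maximizing the determinant of the Fisher information matrix over candidate designs. The sketch below is not PopED; it assumes a hypothetical mono-exponential decay model y(t) = A exp(-k t) with invented parameter values.

```python
import numpy as np
from itertools import combinations

# Invented "true" parameters for the local design (amplitude, rate, noise SD).
A_true, k_true, sigma = 1.0, 0.5, 0.1

def fisher(times):
    # Jacobian of sensitivities dy/dA and dy/dk at each sampling time.
    J = np.array([[np.exp(-k_true * t), -A_true * t * np.exp(-k_true * t)]
                  for t in times])
    return J.T @ J / sigma**2

# Exhaustively score every pair of candidate times by det(Fisher information).
candidates = np.linspace(0.25, 10.0, 40)
best = max(combinations(candidates, 2),
           key=lambda ts: np.linalg.det(fisher(ts)))
```

The winning pair combines the earliest available sample (informative about the amplitude) with one near t1 + 1/k (informative about the rate), the classic D-optimal pattern for this model.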

  14. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  15. Evaluation with an Experimental Design: The Emergency School Assistance Program.

    ERIC Educational Resources Information Center

    Crain, Robert L.; York, Robert L.

    The Evaluation of the Emergency School Assistance Program (ESAP) for the 1971-72 school year is the first application of full-blown experimental design with randomized experimental and control cases in a federal evaluation of a large scale program. It is also one of the very few evaluations which has shown that federal programs can raise tested…

  16. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  17. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
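For a linear-Gaussian toy problem, a Bayesian performance function can be written in closed form. The sketch below is not the authors' SHM formulation; the sensor names and noise levels are invented, and designs are scored by the expected Kullback-Leibler information gain about a scalar damage parameter.

```python
import numpy as np

# Conjugate Gaussian model y = theta + noise, theta ~ N(0, prior_sd^2).
# In this case the posterior variance does not depend on the data, so the
# expected KL gain of a design has a closed form.
prior_sd = 1.0

def expected_gain(noise_sd):
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / noise_sd**2)
    return 0.5 * np.log(prior_sd**2 / post_var)   # expected KL gain (nats)

designs = {"cheap_sensor": 1.0, "precise_sensor": 0.2}  # hypothetical noise SDs
gains = {name: expected_gain(sd) for name, sd in designs.items()}
best_design = max(gains, key=gains.get)
```

For nonlinear SHM models the same performance function would be estimated by Monte Carlo rather than evaluated analytically, but the design comparison proceeds identically.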

  18. Experimental Techniques Verified for Determining Yield and Flow Surfaces

    NASA Technical Reports Server (NTRS)

    Lerch, Brad A.; Ellis, Rod; Lissenden, Cliff J.

    1998-01-01

Structural components in aircraft engines are subjected to multiaxial loads when in service. For such components, life prediction methodologies are dependent on the accuracy of the constitutive models that determine the elastic and inelastic portions of a loading cycle. A threshold surface (such as a yield surface) is customarily used to differentiate between reversible and irreversible flow. For elastoplastic materials, a yield surface can be used to delimit the elastic region in a given stress space. The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory, but at elevated temperatures, material response can be highly time dependent. Thus, viscoplastic theories have been developed to account for this time dependency. Since the key to many of these theories is experimental validation, the objective of this work (refs. 1 and 2) at the NASA Lewis Research Center was to verify that current laboratory techniques and equipment are sufficient to determine flow surfaces at elevated temperatures. By probing many times in the axial-torsional stress space, we could define the yield and flow surfaces. A small offset definition of yield (10 microstrain) was used to delineate the boundary between reversible and irreversible behavior so that the material state remained essentially unchanged and multiple probes could be done on the same specimen. The strain was measured with an off-the-shelf multiaxial extensometer that could measure the axial and torsional strains over a wide range of temperatures. The accuracy and resolution of this extensometer were verified by comparing its data with strain gauge data at room temperature. The extensometer was found to have sufficient resolution for these experiments. In addition, the amount of crosstalk (i.e., the accumulation of apparent strain in one direction when strain in the other direction is applied) was found to be negligible. Tubular specimens were induction heated to determine the flow

  19. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relative fast convergence and ease of implementation. The objectives of filter design include matching some desired frequency response while having minimum linear phase; hence, reducing the time response. The results demonstrate that the proposed problem solving approach blended with the use of the spiral optimization technique produced filters which fulfill the desired characteristics and are of practical use. PMID:24083108
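The spiral dynamics themselves fit in a few lines. This sketch minimizes a toy sphere function rather than a filter-design objective, and the rotation angle and contraction rate are generic choices, not the authors' settings.

```python
import numpy as np

# Spiral optimization: candidate points rotate about the current best point
# while contracting toward it, sweeping the neighborhood at shrinking radii.
rng = np.random.default_rng(2)

def sphere(x):                      # toy objective to minimize (optimum at 0)
    return float(x @ x)

theta, r = np.pi / 4, 0.95          # rotation angle, contraction rate (assumed)
R = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

pts = rng.uniform(-5.0, 5.0, (20, 2))   # initial search points
best = min(pts, key=sphere)
start_val = sphere(best)
for _ in range(200):
    pts = best + (pts - best) @ R.T     # spiral step about the best point
    cand = min(pts, key=sphere)
    if sphere(cand) < sphere(best):
        best = cand
```

For the filter problem the objective would instead score deviation from the desired frequency response plus a phase penalty, but the update rule is unchanged; the absence of gradients is what gives the method its robustness to local optima.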

  20. Extended mapping and characteristics techniques for inverse aerodynamic design

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Qian, Y. J.

    1991-01-01

    Some ideas for using hodograph theory, mapping techniques and methods of characteristics to formulate typical aerodynamic design boundary value problems are developed. The inverse method of characteristics is shown to be a fast tool for design of transonic flow elements as well as supersonic flows with given shock waves.

  1. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    Automated validation of flight-critical embedded systems is being done at ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operationally applying automated testing techniques has accentuated flight control system features that either help or hinder the application of these techniques. The paper also discusses flight control system features which foster the use of automated testing techniques.

  2. Designing modulators of monoamine transporters using virtual screening techniques

    PubMed Central

    Mortensen, Ole V.; Kortagere, Sandhya

    2015-01-01

The plasma-membrane monoamine transporters (MATs), including the serotonin (SERT), norepinephrine (NET) and dopamine (DAT) transporters, serve a pivotal role in limiting monoamine-mediated neurotransmission through the reuptake of their respective monoamine neurotransmitters. The transporters are the main target of clinically used psychostimulants and antidepressants. Despite the availability of several potent and selective MAT substrates and inhibitors, the continuing need for therapeutic drugs to treat brain disorders involving aberrant monoamine signaling provides a compelling reason to identify novel ways of targeting and modulating the MATs. Designing novel modulators of MAT function has been limited by the lack of three-dimensional structural information on the individual MATs. However, crystal structures of LeuT, a bacterial homolog of MATs, have been determined in a substrate-bound occluded state, a substrate-free outward-open state, and an apo inward-open state, and also with competitive and non-competitive inhibitors. In addition, several structures of the Drosophila DAT have been resolved. Together with computational modeling and experimental data gathered over the past decade, these structures have dramatically advanced our understanding of several aspects of SERT, NET, and DAT transporter function, including some of the molecular determinants of ligand interaction at orthosteric substrate and inhibitor binding pockets. In addition, progress has been made in understanding how allosteric modulation of MAT function can be achieved. Here we review the efforts made to date, through computational approaches employing structural models of MATs, to design small molecule modulators of the orthosteric and allosteric sites using virtual screening techniques. PMID:26483692

  3. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  4. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  5. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate concentration from each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
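Outside a spreadsheet, the same multiplexed fit can be reproduced with ordinary least squares. The numbers below are hypothetical; each row of the design matrix records which vinegar samples were pooled into one titration, and `numpy.linalg.lstsq` plays the role the article assigns to MS Excel's LINEST.

```python
import numpy as np

# Multiplex model: total titrant volume for each pooled titration is a linear
# combination of the per-sample volumes we want to estimate.
X = np.array([[1, 1, 0],          # which samples were pooled in each titration
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
true_vols = np.array([12.0, 8.5, 10.2])        # per-sample volumes (mL), invented
rng = np.random.default_rng(3)
y = X @ true_vols + rng.normal(0, 0.05, 4)     # measured total volumes

est, *_ = np.linalg.lstsq(X, y, rcond=None)    # LINEST-style least squares
```

With four titrations and three unknowns the system is overdetermined, so the fit also carries the error-analysis lesson the experiment is designed to teach.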

  6. Study/Experimental/Research Design: Much More Than Statistics

    PubMed Central

    Knight, Kenneth L.

    2010-01-01

    Abstract Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054

  7. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  8. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

A multidisciplinary design optimization procedure was developed that couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes). The procedure has been demonstrated on a specific high speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple objective function problem into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code that solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
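The K-S aggregation itself is compact enough to sketch. The two quadratic objectives below are toy stand-ins for the aerodynamic and sonic-boom objectives (the real ones come from a PNS flow solver), and the draw-down factor rho and weights are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rho = 50.0                       # draw-down factor: larger tracks max() closer
w = np.array([1.0, 1.0])         # designer-chosen weight factors

def objectives(x):
    return w * np.array([(x[0] - 1.0)**2 + x[1]**2,      # "aerodynamic" cost
                         x[0]**2 + (x[1] - 1.0)**2])     # "sonic boom" cost

def ks(x):
    # K-S envelope: fmax + (1/rho) * ln(sum(exp(rho * (f_i - fmax)))).
    f = objectives(x)
    fmax = f.max()
    return fmax + np.log(np.sum(np.exp(rho * (f - fmax)))) / rho

# Minimize the smooth unconstrained envelope with BFGS, as in the paper.
res = minimize(ks, x0=np.array([0.0, 0.0]), method="BFGS")
```

Subtracting fmax before exponentiating keeps the exponentials from overflowing; by symmetry this toy problem's compromise point is (0.5, 0.5), and raising a weight pulls the solution toward the emphasized objective.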

  9. Design and Experimental Study on Spinning Solid Rocket Motor

    NASA Astrophysics Data System (ADS)

    Xue, Heng; Jiang, Chunlan; Wang, Zaicheng

A study of a spinning solid rocket motor (SRM), used as the power plant of the twice-throwing structure of an aerial submunition, is introduced. This SRM, which has a tangential multi-nozzle structure, consists of a combustion chamber, propellant charge, 4 tangential nozzles, ignition device, etc. Grain design, structure design and prediction of interior ballistic performance are described, and the problems that must be considered in the design are analyzed comprehensively. Finally, in order to investigate the working performance of the SRM and to measure its pressure-time curve and spin rate, static and dynamic tests were conducted respectively. Calculated values and experimental data were then compared and analyzed. The results indicate that the designed motor operates normally and that its stable interior ballistic performance meets demands. The experimental results also provide guidance for the preliminary design of such SRMs.

  10. Decision-oriented Optimal Experimental Design and Data Collection

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2015-12-01

Classical optimal experimental design is a branch of statistics that seeks to construct ("design") a data collection effort ("experiment") that minimizes ("optimal") the uncertainty associated with some quantity of interest. In many real-world problems, we are interested in these quantities to help us make a decision. Minimizing the uncertainty associated with the quantity can help inform the decision, but a more holistic approach is possible in which the experiment is designed to maximize the information that it provides to the decision-making process. The difference is subtle, but it amounts to focusing on the end goal (the decision) rather than an intermediary (the quantity). We describe one approach to decision-oriented optimal experimental design that utilizes Bayesian-Information-Gap Decision Theory, which combines probabilistic and non-probabilistic methods for uncertainty quantification. In this approach, experimental designs that have a high probability of altering the decision are deemed worthwhile. On the other hand, experimental designs that have little or no chance of altering the decision need not be performed.
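The "probability of altering the decision" criterion can be illustrated with a conjugate Gaussian toy problem (all numbers invented; the authors' Bayesian-Information-Gap machinery is far richer): simulate the proposed measurement many times and count how often the post-data decision differs from the current one.

```python
import numpy as np

# Toy decision: "remediate if the contaminant mean exceeds 1.0". We score a
# proposed measurement by how often it would flip the current decision.
rng = np.random.default_rng(5)
prior_mu, prior_sd, noise_sd, threshold = 0.9, 0.3, 0.1, 1.0

current_decision = prior_mu > threshold       # currently: do not remediate

flips, trials = 0, 20_000
for _ in range(trials):
    theta = rng.normal(prior_mu, prior_sd)    # plausible true value from prior
    y = rng.normal(theta, noise_sd)           # simulated new measurement
    # Conjugate Gaussian posterior update of the mean estimate.
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / noise_sd**2)
    post_mu = post_var * (prior_mu / prior_sd**2 + y / noise_sd**2)
    flips += (post_mu > threshold) != current_decision
p_flip = flips / trials
```

A design with a large `p_flip` is worth performing under this criterion; one with `p_flip` near zero would leave the decision unchanged no matter the outcome.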

  11. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
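The exterior penalty mechanics can be shown on a two-variable toy problem (not the stiffened-cylinder model): minimize a smooth objective subject to one inequality constraint by adding a quadratic penalty on violations and tightening its weight.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize (x1-3)^2 + (x2-2)^2 subject to x1 + x2 <= 4.
# The unconstrained minimum (3, 2) is infeasible; the constrained optimum
# is its projection onto the boundary, (2.5, 1.5).
def penalized(x, mu):
    f = (x[0] - 3.0)**2 + (x[1] - 2.0)**2
    g = x[0] + x[1] - 4.0                 # constraint g <= 0
    return f + mu * max(g, 0.0)**2        # exterior quadratic penalty

x = np.array([0.1, 0.1])                  # initial guess
for mu in [1.0, 10.0, 100.0, 1000.0]:     # tighten the penalty gradually
    x = minimize(lambda z: penalized(z, mu), x, method="Nelder-Mead").x
```

Exterior penalty iterates approach the constrained optimum from the infeasible side, which is why the final answer sits just outside the constraint boundary; an interior penalty method would instead approach from strictly feasible points, the contrast the paper examines.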

  12. Combining Usability Techniques to Design Geovisualization Tools for Epidemiology

    PubMed Central

    Robinson, Anthony C.; Chen, Jin; Lengerich, Eugene J.; Meyer, Hans G.; MacEachren, Alan M.

    2009-01-01

    Designing usable geovisualization tools is an emerging problem in GIScience software development. We are often satisfied that a new method provides an innovative window on our data, but functionality alone is insufficient assurance that a tool is applicable to a problem in situ. As extensions of the static methods they evolved from, geovisualization tools are bound to enable new knowledge creation. We have yet to learn how to adapt techniques from interaction designers and usability experts toward our tools in order to maximize this ability. This is especially challenging because there is limited existing guidance for the design of usable geovisualization tools. Their design requires knowledge about the context of work within which they will be used, and should involve user input at all stages, as is the practice in any human-centered design effort. Toward that goal, we have employed a wide range of techniques in the design of ESTAT, an exploratory geovisualization toolkit for epidemiology. These techniques include; verbal protocol analysis, card-sorting, focus groups, and an in-depth case study. This paper reports the design process and evaluation results from our experience with the ESTAT toolkit. PMID:19960106

  13. Techniques for analyzing lens manufacturing data with optical design applications

    NASA Astrophysics Data System (ADS)

    Kaufman, Morris I.; Light, Brandon B.; Malone, Robert M.; Gregory, Michael K.; Frayer, Daniel K.

    2015-09-01

Optical designers assume a mathematically derived statistical distribution of the relevant design parameters for their Monte Carlo tolerancing simulations. However, there may be significant differences between the assumed distributions and the likely outcomes from manufacturing. Of particular interest for this study are the data analysis techniques and how they may be applied to optical and mechanical tolerance decisions. The effect of geometric factors and mechanical glass properties on lens manufacturability will also be presented. Although the present work concerns lens grinding and polishing, some of the concepts and analysis techniques could also be applied to other processes such as molding and single-point diamond turning.
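The point about assumed versus actual distributions can be demonstrated directly. The sketch below Monte Carlo-tolerances a thin plano-convex lens (all numbers hypothetical, not the paper's shop data), once with the designer's assumed Gaussian radius error and once with a skewed "as-built" distribution of the same mean.

```python
import numpy as np

rng = np.random.default_rng(4)
n, R_nom, index = 100_000, 100.0, 1.5             # trials, radius (mm), index

assumed = rng.normal(0.0, 0.05, n)                # designer's assumed errors
actual = rng.gamma(2.0, 0.025, n) - 0.05          # skewed errors, same zero mean

def focal_length(dR):
    # Thin plano-convex lens: f = R / (n - 1).
    return (R_nom + dR) / (index - 1.0)

# 95% focal-length intervals under each error model.
spread_assumed = np.percentile(focal_length(assumed), [2.5, 97.5])
spread_actual = np.percentile(focal_length(actual), [2.5, 97.5])
```

Even with matched means, the skewed process shifts the 95% focal-length interval asymmetrically about the nominal 200 mm, which is exactly the kind of discrepancy the manufacturing-data analysis is meant to expose.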

  14. A new experimental flight research technique: The remotely piloted airplane

    NASA Technical Reports Server (NTRS)

    Layton, G. P.

    1976-01-01

    The results obtained so far with a remotely piloted research vehicle (RPRV), a 3/8-scale model of the F-15 airplane, in determining the usefulness of the RPRV technique in high-risk flight testing, including spin testing, are presented. The program showed that the RPRV technique, including the use of a digital control system, is a practical method for obtaining flight research data. The spin, stability, and control data obtained with the 3/8-scale model also showed that predictions based on wind-tunnel tests were generally reasonable.

  15. Phylogenetic information and experimental design in molecular systematics.

    PubMed Central

    Goldman, N

    1998-01-01

    Despite the widespread perception that evolutionary inference from molecular sequences is a statistical problem, there has been very little attention paid to questions of experimental design. Previous consideration of this topic has led to little more than an empirical folklore regarding the choice of suitable genes for analysis, and to dispute over the best choice of taxa for inclusion in data sets. I introduce what I believe are new methods that permit the quantification of phylogenetic information in a sequence alignment. The methods use likelihood calculations based on Markov-process models of nucleotide substitution allied with phylogenetic trees, and allow a general approach to optimal experimental design. Two examples are given, illustrating realistic problems in experimental design in molecular phylogenetics and suggesting more general conclusions about the choice of genomic regions, sequence lengths and taxa for evolutionary studies. PMID:9787470

  16. A comparison of controller designs for an experimental flexible structure

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Maghami, P. G.; Joshi, S. M.

    1991-01-01

    Control systems design and hardware testing are addressed for an experimental structure that displays the characteristics of a typical flexible spacecraft. The results of designing and implementing various control design methodologies are described. The design methodologies under investigation include linear quadratic Gaussian control, static and dynamic dissipative controls, and H-infinity optimal control. Among the three controllers considered, it is shown, through computer simulation and laboratory experiments on the evolutionary structure, that the dynamic dissipative controller gave the best results in terms of vibration suppression and robustness with respect to modeling errors.

  17. Inverse boundary-layer technique for airfoil design

    NASA Technical Reports Server (NTRS)

    Henderson, M. L.

    1979-01-01

    A description is presented of a technique for the optimization of airfoil pressure distributions using an interactive inverse boundary-layer program. This program allows the user to determine quickly a near-optimum subsonic pressure distribution which meets his requirements for lift, drag, and pitching moment at the desired flow conditions. The method employs an inverse turbulent boundary-layer scheme for definition of the turbulent recovery portion of the pressure distribution. Two levels of pressure-distribution architecture are used - a simple roof top for preliminary studies and a more complex four-region architecture for a more refined design. A technique is employed to avoid the specification of pressure distributions which result in unrealistic airfoils, that is, those with negative thickness. The program allows rapid evaluation of a designed pressure distribution off-design in Reynolds number, transition location, and angle of attack, and will compute an airfoil contour for the designed pressure distribution using linear theory.

  18. Comparison of experimental rotor damping data-reduction techniques

    NASA Technical Reports Server (NTRS)

    Warmbrodt, William

    1988-01-01

    The ability of existing data reduction techniques to determine frequency and damping from transient time-history records was evaluated. Analog data records representative of small-scale helicopter aeroelastic stability tests were analyzed. The data records were selected to provide information on the accuracy of reduced frequency and decay coefficients as a function of modal damping level, modal frequency, number of modes present in the time-history record, proximity to other modes with different frequencies, steady offset in the time history, and signal-to-noise ratio. The study utilized the results from each of the major U.S. helicopter manufacturers, the U.S. Army Aeroflightdynamics Directorate, and NASA Ames Research Center using their in-house data reduction and analysis techniques. Consequently, the accuracy of different data analysis techniques and the manner in which they were implemented were also evaluated. It was found that modal frequencies can be accurately determined even in the presence of significant random and periodic noise. Identified decay coefficients do, however, show considerable variation, particularly for highly damped modes. The manner in which the data are reduced and the role of the data analyst were shown to be important. Although several different damping determination methods were used, no clear trends were evident for the observed differences between the individual analysis techniques. It is concluded that the data reduction of modal-damping characteristics from transient time histories results in a range of damping values.
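One of the classical single-mode reduction methods in this family is the logarithmic decrement. The abstract does not specify which algorithms each organization used, so the following is only a generic sketch of the technique applied to a synthetic decay record:

```python
import math

def log_decrement_damping(peaks):
    """Estimate the damping ratio of a single decaying mode from
    successive positive peak amplitudes of a transient time history."""
    decs = [math.log(peaks[i] / peaks[i + 1]) for i in range(len(peaks) - 1)]
    delta = sum(decs) / len(decs)                      # mean log decrement
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Synthetic check: peak envelope of a 5 Hz mode with 5% critical damping.
zeta, wn = 0.05, 2 * math.pi * 5.0
wd = wn * math.sqrt(1 - zeta ** 2)
period = 2 * math.pi / wd                              # time between peaks
peaks = [math.exp(-zeta * wn * k * period) for k in range(6)]
print(round(log_decrement_damping(peaks), 4))          # recovers 0.05
```

On a clean single mode the method is exact; the variability reported in the study arises when multiple modes, offsets, and noise contaminate the record, which is precisely what the more elaborate reduction techniques try to handle.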

  19. Determination of dynamic fracture toughness using a new experimental technique

    NASA Astrophysics Data System (ADS)

    Cady, Carl M.; Liu, Cheng; Lovato, Manuel L.

    2015-09-01

    In other studies dynamic fracture toughness has been measured using Charpy impact and modified Hopkinson bar techniques. In this paper results will be shown for the measurement of fracture toughness using a new test geometry. The crack propagation velocities range from ~0.15 mm/s to 2.5 m/s. Digital image correlation (DIC) will be the technique used to measure both the strain and the crack growth rates. The boundary of the crack is determined using the correlation coefficient generated during image analysis, and with interframe timing the crack growth rate and crack opening can be determined. A comparison of static and dynamic loading experiments will be made for brittle polymeric materials. The analysis technique presented by Sammis et al. [1] is a semi-empirical solution; however, additional linear elastic fracture mechanics analysis of the strain fields generated as part of the DIC analysis allows for the more commonly used method resembling the crack tip opening displacement (CTOD) experiment. It should be noted that this technique was developed because limited amounts of material were available and crack growth rates were too fast for a standard CTOD method.

  20. Optimizing experimental design for comparing models of brain function.

    PubMed

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-11-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  1. The design of aircraft using the decision support problem technique

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Marinopoulos, Stergios; Jackson, David M.; Shupe, Jon A.

    1988-01-01

    The Decision Support Problem Technique for unified design, manufacturing and maintenance is being developed at the Systems Design Laboratory at the University of Houston. This involves the development of a domain-independent method (and the associated software) that can be used to process domain-dependent information and thereby provide support for human judgment. In a computer assisted environment, this support is provided in the form of optimal solutions to Decision Support Problems.

  2. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiments and the wide dynamic range of transcription they measure make RNA-seq an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity to experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER. PMID:27008024
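PROPER is an R package; as a language-neutral illustration of the simulation-based idea it embodies (here deliberately much simpler than PROPER's actual count models: a single gene, normally distributed log-expression, and a z-test approximation, all assumptions of this sketch), a power estimate might look like:

```python
import math
import random

def simulated_power(n_per_group, log2_fc, sigma, n_sims=4000, z_crit=1.96):
    """Simulation-based power for detecting differential expression of one
    gene: log2 expression is modelled as normal within each group and the
    groups are compared with a two-sample z-test (normal approximation)."""
    hits = 0
    for _ in range(n_sims):
        a = [random.gauss(0.0, sigma) for _ in range(n_per_group)]
        b = [random.gauss(log2_fc, sigma) for _ in range(n_per_group)]
        mean_a, mean_b = sum(a) / n_per_group, sum(b) / n_per_group
        var_a = sum((x - mean_a) ** 2 for x in a) / (n_per_group - 1)
        var_b = sum((x - mean_b) ** 2 for x in b) / (n_per_group - 1)
        se = math.sqrt((var_a + var_b) / n_per_group)
        if abs(mean_b - mean_a) / se > z_crit:
            hits += 1
    return hits / n_sims

random.seed(1)
print(simulated_power(3, 1.0, 0.5), simulated_power(10, 1.0, 0.5))
```

Increasing the per-group sample size raises the estimated power; quantifying that trade-off before sequencing is the point of such calculations.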

  3. The use of experimental designs for corrosive oilfield systems

    SciTech Connect

    Biagiotti, S.F. Jr.; Frost, R.H.

    1997-08-01

    A Design of Experiment approach was used to investigate the effect of hydrogen sulfide, carbon dioxide and brine composition on the corrosion rate of carbon steel. Three of the most common experimental design approaches (Full Factorial, Taguchi L{sub 4}, and Alternate Fractional) were used to evaluate the results. This work concluded that: CO{sub 2} and brine both have significant main and two-factor effects on corrosion rate, H{sub 2}S concentration has a moderate effect on corrosion rate, and higher total dissolved solids (TDS) brine compositions appear to force gases out of solution, thereby decreasing the corrosion rate of carbon steel. The Full Factorial Design correctly identified all independent variables and the significant interactions between CO{sub 2}/H{sub 2}S and CO{sub 2}/Brine on corrosion rate. The two fractional factorial experimental methods resulted in incorrect conclusions. The Taguchi L{sub 4} method gave misleading results as it did not identify H{sub 2}S as having a positive effect on corrosion rate, and only identified the strong interactions in the experimental matrix. The Alternative Fractional design also yielded incorrect interpretations with regard to the effect of brine on corrosion. This study has shown that reduced experimental designs (e.g., half fractional) may be inappropriate for distinguishing the synergistic interactions likely to form in chemically reactive systems. Therefore, based upon the size of the data set collected in this work, the authors recommend that full factorial designs be used for corrosion evaluations. When the number of experimental variables make it impractical to perform a full factorial design, the aliasing relationships should be carefully evaluated.
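The confounding that misled the fractional designs can be reproduced in miniature. The response model below is invented for illustration (its coefficients are not the paper's data); it shows that in a half fraction with defining relation I = ABC the estimated A main effect silently absorbs the BC interaction:

```python
from itertools import product

# 2^3 full factorial in coded -1/+1 units for factors A, B, C
# (think H2S, CO2, brine; the response model below is hypothetical).
runs = list(product([-1, 1], repeat=3))

def effect(responses, contrast):
    """Average response at +1 minus average at -1 of a contrast column."""
    hi = [r for r, c in zip(responses, contrast) if c > 0]
    lo = [r for r, c in zip(responses, contrast) if c < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Invented corrosion-rate model with a genuine B x C interaction.
y = [1.0 + 0.2 * a + 0.8 * b + 0.5 * c + 0.6 * b * c for a, b, c in runs]

A = [a for a, b, c in runs]
print(round(effect(y, A), 6))            # full factorial isolates A: 0.4

# Half fraction with defining relation I = ABC, i.e. keep runs where
# a == b * c; the A column and the BC column are now identical, so the
# "A effect" estimated from these 4 runs is really A + BC.
half = [i for i, (a, b, c) in enumerate(runs) if a == b * c]
y_half = [y[i] for i in half]
A_half = [A[i] for i in half]
print(round(effect(y_half, A_half), 6))  # aliased estimate A + BC: 1.6
```

The full factorial recovers the true A effect because every other term is balanced across the A contrast; the half fraction cannot separate A from BC, which mirrors the incorrect conclusions the authors obtained from their reduced designs.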

  4. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

    The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This restraint on the part of the analysis will continue to be a major limiting factor in application of optimization to vehicle design.

  5. Experimental Validation of Simulations Using Full-field Measurement Techniques

    SciTech Connect

    Hack, Erwin

    2010-05-28

    The calibration by reference materials of dynamic full-field measurement systems is discussed together with their use to validate numerical simulations of structural mechanics. The discussion addresses three challenges that are faced in these processes, i.e. how to calibrate a measuring instrument that (i) provides full-field data, and (ii) is dynamic; (iii) how to compare data from simulation and experimentation.

  6. Low Cost Gas Turbine Off-Design Prediction Technique

    NASA Astrophysics Data System (ADS)

    Martinjako, Jeremy

    This thesis seeks to further explore off-design point operation of gas turbines and to examine the capabilities of GasTurb 12 as a tool for off-design analysis. It is a continuation of previous thesis work which initially explored the capabilities of GasTurb 12. The research is conducted in order to: 1) validate GasTurb 12 and 2) predict off-design performance of the Garrett GTCP85-98D located at the Arizona State University Tempe campus. GasTurb 12 is validated as an off-design point tool by using the program to predict performance of an LM2500+ marine gas turbine. Haglind and Elmegaard (2009) published a paper detailing a second off-design point method that includes the manufacturer's off-design point data for the LM2500+. GasTurb 12 is used to predict off-design point performance of the LM2500+ and compared to the manufacturer's data. The GasTurb 12 predictions show good correlation. Garrett has published specification data for the GTCP85-98D. This specification data is analyzed to determine the design point and to comment on off-design trends. Arizona State University GTCP85-98D off-design experimental data is evaluated. Trends presented in the data are commented on and explained. The trends match the expected behavior demonstrated in the specification data for the same gas turbine system. It was originally intended that a model of the GTCP85-98D be constructed in GasTurb 12 and used to predict off-design performance. The prediction would be compared to collected experimental data. This is not possible because the free version of GasTurb 12 used in this research does not have a module to model a single-spool turboshaft. This module needs to be purchased for this analysis.

  7. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.

  8. Theory and experimental technique for nondestructive evaluation of ceramic composites

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    1990-01-01

    The important ultrasonic scattering mechanisms for SiC and Si3N4 ceramic composites were identified by examining the interaction of ultrasound with individual fibers, pores, and grains. The dominant scattering mechanisms were identified as asymmetric refractive scattering due to porosity gradients in the matrix material, and symmetric diffractive scattering at the fiber-to-matrix interface and at individual pores. The effect of the ultrasonic reflection coefficient and surface roughness in the ultrasonic evaluation was highlighted. A new nonintrusive ultrasonic evaluation technique, angular power spectrum scanning (APSS), was presented that is sensitive to microstructural variations in composites. Preliminary results indicate that APSS will yield information on the composite microstructure that is not available by any other nondestructive technique.

  9. CMOS array design automation techniques. [metal oxide semiconductors

    NASA Technical Reports Server (NTRS)

    Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.

    1975-01-01

    A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology and required performance considerations, particularly high circuit speed.

  10. Design and experimental characterization of a multifrequency flexural ultrasonic actuator.

    PubMed

    Iula, Antonio

    2009-08-01

    In this work, a multifrequency flexural ultrasonic actuator is proposed, designed, and experimentally characterized. The actuator is composed of a Langevin transducer and of a displacement amplifier. The displacement amplifier is able to transform the almost flat axial displacement provided by the Langevin transducer at its back end into a flexural deformation that produces the maximum axial displacement at the center of its front end. Design and analysis of the actuator have been performed by using finite element method software. In analogy to classical power actuators that use sectional concentrators, the design criterion that has been followed was to design the Langevin transducer and the flexural amplifier separately at the same working frequency. As opposed to sectional concentrators, the flexural amplifier has several design parameters that allow a wide flexibility in the design. The flexural amplifier has been designed to produce a very high displacement amplification. It has also been designed in such a way that the whole actuator has 2 close working frequencies (17.4 kHz and 19.2 kHz), with similar flexural deformations of the front surface. A first prototype of the actuator has been manufactured and experimentally characterized to validate the numerical analysis. PMID:19686988

  11. Application of hazard assessment techniques in the CISF design process

    SciTech Connect

    Thornton, J.R.; Henry, T.

    1997-10-29

    The Department of Energy has submitted to the NRC staff for review a topical safety analysis report (TSAR) for a Centralized Interim Storage Facility (CISF). The TSAR will be used in licensing the CISF when and if a site is designated. CISF design events are identified based on thorough review of design basis events (DBEs) previously identified by dry storage system suppliers and licensees and through the application of hazard assessment techniques. A Preliminary Hazards Assessment (PHA) is performed to identify design events applicable to a Phase 1 non-site-specific CISF. A PHA is deemed necessary since the Phase 1 CISF is distinguishable from previous dry storage applications in several significant operational scope and design basis aspects. In addition to assuring all design events applicable to the Phase 1 CISF are identified, the PHA served as an integral part of the CISF design process by identifying potential important-to-safety and defense-in-depth facility design and administrative control features. This paper describes the Phase 1 CISF design event identification process and summarizes significant PHA contributions to the CISF design.

  12. A technique for optimizing the design of power semiconductor devices

    NASA Technical Reports Server (NTRS)

    Schlegel, E. S.

    1976-01-01

    A technique is described that provides a basis for predicting whether any device design change will improve or degrade the unavoidable trade-off that must be made between the conduction loss and the turn-off speed of fast-switching high-power thyristors. The technique makes use of a previously reported method by which, for a given design, this trade-off was determined for a wide range of carrier lifetimes. It is shown that by extending this technique, one can predict how other design variables affect this trade-off. The results show that for relatively slow devices the design can be changed to decrease the current gains to improve the turn-off time without significantly degrading the losses. On the other hand, for devices having fast turn-off times design changes can be made to increase the current gain to decrease the losses without a proportionate increase in the turn-off time. Physical explanations for these results are proposed.

  13. Experimental validation of tilt measurement technique with a laser beacon

    NASA Astrophysics Data System (ADS)

    Belen'kii, Mikhail S.; Karis, Stephen J.; Brown, James M.; Fugate, Robert Q.

    1999-09-01

    We have experimentally demonstrated for the first time a method for sensing wavefront tilt with a laser guide star (LGS). The tilt components of wavefronts were measured synchronously from the LGS using a telescope with a 0.75 m effective aperture and from Polaris using a 1.5 m telescope. The Rayleigh guide star was formed at an altitude of 6 km and a corresponding range of 10.5 km by projecting a focused beam at Polaris from the full aperture of the 1.5 m telescope. Both telescope mounts were unpowered and bolted down in place, allowing us to substantially reduce telescope vibration. The maximum value of the measured cross-correlation coefficient between the tilt for Polaris and the LGS is 0.71. The variations of the measured cross-correlation coefficient in the range from 0.22 to 0.71 are caused by turbulence at altitudes above 6 km, which was not sampled by the laser beacon but affected the tilt for Polaris. They are also caused by the cone effect for turbulence below 6 km, residual mount jitter of the telescopes, and variations of the S/N. The experimental results support our concept of sensing atmospheric tilt by observing a LGS with an auxiliary telescope and indicate that this method is a possible solution for the tip-tilt problem.
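The quantity reported above is a zero-lag cross-correlation coefficient between two tilt records. The sketch below (all signal levels invented, not taken from the experiment) shows how a shared low-altitude component plus independent terms (unsampled high-altitude turbulence, mount jitter, noise) yields a coefficient well below unity even when the common signal itself is identical in both records:

```python
import math
import random

def cross_correlation(x, y):
    """Zero-lag cross-correlation coefficient between two tilt records."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic tilt records: a shared low-altitude component plus
# independent terms (high-altitude turbulence, mount jitter, noise).
random.seed(2)
common = [random.gauss(0, 1) for _ in range(5000)]
star_tilt = [c + random.gauss(0, 0.7) for c in common]
lgs_tilt = [c + random.gauss(0, 0.7) for c in common]
print(round(cross_correlation(star_tilt, lgs_tilt), 2))
```

With these illustrative noise levels the expected coefficient is about 1/(1 + 0.7^2) ≈ 0.67, comfortably below 1 despite a perfectly shared component, consistent with the 0.22 to 0.71 range the authors attribute to unsampled turbulence, the cone effect, and jitter.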

  14. High-Throughput Computational and Experimental Techniques in Structural Genomics

    PubMed Central

    Chance, Mark R.; Fiser, Andras; Sali, Andrej; Pieper, Ursula; Eswar, Narayanan; Xu, Guiping; Fajardo, J. Eduardo; Radhakannan, Thirumuruhan; Marinkovic, Nebojsa

    2004-01-01

    Structural genomics has as its goal the provision of structural information for all possible ORF sequences through a combination of experimental and computational approaches. The access to genome sequences and cloning resources from an ever-widening array of organisms is driving high-throughput structural studies by the New York Structural Genomics Research Consortium. In this report, we outline the progress of the Consortium in establishing its pipeline for structural genomics, and some of the experimental and bioinformatics efforts leading to structural annotation of proteins. The Consortium has established a pipeline for structural biology studies, automated modeling of ORF sequences using solved (template) structures, and a novel high-throughput approach (metallomics) to examining the metal binding to purified protein targets. The Consortium has so far produced 493 purified proteins from >1077 expression vectors. A total of 95 have resulted in crystal structures, and 81 are deposited in the Protein Data Bank (PDB). Comparative modeling of these structures has generated >40,000 structural models. We also initiated a high-throughput metal analysis of the purified proteins; this has determined that 10%-15% of the targets contain a stoichiometric structural or catalytic transition metal atom. The progress of the structural genomics centers in the U.S. and around the world suggests that the goal of providing useful structural information on nearly all ORF domains will be realized. This projected resource will provide structural biology information important to understanding the function of most proteins of the cell. PMID:15489337

  15. Active Flow Control: Instrumentation Automation and Experimental Technique

    NASA Technical Reports Server (NTRS)

    Gimbert, N. Wes

    1995-01-01

    In investigating the potential of a new actuator for use in an active flow control system, several objectives had to be accomplished, the largest of which was the experimental setup. The work was conducted at the NASA Langley 20x28 Shear Flow Control Tunnel. The actuator, named Thunder, is a high-deflection piezo device recently developed at Langley Research Center. This research involved setting up the instrumentation, the lighting, the smoke, and the recording devices. The instrumentation was automated by means of a Power Macintosh running LabVIEW, a graphical instrumentation package developed by National Instruments. Routines were written to allow the tunnel conditions to be determined at a given instant at the push of a button. This included determination of tunnel pressures, speed, density, temperature, and viscosity. Other aspects of the experimental equipment included the setup of a CCD video camera with a video frame grabber, monitor, and VCR to capture the motion. A strobe light was used to highlight the smoke that was used to visualize the flow. Additional effort was put into creating a scale drawing of another tunnel on site and a limited literature search in the area of active flow control.

  16. Experimental techniques in ultrasonics for NDE and material characterization

    NASA Astrophysics Data System (ADS)

    Tittmann, B. R.

    A development status evaluation is presented for ultrasonics NDE characterization of aerospace alloys and composites in such application as the Space Shuttle, Space Station Freedom, and hypersonic aircraft. The use of such NDE techniques extends to composite-cure monitoring, postmanufacturing quality assurance, and in-space service inspection of such materials as graphite/epoxy, Ti alloys, and Al honeycomb. Attention is here given to the spectroscopy of elastically scattered wave pulses from flaws, the acoustical imaging of flaws in honeycomb structures, and laser-based ultrasonics for the noncontact inspection of composite structures.

  17. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  18. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  19. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  20. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  1. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  2. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  3. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. 
Results We performed the initial stages of characterizing our system

  4. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
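As a concrete illustration of the fractional-factorial bookkeeping and the loss function mentioned above, here is a minimal Python sketch using the standard L4 orthogonal array for three two-level factors; the response values and loss constant are invented for the example:

```python
# Taguchi-style analysis sketch: an L4 orthogonal array for three
# two-level factors, main-effect estimation from only 4 trials, and the
# quadratic loss function L = k*(y - m)^2. Responses are illustrative.

L4 = [  # rows: trials; columns: factor levels (0 = low, 1 = high)
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

responses = [12.0, 14.0, 15.0, 17.0]  # hypothetical measured responses

def main_effect(factor):
    """Average response at the high level minus at the low level."""
    hi = [y for row, y in zip(L4, responses) if row[factor] == 1]
    lo = [y for row, y in zip(L4, responses) if row[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def taguchi_loss(y, target, k=1.0):
    """Quadratic loss: any deviation from target incurs a cost."""
    return k * (y - target) ** 2

effects = [main_effect(f) for f in range(3)]
```

The balance of the orthogonal array is what lets each main effect be estimated from the same four trials; as the abstract notes, interactions are simply not resolvable in this design.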

  5. An experimental modal testing/identification technique for personal computers

    NASA Technical Reports Server (NTRS)

    Roemer, Michael J.; Schlonski, Steven T.; Mook, D. Joseph

    1990-01-01

A PC-based system for mode shape identification is evaluated. A time-domain modal identification procedure is utilized to identify the mode shapes of a beam apparatus from discrete time-domain measurements. The apparatus includes a cantilevered aluminum beam, four accelerometers, four low-pass filters, and the computer. The method combines an identification algorithm, the Eigensystem Realization Algorithm (ERA), with an estimation algorithm, the Minimum Model Error (MME) method. The identification ability of this algorithm is compared with ERA alone, a frequency-response-function technique, and an Euler-Bernoulli beam model. Detection of modal parameters and mode shapes by the PC-based time-domain system is shown to be accurate in an application with an aluminum beam, while mode shapes identified by the frequency-domain technique are not as accurate as predicted. The new method is shown to be significantly less sensitive to noise and poorly excited modes than other leading methods. The results support the use of time-domain identification systems for mode shape prediction.
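The core ERA step can be sketched compactly. The following Python example (assuming NumPy; a synthetic 5 Hz single mode stands in for the beam measurements) builds the Hankel matrices from impulse-response samples, truncates the SVD to one mode, and recovers the damped frequency from the realized state matrix:

```python
import numpy as np

# Synthetic impulse response of one lightly damped 5 Hz mode
# (stands in for the measured beam data; values are illustrative).
dt = 0.01
zeta, f_true = 0.02, 5.0
wn = 2 * np.pi * f_true
wd = wn * np.sqrt(1 - zeta**2)
k = np.arange(200)
h = np.exp(-zeta * wn * k * dt) * np.cos(wd * k * dt)

m = 40
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])      # Hankel
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])  # shifted

U, s, Vt = np.linalg.svd(H0)
n = 2                                     # one mode = two states
S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:n]))
A = S_inv_sqrt @ U[:, :n].T @ H1 @ Vt[:n, :].T @ S_inv_sqrt  # realized A

lam = np.log(np.linalg.eigvals(A)) / dt   # continuous-time poles
f_est = max(abs(lam.imag)) / (2 * np.pi)  # identified damped frequency, Hz
```

With noise-free data the realized state matrix reproduces the mode exactly; the paper's point is how this degrades with noise, which is where the MME estimation step helps.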

  6. Super-smooth surface fabrication technique and experimental research.

    PubMed

    Zhang, Linghua; Wang, Junlin; Zhang, Jian

    2012-09-20

Wheel polishing, a new optical fabrication technique, is proposed for super-smooth surface fabrication of optical components in high-precision optical instruments. The machining mechanism and the removal function contours are investigated in detail. The elastohydrodynamic lubrication theory is adopted to analyze the deformation of the wheel head, the pressure distribution, and the fluid film thickness distribution in the narrow machining zone. The pressure and the shear stress distributions at the interface between the slurry and the sample are numerically simulated. Practical polishing experiments are conducted to analyze the relationship between the wheel-sample distance and the machining rate. It is demonstrated in this paper that the wheel-sample distance directly influences the removal function contours. Moreover, ripples on the wheel surface eventually induce transverse prints on the removal function contours. The surface roughness of fused silica is reduced from an initial 1.267 nm (rms) to less than 0.5 nm (rms). The wheel polishing technique is thus feasible for super-smooth surface fabrication. PMID:23033032

  7. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515
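The MOCU definition above is easy to state in code. This toy Python sketch (the networks, priors, interventions, and costs are invented, not from the paper or its MATLAB software) computes the robust intervention and the resulting MOCU over a three-member uncertainty class:

```python
# Toy MOCU computation: the expected extra cost of applying the single
# intervention that is best on average, versus applying the best
# intervention for each possible network. All values are illustrative.

priors = {"net_a": 0.5, "net_b": 0.3, "net_c": 0.2}
cost = {  # cost[network][intervention]
    "net_a": {"psi1": 0.1, "psi2": 0.4},
    "net_b": {"psi1": 0.5, "psi2": 0.2},
    "net_c": {"psi1": 0.3, "psi2": 0.3},
}
interventions = ["psi1", "psi2"]

def expected_cost(psi):
    """Prior-weighted cost of applying intervention psi everywhere."""
    return sum(priors[n] * cost[n][psi] for n in priors)

# Robust intervention: minimizes the expected cost over the class.
psi_robust = min(interventions, key=expected_cost)

# MOCU: expected cost of the robust intervention minus the expected
# cost of the network-specific optimal interventions.
mocu = expected_cost(psi_robust) - sum(
    priors[n] * min(cost[n].values()) for n in priors
)
```

Experimental design then asks which experiment, by ruling out some networks, most reduces the expected remaining MOCU; the paper's contribution is approximating that computation on reduced networks.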

  8. Experimental techniques for in-ring reaction experiments

    NASA Astrophysics Data System (ADS)

    Mutterer, M.; Egelhof, P.; Eremin, V.; Ilieva, S.; Kalantar-Nayestanaki, N.; Kiselev, O.; Kollmus, H.; Kröll, T.; Kuilman, M.; Chung, L. X.; Najafi, M. A.; Popp, U.; Rigollet, C.; Roy, S.; von Schmid, M.; Streicher, B.; Träger, M.; Yue, K.; Zamora, J. C.; the EXL Collaboration

    2015-11-01

    As a first step of the EXL project scheduled for the New Experimental Storage Ring at FAIR a precursor experiment (E105) was performed at the ESR at GSI. For this experiment, an innovative differential pumping concept, originally proposed for the EXL recoil detector ESPA, was successfully applied. The implementation and essential features of this novel technical concept will be discussed, as well as details on the detectors and the infrastructure around the internal gas-jet target. With 56Ni(p, p)56Ni elastic scattering at 400 MeV u-1, a nuclear reaction experiment with stored radioactive beams was realized for the first time. Finally, perspectives for a next-generation EXL-type setup are briefly discussed.

  9. Thermoelastic Femoral Stress Imaging for Experimental Evaluation of Hip Prosthesis Design

    NASA Astrophysics Data System (ADS)

    Hyodo, Koji; Inomoto, Masayoshi; Ma, Wenxiao; Miyakawa, Syunpei; Tateishi, Tetsuya

    An experimental system using the thermoelastic stress analysis method and a synthetic femur was utilized to perform reliable and convenient mechanical biocompatibility evaluation of hip prosthesis design. Unlike the conventional technique, the unique advantage of the thermoelastic stress analysis method is its ability to image whole-surface stress (Δ(σ1+σ2)) distribution in specimens. The mechanical properties of synthetic femurs agreed well with those of cadaveric femurs with little variability between specimens. We applied this experimental system for stress distribution visualization of the intact femur, and the femurs implanted with an artificial joint. The surface stress distribution of the femurs sensitively reflected the prosthesis design and the contact condition between the stem and the bone. By analyzing the relationship between the stress distribution and the clinical results of the artificial joint, this technique can be used in mechanical biocompatibility evaluation and pre-clinical performance prediction of new artificial joint design.
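The thermoelastic method above rests on the adiabatic relation ΔT = -Km·T·Δ(σ1+σ2); inverting it converts the measured infrared temperature change into the imaged stress sum. A Python sketch, where the thermoelastic constant Km is a typical steel-like value assumed for illustration, not taken from the paper:

```python
# Thermoelastic stress analysis: under adiabatic cyclic loading,
#   dT = -Km * T * d(sigma1 + sigma2),
# so the surface stress sum is recovered from the temperature change.

K_M = 3.5e-12  # thermoelastic constant, 1/Pa (assumed, steel-like)

def stress_sum_from_dT(dT, T_abs, km=K_M):
    """Return the change in (sigma1 + sigma2), in Pa, from dT at T_abs (K)."""
    return -dT / (km * T_abs)

# A cooling of 10 mK at room temperature maps to a tensile stress sum
# on the order of 10 MPa.
sigma_sum = stress_sum_from_dT(-0.010, 293.0)
```

This is why the technique images Δ(σ1+σ2) specifically, as stated above: the thermoelastic signal depends only on the first stress invariant, not on the individual components.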

  10. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  11. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

The focus of this project is on the development of a reliable and efficient ab initio based computational high temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations on the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the computational simulation method development. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  12. Uncertainty analysis on the design of thermal conductivity measurement by a guarded cut-bar technique

    NASA Astrophysics Data System (ADS)

    Xing, Changhu; Jensen, Colby; Ban, Heng; Phillips, Jeffrey

    2011-07-01

    A technique adapted from the guarded-comparative-longitudinal heat flow method was selected for the measurement of the thermal conductivity of a nuclear fuel compact over a temperature range characteristic of its usage. This technique fulfills the requirement for non-destructive measurement of the composite compact. Although numerous measurement systems have been created based on the guarded-comparative method, comprehensive systematic (bias) and measurement (precision) uncertainty associated with this technique have not been fully analyzed. In addition to the geometric effect in the bias error, which has been analyzed previously, this paper studies the working condition which is another potential error source. Using finite element analysis, this study showed the effect of these two types of error sources in the thermal conductivity measurement process and the limitations in the design selection of various parameters by considering their effect on the precision error. The results and conclusions provide valuable reference for designing and operating an experimental measurement system using this technique.
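The data reduction behind the guarded-comparative-longitudinal method amounts to equating the axial heat flux through the reference (meter) bars and the sample. A minimal Python sketch, with all numerical values illustrative rather than taken from the paper:

```python
# Guarded cut-bar data reduction: the same axial heat flux is assumed to
# pass through the reference bars and the sample (the guard suppresses
# radial losses), so the sample conductivity follows from the gradients.

def comparative_conductivity(k_ref, dT_top, dT_bot, dT_sample,
                             L_ref, L_sample):
    """Sample thermal conductivity (W/m-K) from a cut-bar stack."""
    q_top = k_ref * dT_top / L_ref     # flux from upper reference bar
    q_bot = k_ref * dT_bot / L_ref     # flux from lower reference bar
    q = 0.5 * (q_top + q_bot)          # average axial heat flux (W/m^2)
    return q * L_sample / dT_sample

k = comparative_conductivity(k_ref=19.0, dT_top=10.0, dT_bot=10.4,
                             dT_sample=2.0, L_ref=0.05, L_sample=0.025)
```

The bias sources analyzed in the paper (geometry, working conditions) enter as violations of the equal-flux assumption, while the precision error propagates through the measured ΔT values.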

  13. Advanced Computational and Experimental Techniques for Nacelle Liner Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Jones, Michael G.; Brown, Martha C.; Nark, Douglas

    2009-01-01

    The Curved Duct Test Rig (CDTR) has been developed to investigate sound propagation through a duct of size comparable to the aft bypass duct of typical aircraft engines. The axial dimension of the bypass duct is often curved and this geometric characteristic is captured in the CDTR. The semiannular bypass duct is simulated by a rectangular test section in which the height corresponds to the circumferential dimension and the width corresponds to the radial dimension. The liner samples are perforate over honeycomb core and are installed on the side walls of the test section. The top and bottom surfaces of the test section are acoustically rigid to simulate a hard wall bifurcation or pylon. A unique feature of the CDTR is the control system that generates sound incident on the liner test section in specific modes. Uniform air flow, at ambient temperature and flow speed Mach 0.275, is introduced through the duct. Experiments to investigate configuration effects such as curvature along the flow path on the acoustic performance of a sample liner are performed in the CDTR and reported in this paper. Combinations of treated and acoustically rigid side walls are investigated. The scattering of modes of the incident wave, both by the curvature and by the asymmetry of wall treatment, is demonstrated in the experimental results. The effect that mode scattering has on total acoustic effectiveness of the liner treatment is also shown. Comparisons of measured liner attenuation with numerical results predicted by an analytic model based on the parabolic approximation to the convected Helmholtz equation are reported. The spectra of attenuation produced by the analytic model are similar to experimental results for both walls treated, straight and curved flow path, with plane wave and higher order modes incident. 
The numerical model is used to define the optimized resistance and reactance of a liner that significantly improves liner attenuation in the frequency range 1900-2400 Hz.

  14. Time-Dependent Reversible-Irreversible Deformation Threshold Determined Explicitly by Experimental Technique

    NASA Technical Reports Server (NTRS)

    Castelli, Michael G.; Arnold, Steven M.

    2000-01-01

Structural materials for the design of advanced aeropropulsion components are usually subject to loading under elevated temperatures, where a material's viscosity (resistance to flow) is greatly reduced in comparison to its viscosity under low-temperature conditions. As a result, the propensity for the material to exhibit time-dependent deformation is significantly enhanced, even when loading is limited to a quasi-linear stress-strain regime as an effort to avoid permanent (irreversible) nonlinear deformation. An understanding and assessment of such time-dependent effects in the context of combined reversible and irreversible deformation is critical to the development of constitutive models that can accurately predict the general hereditary behavior of material deformation. To this end, researchers at the NASA Glenn Research Center at Lewis Field developed a unique experimental technique that identifies the existence of and explicitly determines a threshold stress k, below which the time-dependent material deformation is wholly reversible, and above which irreversible deformation is incurred. This technique is unique in the sense that it allows, for the first time, an objective, explicit, experimental measurement of k. The underlying concept for the experiment is based on the assumption that the material's time-dependent reversible response is invariable, even in the presence of irreversible deformation.

  15. A Constrained Design Approach for NLF Airfoils by Coupling Inverse Design and Optimal Techniques

    NASA Astrophysics Data System (ADS)

    Deng, L.; Gao, Y. W.; Qiao, Z. D.

    2011-09-01

In the present paper, a design method for natural laminar flow (NLF) airfoils with a substantial amount of laminar flow on both surfaces, coupling an inverse design method with an optimization technique, is developed. The N-factor method is used to design the target pressure distributions ahead of the pressure recovery region with the desired transition locations while maintaining aerodynamic constraints. The pressure in the recovery region is designed according to the Stratford separation criterion to prevent laminar separation. To improve off-design performance in inverse design, a multi-point inverse design is performed. An optimization technique based on response surface methodology (RSM) is used to calculate the target airfoil shapes from the designed target pressure distributions. The set of design points is selected to satisfy D-optimality, and reduced quadratic polynomial RS models without the 2nd-order cross terms are constructed to lower the computational cost. The design cases indicate that, with the coupled method developed here, the inverse design method can be used in multi-point design to improve off-design performance, and that the designed airfoils attain the desired transition locations and satisfy the aerodynamic constraints, although the thickness constraint is difficult to meet in this design procedure.
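The reduced quadratic response surface described above (intercept, linear, and pure quadratic terms, with the 2nd-order cross terms dropped) can be illustrated with an ordinary least-squares fit. A Python sketch assuming NumPy, with synthetic data standing in for the aerodynamic responses:

```python
import numpy as np

# Reduced quadratic RS model: y = b0 + sum(bi*xi) + sum(bii*xi^2),
# cross terms omitted to cut the number of required design points.
# The data below are synthetic, generated from known coefficients.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))           # 2 design variables, 20 points
y_true = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0]**2 + 0.2 * X[:, 1]**2
y = y_true + rng.normal(0, 0.01, 20)           # small "experimental" noise

# Design matrix: intercept, linear terms, pure quadratic terms only.
A = np.column_stack([np.ones(len(X)), X, X**2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # [b0, b1, b2, b11, b22]
```

In the paper the design points are additionally chosen for D-optimality (maximizing det(AᵀA)); here they are simply random, which suffices to show the model form.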

  16. New head gradient coil design and construction techniques

    PubMed Central

    Handler, William B; Harris, Chad T; Scholl, Timothy J; Parker, Dennis L; Goodrich, K Craig; Dalrymple, Brian; Van Sass, Frank; Chronik, Blaine A

    2013-01-01

    Purpose To design and build a head insert gradient coil to use in conjunction with body gradients for superior imaging. Materials and Methods The use of the Boundary Element Method to solve for a gradient coil wire pattern on an arbitrary surface has allowed us to incorporate engineering changes into the electromagnetic design of a gradient coil directly. Improved wire pattern design has been combined with robust manufacturing techniques and novel cooling methods. Results The finished coil had an efficiency of 0.15 mT/m/A in all three axes and allowed the imaging region to extend across the entire head and upper part of the neck. Conclusion The ability to adapt your electromagnetic design to necessary changes from an engineering perspective leads to superior coil performance. PMID:24123485

  17. Experimental investigation of iterative reconstruction techniques for high resolution mammography

    NASA Astrophysics Data System (ADS)

    Vengrinovich, Valery L.; Zolotarev, Sergei A.; Linev, Vladimir N.

    2014-02-01

Further development of new iterative reconstruction algorithms to improve the quality of three-dimensional breast images restored from incomplete and noisy mammograms is presented. The algebraic reconstruction method with simultaneous iterations, the Simultaneous Algebraic Reconstruction Technique (SART), and the iterative statistical method of Bayesian Iterative Reconstruction (BIR) are considered here the preferable iterative methods for improving image quality. For faster processing, a Graphics Processing Unit (GPU) is used. Total Variation (TV) minimization serves as a priori support for regularizing the iteration process and reducing the noise level in the reconstructed image. Preliminary results with physical phantoms show that all examined methods are capable of reconstructing structures layer by layer and of separating layers whose images overlap in the Z-direction. It was found that traditional Shift-And-Add (SAA) tomosynthesis is inferior to the iterative SART and BIR methods in terms of suppressing anatomical noise and image blurring between adjacent layers. Although the measured contrast-to-noise ratio in the presence of low-contrast internal structures is higher for SAA than for SART and BIR, its effectiveness in the presence of a structured background is rather poor. In our opinion, the optimal results can be achieved using Bayesian iterative reconstruction (BIR).
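The simultaneous update that distinguishes SART from row-by-row ART can be shown on a toy system. A Python sketch assuming NumPy (two "pixels", three "rays"; an illustration of the generic SART iteration, not the authors' implementation, and without the TV regularization step):

```python
import numpy as np

# Toy projection model A x = b: two unknown pixel values, three ray sums.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true                      # noise-free "measurements"

x = np.zeros(2)                     # initial image estimate
row_sums = A.sum(axis=1)            # per-ray normalization
col_sums = A.sum(axis=0)            # per-pixel normalization
for _ in range(200):
    residual = (b - A @ x) / row_sums       # normalized ray residuals
    x += (A.T @ residual) / col_sums        # simultaneous (SART) update
```

All rays contribute to each update at once, which is what makes the scheme GPU-friendly; a TV penalty would be applied between iterations to suppress noise.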

  18. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.
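The deterministic-regulator half of such an LQG design comes down to solving a Riccati equation for the state-feedback gain. A Python sketch assuming NumPy, using a discrete Riccati recursion on an invented two-state stand-in for the inlet dynamics (the actual model and the unstart-frequency performance index are in the report):

```python
import numpy as np

# Discrete-time LQR via Riccati value iteration on a toy 2-state model.
# Matrices are invented for illustration; Q stands in for the penalty
# that the report derives from the expected frequency of unstarts.

A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)            # state penalty
R = np.array([[1.0]])    # control penalty

P = Q.copy()
for _ in range(500):     # converges for a stabilizable (A, B) pair
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
    P = Q + A.T @ P @ (A - B @ K)                      # Riccati update

# Closed-loop spectral radius < 1 means the regulator stabilizes A.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The stochastic part of the report's design adds a Kalman filter for the unmeasured states; the separation principle lets the two halves be designed independently.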

  19. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  20. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  1. Design and Implementation of an Experimental Segway Model

    NASA Astrophysics Data System (ADS)

    Younis, Wael; Abdelati, Mohammed

    2009-03-01

The Segway is the first transportation product to stand, balance, and move in the same way we do. It is a truly 21st-century idea. The aim of this research is to study the theory behind building Segway vehicles, which is based on the stabilization of an inverted pendulum. An experimental model has been designed and implemented through this study. The model has been tested for its balance by running a Proportional-Derivative (PD) algorithm on a microprocessor chip. The model has also been identified so that it can serve as an educational experimental platform for Segways.
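The balance loop described above can be sketched with the linearized inverted-pendulum model. A short Python simulation with illustrative gains and parameters (not those of the actual experimental model):

```python
# Linearized inverted pendulum: theta'' = (g/l)*theta + u,
# stabilized by a PD law u = -Kp*theta - Kd*dtheta.
# Gains, length, and time step are illustrative values.

g, l, dt = 9.81, 0.5, 0.001
Kp, Kd = 60.0, 10.0        # Kp must exceed g/l for the loop to stabilize

theta, dtheta = 0.1, 0.0   # start tilted 0.1 rad, at rest
for _ in range(5000):      # simulate 5 s with forward-Euler steps
    u = -Kp * theta - Kd * dtheta      # PD control (normalized torque)
    ddtheta = (g / l) * theta + u
    dtheta += ddtheta * dt
    theta += dtheta * dt
```

The closed loop behaves as a damped oscillator with stiffness Kp - g/l and damping Kd, so the initial tilt decays to zero, which is exactly the balance test described for the model.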

  2. New Design of Control and Experimental System of Windy Flap

    NASA Astrophysics Data System (ADS)

    Yu, Shanen; Wang, Jiajun; Chen, Zhangping; Sun, Weihua

Experiments associated with control principles for automation majors are generally based on MATLAB simulation and are not well integrated with real control objects. The experimental system described here aims to meet teaching and study requirements by providing an experimental platform for learning the principles of automatic control, MCUs, embedded systems, etc. The main research content comprises the design of the angle-measurement module, the control and drive module, and the PC software. An MPU6050 is used for angle measurement, a PID control algorithm drives the flap to the target angle, and the PC software handles display, analysis, and processing.
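A discrete PID loop of the kind described can be sketched in a few lines. The gains, sample time, and first-order toy plant below are illustrative, not taken from the experimental system:

```python
# Discrete PID controller driving a toy first-order "flap" plant
# (angle_rate = u) to a target angle. All parameters are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt                 # accumulate I term
        deriv = (err - self.prev_err) / self.dt        # backward-difference D
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(2000):                 # 20 s of simulated time
    u = pid.update(30.0, angle)       # command the flap to 30 degrees
    angle += u * pid.dt               # integrate the toy plant
```

On the real rig the measured angle would come from the MPU6050 instead of the simulated plant, and u would drive the motor.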

  3. A 1-Dimensional Chaotic IC Designed by SI Techniques

    NASA Astrophysics Data System (ADS)

    Eguchi, Kei; Zhu, Hongbing; Tabata, Toru; Ueno, Fumio; Inoue, Takahiro

In this paper, a VLSI chip of a discrete-time chaos circuit realizing a tent map is reported. The VLSI chip was fabricated in the chip fabrication program of the VLSI Design and Education Center (VDEC). A simple structure enables the circuit to be realized with 10 MOSFETs and 2 capacitors. Furthermore, the circuit, which is designed by switched-current (SI) techniques, can operate from a 3 V power supply. Experiments on the VLSI chip show that the proposed circuit can be integrated in a standard 1.2 μm CMOS technology.
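The tent map realized by the circuit can be iterated in software for comparison with chip measurements. A minimal Python sketch with μ = 2, the fully chaotic case on [0, 1]:

```python
# Tent map: x_{n+1} = mu*x_n for x_n < 0.5, else mu*(1 - x_n).
# With mu = 2 the map is fully chaotic on the unit interval.

def tent(x, mu=2.0):
    return mu * x if x < 0.5 else mu * (1.0 - x)

x = 0.123            # arbitrary initial condition
orbit = []
for _ in range(10):
    x = tent(x)
    orbit.append(x)
```

The discrete-time chip computes exactly this recurrence, with the state held as a current sample rather than a floating-point number.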

  4. Translocations of amphibians: Proven management method or experimental technique

    USGS Publications Warehouse

    Seigel, Richard A.; Dodd, C. Kenneth, Jr.

    2002-01-01

In an otherwise excellent review of metapopulation dynamics in amphibians, Marsh and Trenham (2001) make the following provocative statements (emphasis added): If isolation effects occur primarily in highly disturbed habitats, species translocations may be necessary to promote local and regional population persistence. Because most amphibians lack parental care, they are prime candidates for egg and larval translocations. Indeed, translocations have already proven successful for several species of amphibians. Where populations are severely isolated, translocations into extinct subpopulations may be the best strategy to promote regional population persistence. We take issue with these statements for a number of reasons. First, the authors fail to cite much of the relevant literature on species translocations in general and for amphibians in particular. Second, to those unfamiliar with current research in amphibian conservation biology, these comments might suggest that translocations are a proven management method. This is not the case, at least in most instances where translocations have been evaluated for an appropriate period of time. Finally, the authors fail to point out some of the negative aspects of species translocation as a management method. We realize that Marsh and Trenham's paper was not concerned primarily with translocations. However, because Marsh and Trenham (2001) made specific recommendations for conservation planners and managers (many of whom are not herpetologists or may not be familiar with the pertinent literature on amphibians), we believe that it is essential to point out that not all amphibian biologists are as comfortable with translocations as these authors appear to be. We especially urge caution about advocating potentially unproven techniques without a thorough review of available options.

  5. Evaluation of CFD Turbulent Heating Prediction Techniques and Comparison With Hypersonic Experimental Data

    NASA Technical Reports Server (NTRS)

    Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)

    2001-01-01

Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid-converged turbulent predictions using the wall damping formulation (original model) and local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., Tw/Tt < 0.3, with the wall damping heating and skin friction 10-30% above the local damping results. Furthermore, the local damping predictions have reasonable or good agreement with the experimental heating data for all cases. The impact of the two formulations on the van Driest damping function and the turbulent eddy viscosity distribution for a cold wall case indicates the importance of including temperature gradient effects. Grid requirements for accurate turbulent heating predictions are also studied. These results indicate that a cell Reynolds number of 1 is required for grid-converged heating predictions, but coarser grids with y+ less than 2 are adequate for design of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.
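The damping function at issue enters the Baldwin-Lomax inner layer through the van Driest mixing length. A Python sketch of that ingredient (the constants are the customary κ = 0.4 and A+ = 26; the wall-versus-local distinction discussed above lies in which density and viscosity define y+, which is outside this sketch):

```python
import math

# Van Driest-damped mixing length used in the Baldwin-Lomax inner layer:
#   l = kappa * y * (1 - exp(-y+/A+)),  y+ = y * u_tau / nu.
# Constants are the standard published values.

KAPPA, A_PLUS = 0.4, 26.0

def mixing_length(y, u_tau, nu):
    """Inner-layer mixing length (m) at distance y from the wall."""
    y_plus = y * u_tau / nu
    damping = 1.0 - math.exp(-y_plus / A_PLUS)   # van Driest damping
    return KAPPA * y * damping
```

Far from the wall the damping factor approaches 1 and l → κy; very near the wall it suppresses the eddy viscosity, and the cold-wall sensitivity discussed above comes from how the fluid properties inside y+ are evaluated.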

  6. Photographic-assisted prosthetic design technique for the anterior teeth.

    PubMed

    Zaccaria, Massimiliano; Squadrito, Nino

    2015-01-01

The aim of this article is to propose a standardized protocol for treating all inesthetic anterior maxillary situations using a well-planned clinical and photographic technique. As inesthetic aspects should be treated as a pathology, instruments to make a diagnosis are necessary. The prosthetic design to resolve inesthetic aspects, in respect of the function, should be considered a therapy, and, as such, instruments to make a prognosis are necessary. A prospective study was conducted to compare the involvement of patients with regard to the alterations to be made, initially with only a graphic esthetic previsualization, and later with an intraoral functional and esthetic previsualization. Significantly different results were shown for the two techniques. The instruments and steps necessary for the intraoral functional and esthetic previsualization technique are explained in detail in this article. PMID:25625127

  7. Development of a complex experimental system for controlled ecological life support technique

    NASA Astrophysics Data System (ADS)

    Guo, S.; Tang, Y.; Zhu, J.; Wang, X.; Feng, H.; Ai, W.; Qin, L.; Deng, Y.

A complex experimental system for controlled ecological life support technique can be used as a test platform for plant-man integrated experiments and material closed-loop experiments of the controlled ecological life support system (CELSS). Based on extensive planning, investigation, scheme design, and drawing design, the system was built through the steps of processing, installation, and joint debugging. The system contains a volume of about 40.0 m3. Its interior atmospheric parameters, such as temperature, relative humidity, oxygen concentration, carbon dioxide concentration, total pressure, lighting intensity, photoperiod, water content in the growing matrix, and ethylene concentration, are all monitored and controlled automatically and effectively. Its growing system consists of two rows of racks along its left and right sides, each of which holds two (upper and lower) layers; eight growing beds provide a total area of about 8.4 m2, and their vertical spacing can be adjusted automatically and independently. Lighting sources consist of both red and blue light-emitting diodes. Successful development of the test platform will create an essential condition for the next large-scale integrated study of controlled ecological life support technique.

  8. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  9. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an interferometer with an 8-meter boom that will operate in the far infrared (FIR, 30-90 microns) on a high-altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. For BETTII to be successful, the gondola must be designed carefully to provide a high level of stability, with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  10. Experimental techniques for evaluating steady-state jet engine performance in an altitude facility

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Young, C. Y.; Antl, R. J.

    1971-01-01

    Jet engine calibration tests were conducted in an altitude facility using a contoured bellmouth inlet duct, four fixed-area water-cooled exhaust nozzles, and an accurately calibrated thrust measuring system. Accurate determination of the airflow measuring station flow coefficient, the flow and thrust coefficients of the exhaust nozzles, and the experimental and theoretical terms in the nozzle gross thrust equation were among the objectives of the tests. A primary objective was to develop a technique for determining gross thrust for the turbojet engine used in this test that could also be used for future engine and nozzle evaluation tests. The probable error in airflow measurement was found to be approximately 0.6 percent at the bellmouth throat design Mach number of 0.6. The probable error in nozzle gross thrust measurement was approximately 0.6 percent at the load cell full-scale reading.

  11. Optimal active vibration absorber - Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1993-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.
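
    The frequency-matched approach described above can be illustrated with a minimal numerical sketch: a second-order absorber tuned to the excitation frequency is attached to a single-mode structure, and the steady-state amplitude of the primary mass is compared with and without the absorber. All parameter values below are invented for illustration and are not taken from the CSI Evolutionary Model.

```python
import numpy as np

# Invented parameters: single-mode primary structure plus a tuned absorber.
m1, k1, c1 = 10.0, 4000.0, 4.0   # primary mass [kg], stiffness [N/m], damping [N s/m]
w = 25.0                         # excitation frequency [rad/s]
m2 = 0.05 * m1                   # absorber mass (5% mass ratio)
k2 = m2 * w**2                   # frequency matched: sqrt(k2/m2) equals w
c2 = 0.1                         # light absorber damping
F = 1.0                          # harmonic force amplitude [N]

def primary_amplitude(with_absorber):
    if not with_absorber:
        return abs(F / (k1 - m1 * w**2 + 1j * c1 * w))
    # Complex dynamic-stiffness matrix of the 2-DOF system at frequency w.
    D = np.array([
        [k1 + k2 - m1 * w**2 + 1j * (c1 + c2) * w, -(k2 + 1j * c2 * w)],
        [-(k2 + 1j * c2 * w), k2 - m2 * w**2 + 1j * c2 * w],
    ])
    return abs(np.linalg.solve(D, np.array([F, 0.0]))[0])

x_bare = primary_amplitude(False)
x_abs = primary_amplitude(True)
print(f"primary amplitude: {x_bare:.2e} m bare, {x_abs:.2e} m with tuned absorber")
```

    With the absorber tuned to the excitation frequency, the primary response sits near the anti-resonance and falls well below the bare-structure amplitude.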

  12. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    SciTech Connect

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coils, the shielding, the cooling requirements, and the heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  13. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  14. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team with support from the Parties' four Home Teams. An overview of ITER design activities is presented.

  15. Wireless Body Area Network (WBAN) design techniques and performance evaluation.

    PubMed

    Khan, Jamil Yusuf; Yuce, Mehmet R; Bulger, Garrick; Harding, Benjamin

    2012-06-01

    In recent years, interest in the application of Wireless Body Area Networks (WBANs) for patient monitoring applications has grown significantly. A WBAN can be used to develop patient monitoring systems which offer flexibility to medical staff and mobility to patients. Patient monitoring could involve a range of activities including data collection from various body sensors for storage and diagnosis, transmitting data to remote medical databases, and controlling medical appliances. WBANs could also operate in an interconnected mode to enable remote patient monitoring using telehealth/e-health applications. A WBAN can also be used to monitor athletes' performance and assist them in training activities. For such applications it is very important that a WBAN collects and transmits data reliably and in a timely manner to a monitoring entity. In order to address these issues, this paper presents WBAN design techniques for medical applications. We examine the WBAN design issues with particular emphasis on the design of MAC protocols and power consumption profiles of WBANs. Simulation results are presented to further illustrate the performance of various WBAN design techniques. PMID:20953680

  16. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236
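
    The difficulty and discrimination metrics mentioned above are standard classical-test-theory quantities: difficulty is the proportion of correct responses per item, and discrimination is the correlation of an item score with the rest-score. A minimal sketch on synthetic data (the logistic generating model and every number here are invented, not BEDCI data):

```python
import numpy as np

# Synthetic response matrix (students x items); 1 = correct answer.
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
item_difficulty = np.array([-1.0, 0.0, 0.5, 1.5])
p = 1 / (1 + np.exp(-(ability[:, None] - item_difficulty[None, :])))
X = (rng.random(p.shape) < p).astype(int)

# Classical difficulty: proportion of correct responses (higher = easier item).
difficulty = X.mean(axis=0)

# Classical discrimination: correlation of each item with the rest-score.
total = X.sum(axis=1)
discrimination = np.array([
    np.corrcoef(X[:, j], total - X[:, j])[0, 1] for j in range(X.shape[1])
])
print("difficulty:", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```

    Items generated as easier (lower true difficulty) come out with a higher proportion correct, and items that track the underlying ability show positive item-rest correlations.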

  17. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  18. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of required measurements and subjects to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem so far prevents widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation. PMID:25155192
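
    The D-optimality idea described above can be made concrete for a four-parameter log-logistic model: the information matrix is assembled from gradients of the response function at the design's dose levels, and designs are ranked by its determinant. The nominal parameter values and the two candidate designs below are invented for illustration; the paper's actual optimal designs come from optimal design theory, not from simple enumeration like this.

```python
import numpy as np

theta0 = np.array([1.5, 0.1, 1.0, 10.0])  # slope b, lower c, upper d, ED50 e (invented)

def f(x, th):
    b, c, d, e = th
    return c + (d - c) / (1.0 + (x / e) ** b)

def grad(x, th, h=1e-6):
    # Central-difference gradient of the response w.r.t. the four parameters.
    g = np.zeros(4)
    for i in range(4):
        tp, tm = th.copy(), th.copy()
        tp[i] += h
        tm[i] -= h
        g[i] = (f(x, tp) - f(x, tm)) / (2 * h)
    return g

def d_criterion(doses):
    # Normalized information matrix for equal allocation; D-criterion = det(M).
    w = 1.0 / len(doses)
    M = sum(w * np.outer(grad(x, theta0), grad(x, theta0)) for x in doses)
    return np.linalg.det(M)

det_trad = d_criterion([0.0, 1.25, 2.5, 5.0, 10.0, 20.0, 40.0, 80.0])  # control + geometric series
det_comp = d_criterion([0.0, 4.0, 10.0, 25.0])                          # control + three doses
print(f"D-criterion, control + 7 doses: {det_trad:.3e}")
print(f"D-criterion, control + 3 doses: {det_comp:.3e}")
```

    Both designs yield a full-rank information matrix; a control plus three well-placed dose levels is the minimal support a four-parameter model allows, which is why the paper's optimal designs take that form.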

  19. Validation of erythromycin microbiological assay using an alternative experimental design.

    PubMed

    Lourenço, Felipe Rebello; Kaneko, Telma Mary; Pinto, Terezinha de Jesus Andreoli

    2007-01-01

    The agar diffusion method, widely used in antibiotic dosage, relates the diameter of the inhibition zone to the dose of the substance assayed. An experimental plan is proposed that may provide better results and an indication of the assay validity. The symmetric or balanced assays (2 x 2) as well as those with interpolation in standard curve (5 x 1) are the main designs used in the dosage of antibiotics. This study proposes an alternative experimental design for erythromycin microbiological assay with the evaluation of the validation parameters of the method referring to linearity, precision, and accuracy. The design proposed (3 x 1) uses 3 doses of standard and 1 dose of sample applied in a unique plate, aggregating the characteristics of the 2 x 2 and 5 x 1 assays. The method was validated for erythromycin microbiological assay through agar diffusion, revealing its adequacy to linearity, precision, and accuracy standards. Likewise, the statistical methods used demonstrated their accordance with the method concerning the parameters evaluated. The 3 x 1 design proved to be adequate for the dosage of erythromycin and thus a good alternative for erythromycin assay. PMID:17760348
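
    Computationally, the 3 x 1 layout described above reduces to fitting the standard log-dose versus zone-diameter line from three standard doses and interpolating the sample. A minimal sketch with invented plate readings (not data from the paper):

```python
import numpy as np

# One hypothetical 3 x 1 plate: three standard doses and one sample zone.
std_doses = np.array([2.0, 4.0, 8.0])      # standard doses [IU/mL], geometric series
std_zones = np.array([16.2, 18.9, 21.8])   # inhibition-zone diameters [mm]
sample_zone = 19.5                         # observed zone for the sample [mm]

# Agar diffusion is modeled as linear in log-dose: zone = a + b * ln(dose).
b, a = np.polyfit(np.log(std_doses), std_zones, 1)

# Interpolate the sample potency from the standard curve.
sample_dose = np.exp((sample_zone - a) / b)
print(f"slope: {b:.2f} mm per ln(dose unit); estimated potency: {sample_dose:.2f} IU/mL")
```

    The three standard doses fix the slope and intercept of the curve, and the single sample response is read off by interpolation, mirroring how the 3 x 1 design combines the 2 x 2 and 5 x 1 characteristics on one plate.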

  20. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697
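
    Adaptive design optimization of the kind extended in this letter can be sketched, in its simplest non-hierarchical grid-Bayes form, as choosing each next stimulus to minimize the expected posterior entropy. The psychometric model, grids, and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(-2, 2, 81)       # candidate threshold values (grid prior)
designs = np.linspace(-2, 2, 41)      # candidate stimulus intensities
prior = np.full(thetas.size, 1.0 / thetas.size)
true_theta = 0.7                      # hidden parameter of the simulated observer

def p_correct(x, th):
    return 1 / (1 + np.exp(-4.0 * (x - th)))   # logistic model, fixed slope

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for trial in range(30):
    # Pick the stimulus minimizing the expected posterior entropy
    # (equivalently, maximizing the expected information gain).
    scores = []
    for x in designs:
        lik1 = p_correct(x, thetas)
        m1 = np.sum(prior * lik1)                   # predictive P(correct)
        post1 = prior * lik1 / m1
        post0 = prior * (1 - lik1) / (1 - m1)
        scores.append(m1 * entropy(post1) + (1 - m1) * entropy(post0))
    x_star = designs[int(np.argmin(scores))]
    # Simulate the observer's response and update the posterior.
    y = rng.random() < p_correct(x_star, true_theta)
    lik = p_correct(x_star, thetas) if y else 1 - p_correct(x_star, thetas)
    prior = prior * lik
    prior /= prior.sum()

estimate = np.sum(prior * thetas)
print(f"posterior mean threshold after 30 adaptive trials: {estimate:.2f}")
```

    The hierarchical extension in the letter adds a second level of inference over past participants' data; this sketch shows only the single-subject trial-by-trial loop.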

  1. Design and experimental validation of looped-tube thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Abduljalil, Abdulrahman S.; Yu, Zhibin; Jaworski, Artur J.

    2011-10-01

    The aim of this paper is to present the design and experimental validation process for a thermoacoustic looped-tube engine. The design procedure consists of numerical modelling of the system using the DeltaEC tool (Design Environment for Low-amplitude ThermoAcoustic Energy Conversion), in particular of the effects of mean pressure and regenerator configuration on the pressure amplitude and acoustic power generated. This is followed by the construction of a practical engine system equipped with a ceramic regenerator, a substrate used in automotive catalytic converters with fine square channels. Preliminary test results are obtained and compared with the simulations in detail. The measurements agree very well with the simulations qualitatively and are reasonably close quantitatively.

  2. Techniques for Conducting Effective Concept Design and Design-to-Cost Trade Studies

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2015-01-01

    Concept design plays a central role in project success as its product effectively locks the majority of system life cycle cost. Such extraordinary leverage presents a business case for conducting concept design in a credible fashion, particularly for first-of-a-kind systems that advance the state of the art and that have high design uncertainty. A key challenge, however, is to know when credible design convergence has been achieved in such systems. Using a space system example, this paper characterizes the level of convergence needed for concept design in the context of technical and programmatic resource margins available in preliminary design and highlights the importance of design and cost evaluation learning curves in determining credible convergence. It also provides techniques for selecting trade study cases that promote objective concept evaluation, help reveal unknowns, and expedite convergence within the trade space and conveys general practices for conducting effective concept design-to-cost studies.

  3. A Course in Heterogeneous Catalysis: Principles, Practice, and Modern Experimental Techniques.

    ERIC Educational Resources Information Center

    Wolf, Eduardo E.

    1981-01-01

    Outlines a multidisciplinary course which comprises fundamental, practical, and experimental aspects of heterogeneous catalysis. The course structure is a combination of lectures and demonstrations dealing with the use of spectroscopic techniques for surface analysis. (SK)

  4. Computational design and experimental verification of a symmetric protein homodimer.

    PubMed

    Mou, Yun; Huang, Po-Ssu; Hsu, Fang-Ciao; Huang, Shing-Jong; Mayo, Stephen L

    2015-08-25

    Homodimers are the most common type of protein assembly in nature and have distinct features compared with heterodimers and higher order oligomers. Understanding homodimer interactions at the atomic level is critical both for elucidating their biological mechanisms of action and for accurate modeling of complexes of unknown structure. Computation-based design of novel protein-protein interfaces can serve as a bottom-up method to further our understanding of protein interactions. Previous studies have demonstrated that the de novo design of homodimers can be achieved to atomic-level accuracy by β-strand assembly or through metal-mediated interactions. Here, we report the design and experimental characterization of a α-helix-mediated homodimer with C2 symmetry based on a monomeric Drosophila engrailed homeodomain scaffold. A solution NMR structure shows that the homodimer exhibits parallel helical packing similar to the design model. Because the mutations leading to dimer formation resulted in poor thermostability of the system, design success was facilitated by the introduction of independent thermostabilizing mutations into the scaffold. This two-step design approach, function and stabilization, is likely to be generally applicable, especially if the desired scaffold is of low thermostability. PMID:26269568

  5. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.

  6. Study of automatic designing of line heating technique parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Jun; Guo, Pei-Jun; Deng, Yan-Ping; Ji, Zhuo-Shang; Wang, Ji; Zhou, Bo; Yang, Hong; Zhao, Pi-Dong

    2006-03-01

    Based on experimental data of line heating, the methods of vector mapping, plane projection, and coordinate conversion are presented to establish spectra of line heating distortion, which show the relationship between the process parameters and the distortion parameters of line heating. A back-propagation network (BP-net) is used to refine the spectra. Mathematical models for optimizing line heating technique parameters, which include two objective functions, are constructed. To convert the multi-objective optimization into a single-objective one, the method of varying weight coefficients is used, and the individual fitness function is then built up. Taking the number of heating lines, the spacing between heating lines, and the shrinkage of each line as three constraints, a hierarchy genetic algorithm (HGA) code is established that makes use of information provided by the spectra, in which the inner and outer codings adopt different genetic operators. The numerical example shows that the spectra of line heating distortion presented here provide accurate information for technique parameter prediction of the line heating process, and that the technique parameter optimization method based on the HGA obtains good results for hull plates.
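
    The weight-coefficient conversion of a two-objective problem into a single fitness can be sketched with a plain genetic algorithm on a toy problem. The paper's hierarchy GA, spectra lookup, and line-heating objectives are not reproduced here; both objective functions and all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def f1(x):  # e.g., deviation from the target deformation (invented)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def f2(x):  # e.g., processing cost (invented)
    return x[0] ** 2 + x[1] ** 2

w1, w2 = 0.7, 0.3  # weight coefficients chosen by the designer

def fitness(x):
    # Weighted sum collapses the two objectives into one scalar fitness.
    return w1 * f1(x) + w2 * f2(x)

pop = rng.uniform(-5, 5, size=(40, 2))
for gen in range(100):
    order = np.argsort([fitness(x) for x in pop])
    parents = pop[order[:20]]                           # truncation selection
    pop = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.2, (40, 2))
    pop[0] = parents[0]                                 # elitism: keep the best
best = min(pop, key=fitness)
# The weighted-sum optimum is analytic here: x* = (w1 / (w1 + w2)) * (1, 2) = (0.7, 1.4).
print("GA best:", np.round(best, 2), "analytic optimum:", (0.7, 1.4))
```

    Sweeping the weights (w1, w2) traces out different compromise solutions, which is the role the varying weight coefficients play in the paper's formulation.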

  7. Amplified energy harvester from footsteps: design, modeling, and experimental analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Chen, Wusi; Guzman, Plinio; Zuo, Lei

    2014-04-01

    This paper presents the design, modeling, and experimental analysis of an amplified footstep energy harvester. With the unique design of the amplified piezoelectric stack harvester, the kinetic energy generated by footsteps can be effectively captured and converted into usable DC power that could potentially be used to power many electronic devices, such as smart phones, sensors, and monitoring cameras. This doormat-like energy harvester can be used in crowded places such as train stations, malls, concerts, and airport escalator/elevator/stairs entrances, or anywhere large groups of people walk. The harvested energy provides an alternative renewable green power source that reduces the demand on grid power generated from highly polluting, global-warming-inducing fossil fuels. In this paper, two modeling approaches are compared to calculate the power output. The first method starts from the single-degree-of-freedom (SDOF) constitutive equations and applies a correction factor to the resulting electromechanically coupled equations of motion. The second approach derives the coupled equations of motion from Hamilton's principle and the constitutive equations, and then formulates them with the finite element method (FEM). Experimental results are presented to validate the modeling approaches. Simulation results from both approaches agree very well with the experiments, with percentage errors of 2.09% for the FEM and 4.31% for the SDOF model.
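
    The first (SDOF) modeling route can be sketched in the frequency domain: the coupled mechanical and electrical equations are solved for harmonic excitation, and the average power delivered to a resistive load is evaluated across frequency. The equations are the standard coupled-SDOF harvester model; all parameter values are invented, and the paper's correction factor is omitted:

```python
import numpy as np

# Standard coupled-SDOF harvester equations solved for harmonic excitation:
#   m x'' + c x' + k x + theta*v = F0 e^{iwt},   Cp v' + v/R = theta x'
m, c, k = 0.05, 0.2, 5.0e3   # mass [kg], damping [N s/m], stiffness [N/m]
theta = 1.0e-3               # electromechanical coupling [N/V]
Cp, R = 50e-9, 100e3         # piezo capacitance [F], load resistance [ohm]
F0 = 1.0                     # force amplitude [N]

def avg_power(w):
    Ye = 1.0 / R + 1j * w * Cp               # electrical admittance
    Zm = k - m * w**2 + 1j * c * w           # mechanical dynamic stiffness
    X = F0 / (Zm + 1j * w * theta**2 / Ye)   # displacement amplitude
    V = 1j * w * theta * X / Ye              # voltage amplitude across the load
    return 0.5 * abs(V) ** 2 / R             # cycle-averaged power into R

ws = np.linspace(100, 500, 401)
powers = np.array([avg_power(w) for w in ws])
w_peak = ws[np.argmax(powers)]
print(f"peak power at {w_peak:.0f} rad/s; "
      f"short-circuit natural frequency {np.sqrt(k / m):.0f} rad/s")
```

    The power peaks near the mechanical resonance, slightly shifted by the electromechanical coupling; a load-resistance sweep at the peak frequency would show the usual impedance-matching optimum.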

  8. Theoretical and experimental evaluation of broadband decoupling techniques for in vivo nuclear magnetic resonance spectroscopy.

    PubMed

    de Graaf, Robin A

    2005-06-01

    A theoretical and experimental evaluation of existing broadband decoupling methods with respect to their utility for in vivo (1)H-(13)C NMR spectroscopy is presented. Simulations are based on a modified product operator formalism, while an experimental evaluation is performed on in vitro samples and human leg and rat brain in vivo. The performance of broadband decoupling methods was evaluated with respect to the required peak and average RF powers, decoupling bandwidth, decoupling side bands, heteronuclear scalar coupling constant, and sensitivity toward B(2) inhomogeneity. In human applications only the WALTZ and MLEV decoupling methods provide adequate decoupling performance at RF power levels that satisfy the FDA guidelines on local tissue heating. For very low RF power levels (B(2max) < 300 Hz) one should verify empirically whether the experiment will benefit from broadband decoupling. At higher RF power levels acceptable for animal studies additional decoupling techniques become available and provide superior performance. Since the average RF power of adiabatic RF pulses is almost always significantly lower than the peak RF power, it can be stated that for average RF powers suitable for animal studies it is always possible to design an adiabatic decoupling scheme that outperforms all other schemes. B(2) inhomogeneity degrades the decoupling performance of all methods, but the decoupling bandwidths for WALTZ-16 and especially adiabatic methods are still satisfactory for useful in vivo decoupling with a surface coil. PMID:15906279

  9. Design of vibration isolation systems using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
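
    The global criterion method, first in the list above, can be sketched on a reduced one-variable isolator problem: each objective is first minimized individually to obtain the ideal point, then the design minimizing the summed squared relative deviation from that point is selected. The objectives and ranges below are invented and far simpler than the paper's three-degrees-of-freedom system:

```python
import numpy as np

# One design variable (isolator damping ratio zeta); two competing objectives.
zetas = np.linspace(0.02, 1.0, 500)
rs = np.linspace(0.1, 5.0, 1000)   # frequency ratios scanned for the peak

def transmissibility(r, z):
    return np.sqrt((1 + (2 * z * r) ** 2) / ((1 - r**2) ** 2 + (2 * z * r) ** 2))

# Objective 1: peak transmissibility over frequency (favors heavy damping).
f1 = np.array([transmissibility(rs, z).max() for z in zetas])
# Objective 2: transmissibility at a high frequency ratio (favors light damping).
f2 = transmissibility(3.0, zetas)

f1_star, f2_star = f1.min(), f2.min()   # ideal point from the individual optima
# Global criterion: summed squared relative deviation from the ideal point.
F = ((f1 - f1_star) / f1_star) ** 2 + ((f2 - f2_star) / f2_star) ** 2
zeta_opt = zetas[int(np.argmin(F))]
print(f"compromise damping ratio: {zeta_opt:.2f}")
```

    The compromise lands at an intermediate damping ratio, between the light damping that favors high-frequency isolation and the heavy damping that suppresses the resonant peak.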

  10. LeRC rail accelerators - Test designs and diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Zana, L. M.; Kerslake, W. R.; Sturman, J. C.; Wang, S. Y.; Terdan, F. F.

    1984-01-01

    The feasibility of using rail accelerators for various in-space and to-space propulsion applications was investigated. A 1 meter, 24 sq mm bore accelerator was designed with the goal of demonstrating projectile velocities of 15 km/sec using a peak current of 200 kA. A second rail accelerator, 1 meter long with a 156.25 sq mm bore, was designed with clear polycarbonate sidewalls to permit visual observation of the plasma arc. A study of available diagnostic techniques and their application to the rail accelerator is presented. Specific topics of discussion include the use of interferometry and spectroscopy to examine the plasma armature as well as the use of optical sensors to measure rail displacement during acceleration. Standard diagnostics such as current and voltage measurements are also discussed. Previously announced in STAR as N83-35053

  11. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  12. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DEFOA- 0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  13. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  14. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  15. Optimization of experimental designs by incorporating NIF facility impacts

    NASA Astrophysics Data System (ADS)

    Eder, D. C.; Whitman, P. K.; Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T.; Parham, T. G.; Koerner, J. G.; Dixit, S. N.; Suratwala, T. I.; Blue, B. E.; Hansen, J. F.; Tobin, M. T.; Robey, H. F.; Spaeth, M. L.; MacGowan, B. J.

    2006-06-01

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, fast moving fragments. We have experimentally demonstrated one shrapnel mitigation technique showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to assure that all NIF experimental campaigns meet the requirements on allowed level of debris and shrapnel generation.

  16. Design considerations for ITER (International Thermonuclear Experimental Reactor) magnet systems

    SciTech Connect

    Henning, C.D.; Miller, J.R.

    1988-10-09

    The International Thermonuclear Experimental Reactor (ITER) is now completing a definition phase as the beginning of a three-year design effort. Preliminary parameters for the superconducting magnet system have been established to guide further and more detailed design work. Radiation tolerance of the superconductors and insulators has been of prime importance, since it sets requirements for the neutron-shield dimension and sensitively influences reactor size. The major levels of mechanical stress in the structure appear in the cases of the inboard legs of the toroidal-field (TF) coils. The cases of the poloidal-field (PF) coils must be made thin or segmented to minimize eddy-current heating during inductive plasma operation. As a result, the winding packs of both the TF and PF coils include significant fractions of steel. The TF winding pack provides support against in-plane separating loads but offers little support against out-of-plane loads, unless shear-bonding of the conductors can be maintained. The removal of heat due to nuclear and ac loads has not been a fundamental limit to design, but it certainly has non-negligible economic consequences. We present here preliminary ITER magnet system design parameters taken from trade studies, designs, and analyses performed by the Home Teams of the four ITER participants, by the ITER Magnet Design Unit in Garching, and by other participants at workshops organized by the Magnet Design Unit. The work presented here reflects the efforts of many, but the responsibility for the opinions expressed is the authors'. 4 refs., 3 figs., 4 tabs.

  17. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2015-11-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rest on the unproven hypothesis that the inorganic ingredients are responsible for the healing effects of bathing. A change of paradigm is needed in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  18. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rest on the unproven hypothesis that the inorganic ingredients are responsible for the healing effects of bathing. A change of paradigm is needed in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  19. Determining the extent of coarticulation: effects of experimental design.

    PubMed

    Gelfer, C E; Bell-Berti, F; Harris, K S

    1989-12-01

    The purpose of this letter is to explore some reasons for what appear to be conflicting reports regarding the nature and extent of anticipatory coarticulation, in general, and anticipatory lip rounding, in particular. Analyses of labial electromyographic and kinematic data using a minimal-pair paradigm allowed for the differentiation of consonantal and vocalic effects, supporting a frame versus a feature-spreading model of coarticulation. It is believed that the apparent conflicts of previous studies of anticipatory coarticulation might be resolved if experimental design made more use of contrastive minimal pairs and relied less on assumptions about feature specifications of phones. PMID:2600314

  20. On the proper study design applicable to experimental balneology.

    PubMed

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rest on the unproven hypothesis that the inorganic ingredients are responsible for the healing effects of bathing. A change of paradigm is needed in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed. PMID:26597677

  1. Design optimum frac jobs using virtual intelligence techniques

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data include wellbore configuration and reservoir characteristics such as porosity, permeability, stress, and thickness profiles of the pay layers as well as the overburden layers. Among other essential information required for the design process are fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration, and frac job schedule. Some of the parameters, such as fluid and proppant types, have discrete possible choices. Other parameters, such as fluid and proppant volume, on the other hand, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve a desirable outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterizations and wellbore configuration. The software tool that has been developed based on this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will satisfy that criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These
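    The surrogate-plus-genetic-algorithm loop described in this record can be sketched in a few lines. The sketch below is purely illustrative, not the authors' tool: `surrogate_frac_length`, the bounds, and the target propped frac length are all invented stand-ins for a trained neural-network ensemble and real reservoir data.

    ```python
    import math
    import random

    # Hypothetical stand-in for the trained neural-network ensemble: maps a
    # candidate design (fluid volume, proppant mass) to predicted frac length.
    def surrogate_frac_length(fluid, proppant):
        return (100 * (1 - math.exp(-fluid / 8000))
                + 60 * (1 - math.exp(-proppant / 80000)))

    BOUNDS = {"fluid": (1000.0, 20000.0), "proppant": (10000.0, 200000.0)}
    TARGET_LENGTH = 120.0  # engineer-specified criterion (illustrative)

    def fitness(ind):
        # Negative deviation from the target criterion; the GA maximizes this.
        return -abs(surrogate_frac_length(ind["fluid"], ind["proppant"]) - TARGET_LENGTH)

    def mutate(ind, rng):
        # Perturb one design parameter, clipped back into its allowed range.
        child = dict(ind)
        key = rng.choice(list(BOUNDS))
        lo, hi = BOUNDS[key]
        child[key] = min(hi, max(lo, child[key] + rng.gauss(0, 0.1 * (hi - lo))))
        return child

    def crossover(a, b, rng):
        # Uniform crossover: each parameter inherited from either parent.
        return {k: rng.choice((a[k], b[k])) for k in BOUNDS}

    def optimize(pop_size=30, generations=60, seed=1):
        rng = random.Random(seed)
        pop = [{k: rng.uniform(*BOUNDS[k]) for k in BOUNDS} for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 3]  # keep the best third
            pop = elite + [mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
                           for _ in range(pop_size - len(elite))]
        return max(pop, key=fitness)
    ```

    Because the surrogate is cheap to evaluate, the GA can afford thousands of trial designs, which is exactly what makes replacing the full frac simulator with a trained network attractive.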

  2. Structural design and fabrication techniques of composite unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hunt, Daniel Stephen

    Popularity of unmanned aerial vehicles has grown substantially in recent years both in the private sector, as well as for government functions. This growth can be attributed largely to the increased performance of the technology that controls these vehicles, as well as decreasing cost and size of this technology. What is sometimes forgotten though, is that the research and advancement of the airframes themselves are equally as important as what is done with them. With current computer-aided design programs, the limits of design optimization can be pushed further than ever before, resulting in lighter and faster airframes that can achieve longer endurances, higher altitudes, and more complex missions. However, realization of a paper design is still limited by the physical restrictions of the real world and the structural constraints associated with it. The purpose of this paper is to not only step through current design and manufacturing processes of composite UAVs at Oklahoma State University, but to also focus on composite spars, utilizing and relating both calculated and empirical data. Most of the experience gained for this thesis was from the Cessna Longitude project. The Longitude is a 1/8 scale, flying demonstrator Oklahoma State University constructed for Cessna. For the project, Cessna required dynamic flight data for their design process in order to make their 2017 release date. Oklahoma State University was privileged enough to assist Cessna with the mission of supporting the validation of design of their largest business jet to date. This paper will detail the steps of the fabrication process used in construction of the Longitude, as well as several other projects, beginning with structural design, machining, molding, skin layup, and ending with final assembly. Also, attention will be paid specifically towards spar design and testing in effort to ease the design phase. This document is intended to act not only as a further development of current

  3. Experimental verification of photon angular momentum and vorticity with radio techniques

    NASA Astrophysics Data System (ADS)

    Tamburini, Fabrizio; Mari, Elettra; Thidé, Bo; Barbieri, Cesare; Romanato, Filippo

    2011-11-01

    The experimental evidence that radio techniques can be used for synthesizing and analyzing non-integer electromagnetic (EM) orbital angular momentum (OAM) of radiation is presented. The technique used amounts to sampling, in space and time, the EM field vectors and digitally processing the data to calculate the vortex structure, the spatial phase distribution, and the OAM spectrum of the radiation. The experimental verification that OAM-carrying beams can be readily generated and exploited by using radio techniques paves the way to an entirely new paradigm of radar and radio communication protocols.

  4. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and the capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs, by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers.
It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  5. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
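    One of the techniques named in this record, numerical derivatives with complex variables (the "complex-step" method), is compact enough to demonstrate directly. The cost function below is an invented stand-in for the fuel-cell objective; the point is the contrast between the complex-step derivative and an ordinary central difference.

    ```python
    import cmath
    import math

    def cost(x):
        # Toy scalar cost function standing in for the fuel-cell objective.
        return cmath.exp(x) * cmath.sin(x)

    def complex_step(f, x, h=1e-20):
        # f'(x) ≈ Im f(x + ih) / h. There is no subtraction of nearly equal
        # numbers, so h can be made extremely small without round-off loss.
        return f(complex(x, h)).imag / h

    def central_diff(f, x, h=1e-6):
        # Conventional central difference for comparison; its accuracy is
        # limited by the truncation-vs-round-off trade-off in choosing h.
        return (f(x + h).real - f(x - h).real) / (2 * h)

    x0 = 0.7
    exact = math.exp(x0) * (math.sin(x0) + math.cos(x0))  # analytic derivative
    ```

    The complex-step result matches the analytic derivative to near machine precision, which is why it pairs well with the direct-differentiation and adjoint sensitivity analyses the dissertation compares.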

  6. Design preferences and cognitive styles: experimentation by automated website synthesis

    PubMed Central

    2012-01-01

    Background This article aims to demonstrate computational synthesis of Web-based experiments in undertaking experimentation on relationships among the participants' design preference, rationale, and cognitive test performance. The exemplified experiments were computationally synthesised, including the websites as materials, experiment protocols as methods, and cognitive tests as protocol modules. This work also exemplifies the use of a website synthesiser as an essential instrument enabling the participants to explore different possible designs, which were generated on the fly, before selection of preferred designs. Methods The participants were given interactive tree and table generators so that they could explore some different ways of presenting causality information in tables and trees as the visualisation formats. The participants gave their preference ratings for the available designs, as well as their rationale (criteria) for their design decisions. The participants were also asked to take four cognitive tests, which focus on the aspects of visualisation and analogy-making. The relationships among preference ratings, rationale, and the results of cognitive tests were analysed by conservative non-parametric statistics including the Wilcoxon test, the Kruskal-Wallis test, and Kendall correlation. Results In the test, 41 of the total 64 participants preferred graphical (tree-form) to tabular presentation. Despite the popular preference for graphical presentation, the given tabular presentation was generally rated to be easier than graphical presentation to interpret, especially by those who scored lower in the visualisation and analogy-making tests. Conclusions This piece of evidence helps generate a hypothesis that design preferences are related to specific cognitive abilities. Without the use of computational synthesis, the experiment setup and scientific results would be impractical to obtain. PMID:22748000
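    Of the non-parametric statistics listed in this record, the Kendall rank correlation is simple enough to compute from first principles. The sketch below implements the tie-free tau-a variant (not necessarily the exact variant the study used) and would be applied to made-up vectors such as preference ratings versus cognitive-test scores.

    ```python
    from itertools import combinations

    def kendall_tau(x, y):
        # Kendall's tau-a: (concordant - discordant pairs) / total pairs,
        # with no correction for ties.
        assert len(x) == len(y)
        concordant = discordant = 0
        for i, j in combinations(range(len(x)), 2):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
        n = len(x)
        return (concordant - discordant) / (n * (n - 1) / 2)
    ```

    A value near +1 means the two rankings agree almost everywhere, near -1 that they are reversed, which is the kind of monotone association the study tested between preferences and test performance.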

  7. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates for the first time the combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and by mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, wherein the ANN model exhibited better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged. PMID:26065534
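    The defining constraint of a mixture design such as the one above is that the component proportions sum to one. As a minimal illustration (a simplex-lattice grid, a simpler relative of the d-optimal design actually used in the paper), candidate ternary compositions can be enumerated as:

    ```python
    from itertools import product

    def simplex_lattice(n_components, m):
        # {n, m} simplex-lattice mixture design: every composition whose
        # proportions are multiples of 1/m and sum exactly to 1.
        points = []
        for combo in product(range(m + 1), repeat=n_components):
            if sum(combo) == m:
                points.append(tuple(c / m for c in combo))
        return points

    # Ternary design (e.g. drug / polymer A / polymer B) on a step of 1/2.
    ternary = simplex_lattice(3, 2)
    ```

    A d-optimal algorithm would then select the subset of such candidate points that maximizes the information content of the fitted mixture model; the lattice here only shows how the sum-to-one constraint shapes the design space.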

  8. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
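    The pair-search idea behind prediction deviation can be illustrated with a one-parameter toy model. Everything below is invented for illustration: a single-rate exponential decay with synthetic data, and a brute-force grid standing in for the paper's optimization over full dynamical models.

    ```python
    import math

    # Toy model standing in for the dynamical system: y(t) = exp(-k * t).
    def predict(k, t):
        return math.exp(-k * t)

    t_obs = [1.0, 2.0]
    y_obs = [math.exp(-0.5 * t) for t in t_obs]  # synthetic data, true k = 0.5

    def sse(k):
        # Sum of squared errors of a candidate rate against the observations.
        return sum((predict(k, t) - y) ** 2 for t, y in zip(t_obs, y_obs))

    def prediction_deviation(t_new, fit_tol=1e-3, grid=None):
        # Among all rates that fit the data acceptably well, find the pair
        # whose predictions at t_new differ the most; return that spread.
        grid = grid or [i / 1000 for i in range(1, 2001)]  # k in (0, 2]
        acceptable = [k for k in grid if sse(k) <= fit_tol]
        preds = [predict(k, t_new) for k in acceptable]
        return max(preds) - min(preds)
    ```

    A large spread at some `t_new` means the data have not yet constrained the model's prediction there, which is exactly where the paper's method would recommend the next experiment.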

  9. Experimental Vertical Stability Studies for ITER Performance and Design Guidance

    SciTech Connect

    Humphreys, D A; Casper, T A; Eidietis, N; Ferrera, M; Gates, D A; Hutchinson, I H; Jackson, G L; Kolemen, E; Leuer, J A; Lister, J; LoDestro, L L; Meyer, W H; Pearlstein, L D; Sartori, F; Walker, M L; Welander, A S; Wolfe, S M

    2008-10-13

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modeling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses some of the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be very severe. The present work describes results of multi-machine studies performed under a joint ITPA experiment on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV, and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement which can challenge the vertical control loop are assessed and analyzed.

  10. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  11. Reduction of animal use: experimental design and quality of experiments.

    PubMed

    Festing, M F

    1994-07-01

    Poorly designed and analysed experiments can lead to a waste of scientific resources, and may even reach the wrong conclusions. Surveys of published papers by a number of authors have shown that many experiments are poorly analysed statistically, and one survey suggested that about a third of experiments may be unnecessarily large. Few toxicologists attempted to control variability using blocking or covariance analysis. In this study experimental design and statistical methods in 3 papers published in toxicological journals were used as case studies and were examined in detail. The first used dogs to study the effects of ethanol on blood and hepatic parameters following chronic alcohol consumption in a 2 x 4 factorial experimental design. However, the authors used mongrel dogs of both sexes and different ages with a wide range of body weights without any attempt to control the variation. They had also attempted to analyse a factorial design using Student's t-test rather than the analysis of variance. Means of 2 blood parameters presented with one decimal place had apparently been rounded to the nearest 5 units. It is suggested that this experiment could equally well have been done in 3 blocks using 24 instead of 46 dogs. The second case study was an investigation of the response of 2 strains of mice to a toxic agent causing bladder injury. The first experiment involved 40 treatment combinations (2 strains x 4 doses x 5 days) with 3-6 mice per combination. There was no explanation of how the experiment involving approximately 180 mice had actually been done, but unequal subclass numbers suggest that the experiment may have been done on an ad hoc basis rather than being properly designed. It is suggested that the experiment could have been done as 2 blocks involving 80 instead of about 180 mice. The third study again involved a factorial design with 4 dose levels of a compound and 2 sexes, with a total of 80 mice. Open field behaviour was examined. The author
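    The record's central complaint, that a factorial experiment was analysed with Student's t-tests rather than the analysis of variance, can be made concrete. The 2 x 2 data below (sex x dose, three animals per cell) are invented; the sketch computes the standard balanced two-way sums of squares, which a series of pairwise t-tests cannot decompose.

    ```python
    from statistics import mean

    # Hypothetical 2 x 2 factorial (sex x dose), 3 animals per cell.
    data = {
        ("M", "low"): [5.1, 5.4, 5.0],
        ("M", "high"): [6.2, 6.0, 6.4],
        ("F", "low"): [4.8, 5.0, 4.9],
        ("F", "high"): [5.9, 6.1, 5.8],
    }

    def two_way_anova(data):
        # Sums of squares for a balanced two-way layout: factor A, factor B,
        # the A x B interaction, and the within-cell (error) term.
        a_levels = sorted({a for a, _ in data})
        b_levels = sorted({b for _, b in data})
        n = len(next(iter(data.values())))  # replicates per cell (balanced)
        grand = mean(x for cell in data.values() for x in cell)
        a_mean = {a: mean(x for (aa, _), c in data.items() if aa == a for x in c)
                  for a in a_levels}
        b_mean = {b: mean(x for (_, bb), c in data.items() if bb == b for x in c)
                  for b in b_levels}
        cell_mean = {k: mean(v) for k, v in data.items()}
        ss_a = n * len(b_levels) * sum((a_mean[a] - grand) ** 2 for a in a_levels)
        ss_b = n * len(a_levels) * sum((b_mean[b] - grand) ** 2 for b in b_levels)
        ss_ab = n * sum((cell_mean[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                        for a in a_levels for b in b_levels)
        ss_err = sum((x - cell_mean[k]) ** 2 for k, c in data.items() for x in c)
        return ss_a, ss_b, ss_ab, ss_err
    ```

    Dividing each sum of squares by its degrees of freedom and forming F-ratios against the error term tests the two main effects and their interaction in one coherent analysis, which is the point Festing makes against piecemeal t-tests.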

  12. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
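    Clearance statements of the form "X% confidence that at least Y% is not contaminated" have a standard zero-detection sample-size rule under simple random sampling. The sketch below is the generic binomial bound, offered as an illustration rather than the specific calculation used in the INL plan.

    ```python
    import math

    def clearance_sample_size(confidence, clean_fraction):
        # Smallest n such that, if all n randomly placed samples come back
        # clean, one may state: "confidence that at least clean_fraction of
        # the area is not contaminated". Derivation: if the truly clean
        # fraction were below clean_fraction, the chance that n random
        # samples all land clean is below clean_fraction**n, so require
        # clean_fraction**n <= 1 - confidence.
        return math.ceil(math.log(1 - confidence) / math.log(clean_fraction))
    ```

    For example, supporting "95% confidence that at least 99% of a floor is not contaminated" requires roughly 300 clean probabilistic samples, which is why the Bayesian judgmental-plus-probabilistic strategies studied here are attractive for reducing sampling burden.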

  13. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating-reactor accident, can be achieved by spraying water drops. The spray reduces the pressure and temperature levels by condensation of steam on the cold water drops. The most stringent thermodynamic conditions are a pressure of 5.10{sup 5} Pa (due to steam emission) and a temperature of 413 K. Beyond its energy dissipation function, the spray leads to the washout of fission product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments in order to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed around fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic, and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of the preceding criteria on the design, and the necessity of being representative of the real conditions, will be described.

  14. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made effective sludge management an increasingly critical issue for the industry, due to high landfill and transportation costs and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter, which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to an economic dry solids level, so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive than conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments, and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height.
The key variables that were identified in the continuous

  16. A More Rigorous Quasi-Experimental Alternative to the One-Group Pretest-Posttest Design.

    ERIC Educational Resources Information Center

    Johnson, Craig W.

    1986-01-01

    A simple quasi-experimental design is described which may have utility in a variety of applied and laboratory research settings where ordinarily the one-group pretest-posttest pre-experimental design might otherwise be the procedure of choice. The design approaches the internal validity of true experimental designs while optimizing external…

  17. Comparing simulated emission from molecular clouds using experimental design

    SciTech Connect

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik; Offner, Stella

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
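The fractional factorial designs the authors advocate can be sketched in a few lines. Below is a minimal example of a 2^(4-1) half-fraction with the defining relation I = ABCD, so the fourth factor is aliased as D = ABC; the factor names in the comment are illustrative placeholders, not taken from the paper.

```python
from itertools import product

# Half-fraction 2^(4-1) design: enumerate three base factors at two
# coded levels (-1, +1) and alias the fourth as D = A*B*C. In the
# paper's setting the factors might be e.g. Mach number, temperature,
# field strength, and velocity spectrum (illustrative assignment).
def fractional_factorial_2_4_1():
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        d = a * b * c  # defining relation I = ABCD
        runs.append((a, b, c, d))
    return runs

runs = fractional_factorial_2_4_1()
print(len(runs))  # 8 runs instead of the 16 a full factorial needs
```

Each factor column is balanced and the columns are mutually orthogonal, which is what lets main effects be estimated from half the simulation budget.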

  18. Technique to model and design physical database systems

    SciTech Connect

    Wise, T.E.

    1983-12-01

    Database management systems (DBMSs) allow users to define and manipulate records at a logical level of abstraction. A logical record is not stored as users see it but is mapped into a collection of physical records. Physical records are stored in file structures managed by a DBMS. Likewise, DBMS commands which appear to be directed toward one or more logical records actually correspond to a series of operations on the file structures. The structures and operations of a DBMS (i.e., its physical architecture) are not visible to users at the logical level. Traditionally, logical records and DBMS commands are mapped to physical records and operations in one step. In this report, logical records are mapped to physical records in a series of steps over several levels of abstraction. Each level of abstraction is composed of one or more intermediate record types. A hierarchy of record types results that covers the gap between logical and physical records. The first step of our technique identifies the record types and levels of abstraction that describe a DBMS. The second step maps DBMS commands to physical operations in terms of these records and levels of abstraction. The third step encapsulates each record type and its operations into a programming construct called a module. The applications of our technique include modeling existing DBMSs and designing the physical architectures of new DBMSs. To illustrate one application, we describe in detail the architecture of the commercial DBMS INQUIRE.
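The report's third step, encapsulating each record type and its operations into a "module", can be illustrated with a small sketch. Everything below (the employee record, the split into two physical files, the method names) is hypothetical and only mirrors the idea that one logical operation maps to operations on physical records one level down.

```python
# Hypothetical sketch of the "module" construct: each level of
# abstraction encapsulates a record type and its operations, and a
# logical operation maps onto operations at the level below.
class PhysicalFile:
    """Bottom level: stores raw physical records keyed by record id."""
    def __init__(self):
        self._pages = {}
    def write(self, rid, fields):
        self._pages[rid] = dict(fields)
    def read(self, rid):
        return self._pages[rid]

class LogicalEmployeeModule:
    """Top level: one logical record maps to two physical records."""
    def __init__(self):
        self._names = PhysicalFile()   # name fields
        self._pay = PhysicalFile()     # payroll fields
    def insert(self, emp_id, name, salary):
        # one logical insert -> two physical writes
        self._names.write(emp_id, {"name": name})
        self._pay.write(emp_id, {"salary": salary})
    def fetch(self, emp_id):
        # one logical fetch -> two physical reads, reassembled
        rec = {"id": emp_id}
        rec.update(self._names.read(emp_id))
        rec.update(self._pay.read(emp_id))
        return rec

emps = LogicalEmployeeModule()
emps.insert(7, "Ada", 120000)
print(emps.fetch(7))
```

The user of `LogicalEmployeeModule` never sees the two underlying files, which is the visibility boundary the report formalizes across its levels of abstraction.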

  19. Interplanetary mission design techniques for flagship-class missions

    NASA Astrophysics Data System (ADS)

    Kloster, Kevin W.

Given the current level of propulsive technology, trajectory design requires knowledge of orbital mechanics, substantial computational resources, extensive use of tools such as gravity assists and V-infinity leveraging, as well as insight and finesse. Designing robust, affordable missions that deliver a capable science package to a celestial body of interest is a difficult task. Techniques are presented here that assist the mission designer in constructing trajectories for flagship-class missions in the outer Solar System. These techniques are applied in this work to spacecraft that are currently in flight or in the planning stages. By escaping the Saturnian system, the Cassini spacecraft can reach other destinations in the Solar System while satisfying planetary quarantine. The patched-conic method was used to search for trajectories that depart Saturn via gravity assist at Titan. Trajectories were found that fly by Jupiter to reach Uranus or Neptune, capture at Jupiter or Neptune, escape the Solar System, fly by Uranus during its 2049 equinox, or encounter Centaurs. A "grand tour," which visits Jupiter, Uranus, and Neptune, departs Saturn in 2014. New tools were built to search for encounters with Centaurs, small Solar System bodies between the orbits of Jupiter and Neptune, and to minimize the DeltaV required to target these encounters. Cassini could reach Chiron, the first-discovered Centaur, 10.5 years after a 2022 Saturn departure. For a Europa Orbiter mission, the strategy for designing Jovian system tours that include Io flybys differs significantly from schemes developed for previous versions of the mission. If the closest-approach distance of the incoming hyperbola at Jupiter is below the orbit of Io, an Io gravity assist gives the greatest energy pump-down for the least decrease in perijove radius. Using Io to help capture the spacecraft can increase the savings in Jupiter orbit insertion DeltaV over a Ganymede-aided capture.
The tour design is
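The core of the patched-conic gravity-assist reasoning above is the bend angle of the flyby hyperbola, which depends on the periapsis radius and the hyperbolic excess speed. The sketch below uses Titan's gravitational parameter; the v-infinity and periapsis values are illustrative numbers, not taken from the thesis.

```python
import math

# Patched-conic flyby sketch: bend angle of the hyperbola,
# delta = 2 * asin(1/e), with e = 1 + r_p * v_inf^2 / mu.
MU_TITAN = 8978.14  # km^3/s^2, Titan's gravitational parameter

def turn_angle(v_inf_kms, r_periapsis_km, mu=MU_TITAN):
    """Bend angle (rad) of the v-infinity vector during the flyby."""
    ecc = 1.0 + r_periapsis_km * v_inf_kms**2 / mu
    return 2.0 * math.asin(1.0 / ecc)

# A lower flyby bends the v-infinity vector through a larger angle:
print(math.degrees(turn_angle(3.0, 2000.0)))   # close flyby
print(math.degrees(turn_angle(3.0, 10000.0)))  # distant flyby
```

This monotonic trade, more bending for lower periapsis, is what makes a low Io flyby deliver "the greatest energy pump-down for the least decrease in perijove radius" in the Jupiter capture discussion above.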

  20. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

Pervious pavements are sustainable urban drainage systems, already known as rainwater infiltration techniques, which reduce runoff formation and diffuse pollution in cities. The present research focuses on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Each pervious pavement was experimentally designed to store rainwater and allow measurement of the level and quality of the stored water over time. Six different pervious surfaces are combined with four different geotextiles to test which materials best preserve the quality of the stored rainwater over time under the specific weather conditions of northern Spain. The aim of this research was to achieve pervious pavements that simultaneously provide a useful urban service and harvest rainwater of sufficient quality for non-potable uses. PMID:22020491

  1. Design and experimental results of coaxial circuits for gyroklystron amplifiers

    SciTech Connect

    Flaherty, M.K.E.; Lawson, W.; Cheng, J.; Calame, J.P.; Hogan, B.; Latham, P.E.; Granatstein, V.L.

    1994-12-31

At the University of Maryland, high power microwave source development for use in linear accelerator applications continues with the design and testing of coaxial circuits for gyroklystron amplifiers. This presentation includes experimental results from a coaxial gyroklystron tested on the current microwave test bed, and designs for second harmonic coaxial circuits for use in the next generation of the gyroklystron program. The authors present test results for a second harmonic coaxial circuit. As in previous second harmonic experiments, the input cavity resonated at 9.886 GHz and the output frequency was 19.772 GHz. The coaxial insert was positioned in the input cavity and drift region. The inner conductor consisted of a tungsten rod with copper and ceramic cylinders covering its length. Two tungsten rods that bridged the space between the inner and outer conductors supported the whole assembly. The tube produced over 20 MW of output power with 17% efficiency. Beam interception by the tungsten rods resulted in minor damage. Comparisons with previous non-coaxial circuits showed that the coaxial configuration increased the parameter space over which stable operation was possible. Future experiments will feature an upgraded modulator and beam formation system capable of producing 300 MW of beam power. The fundamental frequency of operation is 8.568 GHz. A second harmonic coaxial gyroklystron circuit was designed for use in the new system. A scattering matrix code predicts a resonant frequency of 17.136 GHz and a Q of 260 for the cavity, with 95% of the outgoing microwaves in the desired TE032 mode. Efficiency studies of this second harmonic output cavity show 20% expected efficiency. Shorter second harmonic output cavity designs are also being investigated, with expected efficiencies near 34%.

  2. Focusing Kinoform Lenses: Optical Design and Experimental Validation

    SciTech Connect

    Alianelli, Lucia; Sawhney, Kawal J. S.; Snigireva, Irina; Snigirev, Anatoly

    2010-06-23

X-ray focusing lenses with a kinoform profile are high-brilliance optics that can produce nano-sized beams on third-generation synchrotron beamlines. The lenses are fabricated with sidewalls of micrometer lateral size. They are virtually non-absorbing and can therefore deliver a high flux over a good aperture. We are developing silicon and germanium lenses that will focus hard X-ray beams to less than 0.5 {mu}m using a single refractive element. In this contribution, we present the preliminary optical design and experimental tests carried out on beamline ID06 at the ESRF: the lenses were used to image the undulator source directly, providing a beam with a FWHM of about 0.7 {mu}m.

  3. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

Experimental allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS) [1, 2]. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset [3-5]. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents [6-9], and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  4. Validation of Design and Analysis Techniques of Tailored Composite Structures

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.

    2004-01-01

Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed-form analysis based on a theoretical model of a single-cell tailored box beam and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed-form code is consistently able to predict the wing box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed-form analysis. Differences between the closed-form code representation and the wing box specimen caused large errors in the twist prediction. The closed-form analysis prediction of twist has not been validated from this test.

  5. Simulation-based optimal Bayesian experimental design for nonlinear systems

    SciTech Connect

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
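The nested (two-stage) Monte Carlo estimator of expected information gain described above can be sketched on a toy problem. The model below, y = theta^3 * d plus Gaussian noise with a uniform prior on theta, and all sample sizes are illustrative choices, not the combustion-kinetics application of the paper; the Gaussian normalization constants cancel between the two log terms and are omitted.

```python
import numpy as np

# Nested Monte Carlo estimate of expected information gain (EIG):
# EIG(d) ~= mean_i [ log p(y_i | theta_i, d)
#                    - log( mean_j p(y_i | theta_j, d) ) ]
# with theta_i, theta_j drawn from the prior and y_i ~ p(y | theta_i, d).
rng = np.random.default_rng(0)

def expected_information_gain(d, n_outer=500, n_inner=500, sigma=0.1):
    theta = rng.uniform(0.0, 1.0, n_outer)            # outer prior draws
    y = theta**3 * d + rng.normal(0.0, sigma, n_outer)  # simulated data
    # log-likelihood of each y_i under its own theta_i (constants cancel)
    log_lik = -0.5 * ((y - theta**3 * d) / sigma)**2
    # log-evidence estimated with an inner prior sample
    theta_in = rng.uniform(0.0, 1.0, n_inner)
    resid = (y[:, None] - theta_in[None, :]**3 * d) / sigma
    log_evid = np.log(np.mean(np.exp(-0.5 * resid**2), axis=1))
    return float(np.mean(log_lik - log_evid))

# A larger design amplitude d makes the data more informative here:
print(expected_information_gain(1.0), expected_information_gain(0.1))
```

In a full optimal-design loop this estimator would sit inside the stochastic approximation optimizer over d that the abstract describes.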

  6. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new Thermal barrier coating (TBC) systems experimentally under Integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  7. Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Olds, John Robert; Walberg, Gerald D.

    1993-01-01

Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of applying Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are
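The parametric approach the thesis favors can be made concrete with a small response-surface example: sample a response on a face-centered central composite design in two coded variables and fit a full quadratic surface by least squares. The "dry weight" response function below is invented purely for illustration.

```python
import numpy as np

# Hypothetical response to be modeled (a known quadratic, so the fit
# should recover it exactly from the design points).
def dry_weight(x1, x2):
    return 5.0 + (x1 - 0.3)**2 + 2.0 * (x2 + 0.4)**2 - 0.5 * x1 * x2

# Face-centered central composite design in coded units:
# 4 factorial corners, 4 axial points, 1 center point.
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = dry_weight(pts[:, 0], pts[:, 1])

# Design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
x1, x2 = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))  # recovers the quadratic exactly
```

With a real vehicle model the fitted surface would stand in for the expensive disciplinary analyses, and the optimizer would search the surface instead.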

  8. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874
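One widely used companion to the visual analysis the review discusses is the percentage of nonoverlapping data (PND): the share of treatment-phase points that exceed the best baseline point. The sketch below uses invented data; PND is only one of several nonoverlap indices, and the review does not endorse any single one.

```python
# Percentage of nonoverlapping data (PND) for an AB phase comparison.
def pnd(baseline, treatment, higher_is_better=True):
    if higher_is_better:
        ceiling = max(baseline)
        hits = sum(1 for t in treatment if t > ceiling)
    else:
        floor = min(baseline)
        hits = sum(1 for t in treatment if t < floor)
    return 100.0 * hits / len(treatment)

baseline = [3, 4, 2, 5, 4]        # A phase (illustrative data)
treatment = [6, 7, 5, 8, 9, 7]    # B phase (illustrative data)
print(pnd(baseline, treatment))   # -> 83.3... (5 of 6 exceed max(A) = 5)
```

PND is simple to compute but is known to be sensitive to a single extreme baseline point, which is one reason analytic method remains "an area of discord" in this literature.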

  9. Case study 1. Practical considerations with experimental design and interpretation.

    PubMed

    Barr, John T; Flora, Darcy R; Iwuchukwu, Otito F

    2014-01-01

    At some point, anyone with knowledge of drug metabolism and enzyme kinetics started out knowing little about these topics. This chapter was specifically written with the novice in mind. Regardless of the enzyme one is working with or the goal of the experiment itself, there are fundamental components and concepts of every experiment using drug metabolism enzymes. The following case studies provide practical tips, techniques, and answers to questions that may arise in the course of conducting such experiments. Issues ranging from assay design and development to data interpretation are addressed. The goal of this section is to act as a starting point to provide the reader with key questions and guidance while attempting his/her own work. PMID:24523122
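For the novice audience this chapter addresses, the most basic experiment is recovering kinetic constants from rate data. The sketch below generates noiseless Michaelis-Menten rates and recovers Km and Vmax from a Lineweaver-Burk (double-reciprocal) fit; the constants and substrate concentrations are invented for illustration, and in practice a direct nonlinear fit is usually preferred over the linearization.

```python
import numpy as np

# Michaelis-Menten: v = Vmax * S / (Km + S)
Vmax_true, Km_true = 100.0, 25.0  # e.g. pmol/min and uM (illustrative)

S = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])  # substrate conc.
v = Vmax_true * S / (Km_true + S)                    # noiseless rates

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/S) + 1/Vmax, linear in 1/S
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_fit = 1.0 / intercept
Km_fit = slope * Vmax_fit
print(round(Vmax_fit, 2), round(Km_fit, 2))  # -> 100.0 25.0
```

With real (noisy) data, the double-reciprocal transform inflates errors at low substrate concentrations, which is exactly the kind of assay-design pitfall the case studies address.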

  10. Experimental design considerations in microbiota/inflammation studies.

    PubMed

    Moore, Robert J; Stanley, Dragana

    2016-07-01

There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes has been studied in both human and animal (mainly rodent) models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065
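The review's point about animal numbers can be made concrete with a standard back-of-envelope sample-size calculation for a two-group comparison (normal approximation to the two-sample t-test). The effect sizes below are illustrative, not taken from the paper; high microbiota variability pushes studies toward the smaller standardized effects and hence larger groups.

```python
from math import ceil
from statistics import NormalDist

# Normal-approximation sample size per group for detecting a
# standardized mean difference (Cohen's d) at two-sided alpha:
# n ~= 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
def n_per_group(effect_size, alpha=0.05, power=0.80):
    z = NormalDist()
    z_a = z.inv_cdf(1.0 - alpha / 2.0)
    z_b = z.inv_cdf(power)
    return ceil(2.0 * (z_a + z_b)**2 / effect_size**2)

print(n_per_group(1.0))   # large effect  -> 16 per group
print(n_per_group(0.5))   # medium effect -> 63 per group
```

Halving the standardized effect roughly quadruples the required group size, which is why microbiota studies often need far more animals than immune-only endpoints.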

  11. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

The present ground-based investigations give the first definitive look at the charging behavior of Orion UltraFlex arrays in both the low Earth orbit (LEO) and geosynchronous (GEO) environments. The LEO charging environment also applies to the International Space Station (ISS), and the GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to understand the complex charging behavior well enough to answer basic performance and survivability questions and to ascertain whether a single UltraFlex array design can cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  12. Experimental design considerations in microbiota/inflammation studies

    PubMed Central

    Moore, Robert J; Stanley, Dragana

    2016-01-01

There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes has been studied in both human and animal (mainly rodent) models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065

  13. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  14. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  15. BOLD-based Techniques for Quantifying Brain Hemodynamic and Metabolic Properties – Theoretical Models and Experimental Approaches

    PubMed Central

    Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang

    2012-01-01

Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as the pathophysiology of neurological disorders. It can also be of great importance for evaluation of hypoxia within tumors of the brain and other organs. A fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened a possibility to use this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting the MRI signal to brain structure and functioning, and designing experimental techniques allowing MR measurements of the salient features of these theoretical models. In our review we discuss several such theoretical models and experimental methods for quantifying brain hemodynamic and metabolic properties. Our review aims mostly at methods for measuring the oxygen extraction fraction, OEF, based on measuring blood oxygenation level. Combining measurement of OEF with measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail the magnetic properties of blood – magnetic susceptibility, MR relaxation and theoretical models of the intravascular contribution to the MR signal under different experimental conditions. Then, we describe a “through-space” effect – the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on MR signal formation. Further, we describe several experimental techniques taking advantage of these theoretical models. Some of these techniques, MR susceptometry and T2-based quantification of OEF, utilize the intravascular MR signal. Another technique, qBOLD, evaluates OEF by making use of through-space effects. In this review we targeted both scientists just entering the MR field and more experienced MR researchers.

  16. Combined FPPE-PTR Calorimetry Involving TWRC Technique II. Experimental: Application to Thermal Effusivity Measurements of Solids

    NASA Astrophysics Data System (ADS)

    Dadarlat, Dorin; Pop, Mircea Nicolae; Streza, Mihaela; Longuemart, Stephane; Depriester, Michael; Sahraoui, Abdelhak Hadj; Simon, Viorica

    2011-10-01

    Photopyroelectric calorimetry in the front detection configuration (FPPE) and photothermal radiometry (PTR) were simultaneously used, together with the thermal-wave resonator cavity method (TWRC), in order to investigate the thermal effusivity of solids inserted as backing layers in a detection cell. A new combined FPPE-PTR-TWRC setup was designed. It was demonstrated experimentally that the PTR technique, combined with the TWRC method, is able to provide calorimetric information about the third layer of a detection cell. Applications on solids with different values of the thermal effusivity (starting from metals, down to thermal isolators) are presented. The values of the thermal effusivity obtained with the PTR technique are similar to those obtained with the PPE technique, and in agreement with literature values; the two methods reciprocally support each other. The accuracy of both methods is higher when the values of the thermal effusivity of the backing layer and coupling fluid are close.
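The quantity both the FPPE and PTR measurements target is the thermal effusivity, e = sqrt(k * rho * c_p). A minimal sketch, using common textbook property values for illustration only, shows why the abstract's range "from metals down to thermal isolators" spans more than an order of magnitude:

```python
from math import sqrt

# Thermal effusivity e = sqrt(k * rho * c_p), in W*s^0.5/(m^2*K).
def effusivity(k, rho, cp):
    """k: thermal conductivity W/(m*K); rho: density kg/m^3;
    cp: specific heat J/(kg*K)."""
    return sqrt(k * rho * cp)

# Common textbook values (illustrative, not from the paper):
copper = effusivity(400.0, 8960.0, 385.0)
water = effusivity(0.6, 1000.0, 4186.0)
print(round(copper), round(water))  # metals far above thermal insulators
```

Because the backing layer's effusivity enters the photothermal signal relative to that of the coupling fluid, the accuracy argument at the end of the abstract (best when backing and fluid effusivities are close) follows directly from this contrast.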

  17. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse around 1500 seconds. The heat loading on the walls of the calculated thrusters was also estimated and is comparable to that of conventional chemical rockets. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 seconds due to the finite chemical reaction rate.

  18. Low-Field Accelerator Structure Couplers and Design Techniques

    SciTech Connect

    Nantista, C

    2004-07-29

Recent experience with X-band accelerator structure development has shown the rf input coupler to be the region most prone to rf breakdown and degradation, effectively limiting the operating gradient. A major factor in this appears to be high magnetic fields at the sharp edges of the coupling irises. As a first response to this problem, couplers with rounded and thickened iris horns have been employed and successfully tested at high power. To further reduce fields for higher power flow, conceptually new coupler designs have been developed, in which power is coupled through the broadwall of the feed waveguide, rather than through terminating irises. A 'mode launcher' coupler, which launches the TM{sub 01} mode in circular waveguide before coupling through a matching cell into the main structure, has been tested with great success. With peak surface fields below those in the body of the structure, this coupler represented a breakthrough in the NLC structure program. The design of this coupler and of variations which use beamline space more efficiently are described here. The latter include a coupler in which power passes directly through an iris in the broad wall of the rectangular waveguide into a matching cell, also successfully implemented, and a variation which makes the waveguide itself an accelerating cell. The authors also discuss in some detail two techniques for matching such couplers to travelling-wave structures using a field solver. The first exploits the cell-number independence of a travelling-wave match, and the second optimizes using the fields of an internally driven structure.

  19. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high strain composite wing structures (Epsilon(sub ult) = 6000 micro-in/in) for future Navy aircraft. These issues were concerned primarily with the structural integrity and durability of the innovative design concepts and manufacturing techniques which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system) as well as damage tolerance and survivability requirements. An extensive test effort consisting of a progressive series of coupon and major element tests was an integral part of this development effort, and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer/emerging composite materials and new or previously developed high strain wing design to maximize structural efficiency and reduce fabrication costs was the subject of subsequent preliminary design and experimental evaluation effort.

  20. Game Design Narrative for Learning: Appropriating Adventure Game Design Narrative Devices and Techniques for the Design of Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2006-01-01

    The purpose of this conceptual analysis is to investigate how contemporary video and computer games might inform instructional design by looking at how narrative devices and techniques support problem solving within complex, multimodal environments. Specifically, this analysis presents a brief overview of game genres and the role of narrative in…

  1. Recent Progress in x3-Related Optical Process Experimental Technique. Raman Lasing

    NASA Technical Reports Server (NTRS)

    Matsko, A. B.; Savchenkov, Anatoliy A.; Strekalov, Dmitry; Maleki, Lute

    2006-01-01

    We describe theoretically and verify experimentally a simple technique for analyzing the conversion efficiency and threshold of all-resonant intracavity Raman lasers. The method is based on the dependence of the ring-down time of the pump cavity mode on the energy accumulated in the cavity.

  2. Studies of exotic nuclei: state-of-the-art experimental tools and techniques

    NASA Astrophysics Data System (ADS)

    Paschalis, Stefanos

    2015-04-01

    As new radioactive-ion beam facilities come online, there is an ever-growing need for advanced experimental apparatus that offers unprecedented resolution and efficiency and can fully exploit the physics opportunities opening up in this new era for the nuclear physics community. In this contribution, state-of-the-art equipment and techniques for nuclear physics experiments are presented.

  3. Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation.

    PubMed

    Zhang, Haichong K; Cheng, Alexis; Bottenus, Nick; Guo, Xiaoyu; Trahey, Gregg E; Boctor, Emad M

    2016-04-01

    Ultrasonography is a widely used imaging modality to visualize anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method to extend the available aperture size for SA-called synthetic tracked aperture ultrasound (STRATUS) imaging-by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. With practical implementation in mind, we used simulation to estimate the effect of tracking and ultrasound calibration errors on the quality of the final beamformed image. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality. PMID:27088108
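
    The synthesis step described above is, at heart, delay-and-sum beamforming with the element positions supplied by the tracker. The following is a minimal sketch of that idea, not the authors' implementation: the element positions, scatterer location, pulse width, and sound speed are all hypothetical illustrative values.

```python
import math

C = 1540.0  # assumed speed of sound in tissue, m/s

# Hypothetical element positions of a swept ("tracked") aperture along x, at z = 0,
# and a point scatterer 50 mm deep.
elements = [(x * 1e-3, 0.0) for x in range(-20, 21, 2)]
target = (0.0, 0.05)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def gauss_pulse(t, t0, sigma=1e-7):
    # Idealized echo envelope centred at arrival time t0 (sigma is hypothetical).
    return math.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

# Round-trip arrival time of the scatterer's echo at each element position.
arrivals = [2 * dist(e, target) / C for e in elements]

def beamformed(p):
    # Delay-and-sum: sample each element's echo at the round-trip time to p
    # and add the samples coherently.
    return sum(gauss_pulse(2 * dist(e, p) / C, t0)
               for e, t0 in zip(elements, arrivals))

on_target = beamformed(target)          # all echoes align: sum of 21 unit peaks
off_target = beamformed((0.004, 0.05))  # 4 mm lateral offset: the sum collapses
```

    When the focal point coincides with the scatterer, all echoes align and add coherently; a few millimetres away the sum collapses, which is the resolution benefit a wider tracked aperture provides.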

  4. System design and verification of the precession electron diffraction technique

    NASA Astrophysics Data System (ADS)

    Own, Christopher Su-Yan

    2005-07-01

    Bulk structural crystallography is generally a two-part process wherein a rough starting structure model is first derived, then later refined to give an accurate model of the structure. The critical step is the determination of the initial model. As materials problems decrease in length scale, the electron microscope has proven to be a versatile and effective tool for studying many problems. However, study of complex bulk structures by electron diffraction has been hindered by the problem of dynamical diffraction. This phenomenon makes bulk electron diffraction very sensitive to specimen thickness, and expensive equipment such as aberration-corrected scanning transmission microscopes or elaborate methodology such as high resolution imaging combined with diffraction and simulation are often required to generate good starting structures. The precession electron diffraction technique (PED), which has the ability to significantly reduce dynamical effects in diffraction patterns, has shown promise as being a "philosopher's stone" for bulk electron diffraction. However, a comprehensive understanding of its abilities and limitations is necessary before it can be put into widespread use as a standalone technique. This thesis aims to bridge the gaps in understanding and utilizing precession so that practical application might be realized. Two new PED systems have been built, and optimal operating parameters have been elucidated. The role of lens aberrations is described in detail, and an alignment procedure is given that shows how to circumvent aberration in order to obtain high-quality patterns. Multislice simulation is used for investigating the errors inherent in precession, and is also used as a reference for comparison to simple models and to experimental PED data. General trends over a large sampling of parameter space are determined. 
In particular, we show that the primary reflection intensity errors occur near the transmitted beam and decay with increasing angle and

  5. Logical Graphics Design Technique for Drawing Distribution Networks

    NASA Astrophysics Data System (ADS)

    Al-A`Ali, Mansoor

    Electricity distribution networks normally consist of tens of primary feeders, thousands of substations and switching stations spread over large geographical areas and thus require a complex system in order to manage them properly from within the distribution control centre. We show techniques for using Delphi Object Oriented components to automatically generate, display and manage, graphically and logically, the circuits of the network. The graphics components are dynamically interactive and thus the system allows switching operations as well as displays. The object oriented approach was developed to replace an older system, which used Microstation with MDL as the programming language and ORACLE as the DBMS. Before this, the circuits could only be displayed schematically, which had many inherent problems in speed and readability of large displays. Schematic graphics displays were cumbersome when adding or deleting stations; this problem is now resolved in our approach by logically generating the graphics from the database connectivity information. This paper demonstrates the method of designing these Object Oriented components and how they can be used in specially created algorithms to generate the necessary interactive graphics. Four different logical display algorithms were created, and in this study we present samples of the four different outputs of these algorithms, which show that distribution engineers can work with logical displays of the circuits, aimed at speeding up switching operations and improving display clarity.

  6. Experimental technique for studying high-temperature phase equilibria in reactive molten metal based systems

    NASA Astrophysics Data System (ADS)

    Ermoline, Alexandre

    The general objective of this work is to develop an experimental technique for studying the high-temperature phase compositions and phase equilibria in molten metal-based binary and ternary systems, such as Zr-O-N, B-N-O, Al-O, and others. A specific material system, Zr-O-N, was selected for studying and testing this technique. Information about high-temperature phase equilibria in reactive metal-based systems is scarce, and studying them is difficult because chemical reactions occur between samples and essentially any container material, contaminating the system. Containerless microgravity experiments for studying equilibria in molten metal-gas systems were designed to be conducted onboard a NASA KC-135 aircraft flying parabolic trajectories. A uniaxial apparatus suitable for acoustic levitation, laser heating, and splat quenching of small samples was developed and equipped with a computer-based controller and optical diagnostics. Normal-gravity tests were conducted to determine the most suitable operating parameters of the levitator by direct observation of the levitated samples, as opposed to more traditional pressure mapping of the acoustic field. The size range of samples that could be reliably heated and quenched in this setup was determined to be on the order of 1--3 mm. In microgravity experiments, small spherical specimens (1--2 mm diameter), prepared by pressing premixed solid components (ZrO2, ZrN, and Zr powders), were acoustically levitated inside an argon-filled chamber at one atmosphere and heated by a CO2 laser. The levitating samples could be continuously laser heated for about 1 sec, resulting in local sample melting. The sample stability in the vertical direction was undisturbed by simultaneous laser heating. 
Oscillations of the levitating sample in the horizontal direction increased while it was heated, which eventually resulted in the movement of the sample away from its stable levitation position and the laser

  7. A New Tour Design Technique to Enable an Enceladus Orbiter

    NASA Astrophysics Data System (ADS)

    Strange, N.; Campagnola, S.; Russell, R.

    2009-12-01

    As a result of discoveries made by the Cassini spacecraft, Saturn's moon Enceladus has emerged as a high science-value target for a future orbiter mission. [1] However, past studies of an Enceladus orbiter mission [2] found that entering Enceladus orbit either requires a prohibitively large orbit insertion ΔV (> 3.5 km/s) or a prohibitively long flight time. In order to reach Enceladus with a reasonable flight time and ΔV budget, a new tour design method has been developed that uses gravity-assists of the low-mass moons Rhea, Dione, and Tethys combined with v-infinity leveraging maneuvers. This new method can achieve Enceladus orbit with a combined leveraging and insertion ΔV of ~1 km/s and a 2.5 year Saturn tour. Among the many challenges in designing a trajectory for an Enceladus mission, the two most prominent arise because Enceladus is a low-mass moon (its GM is only ~7 km^3/s^2), deep within Saturn's gravity well (its orbit is at 4 Saturn radii). Designing a ΔV-efficient rendezvous with Enceladus is the first challenge, while the second involves finding a stable orbit which can achieve the desired science measurements. A paper by Russell and Lara [3] has recently addressed the second problem, and a paper this past August by Strange, Campagnola, and Russell [4] has addressed the first. The method developed to solve the first problem, the leveraging tour, and the science possibilities of this trajectory will be the subject of this presentation. Using the new methods in [4], a leveraging tour with Titan, Rhea, Dione, and Tethys can reach Enceladus orbit with less than half of the ΔV of a direct Titan-Enceladus transfer. Starting from the TSSM Saturn arrival conditions [5], with a chemical bi-prop system, this new tour design technique could place into Enceladus orbit ~2800 kg compared to ~1100 kg from a direct Titan-Enceladus transfer. Moreover, the 2.5 year leveraging tour provides many low-speed and high science value flybys of Rhea, Dione, and Tethys. This exciting
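
    The ΔV figures quoted above follow from the vis-viva equation evaluated at periapsis. A small sketch, with a hypothetical approach speed and orbit radius (only the approximate GM value comes from the abstract), illustrates why reducing the arrival v-infinity through leveraging cuts the insertion cost so sharply:

```python
import math

GM_ENCELADUS = 7.2  # km^3/s^2, roughly the value cited in the abstract

def insertion_dv(v_inf, gm, r):
    """Delta-v to brake from a hyperbolic approach with excess speed v_inf (km/s)
    into a circular orbit of radius r (km), applying vis-viva at periapsis."""
    v_peri = math.sqrt(v_inf ** 2 + 2 * gm / r)  # speed at closest approach
    v_circ = math.sqrt(gm / r)                   # circular orbit speed
    return v_peri - v_circ

# Hypothetical arrival conditions: a 350 km orbit, comparing a fast direct
# approach against a slow post-leveraging approach.
dv_direct = insertion_dv(4.0, GM_ENCELADUS, 350.0)
dv_leveraged = insertion_dv(0.5, GM_ENCELADUS, 350.0)
```

    Because Enceladus's GM is so small, almost all of the approach speed must be cancelled propulsively, so nearly every km/s shaved off the v-infinity by leveraging comes straight off the insertion burn.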

  8. Advanced Laboratory at Texas State University: Error Analysis, Experimental Design, and Research Experience for Undergraduates

    NASA Astrophysics Data System (ADS)

    Ventrice, Carl

    2009-04-01

    Physics is an experimental science. In other words, all physical laws are based on experimentally observable phenomena. Therefore, it is important that all physics students have an understanding of the limitations of certain experimental techniques and the errors associated with a particular measurement. The students in the Advanced Laboratory class at Texas State perform three detailed laboratory experiments during the semester and give an oral presentation at the end of the semester on a scientific topic of their choosing. The laboratory reports are written in the format of a ``Physical Review'' journal article. The experiments are chosen to give the students a detailed background in error analysis and experimental design. For instance, the first experiment performed in the spring 2009 semester is entitled Measurement of the local acceleration due to gravity in the RFM Technology and Physics Building. The goal of this experiment is to design and construct an instrument that is to be used to measure the local gravitational field in the Physics Building to an accuracy of ±0.005 m/s^2. In addition, at least one of the experiments chosen each semester involves the use of the research facilities within the physics department (e.g., microfabrication clean room, surface science lab, thin films lab, etc.), which gives the students experience working in a research environment.
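
    As an illustration of the kind of error analysis such a course emphasizes, a first-order uncertainty propagation for a simple-pendulum determination of g might look like the following sketch (the measured values and uncertainties are hypothetical, not data from the course):

```python
import math

# Hypothetical simple-pendulum data: length and period with their uncertainties.
L, dL = 1.000, 0.001   # m
T, dT = 2.006, 0.001   # s

g = 4 * math.pi ** 2 * L / T ** 2  # from T = 2*pi*sqrt(L/g)

# First-order propagation in quadrature: dg/g = sqrt((dL/L)^2 + (2*dT/T)^2).
# The factor 2 appears because g depends on the square of T.
rel_err = math.sqrt((dL / L) ** 2 + (2 * dT / T) ** 2)
dg = g * rel_err
```

    With these illustrative uncertainties, dg comes out near 0.014 m/s^2, showing why hitting the ±0.005 m/s^2 target forces the students to measure the period far more precisely than the length.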

  9. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and the Brazilian National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for the wavelength scanning near the Fe II 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point to establish a strong research program on Solar Physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  10. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select experimental design specifications that produce data specifically intended to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data that are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, often increasing the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. 
The representativity of the experiment
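
    The uncertainty-propagation step described above can be illustrated with the standard first-order "sandwich rule", var(R) = S C S^T, where S holds the sensitivities of a design response to the nuclear data and C is the data covariance matrix. The two-parameter numbers below are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical sensitivities of one design response (e.g. a reactivity
# coefficient) to two nuclear data values, and an invented covariance matrix.
S = [0.8, -0.3]
C = [[0.04, 0.01],
     [0.01, 0.09]]

# Sandwich rule: var(R) = sum_ij S_i * C_ij * S_j
var = sum(S[i] * C[i][j] * S[j] for i in range(2) for j in range(2))
std = var ** 0.5  # propagated standard deviation of the response
```

    Shrinking the diagonal of C through a well-chosen experiment directly shrinks var(R), which is what lets the designer trim the safety margin.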

  11. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive in both time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. By making use of the principles of Bayesian experimental design, we investigate the advantages and disadvantages of sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, by investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  12. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGESBeta

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    An eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
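
    The eight-run Plackett–Burman construction itself is compact: seven columns are cyclic shifts of a standard generator row, plus a closing all-minus run, giving a balanced, orthogonal two-level design. A minimal sketch, with a toy main-effects estimator (not the paper's analysis code):

```python
def plackett_burman_8():
    """8-run Plackett-Burman design for up to 7 two-level factors: seven cyclic
    shifts of the standard N=8 generator row, plus a closing all-minus run."""
    gen = [1, 1, 1, -1, 1, -1, -1]
    rows = [[gen[(j - i) % 7] for j in range(7)] for i in range(7)]
    rows.append([-1] * 7)
    return rows

def main_effects(design, y):
    """Main effect of each factor: mean response at +1 minus mean at -1,
    which for a balanced two-level design equals (2/n) * sum(x_ij * y_i)."""
    n = len(design)
    return [2 * sum(design[i][j] * y[i] for i in range(n)) / n
            for j in range(len(design[0]))]
```

    Because the columns are mutually orthogonal and balanced, each main effect is estimated independently of the others, and a dummy (unused) column, as in the study above, gives a rough estimate of the noise level.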

  14. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  15. Experimental source characterization techniques for studying the acoustic properties of perforates under high level acoustic excitation.

    PubMed

    Bodén, Hans

    2011-11-01

    This paper discusses experimental techniques for obtaining the acoustic properties of in-duct samples with non-linear acoustic characteristics. The methods developed are intended both for studies of non-linear energy transfer to higher harmonics for samples accessible only from one side, such as wall treatment in aircraft engine ducts or automotive exhaust systems, and for samples accessible from both sides, such as perforates or other top sheets. When harmonic sound waves are incident on the sample, non-linear energy transfer results in sound generation at higher harmonics at the sample (perforate) surface. The idea is that these sources can be characterized using linear system identification techniques similar to the one-port or two-port techniques traditionally used for obtaining source data for in-duct sources such as IC engines or fans. The starting point is so-called polyharmonic distortion modeling, which is used for characterizing the non-linear properties of microwave systems. It is shown how acoustic source data models can be expressed using this theory. Source models of different complexity are developed and experimentally tested. The results of the experimental tests show that these techniques can give results that are useful for understanding non-linear energy transfer to higher harmonics. PMID:22087890
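
    The polyharmonic idea rests on the fact that a nonlinearity pumps energy from the excitation frequency into its harmonics, whose amplitudes can be read off from single DFT bins. A minimal sketch with a hypothetical cubic nonlinearity (for y = sin θ + 0.1 sin³ θ, trigonometric identities give a third-harmonic amplitude of exactly 0.1/4 = 0.025):

```python
import cmath
import math

N = 256    # samples over exactly one period of the fundamental
EPS = 0.1  # hypothetical cubic nonlinearity strength

x = [math.sin(2 * math.pi * n / N) for n in range(N)]
y = [xi + EPS * xi ** 3 for xi in x]  # nonlinear response of the "sample"

def harmonic_amp(sig, k):
    # Amplitude of the k-th harmonic from a single DFT bin.
    X = sum(s * cmath.exp(-2j * math.pi * k * n / N) for n, s in enumerate(sig))
    return 2 * abs(X) / N

a1, a2, a3 = (harmonic_amp(y, k) for k in (1, 2, 3))
# An odd (cubic) nonlinearity feeds the 3rd harmonic but leaves the 2nd empty.
```

    The same bin-extraction step, applied to measured duct pressures, is what lets the higher-harmonic "sources" at the perforate surface be treated with linear system identification.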

  16. Improved Titanium Billet Inspection Sensitivity through Optimized Phased Array Design, Part I: Design Technique, Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Lupien, Vincent; Hassan, Waled; Dumas, Philippe

    2006-03-01

    Reductions in the beam diameter and pulse duration of focused ultrasound for titanium inspections are believed to result in a signal-to-noise ratio improvement for embedded defect detection. It has been inferred from this result that detection limits could be extended to smaller defects through a larger diameter, higher frequency transducer resulting in a reduced beamwidth and pulse duration. Using Continuum Probe Designer™ (Pat. Pending), a transducer array was developed for full coverage inspection of 8 inch titanium billets. The main challenge in realizing a large aperture phased array transducer for billet inspection is ensuring that the number of elements remains within the budget allotted by the driving electronics. The optimization technique implemented by Continuum Probe Designer™ yields an array with twice the aperture but the same number of elements as existing phased arrays for the same application. The unequal area element design was successfully manufactured and validated both numerically and experimentally. Part I of this two-part series presents the design, simulation and modeling steps, while Part II presents the experimental validation and comparative study to multizone.

  17. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins of Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins were FeSO₄, K₂HPO₄, starch, and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄, and starch showed a positive effect on delta-endotoxin production. However, FeSO₄ and MnSO₄ had the opposite effect. The developed model, based on Bayesian techniques, can automatically learn models emerging from the data to serve in the prediction of delta-endotoxin concentrations. The model constructed in the present study implies that an experimental design (Plackett-Burman design) combined with Bayesian network methods could be used to identify the variables affecting delta-endotoxin variation. PMID:26689874

  18. Experimental generation of longitudinally-modulated electron beams using an emittance exchange technique

    SciTech Connect

    Sun, Y.-E; Piot, P.; Johnson, A.; Lumpkin, A.; Maxwell, T.; Ruan, J.; Thurman-Keup, R.; /FERMILAB

    2010-08-01

    We report our experimental demonstration of longitudinal phase space modulation using a transverse-to-longitudinal emittance exchange technique. The experiment is carried out at the A0 photoinjector at Fermi National Accelerator Lab. A vertical multi-slit plate is inserted into the beamline prior to the emittance exchange, thus introducing beam horizontal profile modulation. After the emittance exchange, the longitudinal phase space coordinates (energy and time structures) of the beam are modulated accordingly. This is a clear demonstration of the transverse-to-longitudinal phase space exchange. In this paper, we present our experimental results on the measurement of energy profile as well as numerical simulations of the experiment.

  19. Study of Influence of Experimental Technique on Measured Particle Velocity Distributions in Fluidized Bed

    NASA Astrophysics Data System (ADS)

    Gopalan, Balaji; Shaffer, Frank

    2013-11-01

    Fluid flows loaded with a high concentration of solid particles are common in the oil and chemical processing industries. However, the opacity of the flow fields and the complex nature of the flow have hampered the experimental and computational study of these processes. This has led to the development of a number of customized experimental techniques for high-concentration particle flows for the evaluation and improvement of CFD models. These include techniques that track a few individual particles, techniques that measure average particle velocity over a small sample volume, and those that do so over a large sample volume. In this work, novel high speed PIV (HsPIV), with individual particle tracking, was utilized to measure the velocities of individual particles in gas-particle flow fields at the walls of circulating and bubbling fluidized beds. The HsPIV measurement technique has the ability to simultaneously recognize and track thousands of individual particles in flows of high particle concentration. To determine the effect of the size of the sample volume on particle velocity measurements, the PDF of the Lagrangian particle velocity was compared with the Eulerian PDF for different domain sizes over a range of flow conditions. The results show that the measured particle velocity distribution can vary from technique to technique, and this bias has to be accounted for in comparisons with CFD simulations.

  20. Virtual techniques for designing and fabricating a retainer.

    PubMed

    Nasef, Ahmed A; El-Beialy, Amr R; Mostafa, Yehya A

    2014-09-01

    The purpose of this article was to report a procedure for using 3-dimensional cone-beam computed tomography imaging, computer-aided design, computer-aided manufacturing, and rapid prototyping to design and produce a retainer. PMID:25172262

  1. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information gained about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experimental design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is its use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experimental design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model represents a simple but realistic scenario in the context of the CO2 sequestration that motivates this study. Our first synthetic test, using a single OF, shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This test also points out the importance of a well-chosen OF, depending on the target. To improve these results, we show how combining two OFs with a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments, exploring the influence of noise, specific site characteristics, and its potential for reservoir monitoring.
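
    The core of a multi-objective GA such as NSGA-II is Pareto dominance: a candidate design survives only if no other design is at least as good on every objective and strictly better on at least one. A minimal sketch of that dominance filter (not the authors' code; the objective values are made up, standing in for a resolution misfit and a survey cost, both to be minimized):

```python
def dominates(a, b):
    """True if design a is at least as good as b on every objective
    (minimization) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated subset of objective-vector designs."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# Hypothetical survey designs scored on (resolution misfit, acquisition cost):
designs = [(0.9, 1.0), (0.5, 3.0), (0.7, 2.0), (0.8, 2.2), (1.0, 0.8)]
front = pareto_front(designs)   # (0.8, 2.2) is dominated by (0.7, 2.0)
```

    NSGA-II additionally ranks successive fronts and spreads solutions along each front, but this dominance test is what lets it trade off several OFs without collapsing them into one weighted sum.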

  2. Automated measurement of birefringence - Development and experimental evaluation of the techniques

    NASA Technical Reports Server (NTRS)

    Voloshin, A. S.; Redner, A. S.

    1989-01-01

    Traditional photoelasticity has started to lose its appeal because it requires a well-trained specialist to acquire and interpret results. A spectral-contents-analysis approach may help to revive this old but still useful technique. The light intensity of a beam passed through a stressed specimen contains all the information necessary to extract the value of retardation automatically. This is done by using a photodiode array to investigate the spectral contents of the light beam. Three different techniques for extracting the value of retardation from the spectral contents of the light are discussed and evaluated. An experimental system was built that demonstrates the ability to evaluate retardation values in real time.

  3. Analytical and experimental evaluation of techniques for the fabrication of thermoplastic hologram storage devices

    NASA Technical Reports Server (NTRS)

    Rogers, J. W.

    1975-01-01

    The results of an experimental investigation of recording information on thermoplastic are given. A typical fabrication configuration, the recording sequence, and the samples examined are described. There are basically three configurations that can be used for recording information on thermoplastic. The most popular technique uses a corona, which furnishes free charge; the energy necessary for deformation is derived from a charge layer atop the thermoplastic. The other two techniques simply use a dc potential in place of the corona to supply the deformation energy.

  4. Solar Ion Sputter Deposition in the Lunar Regolith: Experimental Simulation Using Focused-Ion Beam Techniques

    NASA Technical Reports Server (NTRS)

    Christoffersen, R.; Rahman, Z.; Keller, L. P.

    2012-01-01

    As regions of the lunar regolith undergo space weathering, their component grains develop compositionally and microstructurally complex outer coatings or "rims" ranging in thickness from a few tens to a few hundreds of nm. Rims on grains in the finest size fractions (e.g., <20 μm) of mature lunar regoliths contain optically active concentrations of nm-size metallic Fe spherules, or "nanophase Fe^0", that redden and attenuate optical reflectance spectral features important in lunar remote sensing. Understanding the mechanisms of rim formation is therefore a key part of connecting the drivers of mineralogical and chemical changes in the lunar regolith with how lunar terrains are observed to become space weathered from a remotely sensed point of view. As interpreted from analytical transmission electron microscope (TEM) studies, rims are produced by varying relative contributions from: 1) direct solar ion irradiation effects that amorphize or otherwise modify the outer surface of the original host grain, and 2) nanoscale, layer-like deposition of extrinsic material processed from the surrounding soil. This extrinsic, deposited material is the dominant physical host of nanophase Fe^0 in the rims. An important lingering uncertainty is whether this deposited material condensed from regolith components locally vaporized in micrometeorite or larger impacts, or whether it formed as solar wind ions sputtered exposed soil and were re-deposited on less exposed areas. Deciding which of these mechanisms is dominant, or possibly exclusive, has been hampered because there is an insufficient library of chemical and microstructural "fingerprints" to distinguish deposits produced by the two processes. Experimental sputter deposition and characterization studies relevant to rim formation have particularly lagged since the early post-Apollo experiments of Hapke and others, especially with regard to the application of TEM-based characterization techniques.
Here

  5. Experimental Design on Laminated Veneer Lumber Fiber Composite: Surface Enhancement

    NASA Astrophysics Data System (ADS)

    Meekum, U.; Mingmongkol, Y.

    2010-06-01

    Thick laminated veneer lumber (LVL) fibre-reinforced composites were constructed from alternating, perpendicularly arrayed layers of peeled rubberwood. Woven glass was laid between the layers, and native golden teak veneers were used as faces. An in-house epoxy formulation was employed as the wood adhesive. The hand-laid laminate was cured at 150 °C for 45 min, and the cut specimens were post-cured at 80 °C for at least 5 hours. A 2^k factorial design of experiments (DOE) was used to verify the parameters. Three parameters were analysed: silane content in the epoxy formulation (A), smoke treatment of the rubberwood surface (B), and anti-termite application on the wood surface (C). Both the low and high levels were further subcategorised into two sub-levels. Flexural properties were the main response obtained. ANOVA analysis with a Pareto chart was employed, and the main-effects plot was also examined. The results showed that the interaction between silane quantity and termite treatment had a negative effect at the high level (AC+); conversely, the interaction between silane and smoke treatment had a significant positive effect at the high level (AB+). According to this work, the optimal settings for improving surface adhesion, and hence flexural properties, were a high level of silane (15% by weight), a high level of smoked wood layers (8 of 14 layers), and wood with low anti-termite application. Further tests also revealed that the LVL composite had properties superior to the solid woods but was slightly inferior in flexibility. The screw-withdrawal strength of the LVL was higher than that of solid wood, and it was also more resistant to moisture and termite attack than rubberwood.
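
    In a 2^k factorial DOE like the one above, each main effect and interaction is estimated as a signed average of the responses over the coded ±1 design matrix. A minimal sketch for a 2^3 design in factors A, B, C, using illustrative flexural-strength values rather than the paper's data:

```python
from itertools import product

# All 8 runs of a 2^3 factorial as coded (A, B, C) levels, in standard order.
runs = list(product([-1, 1], repeat=3))
# One hypothetical flexural-strength response per run (illustrative only).
y = [50, 52, 49, 55, 54, 58, 53, 62]

def contrast_sign(run, term):
    """Product of the coded levels named by term, e.g. term=(0, 1) -> A*B."""
    p = 1
    for i in term:
        p *= run[i]
    return p

def effect(term):
    """Average effect of a factor or interaction: 2/n * sum(sign * y)."""
    n = len(runs)
    return sum(contrast_sign(run, term) * yi for run, yi in zip(runs, y)) * 2 / n

A_effect  = effect((0,))     # main effect of silane content
AB_effect = effect((0, 1))   # silane x smoke interaction
AC_effect = effect((0, 2))   # silane x anti-termite interaction
```

    With real data, the Pareto chart the authors mention simply ranks the absolute values of these effects to show which terms matter.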

  6. Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Cheng, W.; Yeh, W. W.

    2012-12-01

    An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem that employs a maximal information criterion to choose among competing designs subject to specified design constraints. Because of the combinatorial search required, given a realistic, large-scale groundwater model the dimensionality of the optimal design problem becomes very large and can be difficult, if not impossible, to solve using mathematical programming techniques such as integer programming or the Simplex method with relaxation. Global search techniques, such as genetic algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls to a groundwater model, this approach may still be infeasible for finding the optimal design in a realistic groundwater model. Proper orthogonal decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby the computational burden of solving the optimization problem. For a one-dimensional test case, the GA, integer programming, and an exhaustive search yield identical results, demonstrating that the GA is a valid method for a global optimum search and has potential for solving large-scale optimal design problems. Additional results show that the algorithm using the GA with POD model reduction finds the optimal solution several orders of magnitude faster than an algorithm that employs the GA without POD model reduction. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.

  7. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, not least because many data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental designs, is described together with a lesser-known approach that takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models of different complexity, for sediment concentration in seawater and sediment accretion on salt marshes, served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements than with unoptimized experimental designs. The chosen optimization approach played a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.
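
    The weighted least squares estimator referred to above has a closed form for linear models: it minimizes the weighted sum of squared residuals, downweighting noisy observations. A self-contained sketch for a straight-line fit (an illustration of the estimator, not code from the toolbox):

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y ~ a + b*x.
    Solves the 2x2 weighted normal equations in closed form."""
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx
    b = (S * Sxy - Sx * Sy) / det
    a = (Sy - b * Sx) / S
    return a, b

# Exact data y = 1 + 2x is recovered regardless of the weights chosen.
a, b = wls_line([0, 1, 2, 3], [1, 3, 5, 7], [1.0, 2.0, 1.0, 0.5])
```

    In optimal experimental design the weights and the placement of the x-values are exactly what is tuned to shrink the estimator's asymptotic covariance.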

  8. Columbus meteoroid/debris protection study - Experimental simulation techniques and results

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Kitta, K.; Stilp, A.; Lambert, M.; Reimerdes, H. G.

    1992-08-01

    The methods and measurement techniques used in experimental simulations of micrometeoroid and space debris impacts on ESA's Columbus laboratory module are described. Experiments were carried out at the two-stage light-gas gun acceleration facilities of the Ernst-Mach Institute. Results are presented for simulations of normal impacts on bumper systems, oblique impacts on dual bumper systems, impacts into cooled targets, impacts into pressurized targets, and planar impacts of low-density projectiles.

  9. Techniques used in the alignment of TJNAF's accelerators and experimental halls

    SciTech Connect

    C.J. Curtis; J.C. Dahlberg; W.A. Oren; K.J. Tremblay

    1997-10-13

    With the successful completion of the main accelerator in 1994 the alignment emphasis at the Thomas Jefferson National Accelerator Facility (formerly CEBAF) switched to the continuing installation and upgrades in the three experimental halls. This presentation examines the techniques used in completing the CEBAF machine and also gives an update on the alignment of the new accelerator, a 1 kW free-electron laser, currently being built at the facility.

  10. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned for causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  11. Design of a digital compression technique for shuttle television

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Fultz, G.

    1976-01-01

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for Shuttle communications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended that provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
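
    DPCM, as referenced above, codes each sample as a quantized difference from a prediction built on previously reconstructed samples; a closed-loop encoder reconstructs exactly as the decoder will, so the two stay in lockstep. A one-dimensional sketch of the idea (the Shuttle study used a two-dimensional predictor; the step size and samples here are arbitrary):

```python
def dpcm_encode(samples, step=4):
    """Closed-loop DPCM: quantize the difference from the previously
    reconstructed sample, then update the local reconstruction."""
    codes, recon, prev = [], [], 0
    for s in samples:
        q = round((s - prev) / step)   # quantized prediction error
        codes.append(q)
        prev = prev + q * step         # decoder-side reconstruction
        recon.append(prev)
    return codes, recon

def dpcm_decode(codes, step=4):
    out, prev = [], 0
    for q in codes:
        prev += q * step
        out.append(prev)
    return out

codes, recon = dpcm_encode([10, 12, 15, 20, 18])
assert dpcm_decode(codes) == recon   # decoder matches encoder's reconstruction
```

    The bandwidth saving comes from the error codes needing far fewer bits than the raw samples, at the cost of quantization error bounded by the step size.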

  12. Simulation verification techniques study: Simulation self test hardware design and techniques report

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The final results of the hardware verification task are presented. The basic objectives of the various subtasks are reviewed, along with the ground rules under which the overall task was conducted and which shaped the approach taken in deriving techniques for hardware self-test. The results of the first subtask, the definition of simulation hardware, are presented; the hardware definition is based primarily on a brief review of the simulator configurations anticipated for the Shuttle training program. The results of the survey of current self-test techniques are then given: the data sources considered in the search for current techniques are reviewed, and the survey results are presented in terms of the specific types of tests of interest for training-simulator applications, namely readiness tests, fault-isolation tests, and incipient-fault-detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.

  13. Experimental study of liquid level gauge for liquid hydrogen using Helmholtz resonance technique

    NASA Astrophysics Data System (ADS)

    Nakano, Akihiro; Nishizu, Takahisa

    2016-07-01

    The Helmholtz resonance technique was applied to a liquid level gauge for liquid hydrogen to confirm its applicability in the cryogenic industrial field. A specially designed liquid level gauge containing a Helmholtz resonator with a small loudspeaker was installed in a glass cryostat. A swept-frequency signal was supplied to the loudspeaker, and the acoustic response was detected by measuring the electrical impedance of the loudspeaker's voice coil. The penetration depth obtained from the Helmholtz resonance frequency was compared with the true value read from a scale. In principle, the Helmholtz resonance technique can be used with liquid hydrogen; however, certain problems remain for practical applications. The applicability of the Helmholtz resonance technique to liquid hydrogen is discussed in this study.
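
    Such a gauge works because the resonance frequency of a Helmholtz resonator depends on the gas volume above the liquid: as the level rises, the cavity volume shrinks and the frequency rises. A sketch using the textbook relation f = (c/2π)·√(A/(VL)), with illustrative geometry rather than the authors' apparatus (for liquid hydrogen service, c would be the sound speed in the cold hydrogen vapor):

```python
import math

def helmholtz_freq(c, neck_area, neck_len, cavity_vol):
    """Textbook Helmholtz resonance: f = (c / 2*pi) * sqrt(A / (V * L))."""
    return c / (2 * math.pi) * math.sqrt(neck_area / (cavity_vol * neck_len))

def level_from_freq(f, c, neck_area, neck_len, empty_vol, tank_area):
    """Invert the relation: the gas volume above the liquid sets f,
    so the level follows from V = A * c^2 / ((2*pi*f)^2 * L)."""
    gas_vol = neck_area * c**2 / ((2 * math.pi * f) ** 2 * neck_len)
    return (empty_vol - gas_vol) / tank_area

# Round-trip check with illustrative geometry (SI units).
c, A, L, V0, At = 340.0, 1e-4, 0.05, 2e-3, 1e-2
f = helmholtz_freq(c, A, L, V0 * 0.5)          # tank half full
h = level_from_freq(f, c, A, L, V0, At)        # recovers the level
```

    In practice the sweep-and-impedance measurement locates f, and a calibration replaces the idealized formula; this sketch only shows why f encodes the level.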

  14. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
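
    The maximal information criterion described above, the sum of squared sensitivities over the chosen wells, can be made concrete with a toy exhaustive search; the paper's point is precisely that brute force does not scale, which is why the GA and POD reduction are needed. All sensitivity values below are hypothetical:

```python
from itertools import combinations

# Hypothetical sensitivity matrix: rows = candidate well locations,
# columns = unknown pumping rates; entry [i][j] is the sensitivity of
# the head at well i to pumping rate j (illustrative numbers).
S = [
    [0.9, 0.1],
    [0.2, 0.8],
    [0.5, 0.5],
    [0.1, 0.1],
]

def information(design):
    """Maximal information criterion: sum of squared sensitivities
    over the chosen wells."""
    return sum(S[i][j] ** 2 for i in design for j in range(len(S[0])))

def best_design(k):
    """Exhaustive search over all k-well subsets; tractable only for
    tiny cases, hence the GA in the paper."""
    return max(combinations(range(len(S)), k), key=information)

design = best_design(2)   # picks the two most informative wells
```

    Here wells 0 and 1 win because together they are sensitive to both pumping rates; in the full problem each sensitivity would come from a (reduced-order) groundwater model run.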

  15. Experimental comparison between speckle and grating-based imaging technique using synchrotron radiation X-rays.

    PubMed

    Kashyap, Yogesh; Wang, Hongchang; Sawhney, Kawal

    2016-08-01

    X-ray phase-contrast and dark-field imaging techniques provide important and complementary information that is inaccessible to conventional absorption-contrast imaging. Both grating-based imaging (GBI) and speckle-based imaging (SBI) are able to retrieve multi-modal images using synchrotron as well as lab-based sources; however, no systematic comparison between the two techniques has been made so far. We present an experimental comparison between the GBI and SBI techniques using a synchrotron radiation X-ray source. Apart from its simpler experimental setup, we find that SBI does not suffer from the phase-unwrapping issue, which can often be problematic for GBI. In addition, SBI is superior to GBI in that two orthogonal differential phase gradients can be extracted simultaneously from a one-dimensional scan. GBI, on the other hand, has less stringent requirements for detector pixel size and transverse coherence length when a second or third grating can be used. This study provides a reference for choosing the most suitable technique for diverse imaging applications at synchrotron facilities. PMID:27505829

  16. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    approaches are invariably saddled with overheads in terms of area or speed and, more importantly, power. The cost of protecting combinational logic with power-hungry mitigation approaches can thus disrupt the power budget significantly. There is therefore a strong need to develop techniques that provide both power minimization and combinational-logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft-error reliability are emphasized.

  17. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. The consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms into the design process. It uses quantification theory type I (QTTI) and grey prediction (the linear modeling techniques) and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and the product form elements of personal digital assistants (PDAs). The performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although PDA form design is used as the case study, the approach is applicable to other consumer products with various design elements and product images, and provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  18. Sequential experimental design approaches to helicopter rotor tuning

    NASA Astrophysics Data System (ADS)

    Wang, Shengda

    2005-07-01

    Two different approaches based on sequential experimental design concepts have been studied for helicopter rotor tuning, which is the process of adjusting the rotor blades so as to reduce the aircraft vibration and the spread of rotors. One uses an interval model adapted sequentially to improve the search for the blade adjustments. The other uses a probability model to search for the blade adjustments with the maximal probability of success. In the first approach, an interval model is used to represent the range of effect of blade adjustments on helicopter vibration, so as to cope with the nonlinear and stochastic nature of aircraft vibration. The coefficients of the model are initially defined according to sensitivity coefficients between the blade adjustments and helicopter vibration, to include the expert knowledge of the process. The model coefficients are subsequently transformed into intervals and updated after each tuning iteration to improve the model's estimation accuracy. The search for the blade adjustments is performed according to this model by considering the vibration estimates of all of the flight regimes so as to provide a comprehensive solution for rotor tuning. The second approach studied uses a probability model to maximize the likelihood of success of the selected blade adjustments. The underlying model in this approach consists of two segments: a deterministic segment to include a linear regression model representing the relationships between the blade adjustments and helicopter vibration, and a stochastic segment to comprise probability densities of the vibration components. The blade adjustments with the maximal probability of generating acceptable vibration are selected as recommended adjustments. The effectiveness of the proposed approaches is evaluated in simulation based on a series of neural networks trained with actual vibration data. To incorporate the stochastic behavior of the helicopter vibration and better simulate the tuning

  19. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate compared with world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be largest in the Arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g., Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stress of heavy metal and sulfur pollution generated by the metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition, and GHG emissions, as mediated by the responses of soil microbial communities, so adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method for promoting a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal-polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, in terms of plant growth, soil organisms, and GHG emissions, and 2) to determine whether biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station, less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effects of biochar amendment and a 3 °C warming on the arctic meadow.
Ten circular

  20. The estimation technique of the airframe design for manufacturability

    NASA Astrophysics Data System (ADS)

    Govorkov, A.; Zhilyaev, A.

    2016-04-01

    This paper discusses a method for the quantitative estimation of the design for manufacturability of airframe parts. The method is based on the interaction of individual indicators, each considered with a weighting factor. The authors introduce an algorithm for evaluating the design for manufacturability of a part based on its 3D model.
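
    The weighted-indicator idea can be sketched as a weighted average of normalized indicator scores. This is an illustration of the general scheme only, not the authors' exact formula, and the indicator names and values below are hypothetical:

```python
def manufacturability_index(indicators, weights):
    """Weighted aggregate of individual manufacturability indicators,
    each scored on [0, 1]; weights encode relative importance.
    A sketch of the weighted-factor idea, not the paper's formula."""
    assert len(indicators) == len(weights)
    return sum(k * w for k, w in zip(indicators, weights)) / sum(weights)

# Hypothetical indicators for one airframe part: material utilization,
# machining accessibility, assembly complexity.
idx = manufacturability_index([0.8, 0.6, 0.9], [3.0, 2.0, 1.0])
```

    Normalizing by the weight sum keeps the index on [0, 1], so parts of different complexity can be compared on one scale.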

  1. Experimental techniques for studying poroelasticity in brain phantom gels under high flow microinfusion.

    PubMed

    Ivanchenko, O; Sindhwani, N; Linninger, A

    2010-05-01

    Convection-enhanced delivery is an attractive option for the treatment of several neurological conditions such as Parkinson's disease, Alzheimer's disease, and brain tumors. However, the occurrence of backflow is a major problem impeding the widespread use of this technique. In this paper, we analyze experimentally the mechanical impact of high-flow microinfusion on a deformable gel matrix. To investigate these fluid-structure interactions, two optical methods are reported. First, gel stresses during microinfusion were visualized with a linear polariscope. Second, the displacement field was tracked using 400 nm nanobeads as space markers. The corresponding strain and porosity fields were calculated from the experimental observations. Finally, the experimental data were used to validate a computational model of fluid flow and deformation in soft porous media. Our studies demonstrate experimentally the distribution and magnitude of the stress and displacement fields near the catheter tip, and the effect of fluid traction on porosity and hydraulic conductivity is analyzed. The increase in fluid content in the catheter vicinity enhances the gel's hydraulic conductivity. Our computational model takes these changes in porosity and hydraulic conductivity into account, and the simulations agree with the experimental findings. The experiments quantified solid matrix deformation due to fluid infusion. Maximum deformations occur in areas of relatively large fluid velocities, leading to volumetric strain of the matrix and causing changes in hydraulic conductivity and porosity close to the catheter tip. The gradual expansion of this region of increased porosity leads to decreased hydraulic resistance and may also create an alternative pathway for fluid flow. PMID:20459209

  2. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique for resolving fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear (S-criterion) approach. Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position-one-labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi
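
    The D-criterion used above scores a candidate tracer mixture by the determinant of the (approximate) Fisher information matrix, det(JᵀJ), where J holds the sensitivities of the labeling measurements to the fluxes; a larger determinant means a smaller joint confidence region for the estimated fluxes. A minimal two-flux sketch with hypothetical sensitivities:

```python
def d_criterion(J):
    """D-criterion for a two-parameter model: det(J^T J), where J is the
    sensitivity (Jacobian) matrix of measurements w.r.t. the fluxes.
    Larger values mean a tighter joint confidence region."""
    jtj = [[sum(J[k][i] * J[k][j] for k in range(len(J)))
            for j in range(2)] for i in range(2)]
    return jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]

# Hypothetical sensitivities of three labeling measurements to two fluxes
# under two candidate tracer mixtures.
J_mix_a = [[1.0, 0.0], [0.0, 0.2], [0.5, 0.1]]
J_mix_b = [[0.6, 0.5], [0.6, 0.5], [0.6, 0.5]]   # collinear: fluxes not separable
better = "a" if d_criterion(J_mix_a) > d_criterion(J_mix_b) else "b"
```

    Because this is just a determinant of a small matrix, it is cheap enough for the high-throughput tracer screening the abstract describes; the multi-objective step then trades this score against tracer cost.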

  3. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage in the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over, devoted especially to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. Results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  4. Cryogenic refractor design techniques. [for Infrared Astronomy Satellite

    NASA Technical Reports Server (NTRS)

    Darnell, R. J.

    1985-01-01

    The Infrared Astronomical Satellite (IRAS) was designed to operate at 2 K over the spectral range of 8 to 120 micrometers. The focal plane is approximately 2 by 3 inches in size and contains 62 individual field stop apertures, each with its own field lens, one or more filters, and a detector. The design of the lenses involved a number of difficulties and challenges not usually encountered in optical design. The operating temperature is assumed during the design phase, which requires reliable information on dN/dT (the index coefficient) for the materials. The optics and all supporting structures are then expanded to room temperature, which requires thermal expansion coefficient data for the various materials and meticulous attention to detail. The small size and dense packaging, as well as the high precision required, further contributed to the magnitude of the task.

  5. INNOVATIVE TECHNIQUE TO EVALUATE LINT CLEANER GRID BAR DESIGNS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Photographic techniques were used to show the path that fibers attached to a gin saw take as they are drawn over a lint cleaner cleaning grid bar. A 1979 study showed that fibers were swept backwards, closer to the saw, as saw speed increased. The angle between the tip of the saw tooth and the fib...

  6. Artificial tektites: an experimental technique for capturing the shapes of spinning drops.

    PubMed

    Baldwin, Kyle A; Butler, Samuel L; Hill, Richard J A

    2015-01-01

    Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or 'dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax 'artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation. PMID:25564381

  7. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    NASA Astrophysics Data System (ADS)

    Baldwin, Kyle A.; Butler, Samuel L.; Hill, Richard J. A.

    2015-01-01

    Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or `dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax `artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation.

  8. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    PubMed Central

    Baldwin, Kyle A.; Butler, Samuel L.; Hill, Richard J. A.

    2015-01-01

    Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or ‘dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax ‘artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation. PMID:25564381

  9. Design considerations and experimental analysis for silicon carbide power rectifiers

    NASA Astrophysics Data System (ADS)

    Khemka, V.; Patel, R.; Chow, T. P.; Gutmann, R. J.

    1999-10-01

    In this paper we present an investigation of the properties of silicon carbide power rectifiers, in particular Schottky, PiN and advanced hybrid power rectifiers such as the trench MOS barrier Schottky rectifier. Analyses of the forward, reverse and switching experimental characteristics are presented, and these silicon carbide rectifiers are compared to silicon devices. Silicon carbide Schottky rectifiers are attractive for applications requiring blocking voltage in excess of 100 V, where the use of Si is precluded by its large specific on-resistance. Analysis of power dissipation indicates that silicon carbide Schottky rectifiers offer significant improvement over their silicon counterparts. Silicon carbide junction rectifiers, on the other hand, are superior to silicon counterparts only for blocking voltages greater than 2000 V. The performance of acceptor (boron) and donor (phosphorus) implanted experimental silicon carbide junction rectifiers is presented and compared. Some recent developments in silicon carbide rectifiers are described and compared with theory and with our experimental results. Well-established silicon rectifier theories are often inadequate to describe the characteristics of experimental silicon carbide junction rectifiers, and appropriate generalizations of these theories are presented. Experimental trench MOS barrier Schottky (TMBS) rectifiers have demonstrated significant improvement in leakage current compared to planar Schottky devices. The performance of current state-of-the-art silicon carbide rectifiers is far from theoretical predictions; availability of high-quality silicon carbide crystals is crucial to the successful realization of these performance projections.
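
    The claim that SiC displaces Si above roughly 100 V rests on the ideal unipolar drift-region limit, R_on,sp = 4·BV²/(ε_s·μ_n·E_c³), which scales with the cube of the critical field. A sketch with typical textbook material constants (the exact values vary between references and are only indicative here):

```python
# Sketch of the ideal unipolar specific on-resistance limit for Si vs 4H-SiC
# drift regions. Material constants are typical textbook values (assumed),
# not taken from the paper.

def ron_sp(bv, eps_s, mu_n, e_c):
    """Ideal drift-layer specific on-resistance, ohm*cm^2:
    R_on,sp = 4*BV^2 / (eps_s * mu_n * E_c^3)."""
    return 4.0 * bv**2 / (eps_s * mu_n * e_c**3)

BV = 1000.0  # blocking voltage, V
r_si  = ron_sp(BV, eps_s=1.05e-12, mu_n=1350.0, e_c=3.0e5)   # silicon
r_sic = ron_sp(BV, eps_s=8.55e-13, mu_n=900.0,  e_c=2.2e6)   # 4H-SiC

print(f"Si  : {r_si:.3e} ohm*cm^2")
print(f"SiC : {r_sic:.3e} ohm*cm^2")
print(f"SiC advantage: ~{r_si / r_sic:.0f}x lower on-resistance")
```

With these numbers the SiC drift region is a few hundred times less resistive at the same blocking voltage, which is the physical basis for the power-dissipation advantage discussed in the abstract.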

  10. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    The experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land, to its degradation under long-standing pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These will provide data sets of soil parameters, processes and functions to be incorporated into the mathematical models. The field sites are: 1) the BigLink field station, located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic, representative of productive soils managed for intensive forestry; 3) the Fuchsenbigl Field Station in Austria, an agricultural research site representative of productive soils managed as arable land; and 4) the Koiliaris Catchment in Crete, Greece, which represents degraded Mediterranean-region soils, heavily impacted by centuries of intensive grazing and farming and under severe risk of desertification.

  11. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed experiment (a video…

  12. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    ERIC Educational Resources Information Center

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  13. SSSFD manipulator engineering using statistical experiment design techniques

    NASA Technical Reports Server (NTRS)

    Barnes, John

    1991-01-01

    The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6-degree-of-freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost-efficient tools were combined to perform a study of robot manipulator design parameters. These tools are graphical computer simulations and Taguchi Design of Experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics are determined with minimal cost.
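
    The Taguchi approach mentioned above replaces a full factorial with a small orthogonal array and ranks parameters by their main effects. A minimal sketch using an L4 array and a made-up "dexterity score" response (the factor names and response model are hypothetical, not the SSSFD data):

```python
# Minimal sketch of Taguchi-style screening: run an L4 orthogonal array
# (3 two-level factors in 4 runs) and rank factors by main effect.
# The factors and the toy response below are illustrative only.

# L4 orthogonal array, levels coded 0/1.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def dexterity_score(link_len, joint_offset, twist):
    # hypothetical linear response: link length dominates, twist matters little
    return 10.0 + 4.0 * link_len - 1.5 * joint_offset + 0.2 * twist

runs = [dexterity_score(*levels) for levels in L4]

def main_effect(col):
    hi = [r for levels, r in zip(L4, runs) if levels[col] == 1]
    lo = [r for levels, r in zip(L4, runs) if levels[col] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(["link_len", "joint_offset", "twist"]):
    print(f"{name:13s} main effect = {main_effect(i):+.2f}")
```

Because the array columns are orthogonal, the four runs recover each factor's effect exactly for a linear response, which is why the study could rank kinematic parameters "with minimal cost".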

  14. Music and video iconicity: theory and experimental design.

    PubMed

    Kendall, Roger A

    2005-01-01

    Experimental studies on the relationship between quasi-musical patterns and visual movement have largely focused on either referential, associative aspects or syntactical, accent-oriented alignments. Both of these are very important; however, between the referential and the areferential lies a domain where visual pattern perceptually connects to musical pattern; this is iconicity. The temporal syntax of accent structures in iconicity is hypothesized to be important. Beyond that, a multidimensional visual space connects to musical patterning through mapping of visual time/space to musical time/magnitudes. Experimental visual and musical correlates are presented and comparisons to previous research provided. PMID:15684561

  15. Techniques for dataset design: a utilization management system model.

    PubMed

    Fuller, S R; O'Gara, S A

    1992-05-01

    Designing a clinical information system offers a sense of accomplishment similar to that of a dramatic performance. The development of the data dictionary and proposed system description requires the same attention to detail as stage directions in a script. The people involved in daily system operation are of key importance in developing a clear understanding of how things actually happen in the information flow and decision process. Once the business rules are defined and edits and conditions are developed to ensure data integrity, it is time to step back and let the performance begin. The real power of the user-designed system, like that of a performance before a live audience, comes with the ability to query the data for answers to issues and problems decision makers did not face at the time of the initial system design. PMID:10119031

  16. The development of experimental techniques for the study of helicopter rotor noise

    NASA Technical Reports Server (NTRS)

    Widnall, S. E.; Harris, W. L.; Lee, Y. C. A.; Drees, H. M.

    1974-01-01

    The features of existing wind tunnels involved in noise studies are discussed. The acoustic characteristics of the MIT low-noise open-jet wind tunnel are obtained by employing two calibration techniques: one is to measure the decay of sound pressure with distance in the far field; the other is to use a calibrated speaker as a sound source. The sound pressure level versus frequency was obtained in the wind tunnel chamber and compared with the corresponding calibrated values. Fiberglas board-block units were installed on the chamber interior. The free field was increased significantly after this treatment, and the chamber cut-off frequency was reduced to 160 Hz from the original design value of 250 Hz. The flow field characteristics of the rotor-tunnel configuration were studied by using flow visualization techniques. The influence of the open-jet shear layer on sound transmission was studied by using an Aeolian tone as the sound source. A dynamometer system was designed to measure the steady and low harmonics of the rotor thrust. A theoretical Mach number scaling formula was developed to scale the rotational noise and blade slap noise data of model rotors to full-scale helicopter rotors.
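
    The far-field decay calibration above relies on the free-field inverse-square law: sound pressure level falls 6 dB for each doubling of distance from the source. A sketch of that check, with made-up traverse measurements:

```python
# Sketch of a free-field calibration check: compare a measured SPL traverse
# against the inverse-square law. The measurements below are invented for
# illustration, not data from the MIT tunnel.

import math

def freefield_drop_db(r1, r2):
    # inverse-square law: SPL(r2) - SPL(r1) = -20*log10(r2/r1)
    return -20.0 * math.log10(r2 / r1)

# hypothetical far-field traverse: (distance m, measured SPL dB)
measurements = [(1.0, 94.0), (2.0, 88.1), (4.0, 81.9)]

r0, spl0 = measurements[0]
for r, spl in measurements[1:]:
    ideal = spl0 + freefield_drop_db(r0, r)
    print(f"r={r} m: measured {spl} dB, free-field ideal {ideal:.1f} dB, "
          f"deviation {spl - ideal:+.1f} dB")
```

Small deviations from the ideal 6 dB-per-doubling slope indicate how closely the treated chamber approximates a free field above its cut-off frequency.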

  17. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  18. Robust control design techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay; Bachmann, Glen R.

    1994-01-01

    In this paper, an active flutter suppression problem is studied for a thin airfoil in unsteady aerodynamics. The mathematical model of this system is infinite dimensional because of Theodorsen's function which is irrational. Several second order approximations of Theodorsen's function are compared. A finite dimensional model is obtained from such an approximation. We use H infinity control techniques to find a robustly stabilizing controller for active flutter suppression.
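
    The rational approximations of Theodorsen's function mentioned above replace the irrational lift-deficiency function C(k) with a finite-dimensional transfer function. As one well-known example (an assumption here; the paper compares its own second-order approximants, not necessarily this one), R. T. Jones' classic two-pole fit can be evaluated directly:

```python
# Sketch of Jones' classic rational approximation of Theodorsen's
# lift-deficiency function C(k), k = reduced frequency > 0:
#   C(k) ~= 1 - 0.165/(1 - 0.0455j/k) - 0.335/(1 - 0.300j/k)
# This particular fit is an assumption for illustration; the paper's own
# second-order approximants may differ.

def theodorsen_jones(k):
    return 1.0 - 0.165 / (1.0 - 0.0455j / k) - 0.335 / (1.0 - 0.300j / k)

# Sanity limits of the exact function: C(0+) = 1, C(inf) = 0.5
for k in (0.01, 0.1, 1.0, 100.0):
    c = theodorsen_jones(k)
    print(f"k={k:6.2f}  C(k) = {c.real:.3f} {c.imag:+.3f}j")
```

The approximation reproduces the exact limits C(0) = 1 and C(∞) = 0.5, which is what makes a finite-dimensional flutter model tractable for H-infinity synthesis.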

  19. Respiratory protective device design using control system techniques

    NASA Technical Reports Server (NTRS)

    Burgess, W. A.; Yankovich, D.

    1972-01-01

    The feasibility of a control system analysis approach to provide a design base for respiratory protective devices is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notations describe the operation of the components as closely as possible. The individual component mathematical descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical notation by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.
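
    The workflow above (model each component as a transfer function, combine them, then check stability analytically) can be sketched on a deliberately tiny example. The first-order facepiece dynamics and the gains below are hypothetical placeholders, not taken from the report:

```python
# Toy sketch of the control-system approach: model a component as a
# first-order transfer function, close a proportional loop, and check
# stability from the characteristic-polynomial root. All numbers are
# hypothetical placeholders.

# Plant: facepiece pressure dynamics G(s) = K/(tau*s + 1) (assumed)
K, tau = 2.0, 0.5
# Proportional demand-valve controller gain (assumed)
Kp = 3.0

# Closed loop characteristic equation: tau*s + 1 + Kp*K = 0
pole = -(1.0 + Kp * K) / tau
print(f"closed-loop pole at s = {pole:.2f} (stable: {pole < 0})")

# Closed-loop time constant shows the speed-up from feedback
tau_cl = tau / (1.0 + Kp * K)
print(f"closed-loop time constant = {tau_cl:.3f} s")
```

A negative real pole means the combined system responds in a "stable and predictable manner", which is exactly the property the report's analysis is meant to guarantee by choice of compensating component values.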

  20. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  1. Study of an experimental technique for application to structural dynamic problems

    NASA Technical Reports Server (NTRS)

    Snell, R. F.

    1973-01-01

    An experimental program was conducted to determine the feasibility of using subscale plastic models to determine the response of full-scale aerospace structural components to impulsive, pyrotechnic loadings. A monocoque cylinder was impulsively loaded around the circumference of one end, causing a compressive stress wave to propagate in the axial direction. The resulting structural responses of two configurations of the cylinder (with and without a cutout) were recorded by photoelasticity, strain gages, and accelerometers. A maximum dynamic stress concentration was photoelastically determined and the accelerations calculated from strain-gage data were in good agreement with those recorded by accelerometers. It is concluded that reliable, quantitative structural response data can be obtained by the experimental techniques described in this report.

  2. Experimental verification of a computational technique for determining ground reactions in human bipedal stance.

    PubMed

    Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J

    2007-01-01

    We have developed a three-dimensional (3D) biomechanical model of human standing that enables us to study the mechanisms of posture and balance simultaneously in various directions in space. Since the two feet are on the ground, the system defines a kinematically closed chain, which has redundancy problems that cannot be resolved using the laws of mechanics alone. We have developed a computational (optimization) technique that avoids the problems with the closed-chain formulation, thus giving users of such models the ability to make predictions of joint moments, and potentially, muscle activations using more sophisticated musculoskeletal models. This paper describes the experimental verification of the computational technique that is used to estimate the ground reaction vector acting on an unconstrained foot while the other foot is attached to the ground, thus allowing human bipedal standing to be analyzed as an open-chain system. The computational approach was verified in terms of its ability to predict lower extremity joint moments derived from inverse dynamic simulations performed on data acquired from four able-bodied volunteers standing in various postures on force platforms. Sensitivity analyses performed with model simulations indicated which ground reaction force (GRF) and center of pressure (COP) components were most critical for providing better estimates of the joint moments. Overall, the joint moments predicted by the optimization approach are strongly correlated with the joint moments computed using the experimentally measured GRF and COP (0.78 ≤ r² ≤ 0.99, median 0.96), with a best fit that was not statistically different from a straight line with unity slope (experimental = computational results) for the postures of the four subjects examined. These results indicate that this model-based technique can be relied upon to predict reasonable and consistent estimates of the joint moments using the predicted GRF and COP for most standing postures.
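
    The validation statistic quoted above comes from regressing computationally predicted joint moments against experimentally derived ones and checking that the best fit is close to a unity-slope line. A sketch with invented paired data:

```python
# Sketch of the validation regression: slope, intercept, and r^2 of predicted
# vs measured joint moments. The paired data points are invented for
# illustration, not the study's measurements.

def linreg(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

experimental = [10.0, 22.0, 35.0, 41.0, 55.0]   # N*m, from measured GRF/COP
computational = [11.0, 21.0, 36.0, 40.0, 56.0]  # N*m, from optimization

slope, intercept, r2 = linreg(experimental, computational)
print(f"slope={slope:.3f} intercept={intercept:.2f} r^2={r2:.3f}")
```

A slope near 1 with high r² is the evidence pattern the abstract reports (0.78 ≤ r² ≤ 0.99 across subjects and postures).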

  3. Evaluation Design: New York State Experimental Prekindergarten Program.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Child Development and Parent Education.

    In order to expose disadvantaged preschool children to a variety of educational experiences and to health and social services, the New York State Legislature funded the State Experimental Prekindergarten Program (PreK). In 1975, a five-year longitudinal evaluation study was begun. The study has two major parts: (1) a general study of 5,800…

  4. Association mapping: critical considerations shift from genotyping to experimental design

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The goal of many plant scientists’ research is to explain natural phenotypic variation in terms of simple changes in DNA sequence. Traditionally, linkage mapping has been the most commonly employed method to reach this goal: experimental crosses are made to generate a family with known relatedness ...

  5. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  6. Integration of Risk Management Techniques into Outdoor Adventure Program Design.

    ERIC Educational Resources Information Center

    Bruner, Eric V.

    This paper is designed to acquaint the outdoor professional with the risk management decision making process required for the operation and management of outdoor adventure activities. The document examines the programming implications of fear in adventure activities; the risk management process in adventure programming; a definition of an…

  7. Experimental demonstration of tomographic slit technique for measurement of arbitrary intensity profiles of light beams

    NASA Astrophysics Data System (ADS)

    Soto, José; Rendón, Manuel; Martín, Manuel

    1997-10-01

    We demonstrate experimentally an optical imaging method that makes use of a slit to collect tomographic projection data of arbitrarily shaped light beams; a tomographic backprojection algorithm is then used to reconstruct the intensity profiles of these beams. Two different implementations of the method are presented. In one, a single slit is scanned and rotated in front of the laser beam. In the other, the sides of a polygonal slit, which is linearly displaced in an x-y plane perpendicular to the beam, are used to collect the data. This latter version is better suited to adaptation at the micrometer scale. A mathematical justification is given here for the superior performance against laser-power fluctuations of the tomographic slit technique compared with the better-known tomographic knife-edge technique.
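
    A slit integrates beam intensity along one direction, so each slit orientation yields a 1D projection, and summing the projections back along their lines (unfiltered backprojection) recovers a blurred profile. A toy sketch with a made-up two-spot "beam" and four slit angles:

```python
# Toy sketch of slit tomography: project a 2D intensity map along several
# slit orientations, then backproject. The two-spot beam and the four angles
# are illustrative only; real use adds filtering for a sharp reconstruction.

import math

N = 21
beam = [[0.0] * N for _ in range(N)]
beam[6][6] = 1.0     # hypothetical bright spot at (x=6, y=6)
beam[14][15] = 0.5   # weaker spot at (x=15, y=14)

angles = [0.0, 45.0, 90.0, 135.0]
c = (N - 1) / 2.0

def project(img, theta_deg):
    """Sum intensity into bins along direction theta (slit projection)."""
    t = math.radians(theta_deg)
    proj = [0.0] * (2 * N)
    for y in range(N):
        for x in range(N):
            s = (x - c) * math.cos(t) + (y - c) * math.sin(t)
            proj[int(round(s + N))] += img[y][x]
    return proj

# Unfiltered backprojection: smear each projection back across the grid.
recon = [[0.0] * N for _ in range(N)]
for a in angles:
    p = project(beam, a)
    t = math.radians(a)
    for y in range(N):
        for x in range(N):
            s = (x - c) * math.cos(t) + (y - c) * math.sin(t)
            recon[y][x] += p[int(round(s + N))]

peak = max((v, x, y) for y, row in enumerate(recon) for x, v in enumerate(row))
print("reconstructed peak at (x, y) =", (peak[1], peak[2]))
```

Even with only four projections the strongest spot is localized correctly; a practical reconstruction would use many angles and a filtered backprojection kernel.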

  8. Experimental Technique and Assessment for Measuring the Convective Heat Transfer Coefficient from Natural Ice Accretions

    NASA Technical Reports Server (NTRS)

    Masiulaniec, K. Cyril; Vanfossen, G. James, Jr.; Dewitt, Kenneth J.; Dukhan, Nihad

    1995-01-01

    A technique was developed to cast frozen ice shapes that had been grown on a metal surface. This technique was applied to a series of ice shapes grown in the NASA Lewis Icing Research Tunnel on flat plates. Nine flat plates, 18 inches square, were obtained, from which aluminum castings were made that gave good ice shape characterizations. Test strips taken from these plates were outfitted with heat flux gages so that, when placed in a dry wind tunnel, they can be used to experimentally map out the convective heat transfer coefficient in the direction of flow from the roughened surfaces. The effects on the heat transfer coefficient of both parallel and accelerating flow will be studied. The smooth-plate model verification baseline data as well as one ice-roughened test case are presented.

  9. Experimental, theoretical and computational study of frequency upshift of electromagnetic radiation using plasma techniques

    SciTech Connect

    Joshi, C.

    1992-09-01

    This is a second-year progress report on "Experimental, Theoretical and Computational Study of Frequency Upshift of Electromagnetic Radiation Using Plasma Techniques." The highlights are: (I) Ionization fronts have been shown to frequency-upshift e.m. radiation by more than a factor of 5. In the experiments, 33 GHz microwave radiation is upshifted to more than 175 GHz using a relativistically propagating ionization front created by a laser beam. (II) A Letter describing the results has been published in Physical Review Letters and an invited paper has been submitted to IEEE Transactions on Plasma Science.

  10. Experimental design for research on shock-turbulence interaction

    NASA Technical Reports Server (NTRS)

    Radcliffe, S. W.

    1969-01-01

    Report investigates the production of acoustic waves in the interaction of a supersonic shock and a turbulence environment. The five stages of the investigation are apparatus design, development of instrumentation, preliminary experiment, turbulence generator selection, and main experiments.

  11. International Thermonuclear Experimental Reactor (ITER) neutral beam design

    SciTech Connect

    Myers, T.J.; Brook, J.W.; Spampinato, P.T.; Mueller, J.P.; Luzzi, T.E.; Sedgley, D.W. (Space Systems Div.)

    1990-10-01

    This report discusses the following topics on ITER neutral beam design: ion dump; neutralizer and module gas flow analysis; vacuum system; cryogenic system; maintainability; power distribution; and system cost.

  12. An experimental technique of split Hopkinson pressure bar using fiber micro-displacement interferometer system for any reflector

    NASA Astrophysics Data System (ADS)

    Fu, H.; Tang, X. R.; Li, J. L.; Tan, D. W.

    2014-04-01

    A novel non-contact measurement technique has been developed for determining the mechanical properties of materials in the Split Hopkinson Pressure Bar (SHPB). Instead of the traditional strain gages mounted on the surfaces of the bars, two shutters were mounted on the ends of the bars to directly measure interfacial velocity using a Fiber Micro-Displacement Interferometer System for Any Reflector. Using the new technique, the integrated stress-strain response can be determined. The experimental technique was validated against SHPB test simulations and has been used to investigate the dynamic response of a brittle explosive material. The results show that the new experimental technique can be applied to characterizing dynamic behavior in SHPB tests.
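
    With the two interface velocities v1(t) and v2(t) in hand, the standard 1-D SHPB relations give specimen strain rate (v1 − v2)/L_s and stress ρ_b·c_b·v2·(A_b/A_s). A sketch of that data reduction with made-up bar properties and triangular velocity pulses (not the paper's signals):

```python
# Sketch of standard 1-D SHPB data reduction from interface velocities:
#   strain rate  de/dt = (v1 - v2) / L_s
#   stress       sigma = rho_b * c_b * v2 * (A_b / A_s)
# Bar/specimen numbers and the ramp velocity histories are made up.

rho_b, c_b = 7800.0, 5000.0      # steel bar density (kg/m^3), wave speed (m/s)
A_ratio = 1.5                    # bar area / specimen area (assumed)
L_s = 0.005                      # specimen length, m
dt = 1e-6                        # sample interval, s

# hypothetical interface velocity histories (m/s)
v1 = [0.20 * i for i in range(50)]   # incident-bar interface
v2 = [0.15 * i for i in range(50)]   # transmission-bar interface

strain, stress, eps = [], [], 0.0
for a, b in zip(v1, v2):
    eps += (a - b) / L_s * dt                    # integrate strain rate
    strain.append(eps)
    stress.append(rho_b * c_b * b * A_ratio / 1e6)  # MPa

print(f"final strain = {strain[-1]:.4f}, final stress = {stress[-1]:.1f} MPa")
```

Pairing stress and strain sample by sample yields the integrated stress-strain response the abstract refers to.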

  13. A new experimental device to evaluate eye ulcers using a multispectral electrical impedance technique

    NASA Astrophysics Data System (ADS)

    Bellotti, Mariela I.; Bast, Walter; Berra, Alejandro; Bonetto, Fabián J.

    2011-07-01

    We present a novel experimental technique to detect eye ulcers in animals using a spectral electrical impedance technique. We expect that this technique will be useful in dry eye syndrome. We used a sensor that is basically a platinum (Pt) microelectrode electrically insulated by glass from a cylindrical stainless steel counter-electrode. This sensor was applied to the naked eye of New Zealand rabbits (2.0-3.5 kg in weight). Whereas half of the eyes were normal (control), we applied to the remainder a few drops of 20% (v/v) alcohol to produce an ulcer in the eye. Using a multispectral electrical impedance system, we measured ulcerated and control eyes and observed significant differences between normal and pathological samples. We also investigated the effects of different applied pressures and the natural degradation of initially normal eyes as a function of time. We believe that this technique could be sufficiently sensitive and repeatable to help diagnose ocular surface diseases such as dry eye syndrome.

  14. Design Techniques for Uniform-DFT, Linear Phase Filter Banks

    NASA Technical Reports Server (NTRS)

    Sun, Honglin; DeLeon, Phillip

    1999-01-01

    Uniform-DFT filter banks are an important class of filter banks, and their theory is well known. One notable characteristic is their very efficient implementation using polyphase filters and the FFT. Separately, linear phase filter banks, i.e., filter banks in which the analysis filters have linear phase, are also an important class and are desired in many applications. Unfortunately, it has been proved that one cannot design critically sampled, uniform-DFT, linear phase filter banks that achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and, in addition, prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
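
    The efficient implementation noted above decomposes a single prototype filter into polyphase branches and applies an M-point DFT across the branch outputs. A deliberately tiny sketch (a 2-tap averaging prototype standing in for a designed lowpass, and a hand-rolled DFT in place of the FFT):

```python
# Sketch of a critically sampled uniform-DFT analysis bank: polyphase
# decomposition of a prototype h, then an M-point DFT across the branches.
# The 2-tap averaging prototype is a toy stand-in for a designed filter.

import cmath

def dft(v):
    M = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * k * n / M)
                for n in range(M)) for k in range(M)]

def uniform_dft_analysis(x, h, M):
    """Split x into M subbands, one output sample per block of M inputs."""
    E = [h[r::M] for r in range(M)]            # polyphase components of h
    ys = []
    for m in range(0, len(x) - M + 1, M):      # critically sampled blocks
        block = x[m:m + M][::-1]               # newest-first tap ordering
        branch = [block[r] * E[r][0] for r in range(M)]  # 1-tap branches here
        ys.append(dft(branch))
    return ys

x = [1.0, 1.0, -1.0, -1.0, 1.0, 1.0]   # toy input
h = [0.5, 0.5]                         # 2-tap averaging prototype, M = 2
subbands = uniform_dft_analysis(x, h, 2)
for k in range(2):
    print(f"subband {k}:", [round(y[k].real, 2) for y in subbands])
```

For this input of equal-valued pairs, the lowpass channel carries the block averages and the highpass channel is identically zero, showing how the DFT across polyphase branches separates the bands.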

  15. Experimental launcher facility - ELF-I: Design and operation

    NASA Astrophysics Data System (ADS)

    Deis, D. W.; Ross, D. P.

    1982-01-01

    In order to investigate the general area of ultra-high-current density, high-velocity sliding contacts as applied to electromagnetic launcher armatures, a small experimental launcher, ELF-I, has been developed, and preliminary experiments have been performed. The system uses a 36 kJ, 5 kV capacitor bank as a primary pulse power source. When used in conjunction with a 5-microhenry pulse conditioning coil, a 100-kA peak current and 10-ms-wide pulse is obtained. A three-station 150 kV flash X-ray system is operational for obtaining in-bore photographs of the projectiles. Experimental results obtained for both metal and plasma armatures at sliding velocities of up to 1 km/s are discussed with emphasis on armature-rail interactions.

  16. Teaching Simple Experimental Design to Undergraduates: Do Your Students Understand the Basics?

    ERIC Educational Resources Information Center

    Hiebert, Sara M.

    2007-01-01

    This article provides instructors with guidelines for teaching simple experimental design for the comparison of two treatment groups. Two designs with specific examples are discussed along with common misconceptions that undergraduate students typically bring to the experiment design process. Features of experiment design that maximize power and…

  17. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, Louis J.

    1989-01-01

    The NASA Propagation Effects Handbook for Satellite Systems Design provides a systematic compilation of the major propagation effects experienced on space-Earth paths in the 10 to 100 GHz frequency band. It provides both a detailed description of the propagation phenomena and a summary of their impact on communications system design and performance. Chapters 2 through 5 describe the propagation effects, prediction models, and available experimental databases. In Chapter 6, design techniques and prediction methods available for evaluating propagation effects on space-Earth communication systems are presented. Chapter 7 addresses the system design process, how propagation effects on system design and performance should be considered, and how they can be mitigated. Examples of operational and planned Ku-, Ka-, and EHF-band satellite communications systems are given.
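
Handbooks of this kind typically reduce rain attenuation on a link to a power law in rain rate, A = k·R^α·L_eff. The sketch below uses that generic form; the coefficients in the example call (k = 0.02, α = 1.2) are illustrative placeholders, not values taken from the handbook.

```python
# Generic power-law rain attenuation: specific attenuation
# gamma = k * R**alpha [dB/km], times an effective path length [km].
def rain_attenuation_db(rain_rate_mm_h, path_km, k, alpha):
    return k * rain_rate_mm_h**alpha * path_km

# Hypothetical Ku-band-like coefficients, 25 mm/h rain over a 5 km slant path:
a_25 = rain_attenuation_db(25.0, 5.0, k=0.02, alpha=1.2)
a_50 = rain_attenuation_db(50.0, 5.0, k=0.02, alpha=1.2)   # heavier rain, more loss
```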

  18. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-01

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model. PMID:26575545
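
The idea of minimally biasing a simulation to match an average can be illustrated with a toy reweighting scheme: add a linear bias α·f(x) to the energy and adjust α until the biased ensemble average of f hits the target. This is a schematic of the EDS concept only, not the authors' implementation (names and the discrete-state demo are ours).

```python
import math

# Toy minimal-bias matching: tune alpha so the Boltzmann-reweighted
# average of f over a discrete state set equals a target value.
def match_average(states, energies, f, target, beta=1.0, steps=2000, lr=0.5):
    alpha, avg = 0.0, None
    for _ in range(steps):
        w = [math.exp(-beta * (e + alpha * f(s)))
             for s, e in zip(states, energies)]
        z = sum(w)
        avg = sum(wi * f(s) for wi, s in zip(w, states)) / z
        alpha += lr * (avg - target)   # overshoot in <f> raises the bias on f
    return alpha, avg
```

For a two-state system with equal energies and f(s) = s, forcing the average from 0.5 down to 0.25 requires a bias of exactly ln 3, which the iteration recovers.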

  19. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers.

    PubMed

    Eriksson, Tobias J R; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single-shot signal-to-noise ratio (SNR) ≃ 15 dB in transmit-receive mode, with transmitter and receiver 40 cm apart. PMID:27571075
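
As a quick check of the 120 dB figure, sound pressure level converts to and from RMS pressure via the standard 20 µPa airborne reference (function name ours):

```python
import math

# Sound pressure level relative to the 20 uPa airborne reference pressure:
def spl_db(p_rms_pa, p_ref=20e-6):
    return 20.0 * math.log10(p_rms_pa / p_ref)
```

spl_db(20.0) evaluates to 120 dB, i.e. the quoted level corresponds to an RMS pressure of 20 Pa at the measurement point.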

  20. Using a hybrid approach to optimize experimental network design for aquifer parameter identification.

    PubMed

    Chang, Liang-Cheng; Chu, Hone-Jay; Lin, Yu-Pin; Chen, Yu-Wen

    2010-10-01

    This research develops an optimum design model for groundwater networks using a genetic algorithm (GA) and a modified Newton approach, based on experimental design concepts. The goal of the experimental design is to minimize parameter uncertainty, represented by the determinant of the covariance matrix of the estimated parameters. The design problem is constrained by a specified cost and solved by the GA coupled with a parameter identification model. The latter estimates the optimum parameter values and their associated sensitivity matrices. The general problem is simplified into two classes of network design problems: an observation network design problem and a pumping network design problem. Results explore the relationship between the experimental design and the physical processes. The proposed model provides an alternative way to solve optimization problems in groundwater experimental design. PMID:19757116
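
The covariance-determinant criterion described here is the classical D-optimality measure: the candidate network whose sensitivity (Jacobian) matrix J maximizes det(JᵀJ) minimizes the parameter-uncertainty volume det((JᵀJ)⁻¹). A minimal two-parameter sketch (names ours):

```python
# D-optimality in miniature: Cov ~ sigma^2 (J^T J)^(-1), so minimizing
# det(Cov) amounts to maximizing det(J^T J). J has one row per
# observation and one column per parameter (2 parameters here).
def d_criterion(J):
    jtj = [[sum(row[i] * row[j] for row in J) for j in range(2)]
           for i in range(2)]
    det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
    return 1.0 / det               # det((J^T J)^(-1)) = 1 / det(J^T J)

# Adding an informative observation shrinks the uncertainty volume:
base = [[1.0, 0.0], [0.0, 1.0]]
extended = base + [[1.0, 1.0]]
```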

  1. Engineering design of a throat valve experimental facility

    NASA Astrophysics Data System (ADS)

    Osofsky, Irving B.; Hove, Duane T.; Derbes, William C.

    1995-06-01

    This report covers the design of a gas dynamic test facility. The facility studied is a medium-scale blast simulator. The primary use of the facility would be to test fast-acting, computer-controlled valves. The valve would be used to control nuclear blast simulation by controlling the release of high pressure gas from drivers into an expansion tunnel to form a shock wave. The development of the valves themselves is reported elsewhere. The facility is composed of a heated gas supply, driver tube, expansion tunnel, reaction pier, piping, sensors, and controls. The driver tube and heated gas supply are existing components. The expansion tunnel, piping, sensors, and controls are all new components. Much of the report is devoted to the design of the reaction pier and the development of heat transfer relations used in designing the piping and controls.

  2. Integrating RFID technique to design mobile handheld inventory management system

    NASA Astrophysics Data System (ADS)

    Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung

    2008-04-01

    An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.

  3. Design and experimental characterization of a bandpass sampling receiver

    NASA Astrophysics Data System (ADS)

    Singh, Avantika; Kumar, Devika S.; Venkateswaran, Gomathy; Manjukrishna, S.; Singh, Amrendra Kumar; Kurup, Dhanesh G.

    2016-03-01

    In this paper, we present a robust and efficient approach for designing reconfigurable radio receivers based on bandpass sampling. The direct-sampled RF front end is followed by signal processing blocks implemented on an FPGA, consisting of a PLL based on a second-order Costas loop and a Kaiser-window lowpass filter. The proposed method can be used to implement a cost-effective multi-channel receiver for data, audio, video, etc., over various channels.
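
For a bandpass-sampling front end, the admissible sampling rates for a band occupying [f_lo, f_hi] fall in the classical windows 2·f_hi/n ≤ fs ≤ 2·f_lo/(n−1), for integer n up to floor(f_hi/B). A sketch enumerating them (function name ours):

```python
# Valid bandpass (undersampling) rate windows for a band [f_lo, f_hi]:
# 2*f_hi/n <= fs <= 2*f_lo/(n-1), n = 1 .. floor(f_hi / bandwidth).
def bandpass_sampling_ranges(f_lo, f_hi):
    bw = f_hi - f_lo
    ranges = []
    for n in range(1, int(f_hi // bw) + 1):
        lo = 2.0 * f_hi / n
        hi = 2.0 * f_lo / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges
```

For a hypothetical 20-25 MHz band this yields five windows, the narrowest collapsing to the single minimum rate fs = 10 MHz (twice the 5 MHz bandwidth).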

  4. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The paper summarizes the results obtained in an exploratory evaluation of ceramics for automobile thermal reactors. Candidate ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting 1100 hours in engine dynamometer tests and for more than 38,600 kilimeters (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  5. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The results obtained in an exploratory evaluation of ceramics for automobile thermal reactors are summarized. Candidate ceramic materials were evaluated in several reactor designs by using both engine-dynamometer and vehicle road tests. Silicon carbide contained in a corrugated-metal support structure exhibited the best performance, lasting 1100 hr in engine-dynamometer tests and more than 38,600 km (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as those containing silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  6. Design and experimental validation of a compact collimated Knudsen source.

    PubMed

    Wouters, Steinar H W; Ten Haaf, Gijs; Mutsaers, Peter H A; Vredenbregt, Edgar J D

    2016-08-01

    In this paper, the design and performance of a collimated Knudsen source, which has the benefit of a simple design over recirculating sources, is discussed. Measurements of the flux, transverse velocity distribution, and brightness of the resulting rubidium beam at different source temperatures were conducted to evaluate the performance. The scaling of the flux and brightness with the source temperature follows the theoretical predictions. The transverse velocity distribution in the transparent operation regime also agrees with the simulated data. The source was tested up to a temperature of 433 K and was able to produce a flux in excess of 10^13 s^-1. PMID:27587111

  7. Experimental and Imaging Techniques for Examining Fibrin Clot Structures in Normal and Diseased States

    PubMed Central

    Fan, Natalie K.; Keegan, Philip M.; Platt, Manu O.; Averett, Rodney D.

    2015-01-01

    Fibrin is an extracellular matrix protein that is responsible for maintaining the structural integrity of blood clots. Much research has been done on fibrin in the past years to include the investigation of synthesis, structure-function, and lysis of clots. However, there is still much unknown about the morphological and structural features of clots that ensue from patients with disease. In this research study, experimental techniques are presented that allow for the examination of morphological differences of abnormal clot structures due to diseased states such as diabetes and sickle cell anemia. Our study focuses on the preparation and evaluation of fibrin clots in order to assess morphological differences using various experimental assays and confocal microscopy. In addition, a method is also described that allows for continuous, real-time calculation of lysis rates in fibrin clots. The techniques described herein are important for researchers and clinicians seeking to elucidate comorbid thrombotic pathologies such as myocardial infarctions, ischemic heart disease, and strokes in patients with diabetes or sickle cell disease. PMID:25867016

  8. Ultrasonic Technique for Experimental Investigation of Statistical Characteristics of Grid Generated Turbulence.

    NASA Astrophysics Data System (ADS)

    Andreeva, Tatiana; Durgin, William

    2001-11-01

    This paper focuses on ultrasonic measurements of a grid-generated turbulent flow using the travel-time technique. In the present work, an attempt to describe a turbulent flow by means of statistics of ultrasound wave propagation time is undertaken in combination with the Kolmogorov (2/3)-power law. This work has two objectives. The first is to demonstrate an application of the travel-time ultrasonic technique for data acquisition in grid-generated turbulence produced in a wind tunnel. The second is to use the experimental data to verify or refute the analytically obtained expression for travel-time dispersion as a function of velocity fluctuation metrics. The theoretical analysis and derivation of that formula are based on Kolmogorov theory. A series of experiments was conducted at different wind speeds and distances from the grid, giving rise to different values of the dimensional turbulence characteristic coefficient K. Theoretical analysis based on the experimental data reveals a strong dependence of the turbulence characteristic K on the mean wind velocity. Tabulated values of the turbulence characteristic coefficient may be used for further understanding of the effect of turbulence on sound propagation.
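
The (2/3)-power law invoked above concerns the second-order structure function of the velocity field, D(r) = ⟨(u(x+r) − u(x))²⟩ ∝ r^(2/3) in the inertial range. A sketch of the standard estimator from a sampled velocity record (function name ours):

```python
# Second-order structure function D(r) = <(u(x+r) - u(x))^2> estimated
# from a uniformly sampled velocity record u at integer separation r.
# Kolmogorov theory predicts D(r) ~ C * r**(2/3) in the inertial range.
def structure_function(u, r):
    diffs = [(u[i + r] - u[i]) ** 2 for i in range(len(u) - r)]
    return sum(diffs) / len(diffs)
```

For a linear ramp u[i] = i the estimator returns exactly r², a convenient correctness check before applying it to turbulence data.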

  9. EPSA: A Novel Supercritical Fluid Chromatography Technique Enabling the Design of Permeable Cyclic Peptides

    PubMed Central

    2014-01-01

    Most peptides are generally insufficiently permeable to be used as oral drugs. Designing peptides with improved permeability without reliable permeability monitoring is a challenge. We have developed a supercritical fluid chromatography technique for peptides, termed EPSA, which is shown here to enable improved permeability design. Through assessing the exposed polarity of a peptide, this technique can be used as a permeability surrogate. PMID:25313332

  10. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
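
For a simple two-group design, the values such power tables report can be approximated in closed form. The sketch below uses the large-sample normal approximation (the exact tabulated values use the noncentral t distribution, so small-sample figures differ slightly; names ours):

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Normal-approximation power of a two-sided two-sample test at
# alpha = 0.05 for standardized effect size d with n units per group.
def power_two_group(d, n):
    z_crit = 1.959963984540054           # z_{0.975}
    shift = d * math.sqrt(n / 2.0)       # noncentrality of the z statistic
    return 1.0 - norm_cdf(z_crit - shift) + norm_cdf(-z_crit - shift)
```

For d = 0.5 and n = 64 per group this gives approximately 0.81, close to the familiar tabulated value of about 0.80.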

  11. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  12. Design of high speed proprotors using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Mccarthy, Thomas R.; Chattopadhyay, Aditi

    1992-01-01

    An integrated, multiobjective optimization procedure is developed for the design of high-speed proprotors with the coupling of aerodynamic, dynamic, aeroelastic, and structural criteria. The objectives are to maximize propulsive efficiency in high-speed cruise and rotor figure of merit in hover. Constraints are imposed on rotor blade aeroelastic stability in cruise and on total blade weight. Two different multiobjective formulation procedures, the Min Σβ and the Kreisselmeier-Steinhauser (K-S) function approaches, are used to formulate the two-objective optimization problems.
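
The K-S approach mentioned here is the Kreisselmeier-Steinhauser envelope function, a smooth, differentiable surrogate for max(g₁, …, g_m) used to fold multiple objectives or constraints into one scalar. A sketch (function name ours):

```python
import math

# Kreisselmeier-Steinhauser (K-S) envelope: a smooth upper bound on
# max(g_1..g_m) that tightens as rho grows. Subtracting the max first
# keeps the exponentials numerically stable.
def ks(gs, rho=50.0):
    gmax = max(gs)
    return gmax + math.log(sum(math.exp(rho * (g - gmax)) for g in gs)) / rho
```

The envelope always sits between the true maximum and the maximum plus ln(m)/ρ, so increasing ρ trades smoothness for tightness.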

  13. EXPERIMENTAL DESIGN AND INSTRUMENTATION FOR A FIELD EXPERIMENT

    EPA Science Inventory

    This report concerns the design of a field experiment for a military setting in which the effects of carbon monoxide on neurobehavioral variables are to be studied. A field experiment is distinguished from a survey by the fact that independent variables are manipulated, just as in t...

  14. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    ERIC Educational Resources Information Center

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  15. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  16. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  17. Comparison of visibility measurement techniques for forklift truck design factors.

    PubMed

    Choi, Chin-Bong; Park, Peom; Kim, Young-Ho; Susan Hallbeck, M; Jung, Myung-Chul

    2009-03-01

    This study applied the light bulb shadow test, a manikin vision assessment test, and an individual test to a forklift truck to identify forklift truck design factors influencing visibility. The light bulb shadow test followed the standard of ISO/DIS 13564-1 for traveling and maneuvering tests with four test paths (Test Nos. 1, 3, 4, and 6). Digital human and forklift truck models were developed for the manikin vision assessment test with CATIA V5R13 human modeling solutions. Six participants performed the individual tests. Both employed similar parameters to the light bulb shadow test. The individual test had better visibility with fewer numbers and a greater distribution of the shadowed grids than the other two tests due to eye movement and anthropometric differences. The design factors of load backrest extension, lift chain, hose, dashboard, and steering wheel should be the first factors considered to improve visibility, especially when a forklift truck mainly performs a forward traveling task in an open area. PMID:18501875

  18. Design and modeling considerations for experimental railgun armatures

    NASA Astrophysics Data System (ADS)

    Sink, D. A.; Krzastek, L. J.

    1991-01-01

    A calculational model for obtaining detailed armature parameters associated with railgun launches has been developed. Calculated parameters are obtained for device features and operating conditions supplied as input parameters. The model was validated by reproducing several sets of experimental data from a variety of devices. Model parameters associated with armature mass loss and plasma axial profiles were obtained as part of anchoring the calculations. The data included complete sets of dynamics, armature lengths, and muzzle voltages for each case studied. From the calculations, several differences between the various types of armatures (i.e., solid, hybrids, and plasma) and bore sizes were identified and found to account for the resulting performance features.

  19. Experimental investigation of contamination prevention techniques to cryogenic surfaces on board orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Hetrick, M. A.; Rantanen, R. O.; Ress, E. B.; Froechtenigt, J. F.

    1978-01-01

    Within the simulation limitations of on-orbit conditions, it was demonstrated that a helium purge system could be an effective method for reducing the incoming flux of contaminant species. Although a generalized purge system was employed in conjunction with basic telescope components, the simulation provided data that could be used for further modeling and design of a specific helium injection system. Experimental telescope pressures required for 90% attenuation appeared to be slightly higher (factor of 2 to 5). Cooling the helium purge gas and telescope components from 300 to 140 K had no measurable effect on stopping efficiency of a given mass flow of helium from the diffuse injector.

  20. Wood lens design philosophy based on a binary additive manufacturing technique

    NASA Astrophysics Data System (ADS)

    Marasco, Peter L.; Bailey, Christopher

    2016-04-01

    Using additive manufacturing techniques in optical engineering to construct a gradient index (GRIN) optic may overcome a number of limitations of GRIN technology. Such techniques are maturing quickly, yielding additional design degrees of freedom for the engineer. How best to employ these degrees of freedom is not completely clear at this time. This paper describes a preliminary design philosophy, including assumptions, pertaining to a particular printing technique for GRIN optics. It includes an analysis based on simulation and initial component measurement.

  1. Investigation on experimental techniques to detect, locate and quantify gear noise in helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Flanagan, P. M.; Atherton, W. J.

    1985-01-01

    A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit, and system software. A commercial desktop computer, spectrum analyzer, and two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot and acoustic intensity data are collected, processed, and stored.
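
The two-microphone technique RAIMS relies on estimates acoustic intensity from the imaginary part of the cross-spectrum between two closely spaced pressure signals. A single-frequency sketch (names ours; broadband practice uses averaged cross-spectra from an analyzer):

```python
import cmath
import math

# Two-microphone (p-p) acoustic intensity at a single frequency:
# I = -Im{P1* P2} / (2 rho omega dr), where P1 and P2 are complex
# pressure phasors at microphones a small distance dr apart.
def pp_intensity(p1, p2, freq_hz, dr, rho=1.21):
    omega = 2.0 * math.pi * freq_hz
    g12 = p1.conjugate() * p2            # single-frequency cross-spectrum
    return -g12.imag / (2.0 * rho * omega * dr)

# Plane wave travelling from mic 1 toward mic 2 at 1 kHz in air:
c = 343.0
k = 2.0 * math.pi * 1000.0 / c
i_est = pp_intensity(1.0 + 0j, cmath.exp(-1j * k * 0.01), 1000.0, 0.01)
```

For this plane wave the estimate approaches the exact intensity |p|²/(2ρc), with a small finite-spacing bias of sin(k·dr)/(k·dr); reversing the propagation direction flips the sign, which is how the probe localizes sources.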

  2. Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.

    PubMed

    Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan

    2014-01-01

    The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets, using an experimental mixture design, for treatment of Helicobacter pylori, with prolonged gastric residence time and controlled plasma levels. Ten different formulations were generated based on different molecular weights of hypromellose (HPMC K100, K4M, K15M) by using a simplex-lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas-generating agents. Tablets were prepared by the wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th hour (CDR 8th) were statistically analyzed to obtain the optimized formulation (OF). The optimized formulation, which gave a floating lag time lower than 15 s and a total floating time of more than 10 h, was analyzed and compared with the target for CDR 8th (80%). Good agreement was shown between predicted and actual values of CDR 8th, with a variation lower than 1%. The activity of the optimized clarithromycin formulation against H. pylori was quantified using a well diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and clarithromycin activity. PMID:25272652
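
A simplex-lattice design of the kind used here enumerates every blend whose component proportions are integer multiples of 1/m and sum to one. A sketch of the point generator (function name ours; software such as Minitab produces the same point sets):

```python
# A {q, m} simplex-lattice design: all q-component mixtures whose
# proportions are multiples of 1/m and sum to one.
def simplex_lattice(q, m):
    def comps(total, parts):
        # All ways to write `total` as an ordered sum of `parts` nonnegatives
        if parts == 1:
            return [(total,)]
        return [(i,) + rest
                for i in range(total + 1)
                for rest in comps(total - i, parts - 1)]
    return [tuple(c / m for c in point) for point in comps(m, q)]
```

simplex_lattice(3, 2) returns the six blends of a {3, 2} lattice, matching the combinatorial count C(q + m − 1, m).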

  3. Tocorime Apicu: design and validation of an experimental search engine

    NASA Astrophysics Data System (ADS)

    Walker, Reginald L.

    2001-07-01

    In the development of an integrated, experimental search engine, Tocorime Apicu, the incorporation and emulation of the evolutionary aspects of the chosen biological model (honeybees) and the field of high-performance knowledge discovery in databases results in the coupling of diverse fields of research: evolutionary computation, biological modeling, machine learning, statistical methods, information retrieval systems, active networks, and data visualization. The use of computer systems provides inherent sources of self-similar traffic that result from the interaction of file transmission, caching mechanisms, and user-related processes. These user-related processes are initiated by the user, application programs, or the operating system (OS) for the user's benefit. The effect of Web transmission patterns, coupled with these inherent sources of self-similarity associated with the above file system characteristics, provides an environment for studying network traffic. The study was client-based, with no user interaction. New methodologies and approaches were needed as network packet traffic increased in the LAN, LAN+WAN, and WAN. Statistical tools and methods for analyzing datasets were used to organize data captured at the packet level for network traffic between individual source/destination pairs. Emulation of the evolutionary aspects of the biological model equips the experimental search engine with an adaptive system model which will eventually have the capability to evolve with an ever-changing World Wide Web environment. The results were generated using a LINUX OS.

  4. New Materials Design Through Friction Stir Processing Techniques

    SciTech Connect

    Buffa, G.; Fratini, L.; Shivpuri, R.

    2007-04-07

    Friction Stir Welding (FSW) has attracted great interest in the scientific community and, in recent years, in industry, owing to the advantages of this solid-state welding process with respect to classic ones. The complex material flow occurring during the process plays a fundamental role, since it determines dramatic changes in the material microstructure of the so-called weld nugget, which affects the effectiveness of the joints. Moreover, Friction Stir Processing (FSP) is mainly being considered for producing high-strain-rate-superplastic (HSRS) microstructures in commercial aluminum alloys. The aim of the present research is the development of a locally composite material through Friction Stir Processing (FSP) of two AA7075-T6 blanks and an insert of a different material. The results of a preliminary experimental campaign, carried out while varying the additional material placed at the sheet interface under different conditions, are presented. Micro- and macro-scale observation of the joints thus obtained permitted investigation of the effects of the process on overall joint performance.

  5. New Materials Design Through Friction Stir Processing Techniques

    NASA Astrophysics Data System (ADS)

    Buffa, G.; Fratini, L.; Shivpuri, R.

    2007-04-01

    Friction Stir Welding (FSW) has attracted great interest in the scientific community and, in recent years, in industry, owing to the advantages of this solid-state welding process with respect to classic ones. The complex material flow occurring during the process plays a fundamental role, since it determines dramatic changes in the material microstructure of the so-called weld nugget, which affects the effectiveness of the joints. Moreover, Friction Stir Processing (FSP) is mainly being considered for producing high-strain-rate-superplastic (HSRS) microstructures in commercial aluminum alloys. The aim of the present research is the development of a locally composite material through Friction Stir Processing (FSP) of two AA7075-T6 blanks and an insert of a different material. The results of a preliminary experimental campaign, carried out while varying the additional material placed at the sheet interface under different conditions, are presented. Micro- and macro-scale observation of the joints thus obtained permitted investigation of the effects of the process on overall joint performance.

  6. An Experimental Study of Turbulent Skin Friction Reduction in Supersonic Flow Using a Microblowing Technique

    NASA Technical Reports Server (NTRS)

    Hwang, Danny P.

    1999-01-01

    A new turbulent skin friction reduction technology, called the microblowing technique, has been tested in supersonic flow (Mach number of 1.9) on specially designed porous plates with microholes. The skin friction was measured directly by a force balance, and the boundary layer development was measured by a total pressure rake at the trailing edge of a test plate. The free-stream Reynolds number was 1.0×10^6 per meter. The turbulent skin friction coefficient ratios (Cf/Cf0) of seven porous plates are given in this report. Test results showed that the microblowing technique could reduce the turbulent skin friction in supersonic flow (up to 90 percent below the solid flat plate value, a reduction even greater than in subsonic flow).
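
A direct force-balance measurement converts to the skin friction coefficient via C_f = F/(q·A), with q the dynamic pressure; the reported ratios are then C_f/C_f0 against the solid reference plate. A sketch (function name ours):

```python
# Skin friction coefficient from a direct force-balance measurement:
# C_f = F / (q * A), with dynamic pressure q = 0.5 * rho * U^2.
def skin_friction_coeff(drag_n, rho, u, area_m2):
    q = 0.5 * rho * u * u
    return drag_n / (q * area_m2)

# Microblowing effectiveness is then the ratio Cf / Cf0, where Cf0 is
# the coefficient of the solid flat plate measured the same way.
def microblowing_ratio(cf_porous, cf_solid):
    return cf_porous / cf_solid
```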

  7. Experimental techniques for ballistic pressure measurements and recent development in means of calibration

    NASA Astrophysics Data System (ADS)

    Elkarous, L.; Coghe, F.; Pirlot, M.; Golinval, J. C.

    2013-09-01

    This paper presents a study carried out with the commonly used experimental techniques of ballistic pressure measurement. The comparison criteria were the peak chamber pressure and its standard deviation for specific weapon/ammunition system configurations. It is impossible to determine exactly how precise the crusher, direct, or conformal transducer methods are, as there is no way to know exactly what the actual pressure is; nevertheless, the combined use of these measuring techniques can improve accuracy. Furthermore, particular attention has been devoted to the problem of calibration. Calibration of crusher gauges and piezoelectric transducers is an essential task for a correct determination of the pressure inside a weapon. This topic has not been completely addressed yet and still requires further investigation. In this work, state-of-the-art calibration methods are presented together with their specific aspects. Many solutions have been developed to satisfy this demand; nevertheless, current systems do not cover the whole range of needs, calling for further development effort. Research being carried out for the development of suitable practical calibration methods is presented. In particular, the behavior of copper crushers under different high strain rates is investigated using the split Hopkinson pressure bar (SHPB) technique. The Johnson-Cook model was employed as a suitable material model for the numerical study using an FEM code.

  8. Experimental Study of Active Techniques for Blade/Vortex Interaction Noise Reduction

    NASA Astrophysics Data System (ADS)

    Kobiki, Noboru; Murashige, Atsushi; Tsuchihashi, Akihiko; Yamakawa, Eiichi

    This paper presents experimental results on the effect of Higher Harmonic Control (HHC) and an Active Flap on Blade/Vortex Interaction (BVI) noise. Wind tunnel tests were performed with a 1-bladed rotor system to evaluate the simplified BVI phenomenon while avoiding the complicated aerodynamic interference that is characteristically and inevitably caused by a multi-bladed rotor. Another merit of the 1-bladed rotor system is that several active techniques can be evaluated under the same conditions, installed in the same rotor system. The effects of the active techniques on BVI noise reduction were evaluated comprehensively through the sound pressure, the blade/vortex miss distance obtained by Laser Light Sheet (LLS), the blade surface pressure distribution, and the tip vortex structure measured by Particle Image Velocimetry (PIV). A good correlation among these quantities is obtained, describing the effect of the active techniques on the BVI conditions. The experiments show that the blade/vortex miss distance is more dominant for BVI noise than the other two BVI governing factors, blade lift and vortex strength at the moment of BVI.

  9. Experimental Comparison of the Hemodynamic Effects of Bifurcating Coronary Stent Implantation Techniques

    NASA Astrophysics Data System (ADS)

    Brindise, Melissa; Vlachos, Pavlos; AETheR Lab Team

    2015-11-01

    Stent implantation in coronary bifurcations imposes unique effects on the blood flow patterns, and currently there is no universally accepted stent deployment approach. Although stent-induced changes can greatly alter clinical outcomes, no concrete understanding exists of the hemodynamic effects of each implantation method. This work presents an experimental evaluation of the hemodynamic differences between implantation techniques. We used four common stent implantation methods, including the currently preferred one-stent provisional side branch (PSB) technique and the crush (CRU), Culotte (CUL), and T-stenting (T-PR) two-stent techniques, all deployed by a cardiologist in coronary models. Particle image velocimetry was used to obtain velocity and pressure fields. Wall shear stress (WSS), oscillatory shear index, residence times, and drag and compliance metrics were evaluated and compared against an un-stented case. The results demonstrate that while PSB is preferred, both it and T-PR yielded detrimental hemodynamic effects such as low WSS values. CRU produced polarizing and unbalanced results. CUL demonstrated a symmetric flow field, a balanced WSS distribution, and ultimately the most favorable hemodynamic environment.
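
    The oscillatory shear index compared across stenting techniques has a simple discrete form: OSI = ½(1 − |∫τ dt| / ∫|τ| dt) over a cycle. A minimal sketch with toy WSS samples (not the study's PIV data):

```python
def oscillatory_shear_index(tau, dt):
    """OSI = 0.5 * (1 - |integral of tau| / integral of |tau|),
    computed over one cardiac cycle of WSS samples tau."""
    num = abs(sum(t * dt for t in tau))
    den = sum(abs(t) * dt for t in tau)
    return 0.0 if den == 0 else 0.5 * (1.0 - num / den)

steady = [1.2, 1.1, 1.3, 1.2]         # unidirectional WSS -> OSI = 0
oscillating = [1.0, -1.0, 1.0, -1.0]  # fully reversing WSS -> OSI = 0.5
```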

  10. Design of motorcycle rider protection systems using numerical techniques.

    PubMed

    Miralbes, R

    2013-10-01

    The goal of this paper is the development of a design methodology, based on finite element numerical tools and dummies, for studying the damage and injuries that occur when a motorcyclist collides with a motorcyclist protection system (MPS). In accordance with the existing regulation, a Hybrid III dummy FEM model has been used as a starting point, with some modifications: for instance, a new finite element helmet model has been developed and added to the dummy model. Moreover, structural elements affecting the simulation results, such as the connecting bolts and the ground, have been adequately modeled. Finally, diverse types of current motorcyclist protection systems have been analyzed, and a comparative numerical-experimental analysis has been made to validate the numerical results and the methodology used. PMID:23792610

  11. High-power CMUTs: design and experimental verification.

    PubMed

    Yamaner, F Yalçin; Olçum, Selim; Oğuz, H Kağan; Bozkurt, Ayhan; Köymen, Hayrettin; Atalar, Abdullah

    2012-06-01

    Capacitive micromachined ultrasonic transducers (CMUTs) have great potential to compete with piezoelectric transducers in high-power applications. As output pressures increase, the nonlinearity of the CMUT must be reconsidered and optimization is required to reduce harmonic distortion. In this paper, we describe a design approach in which uncollapsed CMUT array elements are sized so as to operate at the maximum radiation impedance and have gap heights such that the generated electrostatic force can sustain a plate displacement with full swing at the given drive amplitude. The proposed design enables high output pressures and low harmonic distortion at the output. An equivalent circuit model of the array is used that accurately simulates the uncollapsed mode of operation. The model facilitates the design of CMUT parameters for high-pressure output without the intensive need for computationally involved FEM tools. The optimized design requires a relatively thick plate compared with a conventional CMUT plate; thus, we used a silicon wafer as the CMUT plate. The fabrication process involves anodic bonding of the silicon plate to the glass substrate. To eliminate the bias voltage, which may cause charging problems, the CMUT array is driven with large continuous wave signals at half of the resonant frequency. The fabricated arrays are tested in an oil tank by applying a 125-V peak 5-cycle burst sinusoidal signal at 1.44 MHz. The applied voltage is increased until the plate is about to touch the bottom electrode, to obtain the maximum peak displacement. The observed pressure is about 1.8 MPa with a -28 dBc second harmonic at the surface of the array. PMID:22718878
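
    The quoted -28 dBc second harmonic can be computed from a sampled waveform by comparing harmonic amplitudes. A sketch with a synthetic signal (not the measured CMUT output):

```python
import cmath
import math

def harmonic_amplitude(x, k):
    """Amplitude of the k-th harmonic of a real signal sampled over
    exactly one fundamental period (N samples), via a single DFT bin."""
    N = len(x)
    s = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return 2.0 * abs(s) / N

# Synthetic waveform: unit fundamental plus a 4 percent second harmonic.
N = 1024
signal = [math.sin(2 * math.pi * n / N) + 0.04 * math.sin(4 * math.pi * n / N)
          for n in range(N)]
dbc = 20.0 * math.log10(harmonic_amplitude(signal, 2)
                        / harmonic_amplitude(signal, 1))   # about -28 dBc
```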

  12. Design on intelligent gateway technique in home network

    NASA Astrophysics Data System (ADS)

    Hu, Zhonggong; Feng, Xiancheng

    2008-12-01

    Home networks, built on digitization, multimedia, mobility, broadband, and real-time interaction, can provide diverse and personalized services in information, communication, entertainment, education, and health care, and are therefore attracting increasing market attention. Home network product development has become a focus of the related industries. This paper first introduces the concept of the home network and its overall reference model, then presents the core techniques and communication standards related to it. Key analyses are made of the functions of the home gateway, its software function modules, the key technologies of the client-side software architecture, and development trends in home network audio-visual entertainment services. The present state of home gateway products, future development trends, and application solutions for digital home services are also introduced. Finally, the paper discusses how home network products drive the digital home network industry, spurring the development of software industries such as communications, consumer electronics, computing, and gaming, as well as the real estate industry.

  13. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  15. Process designed for experimentation for increased-caliper Fresnel lenses

    SciTech Connect

    Zderad, A.J.

    1992-04-01

    The feasibility of producing increased-caliper linear and point-focus Fresnel lenses in a continuous sheet is described. Both an 8.16-inch-square radial 2 × 7 parquet and a 22-inch-wide linear lens were produced at 0.11 inch in caliper. The primary purpose of this experimentation was to determine the replication effectiveness and production rate of the polymeric web process at increased thickness. The results demonstrated that both radial and linear lenses, at increased caliper, can be replicated with performance comparable to that of the current state-of-the-art 3M laminated lenses; however, the radial parquets were bowed on the edges, and additional process development is necessary to solve this problem. Current estimates are that the 0.11-inch-caliper parquets cost significantly more than customer-laminated parquets using 0.022-inch-thick lensfilm.

  16. A validated spectrofluorimetric method for the determination of nifuroxazide through coumarin formation using experimental design

    PubMed Central

    2013-01-01

    Background: Nifuroxazide (NF) is an oral nitrofuran antibiotic with a wide range of bactericidal activity against gram-positive and gram-negative enteropathogenic organisms. It is formulated either alone, as an intestinal antiseptic, or in combination with drotaverine (DV) for the treatment of gastroenteritis accompanied by gastrointestinal spasm. Spectrofluorimetry is a convenient and sensitive technique for pharmaceutical quality control. The proposed spectrofluorimetric method allows the determination of NF either alone or in a binary mixture with DV. Furthermore, experimental conditions were optimized using experimental design, which has many advantages over the older one-variable-at-a-time (OVAT) approach. Results: A novel and sensitive spectrofluorimetric method was designed and validated for the determination of NF in pharmaceutical formulations. The method was based upon the formation of a highly fluorescent coumarin compound by the reaction between NF and ethyl acetoacetate (EAA) using sulfuric acid as a catalyst. The fluorescence was measured at 390 nm upon excitation at 340 nm. Experimental design was used to optimize the experimental conditions: the volumes of EAA and sulfuric acid, temperature, and heating time were considered the critical factors to be studied in order to establish optimum fluorescence, and each pair of factors was tested at three levels. Regression analysis revealed good correlation between fluorescence intensity and concentration over the range 20-400 ng ml⁻¹. The suggested method was successfully applied to the determination of NF in pure and capsule forms. The procedure was validated in terms of linearity, accuracy, precision, limit of detection, and limit of quantification. The selectivity of the method was investigated by analysis of NF in the presence of the co-formulated drug DV, where no interference was observed. The reaction pathway was suggested and the structure of the fluorescent product was proposed.
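
    The linearity and detection-limit validation reduces to an ordinary least-squares calibration. A sketch with hypothetical calibration points over the reported 20-400 ng/ml range and an assumed blank standard deviation, using the common 3.3σ/slope and 10σ/slope conventions (assumptions, not this paper's raw data):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration points (ng/ml) and noise-free responses.
conc = [20, 50, 100, 200, 300, 400]
intensity = [0.9 * c + 5.0 for c in conc]
slope, intercept = linear_fit(conc, intensity)

sd_blank = 1.2                    # assumed blank standard deviation
lod = 3.3 * sd_blank / slope      # limit of detection, ng/ml
loq = 10.0 * sd_blank / slope     # limit of quantification, ng/ml
```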

  17. Facilitating Preemptive Hardware System Design Using Partial Reconfiguration Techniques

    PubMed Central

    Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and thus force the launch of a reconfiguration process for high-priority task implementation. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed; if it cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the tasks necessary to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration. PMID:24672292

  18. Interaction of condom design and user techniques and condom acceptability.

    PubMed

    Gerofi, J; Deniaud, F; Friel, P

    1995-10-01

    In 1991, the source of public sector condom supplies in an African country changed from USAID to WHO. Following a complaint, the two types of condoms were sampled and compared. Laboratory tests indicated that the new-style condoms were of adequate quality, but a number of differences were noted between the two types. Complaints that the condoms were short and broke frequently could not be reconciled with measurements. Lubricant quantities on the WHO-supplied condoms were found to be lower than on the USAID condoms, but still within the range found on the commercial market. Also, the WHO condoms were marginally narrower and thicker. WHO asked the authors to conduct field interviews to seek reasons for the reported problems. These revealed that the relative dissatisfaction with the WHO condoms was largely confined to a group of sex workers in a follow-up programme conducted by two educators funded by a European agency. The instructions for use being given by the educators magnified the risk of incorrect application of the condom. Design changes to the WHO condoms (regarding lubricant, size and thickness) were subsequently made to minimise the chance of wrong use. PMID:8605780

  19. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  20. Analysis, design and experimental characterization of electrostatically actuated gas micropumps

    NASA Astrophysics Data System (ADS)

    Astle, Aaron A.

    The goal of this work is to realize a high-performance, multi-stage micropump integrated within a wireless micro gas chromatograph (μGC) for measuring airborne environmental pollutants. The work described herein focuses on the development of high-fidelity mathematical and physical design models, and on the testing and validation of the most promising models with large-scale and micro-scale (MEMS) pump prototypes. It is shown that an electrostatically actuated, multistage diaphragm micropump with active valve control provides the best expected performance for this application. A hierarchy of models is developed to characterize the various factors governing micropump performance, including a thermodynamic model, an idealized reduced-order model, and a reduced-order model that incorporates realistic valve flow effects and accounts for the fluidic load. The reduced-order models are based on fundamental fluid dynamic principles and allow predictions of flow rate and pressure rise as a function of the geometric design variables and the drive signal. The reduced-order models are validated in several tests. Two-stage, 20x-scale pump results reveal the need to incorporate realistic valve flow effects and the output load for accurate modeling. The more realistic reduced-order model is then validated using micropumps with two and four pumping stages; it captures the micropump performance accurately, provided that separate measurements of valve pressure losses and pump geometry are used. The four-stage micropump fabricated using the theoretical model guidelines from this research provides a maximum flow rate of 3 cm³/min and a pressure rise of 1.75 kPa/stage, with a power consumption of only 4 mW per stage. The four-stage micropump occupies an area of 54 mm², and each pumping cavity has a volume of 6 × 10⁻⁶ m³. This performance indicates that this pump design will be sufficient to meet the requirements for extended field operation of a wireless integrated μGC.
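
    At the crudest level, the quoted flow rate and pressure rise are consistent with an idealized, loss-free diaphragm-pump estimate: flow is roughly stroke volume times drive frequency, and series stages add pressure while passing the same flow. A toy sketch with illustrative numbers chosen to match the quoted 3 cm³/min (my own assumptions, not the dissertation's reduced-order model):

```python
# Idealized, loss-free diaphragm pump estimates (illustrative only).

def ideal_flow_cm3_per_min(stroke_volume_cm3, freq_hz):
    """Flow = stroke volume swept per cycle times cycles per minute."""
    return stroke_volume_cm3 * freq_hz * 60.0

def series_pressure_rise_kpa(per_stage_kpa, n_stages):
    """Stages in series add their pressure rises; flow stays the same."""
    return per_stage_kpa * n_stages

q = ideal_flow_cm3_per_min(5e-4, 100.0)   # 0.5 mm^3 stroke at 100 Hz -> 3 cm^3/min
dp = series_pressure_rise_kpa(1.75, 4)    # 1.75 kPa/stage x 4 stages -> 7 kPa
```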

  1. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  2. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    SEUs. This was mainly because operating frequencies were much lower in older technology generations. The Intel Pentium II, for example, was fabricated in a 0.35 μm technology and operated between 200 and 330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02][Bu97]. There is therefore a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Since then, however, the technology miniaturization that enabled high-speed, multicore, and parallel computing has steadily worsened the power density and power consumption problem. Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, pervasive sensor networks, and future eco-bio-sensors. Low power consumption is now regularly part of the design philosophy for digital products with diverse applications, from computing to communication to healthcare. Thus, designers today are left grappling with both a "power wall" and a "reliability wall". 
Unfortunately, when it comes to improving reliability through soft error mitigation, most

  3. Flight control design using a blend of modern nonlinear adaptive and robust techniques

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolong

    In this dissertation, the modern control techniques of feedback linearization, mu synthesis, and neural network based adaptation are used to design novel control laws for two specific applications: F/A-18 flight control and reusable launch vehicle (an X-33 derivative) entry guidance. For both applications, the performance of the controllers is assessed. As a part of a NASA Dryden program to develop and flight test experimental controllers for an F/A-18 aircraft, a novel method of combining mu synthesis and feedback linearization is developed to design longitudinal and lateral-directional controllers. First of all, the open-loop and closed-loop dynamics of F/A-18 are investigated. The production F/A-18 controller as well as the control distribution mechanism are studied. The open-loop and closed-loop handling qualities of the F/A-18 are evaluated using low order transfer functions. Based on this information, a blend of robust mu synthesis and feedback linearization is used to design controllers for a low dynamic pressure envelope of flight conditions. For both the longitudinal and the lateral-directional axes, a robust linear controller is designed for a trim point in the center of the envelope. Then by including terms to cancel kinematic nonlinearities and variations in the aerodynamic forces and moments over the flight envelope, a complete nonlinear controller is developed. In addition, to compensate for the model uncertainty, linearization error and variations between operating points, neural network based adaptation is added to the designed longitudinal controller. The nonlinear simulations, robustness and handling qualities analysis indicate that the performance is similar to or better than that for the production F/A-18 controllers. When the dynamic pressure is very low, the performance of both the experimental and the production flight controllers is degraded, but Level I handling qualities are still achieved. A new generation of Reusable Launch Vehicles
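
    The core idea of feedback linearization used in the flight control design can be shown on a scalar toy plant: cancel the kinematic nonlinearity, then close the loop linearly. An illustration only, not the F/A-18 control law:

```python
import math

# Toy feedback linearization: for the scalar plant x' = x**2 + u, the
# control u = -x**2 - k*x cancels the nonlinearity exactly, leaving the
# linear closed loop x' = -k*x.

def simulate(x0, k, dt=1e-4, t_end=2.0):
    x, t = x0, 0.0
    while t < t_end:
        u = -x * x - k * x      # feedback-linearizing + stabilizing control
        x += dt * (x * x + u)   # Euler step of the true nonlinear plant
        t += dt
    return x

x_final = simulate(0.5, 2.0)
expected = 0.5 * math.exp(-2.0 * 2.0)   # linear prediction x0 * exp(-k*t)
```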

  4. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

    Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequentially, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
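
    A Lyapunov-based adaptation law of the kind obtained via Liapunov's direct method can be sketched for a scalar model-reference problem. A textbook-style toy example with my own parameter values, not the paper's derivation:

```python
# Toy scalar MRAC: plant x' = a*x + b*u tracks the stable reference model
# xm' = -am*xm + bm*r, with gains adapted by a Lyapunov-rule law
# (theta' = -gamma * e * regressor, valid here since sign(b) = +1).
a, b = 1.0, 2.0            # plant parameters (unknown to the controller)
am, bm = 2.0, 2.0          # reference model
gamma = 2.0                # adaptation gain
dt, t_end = 1e-3, 50.0

x = xm = 0.0
th_r = th_x = 0.0          # adaptive feedforward and feedback gains
r = 1.0                    # constant reference input
t = 0.0
while t < t_end:
    u = th_r * r + th_x * x
    e = x - xm                       # tracking error
    th_r -= dt * gamma * e * r       # Lyapunov adaptation laws
    th_x -= dt * gamma * e * x
    x += dt * (a * x + b * u)        # Euler step, true plant
    xm += dt * (-am * xm + bm * r)   # Euler step, reference model
    t += dt
```

With a constant reference the tracking error is driven to zero even though the individual gains need not converge to their ideal values, which is the standard stability (not parameter-convergence) guarantee of the direct method.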

  5. Experimental observation of silver and gold penetration into dental ceramic by means of a radiotracer technique

    SciTech Connect

    Moya, F.; Payan, J.; Bernardini, J.; Moya, E.G.

    1987-12-01

    A radiotracer technique was used to study silver and gold diffusion into dental porcelain under experimental conditions close to the real conditions of porcelain bakes in prosthetic laboratories. It was clearly shown that these non-oxidizable elements were able to diffuse into the ceramic as well as oxidizable ones. The penetration depth varied widely according to the element: the ratio DAg/DAu was about 10³ around 850 °C. In contrast to gold, the silver diffusion rate was high enough to allow silver from the metallic alloy to be present at the external ceramic surface after diffusion into the ceramic. Hence, the greening of dental porcelains baked on silver-rich alloys could be explained mainly by a solid-state diffusion mechanism.

  6. Photon spectra calculation for an Elekta linac beam using experimental scatter measurements and Monte Carlo techniques.

    PubMed

    Juste, B; Miro, R; Campayo, J M; Diez, S; Verdu, G

    2008-01-01

    The present work is centered on reconstructing, by means of a scatter analysis method, the primary beam photon spectrum of a linear accelerator. The technique is based on irradiating the isocenter of a rectangular methacrylate block placed at a distance of 100 cm from the surface and measuring the scattered particles around the plastic at several specific positions with different scatter angles. The MCNP5 Monte Carlo code has been used to simulate the particle transport of mono-energetic beams and to register the scatter measurement after contact with the attenuator. Measured ionization values allow the spectrum to be calculated as the sum of mono-energetic individual energy bins using the Schiff bremsstrahlung model. The measurements were made on an Elekta Precise linac using a 6 MeV photon beam. Relative depth and profile dose curves calculated in a water phantom using the reconstructed spectrum agree with experimentally measured dose data to within 3%. PMID:19163410
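
    Expressing the spectrum as a sum of mono-energetic bins reduces the reconstruction to a linear system y = R s, with column j of R the detector response to bin j. A toy sketch with an assumed 3 × 3 response matrix (not the MCNP5-simulated responses):

```python
def solve3(R, y):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    a = [row[:] + [yi] for row, yi in zip(R, y)]   # augmented matrix
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for c in range(i, 4):
                a[r][c] -= f * a[i][c]
    s = [0.0] * 3
    for i in (2, 1, 0):                            # back substitution
        s[i] = (a[i][3] - sum(a[i][c] * s[c] for c in range(i + 1, 3))) / a[i][i]
    return s

R = [[0.9, 0.3, 0.1],    # assumed responses to three mono-energetic bins
     [0.1, 0.6, 0.3],
     [0.0, 0.1, 0.6]]
true_s = [2.0, 1.0, 0.5]                           # "true" bin weights
y = [sum(R[i][j] * true_s[j] for j in range(3)) for i in range(3)]
recovered = solve3(R, y)                           # unfolded spectrum
```

A real unfolding would use many more bins and a constrained (e.g. nonnegative) least-squares fit to noisy measurements, but the linear structure is the same.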

  7. Improved field experimental designs and quantitative evaluation of aquatic ecosystems

    SciTech Connect

    McKenzie, D.H.; Thomas, J.M.

    1984-05-01

    The paired-station concept and a log transformed analysis of variance were used as methods to evaluate zooplankton density data collected during five years at an electrical generation station on Lake Michigan. To discuss the example and the field design necessary for a valid statistical analysis, considerable background is provided on the questions of selecting (1) sampling station pairs, (2) experimentwise error rates for multi-species analyses, (3) levels of Type I and II error rates, (4) procedures for conducting the field monitoring program, and (5) a discussion of the consequences of violating statistical assumptions. Details for estimating sample sizes necessary to detect changes of a specified magnitude are included. Both statistical and biological problems with monitoring programs (as now conducted) are addressed; serial correlation of successive observations in the time series obtained was identified as one principal statistical difficulty. The procedure reduces this problem to a level where statistical methods can be used confidently. 27 references, 4 figures, 2 tables.
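
    The sample-size estimation mentioned above can be sketched with the usual normal-approximation formula for detecting a mean change of specified magnitude between two groups (a generic power calculation, not the paper's exact procedure):

```python
import math

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Samples per group to detect a mean change delta with standard
    deviation sigma, at alpha = 0.05 two-sided and power = 0.80:
    n = 2 * (z_alpha/2 + z_beta)^2 * sigma^2 / delta^2."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Detecting a half-standard-deviation shift in (log-transformed) density:
n = n_per_group(sigma=1.0, delta=0.5)   # -> 63 per station
```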

  8. The resisted rise of randomisation in experimental design: British agricultural science, c.1910-1930.

    PubMed

    Berry, Dominic

    2015-09-01

    The most conspicuous form of agricultural experiment is the field trial, and within the history of such trials, the arrival of the randomised control trial (RCT) is considered revolutionary. Originating with R.A. Fisher within British agricultural science in the 1920s and 1930s, the RCT has since become one of the most prodigiously used experimental techniques throughout the natural and social sciences. Philosophers of science have already scrutinised the epistemological uniqueness of RCTs, undermining their status as the 'gold standard' in experimental design. The present paper introduces a historical case study from the origins of the RCT, uncovering the initially cool reception given to this method by agricultural scientists at the University of Cambridge and the (Cambridge based) National Institute of Agricultural Botany. Rather than giving further attention to the RCT, the paper focuses instead on a competitor method-the half-drill strip-which both predated the RCT and remained in wide use for at least a decade beyond the latter's arrival. In telling this history, John Pickstone's Ways of Knowing is adopted, as the most flexible and productive way to write the history of science, particularly when sciences and scientists have to work across a number of different kinds of place. It is shown that those who resisted the RCT did so in order to preserve epistemic and social goals that randomisation would have otherwise run a tractor through. PMID:26205200
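
    The randomised layout that Fisher's RCT introduced, in contrast to a systematic arrangement like the half-drill strip, can be sketched as a randomised complete block assignment (an illustration, not a historical reconstruction):

```python
import random

def randomised_block(treatments, n_blocks, seed=0):
    """Assign every treatment exactly once per block, in random plot
    order within each block -- the design Fisher advocated over
    systematic layouts such as the half-drill strip."""
    rng = random.Random(seed)
    layout = []
    for _ in range(n_blocks):
        block = list(treatments)
        rng.shuffle(block)       # randomise plot order within the block
        layout.append(block)
    return layout

plots = randomised_block(["A", "B", "C", "D"], n_blocks=3)
```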

  9. GMPLS-based multiterabit optical router: design and experimentation

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Zeng, QingJi; Ouyang, Yong; Liu, Jimin; Luo, Xuan; Huang, Xuejun

    2002-09-01

    The Internet backbone network is undergoing a large-scale transformation from the current complex, static, multi-layer electronic-based architecture to an emerging simplified, dynamic, single-layer photonic-based architecture. The explosive growth of the Internet, multimedia services, and IP router links demands a next generation Internet that can accommodate the entire traffic in a cost-effective manner. There is a consensus in industry that IP over WDM integration technologies will be viable for the next generation optical Internet, where a simplified flat network architecture can improve networking performance and network management. In this paper, we first propose a novel node architecture, the Terabit Optical Router (TOR), for building the next generation optical Internet, and analyze each key functional unit of the TOR, including the multi-granularity electrical-optical hybrid switching fabrics and the unified control plane unit. Secondly, we discuss the unified control plane unit of the TOR in detail. Thirdly, we describe our cost vs. performance analysis for various applications of the TOR; according to our evaluation, carriers can obtain a cost reduction of more than 60 percent by using the TOR. Finally, we conclude that TORs, a cost-effective multi-granularity switching and routing technique, rather than OBS or BFR (Big Fat Router) routers, are feasible for building the next generation Internet.

  10. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  11. A rational design change methodology based on experimental and analytical modal analysis

    SciTech Connect

    Weinacht, D.J.; Bennett, J.G.

    1993-08-01

    A design methodology that integrates analytical modeling and experimental characterization is presented. This methodology represents a powerful tool for making rational design decisions and changes. An example of its implementation in the design, analysis, and testing of a precision machine tool support structure is given.

  12. Development of experimental verification techniques for non-linear deformation and fracture.

    SciTech Connect

    Moody, Neville Reid; Bahr, David F.

    2003-12-01

    This project covers three distinct features of thin film fracture and deformation in which the current experimental technique of nanoindentation demonstrates limitations. The first feature is film fracture, which can be generated either by nanoindentation or by bulge testing of thin films. Examples of both tests will be shown, in particular for oxide films on metallic or semiconductor substrates. Nanoindentations were made into oxide films on aluminum and titanium substrates for two cases: one where the metal was a bulk (effectively single-crystal) material and the other where the metal was a 1 μm thick film grown on a silica or silicon substrate. In both cases, indentation was used to produce discontinuous loading curves, which indicate film fracture after plastic deformation of the metal. For the oxides on bulk metals, fracture occurred at reproducible loads, and the tensile stresses in the films at fracture were approximately 10 and 15 GPa for the aluminum and titanium oxides, respectively. Similarly, bulge tests of piezoelectric oxide films have been carried out and demonstrate film fracture at stresses of only hundreds of MPa, suggesting the importance of defects and film thickness in evaluating film strength. The second feature of concern is film adhesion. Several qualitative and quantitative tests exist today that measure the adhesion properties of thin films. A relatively new technique that uses stressed overlayers to measure adhesion has been proposed and extensively studied. Delamination of thin films manifests itself in the form of either telephone-cord or straight buckles. The buckles are used to calculate the interfacial fracture toughness of the film-substrate system. Nanoindentation can be utilized if more energy is needed to initiate buckling of the film system. Finally, deformation in metallic systems can lead to non-linear deformation due to 'bursts' of dislocation activity during nanoindentation. An experimental study to examine the structure of dislocations around

  13. Recent developments in optimal experimental designs for functional magnetic resonance imaging

    PubMed Central

    Kao, Ming-Hung; Temkit, M'hamed; Wong, Weng Kee

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is one of the leading brain mapping technologies for studying brain activity in response to mental stimuli. For neuroimaging studies utilizing this pioneering technology, there is a great demand for high-quality experimental designs that help to collect informative data to make precise and valid inference about brain functions. This paper provides a survey of recent developments in experimental designs for fMRI studies. We briefly introduce some analytical and computational tools for obtaining good designs based on a specified design selection criterion. Research results about some commonly considered designs, such as blocked designs and m-sequences, are also discussed. Moreover, we present a recently proposed new type of fMRI design that can be constructed using a certain type of Hadamard matrices. Under certain assumptions, these designs can be shown to be statistically optimal. Some future research directions in design of fMRI experiments are also discussed. PMID:25071884
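As a concrete illustration of the m-sequence designs surveyed above, the sketch below generates a maximal-length binary sequence with a linear-feedback shift register. The tap set, register length, and function name are illustrative assumptions, not taken from the paper.

```python
def m_sequence(taps, nbits):
    """Generate one period (2**nbits - 1 bits) of a maximal-length sequence.

    `taps` lists the register positions (1-indexed) XORed as feedback;
    taps (5, 3) correspond to a known primitive polynomial for nbits = 5.
    """
    state = [1] * nbits          # any nonzero seed works
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])    # output the last register stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]  # shift feedback bit in at the front
    return seq

seq = m_sequence(taps=(5, 3), nbits=5)
print(len(seq), sum(seq))  # → 31 16  (period 31; m-sequences are nearly balanced)
```

A sequence like this can then be mapped to stimulus on/off conditions across scanning epochs, which is the sense in which m-sequences serve as fMRI designs.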

  14. A propagation effects handbook for satellite systems design. A summary of propagation impairments on 10-100 GHz satellite links, with techniques for system design. [tropospheric scattering

    NASA Technical Reports Server (NTRS)

    Kaul, R.; Wallace, R.; Kinal, G.

    1980-01-01

    This handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. Rain systems, rain and attenuation models, depolarization and experimental data are described. The design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. The questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results are addressed in order to bridge the gap between the propagation research data and the classical link budget analysis of Earth-space communications system.
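The handbook's bridge from propagation predictions to the classical link-budget analysis can be sketched in a few lines. The function, parameter names, and numbers below are hypothetical; the sketch only shows how a predicted rain attenuation subtracts dB-for-dB from the clear-sky margin.

```python
def link_margin_db(clear_sky_cn_db, required_cn_db, rain_attenuation_db):
    """Toy Earth-space link margin.

    Rain attenuation reduces the received carrier power dB-for-dB, so it
    subtracts directly from the clear-sky carrier-to-noise margin.
    """
    return (clear_sky_cn_db - required_cn_db) - rain_attenuation_db

# Hypothetical 20 dB clear-sky C/N, 12 dB required, 5 dB predicted rain fade:
print(link_margin_db(20.0, 12.0, 5.0))  # → 3.0
```

A negative result would indicate that the predicted rain fade exceeds the available margin, i.e., the link fails its availability target.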

  15. Patient reactions to personalized medicine vignettes: An experimental design

    PubMed Central

    Butrick, Morgan; Roter, Debra; Kaphingst, Kimberly; Erby, Lori H.; Haywood, Carlton; Beach, Mary Catherine; Levy, Howard P.

    2011-01-01

    Purpose Translational investigation on personalized medicine is in its infancy. Exploratory studies reveal attitudinal barriers to “race-based medicine” and cautious optimism regarding genetically personalized medicine. This study describes patient responses to hypothetical conventional, race-based, or genetically personalized medicine prescriptions. Methods Three hundred eighty-seven participants (mean age = 47 years; 46% white) recruited from a Baltimore outpatient center were randomized to this vignette-based experimental study. They were asked to imagine a doctor diagnosing a condition and prescribing them one of three medications. The outcomes are emotional response to vignette, belief in vignette medication efficacy, experience of respect, trust in the vignette physician, and adherence intention. Results Race-based medicine vignettes were appraised more negatively than conventional vignettes across the board (Cohen’s d = −0.57, range −0.51 to −0.64, P < 0.001). Participants rated genetically personalized medicine comparably with conventional medicine (d = −0.15, range −0.14 to −0.17, P = 0.47), with the exception of reduced adherence intention to genetically personalized medicine (Cohen’s d = −0.41, range −0.38 to −0.44, P = 0.009). This relative reluctance to take genetically personalized medicine was pronounced for racial minorities (Cohen’s d = −0.31, range −0.38 to −0.25, P = 0.02) and was related to trust in the vignette physician (change in R2 = 0.23, P < 0.001). Conclusions This study demonstrates a relative reluctance to embrace personalized medicine technology, especially among racial minorities, and highlights enhancement of adherence through improved doctor-patient relationships. PMID:21270639
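The Cohen's d effect sizes reported above are pooled standardized mean differences. The sketch below computes one from two illustrative groups; the data and variable names are invented for illustration, not the study's.

```python
import math

def cohens_d(a, b):
    """Cohen's d with the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # unbiased variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical appraisal scores for two vignette conditions:
race_based = [2.0, 2.5, 3.0, 2.2, 2.8]
conventional = [3.1, 3.4, 2.9, 3.6, 3.2]
print(round(cohens_d(race_based, conventional), 2))  # → -2.12
```

A negative d, as in the study's results, indicates the first group was rated lower than the second.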

  16. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design - The Ifakara Experimental Huts- and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control-technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  18. Development of experimental techniques to study protein and nucleic acid structures

    SciTech Connect

    Trewhella, J.; Bradbury, E.M.; Gupta, G.; Imai, B.; Martinez, R.; Unkefer, C.

    1996-04-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This research project sought to develop experimental tools for structural biology, specifically those applicable to three-dimensional, biomolecular-structure analysis. Most biological systems function in solution environments, and the ability to study proteins and polynucleotides under physiologically relevant conditions is of paramount importance. The authors have therefore adopted a three-pronged approach which involves crystallographic and nuclear magnetic resonance (NMR) spectroscopic methods to study protein and DNA structures at high (atomic) resolution as well as neutron and x-ray scattering techniques to study the complexes they form in solution. Both the NMR and neutron methods benefit from isotope labeling strategies, and all provide experimental data that benefit from the computational and theoretical tools being developed. The authors have focused on studies of protein-nucleic acid complexes and DNA hairpin structures important for understanding the regulation of gene expression, as well as the fundamental interactions that allow these complexes to form.

  19. Experimental and numerical analysis of high-resolution injection technique for capillary electrophoresis microchip.

    PubMed

    Chang, Chin-Lung; Leong, Jik-Chang; Hong, Ting-Fu; Wang, Yao-Nan; Fu, Lung-Ming

    2011-01-01

    This study presents an experimental and numerical investigation on the use of high-resolution injection techniques to deliver sample plugs within a capillary electrophoresis (CE) microchip. The CE microfluidic device was integrated with a U-shaped injection system and an expansion chamber located at the inlet of the separation channel, which can minimize the sample leakage effect and deliver a high-quality sample plug into the separation channel so that the detection performance of the device is enhanced. The proposed 45° U-shaped injection system was investigated using a sample of Rhodamine B dye. Meanwhile, the analysis of the current CE microfluidic chip was studied by considering the separation of HaeIII-digested φX-174 DNA samples. The experimental and numerical results indicate that the included 45° U-shaped injector completely eliminates the sample leakage, and an expansion separation channel with an expansion ratio of 2.5 delivers a sample plug with a perfect detection shape and highest concentration intensity, hence enabling an optimal injection and separation performance. PMID:21747696

  20. Combustion behavior of single coal-water slurry droplets, Part 1: Experimental techniques

    SciTech Connect

    Levendis, Y.A.; Metghalchi, M.; Wise, D.

    1991-12-31

    Techniques to produce single droplets of coal-water slurries have been developed in order to study the combustion behavior of the slurries. All stages of slurry combustion are of interest to the present study; however, emphasis will be given to the combustion of the solid agglomerate char which remains upon the termination of the water-evaporation and devolatilization periods. An experimental facility is under construction where combustion of coal-water slurries will be monitored over a variety of furnace temperatures and oxidizing atmospheres. The effects of the initial size of the slurry droplet and of the solids loading (coal-to-water ratio) will be investigated. A drop-tube, laminar-flow furnace coupled to a near-infrared ratio pyrometer will be used to monitor temperature-time histories of single particles from ignition to extinction. This paper describes the experimental facility built to date and presents results obtained by numerical analysis that help in understanding the convective and radiative environment in the furnace.

  1. Advanced Techniques for Seismic Protection of Historical Buildings: Experimental and Numerical Approach

    SciTech Connect

    Mazzolani, Federico M.

    2008-07-08

    The seismic protection of historical and monumental buildings, dating from the ancient age up to the 20th century, is being looked at with greater and greater interest, above all in the Euro-Mediterranean area, whose cultural heritage is strongly susceptible to severe damage or even collapse due to earthquakes. The cultural importance of historical and monumental constructions limits, in many cases, the possibility of upgrading them from the seismic point of view, for fear of using intervention techniques that could have detrimental effects on their cultural value. Consequently, great interest is growing in the development of sustainable methodologies based on Reversible Mixed Technologies (RMTs) for the seismic protection of existing constructions. RMTs, in fact, are conceived to exploit the peculiarities of innovative materials and special devices, and they allow ease of removal when necessary. This paper deals with the experimental and numerical studies, framed within the EC PROHITECH research project, on the application of RMTs to the historical and monumental constructions belonging mainly to the cultural heritage of the Euro-Mediterranean area. The experimental tests and the numerical analyses are carried out at five different levels: full-scale models, large-scale models, sub-systems, devices, and materials and elements.

  2. Experimental research on radius of curvature measurement of spherical lenses based on laser differential confocal technique

    NASA Astrophysics Data System (ADS)

    Ding, Xiang; Sun, Ruoduan; Li, Fei; Zhao, Weiqian; Liu, Wenli

    2011-11-01

    A new approach based on the laser differential confocal technique has the potential to achieve high accuracy in radius of curvature (ROC) measurement. It utilizes two digital microscopes with virtual pinholes on the CCD detectors to precisely locate the cat's-eye and confocal positions, which enhances the focus-identification resolution. An instrumental system was established and experimental research was carried out to determine how error sources contribute to the uncertainty of ROC measurement, such as optical-axis misalignment, dead path of the interferometer, surface-figure error of the tested lenses, and temperature fluctuation. Suggestions were also proposed on how these factors could be avoided or suppressed. The system performance was tested by employing four pairs of template lenses with a series of ROC values. The relative expanded uncertainty was analyzed and calculated based on theoretical analysis and experimental determination, and was smaller than 2×10⁻⁵ (k=2). The results were supported by comparison measurements between the differential confocal radius measurement (DCRM) system and an ultra-high-accuracy three-dimensional profilometer, showing good consistency. This demonstrates that the DCRM system is capable of high-accuracy ROC measurement.
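The core relation behind this kind of confocal radius measurement can be sketched simply: the radius of curvature is the stage travel between the cat's-eye and confocal focus positions, and a relative expanded uncertainty scales with the measured radius. The function names and the 250 mm example below are assumptions for illustration, not the paper's data.

```python
def radius_of_curvature(z_cats_eye_mm, z_confocal_mm):
    """ROC equals the signed travel between the two focus positions."""
    return z_confocal_mm - z_cats_eye_mm

def expanded_uncertainty_mm(roc_mm, rel_expanded=2e-5):
    """Absolute expanded uncertainty for a given relative value (k = 2)."""
    return abs(roc_mm) * rel_expanded

roc = radius_of_curvature(0.0, 250.0)               # hypothetical 250 mm sphere
print(roc, round(expanded_uncertainty_mm(roc), 6))  # → 250.0 0.005
```

At the 2×10⁻⁵ relative level reported in the abstract, a 250 mm radius would thus carry an expanded uncertainty of only 5 μm.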

  3. New experimental technique for detecting the effect of low-frequency electric fields on enzyme structure

    SciTech Connect

    Greco, G. Jr.; Gianfreda, L.; d'Ambrosio, G.; Massa, R.; Scaglione, A.; Scarfi, M.R. )

    1990-01-01

    A new experimental approach has been developed to determine kinetic and thermodynamic parameters of the inactivation of an enzyme under labile conditions both with and without exposure to electrical currents as sources of perturbation. Studies were undertaken to investigate if low-frequency electric currents can accelerate the thermal inactivation of an enzyme through interactions with dipole moments in enzymatic molecules and through related mechanical stresses. The experiments were conducted with the enzyme acid phosphatase. The enzyme was exposed to a 50-Hz current at different densities (10 to 60 mA/cm2 rms) or to a sinusoidal or square-wave current at an average density of 3 mA/cm2 and frequencies from, respectively, 50 Hz to 20 kHz and 500 pulses per second (pps) to 50,000 pps. Positive-control experiments were performed in the presence of a stabilizer or a deactivator. The results indicate that the technique is sensitive to conformational changes that otherwise may be impossible to detect. However, exposure to electric currents under the experimental conditions described herein showed no effects of the currents.

  5. Computational Design of Creep-Resistant Alloys and Experimental Validation in Ferritic Superalloys

    SciTech Connect

    Liaw, Peter

    2014-12-31

    A new class of ferritic superalloys containing B2-type zones inside parent L21-type precipitates in a disordered solid-solution matrix, also known as hierarchical-precipitate-strengthened ferritic alloys (HPSFAs), has been developed for high-temperature structural applications in fossil-energy power plants. These alloys were designed by adding Ti to a previously-studied NiAl-strengthened ferritic alloy (denoted FBB8 in this study). In the present research, systematic investigations, including advanced experimental techniques, first-principles calculations, and numerical simulations, have been integrated and conducted to characterize the complex microstructures and excellent creep resistance of HPSFAs. The experimental techniques include transmission electron microscopy, scanning transmission electron microscopy, neutron diffraction, and atom-probe tomography, which provide detailed microstructural information on HPSFAs. Systematic tension/compression creep tests revealed that HPSFAs exhibit superior creep resistance compared with FBB8 and conventional ferritic steels (i.e., the creep rates of HPSFAs are about four orders of magnitude slower). First-principles calculations include interfacial free energies, anti-phase boundary (APB) free energies, elastic constants, and impurity diffusivities in Fe. Combined with kinetic Monte Carlo simulations of interdiffusion coefficients, and the integration of computational thermodynamics and kinetics, these calculations provide great understanding of the thermodynamic and mechanical properties of HPSFAs. In addition to the systematic experimental approach and first-principles calculations, a series of numerical tools and algorithms, which assist in the optimization of creep properties of ferritic superalloys, are utilized and developed. These numerical simulation results are compared with the available experimental data and previous first

  6. An experimental comparative study of 20 Italian opera houses: Measurement techniques

    NASA Astrophysics Data System (ADS)

    Farina, Angelo; Armelloni, Enrico; Martignon, Paolo

    2001-05-01

    By ``acoustical photography'' we mean a set of measured impulse responses, which enable us to ``listen'' to the measured room by means of advanced auralization methods. Once these data sets have been measured, they can be employed in two different ways: objective analysis and listening tests. In fact, it is possible to compute dozens of objective acoustical parameters describing the temporal texture, the spatial effect, and the frequency-domain coloring of each opera house. On the other hand, by means of the auralization technique, it becomes easy to conduct listening experiments with human subjects. This paper focuses principally on the development and specification of the measurement technique, the topic assigned to the research unit of Parma, to which the authors belong. It describes the hardware equipment, the software, the electro-acoustic transducers (microphones and loudspeakers), the measurement positions, the system for automatic displacement of the microphones, and the conditions of the room during the measurements. Experimental results are reported for a pair of opera houses which were employed for testing the measurement procedure and showing the benefits of the new method over those previously employed.

  7. Refinement and reduction in animal experimentation: options for new imaging techniques.

    PubMed

    Heindl, Cornelia; Hess, Andreas; Brune, Kay

    2008-01-01

    Attempts to substitute animal experiments with in vitro or in silico methods were of limited success when complex (regulatory) processes, e.g. of the cardiovascular, metabolic or neuronal system, were to be analysed. Consequently, strategies to reduce the number of and the burden placed on experimental animals in these fields of research are required. One option consists in the application of non-invasive imaging techniques like (functional) magnetic resonance imaging ((f)MRI), positron emission tomography (PET), and optical imaging (OI). All these methods allow for the observation of functional changes within the body of e.g. genetically modified animals without pain, suffering or (premature) termination. The use of these methods has now reached new dimensions of resolution and precision. With this article we would like to demonstrate a few options of these techniques. We hope that our enthusiasm becomes contagious, thus motivating more scientists to make use of the still expensive equipment which has become available in "small animal imaging" centres. On the basis of four examples--three from our group--we would like to highlight some merits of the new technologies. PMID:18551236

  8. A review of experimental techniques to produce a nacre-like structure.

    PubMed

    Corni, I; Harvey, T J; Wharton, J A; Stokes, K R; Walsh, F C; Wood, R J K

    2012-09-01

    The performance of man-made materials can be improved by exploring new structures inspired by the architecture of biological materials. Natural materials, such as nacre (mother-of-pearl), can have outstanding mechanical properties due to their complicated architecture and hierarchical structure at the nano-, micro- and meso-levels, which have evolved over millions of years. This review describes the numerous experimental methods explored to date to produce composites with structures and mechanical properties similar to those of natural nacre. The materials produced have sizes ranging from nanometres to centimetres, processing times varying from a few minutes to several months, and a range of mechanical properties that render them suitable for various applications. For the first time, these techniques have been divided into those producing bulk materials, coatings and free-standing films. This is because a material's application strongly depends on its dimensions, and different results have been reported when applying the same technique to produce materials of different sizes. The limitations and capabilities of these methodologies have also been described. PMID:22535879

  9. Experimental Evaluation of Quantitative Diagnosis Technique for Hepatic Fibrosis Using Ultrasonic Phantom

    NASA Astrophysics Data System (ADS)

    Koriyama, Atsushi; Yasuhara, Wataru; Hachiya, Hiroyuki

    2012-07-01

    Since clinical diagnosis using ultrasonic B-mode images depends on the skill of the doctor, the realization of a quantitative diagnosis method using the ultrasound echo signal is highly desirable. We have been investigating a quantitative diagnosis technique, mainly for hepatic disease. In this paper, we present basic experimental results evaluating the accuracy of the proposed quantitative diagnosis technique for hepatic fibrosis using a simple ultrasonic phantom. Using a region of interest that crosses the boundary between two scatterer areas with different densities in a phantom, we can simulate the change of the echo-amplitude distribution from normal tissue to fibrotic tissue in liver disease. The probability density function is well approximated by our fibrosis distribution model, which is a mixture of normal and fibrotic tissue. The fibrosis parameters of the amplitude distribution model can be estimated relatively well at mixture rates from 0.2 to 0.6. In the inversion processing, the standard deviation of the estimated fibrosis results at mixture ratios of less than 0.2 or larger than 0.6 is relatively large. Although the probability density is not large at high amplitudes, the estimated variance ratio and mixture rate of the model are strongly affected by higher-amplitude data.
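A minimal sketch of the kind of two-component amplitude mixture described above, assuming Rayleigh components for the normal and fibrotic contributions; the paper's exact model and parameter names may differ.

```python
import math

def rayleigh_pdf(x, sigma):
    """Rayleigh density, a common model for speckle echo amplitude."""
    return (x / sigma ** 2) * math.exp(-x ** 2 / (2 * sigma ** 2))

def mixture_pdf(x, m, sigma_normal, sigma_fibrosis):
    """Amplitude density with mixture rate m of the fibrotic component."""
    return ((1 - m) * rayleigh_pdf(x, sigma_normal)
            + m * rayleigh_pdf(x, sigma_fibrosis))

# A higher mixture rate shifts probability mass toward high amplitudes,
# which is what the inversion exploits to estimate fibrosis parameters.
for m in (0.2, 0.6):
    print(m, round(mixture_pdf(2.5, m, sigma_normal=1.0, sigma_fibrosis=2.0), 4))
```

Fitting `m` and the two scale parameters to a measured amplitude histogram would correspond to the inversion processing described in the abstract.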

  10. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to insure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  11. Experimental design and simulation of a metal hydride hydrogen storage system

    NASA Astrophysics Data System (ADS)

    Gadre, Sarang Ajit

    internal geometric design point of view. At the same time, the statistical design of experiments approach was shown to be a very efficient technique for identifying the most important process parameters that affect the performance of the metal hydride hydrogen storage unit with minimal experimental effort.
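The statistical design-of-experiments approach mentioned above can be illustrated with a generic two-level fractional factorial. The 2^(3-1) half-fraction below is not the dissertation's actual design; it simply shows how three factors can be screened in four runs by aliasing the third factor with the two-factor interaction (C = AB).

```python
from itertools import product

# Enumerate the full 2^2 design in factors A and B, then set C = A*B.
# Each level is coded -1 (low) or +1 (high), the usual DOE convention.
runs = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]
for run in runs:
    print(run)
# four runs: (-1, -1, 1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)
```

Half the runs of the full 2^3 design suffice to estimate the three main effects, at the cost of confounding each main effect with a two-factor interaction.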

  12. Improved Experimental Techniques for Analyzing Nucleic Acid Transport Through Protein Nanopores in Planar Lipid Bilayers

    NASA Astrophysics Data System (ADS)

    Costa, Justin A.

    The translocation of nucleic acid polymers across cell membranes is a fundamental requirement for complex life and has greatly contributed to genomic molecular evolution. The diversity of pathways that have evolved to transport DNA and RNA across membranes include protein receptors, active and passive transporters, endocytic and pinocytic processes, and various types of nucleic acid conducting channels known as nanopores. We have developed a series of experimental techniques, collectively known as "Wicking", that greatly improves the biophysical analysis of nucleic acid transport through protein nanopores in planar lipid bilayers. We have verified the Wicking method using numerous types of classical ion channels including the well-studied chloride selective channel, CLIC1. We used the Wicking technique to reconstitute α-hemolysin and found that DNA translocation events of types A and B could be routinely observed using this method. Furthermore, measurable differences were observed in the duration of blockade events as DNA length and composition were varied, consistent with previous reports. Finally, we tested the ability of the Wicking technology to reconstitute the dsRNA transporter Sid-1. Exposure to dsRNAs of increasing length and complexity showed measurable differences in the current transitions suggesting that the charge carrier was dsRNA. However, the translocation events occurred so infrequently that a meaningful electrophysiological analysis was not possible. Alterations in the lipid composition of the bilayer had a minor effect on the frequency of translocation events but not to such a degree as to permit rigorous statistical analysis. We conclude that in many instances the Wicking method is a significant improvement to the lipid bilayer technique, but is not an optimal method for analyzing transport through Sid-1. Further refinements to the Wicking method might have future applications in high throughput DNA sequencing, DNA computation, and

  13. Experimental Techniques for Evaluating the Effects of Aging on Impact and High Strain Rate Properties of Triaxial Braided Composite Materials

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael; Roberts, Gary D.; Ruggeri, Charles R.; Gilat, Amos; Matrka, Thomas

    2010-01-01

    An experimental program is underway to measure the impact and high strain rate properties of triaxial braided composite materials and to quantify any degradation in properties as a result of thermal and hygroscopic aging typically encountered during service. Impact tests are being conducted on flat panels using a projectile designed to induce high rate deformation similar to that experienced in a jet engine fan case during a fan blade-out event. The tests are being conducted on as-fabricated panels and panels subjected to various numbers of aging cycles. High strain rate properties are being measured using a unique Hopkinson bar apparatus that has a larger diameter than conventional Hopkinson bars. This larger diameter is needed to measure representative material properties because of the large unit cell size of the materials examined in this work. In this paper the experimental techniques used for impact and high strain rate testing are described and some preliminary results are presented for both as-fabricated and aged composites.

  14. Estimating intervention effects across different types of single-subject experimental designs: empirical illustration.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S Natasha; Van den Noortgate, Wim

    2015-03-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs often focuses on combining simple AB phase designs or multiple-baseline designs. We discuss the estimation of the average intervention effect estimate across different types of single-subject experimental designs using several multilevel meta-analytic models. We illustrate the different models using a reanalysis of a meta-analysis of single-subject experimental designs (Heyvaert, Saenen, Maes, & Onghena, in press). The intervention effect estimates using univariate 3-level models differ from those obtained using a multivariate 3-level model that takes the dependence between effect sizes into account. Because different results are obtained and the multivariate model has multiple advantages, including more information and smaller standard errors, we recommend that researchers use the multivariate multilevel model to meta-analyze studies that utilize different single-subject designs. PMID:24884449
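
    As background for the pooling step such meta-analyses build on, here is a minimal univariate random-effects sketch (DerSimonian-Laird); it is far simpler than the three-level multivariate models the study recommends, and the effect sizes below are made up:

    ```python
    import numpy as np

    def dersimonian_laird(y, v):
        """Pool effect sizes y with sampling variances v under a
        random-effects model (DerSimonian-Laird tau^2 estimate)."""
        w = 1.0 / v
        ybar = np.sum(w * y) / np.sum(w)      # fixed-effect mean
        Q = np.sum(w * (y - ybar) ** 2)       # heterogeneity statistic
        k = len(y)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)    # between-study variance
        w_star = 1.0 / (v + tau2)
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, tau2

    # Hypothetical per-case standardized effects and variances.
    effects = np.array([0.2, 0.4, 0.6])
    variances = np.array([0.04, 0.04, 0.04])
    pooled, se, tau2 = dersimonian_laird(effects, variances)
    ```

    The multilevel models in the study generalize this by adding case- and study-level random effects and, in the multivariate variant, a covariance structure between dependent effect sizes.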

  15. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    SciTech Connect

    Dobler, Gregory; Fassnacht, Christopher D.; Rumbaugh, Nicholas; Treu, Tommaso; Liao, Kai; Marshall, Phil; Hojjati, Alireza; Linder, Eric

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~10^3 strongly lensed quasars. In an effort to assess the present capabilities of the community to accurately measure time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.
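
    Challenge metrics of the kind mentioned in the abstract can be sketched as follows; the exact TDC definitions may differ in detail, and the true delays, estimates, and uncertainties below are mock values:

    ```python
    import numpy as np

    # Blind-challenge-style summary metrics comparing estimated time
    # delays (with reported uncertainties) against the simulated truth:
    # success fraction, chi^2 goodness of fit, claimed precision, and
    # accuracy (fractional bias).
    dt_true = np.array([10.0, 25.0, 40.0])   # days, mock truth
    dt_est  = np.array([11.0, 24.0, 42.0])   # submitted estimates
    sigma   = np.array([1.0, 2.0, 2.0])      # reported uncertainties

    f    = len(dt_est) / len(dt_true)                   # fraction measured
    chi2 = np.mean(((dt_est - dt_true) / sigma) ** 2)   # goodness of fit
    P    = np.mean(sigma / np.abs(dt_true))             # claimed precision
    A    = np.mean((dt_est - dt_true) / dt_true)        # accuracy (bias)
    ```

    A well-calibrated method keeps chi2 near 1 (honest error bars) while driving P and |A| as small as possible.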

  16. Design and testing of 15kv to 35kv porcelain terminations using new connection techniques

    SciTech Connect

    Fox, J.W.; Hill, R.J.

    1982-07-01

    New techniques for conductor connection in underground cable terminations were investigated in the design of a new distribution-class porcelain cable termination. Connections to the conductor were made using set screws, building upon previous designs with additions to assure a conservative design approach. The connector design was tested according to applicable standards for load cycling of connections, and the resulting design appears capable of conservative performance in the operating environment.

  17. Suspension flow in microfluidic devices--a review of experimental techniques focussing on concentration and velocity gradients.

    PubMed

    van Dinther, A M C; Schroën, C G P H; Vergeldt, F J; van der Sman, R G M; Boom, R M

    2012-05-15

    Microfluidic devices are an emerging technology for processing suspensions in, e.g., medical applications, pharmaceutics, and food. Compared to larger scales, particles are more influenced by migration in microfluidic devices, and this may even be used to facilitate segregation and separation. To get the most out of these new technologies, methods to experimentally measure (or compute) particle migration are needed to gain sufficient insight for rational design. However, the currently available methods allow only limited access to particle behaviour. In this review we compare experimental methods for investigating migration phenomena that can occur in microfluidic systems operated with natural suspensions, having typical particle diameters of 0.1 to 10 μm. The methods are used to monitor concentration and velocity profiles of bidisperse and polydisperse suspensions, which are notoriously difficult to measure due to the small dimensions of channels and particles. Various methods have been proposed in the literature: tomography, ultrasound, and optical analysis; here we review and evaluate them on general dimensionless numbers related to process conditions and channel dimensions. In addition, eleven practical criteria, chosen such that they can also be used for various applications, are used to evaluate the performance of the methods. We found that NMR and CSLM, although expensive, are the most promising techniques for investigating flowing suspensions in microfluidic devices, where one may be preferred over the other depending on the size, concentration, and nature of the suspension, the dimensions of the channel, and the information to be obtained. The paper concludes with an outlook on future developments of measurement techniques. PMID:22405541

  18. An experimental technique for performing 3-D LDA measurements inside whirling annular seals

    NASA Astrophysics Data System (ADS)

    Morrison, Gerald L.; Johnson, Mark C.; Deotte, Robert E., Jr.; Thames, H. Davis, III; Wiedner, Brian G.

    1992-09-01

    During the last several years, the Fluid Mechanics Division of the Turbomachinery Laboratory at Texas A&M University has developed a rather unique facility with the experimental capability for measuring the flow field inside journal bearings, labyrinth seals, and annular seals. The facility consists of a specially designed 3-D LDA system which is capable of measuring the instantaneous velocity vector within 0.2 mm of a wall while the laser beams are aligned almost perpendicular to the wall. This capability was required to measure the flow field inside journal bearings, labyrinth seals, and annular seals. A detailed description of this facility along with some representative results obtained for a whirling annular seal are presented.

  19. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at .001 and .01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.
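
    The delayed-decision coder itself is not reproduced here, but the predictive-coding idea it builds on can be illustrated with a plain first-order DPCM loop; the signal and step size below are arbitrary:

    ```python
    # Minimal first-order DPCM (predictive coding) sketch: predict each
    # sample as the previous *reconstructed* sample, quantize the
    # residual, and accumulate. This is far simpler than delayed-decision
    # adaptive predictive coding; it only illustrates why prediction plus
    # residual quantization bounds the per-sample error.
    def dpcm_encode(x, step):
        codes, pred = [], 0.0
        for s in x:
            e = s - pred
            q = round(e / step)      # quantized residual index
            codes.append(q)
            pred += q * step         # decoder-matched reconstruction
        return codes

    def dpcm_decode(codes, step):
        out, pred = [], 0.0
        for q in codes:
            pred += q * step
            out.append(pred)
        return out

    signal = [0.0, 0.9, 1.7, 2.2, 1.1, 0.3]
    step = 0.25
    rec = dpcm_decode(dpcm_encode(signal, step), step)
    ```

    Because the encoder predicts from the reconstructed signal rather than the original, quantization error does not accumulate: each sample is within half a quantizer step of the input.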

  20. Designing reduced-order linear multivariable controllers using experimentally derived plant data

    NASA Technical Reports Server (NTRS)

    Frazier, W. G.; Irwin, R. D.

    1993-01-01

    An iterative numerical algorithm for simultaneously improving multiple performance and stability robustness criteria for multivariable feedback systems is developed. The unsatisfied design criteria are improved by updating the free parameters of an initial, stabilizing controller's state-space matrices. Analytical expressions for the gradients of the design criteria are employed to determine a parameter correction that improves all of the feasible, unsatisfied design criteria at each iteration. A controller design is performed using the algorithm with experimentally derived data from a large space structure test facility. Experimental results of the controller's performance at the facility are presented.
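
    A toy sketch of gradient-based iterative improvement of a controller parameter, in the spirit of (but far simpler than) the algorithm described above; the scalar plant, target pole, and step size are invented for illustration:

    ```python
    # Plant: x' = a*x + b*u with state feedback u = -k*x, so the
    # closed-loop pole is (a - b*k). We update k by gradient descent on
    # the squared distance of that pole from a target location, mimicking
    # the idea of improving unsatisfied criteria via analytical gradients.
    a, b = 1.0, 2.0      # unstable open-loop plant
    target = -3.0        # desired closed-loop pole
    k = 0.0              # initial (non-stabilizing) gain

    for _ in range(200):
        pole = a - b * k
        grad = 2.0 * (pole - target) * (-b)   # d/dk of (pole - target)^2
        k -= 0.05 * grad                      # gradient step
    ```

    The real algorithm updates all free entries of the controller's state-space matrices and balances multiple performance and robustness criteria at once, but each iteration has this same gradient-correction structure.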

  1. Laboratory Analyses of Micron-Sized Solid Grains: Experimental Techniques and Recent Results

    NASA Astrophysics Data System (ADS)

    Colangeli, L.; Bussoletti, E.

    1997-12-01

    The investigation of comets long proceeded through remote observations from the ground. In 1986, several space missions to comet Halley allowed, for the first time, a close look at a comet (Encounters with comet Halley 1986). In particular, the GIOTTO mission of the European Space Agency (ESA) provided "in situ" observations and measurements up to a distance of about 600 km from the nucleus. Surface morphology and physical properties were observed; plasma, gas, and dust components in the coma were analyzed. It is clear, however, that definite answers about the primordial nature of comets and their relation to interstellar material can be obtained only from direct analysis of cometary samples. Future space missions such as CRAF (NASA) and ROSETTA (ESA) have exactly this aim. In particular, the ambitious goal of the Rosetta mission is to return comet samples to Earth, where they can be analyzed carefully in the laboratory. In preparation for this event, a large effort must be placed both in the improvement of existing analytical techniques and in the development of new methods that will provide as much information as possible on "returned comet samples" (hereinafter RCSs). Handling of extraterrestrial samples will require operating in carefully controlled and extremely "inert" ambient conditions. In addition, working on a limited amount of "unique" cometary material will also require analytical techniques that do not alter, contaminate, or destroy the sample. Many suggestions can come from researchers doing laboratory work on "cosmic dust"; in fact, experimental methods applied to analyze (a) interplanetary dust particles (IDPs) collected in the stratosphere, (b) meteorites, and (c) laboratory-produced cosmic dust analog samples can be adapted or suitably improved in the future for specific application to RCSs. 
Since modern techniques used to analyze IDPs and meteorites are reviewed elsewhere in this workshop

  2. Optimization of experimental designs and model parameters exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schürch, M.; Slawig, T.

    2014-09-01

    The weighted least squares estimator for model parameters was presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental design, was described together with a lesser-known approach that takes into account potential nonlinearity in the model parameters. These two approaches were combined with two different methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and handling were described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two models of different complexity for sediment concentration in seawater served as application examples. The advantages and disadvantages of the different approaches were compared, and an evaluation of the approaches was performed.
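
    A local optimal experimental design of the kind described can be sketched for a toy exponential model: evaluate the parameter Jacobian at nominal parameter values and pick the sampling times that maximize the D-criterion det(J^T J). The model, candidate times, and nominal values below are illustrative, not taken from the toolbox:

    ```python
    import numpy as np
    from itertools import combinations

    # Toy sediment-style model y(t) = p1 * exp(-p2 * t). A "local" design
    # linearizes at nominal parameter guesses and searches exhaustively
    # for the two measurement times with the largest Fisher-information
    # determinant.
    p1, p2 = 1.0, 0.5                     # nominal parameter guesses
    times = np.linspace(0.2, 6.0, 12)     # candidate measurement times

    def jacobian(ts):
        e = np.exp(-p2 * ts)
        return np.column_stack([e, -p1 * ts * e])   # dy/dp1, dy/dp2

    best_det, best_set = -1.0, None
    for pair in combinations(range(len(times)), 2):
        J = jacobian(times[list(pair)])
        d = np.linalg.det(J.T @ J)        # D-criterion
        if d > best_det:
            best_det, best_set = d, pair

    chosen = times[list(best_set)]
    ```

    For realistic design spaces the exhaustive loop is replaced by discrete optimization heuristics, which is exactly the part the two methods in the paper address.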

  3. Experimental, computational, and analytical techniques for diagnosing breast cancer using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Palmer, Gregory M.

    This dissertation presents the results of an investigation into experimental, computational, and analytical methodologies for diagnosing breast cancer using fluorescence and diffuse reflectance spectroscopy. First, the optimal experimental methodology for tissue biopsy studies was determined using an animal study. It was found that the use of freshly excised tissue samples preserved the original spectral line shape and magnitude of the fluorescence and diffuse reflectance. Having established the optimal experimental methodology, a clinical study investigating the use of fluorescence and diffuse reflectance spectroscopy for the diagnosis of breast cancer was undertaken. In addition, Monte Carlo-based models of diffuse reflectance and fluorescence were developed and validated to interpret these data. These models enable the extraction of physically meaningful information from the measured spectra, including absorber concentrations, and scattering and intrinsic fluorescence properties. The model was applied to the measured spectra, and using a support vector machine classification algorithm based on physical features extracted from the diffuse reflectance spectra, it was found that breast cancer could be diagnosed with a cross-validated sensitivity and specificity of 82% and 92%, respectively, which are substantially better than that obtained using a conventional, empirical algorithm. It was found that malignant tissues had lower hemoglobin oxygen saturation, were more scattering, and had lower beta-carotene concentration, relative to the non-malignant tissues. It was also found that the fluorescence model could successfully extract the intrinsic fluorescence line shape from tissue samples. One limitation of the previous study is that a priori knowledge of the tissue's absorbers and scatterers is required. To address this limitation, and to improve upon the method with which fiber optic probes are designed, an alternate approach was developed. This method used a

  4. Neuroimaging in aphasia treatment research: Issues of experimental design for relating cognitive to neural changes

    PubMed Central

    Rapp, Brenda; Caplan, David; Edwards, Susan; Visch-Brink, Evy; Thompson, Cynthia K.

    2012-01-01

    The design of functional neuroimaging studies investigating the neural changes that support treatment-based recovery of targeted language functions in acquired aphasia faces a number of challenges. In this paper, we discuss these challenges and focus on experimental tasks and experimental designs that can be used to address the challenges, facilitate the interpretation of results and promote integration of findings across studies. PMID:22974976

  5. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.

    2013-11-01

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c 2 mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  6. Submillimeter Measurements of Photolysis Products in Interstellar Ice Analogs: A New Experimental Technique

    NASA Technical Reports Server (NTRS)

    Milam, Stefanie N.; Weaver, Susanna Widicus

    2012-01-01

    Over 150 molecular species have been confirmed in space, primarily by their rotational spectra at millimeter/submillimeter wavelengths, which yield an unambiguous identification. Many of the known interstellar organic molecules cannot be explained by gas-phase chemistry. It is now presumed that they are produced by surface reactions of the simple ices and/or grains observed and released into the gas phase by sublimation, sputtering, etc. Additionally, the chemical complexity found in meteorites and samples returned from comets far surpasses that of the remote detections for the interstellar medium (ISM), comets, and planetary atmospheres. Laboratory simulations of interstellar/cometary ices have found, from the analysis of the remnant residue of the warmed laboratory sample, that such molecules are readily formed; however, it has yet to be determined if they are formed during the warm phase or within the ice during processing. Most analysis of the ice during processing reveals molecular changes, though the exact quantities and species formed are highly uncertain with current techniques due to overwhelming features of simple ices. Remote sensing with high resolution spectroscopy is currently the only method to detect trace species in the ISM and the primary method for comets and icy bodies in the Solar System due to limitations of sample return. We have recently designed an experiment to simulate interstellar/cometary/planetary ices and detect trace species employing the same techniques used for remote observations. Preliminary results will be presented.

  7. An Electronic Engineering Curriculum Design Based on Concept-Mapping Techniques

    ERIC Educational Resources Information Center

    Toral, S. L.; Martinez-Torres, M. R.; Barrero, F.; Gallardo, S.; Duran, M. J.

    2007-01-01

    Curriculum design is a concern in European universities as they face the forthcoming European Higher Education Area (EHEA). This process can be eased by the use of scientific tools such as Concept-Mapping Techniques (CMT), which extract and organize the most relevant information from experts' experience using statistical techniques, and helps a…

  8. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  9. Is the surgical knot tying technique associated with a risk for unnoticed glove perforation? An experimental study

    PubMed Central

    2014-01-01

    Background The issue of safety in surgical procedures has recently been widely and openly discussed at the World Health Organization. The use of latex gloves is the current standard of protection during surgery, provided they remain intact throughout the procedure. The present study was designed to evaluate the rate of glove perforation during a two-hand knot-tying technique using polyester sutures in a controlled experimental study. Methods The hypothesis was that gloves used during a two-hand technique with polyester sutures suffer punctures. We used 150 pairs of gloves during the experiment. Each investigator performed 30 tests, always using double gloving, tying five surgical knots in each test over a custom-made table developed specifically for the experiment. Ten tests were done at a time, at one-week intervals. The Control Group (CG) comprised 30 pairs of intact surgical gloves. The gloves were tested for impermeability by water filling, and leakage was checked at three different times. Perforation rates were analyzed using the chi-square test; a P value less than 0.05 was considered statistically significant. Results During the experiment no gloves were lost to inadvertent puncture or error in performing the impermeability test. No perforations were detected at any time during the impermeability test with the gloves used for sutures. The CG likewise showed no leakage of the test liquid. There was no statistical difference between the groups that underwent suturing, nor between them and the CG. Conclusion Under the studied conditions, the authors' hypothesis could not be proved: there was no damage to the surgical gloves during the entire experiment. The authors believe that the skin abrasions observed on the ulnar side of the little finger, constant throughout the experiment, must be caused by friction. We feel there is no risk of perforation of surgical gloves during a two-hand technique using polyester sutures. PMID:24991234
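
    The chi-square comparison used in the study can be sketched for a 2x2 table; the counts below are hypothetical (the study observed zero perforations, which makes the test degenerate) and serve only to show the computation:

    ```python
    # Pearson chi-square statistic for a 2x2 contingency table
    # (group x perforated/intact), compared against the 0.05 critical
    # value for 1 degree of freedom.
    def chi_square_2x2(table):
        (a, b), (c, d) = table
        n = a + b + c + d
        rows, cols = (a + b, c + d), (a + c, b + d)
        chi2 = 0.0
        for i, r in enumerate(rows):
            for j, col in enumerate(cols):
                expected = r * col / n
                observed = table[i][j]
                chi2 += (observed - expected) ** 2 / expected
        return chi2

    chi2 = chi_square_2x2([[10, 20], [20, 10]])
    significant = chi2 > 3.841   # critical value, df = 1, alpha = 0.05
    ```

    With expected cell counts this small a study would normally prefer Fisher's exact test, but the chi-square form above matches the analysis named in the abstract.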

  10. Question-Answering-Technique to Support Freshman and Senior Engineers in Processes of Engineering Design

    ERIC Educational Resources Information Center

    Winkelmann, Constance; Hacker, Winfried

    2010-01-01

    In two experimental studies, the influence of question-based reflection on the quality of design solutions was investigated. Students and experts with different know-how and professional experience had to design an artefact that should meet a list of requirements. Subsequently, they were asked to answer a system of interrogative questions…

  11. Teaching simple experimental design to undergraduates: do your students understand the basics?

    PubMed

    Hiebert, Sara M

    2007-03-01

    This article provides instructors with guidelines for teaching simple experimental design for the comparison of two treatment groups. Two designs with specific examples are discussed along with common misconceptions that undergraduate students typically bring to the experiment design process. Features of experiment design that maximize power and minimize the effects of interindividual variation, thus allowing reduction of sample sizes, are described. Classroom implementation that emphasizes student-centered learning is suggested, and thought questions, designed to help students discover and name the basic principles of simple experiment design for themselves, are included with an answer key. PMID:17327588
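
    The power considerations discussed in the article can be illustrated with a textbook normal-approximation sketch (not a method from the article itself); it shows how a larger standardized effect, e.g. from reduced interindividual variation, raises power at a fixed sample size:

    ```python
    from math import erf, sqrt

    # Approximate power for a one-sided two-group comparison at
    # alpha = 0.05: power = Phi(d * sqrt(n/2) - z_alpha), where d is the
    # standardized effect size and n the per-group sample size.
    def phi(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def power(d, n, z_alpha=1.645):
        return phi(d * sqrt(n / 2.0) - z_alpha)

    # Halving the within-group standard deviation doubles d, which
    # sharply raises power at the same n.
    p_small_effect = power(d=0.5, n=50)
    p_large_effect = power(d=1.0, n=50)
    ```

    Inverting the same formula for a target power (commonly 0.8) gives the familiar sample-size calculation, which is the quantitative face of the design principles the article asks students to discover.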

  12. Design studies for the transmission simulator method of experimental dynamic substructuring.

    SciTech Connect

    Mayes, Randall Lee; Arviso, Michael

    2010-05-01

    In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.

  13. Monte Carlo techniques for scattering foil design and dosimetry in total skin electron irradiations.

    PubMed

    Ye, Sung-Joon; Pareek, Prem N; Spencer, Sharon; Duan, Jun; Brezovich, Ivan A

    2005-06-01

    Total skin electron irradiation (TSEI) with single fields requires large electron beams having good dose uniformity, dmax at the skin surface, and low bremsstrahlung contamination. To satisfy these requirements, energy degraders and scattering foils have to be specially designed for the given accelerator and treatment room. We used Monte Carlo (MC) techniques based on EGS4 user codes (BEAM, DOSXYZ, and DOSRZ) as a guide in the beam modifier design of our TSEI system. The dosimetric characteristics at the treatment distance of 382 cm source-to-surface distance (SSD) were verified experimentally using a linear array of 47 ion chambers, a parallel plate chamber, and radiochromic film. By matching MC simulations to standard beam measurements at 100 cm SSD, the parameters of the electron beam incident on the vacuum window were determined. Best match was achieved assuming that electrons were monoenergetic at 6.72 MeV, parallel, and distributed in a circular pattern having a Gaussian radial distribution with full width at half maximum = 0.13 cm. These parameters were then used to simulate our TSEI unit with various scattering foils. Two of the foils were fabricated and experimentally evaluated by measuring off-axis dose uniformity and depth doses. A scattering foil, consisting of a 12 x 12 cm2 aluminum plate of 0.6 cm thickness and placed at isocenter perpendicular to the beam direction, was considered optimal. It produced a beam that was flat within +/-3% up to 60 cm off-axis distance, dropped by not more than 8% at a distance of 90 cm, and had an x-ray contamination of <3%. For stationary beams, MC-computed dmax, Rp, and R50 agreed with measurements within 0.5 mm. The MC-predicted surface dose of the rotating phantom was 41% of the dose rate at dmax of the stationary phantom, whereas our calculations based on a semiempirical formula in the literature yielded a drop to 42%. The MC simulations provided the guideline of beam modifier design for TSEI and estimated the

  14. Experimental Studies of Active and Passive Flow Control Techniques Applied in a Twin Air-Intake

    PubMed Central

    Joshi, Shrey; Jindal, Aman; Maurya, Shivam P.; Jain, Anuj

    2013-01-01

    Flow control in twin air-intakes is necessary to improve performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known flow control techniques, active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in a twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to inject flow from the side wall at pitch angles of 90 degrees and 45 degrees. Corotating (parallel) and counterrotating (V-shape) are the configurations of the vane-type VG. It is observed that the VGJ has the potential to change the flow pattern drastically compared to the vane-type VG. When the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degrees, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, the best result among all VGJ cases tested. For the bigger-sized VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% compared to all other VG cases. PMID:23935422

  15. Enhancements of Tow-Steering Design Techniques: Design of Rectangular Panel Under Combined Loads

    NASA Technical Reports Server (NTRS)

    Tatting, Brian F.; Setoodeh, Shahriar; Gurdal, Zafer

    2005-01-01

    An extension to existing design tools that utilize tow-steering is presented which is used to investigate the use of elastic tailoring for a flat panel with a central hole under combined loads of compression and shear. The elastic tailoring is characterized by tow-steering within individual lamina as well as a novel approach based on selective reinforcement, which attempts to minimize compliance through the use of Cellular Automata design concepts. The selective reinforcement designs lack any consideration of manufacturing constraints, so a new tow-steered path definition was developed to translate the prototype selective reinforcement designs into manufacturable plies. The minimum weight design of a flat panel under combined loading was based on a model provided by NASA-Langley personnel and analyzed by STAGS within the OLGA design environment. Baseline designs using traditional straight fiber plies were generated, as well as tow-steered designs which incorporated parallel, tow-drop, and overlap plies within the laminate. These results indicated that the overlap method provided the best improvement with regards to weight and performance as compared to traditional constant stiffness monocoque panels, though the laminates did not measure up to similar designs from the literature using sandwich and isogrid constructions. Further design studies were conducted using various numbers of the selective reinforcement plies at the core and outer surface of the laminate. None of these configurations exhibited notable advantages with regard to weight or buckling performance. This was due to the fact that the minimization of the compliance tended to direct the major stresses toward the center of the panel, which decreased the ability of the structure to withstand loads leading to instability.

  16. An experimental technique of split Hopkinson pressure bar using fiber micro-displacement interferometer system for any reflector.

    PubMed

    Fu, H; Tang, X R; Li, J L; Tan, D W

    2014-04-01

    A novel non-contact measurement technique has been developed for determining the mechanical properties of materials in the Split Hopkinson Pressure Bar (SHPB). Instead of the traditional strain gages mounted on the surfaces of the bars, two shutters were mounted on the bar ends to directly measure interfacial velocity using a Fiber Micro-Displacement Interferometer System for Any Reflector. With the new technique, the integrated stress-strain responses could be determined. The experimental technique was validated by SHPB test simulation and then used to investigate the dynamic response of a brittle explosive material. The results showed that the new experimental technique can be applied to characterize dynamic behavior in SHPB tests. PMID:24784672

  17. Experimental techniques for characterizing the thermo-electro-mechanical shakedown response of SMA wires and tubes

    NASA Astrophysics Data System (ADS)

    Churchill, Christopher B.

    Shape Memory Alloys (SMAs) are a unique and valuable group of active materials. NiTi, the most popular SMA, has a power density orders of magnitude greater than any other known material, making it valuable in the medical and transportation industries where weight and space are at a premium. In the nearly half-century since its discovery, the adoption of NiTi has been slowed primarily by the engineering difficulties associated with its use: strong thermal coupling, material level instabilities, and rapid shakedown of material properties during cycling. Material properties change drastically with minute changes in alloy composition, so it is common to require a variety of experiments to fully characterize a new SMA material, all of which must be performed and interpreted with specialized techniques. This thesis collects many of these techniques into a series of characterization experiments, documenting several new phenomena in the process. First, three different alloys of NiTi wire are characterized through differential scanning calorimetry, isothermal tension, and constant load thermal cycling experiments. New techniques are presented for ER measurement and temperature control of SMA wires and temperature measurement of SMA tubes. It is shown that the shakedown of material properties with thermal cycling is not only dependent on the applied load and number of cycles, but has a large association with the direction of phase transformation. Several of these techniques are then applied to a systematic characterization of NiTi tubes in tension, compression, and bending. Particular attention is given to the nucleation and propagation of transformation fronts in tensile specimens. Compression experiments show dramatic asymmetry in the uniaxial response, with compression characterized by a lower transformation strain, higher transformation stress, and uniform transformations (no fronts). A very simple SMA actuator model is introduced. After identifying the relevant non

  18. Optimization of fast disintegration tablets using pullulan as diluent by central composite experimental design.

    PubMed

    Patel, Dipil; Chauhan, Musharraf; Patel, Ravi; Patel, Jayvadan

    2012-03-01

    The objective of this work was to apply central composite experimental design to investigate the main and interaction effects of formulation parameters in optimizing a novel fast disintegration tablet formulation using pullulan as diluent. A face-centered central composite experimental design was employed to optimize the fast disintegration tablet formulation. The variables studied were the concentrations of diluent (pullulan, X(1)), superdisintegrant (sodium starch glycolate, X(2)), and direct compression aid (spray dried lactose, X(3)). Tablets were characterized for weight variation, thickness, disintegration time (Y(1)), and hardness (Y(2)). Good correlation between the predicted values and the experimental data of the optimized formulation supported the suitability of this methodology for optimizing fast disintegrating tablets using pullulan as a diluent. PMID:23066220
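    A face-centered central composite design like the one described can be enumerated mechanically. The sketch below uses coded levels (-1, 0, +1); the three-factor case matches X(1)-X(3) above, but the function is generic, and real studies usually replicate the center point several times:

```python
from itertools import product

def face_centered_ccd(k):
    """Face-centered central composite design (alpha = 1) for k factors,
    in coded units: 2^k factorial corners, 2k face-center axial points,
    and one center run."""
    runs = [list(p) for p in product([-1, 1], repeat=k)]  # factorial corners
    for i in range(k):                                    # axial (face-center) points
        for a in (-1, 1):
            pt = [0] * k
            pt[i] = a
            runs.append(pt)
    runs.append([0] * k)                                  # center point
    return runs

design = face_centered_ccd(3)
print(len(design))  # 15 runs for three factors
```

Each run would then be decoded into actual concentrations before tableting, and a quadratic response-surface model fitted to Y(1) and Y(2).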

  19. Electro fluido dynamic techniques to design instructive biomaterials for tissue engineering and drug delivery

    NASA Astrophysics Data System (ADS)

    Guarino, Vincenzo; Altobelli, Rosaria; Cirillo, Valentina; Ambrosio, Luigi

    2015-12-01

    A large variety of processes and tools is continuously investigated to discover new solutions for designing instructive materials with controlled chemical, physical and biological properties for tissue engineering and drug delivery. Among them, electro fluido dynamic techniques (EFDTs) are emerging as an interesting strategy, based on highly flexible and low-cost processes, for revisiting traditional biomaterials manufacturing approaches by utilizing electrostatic forces as the driving force for the fabrication of 3D architectures with controlled physical and chemical functionalities to guide in vitro and in vivo cell activities. By a rational selection of polymer solution properties and process conditions, EFDTs allow the production of fibres and/or particles at the micro- and/or nanometric size scale, which may be variously assembled in tailored experimental setups, thus giving the chance to generate a plethora of different 3D devices able to incorporate biopolymers (i.e., proteins, polysaccharides) or active molecules (e.g., drugs) for different applications. Here, we focus on the optimization of basic EFDTs - namely electrospinning, electrospraying and electrodynamic atomization - to develop active platforms (i.e., monocomponent, protein and drug loaded scaffolds and µ-scaffolds) made of synthetic (PCL, PLGA) or natural (chitosan, alginate) polymers. In particular, we investigate how to set material and process parameters to impart specific morphological, biochemical or physical cues to trigger all the fundamental cell-biomaterial and cell-cell cross-talking elicited during regenerative processes, in order to reproduce the complex microenvironment of native or pathological tissues.

  20. Electro fluido dynamic techniques to design instructive biomaterials for tissue engineering and drug delivery

    SciTech Connect

    Guarino, Vincenzo; Altobelli, Rosaria; Cirillo, Valentina; Ambrosio, Luigi

    2015-12-17

    A large variety of processes and tools is continuously investigated to discover new solutions for designing instructive materials with controlled chemical, physical and biological properties for tissue engineering and drug delivery. Among them, electro fluido dynamic techniques (EFDTs) are emerging as an interesting strategy, based on highly flexible and low-cost processes, for revisiting traditional biomaterials manufacturing approaches by utilizing electrostatic forces as the driving force for the fabrication of 3D architectures with controlled physical and chemical functionalities to guide in vitro and in vivo cell activities. By a rational selection of polymer solution properties and process conditions, EFDTs allow the production of fibres and/or particles at the micro- and/or nanometric size scale, which may be variously assembled in tailored experimental setups, thus giving the chance to generate a plethora of different 3D devices able to incorporate biopolymers (i.e., proteins, polysaccharides) or active molecules (e.g., drugs) for different applications. Here, we focus on the optimization of basic EFDTs - namely electrospinning, electrospraying and electrodynamic atomization - to develop active platforms (i.e., monocomponent, protein and drug loaded scaffolds and µ-scaffolds) made of synthetic (PCL, PLGA) or natural (chitosan, alginate) polymers. In particular, we investigate how to set material and process parameters to impart specific morphological, biochemical or physical cues to trigger all the fundamental cell–biomaterial and cell–cell cross-talking elicited during regenerative processes, in order to reproduce the complex microenvironment of native or pathological tissues.

  1. Design of Optical Systems with Extended Depth of Field: An Educational Approach to Wavefront Coding Techniques

    ERIC Educational Resources Information Center

    Ferran, C.; Bosch, S.; Carnicer, A.

    2012-01-01

    A practical activity designed to introduce wavefront coding techniques as a method to extend the depth of field in optical systems is presented. The activity is suitable for advanced undergraduate students since it combines different topics in optical engineering such as optical system design, aberration theory, Fourier optics, and digital image…

  2. Zeta potential of microfluidic substrates: 1. Theory, experimental techniques, and effects on separations.

    PubMed

    Kirby, Brian J; Hasselbrink, Ernest F

    2004-01-01

    This paper summarizes theory, experimental techniques, and the reported data pertaining to the zeta potential of silica and silicon with attention to use as microfluidic substrate materials, particularly for microchip chemical separations. Dependence on cation concentration, buffer and cation type, pH, cation valency, and temperature are discussed. The Debye-Hückel limit, which is often correctly treated as a good approximation for describing the ion concentration in the double layer, can lead to serious errors if it is extended to predict the dependence of zeta potential on the counterion concentration. For indifferent univalent electrolytes (e.g., sodium and potassium), two simple scalings for the dependence of zeta potential on counterion concentration can be derived in the high- and low-zeta limits of the nonlinear Poisson-Boltzmann equation solution in the double layer. It is shown that for most situations relevant to microchip separations, the high-zeta limit is most applicable, leading to the conclusion that the zeta potential on silica substrates is approximately proportional to the logarithm of the molar counterion concentration. The zeta vs. pH dependence measurements from several experiments are compared by normalizing the zeta based on concentration. PMID:14743473

  3. Tensile-shear correlations obtained from shear punch test technique using a modified experimental approach

    NASA Astrophysics Data System (ADS)

    Karthik, V.; Visweswaran, P.; Vijayraghavan, A.; Kasiviswanathan, K. V.; Raj, Baldev

    2009-09-01

    Shear punch testing has been a very useful technique for evaluating the mechanical properties of irradiated alloys using a very small volume of material. The load-displacement data is influenced by the compliance of the fixture components. This paper describes a modified experimental approach in which the compliances of the punch and die components are eliminated. Analysis of the load-displacement data using the modified setup for various alloys such as low carbon steel, SS316, modified 9Cr-1Mo, and 2.25Cr-1Mo indicates that the shear yield strength evaluated at 0.2% offset of normalized displacement relates to the tensile YS as per the von Mises yield relation (σys = 1.73 τys). A universal correlation of the type UTS = m·τmax, where m is a function of the strain hardening exponent, is seen to be obeyed for all the materials in this study. The use of analytical models developed for the blanking process is explored for evaluating the strain hardening exponent from the load-displacement data. This study is directed towards rationalizing the tensile-shear empirical correlations for a more reliable prediction of tensile properties from shear punch tests.
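    The two correlations quoted above translate directly into code. A minimal sketch, with the example shear yield strength chosen for illustration and m left as a user-supplied parameter, since the abstract does not give its functional form:

```python
import math

def tensile_ys_from_shear(tau_ys):
    """Von Mises yield relation: sigma_ys = sqrt(3) * tau_ys (~1.73 * tau_ys)."""
    return math.sqrt(3) * tau_ys

def uts_from_shear(tau_max, m):
    """Universal correlation UTS = m * tau_max; m depends on the
    strain-hardening exponent (functional form not given in the abstract)."""
    return m * tau_max

# e.g. a measured shear yield strength of 200 MPa:
print(round(tensile_ys_from_shear(200.0), 1))  # 346.4 (MPa)
```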

  4. Experimental technique for observing free oscillation of a spherical gas bubble in highly viscous liquids.

    NASA Astrophysics Data System (ADS)

    Nakajima, Takehiro; Ando, Keita

    2015-11-01

    An experimental technique is developed to observe free oscillations of a spherical gas bubble in highly viscous liquids. It is demonstrated that focusing a nanosecond laser pulse of wavelength 532 nm and energy up to 1.5 mJ leads to the formation of a spherical gaseous bubble, not a vaporous bubble (which quickly condenses back to the liquid), whose equilibrium radius is up to 200 microns in glycerin saturated with gases at room temperature. The subsequent free oscillations of the spherical gas bubble are visualized using a high-speed camera. Since the oscillation periods are short enough to ignore bubble translation under gravity and mass transfer out of the bubble, the observed bubble dynamics can be compared to nonlinear and linearized Rayleigh-Plesset-type calculations that account for heat conduction and acoustic radiation as well as the liquid viscosity. In this presentation, we report on measurements over a range of viscosities and comparisons to the theory to quantify damping mechanisms in the bubble dynamics.
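    As a rough order-of-magnitude check on such oscillations, the linearized natural frequency of a spherical gas bubble can be estimated from the Minnaert formula. This sketch neglects surface tension and viscous damping; the glycerin density, ambient pressure, and polytropic exponent are assumed values, not taken from the abstract:

```python
import math

def minnaert_frequency(R0, p0=101325.0, rho=1260.0, gamma=1.4):
    """Linearized (Minnaert) natural frequency of a spherical gas bubble,
    neglecting surface tension and viscosity.
    rho = 1260 kg/m^3 is an assumed density for glycerin."""
    return math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * R0)

# A bubble of 200-micron equilibrium radius, the upper end of the quoted range:
f = minnaert_frequency(200e-6)
print(f"{f/1000:.1f} kHz")
```

The full Rayleigh-Plesset comparison in the paper adds heat conduction, acoustic radiation, and liquid viscosity on top of this linearized estimate.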

  5. An Experimentally Validated Numerical Modeling Technique for Perforated Plate Heat Exchangers

    PubMed Central

    Nellis, G. F.; Klein, S. A.; Zhu, W.; Gianchandani, Y.

    2010-01-01

    Cryogenic and high-temperature systems often require compact heat exchangers with a high resistance to axial conduction in order to control the heat transfer induced by axial temperature differences. One attractive design for such applications is a perforated plate heat exchanger that utilizes high conductivity perforated plates to provide the stream-to-stream heat transfer and low conductivity spacers to prevent axial conduction between the perforated plates. This paper presents a numerical model of a perforated plate heat exchanger that accounts for axial conduction, external parasitic heat loads, variable fluid and material properties, and conduction to and from the ends of the heat exchanger. The numerical model is validated by experimentally testing several perforated plate heat exchangers that are fabricated using microelectromechanical systems based manufacturing methods. This type of heat exchanger was investigated for potential use in a cryosurgical probe. One of these heat exchangers included perforated plates with integrated platinum resistance thermometers. These plates provided in situ measurements of the internal temperature distribution in addition to the temperature, pressure, and flow rate measured at the inlet and exit ports of the device. The platinum wires were deposited between the fluid passages on the perforated plate and are used to measure the temperature at the interface between the wall material and the flowing fluid. The experimental testing demonstrates the ability of the numerical model to accurately predict both the overall performance and the internal temperature distribution of perforated plate heat exchangers over a range of geometry and operating conditions. The parameters that were varied include the axial length, temperature range, mass flow rate, and working fluid. PMID:20976021

  6. Shape and Surface: The challenges and advantages of 3D techniques in innovative fashion, knitwear and product design

    NASA Astrophysics Data System (ADS)

    Bendt, E.

    2016-07-01

    The presentation shows what kinds of problems fashion and textile designers face in 3D knitwear design, especially regarding fashionable flat-knit styles, and how they can use different kinds of techniques and processes to generate new types of 3D designs and structures. To create really new things we have to overcome standard development methods and traditional thinking, and should start to open our minds again to the material itself in order to generate new advanced textile solutions. This paper mainly introduces results of research projects worked out in the master program “Textile Produkte” during lectures in “Innovative Product Design” and “Experimental Knitting”.

  7. Simulation and Prototype Design of Variable Step Angle Techniques Based Asteroid Deflection for Future Planetary Mission

    NASA Astrophysics Data System (ADS)

    Sathiyavel, C.

    2016-07-01

    Asteroids are minor planets, especially those of the inner Solar System. The most desirable asteroids among those that cross geosynchronous orbit are the carbonaceous C-type asteroids, which are deemed by the astronomy community to have a planetary protection categorization of unrestricted Earth return. The mass of a near-Earth asteroid (assumed spherical) varies with its diameter, here from 2 m to 10 m, with corresponding densities from 1.9 g/cm3 to 3.8 g/cm3. For example, a 6.5-m diameter asteroid with a density of 2.8 g/cm3 has a mass of order 400,000 kg. If such an asteroid were to fall on Earth, great destruction would result when the inclination angles of the Earth and the asteroid coincide. The proposed work concerns how this danger from an asteroid of the above mass could be averted in the near future. The present work is the simulation and prototype design of a variable step angle techniques (VASAT) based asteroid deflection for a future planetary mission. The proposed method is compared with a previous method; it is very useful for directing the ion velocity at the asteroid surface in several directions at a static position of the asteroid deviation mission (ADM). The deviation angle is varied from α1 to α2 with the help of the variable step angle technique, which comprises a stepper motor attached to an ion propulsion module system. The VASAT module is located at the top edge of the three-axis stabilized method in the ADM. The three-axis stabilized method includes devices such as a gyroscope sensor, an Arduino microcontroller system and ion propulsion. The Arduino microcontroller system determines the orientation from the gyroscope sensor, then uses the ion propulsion modules to control the required motion (pitch, yaw and roll attitude) of the ADM. The exhaust thrust value is 1500 mN and the velocity is 10,000 m/s [from simulation results; the experimental output is smaller because low-quality components were used in the research lab]. The propulsion technique is also used at a static position of the ADM mission [both
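    The quoted example mass is easy to verify; a sketch of the spherical-asteroid mass calculation:

```python
import math

def spherical_asteroid_mass(diameter_m, density_g_cm3):
    """Mass of a spherical asteroid: (4/3) * pi * r^3 * density."""
    r = diameter_m / 2.0
    volume_m3 = (4.0 / 3.0) * math.pi * r**3
    return volume_m3 * density_g_cm3 * 1000.0  # g/cm^3 -> kg/m^3

m = spherical_asteroid_mass(6.5, 2.8)
print(f"{m:.0f} kg")  # ~4e5 kg, matching the figure quoted in the abstract
```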

  8. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  9. Using an Experimental Design--Just the Thing for That Rainy Day.

    ERIC Educational Resources Information Center

    Donlan, Dan

    1986-01-01

    Presents "fail-safe" lessons for emergencies and substitutes. Describes an experimental design with six steps, designed to help teachers teach students some things about themselves, such as whether boys are better spellers than girls. Offers other examples from the classroom. (EL)

  10. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  11. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  12. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  13. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  14. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two output drone flight control system.

  15. Experimental designs and their recent advances in set-up, data interpretation, and analytical applications.

    PubMed

    Dejaegher, Bieke; Heyden, Yvan Vander

    2011-09-10

    In this review, the set-up and data interpretation of experimental designs (screening, response surface, and mixture designs) are discussed. Advanced set-ups considered are the application of D-optimal and supersaturated designs as screening designs. Advanced data interpretation approaches discussed are an adaptation of the algorithm of Dong and the estimation of factor effects from supersaturated design results. Finally, some analytical applications in separation science, on the one hand, and formulation-, product-, or process optimization, on the other, are discussed. PMID:21632194

  16. Combining simulation techniques and design expertise in a renewable energy system design package, RESSAD

    SciTech Connect

    Jennings, S.U.; Pryor, T.L.; Remmer, D.P.

    1996-10-01

    Computer simulation is an increasingly popular tool for determining the most suitable renewable energy system type, design and control for an isolated community or homestead. However for the user without any expertise in system design, the complicated process of system component and control selection using computer simulation takes on a trial and error approach. Our renewable energy system design package, RESSAD, has been developed to simulate a wide range of renewable power supply systems, and to go beyond system simulation, by combining design expertise with the simulation model. The knowledge of the system designer is incorporated into the package through a range of analysis tools that assist in the selection process, without removing or restricting individual choices. The system selection process is analysed from the early stages of renewable resource assessment to the final evaluation of the results from a simulation of the chosen system. The approach of the RESSAD package in this selection process is described and its use is illustrated by two case studies in Western Australia. 11 refs., 3 tabs.

  17. Increasing the precision and accuracy of top-loading balances: application of experimental design.

    PubMed

    Bzik, T J; Henderson, P B; Hobbs, J P

    1998-01-01

    The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
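    The core idea of weighing several objects at once and solving the resulting linear system can be illustrated with a toy scheme. The circulant 0/1 design and the masses below are hypothetical illustrations, not the paper's modified Plackett-Burman scheme, but like it they never place more than five of the eight objects on the pan at once:

```python
import numpy as np

# Hypothetical masses (g) for eight objects -- illustrative values only.
true_w = np.array([10.0, 12.5, 9.8, 11.2, 10.7, 13.1, 9.4, 12.0])

# One weighing per row; a 1 marks an object placed on the pan. This
# circulant scheme weighs four objects at a time and is full rank, so
# eight weighings determine all eight masses.
base = [1, 1, 1, 0, 1, 0, 0, 0]
X = np.array([np.roll(base, i) for i in range(8)])

y = X @ true_w                 # the eight recorded pan totals
est = np.linalg.solve(X, y)    # recover the individual masses
print(np.allclose(est, true_w))  # True
```

Because each object appears in several weighings, random scale noise averages down across equations, which is the source of the precision gain over one-object-per-weighing.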

  18. Optimizing the spectrofluorimetric determination of cefdinir through a Taguchi experimental design approach.

    PubMed

    Abou-Taleb, Noura Hemdan; El-Wasseef, Dalia Rashad; El-Sherbiny, Dina Tawfik; El-Ashry, Saadia Mohamed

    2016-05-01

    The aim of this work is to optimize a spectrofluorimetric method for the determination of cefdinir (CFN) using the Taguchi method. The proposed method is based on the oxidative coupling reaction of CFN and cerium(IV) sulfate. The quenching effect of CFN on the fluorescence of the produced cerous ions is measured at an emission wavelength (λem) of 358 nm after excitation (λex) at 301 nm. A Taguchi L9 (3^4) orthogonal array was designed to determine the optimum reaction conditions. The results were analyzed using the signal-to-noise (S/N) ratio and analysis of variance (ANOVA). The optimal experimental conditions obtained from this study were 1 mL of 0.2% MBTH, 0.4 mL of 0.25% Ce(IV), a reaction time of 10 min, and methanol as the diluting solvent. The calibration plot displayed a good linear relationship over the range 0.5-10.0 µg/mL. The proposed method was successfully applied to the determination of CFN in bulk powder and pharmaceutical dosage forms, and the results are in good agreement with those obtained using the comparison method. Finally, the Taguchi method provided a systematic and efficient methodology for this optimization, with considerably less effort than would be required by other optimization techniques. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26456088
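    The L9 (3^4) array and the signal-to-noise analysis it feeds can be sketched as follows. The replicate responses are hypothetical, and a larger-is-better S/N is shown purely as an example; the appropriate S/N form (larger-, smaller-, or nominal-is-best) depends on the response being optimized:

```python
import math

# Standard Taguchi L9(3^4) orthogonal array (levels coded 1-3):
# four three-level factors studied in nine runs instead of 3^4 = 81.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio (dB), larger-is-better form."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

# Replicate responses for one run (hypothetical readings):
print(round(sn_larger_is_better([42.0, 45.0, 43.5]), 2))
```

In each column of L9 every level appears exactly three times, and every level pair of any two columns appears exactly once, which is what lets main effects be estimated from only nine runs.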

  19. Molecular assay optimized by Taguchi experimental design method for venous thromboembolism investigation.

    PubMed

    Celani de Souza, Helder Jose; Moyses, Cinthia B; Pontes, Fabrício J; Duarte, Roberto N; Sanches da Silva, Carlos Eduardo; Alberto, Fernando Lopes; Ferreira, Ubirajara R; Silva, Messias Borges

    2011-01-01

    Two mutations - Factor V Leiden (1691G > A) and 20210G > A on the prothrombin gene - are key risk factors for a frequent and potentially fatal disorder called venous thromboembolism. These molecular alterations can be investigated using real-time polymerase chain reaction (PCR) with Fluorescence Resonance Energy Transfer (FRET) probes and distinct DNA pools for both factors. The objective of this paper is to present an application of the Taguchi experimental design method to determine the best parameter adjustments of a molecular assay process, in order to obtain the best diagnostic result for venous thromboembolism investigation. The complete process contains six three-level factors, which would demand 3^6 = 729 experiments with a full factorial array. In this research, a Taguchi L27 orthogonal array is chosen to optimize the analysis and reduce the number of experiments to 27 without degrading the accuracy of the final result. The application of this method can lessen the time and cost necessary to achieve the best operating condition for a required performance. The results were proven in practice and confirm that the Taguchi method offers a good approach for improving clinical assay efficiency and effectiveness, even though clinical diagnostics can be based on qualitative techniques. PMID:21867748

  20. Single-Case Experimental Designs to Evaluate Novel Technology-Based Health Interventions

    PubMed Central

    Cassidy, Rachel N; Raiff, Bethany R

    2013-01-01

    Technology-based interventions to promote health are expanding rapidly. Assessing the preliminary efficacy of these interventions can be achieved by employing single-case experiments (sometimes referred to as n-of-1 studies). Although single-case experiments are often misunderstood, they offer excellent solutions to address the challenges associated with testing new technology-based interventions. This paper provides an introduction to single-case techniques and highlights advances in developing and evaluating single-case experiments, which help ensure that treatment outcomes are reliable, replicable, and generalizable. These advances include quality control standards, heuristics to guide visual analysis of time-series data, effect size calculations, and statistical analyses. They also include experimental designs to isolate the active elements in a treatment package and to assess the mechanisms of behavior change. The paper concludes with a discussion of issues related to the generality of findings derived from single-case research and how generality can be established through replication and through analysis of behavioral mechanisms. PMID:23399668

  1. Optimization of experimental design in fMRI: a general framework using a genetic algorithm.

    PubMed

    Wager, Tor D; Nichols, Thomas E

    2003-02-01

    This article describes a method for selecting design parameters and a particular sequence of events in fMRI so as to maximize statistical power and psychological validity. Our approach uses a genetic algorithm (GA), a class of flexible search algorithms that optimize designs with respect to single or multiple measures of fitness. Two strengths of the GA framework are that (1) it operates with any sort of model, allowing for very specific parameterization of experimental conditions, including nonstandard trial types and experimentally observed scanner autocorrelation, and (2) it is flexible with respect to fitness criteria, allowing optimization over known or novel fitness measures. We describe how genetic algorithms may be applied to experimental design for fMRI, and we use the framework to explore the space of possible fMRI design parameters, with the goal of providing information about optimal design choices for several types of designs. In our simulations, we considered three fitness measures: contrast estimation efficiency, hemodynamic response estimation efficiency, and design counterbalancing. Although there are inherent trade-offs between these three fitness measures, GA optimization can produce designs that outperform random designs on all three criteria simultaneously. PMID:12595184
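    The GA machinery described above can be sketched in miniature. The fitness function below is a toy counterbalancing proxy (how often consecutive trial types alternate), not the contrast- or HRF-estimation-efficiency measures used in the paper, and all parameter values are illustrative:

```python
import random

random.seed(1)

def fitness(seq):
    """Toy stand-in for a design fitness: count of alternations between
    consecutive trial types (a crude counterbalancing proxy)."""
    return sum(a != b for a, b in zip(seq, seq[1:]))

def ga(n_events=24, pop_size=40, generations=60, p_mut=0.05):
    """Minimal genetic algorithm over binary event sequences."""
    pop = [[random.randint(0, 1) for _ in range(n_events)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_events)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(fitness(best))  # maximum possible is 23 alternations
```

In the real framework, `fitness` would score a candidate event sequence under the assumed hemodynamic model, optionally combining several criteria into one weighted measure.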

  2. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    NASA Astrophysics Data System (ADS)

    Baldwin, K. A.

    2014-12-01

    Tektites are small stones formed from rapidly cooling drops of molten rock ejected from high-velocity asteroid impacts with the Earth, which freeze into a myriad of shapes during flight. Many splash-form tektites have an elongated or dumb-bell shape owing to their rotation prior to solidification[1]. Here we present a novel method for creating 'artificial tektites' from spinning drops of molten wax, using diamagnetic levitation to suspend the drops[2]. We find that the solid wax models produced this way are the stable equilibrium shapes of a spinning liquid droplet held together by surface tension. In addition to the geophysical interest in tektite formation, the stable equilibrium shapes of liquid drops have implications for many physical phenomena, covering a wide range of length scales, from nuclear physics (e.g. in studies of rapidly rotating atomic nuclei), to astrophysics (e.g. in studies of the shapes of astronomical bodies such as asteroids, rapidly rotating stars and event horizons of rotating black holes). For liquid drops bound by surface tension, analytical and numerical methods predict a series of stable equilibrium shapes with increasing angular momentum. Slowly spinning drops have an oblate-like shape. With increasing angular momentum these shapes become secularly unstable to a series of triaxial pseudo-ellipsoids that then evolve into a family of two-lobed 'dumb-bell' shapes as the angular momentum is increased still further. Our experimental method allows accurate measurements of the drops to be taken, which are useful to validate numerical models. This method has provided a means for observing tektite formation, and has additionally confirmed experimentally the stable equilibrium shapes of liquid drops, distinct from the equivalent shapes of rotating astronomical bodies. Potentially, this technique could be applied to observe the non-equilibrium dynamic processes that are also important in real tektite formation, involving, e.g. viscoelastic

  3. Designing for Damage: Robust Flight Control Design using Sliding Mode Techniques

    NASA Technical Reports Server (NTRS)

    Vetter, T. K.; Wells, S. R.; Hess, Ronald A.; Bacon, Barton (Technical Monitor); Davidson, John (Technical Monitor)

    2002-01-01

    A brief review of sliding mode control is undertaken, with particular emphasis upon the effects of neglected parasitic dynamics. Sliding mode control design is interpreted in the frequency domain. The inclusion of asymptotic observers and control 'hedging' is shown to reduce the effects of neglected parasitic dynamics. An investigation into the application of observer-based sliding mode control to the robust longitudinal control of a highly unstable aircraft is described. The sliding mode controller is shown to exhibit stability and performance robustness superior to that of a classical loop-shaped design when significant changes in vehicle and actuator dynamics are employed to model airframe damage.
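The sliding-surface idea in this abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's design: the double-integrator plant, the gains `k` and `lam`, the boundary-layer width `phi`, and the disturbance are all assumed for demonstration.

```python
import math

def simulate_smc(k=5.0, lam=2.0, phi=0.05, dt=1e-3, t_end=10.0):
    """Boundary-layer sliding mode control of a double integrator x'' = u + d.
    Sliding surface s = x' + lam*x; control u = -k*sat(s/phi).  The boundary
    layer of width phi trades a small residual error for reduced chattering."""
    x, v = 1.0, 0.0                          # initial position and velocity
    for step in range(int(t_end / dt)):
        t = step * dt
        s = v + lam * x                      # distance from the sliding surface
        sat = max(-1.0, min(1.0, s / phi))   # saturated switching term
        d = 0.5 * math.sin(2 * math.pi * t)  # bounded matched disturbance, |d| < k
        u = -k * sat
        v += (u + d) * dt                    # semi-implicit Euler step
        x += v * dt
    return x, v
```

Because the switching gain dominates the disturbance bound, the state is driven into the boundary layer and held near the origin despite the unmodeled forcing.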

  4. A technique for optimally designing engineering structures with manufacturing tolerances accounted for

    NASA Astrophysics Data System (ADS)

    Tabakov, P. Y.; Walker, M.

    2007-01-01

    Accurate optimal design solutions for most engineering structures present considerable difficulties due to the complexity and multi-modality of the functional design space. The situation is made even more complex when potential manufacturing tolerances must be accounted for in the optimizing process. The present study provides an in-depth analysis of the problem, and then a technique for determining the optimal design of engineering structures, with manufacturing tolerances in the design variables accounted for, is proposed and demonstrated. The examples used to demonstrate the technique involve the design optimization of simple fibre-reinforced laminated composite structures. The technique is simple, easy to implement and, at the same time, very efficient. It is assumed that the probability of any tolerance value occurring within the tolerance band, compared with any other, is equal, and thus it is a worst-case scenario approach. In addition, the technique is non-probabilistic. A genetic algorithm with fitness sharing, including a micro-genetic algorithm, has been found to be very suitable and is implemented in the technique. The numerical examples presented in the article deal with buckling load design optimization of a laminated angle-ply plate, and evaluation of the maximum burst pressure in a thick laminated anisotropic pressure vessel. Both examples clearly demonstrate the impact of manufacturing tolerances on the overall performance of a structure and emphasize the importance of accounting for such tolerances in the design optimization phase. This is particularly true of the pressure vessel. The results show that when the example tolerances are accounted for, the maximum design pressure is reduced by 60.2% (in the case of a single layer vessel), and when five layers are specified, if the nominal fibre orientations are implemented and the example tolerances are incurred during fabrication, the actual design pressure could be 64% less than predicted.
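The worst-case treatment of tolerances described above can be illustrated with a toy one-variable problem. The performance function below is hypothetical (not a laminate model); it only shows how optimizing the worst performance over the tolerance band can select a different design than optimizing the nominal performance.

```python
import math

def performance(theta_deg):
    """Hypothetical normalized performance (e.g. a buckling load factor)
    as a smooth, multi-modal function of one ply angle in degrees."""
    t = math.radians(theta_deg)
    return math.sin(2 * t) + 0.3 * math.sin(6 * t)

def worst_case(theta_deg, tol=5.0, n=21):
    """Worst performance anywhere in the band theta +/- tol, sampled densely:
    a non-probabilistic, worst-case-scenario treatment of the tolerance."""
    return min(performance(theta_deg - tol + 2 * tol * i / (n - 1))
               for i in range(n))

angles = [0.1 * i for i in range(901)]        # candidate designs, 0..90 degrees
nominal_opt = max(angles, key=performance)    # ignores manufacturing tolerances
robust_opt = max(angles, key=worst_case)      # guarantees the tolerance band
```

By construction the robust optimum can never have a worse guaranteed (band-minimum) performance than the nominal optimum, which is the point of the approach.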

  5. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique.

    PubMed

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-01-01

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and the measurements of polarization curves. The test result shows that the galvanic current noise can provide detailed information about the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate. However, the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method show a linear correlation in chloride-containing saturated Ca(OH)₂ solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate the localized corrosion in reinforced concrete. PMID:27618054
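The ER side of such a measurement rests on a simple relation between coupon resistance and remaining thickness. A sketch under the idealized assumption of uniform metal loss (the abstract's point is precisely that this under-reads localized attack); the coupon geometry and values are illustrative:

```python
def er_metal_loss(r0, r, t0):
    """Electrical-resistance estimate of uniform metal loss for a strip
    coupon.  Since R = rho*L/(w*t), the remaining thickness is
    t = t0 * r0 / r and the corrosion depth is t0 * (1 - r0/r).
    Assumes uniform loss: localized pitting barely changes R, so the
    ER method underestimates the true rate in that regime."""
    return t0 * (1.0 - r0 / r)

# a 10% resistance rise on a 0.5 mm coupon -> ~0.045 mm of uniform loss
depth = er_metal_loss(1.0, 1.1, 0.5)
```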

  6. Experimental layering development by indenter technique and application to fault rheology differentiation

    NASA Astrophysics Data System (ADS)

    Gratier, J. P.; Noiriel, C. N.; Renard, F.

    2014-12-01

    Natural deformation of rocks is often associated with differentiation processes leading to irreversible transformations of their microstructure, in turn leading to modifications of their rheological properties. The mechanisms of such processes at work during diagenesis, metamorphism or fault differentiation are poorly known, as they are not easy to reproduce in the laboratory due to the long duration required for most chemically controlled differentiation processes. Here we show that experimental compaction with layering development, similar to what happens in natural deformation, can be obtained in the laboratory by indenter techniques. Samples of plaster mixed with clay and samples of diatomite loosely interbedded with clays were loaded for several months at 40°C (plaster) and 150°C (diatomite) in the presence of their saturated solutions. High-resolution X-ray tomography and SEM studies show that the layering development is a self-organized process. Stress-driven dissolution of the soluble minerals (gypsum in plaster, silica in diatomite) is initiated in the zones initially richer in clays because the kinetics of diffusive mass transfer along the clay/soluble mineral interfaces is much faster than along the healed boundaries of the soluble minerals. The passive concentration of the clay minerals amplifies the localization of the dissolution along some layers oriented perpendicular to the maximum compressive stress component. Conversely, in the areas with an initially low clay content and clustered soluble minerals, dissolution is more difficult as the grain boundaries of the soluble species are healed together. These areas are less deformed and they act as rigid objects that concentrate the dissolution near their boundaries, thus amplifying the differentiation. Applications to fault processes are discussed: i) localized pressure solution and sealing processes may lead to fault rheology differentiation with a partition between two end

  7. Integrated flight/propulsion control design for a STOVL aircraft using H-infinity control design techniques

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Ouzts, Peter J.

    1991-01-01

    Results are presented from an application of H-infinity control design methodology to a centralized integrated flight propulsion control (IFPC) system design for a supersonic Short Takeoff and Vertical Landing (STOVL) fighter aircraft in transition flight. The emphasis is on formulating the H-infinity control design problem such that the resulting controller provides robustness to modeling uncertainties and model parameter variations with flight condition. Experience gained from a preliminary H-infinity based IFPC design study performed earlier is used as the basis to formulate the robust H-infinity control design problem and improve upon the previous design. Detailed evaluation results are presented for a reduced-order controller obtained from the improved H-infinity control design, showing that the design meets the specified nominal performance objectives and provides stability robustness for variations in plant dynamics with changes in aircraft trim speed within the transition flight envelope. A controller scheduling technique which accounts for changes in plant control effectiveness with variation in trim conditions is developed, and off-design performance results are presented.

  8. Determination of hydroxy acids in cosmetics by chemometric experimental design and cyclodextrin-modified capillary electrophoresis.

    PubMed

    Liu, Pei-Yu; Lin, Yi-Hui; Feng, Chia Hsien; Chen, Yen-Ling

    2012-10-01

    A CD-modified CE method was established for quantitative determination of seven hydroxy acids in cosmetic products. This method involved chemometric experimental design aspects, including fractional factorial design and central composite design. Chemometric experimental design was used to enhance the method's separation capability and to explore the interactions between parameters. Compared with the traditional approach of investigating multiple parameters individually, the chemometric experimental design method was less time-consuming and lower in cost. In this study, the influences of three experimental variables (phosphate concentration, surfactant concentration, and methanol percentage) on the experimental response were investigated by applying a chromatographic resolution statistic function. The optimized conditions were as follows: a running buffer of 150 mM phosphate solution (pH 7) containing 0.5 mM CTAB, 3 mM γ-CD, and 25% methanol; 20 s sample injection at 0.5 psi; a separation voltage of -15 kV; a temperature of 25°C; and UV detection at 200 nm. The seven hydroxy acids were well separated in less than 10 min. The LOD (S/N = 3) was 625 nM for both salicylic acid and mandelic acid. The correlation coefficient of the regression curve was greater than 0.998. The RSD and relative error values were all less than 9.21%. After optimization and validation, this simple and rapid analysis method was considered to be established and was successfully applied to several commercial cosmetic products. PMID:22996609
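A central composite design of the kind used here is straightforward to generate. The sketch below builds a face-centred design in coded units for three factors (e.g. phosphate concentration, surfactant concentration, methanol percentage); the choice of alpha and the number of centre replicates are illustrative, not the paper's actual design.

```python
from itertools import product

def central_composite(n_factors=3, alpha=1.0, n_center=3):
    """Face-centred central composite design in coded units: 2^k factorial
    corners, 2k axial points at +/-alpha on each axis, plus centre-point
    replicates.  Enough distinct settings to fit a full quadratic
    response-surface model in all k factors."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for a in (-alpha, alpha):
            pt = [0.0] * n_factors
            pt[i] = a                 # perturb one factor, hold others at centre
            axial.append(pt)
    centre = [[0.0] * n_factors for _ in range(n_center)]
    return corners + axial + centre

runs = central_composite()            # 8 corner + 6 axial + 3 centre = 17 runs
```

Each coded run is then mapped back to real units (mM, percent) before the experiment, which is how a 17-run design replaces a much larger one-factor-at-a-time survey.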

  9. Adaptive combinatorial design to explore large experimental spaces: approach and validation.

    PubMed

    Lejay, L V; Shasha, D E; Palenchar, P M; Kouranov, A Y; Cruikshank, A A; Chou, M F; Coruzzi, G M

    2004-12-01

    Systems biology requires mathematical tools not only to analyse large genomic datasets, but also to explore large experimental spaces in a systematic yet economical way. We demonstrate that two-factor combinatorial design (CD), shown to be useful in software testing, can be used to design a small set of experiments that would allow biologists to explore larger experimental spaces. Further, the results of an initial set of experiments can be used to seed further 'Adaptive' CD experimental designs. As a proof of principle, we demonstrate the usefulness of this Adaptive CD approach by analysing data from the effects of six binary inputs on the regulation of genes in the N-assimilation pathway of Arabidopsis. Using far fewer experiments, this CD approach identified the most important regulatory signals previously discovered by traditional experiments, and also identified previously unknown input interactions. Tests using simulated data show that Adaptive CD suffers from fewer false positives than traditional experimental designs in determining decisive inputs, and succeeds far more often than traditional or random experimental designs in determining when genes are regulated by input interactions. We conclude that Adaptive CD offers an economical framework for discovering dominant inputs and interactions that affect different aspects of genomic outputs and organismal responses. PMID:17051692
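The two-factor CD idea can be made concrete with a greedy construction for six binary inputs, matching the Arabidopsis example's input count. This is a generic sketch of pairwise covering as used in software testing, not the authors' specific algorithm:

```python
from itertools import combinations, product

def pairwise_suite(n_inputs=6):
    """Greedy two-factor combinatorial design for binary inputs: repeatedly
    pick the experiment covering the most still-uncovered
    (input-pair, value-pair) combinations, until every pair of inputs has
    been observed in all four joint settings."""
    pairs = list(combinations(range(n_inputs), 2))
    uncovered = {(i, j, a, b) for i, j in pairs
                 for a, b in product((0, 1), repeat=2)}
    suite = []
    while uncovered:
        best = max(product((0, 1), repeat=n_inputs),
                   key=lambda c: sum((i, j, c[i], c[j]) in uncovered
                                     for i, j in pairs))
        suite.append(best)
        uncovered -= {(i, j, best[i], best[j]) for i, j in pairs}
    return suite

suite = pairwise_suite()   # far fewer runs than the 2**6 = 64 exhaustive design
```

Every pair of inputs is still tested in all four joint settings, which is what lets two-factor interactions be detected from a small suite.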

  10. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design

    PubMed Central

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges. PMID:27458364
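The sequential-hypothesis-testing principle behind ASAP can be sketched with the simplest possible case: two fixed Bernoulli models compared by an online Bayes factor with a stopping threshold. The models, data, and threshold below are all illustrative; the real protocol additionally optimizes the next stimulus, which this sketch omits.

```python
import math

def sequential_test(data, p_m1=0.7, p_m2=0.4, threshold=20.0):
    """Sequential Bayesian model comparison between two Bernoulli models:
    update the log Bayes factor after every observation and stop as soon as
    the evidence passes `threshold` either way.  A classical (static) design
    would always use all the data; the sequential one can stop early."""
    log_bf = 0.0                          # log Bayes factor, model 1 vs model 2
    for n, x in enumerate(data, start=1):
        lik1 = p_m1 if x else 1.0 - p_m1
        lik2 = p_m2 if x else 1.0 - p_m2
        log_bf += math.log(lik1) - math.log(lik2)
        if abs(log_bf) > math.log(threshold):
            return ("model 1" if log_bf > 0 else "model 2"), n
    return "undecided", len(data)

# observations resembling model 1 (p = 0.7)
decision, n = sequential_test([1, 1, 0, 1, 1, 1, 1, 0, 1, 1])
```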

  11. Experimental validation of optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.

    1993-01-01

    An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers: static, dynamic, and LQG. The nominal phase-0 controls-structures interaction evolutionary model (CEM) structure is redesigned to minimize the average control power required to maintain a specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesigned structure, the phase-1 CEM, was assembled and tested against the phase-0 CEM. It is analytically and experimentally demonstrated that the integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control designs is demonstrated.

  12. Experimental hydrogen-fueled automotive engine design data-base project. Volume 2. Main technical report

    SciTech Connect

    Swain, M.R.; Adt, R.R. Jr.; Pappas, J.M.

    1983-05-01

    Operational performance and emissions characteristics of hydrogen-fueled engines are reviewed. The project activities are reviewed including descriptions of the test engine and its components, the test apparatus, experimental techniques, experiments performed and the results obtained. Analyses of other hydrogen engine project data are also presented and compared with the results of the present effort.

  13. Systematic design of output filters for audio class-D amplifiers via Simplified Real Frequency Technique

    NASA Astrophysics Data System (ADS)

    Hintzen, E.; Vennemann, T.; Mathis, W.

    2014-11-01

    In this paper a new filter design concept is proposed and implemented which takes into account the complex loudspeaker impedance. By means of broadband matching techniques, which have been successfully applied in radio technology, we are able to optimize the reconstruction filter to achieve an overall linear frequency response. Here, a passive filter network is inserted between source and load that matches the complex load impedance to the complex source impedance within a desired frequency range. The design and calculation of the filter is usually done using numerical approximation methods which are known as Real Frequency Techniques (RFT). A first approach to the systematic design of reconstruction filters for class-D amplifiers is proposed, using the Simplified Real Frequency Technique (SRFT). Some fundamental considerations are introduced, as well as the benefits and challenges of impedance matching between class-D amplifiers and loudspeakers. Simulation data obtained with MATLAB are presented and support some first conclusions.
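Why the complex loudspeaker impedance matters can be seen by computing the response of a textbook second-order LC output filter into an R + jωL load rather than an ideal resistor. All component values below are assumptions for illustration, not values from the paper:

```python
import cmath

def filter_gain(f, l_f=22e-6, c_f=680e-9, r_spk=8.0, l_spk=30e-6):
    """Voltage gain of a series-L, shunt-C reconstruction filter driving a
    loudspeaker modelled as a resistance in series with a voice-coil
    inductance: the complex load that, per the text, should be matched
    rather than approximated by a plain 8-ohm resistor."""
    w = 2 * cmath.pi * f
    z_load = r_spk + 1j * w * l_spk          # loudspeaker: R + jwL
    z_c = 1 / (1j * w * c_f)                 # shunt capacitor
    z_par = z_load * z_c / (z_load + z_c)    # C in parallel with the load
    return abs(z_par / (1j * w * l_f + z_par))
```

Sweeping `filter_gain` over the audio band and comparing it with the purely resistive case shows the peaking and droop that a matching network designed via the (S)RFT is meant to remove.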

  14. A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Watts, Stephen R.

    1995-01-01

    This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and the H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed-loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced-order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.

  15. Research in advanced formal theorem-proving techniques. [design and implementation of computer languages

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Fikes, R.; Waldinger, R.

    1973-01-01

    The results are summarised of a project aimed at the design and implementation of computer languages to aid in expressing problem-solving procedures in several areas of artificial intelligence, including automatic programming, theorem proving, and robot planning. The principal results of the project were the design and implementation of two complete systems, QA4 and QLISP, and their preliminary experimental use. The various applications of both QA4 and QLISP are given.

  16. Experimental evaluation of fatty acid profiles as a technique to determine dietary composition in benthic elasmobranchs.

    PubMed

    Beckmann, Crystal L; Mitchell, James G; Seuront, Laurent; Stone, David A J; Huveneers, Charlie

    2013-01-01

    Fatty acid (FA) analysis is a tool for dietary investigation that complements traditional stomach content analyses. Controlled feeding experiments were used to determine the extent to which the FA composition of diet is reflected in the liver and muscle tissue of the Port Jackson shark Heterodontus portusjacksoni. Over 10 wk, two groups of sharks were fed prawns or squid, which have distinct FA profiles. The percentage of total FA was significantly different for shark liver and muscle tissue when comparing controls with prawn- and squid-fed sharks. Compared with experimentally fed sharks, control shark muscle and liver had higher levels of 18:1n-9 and 20:2n-9. When comparing prawn- and squid-fed sharks, only liver tissue showed a significant difference in FA profiles. The livers of prawn-fed sharks were comparatively higher in 18:1n-7, 22:5n-3, 20:0, and 18:1n-9, while the squid-fed sharks had higher levels of 16:0 and 22:6n-3. These FAs in shark liver tissue were all reflective of higher amounts in their respective dietary items, demonstrating the conservative transfer of FA from diet to liver tissue. This study shows that liver and muscle FA profiles can be used as indicators of dietary change through the comparison of controls and fed sharks. The timescale of this study may not have been sufficient for capturing the integration of FA into muscle tissue because only liver FA profiles were useful to distinguish between sharks fed different diets. These findings have important implications for sampling design where FA profiles are used to infer dietary preferences. PMID:23434786

  17. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy J

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new "experimentation assessments," 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  19. Experimental designs for evaluation of genetic variability and selection of ancient grapevine varieties: a simulation study.

    PubMed

    Gonçalves, E; St Aubyn, A; Martins, A

    2010-06-01

    Classical methodologies for grapevine selection used in the vine-growing world are generally based on comparisons among a small number of clones. This does not take advantage of the entire genetic variability within ancient varieties, and therefore limits selection challenges. Using the general principles of plant breeding and of quantitative genetics, we propose new breeding strategies, focussed on conservation and quantification of genetic variability by performing a cycle of mass genotypic selection prior to clonal selection. To exploit a sufficiently large amount of genetic variability, initial selection trials must generally be very large. The use of suitable experimental designs for such field trials has been strongly recommended for numerous species. However, their use in initial trials of grapevines has not been studied. With the aim of identifying the most suitable experimental designs for quantification of genetic variability and selection of ancient varieties, a study was carried out to assess through simulation the comparative efficiency of various experimental designs (randomized complete block design, alpha design and row-column (RC) design). The results indicated a greater efficiency for alpha and RC designs, enabling more precise estimates of genotypic variance, greater precision in the prediction of genetic gain and consequently greater efficiency in genotypic mass selection. PMID:19904297
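The simulation logic of such a study can be sketched for the simplest design compared, a randomized complete block: generate genotype, block, and error effects with known variances, then recover the genotypic variance from the ANOVA mean squares via E[MS_geno] = σ²_e + b·σ²_g. All sizes and variances below are illustrative, not the paper's settings.

```python
import random
import statistics

def simulate_trial(n_geno=50, n_blocks=3, var_g=1.0, var_e=2.0, seed=7):
    """Simulate one randomized complete block trial and estimate the
    genotypic variance by the ANOVA method of moments:
    var_g_hat = (MS_geno - MS_error) / n_blocks."""
    rng = random.Random(seed)
    g = [rng.gauss(0, var_g ** 0.5) for _ in range(n_geno)]       # genotype effects
    b = [rng.gauss(0, 1.0) for _ in range(n_blocks)]              # block effects
    y = [[g[i] + b[j] + rng.gauss(0, var_e ** 0.5)
          for j in range(n_blocks)] for i in range(n_geno)]
    gmeans = [statistics.mean(row) for row in y]
    bmeans = [statistics.mean(y[i][j] for i in range(n_geno))
              for j in range(n_blocks)]
    grand = statistics.mean(gmeans)
    ms_geno = n_blocks * sum((m - grand) ** 2 for m in gmeans) / (n_geno - 1)
    ss_err = sum((y[i][j] - gmeans[i] - bmeans[j] + grand) ** 2
                 for i in range(n_geno) for j in range(n_blocks))
    ms_err = ss_err / ((n_geno - 1) * (n_blocks - 1))
    return (ms_geno - ms_err) / n_blocks

var_g_hat = simulate_trial()   # should scatter around the true var_g = 1.0
```

Repeating this over many simulated trials for each candidate design, and comparing the spread of the estimates, is the kind of efficiency comparison the abstract reports.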

  20. Evaluation of the Doppler technique for fat emboli detection in an experimental flow model.

    PubMed

    Wikstrand, Victoria; Linder, Nadja; Engström, Karl Gunnar

    2008-09-01

    Pericardial suction blood (PSB) is known to be contaminated with fat droplets, which may cause embolic brain damage during cardiopulmonary bypass (CPB). This study aimed to investigate the possibility of detecting fat emboli by a Doppler technique. An in vitro flow model was designed, with a main pump, a filter, a reservoir, and an injector. A Hatteland Doppler probe was attached to the circulation loop to monitor particle counts and their size distribution. Suspended soya oil or heat-extracted human wound fat was analyzed in the model. The concentrations of these fat emboli were calibrated to simulate clinical conditions with either a continuous return of PSB to the systemic circulation or when PSB was collected for rapid infusion at CPB weaning. For validation purposes, air and solid emboli were also analyzed. Digital image analysis was performed to characterize the nature of the tested emboli. With soya suspension, there was an apparent dose response between Doppler counts and the nominal fat concentration. This pattern was seen for computed Doppler output (p = .037) but not for Doppler raw counts (p = .434). No correlation was seen when human fat suspensions were tested. Conversely, the image analysis showed an obvious relationship between microscopy particle count and the nominal fat concentration (p < .001). However, the scatter plot between image analysis counting and Doppler recordings showed a random distribution (p = .873). It was evident that the Doppler heavily underestimated the true number of injected fat emboli. When the image analysis data were subdivided into diameter intervals, it was discovered that the few large-size droplets accounted for a majority of the total fat volume compared with the numerous small-size particles (< 10 microm). Our findings strongly suggest that the echogenicity of fat droplets is insufficient for detection by means of the tested Doppler method. PMID:18853829

  1. Experimental investigations of micro-scale flow and heat transfer phenomena by using molecular tagging techniques

    NASA Astrophysics Data System (ADS)

    Hu, Hui; Jin, Zheyan; Nocera, Daniel; Lum, Chee; Koochesfahani, Manoochehr

    2010-08-01

    Recent progress in the development of novel molecule-based flow diagnostic techniques, including molecular tagging velocimetry (MTV) and lifetime-based molecular tagging thermometry (MTT), for simultaneous measurement of multiple important flow variables in micro-flow and micro-scale heat transfer studies is reported. The focus of the work described here is the particular class of molecular tagging tracers that relies on phosphorescence. Instead of using tiny particles, specially designed phosphorescent molecules, which can be turned into long-lasting glowing marks upon excitation by photons of appropriate wavelength, are used as tracers for both flow velocity and temperature measurements. A pulsed laser is used to 'tag' the tracer molecules in the regions of interest, and the tagged molecules are imaged at two successive times within the photoluminescence lifetime of the tracer molecules. The measured Lagrangian displacement of the tagged molecules provides the estimate of the fluid velocity. The simultaneous temperature measurement is achieved by taking advantage of the temperature dependence of the phosphorescence lifetime, which is estimated from the intensity ratio of the tagged molecules in the two acquired phosphorescence images. The implementation and application of the molecular tagging approach for micro-scale thermal flow studies are demonstrated by two examples. The first example is simultaneous flow velocity and temperature measurement inside a microchannel to quantify the transient behavior of electroosmotic flow (EOF) and elucidate the underlying physics associated with the effects of Joule heating on electrokinetically driven flows. The second example examines the time evolution of the unsteady heat transfer and phase-change process inside micro-sized, icing water droplets, which is pertinent to the ice formation and accretion processes as water droplets impinge onto cold wind turbine blades.
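The lifetime-based thermometry step reduces to a simple relation: for single-exponential decay, the ratio of two gated intensities a delay dt apart gives the phosphorescence lifetime, which a separate calibration curve then maps to temperature. A minimal sketch with illustrative numbers:

```python
import math

def lifetime_from_ratio(i1, i2, dt):
    """Phosphorescence lifetime from two gated intensity samples taken a
    delay dt apart, assuming single-exponential decay:
    i2/i1 = exp(-dt/tau)  =>  tau = dt / ln(i1/i2).
    Temperature then follows from a tau(T) calibration curve."""
    return dt / math.log(i1 / i2)

# intensity halving over a 1 ms gate delay -> tau = 1 ms / ln 2 ~ 1.44 ms
tau = lifetime_from_ratio(100.0, 50.0, 1e-3)
```

Because the ratio cancels the unknown initial intensity, the method is insensitive to local tracer concentration and excitation non-uniformity, which is what makes it practical inside microchannels.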

  2. Design and control of energy efficient food drying processes with specific reference to quality; Model development and experimental studies: Moisture movement and dryer design

    SciTech Connect

    Kim, M.; Litchfield, B.; Singh, R.; Liang, H.; Narsimhan, G.; Waananen, K.

    1989-08-01

    The ultimate goal of the project is to develop procedures, techniques, data and other information that will aid in the design of cost effective and energy efficient drying processes that produce high quality foods. This objective has been sought by performing studies to determine the pertinent properties of food products, by developing models to describe the fundamental phenomena of food drying and by testing the models at laboratory scale. Finally, this information is used to develop recommendations and strategies for improved dryer design and control. This volume, Model Development and Experimental Studies, emphasizes the direct and indirect drying processes. An extensive literature review identifies key characteristics of drying models including controlling process resistances, internal mechanisms of moisture movement, structural and thermodynamic assumptions, and methods of model coefficients and material property measurement/determination, model solution, and model validation. Similarities and differences between previous work are noted, and strategies for future drying model development are suggested.

  3. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study

    PubMed Central

    Hashim, H. A.; Abido, M. A.

    2015-01-01

    This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS) considering the most promising evolutionary techniques: gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for the TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive the TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738
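    As a rough illustration of how such gain tuning works, the sketch below runs a minimal particle swarm optimization over a four-gain vector against a stand-in quadratic cost; a real study would replace `cost` with a TRMS simulation returning a tracking-error metric. All names and parameter values here are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(gains):
    # Hypothetical tracking-error cost; stands in for a TRMS simulation run.
    target = np.array([2.0, 0.5, 1.5, 0.8])  # assumed "good" PD gain vector
    return float(np.sum((gains - target) ** 2))

def pso(cost, dim=4, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: inertia + cognitive + social velocity update."""
    pos = rng.uniform(0.0, 3.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

best = pso(cost)
print(best)  # converges near the assumed target gain vector
```

    GSA, ABC, and DE would plug into the same loop structure with different position-update rules, which is what makes this kind of head-to-head comparison straightforward to set up.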

  4. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study.

    PubMed

    Hashim, H A; Abido, M A

    2015-01-01

    This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS) considering the most promising evolutionary techniques: gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for the TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive the TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738

  5. Cost-Optimal Design of a 3-Phase Core Type Transformer by Gradient Search Technique

    NASA Astrophysics Data System (ADS)

    Basak, R.; Das, A.; Sensarma, A. K.; Sanyal, A. N.

    2014-04-01

    3-phase core type transformers are extensively used as power and distribution transformers in power systems, and their cost is a sizable proportion of the total system cost. Therefore they should be designed cost-optimally. The design methodology for reaching cost-optimality has been discussed in detail by authors like Ramamoorty, and in brief in some textbooks on electrical design. The paper gives a method for optimizing the design, in the presence of constraints specified by the customer and the regulatory authorities, through a gradient search technique. The starting point has been chosen within the allowable parameter space, and the steepest-descent path has been followed to convergence. The step length has been judiciously chosen and the program has been maneuvered to avoid local minima. The method appears to be best in that its convergence is quickest amongst the optimization techniques compared.
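    A minimal sketch of the gradient-search idea, assuming the customer/regulatory constraints can be expressed as box bounds and enforced by projection (clipping); the toy quadratic stands in for the transformer cost surface and is not the paper's cost model:

```python
import numpy as np

def grad_descent(f, grad, x0, lower, upper, step=0.1, iters=200):
    """Steepest descent with projection onto box constraints."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lower, upper)
    return x

# Toy stand-in for a cost surface over two design variables
f = lambda x: (x[0] - 1.2) ** 2 + 2 * (x[1] - 0.4) ** 2
grad = lambda x: np.array([2 * (x[0] - 1.2), 4 * (x[1] - 0.4)])

# The bound x[1] >= 0.5 is active at the optimum, mimicking a
# regulatory constraint that binds the cost-optimal design.
x_opt = grad_descent(f, grad, [3.0, 3.0], lower=[0.0, 0.5], upper=[5.0, 5.0])
print(x_opt)  # → approximately [1.2, 0.5]
```

    The step length plays the same role as in the paper: too large and the iterates oscillate, too small and convergence stalls; restarting from several points in the parameter space is a simple way to avoid local minima on non-convex surfaces.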

  6. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1992-01-01

    Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static-to-inlet total pressure ratios were calculated using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  7. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1991-01-01

    Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static-to-inlet total pressure ratios were calculated using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  8. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel

    PubMed Central

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-01-01

    In this Data in Brief paper, a central composite experimental design was planned to collect the surface roughness of an end milling operation of AISI 1045 steel. The surface roughness values are subject to variation arising from several factors. The main objective here was to present a multivariate experimental design and data collection, including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explore the influence of noise factors on the process performance. PMID:26909374
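    A central composite design of the kind planned here is the union of a two-level factorial, axial ("star") points at distance α, and replicated center runs. A small sketch in coded units (the factor count and number of center runs below are illustrative, not those of the study):

```python
import itertools
import numpy as np

def central_composite(k, alpha=None, n_center=4):
    """Coded design matrix for a rotatable CCD in k factors.

    Rotatable axial distance: alpha = (2**k) ** 0.25.
    Rows: 2**k factorial points, 2*k axial points, n_center center runs.
    """
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    factorial = list(itertools.product([-1.0, 1.0], repeat=k))
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return np.array(factorial + axial + center)

# e.g. two coded control factors (say, feed rate and cutting speed)
X = central_composite(2)
print(X.shape)  # → (12, 2): 4 factorial + 4 axial + 4 center runs
```

    Each row is then decoded to physical units, the runs are executed (often in random order), and the measured roughness responses are fitted with a second-order response-surface model.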

  9. The effectiveness of family planning programs evaluated with true experimental designs.

    PubMed Central

    Bauman, K E

    1997-01-01

    OBJECTIVES: This paper describes the magnitude of effects for family planning programs evaluated with true experimental designs. METHODS: Studies that used true experimental designs to evaluate family planning programs were identified and their results subjected to meta-analysis. RESULTS: For the 14 studies with the information needed to calculate effect size, the Pearson r between program and effect variables ranged from -.08 to .09 and averaged .08. CONCLUSIONS: The programs evaluated in the studies considered have had, on average, smaller effects than many would assume and desire. PMID:9146451

  10. Experimental and Numerical Investigations on the Ballistic Performance of Polymer Matrix Composites Used in Armor Design

    NASA Astrophysics Data System (ADS)

    Colakoglu, M.; Soykasap, O.; Özek, T.

    2007-01-01

    Ballistic properties of two different polymer matrix composites used for military and non-military purposes are investigated in this study. Backside deformation and penetration speed are determined experimentally and numerically for Kevlar 29/Polyvinyl Butyral and polyethylene fiber composites, because designing armor for penetration alone is not enough for protection. After the experimental ballistic tests, a model is constructed using the finite element program Abaqus, and the backside deformation and penetration speed are determined numerically. It is found that the experimental and numerical results are in agreement, and that the polyethylene fiber composite has a much better ballistic limit, backside deformation, and penetration speed than the Kevlar 29/Polyvinyl Butyral composite when areal densities are taken into account.

  11. Design of Experimental Data Publishing Software for Neutral Beam Injector on EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Zhang, Xiaodan; Wu, Deyun

    2015-02-01

    Neutral Beam Injection (NBI) is one of the most effective means of plasma heating. Experimental Data Publishing Software (EDPS) was developed to publish experimental data and enable remote monitoring of the NBI system. In this paper, the architecture and implementation of EDPS, including the design of the communication module and the web page display module, are presented. EDPS is developed based on the Browser/Server (B/S) model and works under the Linux operating system. Using the data source and communication mechanism of the NBI Control System (NBICS), EDPS publishes experimental data on the Internet.

  12. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J.; Kaul, R. D.; Wallace, R. G.

    1983-01-01

    This Propagation Handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in some detail, in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. In order to make the Handbook readily usable to many engineers, it has been arranged in two parts. Chapters 2-5 comprise the descriptive part. They deal in some detail with rain systems, rain and attenuation models, depolarization and experimental data. Chapters 6 and 7 make up the design part of the Handbook and may be used almost independently of the earlier chapters. In Chapter 6, the design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. Chapter 7 addresses the questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results.
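    The rain-attenuation prediction techniques in the design chapters typically build on a specific-attenuation power law of the form γ = kR^α (dB/km), with k and α tabulated by frequency and polarization. A hedged sketch; the default coefficients below are merely illustrative of the kind of values tabulated near Ku-band, and a real link budget must use the coefficients for the actual frequency and polarization:

```python
def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.0188, alpha=1.217):
    """Rain attenuation over a path using the power-law form
    gamma = k * R**alpha (dB/km), as in ITU-R P.838-style models.

    k and alpha defaults are illustrative placeholders, not values
    for any specific frequency/polarization.
    """
    gamma = k * rain_rate_mm_h ** alpha  # specific attenuation, dB/km
    return gamma * path_km               # total path attenuation, dB

# 25 mm/h rain over an assumed 5 km effective path length
print(round(rain_attenuation_db(25.0, 5.0), 2))
```

    In practice the physical path length is replaced by an effective path length that accounts for the non-uniform rain cell structure along the Earth-space slant path, which is exactly the kind of refinement the Handbook's design chapters address.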

  13. Granular ripples under rotating flow: a new experimental technique for studying ripples in non-rotating, geophysical applications?

    PubMed

    Thomas, P J; Zoueshtiagh, F

    2005-07-15

    A review of our research investigating a new pattern formation process in granular material underlying a rotating fluid is given. The purpose of this summary is to introduce the phenomenon to the geophysical research community and to draw attention to the potential practical benefits of our new experimental method. To this end, the applied and scientific advantages of the technique over traditional studies employing, for instance, water channels, are discussed for the first time. It is shown here that the system rotation in our new technique does not appear to affect the scaling law expressing the dependence of the ripple-pattern wavelength on the governing independent experimental parameters. This suggests that it may become possible to extrapolate appropriate results from rotating to non-rotating systems and, hence, to geophysical environments. Consequently, our new technique may find applications in the context of geophysical research on the formation of sedimentary granular ripple structures. PMID:16011938

  14. Euromech 260: Advanced non-intrusive experimental techniques in fluid and plasma flows

    NASA Astrophysics Data System (ADS)

    The following topics are discussed: coherent anti-Stokes and elastic Rayleigh scattering; elastic scattering and nonlinear dynamics; fluorescence; molecular tracking techniques and particle image velocimetry.

  15. Using R in experimental design with BIBD: An application in health sciences

    NASA Astrophysics Data System (ADS)

    Oliveira, Teresa A.; Francisco, Carla; Oliveira, Amílcar; Ferreira, Agostinho

    2016-06-01

    Considering the implementation of an Experimental Design, in any field, the experimenter must pay particular attention and look for the best strategies in the following steps: planning the design selection, conducting the experiments, collecting the observed data, and proceeding to analysis and interpretation of results. The focus is on providing both a deep understanding of the problem under research and a powerful experimental process at reduced cost. Mainly thanks to its ability to separate variation sources, the importance of Experimental Design in Health Sciences has long been recognized. Particular attention has been devoted to Block Designs, and more precisely to Balanced Incomplete Block Designs; their relevance stems from the fact that these designs allow simultaneous testing of a number of treatments larger than the block size. Our example refers to a possible study of inter-rater reliability in Parkinson's disease, using the UPDRS (Unified Parkinson's Disease Rating Scale) to test whether there are significant differences between the specialists who evaluate the patients' performances. Statistical studies on this disease were described, for example, in Richards et al. (1994), where the authors investigate the inter-rater reliability of the Unified Parkinson's Disease Rating Scale motor examination. We consider a simulation of a practical situation in which the patients were observed by different specialists and the UPDRS assessment of the impact of Parkinson's disease on patients was recorded. Assigning treatments to the subjects following a particular BIBD(9,24,8,3,2) structure, we illustrate that BIB Designs can be used as a powerful tool to solve emerging problems in this area. Since a structure with repeated blocks allows some block contrasts with minimum variance, see Oliveira et al. (2006), the design with cardinality 12 was selected for the example. R software was used for the computations.
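    The BIBD(9,24,8,3,2) quoted above can be checked against the standard necessary conditions for a BIBD(v, b, r, k, λ): bk = vr (each of the v treatments appears r times), λ(v−1) = r(k−1) (each treatment pair co-occurs λ times), and Fisher's inequality b ≥ v. The helper below is illustrative:

```python
def is_valid_bibd(v, b, r, k, lam):
    """Necessary parameter conditions for a BIBD(v, b, r, k, lambda):
    b*k = v*r, lam*(v-1) = r*(k-1), and Fisher's inequality b >= v.
    (Satisfying these does not by itself guarantee a design exists.)"""
    return b * k == v * r and lam * (v - 1) == r * (k - 1) and b >= v

# The design used in the UPDRS example: 9 treatments in 24 blocks of
# size 3, each treatment replicated 8 times, each pair meeting twice.
print(is_valid_bibd(9, 24, 8, 3, 2))  # → True
```

    Here 24·3 = 72 = 9·8 and 2·8 = 16 = 8·2, so the parameter set is admissible; actually constructing the design (and the repeated-block variant with cardinality 12) is what the R computations in the paper handle.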

  16. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal-information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
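    The combinatorial search can be sketched as a toy subset-selection GA: each individual is a set of well indices, scored by the sum of squared sensitivities over the selected rows of a sensitivity matrix. The operators and the random sensitivity matrix below are made up for the sketch and are not the authors' implementation (in the actual study the sensitivities come from the POD-reduced groundwater model):

```python
import numpy as np

rng = np.random.default_rng(1)

def design_score(wells, J):
    """Maximal-information criterion: sum of squared sensitivities
    over the selected observation wells (rows of J)."""
    return float(np.sum(J[wells] ** 2))

def ga_select(J, n_wells, pop_size=30, gens=60, p_mut=0.2):
    """Toy GA choosing n_wells observation locations (row indices of J)."""
    n = J.shape[0]
    pop = [rng.choice(n, n_wells, replace=False) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: design_score(ind, J), reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(len(elite), 2, replace=False)
            genes = np.union1d(elite[a], elite[b])   # crossover: pool parent wells
            child = rng.choice(genes, n_wells, replace=False)
            if rng.random() < p_mut:                 # mutation: swap in a random well
                child[rng.integers(n_wells)] = rng.integers(n)
                child = np.unique(child)
                if child.size < n_wells:             # duplicate created; resample
                    child = rng.choice(n, n_wells, replace=False)
            children.append(child)
        pop = elite + children
    return np.sort(max(pop, key=lambda ind: design_score(ind, J)))

# 50 candidate locations, 5 unknown parameters; sensitivities made up here
J = rng.normal(size=(50, 5))
best = ga_select(J, n_wells=8)
print(best)
```

    Each `design_score` call here is cheap, but in the full problem it would require a groundwater model run, which is precisely why the authors reduce the model with POD before letting the GA loose.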

  17. Experimental research on No-oil ignition technique of pulverized coal/coal-water-slurry

    SciTech Connect

    Zhou Zhijun; Fan Haojie; Tu Jianhua

    1997-07-01

    With new coal-fired boilers going into operation and the widespread application of substitute fuels such as coal-water-slurry, many oil-fired boilers may stop firing oil. But igniting coal-fired boilers and stabilizing combustion under low load also require a large amount of oil. Published figures show that a 50 MW unit boiler consumes 5 t of oil per start and a 125 MW unit 15 t; a 200 MW unit boiler consumes 50 t of oil per start and 1000 t/year for stabilizing combustion. A 600 MW unit, according to data from the USA, consumes 300 t of oil per start, and 23,300 t of oil are needed per year. So the amount of oil used to ignite coal and stabilize combustion is very considerable. Owing to the importance attached to conserving oil, novel ignition and stabilizing techniques (such as the pulverized-coal pre-combustion chamber technique, the blunt-body burner, the boat-shaped burner, the great-velocity-difference combustion stabilizing technique, the dense-thin-phase combustion stabilizing technique, and the plasma ignition technique) have emerged over the past ten years, and oil consumption for ignition and stabilization has decreased greatly. Among them, only the plasma ignition technique ignites without oil. Although the others can conserve a large amount of oil during ignition and low-load conditions, total oil consumption is still very considerable; moreover, the plasma ignition technique is not suited to coal-water-slurry ignition. Therefore, this paper presents a novel ignition technique: the electrical thermal chamber ignition technique for pulverized coal (PC) and coal-water-slurry (CWS), which retains the advantage of the pre-combustion chamber technique and does not consume oil.

  18. Design and experimental study of high-speed low-flow-rate centrifugal compressors

    SciTech Connect

    Gui, F.; Reinarts, T.R.; Scaringe, R.P.; Gottschlich, J.M.

    1995-12-31

    This paper describes a design and experimental effort to develop small centrifugal compressors for aircraft air-cycle cooling systems and small vapor-compression refrigeration systems (20--100 tons). Efficiency improvements of 25% over current designs are desired. Although centrifugal compressors possess excellent performance at high flow rates, low-flow-rate compressors do not perform acceptably when designed using current approaches. The new compressors must be designed to operate at high rotating speed to retain efficiency. The emergence of the magnetic bearing makes it possible to develop such compressors running at speeds several times higher than those currently prevailing. Several low-flow-rate centrifugal compressors, featuring three-dimensional blades, have been designed, manufactured, and tested in this study. An experimental investigation of compressor flow characteristics and efficiency has been conducted to explore a theory for mini-centrifugal compressors. The effects of the overall impeller configuration, number of blades, and rotational speed on the compressor flow curve and efficiency have been studied. Efficiencies as high as 84% were obtained. The experimental results indicate that the current theory can still be used as a guide, but further development is required for the design of mini-centrifugal compressors.

  19. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.; Crum, Jarrod V.

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, points on an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses.
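    One simple way to generate candidate mixture points satisfying SCCs, linear MCCs, and a nonlinear MCC is rejection sampling over the mixture simplex. The authors used optimal experimental design software rather than this approach; the sketch below only illustrates the constraint structure, with a made-up three-component system and a made-up nonlinear constraint standing in for the composition-dependent SO3 solubility limit:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_mixture(n_comp, sccs, linear_mccs, nonlinear_mcc,
                   n_points, max_tries=100_000):
    """Rejection-sample mixture compositions (components sum to 1)
    satisfying single-component bounds (sccs), linear multi-component
    constraints (g(x) <= 0), and a nonlinear constraint (h(x) <= 0)."""
    pts = []
    for _ in range(max_tries):
        x = rng.dirichlet(np.ones(n_comp))  # uniform on the simplex
        if (all(lo <= x[i] <= hi for i, (lo, hi) in enumerate(sccs))
                and all(g(x) <= 0 for g in linear_mccs)
                and nonlinear_mcc(x) <= 0):
            pts.append(x)
            if len(pts) == n_points:
                break
    return np.array(pts)

# Made-up 3-component example: bounds, one linear MCC, and a nonlinear
# limit on the third component that depends on the rest of the mixture
sccs = [(0.1, 0.8), (0.1, 0.8), (0.0, 0.3)]
linear = [lambda x: x[0] + x[1] - 0.95]          # x0 + x1 <= 0.95
nonlinear = lambda x: x[2] - (0.2 + 0.1 * x[0])  # x2 below a composition-dependent limit
pts = sample_mixture(3, sccs, linear, nonlinear, n_points=20)
print(pts.shape)  # → (20, 3), each row summing to 1
```

    Optimal-design software improves on this by choosing points within the feasible region to optimize a design criterion (and to spread points over the outer layer, inner layer, and center), rather than accepting whatever the sampler happens to produce.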

  20. Experimental techniques for determination of the role of diffusion and convection in crystal growth from solution

    NASA Technical Reports Server (NTRS)

    Zefiro, L.

    1980-01-01

    Various studies of the concentration of the solution around a growing crystal using interferometric techniques are reviewed. A holographic interferometric technique used in laboratory experiments shows that a simple description of the solution based on the assumption of a purely diffusive mechanism appears inadequate, since convection, effective even in reduced columns, always affects the growth.