Science.gov

Sample records for taguchi experimental design

  1. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
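
    The quadratic loss function mentioned above is easy to demonstrate numerically. Below is a minimal Python sketch; the target value, cost coefficient and measurements are hypothetical illustration values, not data from the paper.

```python
# Minimal sketch of Taguchi's quadratic loss function L(y) = k * (y - m)^2,
# where m is the target value and k scales deviation into monetary loss.
# The numbers below are hypothetical illustration values, not from the paper.

def taguchi_loss(y, target, k):
    """Loss incurred when a response y deviates from its target."""
    return k * (y - target) ** 2

target = 10.0      # nominal-is-best target for some material property
k = 2.5            # cost coefficient (currency units per squared deviation)

measurements = [9.6, 10.1, 10.4, 9.8, 10.0]
losses = [taguchi_loss(y, target, k) for y in measurements]

# Average loss reflects both the offset from target and the spread of the data,
# which is why improving reproducibility reduces loss even at a fixed mean.
print("per-part loss:", [round(l, 3) for l in losses])
print("average loss :", round(sum(losses) / len(losses), 3))
```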

  2. Optimizing the spectrofluorimetric determination of cefdinir through a Taguchi experimental design approach.

    PubMed

    Abou-Taleb, Noura Hemdan; El-Wasseef, Dalia Rashad; El-Sherbiny, Dina Tawfik; El-Ashry, Saadia Mohamed

    2016-05-01

    The aim of this work is to optimize a spectrofluorimetric method for the determination of cefdinir (CFN) using the Taguchi method. The proposed method is based on the oxidative coupling reaction of CFN and cerium(IV) sulfate. The quenching effect of CFN on the fluorescence of the produced cerous ions is measured at an emission wavelength (λem) of 358 nm after excitation (λex) at 301 nm. A Taguchi orthogonal array L9 (3^4) was designed to determine the optimum reaction conditions. The results were analyzed using the signal-to-noise (S/N) ratio and analysis of variance (ANOVA). The optimal experimental conditions obtained from this study were 1 mL of 0.2% MBTH, 0.4 mL of 0.25% Ce(IV), a reaction time of 10 min and methanol as the diluting solvent. The calibration plot displayed a good linear relationship over the range 0.5-10.0 µg/mL. The proposed method was successfully applied to the determination of CFN in bulk powder and pharmaceutical dosage forms. The results are in good agreement with those obtained using the comparison method. Finally, the Taguchi method provided a systematic and efficient methodology for this optimization, with considerably less effort than would be required by other optimization techniques. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26456088
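
    As a rough illustration of the kind of layout and statistic the abstract refers to, the sketch below builds an L9 (3^4) orthogonal array by modular arithmetic and computes a generic "larger is better" signal-to-noise ratio. The replicate data are randomly generated placeholders, and the choice of S/N formula is an assumption, not necessarily the one used in the paper.

```python
import numpy as np

# Construct an L9 (3^4) orthogonal array from modular arithmetic:
# four 3-level columns, nine runs, every pair of columns balanced.
rows = np.arange(9)
A = rows // 3
B = rows % 3
C = (A + B) % 3
D = (A + 2 * B) % 3
L9 = np.column_stack([A, B, C, D])          # levels coded 0, 1, 2
print(L9)

# "Larger is better" signal-to-noise ratio for replicated responses y:
# S/N = -10 * log10( mean(1 / y^2) ). Hypothetical replicate data below.
def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

replicates = np.random.default_rng(0).uniform(2.0, 8.0, size=(9, 3))
sn = np.array([sn_larger_is_better(r) for r in replicates])
print("S/N per run:", np.round(sn, 2))
```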

  3. Microcosm assays and Taguchi experimental design for treatment of oil sludge containing high concentration of hydrocarbons.

    PubMed

    Castorena-Cortés, G; Roldán-Carrillo, T; Zapata-Peñasco, I; Reyes-Avila, J; Quej-Aké, L; Marín-Cruz, J; Olguín-Lora, P

    2009-12-01

    Microcosm assays and a Taguchi experimental design were used to assess the biodegradation of an oil sludge produced by a gas processing unit. The study showed that biodegradation of the sludge sample is feasible despite the high level of pollutants and the complexity of the sludge. The physicochemical and microbiological characterization of the sludge revealed a high concentration of hydrocarbons (334,766 ± 7001 mg kg⁻¹ dry matter, d.m.) containing a variety of compounds with between 6 and 73 carbon atoms in their structure, whereas the concentrations of Fe and sulfide were 60,000 and 26,800 mg kg⁻¹ d.m., respectively. A Taguchi L9 experimental design comprising 4 variables (moisture, nitrogen source, surfactant concentration and oxidant agent) at 3 levels was performed, showing that moisture and nitrogen source are the major variables that affect CO2 production and total petroleum hydrocarbon (TPH) degradation. The best experimental treatment yielded a TPH removal of 56,092 mg kg⁻¹ d.m. This treatment was carried out under the following conditions: 70% moisture, no oxidant agent, 0.5% surfactant and NH4Cl as the nitrogen source. PMID:19635663
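
    A common way to identify dominant factors in such an L9 layout is a main-effects ("delta") analysis: average the response at each level of each factor and rank the factors by the spread of those level means. The sketch below shows this on hypothetical responses; the factor names follow the abstract, but the numbers are invented for illustration.

```python
import numpy as np

# Main-effects ("delta") analysis on an L9 layout: for each factor, average the
# response over the runs at each level, then rank factors by the range of those
# level means. Factor names follow the abstract; the response values are
# hypothetical stand-ins for measured TPH removal.
rows = np.arange(9)
L9 = np.column_stack([rows // 3, rows % 3,
                      (rows // 3 + rows % 3) % 3,
                      (rows // 3 + 2 * (rows % 3)) % 3])
factors = ["moisture", "nitrogen source", "surfactant", "oxidant agent"]
response = np.array([41, 48, 45, 52, 56, 50, 38, 44, 40], dtype=float)

for name, column in zip(factors, L9.T):
    level_means = [response[column == lvl].mean() for lvl in (0, 1, 2)]
    delta = max(level_means) - min(level_means)
    print(f"{name:16s} level means {np.round(level_means, 1)}  delta {delta:.1f}")
# The factor with the largest delta dominates the response, mirroring the
# paper's finding that moisture and nitrogen source matter most.
```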

  4. A Taguchi experimental design study of twin-wire electric arc sprayed aluminum coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Johnson, R.W.; Irons, G.; Kratochvil, W.R.; Riggs, W.L. II

    1992-01-01

    An experimental study was conducted on the twin-wire electric arc spraying of aluminum coatings. This aluminum wire system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic experiments. Experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical process parameters in a systematic design of experiments in order to display the range of processing conditions and their effect on the resultant coating. The coatings were characterized by hardness tests, optical metallography, and image analysis. The paper discusses coating qualities with respect to hardness, roughness, deposition efficiency, and microstructure. The study attempts to correlate the features of the coatings with the changes in operating parameters. A numerical model of the process is presented including gas, droplet, and coating dynamics.

  5. A Taguchi experimental design study of twin-wire electric arc sprayed aluminum coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Johnson, R.W.; Irons, G.; Kratochvil, W.R.; Riggs, W.L. II

    1992-08-01

    An experimental study was conducted on the twin-wire electric arc spraying of aluminum coatings. This aluminum wire system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic experiments. Experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical process parameters in a systematic design of experiments in order to display the range of processing conditions and their effect on the resultant coating. The coatings were characterized by hardness tests, optical metallography, and image analysis. The paper discusses coating qualities with respect to hardness, roughness, deposition efficiency, and microstructure. The study attempts to correlate the features of the coatings with the changes in operating parameters. A numerical model of the process is presented including gas, droplet, and coating dynamics.

  6. Parametric Appraisal of Process Parameters for Adhesion of Plasma Sprayed Nanostructured YSZ Coatings Using Taguchi Experimental Design

    PubMed Central

    Mantry, Sisir; Mishra, Barada K.; Chakraborty, Madhusudan

    2013-01-01

    This paper presents the application of the Taguchi experimental design in developing nanostructured yttria stabilized zirconia (YSZ) coatings by the plasma spraying process. It examines the dependence of the adhesion strength of as-sprayed nanostructured YSZ coatings on various process parameters, and the effect of those process parameters on the performance output has been studied using Taguchi's L16 orthogonal array design. Particle velocity prior to impacting the substrate, stand-off distance, and particle temperature are found to be the most significant parameters affecting the bond strength. To achieve retention of the nanostructure, the molten state of the nanoagglomerates (temperature and velocity) has been monitored using a particle diagnostics tool. A maximum adhesion strength of 40.56 MPa was obtained experimentally by selecting the optimum levels of the selected factors. The enhanced bond strength of the nano-YSZ coating may be attributed to higher interfacial toughness due to cracks being interrupted by adherent nanozones. PMID:24288490

  7. Parametric appraisal of process parameters for adhesion of plasma sprayed nanostructured YSZ coatings using Taguchi experimental design.

    PubMed

    Mantry, Sisir; Mishra, Barada K; Chakraborty, Madhusudan

    2013-01-01

    This paper presents the application of the Taguchi experimental design in developing nanostructured yttria stabilized zirconia (YSZ) coatings by the plasma spraying process. It examines the dependence of the adhesion strength of as-sprayed nanostructured YSZ coatings on various process parameters, and the effect of those process parameters on the performance output has been studied using Taguchi's L16 orthogonal array design. Particle velocity prior to impacting the substrate, stand-off distance, and particle temperature are found to be the most significant parameters affecting the bond strength. To achieve retention of the nanostructure, the molten state of the nanoagglomerates (temperature and velocity) has been monitored using a particle diagnostics tool. A maximum adhesion strength of 40.56 MPa was obtained experimentally by selecting the optimum levels of the selected factors. The enhanced bond strength of the nano-YSZ coating may be attributed to higher interfacial toughness due to cracks being interrupted by adherent nanozones. PMID:24288490

  8. Optimization of Wear Behavior of Magnesium Alloy AZ91 Hybrid Composites Using Taguchi Experimental Design

    NASA Astrophysics Data System (ADS)

    Girish, B. M.; Satish, B. M.; Sarapure, Sadanand; Basawaraj

    2016-03-01

    In the present paper, a statistical investigation of the wear behavior of magnesium alloy (AZ91) hybrid metal matrix composites using the Taguchi technique is reported. The composites were reinforced with SiC and graphite particles of average size 37 μm. The specimens were processed by the stir casting route. Dry sliding wear of the hybrid composites was tested on a pin-on-disk tribometer under dry conditions at different normal loads (20, 40, and 60 N), sliding speeds (1.047, 1.57, and 2.09 m/s), and compositions (1, 2, and 3 wt pct of each of SiC and graphite). The design of experiments approach using the Taguchi technique was employed to statistically analyze the wear behavior of the hybrid composites. The signal-to-noise ratio and analysis of variance were used to investigate the influence of the parameters on the wear rate.
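
    For a response such as wear rate that should be minimized, the usual Taguchi statistic is the "smaller is better" signal-to-noise ratio. A minimal sketch follows; the wear-rate replicates are hypothetical, not values from the study.

```python
import numpy as np

# "Smaller is better" signal-to-noise ratio, the usual choice when the response
# is a wear rate to be minimized: S/N = -10 * log10( mean(y^2) ).
# The wear-rate replicates below are hypothetical, not taken from the paper.
def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

wear_rates = [                       # mm^3/m, three replicates per trial
    [0.012, 0.014, 0.013],
    [0.021, 0.019, 0.020],
    [0.009, 0.010, 0.011],
]
for i, trial in enumerate(wear_rates, start=1):
    print(f"trial {i}: S/N = {sn_smaller_is_better(trial):.2f} dB")
```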

  9. Optimization of α-amylase production by Bacillus subtilis RSKK96: using the Taguchi experimental design approach.

    PubMed

    Uysal, Ersin; Akcan, Nurullah; Baysal, Zübeyde; Uyar, Fikret

    2011-01-01

    In this study, the Taguchi experimental design was applied to optimize the conditions for α-amylase production by Bacillus subtilis RSKK96, which was purchased from Refik Saydam Hifzissihha Industry (RSHM). Four factors, namely, carbon source, nitrogen source, amino acid, and fermentation time, each at four levels, were selected, and an orthogonal array layout of L16 (4^5) was performed. The model equation obtained was validated experimentally at maximum casein (1%), corn meal (1%), and glutamic acid (0.01%) concentrations with an incubation time of 72 h in the presence of 1% inoculum density. Point prediction of the design showed that a maximum α-amylase production of 503.26 U/mg was achieved under optimal experimental conditions. PMID:21229466
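
    The "point prediction" mentioned in the abstract is normally an additive-model estimate: the grand mean plus each factor's gain at its best level. The sketch below illustrates the calculation on an L9 layout with invented responses; the paper itself used an L16 (4^5) array.

```python
import numpy as np

# Additive "point prediction" step: the estimated response at the optimum
# combination is the grand mean plus each factor's (best level mean - grand
# mean) gain. Shown on an L9 layout with hypothetical activity data.
rows = np.arange(9)
design = np.column_stack([rows // 3, rows % 3,
                          (rows // 3 + rows % 3) % 3,
                          (rows // 3 + 2 * (rows % 3)) % 3])
response = np.array([310., 355., 340., 390., 420., 400., 300., 345., 330.])
grand_mean = response.mean()

prediction = grand_mean
for column in design.T:
    level_means = np.array([response[column == lvl].mean() for lvl in (0, 1, 2)])
    prediction += level_means.max() - grand_mean   # gain from choosing the best level

print(f"grand mean        : {grand_mean:.1f}")
print(f"predicted optimum : {prediction:.1f}")
# A confirmation run at the selected levels is then compared with this value.
```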

  10. Optimization of critical factors to enhance polyhydroxyalkanoates (PHA) synthesis by mixed culture using Taguchi design of experimental methodology.

    PubMed

    Venkata Mohan, S; Venkateswar Reddy, M

    2013-01-01

    Optimizing different factors is crucial for the enhancement of mixed culture bioplastics (polyhydroxyalkanoates (PHA)) production. Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence and specific function of eight important factors (iron, glucose concentration, VFA concentration, VFA composition, nitrogen concentration, phosphorous concentration, pH, and microenvironment) on bioplastics production. Factor variation at three levels (2^1 × 3^7) was considered with a symbolic orthogonal array experimental matrix [L18; 18 experimental trials]. All the factors were assigned three levels except iron concentration (two levels). Among all the factors, microenvironment influenced bioplastics production substantially (contributing 81%), followed by pH (11%) and glucose concentration (2.5%). Validation experiments were performed with the obtained optimum conditions, which resulted in improved PHA production. Good substrate degradation (as COD) of 68% was registered during PHA production. Dehydrogenase and phosphatase enzymatic activities were monitored during process operation. PMID:23201522

  11. Vertically aligned N-doped CNTs growth using Taguchi experimental design

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo M.; Fernandes, António J. S.; Ferro, Marta C.; Pinna, Nicola; Silva, Rui F.

    2015-07-01

    The Taguchi method with a parameter design L9 orthogonal array was implemented for optimizing the nitrogen incorporation in the structure of vertically aligned N-doped CNTs grown by thermal chemical vapour deposition (TCVD). The maximization of the ID/IG ratio of the Raman spectra was selected as the target value. As a result, the optimal deposition configuration was NH3 = 90 sccm, growth temperature = 825 °C and a catalyst pretreatment time of 2 min, the first parameter having the main effect on nitrogen incorporation. A confirmation experiment with these values was performed, confirming the predicted ID/IG ratio of 1.42. Scanning electron microscopy (SEM) characterization revealed a uniform, completely vertically aligned array of multiwalled CNTs which individually exhibit a bamboo-like structure consisting of periodically curved graphitic layers, as depicted by high resolution transmission electron microscopy (HRTEM). The X-ray photoelectron spectroscopy (XPS) results indicated 2.00 at.% N incorporation in the CNTs, with pyridine-like and graphite-like nitrogen as the predominant species.

  12. Optimization of experimental parameters based on the Taguchi robust design for the formation of zinc oxide nanocrystals by solvothermal method

    SciTech Connect

    Yiamsawas, Doungporn; Boonpavanitchakul, Kanittha; Kangwansupamonkon, Wiyong

    2011-05-15

    Research highlights: Taguchi robust design can be applied to study ZnO nanocrystal growth. Spherical and rod-like ZnO nanocrystals can be obtained from the solvothermal method. The [NaOH]/[Zn²⁺] ratio is the most important factor for the aspect ratio of the prepared ZnO. -- Abstract: Zinc oxide (ZnO) nanoparticles and nanorods were successfully synthesized by a solvothermal process. Taguchi robust design was applied to study the factors that promote ZnO nanocrystal growth. The factors studied were the molar concentration ratio of sodium hydroxide to zinc acetate, the amount of polymer template and the molecular weight of the polymer template. Transmission electron microscopy and X-ray diffraction were used to analyze the experimental results. The results show that the concentration ratio of sodium hydroxide to zinc acetate has the greatest effect on ZnO nanocrystal growth.

  13. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but are time, cost and labor intensive, and sometimes they are impossible to conduct due to the increasing number of factors and their levels to consider. The Taguchi design is a simple, economical and efficient statistical tool that only uses a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h-1) were conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted. Further work should test the applicability of the Taguchi design to a wider range of conditions.
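
    The subsetting idea described above can be sketched directly: from the 125 runs of a full factorial in three 5-level factors, a 25-run Latin-square (orthogonal) subset can be selected in which every pair of factors remains balanced. The level values follow the abstract; the particular selection rule shown is one standard choice, not necessarily the array used by the authors.

```python
import itertools

# Sketch of pulling a Taguchi-style subset out of a full-factorial dataset:
# for three 5-level factors (slope length, gradient, rainfall intensity) the
# 125 full-factorial runs collapse to a 25-run Latin-square subset in which
# every pair of factors is still balanced. Selection rule: C = (A + B) mod 5.
lengths     = [0.4, 0.8, 1.2, 1.6, 2.0]        # m
gradients   = [18, 27, 36, 48, 58]             # %
intensities = [48, 62.4, 102, 149, 170]        # mm/h

full = list(itertools.product(range(5), range(5), range(5)))   # 125 level combos
subset = [(a, b, c) for a, b, c in full if c == (a + b) % 5]    # 25 combos

print(len(full), "full-factorial runs ->", len(subset), "orthogonal-subset runs")
for a, b, c in subset[:5]:
    print(lengths[a], "m,", gradients[b], "%,", intensities[c], "mm/h")
```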

  14. Taguchi's experimental design for optimizing the production of novel thermostable polypeptide antibiotic from Geobacillus pallidus SAT4.

    PubMed

    Muhammad, Syed Aun; Ahmed, Safia; Ismail, Tariq; Hameed, Abdul

    2014-01-01

    Polypeptide antimicrobials used against topical infections are reported to be obtained from mesophilic bacterial species. A thermophilic Geobacillus pallidus SAT4 was isolated from the hot climate of the Sindh Desert, Pakistan, and found to be active against Micrococcus luteus ATCC 10240, Staphylococcus aureus ATCC 6538, Bacillus subtilis NCTC 10400 and Pseudomonas aeruginosa ATCC 49189. The current experiment was designed to optimize the production of a novel thermostable polypeptide by applying the Taguchi statistical approach to various conditions, including incubation time, temperature, pH, aeration rate, and nitrogen and carbon concentrations. The two most important factors affecting antibiotic production were incubation time and nitrogen concentration, and the two most important interactions were incubation time/pH and incubation time/nitrogen concentration. Activity was evaluated by a well diffusion assay. The antimicrobial produced was stable and active even at 55°C. Ammonium sulphate (AS) was used for antibiotic recovery, and the product was desalted by dialysis. The resulting protein was evaluated by SDS-PAGE. It was concluded that the novel thermostable protein produced by Geobacillus pallidus SAT4 is stable at higher temperatures and that its production can be improved statistically at optimum values of pH, incubation time and nitrogen concentration, the most important factors for antibiotic production. PMID:24374431

  15. Application of Taguchi L32 orthogonal array design to optimize copper biosorption by using Sphagnum moss.

    PubMed

    Ozdemir, Utkan; Ozbay, Bilge; Ozbay, Ismail; Veli, Sevil

    2014-09-01

    In this work, a Taguchi L32 experimental design was applied to optimize the biosorption of Cu²⁺ ions by an easily available biosorbent, Sphagnum moss. With this aim, batch biosorption tests were performed to realize the targeted experimental design with five factors (concentration, pH, biosorbent dosage, temperature and agitation time) at two different levels. Optimal experimental conditions were determined from the calculated signal-to-noise ratios. A "higher is better" approach was followed in calculating the signal-to-noise ratios, since the aim was to obtain high metal removal efficiencies. The impact ratios of the factors were determined by the model. Within the study, Cu²⁺ biosorption efficiencies were also predicted by using the Taguchi method. Results of the model showed that the experimental and predicted values were close to each other, demonstrating the success of the Taguchi approach. Furthermore, thermodynamic, isotherm and kinetic studies were performed to explain the biosorption mechanism. The calculated thermodynamic parameters were in good accordance with the results of the Taguchi model. PMID:25011119

  16. A Comparison of Central Composite Design and Taguchi Method for Optimizing Fenton Process

    PubMed Central

    Asghar, Anam; Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. [Dye]ini, Dye:Fe²⁺, H2O2:Fe²⁺, and pH were identified as control variables, while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and a face-centered CCD were used for the experimental design. A maximum of 99% decolorization and 80% COD removal efficiency was obtained under optimum conditions. R-squared values of 0.97 and 0.95 for the CCD and Taguchi method, respectively, indicate that both models are statistically significant and in good agreement with each other. Furthermore, Prob > F less than 0.0500 and the ANOVA results indicate a good fit of the selected model to the experimental results. Nevertheless, the possibility of ranking input variables in terms of their percent contribution to the response has made the Taguchi method a suitable approach for scrutinizing the operating parameters. In the present case, pH, with percent contributions of 87.62% and 66.2%, was ranked as the most significant contributing factor. This finding of the Taguchi method was also verified by 3D contour plots of the CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications. PMID:25258741
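
    The percent-contribution ranking that favors the Taguchi method here is computed from the ANOVA decomposition of the total sum of squares. A minimal sketch follows; the L9 layout is standard, but the response values are hypothetical stand-ins rather than the study's decolorization data.

```python
import numpy as np

# "Percent contribution" ranking used in Taguchi analysis: each factor's sum of
# squares (from its level means) is expressed as a share of the total sum of
# squares. The responses below are hypothetical stand-ins for the study's data.
rows = np.arange(9)
design = np.column_stack([rows // 3, rows % 3,
                          (rows // 3 + rows % 3) % 3,
                          (rows // 3 + 2 * (rows % 3)) % 3])
factors = ["[Dye]ini", "Dye:Fe2+", "H2O2:Fe2+", "pH"]
y = np.array([62., 70., 55., 80., 95., 74., 58., 88., 66.])

ss_total = np.sum((y - y.mean())**2)
for name, col in zip(factors, design.T):
    level_means = np.array([y[col == lvl].mean() for lvl in (0, 1, 2)])
    ss_factor = 3 * np.sum((level_means - y.mean())**2)   # 3 runs per level
    print(f"{name:10s} contribution: {100 * ss_factor / ss_total:5.1f} %")
# In a saturated L9 the factor contributions add up to essentially 100 %.
```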

  17. Application of Taguchi Philosophy for Optimization of Design Parameters in a Rectangular Enclosure with Triangular Fin Array

    NASA Astrophysics Data System (ADS)

    Dwivedi, Ankur; Das, Debasish

    2015-10-01

    In this study, an optimum parametric design yielding maximum heat transfer has been suggested using the Taguchi philosophy. This statistical approach has been applied to the results of an experimental parametric study conducted to investigate the influence of fin height (L), fin spacing (S) and Rayleigh number (Ra) on convection heat transfer from a triangular fin array within a vertically oriented rectangular enclosure. Taguchi's L9 (3^3) orthogonal array design has been adopted, with the influencing parameters at three different levels. The goal of this study is to reach maximum heat transfer (i.e., maximum Nusselt number). The dependence of the optimum fin spacing on fin height has also been reported. The results prove the suitability of the Taguchi design approach in this kind of study, and the predictions of the method are in very good agreement with the experimental results. This paper also compares the classical design approach with Taguchi's methodology for determining the optimum parametric design.

  18. A Taguchi study of the aeroelastic tailoring design process

    NASA Technical Reports Server (NTRS)

    Bohlmann, Jonathan D.; Scott, Robert C.

    1991-01-01

    A Taguchi study was performed to determine the important players in the aeroelastic tailoring design process and to find the best composition of the optimization's objective function. The Wing Aeroelastic Synthesis Procedure (TSO) was used to ascertain the effects that factors such as composite laminate constraints, roll effectiveness constraints, and built-in wing twist and camber have on the optimum, aeroelastically tailored wing skin design. The results show the Taguchi method to be a viable engineering tool for computational inquiries, and provide some valuable lessons about the practice of aeroelastic tailoring.

  19. Using Taguchi robust design method to develop an optimized synthesis procedure of nanocrystalline cancrinite

    NASA Astrophysics Data System (ADS)

    Azizi, Seyed Naser; Asemi, Neda; Samadi-Maybodi, Abdolrouf

    2012-09-01

    In this study, perlite was used as a low-cost source of Si and Al to synthesize nanocrystalline cancrinite zeolite. The synthesis of cancrinite zeolite from perlite by alkaline hydrothermal treatment under saturated steam pressure was investigated. A statistical Taguchi design of experiments was employed to evaluate the effects of process variables such as type of aging, aging time and hydrothermal crystallization time on the crystallinity of the synthesized zeolite. The optimum conditions for maximum crystallinity of nanocrystalline cancrinite, obtained from statistical analysis of the experimental results using the Taguchi design, were microwave-assisted aging, 60 min aging time and 6 h hydrothermal crystallization time. The synthesized samples were characterized by XRD, FT-IR and FE-SEM techniques. The results showed that microwave-assisted aging can shorten the crystallization time and reduce the crystal size to form nanocrystalline cancrinite zeolite.

  20. Robust Design of SAW Gas Sensors by Taguchi Dynamic Method.

    PubMed

    Tsai, Hsun-Heng; Wu, Der Ho; Chiang, Ting-Lung; Chen, Hsin Hua

    2009-01-01

    This paper adopts Taguchi's signal-to-noise ratio analysis to optimize the dynamic characteristics of a SAW gas sensor system whose output response is linearly related to the input signal. The goal of the present dynamic characteristics study is to increase the sensitivity of the measurement system while simultaneously reducing its variability. A time- and cost-efficient finite element analysis method is utilized to investigate the effects of the deposited mass upon the resonant frequency output of the SAW biosensor. The results show that the proposed methodology not only reduces the design cost but also improves the performance of the sensors. PMID:22573961
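
    For dynamic characteristics, where the output should track an input signal linearly through the origin, the usual statistic is the dynamic S/N ratio, 10*log10(beta^2/MSE), with beta the fitted sensitivity. A small sketch follows; the signal levels and responses are hypothetical, not the paper's finite element results.

```python
import numpy as np

# Dynamic signal-to-noise ratio for a system whose output should track an input
# signal linearly through the origin: fit y = beta * M, then
# S/N = 10 * log10(beta^2 / MSE). Readings below are hypothetical placeholders.
M = np.array([1.0, 2.0, 3.0, 4.0])                               # signal levels (e.g. deposited mass)
y = np.array([[2.1, 1.9], [4.2, 3.8], [6.3, 5.9], [8.1, 7.8]])   # 2 noise repeats per level

M_rep = np.repeat(M, y.shape[1])
y_flat = y.ravel()
beta = np.sum(M_rep * y_flat) / np.sum(M_rep**2)    # least-squares slope through origin
mse = np.mean((y_flat - beta * M_rep)**2)           # variability around the ideal line

sn_dynamic = 10.0 * np.log10(beta**2 / mse)
print(f"sensitivity beta = {beta:.3f}, dynamic S/N = {sn_dynamic:.2f} dB")
```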

  1. Fabrication and optimization of camptothecin loaded Eudragit S 100 nanoparticles by Taguchi L4 orthogonal array design

    PubMed Central

    Mahalingam, Manikandan; Krishnamoorthy, Kannan

    2015-01-01

    Introduction: The objective of this investigation was to design and optimize the experimental conditions for the fabrication of camptothecin (CPT) loaded Eudragit S 100 nanoparticles, and to understand the effect of various process parameters on the average particle size, particle size uniformity and surface area of the prepared polymeric nanoparticles using the Taguchi design. Materials and Methods: CPT loaded Eudragit S 100 nanoparticles were prepared by the nanoprecipitation method and characterized using a particle size analyzer. A Taguchi orthogonal array design was implemented to study the influence of seven independent variables on three dependent variables. Eight experimental trials involving the seven independent variables at higher and lower levels were generated by Design Expert. Results: The factorial design results showed that (a) except for β-cyclodextrin concentration, no parameter significantly influenced the average particle size (R1); (b) except for sonication duration and aqueous phase volume, all process parameters significantly influenced the particle size uniformity; (c) none of the process parameters significantly influenced the surface area. Conclusion: The R1, particle size uniformity and surface area of the prepared drug-loaded polymeric nanoparticles were found to be 120 nm, 0.237 and 55.7 m2/g, respectively, and the results correlated well with the data generated by the Taguchi design method. PMID:26258056

  2. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy savings and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) materials. To strengthen a UHSS material such as boron steel, it must undergo a hot stamping process with heating at a certain temperature for a certain time. In this paper, the Taguchi method is applied to determine the appropriate thickness, heating temperature and heating time to achieve the optimum strength of boron steel. The experiment is conducted using a flat, square hot stamping tool with a tensile dog-bone specimen as the blank product. The tensile strength and hardness are then measured as responses. The results showed that a lower thickness and higher heating temperature and heating time give higher strength and hardness in the final product. In conclusion, the boron steel blanks are able to achieve up to 1200 MPa tensile strength and a hardness of 650 HV.

  3. Formulation Development and Evaluation of Hybrid Nanocarrier for Cancer Therapy: Taguchi Orthogonal Array Based Design

    PubMed Central

    Tekade, Rakesh K.; Chougule, Mahavir B.

    2013-01-01

    Taguchi orthogonal array design is a statistical approach that helps to overcome limitations associated with time consuming full factorial experimental design. In this study, the Taguchi orthogonal array design was applied to establish the optimum conditions for bovine serum albumin (BSA) nanocarrier (ANC) preparation. Taguchi method with L9 type of robust orthogonal array design was adopted to optimize the experimental conditions. Three key dependent factors namely, BSA concentration (% w/v), volume of BSA solution to total ethanol ratio (v : v), and concentration of diluted ethanolic aqueous solution (% v/v), were studied at three levels 3%, 4%, and 5% w/v; 1 : 0.75, 1 : 0.90, and 1 : 1.05 v/v; 40%, 70%, and 100% v/v, respectively. The ethanolic aqueous solution was used to impart less harsh condition for desolvation and attain controlled nanoparticle formation. The interaction plot studies inferred the ethanolic aqueous solution concentration to be the most influential parameter that affects the particle size of nanoformulation. This method (BSA, 4% w/v; volume of BSA solution to total ethanol ratio, 1 : 0.90 v/v; concentration of diluted ethanolic solution, 70% v/v) was able to successfully develop Gemcitabine (G) loaded modified albumin nanocarrier (M-ANC-G) of size 25.07 ± 2.81 nm (ζ = −23.03 ± 1.015 mV) as against to 78.01 ± 4.99 nm (ζ = −24.88 ± 1.37 mV) using conventional method albumin nanocarrier (C-ANC-G). Hybrid nanocarriers were generated by chitosan layering (solvent gelation technique) of respective ANC to form C-HNC-G and M-HNC-G of sizes 125.29 ± 5.62 nm (ζ = 12.01 ± 0.51 mV) and 46.28 ± 2.21 nm (ζ = 15.05 ± 0.39 mV), respectively. Zeta potential, entrapment, in vitro release, and pH-based stability studies were investigated and influence of formulation parameters are discussed. Cell-line-based cytotoxicity assay (A549 and H460 cells) and cell internalization assay (H460 cell line) were performed to assess the influence on the bioperformance of these nanoformulations. PMID:24106715

  4. Taguchi statistical design and analysis of cleaning methods for spacecraft materials

    NASA Technical Reports Server (NTRS)

    Lin, Y.; Chung, S.; Kazarians, G. A.; Blosiu, J. O.; Beaudet, R. A.; Quigley, M. S.; Kern, R. G.

    2003-01-01

    In this study, we have extensively tested various cleaning protocols. The variant parameters included the type and concentration of solvent, type of wipe, pretreatment conditions, and various rinsing systems. Taguchi statistical method was used to design and evaluate various cleaning conditions on ten common spacecraft materials.

  5. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations to existing cost of doing business in space indicate that to establish human presence on the Moon and Mars with the Space Exploration Initiative (SEI) will require resources, felt by many, to be more than the national budget can afford. In order for SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high quality space systems at low cost. In recognizing the need to reduce cost and improve quality and productivity, Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), is one of the most important statistical tools of TQM for designing high quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. Taguchi methods have been used successfully in Japan and the United States in designing reliable, high quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, describe the current state of applications and its role in identifying cost sensitive design parameters.

  6. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Technical Reports Server (NTRS)

    Olds, John R.; Walberg, Gerald D.

    1993-01-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  7. Applying Taguchi Methods To Brazing Of Rocket-Nozzle Tubes

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Bellows, William J.; Deily, David C.; Brennan, Alex; Somerville, John G.

    1995-01-01

    Report describes experimental study in which Taguchi Methods applied with view toward improving brazing of coolant tubes in nozzle of main engine of space shuttle. Dr. Taguchi's parameter design technique used to define proposed modifications of brazing process reducing manufacturing time and cost by reducing number of furnace brazing cycles and number of tube-gap inspections needed to achieve desired small gaps between tubes.

  8. Hydrometallurgical Extraction of Vanadium from Mechanically Milled Oil-Fired Fly Ash: Analytical Process Optimization by Using Taguchi Design Method

    NASA Astrophysics Data System (ADS)

    Parvizi, Reza; Khaki, Jalil Vahdati; Moayed, Mohammad Hadi; Ardani, Mohammad Rezaei

    2012-12-01

    In this study, the Taguchi design method was employed to determine the optimum experimental parameters for the extraction of vanadium by NaOH leaching of oil-fired fly ash. Prior to the designed experiments, the raw precipitates were mechanically milled using a high-energy planetary ball mill. The experimental parameters investigated were as follows: mechanical milling (MM) times (2 and 5 hours), NaOH (1 and 2 molar concentration) as reaction solution (RS), powder-to-solution (P/S) ratios (100/400 and 100/600 mg/mL), temperature (T) of the reaction system (303 K and 333 K [30 °C and 60 °C]), stirring times (ST) of the reaction media (4 and 12 hours), stirring speed (SS) adjusted to 400 and 600 rpm, and rinsing times (RT) of the remaining filtrates (1 and 3 hours). Statistical analysis of the signal-to-noise ratio followed by analysis of variance was performed in order to estimate the optimum levels and their relative contributions. Data analysis was carried out using an L8 orthogonal array accommodating seven parameters, each at two levels. The optimum conditions were MM1 (3 hours), RS2 (2 molar NaOH), P/S2 (100/600 mg/mL), T2 (333 K [60 °C]), ST2 (12 hours), SS1 (400 rpm), and RT1 (1 hour). Finally, from environmental and economic points of view, the process is faster and better organized when this analytical design method is employed.
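
    The L8 (2^7) array referred to above can be generated from three base two-level columns and their XOR interactions, which is how seven factors fit into eight runs. The sketch below is a generic construction; the factor labels follow the abstract, but the column-to-factor assignment is an assumption for illustration.

```python
import itertools
import numpy as np

# Generic L8 (2^7) orthogonal array: seven two-level columns built from three
# base columns (a, b, c) and their XOR interactions, giving 8 runs for 7
# factors. Factor labels follow the abstract; the assignment is illustrative.
factors = ["MM", "RS", "P/S", "T", "ST", "SS", "RT"]
runs = np.array(list(itertools.product([0, 1], repeat=3)))       # a, b, c
a, b, c = runs[:, 0], runs[:, 1], runs[:, 2]
L8 = np.column_stack([a, b, a ^ b, c, a ^ c, b ^ c, a ^ b ^ c])  # levels 0/1

print("columns:", factors)
print(L8)
# Each column is balanced (four 0s, four 1s) and every pair of columns contains
# all four level combinations equally often, which is what lets seven factors
# be screened in only eight leaching experiments.
```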

  9. Microencapsulation of (deoxythymidine)20-DOTAP complexes in stealth liposomes optimized by Taguchi design.

    PubMed

    Tavakoli, Shirin; Tamaddon, Ali Mohammad; Golkar, Nasim; Samani, Soliman Mohammadi

    2015-03-01

    Stealth liposomes encapsulating oligonucleotides are considered promising non-viral gene delivery carriers; however, general preparation procedures are not able to encapsulate nucleic acids (NAs) efficiently. In this study, lyophobic complexes of a deoxythymidine 20-mer oligonucleotide (dT20) and DOTAP were used instead of free dT20 for the nano-encapsulation process by the reverse-phase evaporation method. Given the various factors that can potentially affect the liposome characteristics, a Taguchi design was applied to analyze the simultaneous effects of PEG-lipid (%), dT20/total lipid molar ratio, cholesterol (Chol%) and organic-to-aqueous phase ratio (o/w) at three levels. The response variables, hydrodynamic diameter, loading efficiency (LE%) and loading capacity (LC%), were studied by dynamic light scattering and ethidium bromide exclusion assay, respectively. The optimum condition, defined by minimum particle size as well as high LE% and LC%, was obtained at 5% PEG-lipid, dT20/total lipid of 7, 20% Chol and o/w of 3, with an average size of 84 nm, LE% = 83.4% and LC% = 11.6%. Moreover, stability assessments in the presence of heparin sulfate revealed noticeable resistance to premature release of the NA, unlike DOTAP/dT20 lipoplexes. Transmission electron microscopy confirmed the formation of discrete, circular vesicles encapsulating dT20. PMID:24960449

  10. Mathematical modeling and analysis of EDM process parameters based on Taguchi design of experiments

    NASA Astrophysics Data System (ADS)

    Laxman, J.; Raj, K. Guru

    2015-12-01

    Electro discharge machining is a process used for machining very hard metals and deep, complex shapes by metal erosion in all types of electrically conductive materials. The metal is removed through the action of an electric discharge of short duration and high current density between the tool and the workpiece. The eroded metal on the surfaces of both the workpiece and the tool is flushed away by the dielectric fluid. The objective of this work is to develop a mathematical model for an electro discharge machining process which provides the necessary equations to predict the metal removal rate, electrode wear rate and surface roughness. Regression analysis is used to investigate the relationship between the various process parameters and the responses. The input parameters are peak current, pulse-on time, pulse-off time and tool lift time, and the metal removal rate, electrode wear rate and surface roughness are the responses. Experiments were conducted on a titanium superalloy based on the Taguchi design of experiments, i.e., an L27 orthogonal array.

  11. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-04-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed. PMID:25373790
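
    To make the trade-off between these designs concrete, the sketch below compares approximate run counts for screening k two-level factors using textbook formulas (full factorial, face-centered central composite, Plackett-Burman). These counts are generic, not figures from the review.

```python
import math

# Rough run-count comparison for screening k two-level factors, using textbook
# formulas: full factorial 2^k; central composite 2^k + 2k + n_center;
# Plackett-Burman the smallest multiple of 4 not less than k + 1.
def full_factorial(k):
    return 2 ** k

def central_composite(k, n_center=3):
    return 2 ** k + 2 * k + n_center

def plackett_burman(k):
    return 4 * math.ceil((k + 1) / 4)

for k in (3, 5, 7):
    print(f"k={k}: full={full_factorial(k):4d}  "
          f"CCD={central_composite(k):4d}  PB={plackett_burman(k):3d}")
```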

  12. An Exploratory Exercise in Taguchi Analysis of Design Parameters: Application to a Shuttle-to-space Station Automated Approach Control System

    NASA Technical Reports Server (NTRS)

    Deal, Don E.

    1991-01-01

    The chief goals of the summer project have been twofold - first, for my host group and myself to learn as much of the working details of Taguchi analysis as possible in the time allotted, and, secondly, to apply the methodology to a design problem with the intention of establishing a preliminary set of near-optimal (in the sense of producing a desired response) design parameter values from among a large number of candidate factor combinations. The selected problem is concerned with determining design factor settings for an automated approach program which is to have the capability of guiding the Shuttle into the docking port of the Space Station under controlled conditions so as to meet and/or optimize certain target criteria. The candidate design parameters under study were glide path (i.e., approach) angle, path intercept and approach gains, and minimum impulse bit mode (a parameter which defines how Shuttle jets shall be fired). Several performance criteria were of concern: terminal relative velocity at the instant the two spacecraft are mated; docking offset; number of Shuttle jet firings in certain specified directions (of interest due to possible plume impingement on the Station's solar arrays), and total RCS (a measure of the energy expended in performing the approach/docking maneuver). In the material discussed here, we have focused on a single performance criterion, total RCS. An analysis of the possibility of employing a multiobjective function composed of a weighted sum of the various individual criteria has been undertaken, but is, at this writing, incomplete. Results from the Taguchi statistical analysis indicate that only three of the original four posited factors are significant in affecting RCS response. A comparison of model simulation output (via Monte Carlo) with predictions based on estimated factor effects inferred through the Taguchi experiment array data suggested acceptable or close agreement between the two except at the predicted optimum point, where a difference outside a rule-of-thumb bound was observed. We have concluded that there is most likely an interaction effect not provided for in the original orthogonal array selected as the basis for our experimental design. However, we feel that the data indicates that this interaction is a mild one and that inclusion of its effect will not alter the location of the optimum.

  13. Application of Taguchi method in optimization of cervical ring cage.

    PubMed

    Yang, Kai; Teo, Ee-Chon; Fuss, Franz Konstantin

    2007-01-01

    The Taguchi method is a statistical approach that overcomes the limitations of factorial and fractional factorial experiments by simplifying and standardizing the fractional factorial design. The objective of the current study is to illustrate the procedures and strengths of the Taguchi method in biomechanical analysis using a case study of a cervical ring cage optimization. A three-dimensional finite element (FE) model of C5-C6 with a generic cervical ring cage inserted was built. The Taguchi method was applied to optimize the material property and dimensions of the cervical ring cage for producing the lowest stress on the endplate, thereby reducing the risk of cage subsidence, in the following steps: (1) establishment of the objective function; (2) determination of controllable factors and their levels; (3) identification of uncontrollable factors and test conditions; (4) design of the Taguchi crossed array layout; (5) execution of experiments according to the trial conditions; (6) analysis of results; (7) determination of the optimal run; (8) confirmation of the optimum run. The results showed that a cage with larger width, depth and wall thickness produces lower von Mises stress under various conditions. The contribution of the implant material was found to be trivial. The current case study illustrates that the strengths of the Taguchi method lie in (1) consistency in experimental design and analysis; (2) reduction of the time and cost of experiments; (3) robustness of performance in the presence of noise factors. The Taguchi method has great potential for application in the biomechanical field when the factors of interest take discrete levels. PMID:17822708
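
    Step (4), the crossed array layout, pairs an inner array of controllable factors with an outer array of noise conditions and summarizes each inner run by an S/N ratio over the noise runs. The sketch below illustrates the bookkeeping with a deterministic placeholder standing in for the finite element stress evaluation; all numbers are hypothetical.

```python
import itertools
import numpy as np

# Crossed-array bookkeeping: an L9 inner array of control factors (e.g. cage
# width, depth, wall thickness, material) is crossed with a small outer array
# of noise conditions, and a smaller-is-better S/N ratio is computed per inner
# run from its responses across all noise runs. The stress function is a
# hypothetical placeholder, not the paper's FE model.
rows = np.arange(9)
inner = np.column_stack([rows // 3, rows % 3,
                         (rows // 3 + rows % 3) % 3,
                         (rows // 3 + 2 * (rows % 3)) % 3])   # L9 control array
outer = list(itertools.product([0, 1], repeat=2))             # 4 noise conditions

def endplate_stress(control_levels, noise_levels):
    # deterministic placeholder standing in for the finite element simulation
    return 12.0 - 0.8 * control_levels.sum() + 1.5 * sum(noise_levels)

sn = []
for run in inner:
    y = np.array([endplate_stress(run, nz) for nz in outer])
    sn.append(-10.0 * np.log10(np.mean(y**2)))     # smaller-is-better (stress)
print("S/N per inner run:", np.round(sn, 2))
```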

  14. Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs

    NASA Astrophysics Data System (ADS)

    Carrasco, Hector R.

    1992-12-01

    The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, providing support on the development of the test matrices for the plume experiments in the Plume Model Investigation Team Project, and aiding in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued with the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 Orthogonal Arrays. Although the data was evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost Project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, and an airlock module, as well as consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.

  15. Use of Taguchi design of experiments to optimize and increase robustness of preliminary designs

    NASA Technical Reports Server (NTRS)

    Carrasco, Hector R.

    1992-01-01

    The research performed this summer includes the completion of work begun last summer in support of the Air Launched Personnel Launch System parametric study, providing support on the development of the test matrices for the plume experiments in the Plume Model Investigation Team Project, and aiding in the conceptual design of a lunar habitat. After the conclusion of last year's Summer Program, the Systems Definition Branch continued with the Air Launched Personnel Launch System (ALPLS) study by running three experiments defined by L27 Orthogonal Arrays. Although the data was evaluated during the academic year, the analysis of variance and the final project review were completed this summer. The Plume Model Investigation Team (PLUMMIT) was formed by the Engineering Directorate to develop a consensus position on plume impingement loads and to validate plume flowfield models. In order to obtain a large number of individual correlated data sets for model validation, a series of plume experiments was planned. A preliminary 'full factorial' test matrix indicated that 73,024 jet firings would be necessary to obtain all of the information requested. As this was approximately 100 times more firings than the scheduled use of Vacuum Chamber A would permit, considerable effort was needed to reduce the test matrix and optimize it with respect to the specific objectives of the program. Part of the First Lunar Outpost Project deals with the Lunar Habitat. Requirements for the habitat include radiation protection, a safe haven for occasional solar flare storms, and an airlock module, as well as consumables to support 34 extravehicular activities during a 45-day mission. The objective of the proposed work was to collaborate with the Habitat Team on the development and reusability of the Logistics Modules.

  16. Estudio numerico y experimental del proceso de soldeo MIG sobre la aleacion 6063--T5 utilizando el metodo de Taguchi

    NASA Astrophysics Data System (ADS)

    Meseguer Valdenebro, Jose Luis

    Electric arc welding processes represent one of the most widely used techniques in the manufacture of mechanical components in modern industry. These processes have been adapted to current needs, becoming a flexible and versatile way to manufacture. Numerical results for the welding process are validated experimentally. The three numerical methods most commonly used today are the finite difference method, the finite element method and the finite volume method. The most widely used numerical method for modeling welded joints is the finite element method, because it is well adapted to the geometric and boundary conditions and because a variety of commercial programs use the finite element method as their calculation basis. This thesis presents an experimental study of a welded joint produced by the MIG welding process on aluminum alloy 6063-T5. The numerical model is validated experimentally by applying the finite element method through the calculation program ANSYS. The experimental results reported are the cooling curves, the critical cooling time t4/3, the weld bead geometry, the microhardness obtained in the welded joint, the heat affected zone and base metal, the process dilution, and the critical areas intersected between the cooling curves and the TTP curve. The numerical results are the thermal cycle curves, which represent both the heating up to the maximum temperature and the subsequent cooling; the critical cooling time t4/3 and the thermal efficiency of the process are calculated, and the weld bead geometry obtained experimentally is represented. The heat affected zone is obtained by differentiating the zones found at different temperatures, together with the critical areas intersected between the cooling curves and the TTP curve. To conclude this doctoral thesis, an optimization of the welding parameters has been conducted by means of the Taguchi method in order to obtain an improvement in the mechanical properties of the aluminum metal joints.

  17. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design

    PubMed Central

    Emami, J.; Mohiti, H.; Hamishehkar, H.; Varshosaz, J.

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery, with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous phase volumes, and sonication time, was assessed on the particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A Taguchi design with 12 formulations, along with a Box-Behnken design with 17 formulations, was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using a scanning electron microscope. Based on the optimization made by Design Expert 7® software, a formulation made of glycerol monostearate, 1.2% polyvinyl alcohol (PVA), a lipid/drug weight ratio of 10 and a sonication time of 90 s was selected. The particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52 %, 5.8 ± 0.3 %, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with a low error percentage, the modeling and optimization were justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. Our results provide fundamental data for the application of SLNs in a pulmonary delivery system for budesonide. PMID:26430454

  18. Workbook for Taguchi Methods for Product Quality Improvement.

    ERIC Educational Resources Information Center

    Zarghami, Ali; Benbow, Don

    Taguchi methods are product quality improvement methods that analyze the major contributors to variation and how they can be controlled to reduce variability and poor performance. In this approach, knowledge is used to shorten testing. Taguchi methods are concerned with process improvement rather than with process measurement. This manual is designed to be used…

  19. Taguchi methods in electronics: A case study

    NASA Technical Reports Server (NTRS)

    Kissel, R.

    1992-01-01

    Total Quality Management (TQM) is becoming more important as a way to improve productivity. One of the technical aspects of TQM is a system called the Taguchi method. This is an optimization method that, with a few precautions, can reduce test effort by an order of magnitude over conventional techniques. The Taguchi method is specifically designed to minimize a product's sensitivity to uncontrollable system disturbances such as aging, temperature, voltage variations, etc., by simultaneously varying both design and disturbance parameters. The analysis produces an optimum set of design parameters. A 3-day class on the Taguchi method was held at the Marshall Space Flight Center (MSFC) in May 1991. A project was needed as a follow-up after the class was over, and the motor controller was selected at that time. Exactly how to proceed was the subject of discussion for some months. It was not clear exactly what to measure, and design kept getting mixed with optimization. There was even some discussion about why the Taguchi method should be used at all.

  20. Design of a robust fuzzy controller for the arc stability of CO(2) welding process using the Taguchi method.

    PubMed

    Kim, Dongcheol; Rhee, Sehun

    2002-01-01

    CO(2) welding is a complex process. Weld quality depends on arc stability and on minimizing the effects of disturbances or changes in the operating conditions that commonly occur during the welding process. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used to stabilize the arc during CO(2) welding. The input variable of the controller was the Mita index, which quantitatively estimates the arc stability that is influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without using a mathematical model of the controlled process. As a solution, the Taguchi method was used to determine the optimal control parameter settings of the fuzzy controller, making the control performance robust and insensitive to changes in the operating conditions. PMID:18238115

  1. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.
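    As a brief, self-contained illustration of the machinery this abstract refers to (not the LifeSat data themselves), the Python sketch below lays out the standard L9(3^4) orthogonal array and averages a signal-to-noise ratio by factor level, which is the basic ranking step in a Taguchi analysis. The larger-the-better S/N form and the demo responses are assumptions made purely for illustration.

        import numpy as np

        # Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels (coded 1-3).
        L9 = np.array([
            [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
            [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
            [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
        ])

        def sn_larger_is_better(y):
            """Larger-the-better S/N ratio (dB) for one run's replicate responses."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        def factor_effects(responses):
            """Mean S/N per factor level; `responses` has shape (9, n_replicates)."""
            sn = np.array([sn_larger_is_better(run) for run in responses])
            return {f: {lvl: sn[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)}
                    for f in range(L9.shape[1])}

        # Hypothetical replicate responses (two per run), purely for illustration.
        rng = np.random.default_rng(0)
        print(factor_effects(rng.uniform(5.0, 10.0, size=(9, 2))))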

  2. Evaluation of Listeria monocytogenes survival in ice cream mixes flavored with herbal tea using Taguchi method.

    PubMed

    Ozturk, Ismet; Golec, Adem; Karaman, Safa; Sagdic, Osman; Kayacier, Ahmed

    2010-10-01

    In this study, the effects of incorporating some herbal teas at different concentrations into an ice cream mix on the population of Listeria monocytogenes were studied using the Taguchi method. The ice cream mix samples flavored with herbal teas were prepared using green tea and sage at different concentrations. Afterward, a fresh culture of L. monocytogenes was inoculated into the samples, and L. monocytogenes was counted at different storage periods. The Taguchi method was used for the experimental design and analysis. In addition, some physicochemical properties of the samples were examined. The results suggested that incorporating herbal tea into the ice cream mix had some, although little, effect on the population of L. monocytogenes. Additionally, the use of herbal tea caused a decrease in the pH values of the samples and significant changes in the color values. PMID:20590424

  3. Multi-Response Optimization of Carbidic Austempered Ductile Iron Production Parameters using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Dhanapal, P.; Mohamed Nazirudeen, S. S.; Chandrasekar, A.

    2012-04-01

    Carbidic Austempered Ductile Iron (CADI) is a family of ductile iron containing wear-resistant alloy carbides in an ausferrite matrix. CADI is manufactured by selecting and characterizing the proper material composition through the melting route. In an effort to arrive at the optimal production parameters for multiple responses, the Taguchi method and grey relational analysis have been applied. To analyze the effect of the production parameters on the mechanical properties, the signal-to-noise ratio and grey relational grade have been calculated based on the design of experiments. An analysis of variance was carried out to find the contribution of each factor to the mechanical properties and its significance. The analytical results of the Taguchi method were compared with the experimental values, and the comparison shows that the two are practically identical.

  4. Optimisation of Lime-Soda process parameters for reduction of hardness in aqua-hatchery practices using Taguchi methods.

    PubMed

    Yavalkar, S P; Bhole, A G; Babu, P V Vijay; Prakash, Chandra

    2012-04-01

    This paper presents the optimisation of Lime-Soda process parameters for the reduction of hardness in aqua-hatchery practices in the context of M. rosenbergii. Fresh water used in the development of fisheries needs to be of suitable quality, and the lack of desirable quality in the available fresh water is generally the confronting constraint. On the Indian subcontinent, groundwater is the only source of raw water, having varying degrees of hardness, and is thus unsuitable for fresh water prawn hatchery practices (M. rosenbergii). In order to make use of hard water in the context of aqua-hatchery, the Lime-Soda process has been recommended. The efficacy of the various process parameters, such as lime, soda ash and detention time, on the reduction of hardness needs to be examined. This paper proposes to determine the parameter settings for the CIFE well water, which is quite hard, by using the Taguchi experimental design method. Taguchi orthogonal arrays, the signal-to-noise ratio and analysis of variance (ANOVA) have been applied to determine the dosages and to analyse their effect on hardness reduction. The tests carried out with optimal levels of the Lime-Soda process parameters confirmed the efficacy of the Taguchi optimisation method. Emphasis has been placed on optimisation of the chemical doses required to reduce the total hardness using the Taguchi method and ANOVA, to suit the available raw water quality for aqua-hatchery practices, especially for the fresh water prawn M. rosenbergii. PMID:24749379

  5. Experimental and Quasi-Experimental Design.

    ERIC Educational Resources Information Center

    Cottrell, Edward B.

    With an emphasis on the problems of control of extraneous variables and threats to internal and external validity, the arrangement or design of experiments is discussed. The purpose of experimentation in an educational institution, and the principles governing true experimentation (randomization, replication, and control) are presented, as are…

  6. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  7. Application of the nonlinear, double-dynamic Taguchi method to the precision positioning device using combined piezo-VCM actuator.

    PubMed

    Liu, Yung-Tien; Fung, Rong-Fong; Wang, Chun-Chao

    2007-02-01

    In this research, the nonlinear, double-dynamic Taguchi method was used as the design and analysis method for a high-precision positioning device using a combined piezo-voice-coil motor (VCM) actuator. An experimental investigation into the effects of two input signals and three control factors was carried out to determine the optimum parametric configuration of the positioning device. The double-dynamic Taguchi method, which permits optimization of several control factors concurrently, is particularly suitable for optimizing the performance of a positioning device with multiple actuators. In this study, matrix experiments were conducted with L9(3^4) orthogonal arrays (OAs). The two most critical processes for the optimization of the positioning device are the identification of the nonlinear ideal function and the combination of the double-dynamic signal factors for the ideal function's response. The driving voltage of the VCM and the waveform amplitude of the PZT actuator are combined into a single quality characteristic to evaluate the positioning response. The application of the double-dynamic Taguchi method, with the dynamic signal-to-noise ratio (SNR) and L9(3^4) OAs, reduced the number of necessary experiments. Analysis of variance (ANOVA) was applied to set the optimum parameters for the high-precision positioning process. PMID:17328322

  8. Taguchi Optimization of Pulsed Current GTA Welding Parameters for Improved Corrosion Resistance of 5083 Aluminum Welds

    NASA Astrophysics Data System (ADS)

    Rastkerdar, E.; Shamanian, M.; Saatchi, A.

    2013-04-01

    In this study, the Taguchi method was used as a design of experiments (DOE) technique to optimize the pulsed current gas tungsten arc welding (GTAW) parameters for improved pitting corrosion resistance of AA5083-H18 aluminum alloy welds. An L9 (3^4) orthogonal array of the Taguchi design was used, which involves nine experiments for four parameters at three levels each: peak current (P), base current (B), percent pulse-on time (T), and pulse frequency (F). Pitting corrosion resistance in 3.5 wt.% NaCl solution was evaluated by anodic polarization tests at room temperature and by calculating the width of the passive region (ΔEpit). Analysis of variance (ANOVA) was performed on the measured data and the S/N (signal-to-noise) ratios. The "bigger is better" criterion was selected as the quality characteristic (QC). The optimum conditions were found to be 170 A, 85 A, 40%, and 6 Hz for the P, B, T, and F factors, respectively. The study showed that the percent pulse-on time has the highest influence on the pitting corrosion resistance (50.48%), followed by pulse frequency (28.62%), peak current (11.05%) and base current (9.86%). The range of optimum ΔEpit at the optimum conditions, with a confidence level of 90%, was predicted to be between 174.81 and 177.74 mV (SCE). Under the optimum conditions, a confirmation test was carried out, and the experimental ΔEpit value of 176 mV (SCE) was in agreement with the value predicted from the Taguchi model. In this regard, the model can be effectively used to predict the ΔEpit of pulsed current gas tungsten arc welded joints.
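    For reference, the "bigger is better" signal-to-noise ratio used for a response such as the passive-region width is conventionally written (in dB) as

        S/N = -10 \log_{10}\left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^{2}} \right)

    where the y_i are the n repeated measurements of the response for a given run; the factor levels with the highest mean S/N are taken as optimal. This is the textbook form of the criterion, quoted here for clarity rather than taken from the paper itself.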

  9. The use of Taguchi technique to optimize the compression moulding cycle to process acetabular cup components.

    PubMed

    Fonseca, A; Inácio, N; Kanagaraj, S; Oliveira, M S A; Simões, J A O

    2011-06-01

    The Taguchi technique is a powerful method for solving engineering problems in order to improve the performance of a process and to enhance productivity. The design of the experiment is planned so as to find the best parameters and obtain better experimental results with as few experiments as possible. In this study, the Taguchi technique was applied to optimize the compression moulding cycle for processing the acetabular cup prototype. For the design of the experiments, three main factors that directly influence the quality of the final product were identified: processing temperature, pressure and compaction time. For each factor three levels were considered and an L9 orthogonal array was associated. With the L9 orthogonal array, a total of 9 trial experiments were performed and the optimum parameters were identified. An experimental test was performed in order to validate the conditions found. The optimized conditions were: a processing temperature of 160 degrees C, a processing pressure of 1000 psi and a compaction time of 90 s. With these optimized parameters, acetabular cup prototypes were processed from nanocomposites of ultra-high molecular weight polyethylene (UHMWPE) reinforced with different volume fractions of carbon nanotubes (CNTs) in the range of 0.2 to 2.0 vol.%. PMID:21770185

  10. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    PubMed

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

    Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chance of thermal invasion of the bone, which can cause thermal osteonecrosis, resulting in increased healing time or reduced stability and strength of the fixation. Therefore, drilling of bone with minimum temperature rise is a major challenge in orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and the Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments were conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of the individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determining the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process. PMID:25500858

  11. Multi-response analysis in the material characterisation of electrospun poly (lactic acid)/halloysite nanotube composite fibres based on Taguchi design of experiments: fibre diameter, non-intercalation and nucleation effects

    NASA Astrophysics Data System (ADS)

    Dong, Yu; Bickford, Thomas; Haroosh, Hazim J.; Lau, Kin-Tak; Takagi, Hitoshi

    2013-09-01

    Poly(lactic acid) (PLA)/halloysite nanotube (HNT) composite fibres were prepared using a simple and versatile electrospinning technique. A systematic approach via Taguchi design of experiments (DoE) was implemented to investigate the factorial effects of applied voltage, solution feed rate, collector distance and HNT concentration on the fibre diameter, HNT non-intercalation and nucleation effects. The HNT intercalation level, composite fibre morphology, the associated fibre diameter and thermal properties were evaluated by means of X-ray diffraction (XRD) analysis, scanning electron microscopy (SEM), image analysis and differential scanning calorimetry (DSC), respectively. The HNT non-intercalation phenomenon is manifested by the minimal shift of the XRD peaks for all electrospun PLA/HNT composite fibres. The smaller-fibre-diameter characteristic was found to be associated, in order of importance, with the solution feed rate, collector distance and applied voltage. The glass transition temperature (Tg) and melting temperature (Tm) are not strongly affected by varying the material and electrospinning parameters. However, as an indicator of the nucleation effect, the crystallisation temperature (Tc) of the PLA/HNT composite fibres is predominantly impacted by the HNT concentration and applied voltage. The nucleating-agent role of HNTs is confirmed by their ability to accelerate the cold crystallisation of the composite fibres when embedded in them. The Taguchi DoE method has been found to be an effective approach for statistically optimising the critical electrospinning parameters in order to tailor the resulting physical features and thermal properties of PLA/HNT composite fibres.

  12. Improving dimensional accuracy of SLS processed part using Taguchi method

    NASA Astrophysics Data System (ADS)

    Cheng, Rong; Wu, Xiaoyu; Zheng, Jianping

    2011-05-01

    This paper presents experimental investigations of the influence of important process parameters (laser power, scan speed, layer thickness and hatching space), along with their interactions, on the dimensional accuracy of a Selective Laser Sintering (SLS) processed pre-coated sand mold. It is observed that the dimensional error is dominant along the length and width directions of the built mold. The optimum parameter settings to minimize the percentage change in length and width of a standard test specimen have been found using Taguchi's parameter design, and analysis of variance (ANOVA) is used to understand the significance of the process variables affecting the dimensional accuracy. Scan speed and hatching space are found to be the most significant process variables influencing the dimensional accuracy in length and width, whereas laser power and layer thickness have less influence. The optimum processing parameters obtained in this paper are: laser power 11 W, scan speed 1200 mm/s, layer thickness 0.5 mm and hatching space 0.25 mm. It has been shown that, on average, the dimensional accuracy under this processing parameter combination could be improved by up to approximately 25% compared with other processing parameter combinations.

  14. Modified Artificial Diet for Rearing of Tobacco Budworm, Helicoverpa armigera, using the Taguchi Method and Derringer's Desirability Function

    PubMed Central

    Assemi, H.; Rezapanah, M.; Vafaei-Shoushtari, R.

    2012-01-01

    With the aim of improving the mass rearing feasibility of the tobacco budworm, Helicoverpa armigera Hübner (Lepidoptera: Noctuidae), a design of experiments methodology using a Taguchi orthogonal array was applied. To do so, the effect of 16 factors, comprising artificial diet ingredients (bean, wheat germ powder, Nipagin, ascorbic acid, formaldehyde, oil, agar, distilled water, ascorbate, yeast, chloramphenicol, benomyl and penicillin) together with temperature, humidity and container size, on some biological characteristics of H. armigera was evaluated. The 16 selected factors were considered at two levels (32 experiments) in the experimental design. Among the selected factors, penicillin, container size, formaldehyde, chloramphenicol, wheat germ powder and agar showed a significant effect on the mass rearing performance. Derringer's desirability function was used for the simultaneous optimization of the mass rearing of the tobacco budworm, H. armigera, on a modified artificial diet. The optimum operating conditions obtained by Derringer's desirability function and the Taguchi methodology decreased the larval period from 19 to 15.5 days (18.42% improvement), decreased the pupal period from 12.29 to 11 days (10.49% improvement), increased the longevity of adults from 14.51 to 21 days (44.72% improvement), increased the number of eggs/female from 211.21 to 260, and increased egg hatchability from 54.2% to 72% (32.84% improvement). The proposed method facilitated a systematic mathematical approach with a few well-defined experimental sets. PMID:23425103

  15. Total Quality Management: Statistics and Graphics III - Experimental Design and Taguchi Methods. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schwabe, Robert A.

    Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical methods used for TQM. This paper, the third in the series, first discusses some of the…

  16. Optimizing Aqua Splicer Parameters for Lycra-Cotton Core Spun Yarn Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Midha, Vinay Kumar; Hiremath, ShivKumar; Gupta, Vaibhav

    2015-10-01

    In this paper, optimization of the aqua splicer parameters, viz. opening time, splicing time, feed arm code (i.e. splice length) and duration of water joining, was carried out for a 37 tex lycra-cotton core spun yarn for better retained splice strength (RSS%), splice abrasion resistance (RYAR%) and splice appearance (RYA%) using a Taguchi experimental design. It is observed that as the opening time, splicing time and duration of water joining increase, the RSS% and RYAR% increase, whereas an increase in feed arm code leads to a decrease in both. The opening time and feed arm code do not have a significant effect on RYA%. The optimum RSS% of 92.02% was obtained at splicing parameters of 350 ms opening time, 180 ms splicing time, feed arm code 65 and 600 ms duration of water joining.

  17. Optimization of glucose formation in karanja biomass hydrolysis using Taguchi robust method.

    PubMed

    Radhakumari, M; Ball, Andy; Bhargava, Suresh K; Satyavathi, B

    2014-08-01

    The present study aims to optimize the process parameters for the production of glucose from karanja seed cake. The Taguchi robust design method with an L9 orthogonal array was applied to optimize the hydrolysis reaction conditions and maximize the sugar yield. Temperature, acid concentration and acid-to-cake weight ratio were considered the main factors influencing the percentage and amount of glucose formed. The experimental results indicated that acid concentration and liquid-to-solid ratio had a principal effect on the amount of glucose formed compared with temperature. The maximum glucose formed was 245 g/kg of extractive-free cake. PMID:24951940

  18. Application of Taguchi method in Nd-YAG laser welding of super duplex stainless steel

    SciTech Connect

    Yip, W.M.; Man, H.C.; Ip, W.H.

    1996-12-31

    This investigation is aimed at achieving a near 50-50% ferrite-austenite ratio in laser-welded super duplex stainless steel, UNS S32760 (Zeron 100). Bead-on-plate welding was carried out using a 2 kW Nd-YAG laser with three different waveforms: continuous, sine and square wave. The weld metals were examined with respect to phase volume content by X-ray diffraction. Laser welding involves a large number of variables, variable interactions and levels. The Taguchi method was selected and used to reduce the number of experimental conditions and to identify the dominant factors. The optimum combinations of controllable factors were found for each waveform. An optimum 40-60% ferrite-austenite ratio was realized for some of the parameter combinations after using the parameter design method.

  19. Optimization of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design

    NASA Astrophysics Data System (ADS)

    Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.

    2016-04-01

    Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Studying the impact of different factors simultaneously is a time- and cost-consuming process, and the use of experimental design methods can be useful in minimizing the number of experiments and finding locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode (RDE) was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, the L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. Scanning electron microscopy and energy dispersive x-ray spectroscopy verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the Taguchi analysis results, the mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate at an electrode rotation rate of 1000 rpm was found to give the optimum conditions.

  1. Multi-response optimization in the development of oleo-hydrophobic cotton fabric using Taguchi based grey relational analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Naseer; Kamal, Shahid; Raza, Zulfiqar Ali; Hussain, Tanveer; Anwar, Faiza

    2016-03-01

    The present study undertakes multi-response optimization of a water- and oil-repellent finishing of bleached cotton fabric using Taguchi-based grey relational analysis. Three input variables were considered, viz. the concentrations of the finish (Oleophobol CP-C) and the cross-linking agent (Knittex FEL), and the curing temperature. The responses included the water and oil contact angles, air permeability, crease recovery angle, stiffness, and the tear and tensile strengths of the finished fabric. The experiments were conducted under an L9 orthogonal array in the Taguchi design. Grey relational analysis was included to set the quality characteristics as the reference sequence and to decide the optimal parameter combinations. Additionally, analysis of variance was employed to determine the most significant factor. The results demonstrate a great improvement in the desired quality parameters of the developed fabric. The optimization approach reported in this study could be effectively used to reduce expensive trial-and-error experimentation for new product development and process optimization involving multiple responses. The product optimized in this study was characterized using advanced analytical techniques and has potential applications in rainwear and other outdoor apparel.
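    For context, grey relational analysis commonly converts each normalised response of run i into a grey relational coefficient and then averages the coefficients into a grade used to rank the runs; a widely used form (the normalisation direction and the distinguishing coefficient value are assumptions here, not details taken from the paper) is

        \xi_i(k) = \frac{\Delta_{\min} + \zeta\,\Delta_{\max}}{\Delta_i(k) + \zeta\,\Delta_{\max}}, \qquad \gamma_i = \frac{1}{m}\sum_{k=1}^{m}\xi_i(k)

    where \Delta_i(k) is the absolute deviation of the k-th normalised response of run i from the reference sequence, \zeta (typically 0.5) is the distinguishing coefficient, and the grade \gamma_i ranks the L9 runs.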

  2. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design…

  3. From Cookbook to Experimental Design

    ERIC Educational Resources Information Center

    Flannagan, Jenny Sue; McMillan, Rachel

    2009-01-01

    Developing expertise, whether from cook to chef or from student to scientist, occurs over time and requires encouragement, guidance, and support. One key goal of an elementary science program should be to move students toward expertise in their ability to design investigative questions. The ability to design a testable question is difficult for…

  4. Parametric optimization of selective laser melting for forming Ti6Al4V samples by Taguchi method

    NASA Astrophysics Data System (ADS)

    Sun, Jianfeng; Yang, Yongqiang; Wang, Di

    2013-07-01

    In this study, a selective laser melting experiment was carried out with Ti6Al4V alloy powders. To produce samples with maximum density, the selective laser melting parameters of laser power, scanning speed, powder thickness, hatching space and scanning strategy were carefully selected. As a statistical design of experiments technique, the Taguchi method was used to optimize the selected parameters. The results were analyzed using analysis of variance (ANOVA) and the signal-to-noise (S/N) ratios with Design-Expert software to obtain the optimal parameters, and a regression model was established. The regression equation revealed a linear relationship among the density, laser power, scanning speed, powder thickness and scanning strategy. From the experiments, a sample with a density higher than 95% was obtained. The microstructure of the obtained sample was mainly composed of acicular martensite, α phase and β phase. The micro-hardness was 492 HV0.2.

  5. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  7. The photocatalytic degradation of cationic surfactant from wastewater in the presence of nano-zinc oxide using Taguchi method

    NASA Astrophysics Data System (ADS)

    Giahi, M.; Moradidoost, A.; Bagherinia, M. A.; Taghavi, H.

    2013-12-01

    The photocatalytic degradation of cetyl pyridinium chloride (CPC) has been investigated in the aqueous phase using ultraviolet (UV) light and ZnO nanopowder. Kinetic analysis showed that the extent of photocatalytic degradation of the surfactant can be fitted with a pseudo-first-order model, and the photochemical elimination of CPC could be studied by the Taguchi method. The experimental design was based on testing five factors, i.e., dosage of K2S2O8, concentration of CPC, amount of ZnO, irradiation time and initial pH, each tested at four levels. The optimum parameters were found to be pH 5.0, ZnO 11 mg, K2S2O8 3 mM, CPC 10 mg/L and an irradiation time of 8 h.
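    The pseudo-first-order model mentioned above is, in its standard integrated form (generic symbols, not the authors' notation),

        \ln\!\left(\frac{C_0}{C_t}\right) = k_{\mathrm{app}}\,t

    where C_0 and C_t are the CPC concentrations at time zero and time t, and k_app is the apparent rate constant obtained from the slope of ln(C_0/C_t) versus t.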

  8. Designing High Quality Research in Special Education: Group Experimental Designs.

    ERIC Educational Resources Information Center

    Gersten, Russell; Lloyd, John Wills; Baker, Scott

    This paper, a result of a series of meetings of researchers, discusses critical issues related to the conduct of high-quality intervention research in special education using experimental and quasi-experimental designs that compare outcomes for different groups of students. It stresses the need to balance design components that satisfy laboratory…

  9. Experimental design of a waste glass study

    SciTech Connect

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

    A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150°C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.

  10. More efficiency in fuel consumption using gearbox optimization based on Taguchi method

    NASA Astrophysics Data System (ADS)

    Goharimanesh, Masoud; Akbari, Aliakbar; Akbarzadeh Tootoonchi, Alireza

    2014-05-01

    Automotive emissions are becoming a critical threat to human health, and many researchers are studying engine designs that lead to lower fuel consumption. Gearbox selection plays a key role in such a design. In this study, the Taguchi quality engineering method is employed and the optimum gear ratios of a five-speed gearbox are obtained. A table of various gear ratios is suggested by design of experiments techniques, and fuel consumption is calculated by simulating the corresponding combustion dynamics model. Using a 95% confidence level, the optimal parameter combinations are determined using the Taguchi method. The importance of each parameter for fuel efficiency is resolved using analysis of the signal-to-noise ratio as well as analysis of variance.

  11. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  12. Optimal experimental design strategies for detecting hormesis.

    PubMed

    Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee

    2011-12-01

    Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations. PMID:21545627

  14. Taguchi's off line method and Multivariate loss function approach for quality management and optimization of process parameters -A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered to be an effective approach to improve product quality at a relatively low cost, and the Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design then minimizes the total quality loss over the multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics, most of which were concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratios. The results reveal the advantages of this approach: the optimal parameter design coincides with that of the traditional Taguchi method in the single-quality-characteristic case, and the optimal design maximizes the reduction of total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems with the Taguchi method and on its successful implementation in various industries.
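    For orientation, the quadratic loss function underlying this line of work is usually written, for a single nominal-the-best characteristic and for a common multivariate extension, as

        L(y) = k\,(y - m)^2, \qquad L(\mathbf{y}) = \sum_{j=1}^{p} k_j\,(y_j - m_j)^2

    where m (or m_j) is the target value and k (or k_j) a cost coefficient; versions with cross terms, L(\mathbf{y}) = (\mathbf{y}-\mathbf{m})^{\mathsf{T}} K (\mathbf{y}-\mathbf{m}), are also used. These are textbook forms quoted for clarity; the specific multivariate loss functions surveyed in the review may differ in detail.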

  15. Using experimental design to define boundary manikins.

    PubMed

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction, it is essential to consider anthropometric diversity to ensure the intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition behind that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. Literature sources differ in how many manikins should be defined and in what way. A field related to the boundary case method is experimental design, in which the relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibility of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adopted to be used together with a confidence region and its axes. The results of the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups of manikins depends heavily on the number of key measurements but also on the type of experimental design chosen. PMID:22317428

  16. Taguchi Method Applied in Optimization of Shipley SJR 5740 Positive Resist Deposition

    NASA Technical Reports Server (NTRS)

    Hui, A.; Blosiu, J. O.; Wiberg, D. V.

    1998-01-01

    Taguchi Methods of Robust Design presents a way to optimize output process performance through an organized set of experiments using orthogonal arrays. Analysis of variance and the signal-to-noise ratio are used to evaluate the contribution of each of the controllable process parameters to the process optimization. In the photoresist deposition process, there are numerous controllable parameters that can affect the surface quality and thickness of the final photoresist layer.

  17. Experimental design for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    McPhee, James; Yeh, William W.-G.

    2006-02-01

    This study aims to develop a methodology for data collection that accounts for the application of simulation models in decision making for groundwater management. Simulation model reliability is estimated by comparing the effects that a perturbation in the model parameter space has over the model output as well as over the solution of a multiobjective optimization problem. The problem of experimental design for parameter estimation is formulated and solved using a combination of genetic algorithm and gradient-based optimization. Gaussian quadrature and Bayesian decision theory are combined for selecting the best design under parameter uncertainty. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential "true" states of the system. Results also show that the uncertainty analysis is able to identify complex interactions among the model parameters that may affect the performance of the experimental designs as well as the attainability of management objectives.

  18. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry will be explained. Unfortunately, experimental design is nowadays not as well known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained if this approach is followed. To do that, three real examples will be shown. Rather than on the mathematical aspects, this paper will focus on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic part can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, New York, 1978; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469]. PMID:19786177

  19. Status of Fusion Experimental Reactor (FER) design

    SciTech Connect

    Tone, T.; Fujisawa, N.; Sugihara, M.

    1985-07-01

    Conceptual design studies of the Fusion Experimental Reactor (FER) have been conducted at JAERI in line with a long-range plan for fusion reactor development laid out in the long-term program of the Atomic Energy Commission issued in 1982. The FER succeeding the tokamak device JT-60 is a tokamak reactor with a major mission of realizing a self-ignited long-burning DT plasma and demonstrating engineering feasibility. The paper describes recent developments of the FER design concept.

  20. Surface Roughness Prediction Model using Zirconia Toughened Alumina (ZTA) Turning Inserts: Taguchi Method and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Mandal, Nilrudra; Doloi, Biswanath; Mondal, Biswanath

    2016-01-01

    In the present study, an attempt has been made to apply the Taguchi parameter design method and regression analysis to optimize the cutting conditions for surface finish while machining AISI 4340 steel with newly developed yttria-based Zirconia Toughened Alumina (ZTA) inserts. These inserts are prepared through a wet chemical co-precipitation route followed by a powder metallurgy process. Experiments have been carried out based on an L9 orthogonal array with three parameters (cutting speed, depth of cut and feed rate) at three levels (low, medium and high). Based on the mean response and the signal-to-noise ratio (SNR), the optimal cutting condition was found to be A3B1C1, i.e. a cutting speed of 420 m/min, a depth of cut of 0.5 mm and a feed rate of 0.12 m/min, using the smaller-the-better approach. Analysis of variance (ANOVA) is applied to find the significance and percentage contribution of each parameter. A mathematical model of the surface roughness has been developed using regression analysis as a function of the above-mentioned independent variables. The values predicted from the developed model and the experimental values are found to be very close to each other, justifying the significance of the model. A confirmation run has been carried out at a 95% confidence level to verify the optimized result, and the values obtained are within the prescribed limit.
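    As a compact illustration of the two analysis steps named in this abstract (the smaller-the-better signal-to-noise ratio and a first-order regression model), the Python sketch below uses hypothetical L9 settings and roughness values, not the paper's data.

        import numpy as np

        # Hypothetical L9 turning settings (cutting speed, depth of cut, feed) and
        # measured surface roughness Ra values, purely for illustration.
        X  = np.array([[320, 0.5, 0.12], [320, 1.0, 0.16], [320, 1.5, 0.20],
                       [370, 0.5, 0.16], [370, 1.0, 0.20], [370, 1.5, 0.12],
                       [420, 0.5, 0.20], [420, 1.0, 0.12], [420, 1.5, 0.16]])
        Ra = np.array([1.10, 1.35, 1.62, 1.02, 1.30, 1.18, 0.95, 0.88, 1.05])

        # Smaller-the-better S/N (dB); with one reading per run this is -10*log10(Ra^2).
        sn = -10.0 * np.log10(Ra**2)
        print("run with the highest S/N (best finish):", int(np.argmax(sn)) + 1)

        # First-order regression model Ra = b0 + b1*speed + b2*depth + b3*feed.
        A = np.column_stack([np.ones(len(Ra)), X])
        coef, *_ = np.linalg.lstsq(A, Ra, rcond=None)
        print("regression coefficients [b0, b1, b2, b3]:", np.round(coef, 4))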

  1. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  3. Model Averaging Method for Supersaturated Experimental Design

    NASA Astrophysics Data System (ADS)

    Salaki, Deiby T.; Kurnia, Anang; Sartono, Bagus

    2016-01-01

    In this paper, a modified model averaging method is proposed. The candidate models are constructed by distinguishing the covariates into focus variables and auxiliary variables, and the weights are selected using the Mallows criterion. The illustrative results show that the proposed model averaging method can be considered a new alternative for supersaturated experimental designs, a typical form of high-dimensional data. A supersaturated factorial design is an experimental series in which the number of factors exceeds the number of runs, so its size is not sufficient to estimate all the main effects. By using the model averaging method, the estimation and prediction power is significantly enhanced. In our illustration, the main factors are regarded as focus variables in order to give them more attention, whereas the lesser factors are regarded as auxiliary variables, in line with the hierarchical ordering principle in experimental research. A limited empirical study shows that this method produces good predictions.
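    For reference, the Mallows criterion referred to here is, in its classical form for a single candidate model with p parameters, and in a commonly used extension for choosing averaging weights w_m over candidate models (whether the paper uses exactly this weighted form is an assumption),

        C_p = \frac{\mathrm{SSE}_p}{\hat{\sigma}^2} - n + 2p, \qquad C(w) = \Big\lVert y - \sum_m w_m \hat{\mu}_m \Big\rVert^2 + 2\hat{\sigma}^2 \sum_m w_m p_m, \quad w_m \ge 0,\ \sum_m w_m = 1

    where \hat{\mu}_m and p_m are the fitted values and parameter count of candidate model m, and \hat{\sigma}^2 is an estimate of the error variance.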

  4. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns. That is, we see that the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all. Another outcome of our study is a simple rule-of-thumb for prescribing optimal experiments for ellipse fitting that bypasses the computational expense of SED. In closing, we discuss the relevance of our findings for the NFL theorems as they might apply to more sophisticated design methods such as nonlinear and Bayesian SED.

  5. Optimization of Welding Parameters of Submerged Arc Welding Using Analytic Hierarchy Process (AHP) Based on Taguchi Technique

    NASA Astrophysics Data System (ADS)

    Sarkar, A.; Roy, J.; Majumder, A.; Saha, S. C.

    2014-04-01

    The present paper reports a new procedure using an analytic hierarchy process (AHP) based Taguchi method for selecting the best welding parameters for submerged arc welding of plain carbon steel. Selection of the best welding parameters is an unstructured decision problem involving process parameters for multiple weldments. In the present investigation, three process parameters, i.e. wire feed rate (Wf), stick-out (So) and traverse speed (Ts), and three response parameters, i.e. penetration, bead width and bead reinforcement, have been considered. The objective of the present work is thus to improve the quality of the welded elements by using an AHP-based Taguchi method. A Taguchi L16 orthogonal array is used to reduce the number of experimental runs. The Taguchi approach alone is insufficient to solve a multi-response optimization problem; in order to overcome this limitation, a multi-criteria decision-making method, AHP, is applied in the present study. The optimal condition for a quality weld (i.e. bead geometry) is found at a wire feed rate of 210 mm/min, a stick-out of 15 mm and a traverse speed of 0.75 m/min, and it is also observed that the effect of the wire feed rate on the overall bead geometry is more significant than that of the other welding parameters. Finally, a confirmatory test has been carried out to verify the optimal setting so obtained.

  6. Conceptual design of Fusion Experimental Reactor (FER)

    SciTech Connect

    Tone, T.; Fujisawa, N.

    1983-09-01

    Conceptual design studies of the Fusion Experimental Reactor (FER) have been performed. The FER has the objective of achieving self-ignition and demonstrating engineering feasibility as the next-generation tokamak after JT-60. Various concepts for the FER have been considered. The reference design is based on a double-null divertor. Optional design studies with some attractive features based on advanced concepts, such as a pumped limiter and RF current drive, have been carried out. Key design parameters are: fusion power of 440 MW, average neutron wall loading of 1 MW/m2, major radius of 5.5 m, plasma minor radius of 1.1 m, plasma elongation of 1.5, plasma current of 5.3 MA, toroidal beta of 4%, toroidal field on plasma axis of 5.7 T and tritium breeding ratio above unity.

  7. Evaluation of Thermal Interruption Capability in SF6 Gas Circuit Breakers with Re-ignition Voltage and its Application to Experimental Design

    NASA Astrophysics Data System (ADS)

    Urai, Hajime; Ooshita, Youichi; Koizumi, Makoto; Tsukushi, Masanori

    The thermal interruption characteristics of SF6 gas circuit breakers were investigated by voltage measurements around current zero. We found that the re-ignition peak voltage in the case of an interruption failure can effectively indicate the thermal interruption capability. The efficiency of interruption with respect to puffer pressure could also be evaluated by analyzing the dependence of the re-ignition peak voltage on puffer pressure. This technique was applied to experimental design based on the Taguchi method, and we successfully optimized the balance between the puffer pressure build-up performance and the thermal interruption efficiency with respect to puffer pressure. Finally, we demonstrated that a small research circuit breaker with a self-blast interrupter was able to successfully clear the 90% short-line fault interruption duty corresponding to a 50 kA rating in the thermal interruption region.

  8. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims. PMID:16161223

  9. Analyzing split-plot experimental design using partitioned design matrices

    NASA Astrophysics Data System (ADS)

    Nugroho, Sigit

    2016-02-01

    In linear models, QR decomposition can be used to calculate sums of squares. However, it has the limitation that the number of rows of the design matrix, which is also the number of observations or responses, must be greater than the total number of parameters used in the model. The partitioned design matrix method may be used to calculate the sums of squares of models in which the total number of parameters is greater than the number of observations. Such a partitioning is discussed for the split-plot experimental design.
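
    For reference, the following minimal sketch shows the QR route mentioned above on an ordinary full-rank linear model (n > p): the response is projected onto the column space of the design matrix and the model, error and total sums of squares are read off the projection. The design matrix and data are invented, and the partitioned-matrix extension for the case with more parameters than observations described in the record is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy full-rank design: intercept plus two treatment indicators (n > p).
n = 12
X = np.column_stack([np.ones(n),
                     np.repeat([0, 1, 0], 4),
                     np.repeat([0, 0, 1], 4)])
beta_true = np.array([5.0, 1.5, -2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Thin QR of X: fitted values are the projection Q Q^T y onto col(X).
Q, _ = np.linalg.qr(X)               # Q is n x p with orthonormal columns
y_hat = Q @ (Q.T @ y)

ss_total = np.sum((y - y.mean()) ** 2)
ss_model = np.sum((y_hat - y.mean()) ** 2)
ss_error = np.sum((y - y_hat) ** 2)
print(ss_model, ss_error, ss_total, ss_model + ss_error)  # last two agree
```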

  10. Application of Taguchi based Response Surface Method (TRSM) for Optimization of Multi Responses in Drilling Al/SiC/Al2O3 Hybrid Composite

    NASA Astrophysics Data System (ADS)

    Adalarasan, R.; Santhanakumar, M.

    2015-01-01

    The emerging industrial applications of second generation hybrid composites demand an organised study of their drilling characteristics, as drilling is an essential metal removal process in the final fabrication stage. In the present work, surface finish and burr height were observed while drilling an Al6061/SiC/Al2O3 composite for various combinations of drilling parameters such as the feed rate, spindle speed and tool point angle. The experimental trials were designed using an L18 orthogonal array, and a Taguchi based response surface method was presented for optimizing the drilling parameters. The significant improvements in the responses observed at the optimal parameter setting have validated the TRSM approach, permitting its application in other areas of manufacturing.
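
    The general mechanics of a Taguchi based response surface step, i.e. fitting a second-order model to orthogonal-array data and then searching the fitted surface for an optimum, can be sketched as follows. Only two coded factors are used for brevity, and the settings and burr-height values are invented rather than taken from the L18 trials in the record.

```python
import numpy as np
from itertools import product

# Hypothetical coded settings for two of the drilling factors
# (feed rate, spindle speed) and an invented burr-height response.
X = np.array([[-1, -1], [-1, 0], [-1, 1],
              [ 0, -1], [ 0, 0], [ 0, 1],
              [ 1, -1], [ 1, 0], [ 1, 1]], dtype=float)
y = np.array([0.21, 0.18, 0.17, 0.26, 0.22, 0.20, 0.35, 0.30, 0.28])

# Second-order response-surface model: 1, x1, x2, x1*x2, x1^2, x2^2.
def features(x1, x2):
    return [1.0, x1, x2, x1 * x2, x1 * x1, x2 * x2]

F = np.array([features(*row) for row in X])
coef, *_ = np.linalg.lstsq(F, y, rcond=None)

# Search the fitted surface for the coded setting minimizing burr height.
grid = np.linspace(-1, 1, 41)
best = min(product(grid, grid), key=lambda p: float(np.dot(features(*p), coef)))
print("fitted coefficients:", coef.round(4))
print("coded optimum (feed, speed):", best)
```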

  11. Experimental design of laminar proportional amplifiers

    NASA Technical Reports Server (NTRS)

    Hellbaum, R. F.

    1976-01-01

    An experimental program was initiated at Langley Research Center to study the effects of various parameters on the design of laminar proportional beam deflection amplifiers. Matching and staging of amplifiers to obtain high-pressure gain was also studied. Variable parameters were aspect ratio, setback, control length, receiver distance, receiver width, width of center vent, and bias pressure levels. Usable pressure gains from 4 to 19 per stage can now be achieved, and five amplifiers were staged together to yield pressure gains up to 2,000,000.

  12. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240
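
    A toy stand-in for the set-based measurement prediction described above is sketched below: a one-parameter decay model whose rate constant is known only to lie in an interval, with output bounds at candidate time points obtained from the interval endpoints (valid here because the model is monotone in the parameter). Picking the candidate with the widest predicted range is used as a simple heuristic for the most informative measurement; the model, numbers and selection rule are illustrative only and do not reproduce the interval-analysis machinery of the paper.

```python
import numpy as np

# Toy bounded-error setting: one-state decay model x(t) = x0 * exp(-k * t)
# with an uncertain rate constant known only to lie in the box [k_lo, k_hi].
x0 = 10.0
k_box = (0.2, 0.6)
candidates = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # candidate sampling times

def output_bounds(t, k_lo, k_hi):
    # x(t) is decreasing in k for t > 0, so the box endpoints give exact bounds.
    return x0 * np.exp(-k_hi * t), x0 * np.exp(-k_lo * t)

widths = []
for t in candidates:
    lo, hi = output_bounds(t, *k_box)
    widths.append(hi - lo)

# Heuristic: the time point with the widest predicted measurement range is
# the one where a new measurement cuts the consistent parameter set the most.
best = candidates[int(np.argmax(widths))]
print(dict(zip(candidates.tolist(), np.round(widths, 3).tolist())))
print("most informative candidate time:", best)
```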

  13. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach an unprecedented accuracy of 1 part in 10⁸ in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (∝ G²) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J₂, and will improve determination of a variety of relativistic effects including Lense-Thirring precession. The mission will benefit from the recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in the tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  14. Design and Experimental Applications of Acoustic Metamaterials

    NASA Astrophysics Data System (ADS)

    Zigoneanu, Lucian

    Acoustic metamaterials are engineered materials that have been extensively investigated over recent years mainly because they promise properties otherwise hard or impossible to find in nature. Consequently, they open the door for improved or completely new applications (e.g. an acoustic superlens that can exceed the diffraction limit in imaging, or acoustic absorbing panels with higher transmission loss and smaller thickness than regular absorbers). Our objective is to surpass the limited frequency operating range imposed by the resonant mechanism that some of these materials have. In addition, we want acoustic metamaterials that can be experimentally demonstrated and used to build devices with overall performances better than those previously reported in the literature. Here, we start by focusing on the need for engineered metamaterials in general and acoustic metamaterials in particular. Also, the similarities between electromagnetic metamaterials and acoustic metamaterials and possible ways to realize broadband acoustic metamaterials are briefly discussed. Then, we present the experimental realization and characterization of a two-dimensional (2D) broadband acoustic metamaterial with strongly anisotropic effective mass density. We use this metamaterial to realize a 2D broadband gradient index acoustic lens in air. Furthermore, we optimize the lens design by improving each unit cell's performance and we also realize a 2D acoustic ground cloak in air. In addition, we explore the performance of some novel applications (a 2D acoustic black hole and a three-dimensional acoustic cloak) using the currently available acoustic metamaterials. In order to overcome the limitations of our designs, we follow the active acoustic metamaterials path, which offers a broader range for the material parameter values and better control over them. We propose two structures which contain a sensing element (microphone) and an acoustic driver (piezoelectric membrane or speaker). The material properties are controlled by tuning the response of the unit cell to the incident wave. Several samples with interesting effective mass density and bulk modulus are presented. We conclude by suggesting a few natural directions that could be followed in future research based on the theoretical and experimental results presented in this work.

  15. Optimization of parameters for the synthesis of Y2Cu2O5 nanoparticles by Taguchi method and comparison of their magnetic and optical properties with their bulk counterpart

    NASA Astrophysics Data System (ADS)

    Farbod, Mansoor; Rafati, Zahra; Shoushtari, Morteza Zargar

    2016-06-01

    Y2Cu2O5 nanoparticles were synthesized by a sol-gel combustion method and the effects of different factors on the size of the nanoparticles were investigated. In order to reduce the number of experimental stages, the Taguchi robust design method was employed. The citric acid:Cu2+ molar ratio, pH, sintering temperature and sintering time were chosen as the parameters for optimization. Among these factors the solution pH had the most influence, and the others had nearly the same influence on the nanoparticle size. Based on the conditions predicted by the Taguchi design, a sample with a minimum particle size of 47 nm was prepared. The magnetic behavior of the Y2Cu2O5 nanoparticles was measured; at low fields they are soft ferromagnetic, whereas at high fields they behave paramagnetically. When the magnetic behavior of the nanoparticles was compared with that of their bulk counterpart, the Mr values were only slightly different, but the Hc of the nanoparticles was 76% of that of the bulk sample. The maximum absorbance peak of the UV-vis spectrum showed a blue shift for the smaller particles.

  16. [Design and experimentation of marine optical buoy].

    PubMed

    Yang, Yue-Zhong; Sun, Zhao-Hua; Cao, Wen-Xi; Li, Cai; Zhao, Jun; Zhou, Wen; Lu, Gui-Xin; Ke, Tian-Cun; Guo, Chao-Ying

    2009-02-01

    A marine optical buoy is of important value for the calibration and validation of ocean color remote sensing, scientific observation, coastal environment monitoring, etc. A marine optical buoy system was designed which consists of a main buoy and a slave buoy. The system can synchronously measure the distribution of irradiance and radiance above the sea surface, in the layer near the sea surface and in the euphotic zone, while other parameters are also acquired, such as the spectral absorption and scattering coefficients of the water column and the velocity and direction of the wind. The buoy is positioned by GPS. A low-power integrated PC104 computer is used as the control core to collect data automatically. Data and commands are transmitted in real time over CDMA/GPRS wireless networks or by maritime satellite. Coastal marine experiments demonstrated that the buoy has small pitch and roll rates in high sea state conditions and thus can meet the needs of underwater radiometric measurements, that data collection and remote transmission are reliable, and that the auto-operated anti-biofouling devices can ensure that the optical sensors work effectively for a period of several months. PMID:19445253

  17. An optimization of superhydrophobic polyvinylidene fluoride/zinc oxide materials using Taguchi method

    NASA Astrophysics Data System (ADS)

    Mohamed, Adel M. A.; Jafari, Reza; Farzaneh, Masoud

    2014-01-01

    This article is focused on the preparation and characterization of PVDF/ZnO composite materials. The superhydrophobic surface was prepared by spray coating a mixture of PVDF polymer and ZnO nanoparticles onto an aluminum substrate. Stearic acid was added to improve the dispersion of the ZnO. Taguchi's design of experiment method, implemented in MINITAB15, was used to rank several factors that may affect the superhydrophobic properties in order to formulate the optimum conditions. The Taguchi L9 orthogonal array was applied with three levels for each factor. ANOVA was carried out to identify the significant factors that affect the water contact angle. Confirmation tests were performed at the predicted optimum process parameters. The crystallinity and morphology of the PVDF-ZnO membranes were determined by Fourier transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM). The results of the Taguchi method indicate that the ZnO and stearic acid contents were the parameters making a significant contribution toward improving the hydrophobicity of the PVDF materials. As the content of ZnO nanoparticles increased, the water contact angle increased, ranging from 122° to 159°, while the contact angle hysteresis and sliding angle decreased to 3.5° and 2.5°, respectively. The SEM results show that the hierarchical micro-nanostructure of ZnO plays an important role in the formation of the superhydrophobic surface. FTIR results showed that, in both the absence and the presence of ZnO nanoparticles, the crystallization of the PVDF occurred predominantly in the β-phase.

  18. Identification of Dysfunctional Cooperative Learning Teams Using Taguchi Quality Indexes

    ERIC Educational Resources Information Center

    Hsiung, Chin-Min

    2011-01-01

    In this study, dysfunctional cooperative learning teams are identified by comparing the Taguchi "larger-the-better" quality index for the academic achievement of students in a cooperative learning condition with that of students in an individualistic learning condition. In performing the experiments, 42 sophomore mechanical engineering students…
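
    The "larger-the-better" quality index mentioned above is the standard Taguchi signal-to-noise ratio S/N = -10 log10((1/n) Σ 1/y_i²). A minimal sketch of the comparison, with invented achievement scores, is given below; the flagging rule in the final comment is only a paraphrase of the idea, not the paper's exact criterion.

```python
import numpy as np

# Taguchi "larger-the-better" quality index: S/N = -10*log10(mean(1/y^2)).
def larger_the_better_sn(scores):
    scores = np.asarray(scores, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / scores ** 2))

# Hypothetical academic-achievement scores (invented data).
cooperative_team = [78, 85, 90, 72]
individual_group = [70, 74, 69, 73]

print("cooperative S/N:", round(larger_the_better_sn(cooperative_team), 2))
print("individual  S/N:", round(larger_the_better_sn(individual_group), 2))
# A cooperative team whose index falls below the individualistic baseline
# would be flagged as potentially dysfunctional under this kind of scheme.
```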

  20. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows information-based collaboration between two robotic units towards the same goal in an automated fashion.
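
    The experiment-selection rule described above, i.e. choose the candidate experiment whose distribution of predicted measurements has maximum entropy, can be sketched with a toy forward model. The posterior samples, the probe model and the noise level below are invented stand-ins for the robotic-arm setup, and a simple histogram estimator is used for the entropy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inquiry step: posterior samples of an unknown parameter (e.g. a circle
# radius the arm is probing) and a set of candidate probe positions.
posterior_radius = rng.normal(2.0, 0.3, size=5000)
candidates = np.linspace(0.5, 4.0, 15)          # candidate probe positions

def predicted_measurements(x, radius_samples, noise=0.05):
    # Hypothetical forward model: the probe reads 1 inside the circle and 0
    # outside, blurred by sensor noise.
    return (x < radius_samples).astype(float) + rng.normal(0, noise, radius_samples.size)

def entropy(samples, bins=30):
    p, _ = np.histogram(samples, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

scores = [entropy(predicted_measurements(x, posterior_radius)) for x in candidates]
best = candidates[int(np.argmax(scores))]
print("most informative probe position:", round(float(best), 2))
```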

  1. Laser Doppler vibrometer: unique use of DOE/Taguchi methodologies in the arena of pyroshock (10 to 100,000 HZ) response spectrum

    NASA Astrophysics Data System (ADS)

    Litz, C. J., Jr.

    1994-09-01

    Discussed is the unique application of design of experiment (DOE) to structure and test a Taguchi L9 (3²) factorial experimental matrix (nine tests to study two factors, each factor at three levels), utilizing an HeNe laser Doppler vibrometer and piezocrystal accelerometers to monitor the explosively induced vibrations through the frequency range of 10 to 10⁵ Hz on a flat steel plate (96 x 48 x 0.25 in.). An initial discussion is presented of pyrotechnic shock, or pyroshock, which is a short-duration, high-amplitude, high-frequency transient structural response in aerospace vehicle structures following firing of an ordnance item to separate, sever missile skin, or release a structural member. The development of the shock response spectra (SRS) is detailed. The use of the laser Doppler vibrometer for generating velocity-acceleration-time histories near to and at a separation distance from the explosive, and the resulting shock response spectra plots, is detailed together with the laser Doppler vibrometer setup used. The use of DOE/Taguchi as a means of generating performance metrics, prediction equations, and response surface plots is presented as a means to statistically compare and rate the performance of the HeNe laser Doppler vibrometer with respect to two different piezoelectric crystal accelerometers of the contact type mounted directly to the test plate at frequencies in the 300, 3000, and 10,000 Hz range. Specific constructive conclusions and recommendations are presented on the totally new dimension of understanding the pyroshock phenomenon with respect to the effects and interrelationships of explosive charge weight, location, and the laser Doppler recording system. The use of these valuable statistical tools on other experiments can be cost-effective and provide valuable insight to aid understanding of testing or process control by the engineering community. The superiority of the HeNe laser Doppler vibrometer performance is demonstrated.

  2. Preparation of photocatalytic ZnO nanoparticles and application in photochemical degradation of betamethasone sodium phosphate using taguchi approach

    NASA Astrophysics Data System (ADS)

    Giahi, M.; Farajpour, G.; Taghavi, H.; Shokri, S.

    2014-07-01

    In this study, ZnO nanoparticles were prepared by a sol-gel method for the first time. The Taguchi method was used to identify the factors that may affect the degradation percentage of betamethasone sodium phosphate in wastewater in a UV/K2S2O8/nano-ZnO system. Our experimental design consisted of testing five factors, i.e., dosage of K2S2O8, concentration of betamethasone sodium phosphate, amount of ZnO, irradiation time and initial pH, with four levels of each factor. It was found that the optimum parameters are: irradiation time, 180 min; pH, 9.0; betamethasone sodium phosphate, 30 mg/L; amount of ZnO, 13 mg; and K2S2O8, 1 mM. The percentage contribution of each factor was determined by analysis of variance (ANOVA). The results showed that irradiation time, pH, amount of ZnO, drug concentration and dosage of K2S2O8 contributed 46.73, 28.56, 11.56, 6.70, and 6.44%, respectively. Finally, the kinetics of the process were studied and the photodegradation rate of betamethasone sodium phosphate was found to obey a pseudo-first-order kinetics equation represented by the Langmuir-Hinshelwood model.
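
    The percentage-contribution arithmetic behind such an ANOVA table is sketched below on the standard L9(3^4) layout for compactness (the study itself used four levels per factor, so this is only a stand-in); the level assignments follow the standard L9 array, while the degradation values are invented.

```python
import numpy as np

# Factor names follow the study; responses (% degradation) are invented.
factors = ["irradiation time", "pH", "ZnO amount", "drug concentration"]
L9 = np.array([[0, 0, 0, 0],
               [0, 1, 1, 1],
               [0, 2, 2, 2],
               [1, 0, 1, 2],
               [1, 1, 2, 0],
               [1, 2, 0, 1],
               [2, 0, 2, 1],
               [2, 1, 0, 2],
               [2, 2, 1, 0]])
y = np.array([42.0, 55.0, 61.0, 58.0, 73.0, 49.0, 66.0, 52.0, 69.0])

grand = y.mean()
ss_total = np.sum((y - grand) ** 2)
for j, name in enumerate(factors):
    # Factor sum of squares from the level means of the orthogonal column.
    ss = sum((L9[:, j] == lev).sum() * (y[L9[:, j] == lev].mean() - grand) ** 2
             for lev in range(3))
    print(f"{name:>20}: {100 * ss / ss_total:5.1f} % contribution")
```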

  3. Enhancement of process capability for strip force of tight sets of optical fiber using Taguchi's Quality Engineering

    NASA Astrophysics Data System (ADS)

    Lin, Wen-Tsann; Wang, Shen-Tsu; Li, Meng-Hua; Huang, Chiao-Tzu

    2012-03-01

    Strip force is the key indicator of product quality when manufacturing tight sets of fiber. This study used Integrated computer-aided manufacturing DEFinition 0 (IDEF0) modeling to examine the detailed cladding processes of tight sets of fiber in transnational optical connector manufacturing. The results showed that the key factor causing an unstable interface connection is the extruder adjustment process. The factors causing improper strip force were analyzed through the literature, practice, and gray relational analysis. The parameter design method of Taguchi's Quality Engineering was used to determine the optimal experimental combinations for the processes of tight sets of fiber. This study employed empirical case analysis to obtain a model for improving the strip-force process of tight sets of fiber, and determined the correlated factors that affect process quality for tight sets of fiber. The findings indicated that the process capability index (Cpk) increased significantly, which can facilitate improvement of the product process capability and quality. The empirical results can serve as a reference for improving product quality in the optical fiber industry.
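
    The process capability index quoted in the record is conventionally Cpk = min(USL - μ, μ - LSL) / 3σ. The sketch below computes it on invented strip-force samples before and after a parameter-design improvement; the specification limits are also invented.

```python
import numpy as np

def cpk(samples, lsl, usl):
    # Cpk = min(USL - mean, mean - LSL) / (3 * sample standard deviation).
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

lsl, usl = 1.3, 8.9                    # hypothetical strip-force spec (N)
rng = np.random.default_rng(7)
before = rng.normal(loc=4.2, scale=1.1, size=50)   # invented measurements
after = rng.normal(loc=5.0, scale=0.6, size=50)

print(f"Cpk before optimization: {cpk(before, lsl, usl):.2f}")
print(f"Cpk after  optimization: {cpk(after, lsl, usl):.2f}")
```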

  4. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  5. Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by Taguchi grey relational theory

    NASA Astrophysics Data System (ADS)

    Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj

    2015-05-01

    Electric discharge drilling, carried out on an electric discharge drill machine (EDDM), is a spark erosion process used to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. For performance evaluation of the electric discharge drilling machine, it is necessary to study the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generate output responses such as tool wear rate (TWR). Parameters such as pulse on-time, pulse off-time and water pressure were studied to obtain the best machining characteristics. This investigation presents the use of the Taguchi approach for better TWR in drilling of Al-7075. A plan of experiments based on the Taguchi L27 design method was selected for drilling the material. Analysis of variance (ANOVA) shows the percentage contribution of the control factors in the machining of Al-7075 by EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.
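
    The grey relational aggregation named in the title, which folds several responses into one grade per run before the usual Taguchi level analysis, can be sketched as follows. The tool wear rate and material removal rate values are invented, and the distinguishing coefficient is set to the customary 0.5.

```python
import numpy as np

# Invented responses for a handful of runs.
twr = np.array([0.012, 0.009, 0.015, 0.007])   # tool wear rate, smaller-the-better
mrr = np.array([0.85, 0.92, 0.78, 0.88])       # material removal rate, larger-the-better

def normalize(x, larger_is_better):
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_coeff(xn, zeta=0.5):
    delta = np.abs(1.0 - xn)                    # deviation from the ideal (= 1)
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

coeffs = np.vstack([grey_coeff(normalize(twr, False)),
                    grey_coeff(normalize(mrr, True))])
grade = coeffs.mean(axis=0)                     # grey relational grade per run
print("grades:", grade.round(3), "-> best run:", int(np.argmax(grade)) + 1)
```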

  6. EXPERIMENTAL DESIGN: STATISTICAL CONSIDERATIONS AND ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this book chapter, information on how field experiments in invertebrate pathology are designed and the data collected, analyzed, and interpreted is presented. The practical and statistical issues that need to be considered and the rationale and assumptions behind different designs or procedures ...

  7. EBTS:DESIGN AND EXPERIMENTAL STUDY.

    SciTech Connect

    PIKIN,A.; ALESSI,J.; BEEBE,E.; KPONOU,A.; PRELEC,K.; KUZNETSOV,G.; TIUNOV,M.

    2000-11-06

    Experimental study of the BNL Electron Beam Test Stand (EBTS), which is a prototype of the Relativistic Heavy Ion Collider (RHIC) Electron Beam Ion Source (EBIS), is currently underway. The basic physics and engineering aspects of a high current EBIS implemented in EBTS are outlined and construction of its main systems is presented. Efficient transmission of a 10 A electron beam through the ion trap has been achieved. Experimental results on generation of multiply charged ions with both continuous gas and external ion injection confirm stable operation of the ion trap.

  8. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  9. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  11. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still many unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV linac. This rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used in order to evaluate the accuracy of the dose calculated using a commercial planning system. Challenges in this murine model are discussed.

  12. Irradiation Design for an Experimental Murine Model

    NASA Astrophysics Data System (ADS)

    Ballesteros-Zebadúa, P.; Lárraga-Gutierrez, J. M.; García-Garduño, O. A.; Rubio-Osornio, M. C.; Custodio-Ramírez, V.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Paz, C.; Celis, M. A.

    2010-12-01

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still many unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV linac. This rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used in order to evaluate the accuracy of the dose calculated using a commercial planning system. Challenges in this murine model are discussed.

  13. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  15. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    ERIC Educational Resources Information Center

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  16. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  17. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.

  18. Collimator design for experimental minibeam radiation therapy

    SciTech Connect

    Babcock, Kerry; Sidhu, Narinder; Kundapur, Vijayananda; Ali, Kaiser

    2011-04-15

    Purpose: To design and optimize a minibeam collimator for minibeam radiation therapy studies using a 250 kVp x-ray machine as a simulated synchrotron source. Methods: A Philips RT250 orthovoltage x-ray machine was modeled using the EGSnrc/BEAMnrc Monte Carlo software. The resulting machine model was coupled to a model of a minibeam collimator with a beam aperture of 1 mm. Interaperture spacing and collimator thickness were varied to produce a minibeam with the desired peak-to-valley ratio. Results: Proper design of a minibeam collimator with Monte Carlo methods requires detailed knowledge of the x-ray source setup. For a cathode-ray tube source, the beam spot size, target angle, and source shielding all determine the final valley-to-peak dose ratio. Conclusions: A minibeam collimator setup was created, which can deliver a 30 Gy peak dose minibeam radiation therapy treatment at depths less than 1 cm with a valley-to-peak dose ratio on the order of 23%.

  19. Optimal experimental design for diffusion kurtosis imaging.

    PubMed

    Poot, Dirk H J; den Dekker, Arnold J; Achten, Eric; Verhoye, Marleen; Sijbers, Jan

    2010-03-01

    Diffusion kurtosis imaging (DKI) is a new magnetic resonance imaging (MRI) model that describes the non-Gaussian diffusion behavior in tissues. It has recently been shown that DKI parameters, such as the radial or axial kurtosis, are more sensitive to brain physiology changes than the well-known diffusion tensor imaging (DTI) parameters in several white and gray matter structures. In order to estimate either DTI or DKI parameters with maximum precision, the diffusion weighting gradient settings that are applied during the acquisition need to be optimized. Indeed, it has been shown previously that optimizing the set of diffusion weighting gradient settings can have a significant effect on the precision with which DTI parameters can be estimated. In this paper, we focus on the optimization of DKI gradient settings. Commonly, DKI data are acquired using a standard set of diffusion weighting gradients with fixed directions and with regularly spaced gradient strengths. In this paper, we show that such gradient settings are suboptimal with respect to the precision with which DKI parameters can be estimated. Furthermore, the gradient directions and the strengths of the diffusion-weighted MR images are optimized by minimizing the Cramér-Rao lower bound of the DKI parameters. The impact of the optimized gradient settings is evaluated on both simulated and experimentally recorded datasets. It is shown that the precision with which the kurtosis parameters can be estimated increases substantially when the gradient settings are optimized. PMID:20199917
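
    The Cramér-Rao reasoning can be illustrated on a single-direction kurtosis signal model S(b) = S0·exp(-bD + (bD)²K/6): the Fisher information of a candidate set of b-values bounds the variance of any unbiased estimate of K. The parameter values, noise level and the two candidate b-value sets in the sketch are invented, and the full directional optimization of the paper is not attempted.

```python
import numpy as np

theta = np.array([1.0, 1.0e-3, 1.0])          # S0 [a.u.], D [mm^2/s], K [-]
sigma = 0.02                                   # Gaussian noise std (invented)

def signal(b, p):
    s0, d, k = p
    return s0 * np.exp(-b * d + (b * d) ** 2 * k / 6.0)

def crlb(bvals, p, eps=1e-6):
    # Numerical Jacobian of the signal with respect to the parameters.
    J = np.empty((bvals.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps * max(abs(p[j]), 1.0)
        J[:, j] = (signal(bvals, p + dp) - signal(bvals, p - dp)) / (2 * dp[j])
    fisher = J.T @ J / sigma ** 2
    return np.diag(np.linalg.inv(fisher))      # CRLB variances of (S0, D, K)

regular = np.array([0, 500, 1000, 1500, 2000, 2500], dtype=float)   # s/mm^2
two_shell = np.array([0, 0, 1000, 1000, 2500, 2500], dtype=float)

for name, b in [("regularly spaced", regular), ("two-shell", two_shell)]:
    var_k = crlb(b, theta)[2]
    print(f"{name:>17}: CRLB std of K = {np.sqrt(var_k):.3f}")
```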

  20. Important considerations in experimental design for large scale simulation analyses

    SciTech Connect

    Rutherford, B.

    1998-05-01

    Economic and other factors accompanying developments in physics, mathematics and particularly in computer technology are shifting a substantial portion of the experimental resources associated with large scale engineering projects from physical testing to modeling and simulation. In the process, the priorities for selecting meaningful and informative tests and simulations to perform are also changing. This paper describes issues related to experimental design and how the goals and priorities of the experimental design for these problems are changing to accommodate this shift in experimentation. Issues, priorities and new methods of approach are discussed.

  1. Optimization of laser cladding process using taguchi and EM methods for MMC coating production

    NASA Astrophysics Data System (ADS)

    Dubourg, L.; St-Georges, L.

    2006-12-01

    This study investigates the influence of laser cladding parameters on the geometry and composition of metal-matrix composite (MMC) coatings. The composite coatings are made of a Ni-Cr-B-Si metallic matrix and WC reinforcement with a volume fraction of 50%. Optical microscopy is used to characterize the coating geometry (height, width, and penetration depth) and to determine the real volumetric content of WC. Laser cladding on a low-carbon steel substrate is carried out using a continuous-wave (cw) neodymium:yttrium-aluminum-garnet (Nd:YAG) laser, a coaxial powder injection system, and a combination of Taguchi and EM methods to design the experiments. This combination efficiently explores the multidimensional volume of laser cladding parameters. The results, which express the interrelationship between laser cladding parameters and the characteristics of the clad produced, can be used to find optimum laser parameters, to predict the responses, and to improve understanding of the laser cladding process.

  2. Using Taguchi method to optimize welding pool of dissimilar laser-welded components

    NASA Astrophysics Data System (ADS)

    Anawa, E. M.; Olabi, A. G.

    2008-03-01

    In the present work, a continuous CO2 laser welding process was successfully applied and optimized for joining dissimilar AISI 316 stainless steel and AISI 1009 low carbon steel plates. Combinations of laser power, welding speed and defocusing distance were carefully selected with the objective of producing a welded joint with complete penetration, minimum fusion zone size and an acceptable welding profile. The fusion zone area and shape of the dissimilar austenitic stainless steel to ferritic low carbon steel joints were evaluated as a function of the selected laser welding parameters. The Taguchi approach was used as a statistical design of experiment (DOE) technique for optimizing the selected welding parameters in terms of minimizing the fusion zone. Mathematical models were developed to describe the influence of the selected parameters on the fusion zone area and shape, and to predict their values within the limits of the variables studied. The results indicate that the developed models can predict the responses satisfactorily.

  3. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,

  4. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  5. A Single Group Experimental Design for Educational Research

    ERIC Educational Resources Information Center

    Papillon, Alfred L.

    1972-01-01

    The concept of the single group experimental research design is to test the null hypothesis that there is no significant difference between the achievement of the pupils under the experimental treatment and their achievement at their previous rate of progress. (Author)

  7. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  8. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  9. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  11. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to the titrimetric measurements using the MS Excel LINEST function to estimate the concentration of each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
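
    A least-squares analog of the spreadsheet LINEST fit for such a multiplexed titration is sketched below: each run titrates a different blend of three vinegar samples, and the unknown concentrations are recovered from the titre volumes by solving the resulting linear model. The volumes, concentrations and noise level are invented.

```python
import numpy as np

V = np.array([[10.0,  0.0,  0.0],     # mL of samples A, B, C in each run
              [ 0.0, 10.0,  0.0],
              [ 0.0,  0.0, 10.0],
              [ 5.0,  5.0,  0.0],
              [ 5.0,  0.0,  5.0],
              [ 0.0,  5.0,  5.0],
              [ 4.0,  3.0,  3.0]])
c_true = np.array([0.83, 0.70, 0.95])          # mol/L acetic acid (invented)
titre_molarity = 0.5                            # mol/L NaOH
rng = np.random.default_rng(3)

# Simulated titre volumes (mL): moles of acid per run divided by NaOH molarity.
v_naoh = (V / 1000.0) @ c_true / titre_molarity * 1000.0
v_naoh += rng.normal(scale=0.05, size=v_naoh.size)      # burette noise

# Design matrix maps the unknown concentrations to predicted titre volumes.
A = V / titre_molarity
c_est, *_ = np.linalg.lstsq(A, v_naoh, rcond=None)
print("estimated concentrations (mol/L):", c_est.round(3))
```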

  12. Parametric study of the biopotential equation for breast tumour identification using ANOVA and Taguchi method.

    PubMed

    Ng, Eddie Y K; Ng, W Kee

    2006-03-01

    Extensive literature has shown a significant trend of progressive electrical changes according to the proliferative characteristics of breast epithelial cells. Physiologists have further postulated that malignant transformation results from sustained depolarization and a failure of the cell to repolarize after cell division, making the area where cancer develops relatively depolarized compared with its non-dividing or resting counterparts. In this paper, we present a new approach, the Biofield Diagnostic System (BDS), which might have the potential to augment the process of diagnosing breast cancer. This technique is based on the efficacy of analysing skin surface electrical potentials for the differential diagnosis of breast abnormalities. We developed a female breast model, close to the actual anatomy, by considering the breast as a hemisphere in the supine condition with various layers of unequal thickness. Isotropic homogeneous conductivity was assigned to each of these compartments and the volume conductor problem was solved using the finite element method to determine the potential distribution developed due to a dipole source. Furthermore, four important parameters were identified and analysis of variance (ANOVA, Yates' method) was performed using a 2^n factorial design (n = number of parameters = 4). The effect and importance of these parameters were analysed. The Taguchi method was further used to optimise the parameters in order to ensure that the signal from the tumour is maximum compared with the noise from other factors. The Taguchi method proved that the probe source strength, tumour size and tumour location have a great effect on the surface potential field. For the best signal on the breast surface, and for the largest possible tumour size, low-amplitude currents should be applied nearest to the breast surface. PMID:16929931
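
    Yates' method referred to above is a fast tabular way to turn the 2^4 = 16 responses of a two-level, four-parameter screening design into effect estimates and sums of squares. The sketch below applies the standard sums-and-differences passes to an invented response column listed in Yates' standard order.

```python
import numpy as np

# Effect labels in Yates' standard order for a 2^4 factorial; responses are
# invented and must be listed in the same order.
order = ["(1)", "A", "B", "AB", "C", "AC", "BC", "ABC",
         "D", "AD", "BD", "ABD", "CD", "ACD", "BCD", "ABCD"]
y = np.array([12.1, 14.3, 11.8, 15.2, 12.5, 14.9, 12.0, 15.8,
              13.0, 15.1, 12.4, 16.0, 13.2, 15.5, 12.9, 16.4])

n = 4
col = y.copy()
for _ in range(n):                       # n passes of sums and differences
    pairs = col.reshape(-1, 2)
    col = np.concatenate([pairs.sum(axis=1), pairs[:, 1] - pairs[:, 0]])

effects = col / 2 ** (n - 1)             # row "(1)" holds the grand total / 8
ss = col ** 2 / 2 ** n                   # sum of squares for each effect
for lab, eff, s in zip(order, effects, ss):
    print(f"{lab:>5}: effect = {eff:7.3f}, SS = {s:7.3f}")
```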

  13. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  14. Design and Experimental Study on Spinning Solid Rocket Motor

    NASA Astrophysics Data System (ADS)

    Xue, Heng; Jiang, Chunlan; Wang, Zaicheng

    A study of a spinning solid rocket motor (SRM) used as the power plant of the twice-throwing structure of an aerial submunition is introduced. This kind of SRM, which has a tangential multi-nozzle structure, consists of a combustion chamber, propellant charge, four tangential nozzles, an ignition device, etc. Grain design, structural design and prediction of the interior ballistic performance are described, and the problems that mainly need to be considered in the design are analyzed comprehensively. Finally, in order to investigate the working performance of the SRM and to measure its pressure-time curve and spin speed, static and dynamic tests were conducted. The calculated values and experimental data were then compared and analyzed. The results indicate that the designed motor operates normally and that its interior ballistic performance is stable and meets the requirements. The experimental results also provide guidance for the preliminary design of such SRMs.

  15. Experimental designs for mixtures of chemicals along fixed ratio rays.

    PubMed Central

    Meadows, Stephanie L; Gennings, Chris; Carter, W Hans; Bae, Dong-Soon

    2002-01-01

    Experimental design is important when studying mixtures/combinations of chemicals. The traditional approach for studying mixtures/combinations of multiple chemicals involves response surface methodology, often supported by factorial designs. Although such an approach permits the investigation of both the effects of individual chemicals and their interactions, the number of design points needed to study the chemical mixtures becomes prohibitive when the number of compounds increases. Fixed ratio ray designs have been developed to reduce the amount of experimental effort when interest can be restricted to a specific ray. We focus on the design and analysis issues involved in studying mixtures/combinations of compounds along fixed ratio rays of the compounds. To obtain the inference regarding the interactions among the compounds, we show that the only data required are those along the fixed ratio ray. PMID:12634128
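
    A fixed-ratio ray design of the kind discussed above keeps the mixing proportions of the chemicals constant and varies only the total dose, so each design point is simply the ratio vector scaled by a total dose. The short sketch below generates such points for a hypothetical three-chemical mixture; the proportions and dose levels are invented, and the subsequent dose-response modelling along the ray is not shown.

```python
import numpy as np

ratio = np.array([0.5, 0.3, 0.2])                     # fixed mixing proportions (sum to 1)
total_doses = np.array([0.0, 1.0, 2.5, 5.0, 10.0])    # total dose at each design point

design = np.outer(total_doses, ratio)                 # rows = points, cols = chemicals
for t, row in zip(total_doses, design):
    print(f"total dose {t:5.1f} -> per-chemical doses {np.round(row, 2)}")
```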

  16. Characterizing the Experimental Procedure in Science Laboratories: A Preliminary Step towards Students' Experimental Design

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire

    2012-01-01

    Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that having students design all or part of an entire experiment, as part of an inquiry-based approach, would overcome certain difficulties. This requires that a procedure be written for the experimental design. The aim of this paper is to…

  17. Phylogenetic information and experimental design in molecular systematics.

    PubMed Central

    Goldman, N

    1998-01-01

    Despite the widespread perception that evolutionary inference from molecular sequences is a statistical problem, there has been very little attention paid to questions of experimental design. Previous consideration of this topic has led to little more than an empirical folklore regarding the choice of suitable genes for analysis, and to dispute over the best choice of taxa for inclusion in data sets. I introduce what I believe are new methods that permit the quantification of phylogenetic information in a sequence alignment. The methods use likelihood calculations based on Markov-process models of nucleotide substitution allied with phylogenetic trees, and allow a general approach to optimal experimental design. Two examples are given, illustrating realistic problems in experimental design in molecular phylogenetics and suggesting more general conclusions about the choice of genomic regions, sequence lengths and taxa for evolutionary studies. PMID:9787470

  18. Symposium: experimental design for poultry production and genomics research.

    PubMed

    Pesti, Gene M; Aggrey, Samuel E; Fancher, Bryan I

    2013-09-01

    This symposium dealt with the theoretical and practical aspects of choosing and evaluating experimental designs, and how experimental results may be related to poultry production through modeling. Additionally, recent advances in techniques for generating high-throughput genomic sequencing data, genomic breeding values, genomics selection, and genome-wide association studies have provided unique computational challenges to the poultry industry. Such challenges were presented and discussed. PMID:23960133

  19. A comparison of controller designs for an experimental flexible structure

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Maghami, P. G.; Joshi, S. M.

    1991-01-01

    Control systems design and hardware testing are addressed for an experimental structure that displays the characteristics of a typical flexible spacecraft. The results of designing and implementing various control design methodologies are described. The design methodologies under investigation include linear quadratic Gaussian control, static and dynamic dissipative controls, and H-infinity optimal control. Among the three controllers considered, it is shown, through computer simulation and laboratory experiments on the evolutionary structure, that the dynamic dissipative controller gave the best results in terms of vibration suppression and robustness with respect to modeling errors.

  20. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiment and the wide dynamic range of transcription it measures make it an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity in experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER. PMID:27008024

  1. Optimization of delignification of two Pennisetum grass species by NaOH pretreatment using Taguchi and ANN statistical approach.

    PubMed

    Mohaptra, Sonali; Dash, Preeti Krishna; Behera, Sudhanshu Shekar; Thatoi, Hrudayanath

    2016-04-01

    In the bioconversion of lignocellulose to bioethanol, pretreatment appears to be the most important step: it improves the removal of lignin and hemicellulose, exposing cellulose to further hydrolysis. The present study discusses the application of dynamic statistical techniques such as the Taguchi method and an artificial neural network (ANN) in optimizing the pretreatment of lignocellulosic biomasses, namely Hybrid Napier grass (HNG) (Pennisetum purpureum) and Denanath grass (DG) (Pennisetum pedicellatum), using alkali (sodium hydroxide). Using the Taguchi method, the study determined, with a small number of experiments, a parameter combination at which both substrates can be efficiently pretreated. The optimized parameters obtained from the L16 orthogonal array are soaking time (18 and 26 h), temperature (60°C and 55°C), and alkali concentration (1%) for HNG and DG, respectively. High performance liquid chromatography analysis of the optimally pretreated grass varieties confirmed the presence of glucan (47.94% and 46.50%), xylan (9.35% and 7.95%), arabinan (2.15% and 2.2%), and galactan/mannan (1.44% and 1.52%) for HNG and DG, respectively. Physicochemical characterization of the native and alkali-pretreated grasses was carried out by scanning electron microscopy and Fourier transform infrared spectroscopy, which revealed some morphological differences between the native and optimally pretreated samples. Model validation by ANN showed good agreement between the experimental results and the predicted responses. PMID:26584152

  2. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and

  3. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a

  4. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483

  5. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  6. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors

  7. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  8. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  9. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  10. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and

  11. Design and experimental study of microcantilever ultrasonic detection transducers.

    PubMed

    Chen, Xuesheng; Stratoudaki, Theodosia; Sharples, Steve D; Clark, Matt

    2009-12-01

    This paper presents the analysis, design, and experimental study of a microcantilever optically-activated ultrasonic detection transducer. An analytical model was derived using 1-D cantilever structural dynamics, leading to the optimization of the transducer design. Finite element modeling enabled dynamic simulation to be performed, with results in good agreement with the analytical model. Transducers were fabricated using MEMS (microelectromechanical systems) techniques. Experimental results are presented on remote noncontact detection of ultrasound using the fabricated transducers; high SNR is achieved for the detected signals, even for relatively low ultrasonic amplitudes. Both analysis and experimental study show that the transducer has a sensitivity approximately 1 to 2 orders of magnitude higher than that of conventional optical detection techniques. Furthermore, we show that the dominant factor in the increased sensitivity of the transducer is the resonant nature of the finger structure. PMID:20040409

  12. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background: An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results: The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions: Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515
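
    A minimal sketch of the MOCU-based ranking idea described above: for each candidate experiment, the expected remaining MOCU is computed over a small discrete uncertainty class, and the experiment with the smallest value is run first. The networks, intervention costs, prior weights, and outcome partitions below are all hypothetical illustrations, not the paper's models or software.

    ```python
    import numpy as np

    # Hypothetical uncertainty class: 3 candidate networks with prior weights,
    # and a cost table cost[theta][psi] for 2 possible interventions.
    priors = np.array([0.5, 0.3, 0.2])
    cost = np.array([  # rows: networks, cols: interventions
        [1.0, 4.0],
        [3.0, 1.5],
        [2.5, 2.0],
    ])

    def mocu(p, c):
        """Mean objective cost of uncertainty for prior p and cost table c."""
        p = p / p.sum()
        robust = np.argmin(p @ c)          # best single intervention on average
        return float(p @ (c[:, robust] - c.min(axis=1)))

    # Each hypothetical experiment splits the networks by its binary outcome:
    # experiments[name] = outcome label of each network under that experiment.
    experiments = {"knock_out_gene_A": [0, 0, 1],
                   "knock_out_gene_B": [0, 1, 1]}

    for name, outcomes in experiments.items():
        remaining = 0.0
        for o in set(outcomes):
            mask = np.array(outcomes) == o
            remaining += priors[mask].sum() * mocu(priors[mask], cost[mask])
        print(f"{name}: expected remaining MOCU = {remaining:.3f}")
    # The experiment with the smallest expected remaining MOCU is run first.
    ```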

  13. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    Some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response are unified. The techniques have the potential for reducing the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Moreover, examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.

  14. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations to screen the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For the experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  15. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  16. Microradiometric characterization of experimental thermal pixel array designs

    NASA Astrophysics Data System (ADS)

    Marlin, H. Ronald; Bates, Richard L.; Offord, Bruce W.

    1999-07-01

    Advances in integrated circuit design and the micro-machining of silicon have enabled the fabrication of inexpensive, 2D arrays of resistively heated hot-plates, monolithically integrated with addressing and drive circuitry. Infrared scene simulators using these devices have a broad-band spectral radiance which approximates naturally occurring thermal radiation. Characterization of these devices involves near-field, far-field, temporal and electrical measurements. The devices characterized here are experimental SPAWARSYSCEN San Diego designs which test concepts for inexpensive fabrication, and a British Aerospace experimental hot-plate design for radiant uniformity improvement. Measurements reported here in the mid-band IR include effective temperature, radiance uniformity, temporal response, radiance distributions over single pixels, and effective fill factor. Also included are near-field and far-field measurements to characterize an add-on device for effective fill factor and efficiency improvements.

  17. Application of Taguchi technique coupled with grey relational analysis for multiple performance characteristics optimization of EDM parameters on ST 42 steel

    NASA Astrophysics Data System (ADS)

    Prayogo, Galang Sandy; Lusi, Nuraini

    2016-04-01

    An optimization technique for machining parameters considering multiple performance characteristics of the non-conventional EDM machining process, using the Taguchi method combined with grey relational analysis (GRA), is presented in this study. ST 42 steel was chosen as the workpiece material and graphite as the electrode during this experiment. Performance characteristics such as material removal rate and overcut were selected to evaluate the effect of the machining parameters. Current, pulse on time, pulse off time and discharging time/Z down were selected as machining parameters. The experiments were conducted by varying these machining parameters over three different levels. Based on the Taguchi quality design concept, an L27 orthogonal array table was chosen for the experiments. By using the combination of GRA and Taguchi, the optimization of complicated multiple performance characteristics was transformed into the optimization of a single response performance index. Optimal levels of the machining parameters were identified using the grey relational analysis method. The statistical technique of analysis of variance was used to determine the relatively significant machining parameters. The results of the confirmation test indicated that the determined optimal combination of machining parameters effectively improves the performance characteristics of EDM machining of ST 42 steel.
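
    A minimal sketch of the grey relational step used above: each response is normalized (larger-the-better for material removal rate, smaller-the-better for overcut), deviations from the ideal sequence are converted to grey relational coefficients, and their average gives a single grade per run. The response values below are invented for illustration and follow one common formulation of GRA, not necessarily the paper's exact variant.

    ```python
    import numpy as np

    # Illustrative responses for a few runs: material removal rate (maximize)
    # and overcut (minimize). These are made-up numbers, not the paper's data.
    mrr     = np.array([12.1, 15.4, 9.8, 14.2])
    overcut = np.array([0.21, 0.30, 0.18, 0.26])

    def normalize(x, larger_is_better):
        x = np.asarray(x, dtype=float)
        if larger_is_better:
            return (x - x.min()) / (x.max() - x.min())
        return (x.max() - x) / (x.max() - x.min())

    def grey_relational_coeff(x_norm, zeta=0.5):
        # Deviation from the ideal (reference) sequence, which is all ones.
        delta = 1.0 - x_norm
        return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    grc = np.vstack([grey_relational_coeff(normalize(mrr, True)),
                     grey_relational_coeff(normalize(overcut, False))])
    grade = grc.mean(axis=0)        # equal weights on the two responses
    print("grey relational grades:", np.round(grade, 3))
    print("best run index:", int(np.argmax(grade)))
    ```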

  18. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design

    PubMed Central

    YANG, YU; BAI, WENKUN; CHEN, YINI; LIN, YANDUAN; HU, BING

    2015-01-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximize the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: Sound intensity; frequency; duty cycle; microbubble volume; and irradiation time. The optimum conditions were found to be as follows: Sound intensity, 1.00 W/cm2; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3^6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency and low-intensity mediated blood perfusion blockage was obtained. PMID:26722279

  19. Development of a cell formation heuristic by considering realistic data using principal component analysis and Taguchi's method

    NASA Astrophysics Data System (ADS)

    Kumar, Shailendra; Sharma, Rajiv Kumar

    2015-12-01

    Over the last four decades, numerous cell formation algorithms have been developed and tested, yet this research remains of interest to this day. Appropriate manufacturing cell formation is the first step in designing a cellular manufacturing system. In cellular manufacturing, consideration of manufacturing flexibility and production-related data is vital for cell formation. Considering these realistic data makes the cell formation problem very complex and tedious, and has led to the invention and implementation of highly advanced and complex cell formation methods. In this paper an effort has been made to develop a simple and easy-to-understand/implement manufacturing cell formation heuristic procedure that considers a number of production- and manufacturing-flexibility-related parameters. The heuristic minimizes inter-cellular movement cost/time. Further, the proposed heuristic is modified for the application of principal component analysis and Taguchi's method. A numerical example is presented to illustrate the approach. A refinement in the results is observed with the adoption of principal component analysis and Taguchi's method.

  20. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods like the genetic algorithm (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.

  1. New Design of Control and Experimental System of Windy Flap

    NASA Astrophysics Data System (ADS)

    Yu, Shanen; Wang, Jiajun; Chen, Zhangping; Sun, Weihua

    Experiments associated with control principles for automation majors are generally based on MATLAB simulation and are not well integrated with physical control objects. The experimental system aims to meet teaching and study requirements and to provide an experimental platform for learning the principles of automatic control, MCUs, embedded systems, etc. The main research content comprises the design of the angle-measurement module, the control and drive module, and the PC software. An MPU6050 was used for angle measurement, a PID control algorithm was used to drive the flap to the target angle, and the PC software was used for display, analysis, and processing.
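
    The control loop described above can be illustrated with a minimal discrete PID sketch. The plant model, sampling period, and gains below are toy placeholders standing in for the MPU6050-based hardware and its tuning; a real system would read the angle from the sensor and write the drive signal to the actuator instead of updating a simulated state.

    ```python
    # Minimal discrete PID loop driving a toy first-order flap model toward a
    # target angle. Gains and plant constants are illustrative only.
    KP, KI, KD = 2.0, 0.5, 0.1
    DT = 0.02                      # control period, s

    def pid_step(error, state):
        integral, prev_error = state
        integral += error * DT
        derivative = (error - prev_error) / DT
        u = KP * error + KI * integral + KD * derivative
        return u, (integral, error)

    angle, target = 0.0, 30.0      # degrees
    state = (0.0, 0.0)
    for _ in range(500):
        u, state = pid_step(target - angle, state)
        # Toy plant: the flap angle responds to the drive signal with a simple lag.
        angle += DT * (0.8 * u - 0.2 * angle)
    print(f"angle after 10 s: {angle:.2f} deg (target {target} deg)")
    ```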

  2. Design and Implementation of an Experimental Segway Model

    NASA Astrophysics Data System (ADS)

    Younis, Wael; Abdelati, Mohammed

    2009-03-01

    The segway is the first transportation product to stand, balance, and move in the same way we do. It is a truly 21st-century idea. The aim of this research is to study the theory behind building segway vehicles based on the stabilization of an inverted pendulum. An experimental model has been designed and implemented through this study. The model has been tested for its balance by running a Proportional Derivative (PD) algorithm on a microprocessor chip. The model has been identified in order to serve as an educational experimental platform for segways.

  3. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  4. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

    This research presents the development of a design-of-experiments technique for quality improvement in the automotive manufacturing industry. The quality characteristic of interest is the colour shade, one of the key exterior appearance features of a vehicle. With a low first-time-quality percentage, the manufacturer has incurred high rework costs as well as longer production times. To resolve this problem permanently, the spraying conditions should be precisely optimized. Therefore, this work applies full factorial design, multiple regression, constrained response surface optimization methods (CRSOM), and Taguchi's method to investigate the significant factors and to determine the optimum factor levels in order to improve the quality of the paint shop. First, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solids content at spraying of the atomizing spraying machine. The response values of colour shade at 15 and 45 degrees were measured using a spectrophotometer. Then regression models of the colour shade at both angles were developed from the significant factors affecting each response. Consequently, both regression models were placed into the form of a linear program to maximize the colour shade subject to three main factors: the pigment, the additive solvent and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the target mean value of colour shade. The non-volatile solids content was found to be an additional significant factor at this stage. Consequently, the proper levels of all factors from both experimental design methods were used to set up a confirmation experiment. It was found that the colour shades, measured at both the 15 and 45 degree angles of the spectrophotometer, were close to the target, and the defect level at the quality gate was reduced from 0.35 WDPV to 0.10 WDPV. This shows that the objective of this research was met and that this procedure can be used as quality improvement guidance for the paint shop of an automotive plant.
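
    The "fit a regression model, then maximize it by linear programming" step can be sketched as below. The coefficients, factor names, units, and bounds are invented for illustration; additional process constraints would be passed to linprog via A_ub and b_ub.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical first-order regression model of colour shade at 15 degrees,
    # fitted from factorial data (coefficients invented for illustration):
    #   shade = b0 + b1*pigment + b2*solvent + b3*flow_rate
    b0, b = 70.0, np.array([1.8, -0.6, 0.9])

    # linprog minimizes, so maximize the predicted shade by negating the
    # coefficients; bounds are the allowed factor ranges (also invented).
    bounds = [(1.0, 3.0),     # pigment, %
              (2.0, 8.0),     # additive slow solvent, %
              (80.0, 120.0)]  # paint flow rate, cc/min
    res = linprog(c=-b, bounds=bounds, method="highs")
    best = res.x
    print("factor settings maximizing predicted shade:", best)
    print("predicted shade:", b0 + b @ best)
    ```

    With only bounds and a linear model, the optimum sits at a vertex of the factor region; the constrained response surface idea in the paper adds further inequality constraints on the same linear program.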

  5. Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using Taguchi method

    PubMed Central

    2014-01-01

    Dyes released into the environment pose a serious threat to natural ecosystems and aquatic life because they are stable under heat, light, chemical and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective use of P. ostreatus is meaningful for environmental protection and the maximum utilization of agricultural residues. Operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The Taguchi method was used to reduce the number of experiments needed to determine the significance of factors and the optimum levels of the experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered as an alternative biosorbent for the removal of dyes from aqueous solutions. PMID:24620852
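
    The Freundlich fit mentioned above is usually done on log-transformed data; a minimal sketch follows. The concentration and uptake values are invented and do not come from the paper.

    ```python
    import numpy as np

    # Illustrative equilibrium data (not the paper's): residual dye
    # concentration Ce (mg/L) and uptake qe (mg/g).
    Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
    qe = np.array([6.1, 10.2, 14.8, 21.5, 30.9])

    # Freundlich isotherm: qe = KF * Ce**(1/n)  ->  log qe = log KF + (1/n) log Ce
    slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
    KF, n = 10**intercept, 1.0 / slope
    print(f"KF = {KF:.2f} (mg/g)(L/mg)^(1/n),  n = {n:.2f}")

    # The pseudo-second-order kinetic model, t/qt = 1/(k2*qe^2) + t/qe, can be
    # fitted the same way from a linear regression of t/qt against t.
    ```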

  6. ITER (International Thermonuclear Experimental Reactor) reactor building design study

    SciTech Connect

    Thomson, S.L.; Blevins, J.D.; Delisle, M.W.; Canadian Fusion Fuels Technology Project, Mississauga, ON )

    1989-01-01

    The International Thermonuclear Experimental Reactor (ITER) is at the midpoint of a two-year conceptual design. The ITER reactor building is a reinforced concrete structure that houses the tokamak and associated equipment and systems and forms a barrier between the tokamak and the external environment. It provides radiation shielding and controls the release of radioactive materials to the environment during both routine operations and accidents. The building protects the tokamak from external events, such as earthquakes or aircraft strikes. The reactor building requirements have been developed from the component designs and the preliminary safety analysis. The equipment requirements, tritium confinement, and biological shielding have been studied. The building design in progress requires continuous interaction with the component and system designs and with the safety analysis. 8 figs.

  7. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter boom interferometer to operate in the FIR (30-90 micron) on a high altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  8. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  9. Optimal active vibration absorber - Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1993-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  10. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    SciTech Connect

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coil, the shielding, the cooling requirements, and the heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  11. Experimental Study on Abrasive Waterjet Polishing of Hydraulic Turbine Blades

    NASA Astrophysics Data System (ADS)

    Khakpour, H.; Birglenl, L.; Tahan, A.; Paquet, F.

    2014-03-01

    In this paper, an experimental investigation is implemented on the abrasive waterjet polishing technique to evaluate its capability in polishing of surfaces and edges of hydraulic turbine blades. For this, the properties of this method are studied and the main parameters affecting its performance are determined. Then, an experimental test-rig is designed, manufactured and tested to be used in this study. This test-rig can be used to polish linear and planar areas on the surface of the desired workpieces. Considering the number of parameters and their levels, the Taguchi method is used to design the preliminary experiments. All experiments are then implemented according to the Taguchi L18 orthogonal array. The signal-to-noise ratios obtained from the results of these experiments are used to determine the importance of the controlled polishing parameters on the final quality of the polished surface. The evaluations on these ratios reveal that the nozzle angle and the nozzle diameter have the most important impact on the results. The outcomes of these experiments can be used as a basis to design a more precise set of experiments in which the optimal values of each parameter can be estimated.

  12. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  13. Development of the Biological Experimental Design Concept Inventory (BEDCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non-expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non-expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  14. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  15. Validation of erythromycin microbiological assay using an alternative experimental design.

    PubMed

    Lourenço, Felipe Rebello; Kaneko, Telma Mary; Pinto, Terezinha de Jesus Andreoli

    2007-01-01

    The agar diffusion method, widely used in antibiotic dosage, relates the diameter of the inhibition zone to the dose of the substance assayed. An experimental plan is proposed that may provide better results and an indication of the assay validity. The symmetric or balanced assays (2 x 2) as well as those with interpolation in standard curve (5 x 1) are the main designs used in the dosage of antibiotics. This study proposes an alternative experimental design for erythromycin microbiological assay with the evaluation of the validation parameters of the method referring to linearity, precision, and accuracy. The design proposed (3 x 1) uses 3 doses of standard and 1 dose of sample applied in a unique plate, aggregating the characteristics of the 2 x 2 and 5 x 1 assays. The method was validated for erythromycin microbiological assay through agar diffusion, revealing its adequacy to linearity, precision, and accuracy standards. Likewise, the statistical methods used demonstrated their accordance with the method concerning the parameters evaluated. The 3 x 1 design proved to be adequate for the dosage of erythromycin and thus a good alternative for erythromycin assay. PMID:17760348

  16. Laser spark plug numerical design process with experimental validation

    SciTech Connect

    McIntyre, D.; Woodruff, S.

    2011-01-01

    This work reports the numerical modeling design procedure for a miniaturized laser spark plug. In previous work, both side-pumped and end-pumped laser spark plugs were empirically designed and tested. Experimental data from the previous laser spark plug development cycles are compared to the output predicted by a known set of rate equations. The rate equations are used to develop interrelated, intracavity, time-dependent waveforms that are then used to identify key variables. These variables are then input to a set of secondary equations for determining the output pulse energy, output power, and output pulse width of the simulated laser system. The physical meaning and the operation of the rate equations are explained in detail. This paper concentrates on the process and decision points needed to successfully design a solid state passively Q-switched laser system, either side pumped or end pumped, that produces the appropriate output needed for use as a laser spark plug for internal combustion engines.

  17. A Hierarchical Adaptive Approach to Optimal Experimental Design

    PubMed Central

    Kim, Woojae; Pitt, Mark A.; Lu, Zhong-Lin; Steyvers, Mark; Myung, Jay I.

    2014-01-01

    Experimentation is at the core of research in the behavioral and neural sciences, yet observations can be expensive and time-consuming to acquire (e.g., MRI scans, responses from infant participants). A major interest of researchers is designing experiments that lead to maximal accumulation of information about the phenomenon under study with the fewest possible number of observations. In addressing this challenge, statisticians have developed adaptive design optimization methods. This letter introduces a hierarchical Bayes extension of adaptive design optimization that provides a judicious way to exploit two complementary schemes of inference (with past and future data) to achieve even greater accuracy and efficiency in information gain. We demonstrate the method in a simulation experiment in the field of visual perception. PMID:25149697

  18. Parametric optimization for tumour identification: bioheat equation using ANOVA and the Taguchi method.

    PubMed

    Sudharsan, N M; Ng, E Y

    2000-01-01

    Breast cancer is the number one killer disease among women. It is known that early detection of a tumour ensures better prognosis and a higher survival rate. In this paper an intelligent, inexpensive and non-invasive diagnostic tool is developed for aiding breast cancer detection objectively. This tool is based on thermographic scanning of the breast surface in conjunction with numerical simulation of the breast using the bioheat equation. Medical applications of thermographic scanning make use of the skin temperature as an indication of an underlying pathological process. The thermal pattern over a breast tumour reflects the vascular reaction to the abnormality, hence an abnormal temperature pattern may be an indicator of an underlying tumour. Seven important parameters are identified and analysis of variance (ANOVA) is performed using a 2^n design (n = number of parameters, here 7). The effect and importance of the various parameters are analysed. Based on the above 2^7 design, the Taguchi method is used to optimize the parameters in order to ensure that the signal from the tumour is maximized relative to the noise from the other factors. The model predicts that the ideal setting for capturing the signal from the tumour is when the patient is at basal metabolic activity, with a correspondingly lower subcutaneous perfusion, in a low-temperature environment. PMID:11109858
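
    A minimal sketch of how main effects are ranked in a two-level full factorial study of this kind: build the 2^7 design, evaluate a response for each run, and take the difference of level means for each factor. The simulate() function below is a random stand-in for the bioheat simulation, and the effect sizes are invented.

    ```python
    import itertools
    import numpy as np

    # Hypothetical stand-in for the bioheat simulation: returns a "tumour signal"
    # for a given setting of 7 two-level factors coded as -1/+1.
    rng = np.random.default_rng(0)
    true_effects = np.array([0.8, 0.1, 0.05, 0.6, 0.02, 0.3, 0.01])

    def simulate(x):
        return 1.0 + true_effects @ x + rng.normal(scale=0.05)

    # Full 2^7 design: every combination of the 7 factor levels.
    design = np.array(list(itertools.product([-1, 1], repeat=7)))
    y = np.array([simulate(x) for x in design])

    # Main effect of each factor = mean response at +1 minus mean at -1.
    effects = np.array([y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
                        for j in range(7)])
    for rank, j in enumerate(sorted(range(7), key=lambda k: -abs(effects[k])), 1):
        print(f"rank {rank}: factor {j + 1}, effect = {effects[j]:+.3f}")
    ```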

  19. Optimization of microchannel heat sink using genetic algorithm and Taguchi method

    NASA Astrophysics Data System (ADS)

    Singh, Bhanu Pratap; Garg, Harry; Lall, Arun K.

    2016-04-01

    Active cooling using microchannels is a challenging area. The optimization and miniaturization of devices are increasing heat loads and affecting the operating performance of systems. Microchannel-based cooling systems are widely used and overcome most of the limitations of existing solutions. Microchannels help in reducing dimensions and therefore find many important applications in the microfluidics domain. Microchannel performance is related to the geometry, material and flow conditions. Optimized selection of the controllable parameters is a key issue when designing a microchannel-based cooling system. The proposed work presents a simulation-based study according to a Taguchi design of experiments, with Reynolds number, aspect ratio and plenum length as input parameters, to determine the S/N ratio. The objective of this study is to maximize the heat transfer. Mathematical models based on these parameters were developed, which help in global optimization using a genetic algorithm. The genetic algorithm was further employed to optimize the input parameters and generate global solution points for the proposed work. It was concluded that the optimized values of the heat transfer coefficient and Nusselt number were 2620.888 W/m2K and 3.4708, compared to the values obtained through the S/N-ratio-based parametric study, i.e. 2601.3687 W/m2K and 3.447, respectively. Hence errors of 0.744% and 0.68% were detected in the heat transfer coefficient and Nusselt number, respectively.
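
    The genetic-algorithm step can be sketched with a small real-coded GA maximizing a response-surface model. The quadratic model, parameter bounds, and GA settings below are invented placeholders for the fitted model and tuning used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up quadratic response surface for the heat transfer coefficient as a
    # function of (Reynolds number, aspect ratio, plenum length); this stands in
    # for the regression model fitted from the Taguchi runs.
    def h_model(x):
        re, ar, pl = x
        return (2000 + 0.4 * re - 2e-5 * re**2
                + 300 * ar - 80 * ar**2
                - 5 * (pl - 4.0) ** 2)

    lo = np.array([500.0, 0.5, 1.0])
    hi = np.array([5000.0, 3.0, 8.0])

    def ga_maximize(f, lo, hi, pop_size=40, generations=60, mut_sigma=0.1):
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        for _ in range(generations):
            fitness = np.array([f(ind) for ind in pop])
            # Tournament selection of parents.
            idx = rng.integers(0, pop_size, size=(pop_size, 2))
            parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]],
                                   idx[:, 0], idx[:, 1])]
            # Uniform crossover and Gaussian mutation, clipped to the bounds.
            mates = parents[rng.permutation(pop_size)]
            mask = rng.random(pop.shape) < 0.5
            children = np.where(mask, parents, mates)
            children += rng.normal(scale=mut_sigma * (hi - lo), size=pop.shape)
            pop = np.clip(children, lo, hi)
        fitness = np.array([f(ind) for ind in pop])
        return pop[np.argmax(fitness)], fitness.max()

    best_x, best_h = ga_maximize(h_model, lo, hi)
    print("best (Re, aspect ratio, plenum length):", np.round(best_x, 3))
    print("predicted heat transfer coefficient:", round(best_h, 1))
    ```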

  20. Optimization of catalyst formation conditions for synthesis of carbon nanotubes using Taguchi method

    NASA Astrophysics Data System (ADS)

    Pander, Adam; Hatta, Akimitsu; Furuta, Hiroshi

    2016-05-01

    The growth of carbon nanotubes (CNTs) suffers from many difficulties in finding optimum growth parameters, in reproducibility and in mass production, related to the large number of parameters influencing the synthesis process. Choosing the proper parameters can be a time-consuming process, and still may not give the optimal growth values. One of the possible solutions to decrease the number of experiments is to apply optimization methods to the design of the experiment parameter matrix. In this work, the Taguchi method of designing experiments is applied to optimize the formation of the iron catalyst during the annealing process by analyzing the average roughness and size of the particles. The annealing parameters were: annealing time (tAN), hydrogen flow rate (fH2), temperature (TAN) and argon flow rate (fAr). Plots of signal-to-noise ratios showed that temperature and annealing time have the highest impact on the final results of the experiment. For a more detailed study of the influence of the parameters, the interaction plots of the tested parameters were analyzed. For the final evaluation, CNT forests were grown on silicon substrates with an AlOX/Fe catalyst by the thermal chemical vapor deposition method. Based on the obtained results, the average diameter of the CNTs was decreased by 67%, from 9.1 nm (multi-walled CNTs) to 3.0 nm (single-walled CNTs).

  1. Computational design and experimental verification of a symmetric protein homodimer.

    PubMed

    Mou, Yun; Huang, Po-Ssu; Hsu, Fang-Ciao; Huang, Shing-Jong; Mayo, Stephen L

    2015-08-25

    Homodimers are the most common type of protein assembly in nature and have distinct features compared with heterodimers and higher order oligomers. Understanding homodimer interactions at the atomic level is critical both for elucidating their biological mechanisms of action and for accurate modeling of complexes of unknown structure. Computation-based design of novel protein-protein interfaces can serve as a bottom-up method to further our understanding of protein interactions. Previous studies have demonstrated that the de novo design of homodimers can be achieved to atomic-level accuracy by β-strand assembly or through metal-mediated interactions. Here, we report the design and experimental characterization of an α-helix-mediated homodimer with C2 symmetry based on a monomeric Drosophila engrailed homeodomain scaffold. A solution NMR structure shows that the homodimer exhibits parallel helical packing similar to the design model. Because the mutations leading to dimer formation resulted in poor thermostability of the system, design success was facilitated by the introduction of independent thermostabilizing mutations into the scaffold. This two-step design approach, function and stabilization, is likely to be generally applicable, especially if the desired scaffold is of low thermostability. PMID:26269568

  2. Computational design and experimental verification of a symmetric protein homodimer

    PubMed Central

    Mou, Yun; Huang, Po-Ssu; Hsu, Fang-Ciao; Huang, Shing-Jong; Mayo, Stephen L.

    2015-01-01

    Homodimers are the most common type of protein assembly in nature and have distinct features compared with heterodimers and higher order oligomers. Understanding homodimer interactions at the atomic level is critical both for elucidating their biological mechanisms of action and for accurate modeling of complexes of unknown structure. Computation-based design of novel protein–protein interfaces can serve as a bottom-up method to further our understanding of protein interactions. Previous studies have demonstrated that the de novo design of homodimers can be achieved to atomic-level accuracy by β-strand assembly or through metal-mediated interactions. Here, we report the design and experimental characterization of an α-helix–mediated homodimer with C2 symmetry based on a monomeric Drosophila engrailed homeodomain scaffold. A solution NMR structure shows that the homodimer exhibits parallel helical packing similar to the design model. Because the mutations leading to dimer formation resulted in poor thermostability of the system, design success was facilitated by the introduction of independent thermostabilizing mutations into the scaffold. This two-step design approach, function and stabilization, is likely to be generally applicable, especially if the desired scaffold is of low thermostability. PMID:26269568

  3. The Concept of Fashion Design on the Basis of Color Coordination Using White LED Lighting

    NASA Astrophysics Data System (ADS)

    Mizutani, Yumiko; Taguchi, Tsunemasa

    This thesis focuses on the development of fashion design, especially a dress coordinated with white LED lighting (LED). As for the design concept, a fusion of advanced science and local culture was aimed for; for this reason the development is a very experimental one. In particular, I handled an Imperial Court dinner dress for the then Japanese First Lady, Mrs. Akie Abe, who wore it at the Imperial Court dinner for the Indonesian First Couple held in November 2006. This dress, made by Prof. T. Taguchi and me, opens up a new field in dress design.

  4. Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.

    1992-01-01

    An aerobrake concept for a Lunar transfer vehicle was weight optimized through the use of the Taguchi design method, structural finite element analyses and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter to depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum weight aerobrake configuration resulting from the study was approx. half the weight of the average of all twenty seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.

  5. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
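
    The core entropy criterion can be sketched as follows: for each candidate experiment, form the outcome distribution predicted by the current set of probable models and score it by its Shannon entropy, then prefer the experiment with the highest score. The models, weights, outcome binning, and noise level below are toy illustrations, and a brute-force grid scan is used here rather than the paper's nested entropy sampling.

    ```python
    import numpy as np

    # Toy setting: three candidate models with current posterior weights, and a
    # grid of candidate experiments x. Each model predicts a binned outcome
    # distribution p(outcome | model, x); here a simple Gaussian stand-in.
    weights = np.array([0.5, 0.3, 0.2])
    slopes = np.array([0.5, 1.0, 2.0])
    xs = np.linspace(0.0, 3.0, 31)
    bins = np.linspace(0.0, 6.0, 13)          # outcome bins

    def predictive(x):
        """Mixture of the models' predicted outcome distributions at x."""
        centers = 0.5 * (bins[:-1] + bins[1:])
        p = np.zeros(len(centers))
        for w, s in zip(weights, slopes):
            lik = np.exp(-0.5 * ((centers - s * x) / 0.4) ** 2)
            p += w * lik / lik.sum()
        return p / p.sum()

    def shannon_entropy(p):
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    entropies = [shannon_entropy(predictive(x)) for x in xs]
    best = xs[int(np.argmax(entropies))]
    print(f"most informative experiment (max predictive entropy): x = {best:.2f}")
    ```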

  6. Amplified energy harvester from footsteps: design, modeling, and experimental analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Chen, Wusi; Guzman, Plinio; Zuo, Lei

    2014-04-01

    This paper presents the design, modeling and experimental analysis of an amplified footstep energy harvester. With the unique design of an amplified piezoelectric stack harvester, the kinetic energy generated by footsteps can be effectively captured and converted into usable DC power that could potentially be used to power many electric devices, such as smart phones, sensors, monitoring cameras, etc. This doormat-like energy harvester can be used in crowded places such as train stations, malls, concerts, airport escalator/elevator/stairs entrances, or anywhere a large group of people walk. The harvested energy provides an alternative source of renewable green power to offset the power requirement from grids, which run on highly polluting and global-warming-inducing fossil fuels. In this paper, two modeling approaches are compared to calculate the power output. The first method is derived from the single degree of freedom (SDOF) constitutive equations, after which a correction factor is applied to the resulting electromechanically coupled equations of motion. The second approach is to derive the coupled equations of motion with Hamilton's principle and the constitutive equations, and then formulate them with the finite element method (FEM). Experimental testing results are presented to validate the modeling approaches. Simulation results from both approaches agree very well with the experimental results, with percentage errors of 2.09% for FEM and 4.31% for SDOF.

  7. Optimizing Multi Characterstics in Machining of AISI 4340 Steel Using Taguchi's Approach and Utility Concept

    NASA Astrophysics Data System (ADS)

    Gupta, Munish Kumar; Sood, Pardeep Kumar

    2016-01-01

    This paper aims to develop a multi-response optimization technique to predict and select the optimal setting of machining parameters when machining AISI 4340 steel, using the utility concept. The experimental studies were carried out under varying conditions of process parameters, such as cutting speed (v), feed (f) and different cooling conditions (i.e. dry, wet and cryogenic, in which liquid nitrogen is used as a coolant), using an uncoated tungsten carbide insert tool. Experiments were carried out as per Taguchi's L9 orthogonal array, and multi-response optimization with the utility concept was performed for minimization of the specific cutting force (Ks) and surface roughness (Ra). Further, statistical analysis of variance (ANOVA) and analysis of means (ANOM) were used to determine the effect of the process parameters on the responses Ks and Ra, based on their P values and F values at the 95% confidence level. The optimization results showed that a cutting speed of 57 m/min, a feed of 0.248 mm/min and cryogenic cooling are required to minimize Ks and Ra.
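
    The utility concept folds the two smaller-the-better responses into one score; a minimal sketch is given below. The preference-number scale, the just-acceptable and best values, the equal weights, and the trial responses are all invented for illustration and are not the paper's data.

    ```python
    import numpy as np

    def preference(x, x_worst, x_best):
        """Map a smaller-the-better response onto a 0-9 preference scale.

        P = A * log10(x / x_worst), with A chosen so that P = 9 at x_best.
        (One common form of the utility concept; values are illustrative.)
        """
        A = 9.0 / np.log10(x_best / x_worst)
        return A * np.log10(x / x_worst)

    # Invented responses for three trial conditions: specific cutting force Ks
    # (N/mm2) and surface roughness Ra (um), both smaller-the-better.
    runs = {"dry": (2900.0, 1.60), "wet": (2750.0, 1.35), "cryogenic": (2500.0, 0.95)}
    ks_worst, ks_best = 3000.0, 2400.0
    ra_worst, ra_best = 1.80, 0.90
    w_ks, w_ra = 0.5, 0.5                      # equal weighting of the responses

    for name, (ks, ra) in runs.items():
        utility = (w_ks * preference(ks, ks_worst, ks_best)
                   + w_ra * preference(ra, ra_worst, ra_best))
        print(f"{name}: overall utility = {utility:.2f}")
    # The condition with the highest overall utility is preferred.
    ```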

  8. Fatigue of NiTi SMA–pulley system using Taguchi and ANOVA

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Jaronie; Leary, Martin; Subic, Aleksandar

    2016-05-01

    Shape memory alloy (SMA) actuators can be integrated with a pulley system to provide mechanical advantage and to reduce packaging space; however, there appears to be no formal investigation of the effect of a pulley system on SMA structural or functional fatigue. In this work, cyclic testing was conducted on nickel–titanium (NiTi) SMA actuators on a pulley system and in a control experiment (without a pulley). Both structural and functional fatigue were monitored until fracture, or until a maximum of 1E5 cycles was reached for each experimental condition. The Taguchi method and analysis of variance (ANOVA) were used to optimise the SMA–pulley system configurations. In general, one-way ANOVA at the 95% confidence level showed no significant difference between the structural or functional fatigue of SMA–pulley actuators and SMA actuators without a pulley. Within the sample of SMA–pulley actuators, the effect of activation duration had the greatest significance for both structural and functional fatigue, and the pulley configuration (angle of wrap and sheave diameter) had a greater statistical significance than load magnitude for functional fatigue. This work identified that the structural and functional fatigue performance of SMA–pulley systems is optimised by maximising sheave diameter and using an intermediate wrap angle, with minimal load and activation duration. However, these parameters may not be compatible with commercial imperatives. A test was completed for a commercially optimal SMA–pulley configuration. This novel observation will be applicable to many areas of SMA–pulley system applications development.
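
    The one-way ANOVA comparison reported above is the kind of test scipy exposes directly; a minimal sketch follows. The cycles-to-failure values are invented and do not reproduce the study's data.

    ```python
    from scipy.stats import f_oneway

    # Invented cycles-to-failure for actuators with and without the pulley.
    with_pulley    = [41200, 38750, 45010, 39900, 43300]
    without_pulley = [42100, 40300, 44800, 38600, 41900]

    f_stat, p_value = f_oneway(with_pulley, without_pulley)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
    if p_value > 0.05:
        print("No significant difference at the 95% confidence level.")
    else:
        print("Significant difference at the 95% confidence level.")
    ```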

  9. Optimization of the ASPN Process to Bright Nitriding of Woodworking Tools Using the Taguchi Approach

    NASA Astrophysics Data System (ADS)

    Walkowicz, J.; Staśkiewicz, J.; Szafirowicz, K.; Jakrzewski, D.; Grzesiak, G.; Stępniak, M.

    2013-02-01

    The subject of this research is the optimization of the parameters of the active screen plasma nitriding (ASPN) process for high speed steel planing knives used in woodworking. The Taguchi approach was applied to develop the plan of experiments and to elaborate the obtained experimental results. The optimized ASPN parameters were: process duration, composition and pressure of the gaseous atmosphere, the substrate bias voltage and the substrate temperature. The results of the optimization procedure were verified by the tools' behavior in the sharpening operation performed under normal industrial conditions. The ASPN technology proved to be extremely suitable for nitriding woodworking planing tools, which, because of their specific geometry, in particular their extremely sharp wedge angles, could not be successfully nitrided using the conventional direct-current plasma nitriding method. The research carried out proved that the values of the fracture toughness coefficient KIc correlate with the maximum spalling depths of the cutting edge measured after sharpening, and may therefore be used as a measure of the quality of the nitrided planing knives. Based on this criterion, the optimum parameters of the ASPN process for nitriding high speed planing knives were determined.

  10. Spent Fuel Transportation Package Performance Study - Experimental Design Challenges

    SciTech Connect

    Snyder, A. M.; Murphy, A. J.; Sprung, J. L.; Ammerman, D. J.; Lopez, C.

    2003-02-25

    Numerous studies of spent nuclear fuel transportation accident risks have been performed since the late seventies that considered shipping container design and performance. Based in part on these studies, NRC has concluded that the level of protection provided by spent nuclear fuel transportation package designs under accident conditions is adequate. [1] Furthermore, actual spent nuclear fuel transport experience showcases a safety record that is exceptional and unparalleled when compared to other hazardous materials transportation shipments. There has never been a known or suspected release of the radioactive contents from an NRC-certified spent nuclear fuel cask as a result of a transportation accident. In 1999 the United States Nuclear Regulatory Commission (NRC) initiated a study, the Package Performance Study, to demonstrate the performance of spent fuel and spent fuel packages during severe transportation accidents. NRC is not studying or testing its current regulations, as the rigorous regulatory accident conditions specified in 10 CFR Part 71 are adequate to ensure safe packaging and use. As part of this study, NRC currently plans on using detailed modeling followed by experimental testing to increase public confidence in the safety of spent nuclear fuel shipments. One of the aspects of this confirmatory research study is the commitment to solicit and consider public comment during the scoping phase and experimental design planning phase of this research.

  11. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement a novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  12. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulations of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  13. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  14. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  15. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of an experimental design multivariate strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percent drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximization of the selected responses required opposite levels of the considered factors: For example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, the Doehlert design, applied for the response-surface study of the quantitative factors, pointed out a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data of the optimized formulation were inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug anaesthetic effect with respect to the starting reference liposomal formulation, thus demonstrating its better therapeutic effectiveness. PMID:18569447
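
    The abstract reports a D-optimal design for the qualitative factors followed by a Doehlert design for the quantitative ones; the sketch below illustrates only the D-optimality idea, choosing by brute force the subset of candidate runs that maximizes det(X'X) of an assumed model matrix. The factor coding, candidate set, model and run budget are invented for illustration and are not those of the study.

      import itertools
      import numpy as np

      # Candidate runs: two qualitative factors coded +/-1
      # (e.g. lipid: KG vs cholesterol; surfactant: stearylamine vs dicethylphosphate),
      # replicated so that repeat runs are allowed.
      levels = [-1.0, 1.0]
      candidates = np.array(list(itertools.product(levels, levels)) * 3)

      def model_matrix(runs):
          """Intercept + main effects + two-factor interaction."""
          x1, x2 = runs[:, 0], runs[:, 1]
          return np.column_stack([np.ones(len(runs)), x1, x2, x1 * x2])

      n_runs = 6  # assumed experimental budget
      best_det, best_subset = -np.inf, None
      for subset in itertools.combinations(range(len(candidates)), n_runs):
          X = model_matrix(candidates[list(subset)])
          d = np.linalg.det(X.T @ X)  # D-optimality criterion
          if d > best_det:
              best_det, best_subset = d, subset

      print("best |X'X| =", best_det)
      print("selected runs:\n", candidates[list(best_subset)])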

  16. Design considerations for ITER (International Thermonuclear Experimental Reactor) magnet systems

    SciTech Connect

    Henning, C.D.; Miller, J.R.

    1988-10-09

    The International Thermonuclear Experimental Reactor (ITER) is now completing a definition phase as a beginning of a three-year design effort. Preliminary parameters for the superconducting magnet system have been established to guide further and more detailed design work. Radiation tolerance of the superconductors and insulators has been of prime importance, since it sets requirements for the neutron-shield dimension and sensitively influences reactor size. The major levels of mechanical stress in the structure appear in the cases of the inboard legs of the toroidal-field (TF) coils. The cases of the poloidal-field (PF) coils must be made thin or segmented to minimize eddy current heating during inductive plasma operation. As a result, the winding packs of both the TF and PF coils include significant fractions of steel. The TF winding pack provides support against in-plane separating loads but offers little support against out-of-plane loads, unless shear-bonding of the conductors can be maintained. The removal of heat due to nuclear and ac loads has not been a fundamental limit to design, but certainly has non-negligible economic consequences. We present here preliminary ITER magnet system design parameters taken from trade studies, designs, and analyses performed by the Home Teams of the four ITER participants, by the ITER Magnet Design Unit in Garching, and by other participants at workshops organized by the Magnet Design Unit. The work presented here reflects the efforts of many, but the responsibility for the opinions expressed is the authors'. 4 refs., 3 figs., 4 tabs.

  17. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2015-11-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rest on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  18. Development of prilling process for biodegradable microspheres through experimental designs.

    PubMed

    Fabien, Violet; Minh-Quan, Le; Michelle, Sergent; Guillaume, Bastiat; Van-Thanh, Tran; Marie-Claire, Venier-Julienne

    2016-02-10

    The prilling process offers a microparticle formulation that is easily transferable to pharmaceutical production, leading to monodisperse and highly controllable microspheres. PLGA microspheres were used to carry an encapsulated protein and to support adhered stem cells on their surface, providing a tool for regeneration therapy of injured tissue. This work focused on the development of the production of PLGA microspheres by the prilling process without toxic solvents. The required production quality called for a complete optimization of the process. Seventeen parameters were studied through experimental designs, leading to an acceptable production. The key parameters and mechanisms of formation were highlighted. PMID:26656302

  19. Assessing accuracy of measurements for a Wingate Test using the Taguchi method.

    PubMed

    Franklin, Kathryn L; Gordon, Rae S; Davies, Bruce; Baker, Julien S

    2008-01-01

    The purpose of this study was to establish the effects of four variables on the results obtained for a Wingate Anaerobic Test (WAnT). The study used a 30 second WAnT and compared data collected and analysed in different ways in order to draw conclusions as to the relative importance of the variables on the results. Data were collected simultaneously by a commercially available software correction system manufactured by Cranlea Ltd. (Birmingham, England) and by an alternative method of data collection involving direct measurement of the flywheel velocity and the brake force. Data were compared using a design of experiments technique, the Taguchi method. Four variables were examined - flywheel speed, braking force, moment of inertia of the flywheel, and the time interval over which the work and power were calculated. The choice of time interval was identified as the most influential variable on the results. While the other factors have an influence on the results, decreasing the time interval over which the data are averaged gave a 9.8% increase in work done, a 40.75% increase in peak power and a 13.1% increase in mean power. PMID:18373285
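
    The abstract singles out the averaging time interval as the most influential variable; the toy calculation below shows how peak power computed from directly measured flywheel velocity and brake force grows as the averaging window shrinks. The signal shapes, sampling rate and window lengths are invented and are not the study's data.

      import numpy as np

      # Hypothetical 30 s test sampled at 10 Hz: flywheel speed (m/s) and brake force (N).
      dt = 0.1
      t = np.arange(0.0, 30.0, dt)
      velocity = 10.0 * np.exp(-t / 25.0) + 0.3 * np.sin(6.0 * t)  # decaying speed profile
      force = np.full_like(t, 45.0)                                # constant brake force
      power = force * velocity                                     # instantaneous power (W)

      def peak_power(power, dt, window_s):
          """Peak of the power averaged over consecutive windows of length window_s."""
          n = int(round(window_s / dt))
          trimmed = power[: len(power) // n * n].reshape(-1, n)
          return trimmed.mean(axis=1).max()

      for window in (5.0, 1.0, 0.5):
          print(f"{window:>3.1f} s window: peak power = {peak_power(power, dt, window):6.1f} W")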

  20. Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA

    NASA Astrophysics Data System (ADS)

    Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu

    2015-04-01

    The laser brazing (LB) process is widely used in the automotive industry owing to its high speed, small heat affected zone, high weld seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a crimping butt joint of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted with a Taguchi L25 orthogonal array following the statistical design method. The input parameters are welding speed, wire feed rate, and gap, each at 5 levels. The output responses are the effective connection lengths of the left and right sides and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are used to train the BPNN and establish the relationship between the input and output variables. The predictions of the BPNN are then fed to the GA, which optimizes the process parameters subject to the objectives. The effects of welding speed (WS), wire feed rate (WF), and gap (GAP) on the combined bead-geometry responses are discussed. Finally, confirmation experiments are carried out to demonstrate that the optimal values are effective and reliable. On the whole, the proposed hybrid method, BPNN-GA, can be used to guide the actual work and improve the efficiency and stability of the LB process.
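
    A minimal sketch of the two-stage surrogate-plus-optimizer idea described above: a small back propagation network is fitted to (welding speed, wire feed rate, gap) data and a simple real-coded genetic algorithm then searches the fitted surrogate for the best parameters. The training table, objective, network size, bounds and GA settings are all placeholders rather than the paper's values.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Placeholder experimental table: [welding speed, wire feed rate, gap] -> bead-geometry score.
      X = rng.uniform([2.0, 2.0, 0.0], [4.0, 4.0, 0.4], size=(25, 3))   # 25 runs, as in an L25 array
      y = -(X[:, 0] - 3.0) ** 2 - (X[:, 1] - 3.5) ** 2 - 5.0 * X[:, 2] + rng.normal(0, 0.05, 25)

      # Back-propagation neural network surrogate.
      bpnn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

      # Simple real-coded genetic algorithm over the parameter bounds.
      lo, hi = np.array([2.0, 2.0, 0.0]), np.array([4.0, 4.0, 0.4])
      pop = rng.uniform(lo, hi, size=(40, 3))
      for _ in range(60):
          fitness = bpnn.predict(pop)
          parents = pop[np.argsort(fitness)[-20:]]                       # selection: keep best half
          mates = parents[rng.integers(0, 20, size=(20, 2))]
          children = mates.mean(axis=1)                                  # crossover: blend parents
          children += rng.normal(0.0, 0.05, children.shape) * (hi - lo)  # mutation
          pop = np.clip(np.vstack([parents, children]), lo, hi)

      best = pop[np.argmax(bpnn.predict(pop))]
      print("GA-optimized parameters (speed, wire feed, gap):", np.round(best, 3))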

  1. Use of Flow and Transport Models for Experimental Design for Model Calibration and Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Yeh, W.

    2002-12-01

    Groundwater flow and contaminant transport in the subsurface are governed by partial differential equations. With appropriate initial and boundary conditions specified, the governing equations are solved by either the finite-difference or the finite-element method. Groundwater models can be used for prediction as well as for guiding field data collection. This paper reviews the use of such models for experimental design for model calibration and monitoring network design for plume characterization. In general, experimental design concerns the selection of a set of experimental conditions, including data collection strategies, such that the information collected will minimize either the parameter uncertainty in the parameter space or the prediction uncertainty in the prediction space. The minimization is subject to a set of constraints, most importantly, the budgetary constraint. Estimating the uncertainty in either the parameter or the prediction space requires the use of the flow and transport models to derive the covariance matrix of the model parameters. To improve the efficiency and reliability of a remediation design, the spread of a contaminant plume in time and space must be predicted by the flow and transport models and monitored accurately. Experimental design techniques have been applied to construct groundwater quality monitoring networks that maximize plume characterization while minimizing the construction and sampling costs.
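
    As a toy illustration of the covariance-based design criterion sketched in this review, the snippet below linearizes a model around candidate observation locations (a random Jacobian stands in for sensitivities that would come from a flow and transport model) and picks the subset of wells that minimizes the trace of the approximate parameter covariance (an A-optimality criterion). The data, the criterion choice and the budget are assumptions for illustration only.

      import itertools
      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical sensitivity (Jacobian) matrix: rows = 8 candidate observation wells,
      # columns = 3 model parameters (e.g. hydraulic conductivities of 3 zones).
      J = rng.normal(size=(8, 3))
      sigma2 = 0.5 ** 2          # assumed measurement error variance
      budget = 4                 # number of wells we can afford to sample

      best_trace, best_wells = np.inf, None
      for wells in itertools.combinations(range(8), budget):
          Js = J[list(wells)]
          cov = sigma2 * np.linalg.inv(Js.T @ Js)   # linearized parameter covariance
          if np.trace(cov) < best_trace:            # A-optimality: minimize summed variances
              best_trace, best_wells = np.trace(cov), wells

      print("selected wells:", best_wells, " trace of covariance:", round(best_trace, 3))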

  2. fMRI reliability: influences of task and experimental design.

    PubMed

    Bennett, Craig M; Miller, Michael B

    2013-12-01

    As scientists, it is imperative that we understand not only the power of our research tools to yield results, but also their ability to obtain similar results over time. This study is an investigation into how common decisions made during the design and analysis of a functional magnetic resonance imaging (fMRI) study can influence the reliability of the statistical results. To that end, we gathered back-to-back test-retest fMRI data during an experiment involving multiple cognitive tasks (episodic recognition and two-back working memory) and multiple fMRI experimental designs (block, event-related genetic sequence, and event-related m-sequence). Using these data, we were able to investigate the relative influences of task, design, statistical contrast (task vs. rest, target vs. nontarget), and statistical thresholding (unthresholded, thresholded) on fMRI reliability, as measured by the intraclass correlation (ICC) coefficient. We also utilized data from a second study to investigate test-retest reliability after an extended, six-month interval. We found that all of the factors above were statistically significant, but that they had varying levels of influence on the observed ICC values. We also found that these factors could interact, increasing or decreasing the relative reliability of certain Task × Design combinations. The results suggest that fMRI reliability is a complex construct whose value may be increased or decreased by specific combinations of factors. PMID:23934630
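
    Assuming the usual two-way ANOVA formulation of the intraclass correlation (the abstract does not state which ICC variant was computed), the sketch below evaluates a consistency ICC(3,1) from a subjects-by-sessions matrix of summary statistics; the data are random placeholders.

      import numpy as np

      def icc_3_1(Y):
          """Consistency ICC(3,1) from an n-subjects x k-sessions matrix (two-way mixed ANOVA)."""
          n, k = Y.shape
          grand = Y.mean()
          ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()     # between subjects
          ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()     # between sessions
          ss_total = ((Y - grand) ** 2).sum()
          ss_err = ss_total - ss_rows - ss_cols
          ms_rows = ss_rows / (n - 1)
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

      rng = np.random.default_rng(0)
      subject_effect = rng.normal(0.0, 1.0, size=(20, 1))          # stable between-subject differences
      Y = subject_effect + rng.normal(0.0, 0.5, size=(20, 2))      # 20 subjects scanned twice
      print("ICC(3,1) =", round(icc_3_1(Y), 3))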

  3. Experimental Vertical Stability Studies for ITER Performance and Design Guidance

    SciTech Connect

    Humphreys, D A; Casper, T A; Eidietis, N; Ferrera, M; Gates, D A; Hutchinson, I H; Jackson, G L; Kolemen, E; Leuer, J A; Lister, J; LoDestro, L L; Meyer, W H; Pearlstein, L D; Sartori, F; Walker, M L; Welander, A S; Wolfe, S M

    2008-10-13

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modeling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses some of the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be very severe. The present work describes results of multi-machine studies performed under a joint ITPA experiment on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV, and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement which can challenge the vertical control loop are assessed and analyzed.

  4. Skin Microbiome Surveys Are Strongly Influenced by Experimental Design.

    PubMed

    Meisel, Jacquelyn S; Hannigan, Geoffrey D; Tyldsley, Amanda S; SanMiguel, Adam J; Hodkinson, Brendan P; Zheng, Qi; Grice, Elizabeth A

    2016-05-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e., gastrointestinal) and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource and cost intensive, provides evidence of a community's functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This study highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039

  5. Experimental vertical stability studies for ITER performance and design guidance

    NASA Astrophysics Data System (ADS)

    Humphreys, D. A.; Casper, T. A.; Eidietis, N.; Ferrara, M.; Gates, D. A.; Hutchinson, I. H.; Jackson, G. L.; Kolemen, E.; Leuer, J. A.; Lister, J.; Lo Destro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Portone, A.; Sartori, F.; Walker, M. L.; Welander, A. S.; Wolfe, S. M.

    2009-11-01

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modelling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be severe. This work describes results of multi-machine studies performed under a joint ITPA experiment (MDC-13) on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement and several common disturbances which can challenge the vertical control loop are assessed and analysed.

  6. Cutting the wires: modularization of cellular networks for experimental design.

    PubMed

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  7. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated”. The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence for detecting contamination or clearing uncontaminated (or decontaminated) areas. Following sample collection for a given test event, the INL building will be decontaminated using Cl2O gas. For possibly contaminated areas (individual rooms or the whole floor of a building), the numbers of probabilistic samples were chosen to provide 95% confidence of detecting contaminated areas of specified sizes. The numbers of judgmental samples were chosen based on guidance from experts in judgmental sampling. For rooms that may be uncontaminated following a contamination event, or for whole floors after decontamination, the numbers of judgmental and probabilistic samples were chosen using a Bayesian approach that provides for combining judgmental and probabilistic samples to make a clearance statement of the form “95% confidence that at least 99% of the room (or floor) is not contaminated”. The experimental design also provides for making 95%/Y% clearance statements using only probabilistic samples, where Y < 99. For each test event, the numbers of samples were selected for a minimal plan (containing fewer samples) and a preferred plan (containing more samples). The preferred plan is recommended over the minimal plan. The preferred plan specifies a total of 1452 samples, 912 after contamination and 540 after decontamination. The minimal plan specifies a total of 1119 samples, 744 after contamination and 375 after decontamination.
If the advantages of the “after decontamination” portion of the preferred plan are judged to be small compared to the “after decontamination” portion of the minimal plan, it is an option to combine the “after contamination” portion of the preferred plan (912 samples) with the “after decontamination” portion of the minimal plan (375 samples). This hybrid plan would involve a total of 1287 samples.
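
    The quoted clearance statements correspond, in their simplest form, to a standard acceptance-sampling calculation. Assuming simple random sampling, a perfectly sensitive assay and no prior (judgmental) information, so this is not the report's full Bayesian method, the number of all-negative probabilistic samples needed for an "X% confidence that at least Y% is not contaminated" statement can be sketched as:

      import math

      def clearance_samples(confidence, clean_fraction):
          """Smallest n such that n negative random samples give `confidence` that at least
          `clean_fraction` of the area is uncontaminated (binomial argument: if more than
          1 - clean_fraction were contaminated, P(all n negative) < clean_fraction**n)."""
          return math.ceil(math.log(1.0 - confidence) / math.log(clean_fraction))

      print(clearance_samples(0.95, 0.99))   # 299 samples for a 95%/99% statement
      print(clearance_samples(0.95, 0.90))   # 29 samples for a 95%/90% statement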

  8. An application of the Taguchi method to the development of a supplementary power source for the hybrid bicycle

    SciTech Connect

    Yamamoto, Hiroshi; Katsuoka, Tatsuzo; Igarashi, Nihaku; Koyama, Hiroyuki

    1995-12-31

    Yamaha Motor has developed and marketed a hybrid bicycle with an electric supplemental power source which generates assist power in proportion to the rider's pedal torque. The key function required for this assist power control system is that the variation of the assist ratio should be as small as possible over a wide range of riding conditions. The assist power control system consists of mechanical and electrical components and requires a very tight quality control of each component if the design is to be robust to disturbances such as pedal torque or vehicle speed. The authors applied the Taguchi method to this development and succeeded in selecting the optimum combination of component levels in the system.

  9. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  10. Protein design algorithms predict viable resistance to an experimental antifolate

    PubMed Central

    Reeve, Stephanie M.; Gainza, Pablo; Frey, Kathleen M.; Georgiev, Ivelin; Donald, Bruce R.; Anderson, Amy C.

    2015-01-01

    Methods to accurately predict potential drug target mutations in response to early-stage leads could drive the design of more resilient first generation drug candidates. In this study, a structure-based protein design algorithm (K* in the OSPREY suite) was used to prospectively identify single-nucleotide polymorphisms that confer resistance to an experimental inhibitor effective against dihydrofolate reductase (DHFR) from Staphylococcus aureus. Four of the top-ranked mutations in DHFR were found to be catalytically competent and resistant to the inhibitor. Selection of resistant bacteria in vitro reveals that two of the predicted mutations arise in the background of a compensatory mutation. Using enzyme kinetics, microbiology, and crystal structures of the complexes, we determined the fitness of the mutant enzymes and strains, the structural basis of resistance, and the compensatory relationship of the mutations. To our knowledge, this work illustrates the first application of protein design algorithms to prospectively predict viable resistance mutations that arise in bacteria under antibiotic pressure. PMID:25552560

  11. Comparing simulated emission from molecular clouds using experimental design

    SciTech Connect

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik; Offner, Stella

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.

  12. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulations of novel TBC materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  13. Simulation-based optimal Bayesian experimental design for nonlinear systems

    SciTech Connect

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
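
    A minimal nested Monte Carlo estimator of the expected information gain for a toy nonlinear model, to illustrate the kind of objective function the paper optimizes; the polynomial chaos surrogate and stochastic approximation machinery are not reproduced, and the model, prior and noise level are invented.

      import numpy as np

      rng = np.random.default_rng(0)

      def forward(theta, d):
          """Toy nonlinear model: observation mean as a function of parameter and design."""
          return np.exp(-d * theta) + d * theta ** 2

      def expected_information_gain(d, n_outer=2000, n_inner=2000, sigma=0.05):
          """Nested Monte Carlo estimate of EIG(d) = E_{theta,y}[log p(y|theta,d) - log p(y|d)]."""
          theta_out = rng.normal(0.5, 0.2, n_outer)              # prior draws
          y = forward(theta_out, d) + rng.normal(0.0, sigma, n_outer)
          log_like = -0.5 * ((y - forward(theta_out, d)) / sigma) ** 2
          theta_in = rng.normal(0.5, 0.2, n_inner)               # fresh prior draws for the evidence
          diff = y[:, None] - forward(theta_in[None, :], d)
          log_evidence = np.log(np.exp(-0.5 * (diff / sigma) ** 2).mean(axis=1))
          return (log_like - log_evidence).mean()                # Gaussian normalizing constants cancel

      for d in (0.2, 1.0, 3.0):
          print(f"design d = {d}: estimated EIG = {expected_information_gain(d):.3f}")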

  14. Bearing diagnosis based on Mahalanobis-Taguchi-Gram-Schmidt method

    NASA Astrophysics Data System (ADS)

    Shakya, Piyush; Kulkarni, Makarand S.; Darpe, Ashish K.

    2015-02-01

    A methodology is developed for defect type identification in rolling element bearings using the integrated Mahalanobis-Taguchi-Gram-Schmidt (MTGS) method. Vibration data recorded from bearings with seeded defects on the outer race, inner race and balls are processed in the time, frequency, and time-frequency domains. Eleven damage identification parameters (RMS, peak, crest factor, and kurtosis in the time domain; amplitudes of the outer race, inner race, and ball defect frequencies in the FFT and HFRT spectra in the frequency domain; and the peak of the HHT spectrum in the time-frequency domain) are computed. Using MTGS, these damage identification parameters (DIPs) are fused into a single DIP, the Mahalanobis distance (MD), and gain values for the presence of all DIPs are calculated. The gain value is used to identify the usefulness of each DIP, and the DIPs with positive gain are again fused into an MD using the Gram-Schmidt orthogonalization process (GSP) in order to calculate Gram-Schmidt vectors (GSVs). Among the remaining DIPs, the sign of the GSVs of the frequency domain DIPs is checked to classify the probable defect. The approach thus uses the MTGS method for combining the damage parameters and, in conjunction with the GSVs, classifies the defect. A Defect Occurrence Index (DOI) is proposed to rank the probability of existence of a type of bearing damage (ball defect/inner race defect/outer race defect/other anomalies). The methodology is successfully validated on vibration data from a different machine, bearing type and shape/configuration of the defect. The proposed methodology is also applied to the vibration data acquired from accelerated life tests on bearings, which established the applicability of the method to naturally induced and naturally progressed defects. It is observed that the methodology successfully identifies the correct type of bearing defect. The proposed methodology is also useful in identifying the time of initiation of a defect and has potential for implementation in a real time environment.
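
    A sketch of the first fusion step described above: several damage identification parameters are collapsed into a single Mahalanobis distance computed against a healthy-bearing reference set (the gain calculation, Gram-Schmidt orthogonalization and Defect Occurrence Index are not reproduced). The reference statistics and test vectors are random placeholders, and the division by the number of DIPs follows common Mahalanobis-Taguchi practice.

      import numpy as np

      rng = np.random.default_rng(0)

      # Reference ("healthy") observations of the damage identification parameters (DIPs):
      # rows = vibration records, columns = DIPs (RMS, peak, crest factor, kurtosis, ...).
      healthy = rng.normal(loc=[1.0, 3.0, 3.0, 3.0], scale=[0.1, 0.3, 0.2, 0.3], size=(50, 4))

      mean = healthy.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

      def mahalanobis_distance(x):
          """Scaled Mahalanobis distance of a new DIP vector from the healthy reference space."""
          d = x - mean
          return float(d @ cov_inv @ d) / len(mean)   # scaled by the number of DIPs

      print("healthy-like record:", round(mahalanobis_distance(np.array([1.05, 3.1, 2.9, 3.2])), 2))
      print("defect-like record :", round(mahalanobis_distance(np.array([2.50, 9.0, 4.5, 8.0])), 2))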

  15. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    NASA Astrophysics Data System (ADS)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing conclusion is to merge both worlds. We believe that a tabletop form-factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  16. Experimental Reality: Principles for the Design of Augmented Environments

    NASA Astrophysics Data System (ADS)

    Lahlou, Saadi

    The Laboratory of Design for Cognition at EDF R&D (LDC) is a living laboratory, which we created to develop Augmented Environment (AE) for collaborative work, more specifically “cognitive work” (white collars, engineers, office workers). It is a corporate laboratory in a large industry, where natural activity of real users is observed in a continuous manner in various spaces (project space, meeting room, lounge, etc.) The RAO room, an augmented meeting room, is used daily for “normal” meetings; it is also the “mother room” of all augmented meeting rooms in the company, where new systems, services, and devices are tested. The LDC has gathered a unique set of data on the use of AE, and developed various observation and design techniques, described in this chapter. LDC uses novel techniques of digital ethnography, some of which were invented there (SubCam, offsat) and some of which were developed elsewhere and adapted (360° video, WebDiver, etc.). At LDC, some new theories have also been developed to explain behavior and guide innovation: cognitive attractors, experimental reality, and the triple-determination framework.

  17. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations provide the first definitive look at the charging behavior of Orion UltraFlex arrays in both the low Earth orbit (LEO) and geosynchronous (GEO) environments. Note that the LEO charging environment also applies to the International Space Station (ISS). The GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior, to answer some of the basic performance and survivability questions, and to ascertain whether a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  18. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers the two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse around 1500 sec. The heat loading on the walls of the calculated thrusters was also estimated and is comparable to the heat loading in conventional chemical rockets. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 sec due to the finite chemical reaction rate.

  19. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  20. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  1. Validation of a buffet meal design in an experimental restaurant.

    PubMed

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B. They skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by weighing the foods and by video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found a good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79) and for meal mechanics parameters. Total energy, lipid and carbohydrate intake were higher in FAST than in A and B. CEI was found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes. PMID:22349779

  2. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGESBeta

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
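
    As an illustration of the run economy highlighted above, the sketch below builds an eight-run, two-level Plackett-Burman-type design from a Sylvester Hadamard matrix, assigns six factors plus a dummy column (mirroring the structure, but not the actual factors or levels, of the study), and estimates main effects as contrast averages on a placeholder response.

      import numpy as np

      # Sylvester construction of an 8 x 8 Hadamard matrix.
      H2 = np.array([[1, 1], [1, -1]])
      H8 = np.kron(np.kron(H2, H2), H2)

      # Drop the all-ones column: the remaining 7 columns form an 8-run, 2-level
      # Plackett-Burman-type design; use 6 for real factors and 1 as a dummy.
      design = H8[:, 1:]
      factor_names = ["mix speed", "mix time", "matrix ratio", "sphere density",
                      "sphere loading", "sphere blend", "dummy"]

      rng = np.random.default_rng(0)
      # Placeholder response (e.g. cured-foam density) driven mostly by sphere loading.
      response = 0.60 - 0.05 * design[:, 4] + rng.normal(0.0, 0.005, 8)

      # Main effect of each column = mean response at +1 minus mean response at -1.
      effects = design.T @ response / 4.0
      for name, eff in zip(factor_names, effects):
          print(f"{name:>14}: effect = {eff:+.4f}")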

  3. Optimal experimental design with the sigma point method.

    PubMed

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

    Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters also possess an uncertainty, expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by iteratively minimising the parameter variances. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. The application of this traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which has a direct benefit on OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model. PMID:19154081
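
    A minimal sketch of the sigma point (unscented transform) idea the authors advocate: a small, deterministically chosen set of parameter vectors is propagated through a nonlinear model and the output mean and variance are recovered from weighted samples instead of a local linearization. The model, parameter statistics and the choice of the standard unscented weights are assumptions; the paper's full OED loop is not reproduced.

      import numpy as np

      def sigma_points(mean, cov, kappa=1.0):
          """Standard unscented-transform sigma points and weights for an n-dim Gaussian."""
          n = len(mean)
          S = np.linalg.cholesky((n + kappa) * cov)
          pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
          w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
          w[0] = kappa / (n + kappa)
          return np.array(pts), w

      def model(theta, t=2.0):
          """Toy model that is nonlinear in its parameters, e.g. first-order decay A*exp(-k*t)."""
          A, k = theta
          return A * np.exp(-k * t)

      mean = np.array([1.0, 0.3])
      cov = np.diag([0.05 ** 2, 0.02 ** 2])

      pts, w = sigma_points(mean, cov, kappa=1.0)
      y = np.array([model(p) for p in pts])
      y_mean = w @ y
      y_var = w @ (y - y_mean) ** 2
      print(f"sigma-point output mean = {y_mean:.4f}, variance = {y_var:.6f}")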

  4. Plackett-Burman experimental design to facilitate syntactic foam development

    SciTech Connect

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.

  5. Experimental Designs for Testing Differences in Survival Among Salmonid Populations.

    SciTech Connect

    Hoffman, Annette; Busack, Craig; Knudsen, Craig

    1994-11-01

    The Yakima Fisheries Project (YFP) is a supplementation plan for enhancing salmon runs in the Yakima River basin. It is presumed that inadequate spawning and rearing habitat are limiting factors for the population abundance of spring chinook salmon (Oncorhynchus tshawytscha). Therefore, the supplementation effort for spring chinook salmon is focused on introducing hatchery-raised smolts into the basin to compensate for the lack of spawning habitat. However, based on empirical evidence in the Yakima basin, hatchery-reared salmon have survived poorly compared to wild salmon. Therefore, the YFP has proposed to alter the optimal conventional treatment (OCT), which is the state-of-the-art hatchery rearing method, to a new innovative treatment (NIT). The NIT is intended to produce hatchery fish that mimic wild fish and thereby to enhance their survival over that of OCT fish. A limited application of the NIT (LNIT) has also been proposed to reduce the cost of applying the new treatment, yet retain the benefits of increased survival. This research was conducted to test whether the uncertainty associated with the experimental design was within the limits specified by the Planning Status Report (PSR).

  6. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  7. Experimental Design on Laminated Veneer Lumber Fiber Composite: Surface Enhancement

    NASA Astrophysics Data System (ADS)

    Meekum, U.; Mingmongkol, Y.

    2010-06-01

    Thick laminated veneer lumber (LVL) fibre-reinforced composites were constructed from alternating, perpendicularly arrayed layers of peeled rubber wood. Glass woven fabric was laid in between the layers. Native golden teak veneers were used as faces. An in-house formulated epoxy was employed as the wood adhesive. The hand lay-up laminate was cured at 150 °C for 45 min. The cut specimens were post-cured at 80 °C for at least 5 hours. A 2^k factorial design of experiments (DOE) was used to verify the parameters. Three parameters, namely the silane content in the epoxy formulation (A), the smoke treatment of the rubber wood surface (B) and the anti-termite application (C) on the wood surface, were analysed. Both low and high levels were further subcategorised into 2 sub-levels. Flexural properties were the main response obtained. An ANOVA analysis with a Pareto chart was performed, and the main effect plot was also examined. The results showed that the interaction between silane quantity and anti-termite treatment has a negative effect at the high level (AC+). Conversely, the interaction between silane and smoke treatment has a significant positive effect at the high level (AB+). According to this research work, the optimal settings to improve the surface adhesion, and hence enhance the flexural properties, were a high level of silane quantity (15% by weight), a high level of smoked wood layers (8 out of 14 layers), and a low level of anti-termite application to the wood. Further tests also revealed that the LVL composite had properties superior to those of the solid woods, but slightly inferior flexibility. The screw withdrawal strength of the LVL was higher than that of solid wood, and the LVL also showed better resistance to moisture and termite attack than the rubber wood.
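
    A compact sketch of the effect estimation behind the Pareto chart and main-effect plot mentioned above: a full 2^3 design in coded units with main effects and interactions computed as contrast averages. The response values are placeholders, chosen only so that the AB interaction is positive and the AC interaction negative, echoing the reported signs.

      import itertools
      import numpy as np

      # Full 2^3 factorial in coded units: A = silane content, B = smoke treatment,
      # C = anti-termite application.
      runs = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
      A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]

      # Placeholder flexural-strength responses (MPa) for the eight runs.
      y = np.array([74.5, 80.5, 73.5, 79.5, 79.5, 77.5, 88.5, 86.5])

      contrasts = {"A": A, "B": B, "C": C, "AB": A * B, "AC": A * C, "BC": B * C, "ABC": A * B * C}
      for name, col in contrasts.items():
          effect = (col @ y) / 4.0  # mean response at +1 minus mean response at -1
          print(f"{name:>3}: effect = {effect:+.1f}")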

  8. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, especially because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental design, is described together with a lesser known approach that takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models of different complexity for sediment concentration in seawater and sediment accretion on salt marshes served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.
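
    Assuming the toolbox's estimator follows the standard weighted least squares formulation (the paper's exact notation is not reproduced here), the estimator and the linearized covariance approximation on which local design criteria such as D- or A-optimality act can be written as

      \hat{\theta} \;=\; \arg\min_{\theta} \sum_{i=1}^{n} w_i \bigl( y_i - f(x_i, \theta) \bigr)^2,
      \qquad
      \operatorname{Cov}\bigl(\hat{\theta}\bigr) \;\approx\; \bigl( J^{\top} W J \bigr)^{-1},
      \quad
      J_{ij} = \left. \frac{\partial f(x_i, \theta)}{\partial \theta_j} \right|_{\hat{\theta}},
      \quad
      W = \operatorname{diag}(w_1, \dots, w_n),\; w_i = \sigma_i^{-2}.

    The approximate covariance shrinks as measurements with larger sensitivities are added, which is exactly what an optimized experimental design exploits.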

  9. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned for causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  10. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be the largest in the Arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine if biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates per treatment is used to test the effect of biochar amendment and a 3 °C warming on the Arctic meadow. Ten circular split plots (diameter: 3.65 m, surface area: 10.5 m2), each composed of one half amended with biochar and one control half not amended, were prepared. Five of these plots are equipped with a warming system, while the other five are equipped with dummies. Each warmed plot is collocated with a control plot within one block. While the split plots are all oriented in the same direction, the position of the blocks is randomized to eliminate the effect of spatial variability. Biochar was incorporated in the first 20 cm of the soil with a rototiller. Warming is provided by hexagonal arrays of infrared heaters. The temperature of the plots is monitored with infrared cameras. The 3 °C temperature increase is obtained by dynamically monitoring the temperature difference between warmed and control plots within blocks via improved software. Each plot is further equipped with a soil temperature and moisture sensor.

  11. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Allowable design changes; modification of... Conditions of an Experimental Permit 437.85 Allowable design changes; modification of an experimental... make to the reusable suborbital rocket design without invalidating the permit. (b) Except for...

  12. Experimental verification of Space Platform battery discharger design optimization

    NASA Technical Reports Server (NTRS)

    Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.

    1991-01-01

    The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.

  13. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    Through unsustainable land use practices, mining, deforestation, urbanisation and degradation by industrial pollution, soil losses are now hypothesized to be much faster (100 times or more) than soil formation - with the consequence that soil has become a finite resource. The crucial challenge for the international research community is to understand the rates of processes that dictate soil mass stocks and their function within Earth's Critical Zone (CZ). The CZ is the environment where soils are formed, degrade and provide their essential ecosystem services. Key among these ecosystem services are food and fibre production, filtering, buffering and transformation of water, nutrients and contaminants, storage of carbon and maintaining biological habitat and genetic diversity. We have initiated a new research project to address the priority research areas identified in the European Union Soil Thematic Strategy and to contribute to the development of a global network of Critical Zone Observatories (CZO) committed to soil research. Our hypothesis is that the combined physical-chemical-biological structure of soil can be assessed from first-principles and the resulting soil functions can be quantified in process models that couple the formation and loss of soil stocks with descriptions of biodiversity and nutrient dynamics. The objectives of this research are to 1. Describe from 1st principles how soil structure influences processes and functions of soils, 2. Establish 4 European Critical Zone Observatories to link with established CZOs, 3. Develop a CZ Integrated Model of soil processes and function, 4. Create a GIS-based modelling framework to assess soil threats and mitigation at EU scale, 5. Quantify impacts of changing land use, climate and biodiversity on soil function and its value and 6. Form with international partners a global network of CZOs for soil research and deliver a programme of public outreach and research transfer on soil sustainability. The experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These are to provide data sets of soil parameters, processes and functions which will be incorporated into the mathematical models. The field sites are 1) the BigLink field station which is located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and is established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic which is representative of productive soils managed for intensive forestry, 3) the Fuchsenbigl Field Station in Austria which is an agricultural research site that is representative of productive soils managed as arable land and 4) the Koiliaris Catchment in Crete, Greece which represents degraded Mediterranean region soils, heavily impacted by centuries of intensive grazing and farming, under severe risk of desertification.

  14. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed experiment (a video…

  15. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed experiment (a video…

  16. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made effective sludge management an increasingly critical issue for the industry, owing to high landfill and transportation costs and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills because improved in-plant fiber recovery is coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter that is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to an economic dry solids level, so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of the mixed sludge. This makes biodrying more attractive than conventional drying techniques because the reactor is self-heating. The reactor is divided into four nominal compartments, and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, 2-3 times shorter than those achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely the type of biomass feed, pH of the biomass, nutrition level (C/N ratio), residence time, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. The key variables identified were the type of biomass feed and the outlet relative humidity profile. The biomass feed is mill-specific, and since only one mill was studied here, its nutrition level was found adequate for the microbial activity; the type of biomass was therefore treated as a fixed parameter. The influence of the outlet relative humidity profile on the overall performance and the complexity index of the continuous biodrying reactor was then investigated. The best biodrying efficiency was achieved with an outlet relative humidity profile that controls the removal of unbound water at the wet-bulb temperature in the 1st and 2nd compartments of the reactor, and the removal of bound water at the dry-bulb temperature in the 3rd and 4th compartments. Through a systematic modeling approach, a 2-D model was developed to describe the transport phenomena in the continuous biodrying reactor. The results of the 2-D model were in satisfactory agreement with the experimental data. It was found that about 30% w/w of the total water removal (drying rate) takes place in the 1st and 2nd compartments, mainly under a convection-dominated mechanism, whereas about 70% w/w takes place in the 3rd and 4th compartments, where a bioheat-diffusion-dominated mechanism controls the transport phenomena. Compared with a 1-D model, the 2-D model was found to be an appropriate tool for estimating the total water removal rate (drying rate) in the continuous biodrying reactor.
A dimensionless analysis was performed on the 2-D model and established the preliminary criteria for the scale-up of the continuous biodrying process. Finally, a techno-economic assessment of the continuous biodrying process revealed that there is great potential for the implementation of the biodrying process in Canadian pulp and paper mills. The techno-economic results were compared to the other competitive existing drying technologies. It was proven that the continuous biodrying process results in significant economic benefits and has great potential to address the current industrial problems associated with sludge management.

  17. Music and video iconicity: theory and experimental design.

    PubMed

    Kendall, Roger A

    2005-01-01

    Experimental studies on the relationship between quasi-musical patterns and visual movement have largely focused on either referential, associative aspects or syntactical, accent-oriented alignments. Both of these are very important; however, between the referential and the areferential lies a domain where visual pattern perceptually connects to musical pattern: this is iconicity. The temporal syntax of accent structures in iconicity is hypothesized to be important. Beyond that, a multidimensional visual space connects to musical patterning through mapping of visual time/space to musical time/magnitudes. Experimental visual and musical correlates are presented and comparisons to previous research provided. PMID:15684561

  18. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  19. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  20. Analysis of natural convective heat transfer of nano coated aluminium fins using Taguchi method

    NASA Astrophysics Data System (ADS)

    Senthilkumar, R.; Nandhakumar, A. J. D.; Prabhu, S.

    2013-01-01

    Rectangular aluminium fins were selected for analysis and coated with carbon nanotubes by physical vapour deposition (PVD) to enhance the heat transfer rate of the fins. Convective heat transfer rates for coated and uncoated surfaces were calculated and compared. The temperature and heat transfer characteristics were investigated using the Nusselt, Grashof, Prandtl and Rayleigh numbers and were also optimized by the Taguchi method and ANOVA. The average increase in fin efficiency is 5%.
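
    For orientation, the short calculation below shows how the dimensionless groups named above (Grashof, Rayleigh, Nusselt), the convective coefficient, and a rectangular-fin efficiency are typically evaluated for natural convection from a vertical fin in air. All dimensions, temperatures and property values are assumed for the example and are not taken from the study; Nu = 0.59 Ra^0.25 is the classical laminar vertical-plate correlation.

        import math

        g      = 9.81          # m/s^2
        T_s    = 75.0          # fin surface temperature, deg C (assumed)
        T_inf  = 25.0          # ambient temperature, deg C (assumed)
        L      = 0.10          # fin length / characteristic height, m (assumed)
        t, w   = 0.002, 0.05   # fin thickness and width, m (assumed)
        k_fin  = 200.0         # thermal conductivity of aluminium, W/m-K (approx.)

        # Air properties near the film temperature of ~50 deg C (approximate)
        nu, k_air, Pr = 1.8e-5, 0.028, 0.71
        beta = 1.0 / (0.5 * (T_s + T_inf) + 273.15)   # ideal-gas expansion coefficient

        Gr = g * beta * (T_s - T_inf) * L**3 / nu**2  # Grashof number
        Ra = Gr * Pr                                  # Rayleigh number
        Nu = 0.59 * Ra**0.25                          # laminar vertical-plate correlation
        h  = Nu * k_air / L                           # convective coefficient, W/m^2-K

        # Fin efficiency of a straight rectangular fin (corrected length)
        Lc  = L + t / 2.0
        m   = math.sqrt(h * (2 * (w + t)) / (k_fin * w * t))
        eta = math.tanh(m * Lc) / (m * Lc)

        print(f"Gr = {Gr:.3e}, Ra = {Ra:.3e}, Nu = {Nu:.1f}, h = {h:.1f} W/m^2-K")
        print(f"fin efficiency = {eta:.3f}")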

  1. Experimental design for research on shock-turbulence interaction

    NASA Technical Reports Server (NTRS)

    Radcliffe, S. W.

    1969-01-01

    Report investigates the production of acoustic waves in the interaction of a supersonic shock and a turbulence environment. The five stages of the investigation are apparatus design, development of instrumentation, preliminary experiment, turbulence generator selection, and main experiments.

  2. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  3. Experimental design: computer simulation for improving the precision of an experiment.

    PubMed

    van Wilgenburg, Henk; Zillesen, Piet G van Schaick; Krulichova, Iva

    2004-06-01

    ExpDesign, an interactive computer-assisted learning program developed for simulating animal experiments, is introduced. The program guides students through the steps for designing animal experiments and estimating optimal sample sizes. Principles are introduced for controlling variation, establishing the experimental unit, selecting randomised block and factorial experimental designs, and applying the appropriate statistical analysis. Sample Power is a supporting tool that visualises the process of estimating the sample size. The aim of developing the ExpDesign program has been to make biomedical research workers more familiar with some basic principles of experimental design and statistics and to facilitate discussions with statisticians. PMID:23581147
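
    The kind of sample-size estimate such a tool visualises can be reproduced with the usual normal approximation for two independent groups. The sketch below uses only the Python standard library; the effect sizes, alpha, and power targets are arbitrary examples, not values from the program.

        from statistics import NormalDist

        def n_per_group(d, alpha=0.05, power=0.80):
            """Approximate sample size per group for a two-sided, two-sample
            comparison, where d is the standardized effect size."""
            z_a = NormalDist().inv_cdf(1 - alpha / 2)
            z_b = NormalDist().inv_cdf(power)
            return 2 * (z_a + z_b) ** 2 / d ** 2

        for d in (0.5, 0.8, 1.2):
            print(f"effect size {d}: about {n_per_group(d):.0f} animals per group")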

  4. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765
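
    The factor-ranking step described here, i.e. using an orthogonal array and signal-to-noise ratios to order decision variables by the size of their effect, can be sketched generically as follows. The L9(3^4) array is the standard one; the response values are invented placeholders (for example, a water-quality improvement score per trial), not results from the study.

        import numpy as np

        L9 = np.array([  # levels (0, 1, 2) of factors A-D in the nine trials
            [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
            [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
            [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
        ])
        y = np.array([42., 55., 61., 48., 66., 58., 52., 63., 70.])  # placeholder responses

        sn = 20.0 * np.log10(y)  # larger-is-better S/N ratio for a single response per trial

        for j, name in enumerate("ABCD"):
            level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
            delta = max(level_means) - min(level_means)   # larger delta = stronger factor
            best = int(np.argmax(level_means))
            print(f"factor {name}: level means = {np.round(level_means, 2)}, "
                  f"delta = {delta:.2f}, best level = {best}")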

  5. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    PubMed Central

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-01-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765

  6. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-01

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model. PMID:26575545
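
    As a point of reference for the maximum-entropy argument invoked above, the single-average case (the earlier EDS setting) has the familiar minimal-bias form written below. This is a generic statement for orientation only, not the paper's derivation for matching entire free energy surfaces.

        % Minimal (maximum relative entropy) bias that matches one experimental
        % average <s> = s_exp, starting from an unbiased ensemble P_0(x):
        \begin{align}
          P'(x) \;\propto\; P_0(x)\, e^{-\beta \lambda\, s(x)},
          \qquad
          V_{\mathrm{bias}}(x) \;=\; \lambda\, s(x),
        \end{align}
        % with the single coupling \lambda tuned during the simulation until the
        % ensemble average of s(x) equals the experimental target s_exp.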

  7. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) block the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, faster moving fragments. We have experimentally demonstrated one shrapnel mitigation technique showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  8. Optimization of Physical Working Environment Setting to Improve Productivity and Minimize Error by Taguchi and VIKOR Methods

    NASA Astrophysics Data System (ADS)

    Ilma Rahmillah, Fety

    2016-01-01

    The working environment is one of the factors that contribute to worker performance, especially for continuous and monotonous work. An L9 Taguchi design was used for the inner (control) array of the experiment, which was carried out in a laboratory, and an L4 design was used for the outer (noise) array. Four control variables, each at three levels, were used to obtain the optimal combination of working-environment settings, and four responses were measured to assess the effects of the four control factors. ANOVA results show that the effects of illumination, temperature, and instrumental music on the number of outputs, the number of errors, and the rating of perceived discomfort are significant, with total variance explained of 54.67%, 60.67%, and 75.22%, respectively. The VIKOR method yields the optimal combination of experiment 66, with the setting condition A3-B2-C1-D3: illumination of 325-350 lux, temperature of 24-26°C, the fast category of instrumental music, and 70-80 dB for the intensity of the music being played.
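
    A generic VIKOR ranking over several responses, of the kind used above to pick the optimal setting, can be sketched as follows. The decision matrix, weights, and criterion directions are placeholders, not the study's measured data.

        import numpy as np

        F = np.array([          # rows = candidate settings, columns = responses
            [118.0, 6.0, 4.2],  # [output count, error count, discomfort rating]
            [125.0, 4.0, 3.8],
            [112.0, 7.0, 5.0],
            [130.0, 5.0, 4.5],
        ])
        w = np.array([0.4, 0.4, 0.2])              # criterion weights (assumed)
        benefit = np.array([True, False, False])   # maximize output; minimize the rest
        v = 0.5                                    # weight of the group-utility strategy

        f_best  = np.where(benefit, F.max(axis=0), F.min(axis=0))
        f_worst = np.where(benefit, F.min(axis=0), F.max(axis=0))

        d = w * (f_best - F) / (f_best - f_worst)  # weighted normalized distances
        S = d.sum(axis=1)                          # group utility
        R = d.max(axis=1)                          # individual regret
        Q = (v * (S - S.min()) / (S.max() - S.min())
             + (1 - v) * (R - R.min()) / (R.max() - R.min()))

        for rank, i in enumerate(np.argsort(Q), start=1):
            print(f"rank {rank}: alternative {i}, Q = {Q[i]:.3f}")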

  9. Teaching Simple Experimental Design to Undergraduates: Do Your Students Understand the Basics?

    ERIC Educational Resources Information Center

    Hiebert, Sara M.

    2007-01-01

    This article provides instructors with guidelines for teaching simple experimental design for the comparison of two treatment groups. Two designs with specific examples are discussed along with common misconceptions that undergraduate students typically bring to the experiment design process. Features of experiment design that maximize power and…

  10. Teaching Simple Experimental Design to Undergraduates: Do Your Students Understand the Basics?

    ERIC Educational Resources Information Center

    Hiebert, Sara M.

    2007-01-01

    This article provides instructors with guidelines for teaching simple experimental design for the comparison of two treatment groups. Two designs with specific examples are discussed along with common misconceptions that undergraduate students typically bring to the experiment design process. Features of experiment design that maximize power and…

  11. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The paper summarizes the results obtained in an exploratory evaluation of ceramics for automobile thermal reactors. Candidate ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting 1100 hours in engine dynamometer tests and more than 38,600 kilometers (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  12. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The results obtained in an exploratory evaluation of ceramics for automobile thermal reactors are summarized. Candidate ceramic materials were evaluated in several reactor designs by using both engine-dynamometer and vehicle road tests. Silicon carbide contained in a corrugated-metal support structure exhibited the best performance, lasting 1100 hr in engine-dynamometer tests and more than 38,600 km (24000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as those containing silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  13. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
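
    As a hedged illustration of the kind of cost-constrained allocation such a model addresses, consider a two-level (cluster-randomized) design with cost c1 per cluster, c2 per unit, intraclass correlation rho, and total budget B. The classical result n* = sqrt((c1/c2)(1 - rho)/rho) gives the variance-minimizing number of units per cluster; the sketch below applies it with made-up numbers and is not the authors' general model.

        import math

        c1, c2 = 300.0, 20.0   # cost per cluster and per unit within a cluster (assumed)
        rho    = 0.10          # intraclass correlation (assumed)
        B      = 20000.0       # total budget (assumed)

        n_opt = math.sqrt((c1 / c2) * (1 - rho) / rho)
        n = max(1, round(n_opt))                      # integral units per cluster
        J = int(B // (c1 + c2 * n))                   # number of clusters affordable

        print(f"optimal units per cluster ~ {n_opt:.1f} -> use n = {n}")
        print(f"clusters within budget: J = {J}, total units = {J * n}")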

  14. An Experimental Design For Summative Evaluation of Proprietary Reading Materials.

    ERIC Educational Resources Information Center

    Murray, James R.

    A summative evaluation design was developed as a framework for evaluating instructional materials in remedial reading. The paradigm includes the selection of (1) relevant variables for study and (2) the method of study. Two types of reading materials used in Chicago schools were studied--Cracking the Code (CTC) and the Mott Semi-Programmed Series…

  15. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  16. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    ERIC Educational Resources Information Center

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  17. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  18. EXPERIMENTAL DESIGN AND INSTRUMENTATION FOR A FIELD EXPERIMENT

    EPA Science Inventory

    This report concerns the design of a field experiment for a military setting in which the effects of carbon monoxide on neurobehavioral variables are to be studied. A field experiment is distinguished from a survey by the fact that independent variables are manipulated, just as in t...

  19. HEAO C-1 gamma-ray spectrometer. [experimental design

    NASA Technical Reports Server (NTRS)

    Mahoney, W. A.; Ling, J. C.; Willett, J. B.; Jacobson, A. S.

    1978-01-01

    The gamma-ray spectroscopy experiment to be launched on the third High Energy Astronomy Observatory (HEAO C) will perform a complete sky search for narrow gamma-ray line emission to the level of about 0.0001 photons/sq cm-sec for steady point sources. The design of this experiment and its performance based on testing and calibration to date are discussed.

  20. Applying Taguchi methods for solvent-assisted PMMA bonding technique for static and dynamic micro-TAS devices.

    PubMed

    Hsu, Yi-Chu; Chen, Tang-Yuan

    2007-08-01

    This work examines numerous significant process parameters in the solvent-assisted poly(methyl methacrylate) (PMMA) bonding scheme and presents two micro-total-analysis system (micro-TAS) devices fabricated using the optimal bonding parameters. The process parameters considered were heating temperature, applied loading, duration and solution. The effects of the selected process parameters on bonding dimension loss and bonding strength, and the subsequent optimal setting of the parameters, were determined using Taguchi's scheme. Additionally, two micro-TAS devices were realized: a static paraffin microvalve and a dynamic diffuser micropump. The PMMA chips were carved using a CO2 laser that patterned the device microchannels and microchambers. The operation principles, fabrication processes and experimental performance of the devices are discussed. This bonding technique has numerous benefits, including high bonding strength (240 kgf/cm2) and low dimension loss (2-6%). For comparison, this work also demonstrates that the normal stress of this technology is 2-15 times greater than that of other bonding technologies, including hot embossing, anodic bonding, direct bonding and thermal fusion bonding. PMID:17516175

  1. Experimental design for the simulation of combined-acting pollutants

    SciTech Connect

    Diehl, H.; Gaeting, J.H.; Habermann, D.

    1985-06-01

    A short review is given of concepts which are used to evaluate effects from combined-acting agents. To explore those effects exactly a high multiplicity of experiments must be performed, as an algebraic derivation from the effects of the single-acting agents is not usually possible. For practical purposes (as, for instance, to establish threshold limit values with respect to occupational health standards) a more pragmatic concept is proposed. Accordingly, an experimental setup has been developed which enables researchers to expose four groups of rodents simultaneously to two different single-acting chemical and/or physical stress parameters, both separately and in combination, and to control conditions.

  2. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
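
    For orientation, power for a two-arm cluster-randomized (two-level) design is often approximated by inflating the variance with the design effect 1 + (n - 1)*rho. The sketch below uses the normal approximation with invented inputs; it is an illustration of the idea, not a reproduction of the power tables discussed in the article.

        from statistics import NormalDist

        def power_cluster(delta, J_per_arm, n, rho, alpha=0.05):
            """delta: standardized effect size; J_per_arm: clusters per arm;
            n: units per cluster; rho: intraclass correlation."""
            deff = 1 + (n - 1) * rho
            se = (2 * deff / (J_per_arm * n)) ** 0.5   # SE of the standardized difference
            z_crit = NormalDist().inv_cdf(1 - alpha / 2)
            return 1 - NormalDist().cdf(z_crit - delta / se)

        print(f"power ~ {power_cluster(delta=0.30, J_per_arm=20, n=25, rho=0.05):.2f}")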

  3. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  4. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  5. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  6. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light-source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making this type of measurement. Attention is also given to ways in which in-water fluorescence measurements have revolutionized biogeochemical studies of CDOM and how those measurements can be used in conjunction with remotely sensed satellite data to understand better the biogeochemistry of DOM in aquatic environments.

  7. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  8. A new dietary model to study colorectal carcinogenesis: experimental design, food preparation, and experimental findings.

    PubMed

    Rozen, P; Liberman, V; Lubin, F; Angel, S; Owen, R; Trostler, N; Shkolnik, T; Kritchevsky, D

    1996-01-01

    Experimental dietary studies of human colorectal carcinogenesis are usually based on the AIN-76A diet, which is dissimilar to human food in source, preparation, and content. The aims of this study were to examine the feasibility of preparing and feeding rats the diet of a specific human population at risk for colorectal neoplasia and to determine whether changes in the colonic morphology and metabolic contents would differ from those resulting from a standard rat diet. The mean daily food intake composition of a previously evaluated adenoma patient case-control study was used for the "human adenoma" (HA) experimental diet. Foods were prepared as for usual human consumption and processed by dehydration to the physical characteristics of an animal diet. Sixty-four female Sprague-Dawley rats were randomized and fed ad libitum the HA or the AIN-76A diet. Every eight weeks, eight rats from each group were sacrificed, and the colons and contents were examined. Analysis of the prepared food showed no significant deleterious changes; food intake and weight gain were similar in both groups. Compared with the controls, the colonic contents of rats fed the HA diet contained significantly less calcium, concentrations of neutral sterols, total lipids, and cholic and deoxycholic acids were increased, and there were no colonic histological changes other than significant epithelial hyperproliferation. This initial study demonstrated that the HA diet can be successfully processed for feeding to experimental animals and is acceptable and adequate for growth but induces significant metabolic and hyperproliferative changes in the rat colon. This dietary model may be useful for studies of human food, narrowing the gap between animal experimentation and human nutritional research. PMID:8837864

  9. Shock-driven mixing: Experimental design and initial conditions

    NASA Astrophysics Data System (ADS)

    Friedman, Gavin; Prestridge, Katherine; Mejia-Alvarez, Ricardo; Leftwich, Megan

    2012-03-01

    A new Vertical Shock Tube (VST) has been designed to study shock-induced mixing due to the Richtmyer-Meshkov Instability (RMI) developing on a 3-D multi-mode interface between two gases. These studies characterize how interface contours, gas density difference, and Mach No. affect the ensuing mixing by using simultaneous measurements of velocity/density fields. The VST allows for the formation of a single stably-stratified interface, removing complexities of the dual interface used in prior RMI work. The VST also features a new diaphragmless driver, making feasible larger ensembles of data by reducing intra-shot time, and a larger viewing window allowing new observations of late-time mixing. The initial condition (IC) is formed by a co-flow system, chosen to minimize diffusion at the gas interface. To ensure statistically stationary ICs, a contoured nozzle has been manufactured to form repeatable co-flowing jets that are manipulated by a flapping splitter plate to generate perturbations that span the VST. This talk focuses on the design of the IC flow system and shows initial results characterizing the interface.

  10. Application of Experimental Design in Preparation of Nanoliposomes Containing Hyaluronidase

    PubMed Central

    Kasinathan, Narayanan; Volety, Subrahmanyam Mallikarjuna; Josyula, Venkata Rao

    2014-01-01

    Hyaluronidase is an enzyme that catalyzes the breakdown of hyaluronic acid. This property is utilized for hypodermoclysis and for treating extravasation injury. Hyaluronidase is further studied for possible application as an adjuvant for increasing the efficacy of other drugs. Development of a suitable carrier system for hyaluronidase would help in coadministration of other drugs. In the present study, hyaluronidase was encapsulated in liposomes. The effect of the variables, namely phosphatidylcholine (PC), cholesterol, temperature during film formation (T1), and speed of rotation of the flask during film formation (SPR), on the percentage of protein encapsulation was first analyzed using a factorial design. The study showed that the level of phosphatidylcholine had the maximum effect on the outcome. The effect of the interaction of PC and SPR required for preparation of nanoliposomes was identified by a central composite design (CCD). The dependent variables were percentage protein encapsulation, particle size, and zeta potential. The study showed that the ideal conditions for production of hyaluronidase-loaded nanoliposomes are 140 mg of PC and cholesterol at one-fifth of the PC amount, with an SPR of 150 rpm and a T1 of 50°C. PMID:25295195
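
    The layout of a two-factor central composite design of the kind used here (factorial, axial and replicated centre points in coded units) can be generated directly. The factor ranges below are hypothetical, chosen only to show the mapping from coded to natural units; they do not reproduce the study's actual levels.

        import numpy as np
        from itertools import product

        k = 2
        alpha = (2 ** k) ** 0.25                 # rotatable axial distance for a full 2^k factorial
        factorial = np.array(list(product([-1, 1], repeat=k)), dtype=float)
        axial = np.array([[ alpha, 0.0], [-alpha, 0.0], [0.0,  alpha], [0.0, -alpha]])
        center = np.zeros((5, k))                # replicated centre points
        design_coded = np.vstack([factorial, axial, center])

        # map coded units to hypothetical natural units: PC in mg, rotation speed in rpm
        lo = np.array([100.0, 100.0])            # coded -1 (assumed)
        hi = np.array([180.0, 200.0])            # coded +1 (assumed)
        mid, half = (lo + hi) / 2, (hi - lo) / 2
        design_natural = mid + design_coded * half

        for coded, nat in zip(design_coded, design_natural):
            print(f"coded {np.round(coded, 2)} -> PC = {nat[0]:6.1f} mg, "
                  f"speed = {nat[1]:6.1f} rpm")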

  11. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  12. Environmental sex determination in reptiles: ecology, evolution, and experimental design.

    PubMed

    Janzen, F J; Paukstis, G L

    1991-06-01

    Sex-determining mechanisms in reptiles can be divided into two convenient classifications: genotypic (GSD) and environmental (ESD). While a number of types of GSD have been identified in a wide variety of reptilian taxa, the expression of ESD in the form of temperature-dependent sex determination (TSD) in three of the five major reptilian lineages has drawn considerable attention to this area of research. Increasing interest in sex-determining mechanisms in reptiles has resulted in many data, but much of this information is scattered throughout the literature and consequently difficult to interpret. It is known, however, that distinct sex chromosomes are absent in the tuatara and crocodilians, rare in amphisbaenians (worm lizards) and turtles, and common in lizards and snakes (but less than 20% of all species of living reptiles have been karyotyped). With less than 2 percent of all reptilian species examined, TSD apparently is absent in the tuatara, amphisbaenians and snakes; rare in lizards, frequent in turtles, and ubiquitous in crocodilians. Despite considerable inter- and intraspecific variation in the threshold temperature (temperature producing a 1:1 sex ratio) of gonadal sex determination, this variation cannot confidently be assigned a genetic basis owing to uncontrolled environmental factors or to differences in experimental protocol among studies. Laboratory studies have identified the critical period of development during which gonadal sex determination occurs for at least a dozen species. There are striking similarities in this period among the major taxa with TSD. Examination of TSD in the field indicates that sex ratios of hatchlings are affected by location of the nests, because some nests produce both sexes whereas the majority produce only one sex. Still, more information is needed on how TSD operates under natural conditions in order to fully understand its ecological and conservation implications. TSD may be the ancestral sex-determining condition in reptiles, but this result remains tentative. Physiological investigations of TSD have clarified the roles of steroid hormones, various enzymes, and H-Y antigen in sexual differentiation, whereas molecular studies have identified several plausible candidates for sex-determining genes in species with TSD. This area of research promises to elucidate the mechanism of TSD in reptiles and will have obvious implications for understanding the basis of sex determination in other vertebrates. Experimental and comparative investigations of the potential adaptive significance of TSD appear equally promising, although much work remains to be performed. The distribution of TSD within and among the major reptilian lineages may be related to the life span of individuals of a species and to the biogeography of these species.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:1891591

  13. Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation.

    PubMed

    Zhang, Haichong K; Cheng, Alexis; Bottenus, Nick; Guo, Xiaoyu; Trahey, Gregg E; Boctor, Emad M

    2016-04-01

    Ultrasonography is a widely used imaging modality for visualizing anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method, called synthetic tracked aperture ultrasound (STRATUS) imaging, that extends the available aperture size for SA by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. Considering the practical implementation, we estimated through simulation the effect of tracking and ultrasound calibration errors on the quality of the final beamformed image. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality. PMID:27088108
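
    Conceptually, the synthesis step pools channel data recorded at tracked probe positions into a single delay-and-sum beamformer. The toy sketch below illustrates that idea with synthetic geometry, random placeholder data, and a simplified delay model; it is not the authors' STRATUS beamformer.

        import numpy as np

        c  = 1540.0                              # speed of sound in tissue, m/s
        fs = 40e6                                # sampling frequency, Hz (assumed)
        focus = np.array([0.0, 0.06])            # focal point (x, z) in metres (assumed)

        # element x-positions gathered across several tracked poses (assumed known
        # from the tracker), all lying at z = 0
        elem_x = np.linspace(-0.04, 0.04, 64)
        elements = np.stack([elem_x, np.zeros_like(elem_x)], axis=1)

        n_samples = 4096
        rng = np.random.default_rng(0)
        rf = rng.standard_normal((len(elements), n_samples))  # placeholder channel data

        # simplified delay model: one-way transmit delay from a reference origin
        # plus the receive delay for each tracked element position
        t_tx = np.linalg.norm(focus) / c
        t_rx = np.linalg.norm(elements - focus, axis=1) / c
        sample_idx = np.round((t_tx + t_rx) * fs).astype(int)

        valid = sample_idx < n_samples
        beamformed_value = rf[valid, sample_idx[valid]].sum()
        print(f"delay-and-sum output at the focus: {beamformed_value:.3f}")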

  14. Patient reactions to personalized medicine vignettes: An experimental design

    PubMed Central

    Butrick, Morgan; Roter, Debra; Kaphingst, Kimberly; Erby, Lori H.; Haywood, Carlton; Beach, Mary Catherine; Levy, Howard P.

    2011-01-01

    Purpose Translational investigation on personalized medicine is in its infancy. Exploratory studies reveal attitudinal barriers to “race-based medicine” and cautious optimism regarding genetically personalized medicine. This study describes patient responses to hypothetical conventional, race-based, or genetically personalized medicine prescriptions. Methods Three hundred eighty-seven participants (mean age = 47 years; 46% white) recruited from a Baltimore outpatient center were randomized to this vignette-based experimental study. They were asked to imagine a doctor diagnosing a condition and prescribing them one of three medications. The outcomes are emotional response to vignette, belief in vignette medication efficacy, experience of respect, trust in the vignette physician, and adherence intention. Results Race-based medicine vignettes were appraised more negatively than conventional vignettes across the board (Cohen’s d = −0.51, −0.57, −0.64; P < 0.001). Participants rated genetically personalized medicine comparably with conventional medicine (−0.14, −0.15, −0.17; P = 0.47), with the exception of reduced adherence intention to genetically personalized medicine (Cohen’s d = −0.38, −0.41, −0.44; P = 0.009). This relative reluctance to take genetically personalized medicine was pronounced for racial minorities (Cohen’s d = −0.38, −0.31, −0.25; P = 0.02) and was related to trust in the vignette physician (change in R2 = 0.23, P < 0.001). Conclusions This study demonstrates a relative reluctance to embrace personalized medicine technology, especially among racial minorities, and highlights enhancement of adherence through improved doctor-patient relationships. PMID:21270639

  15. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  16. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  17. A rational design change methodology based on experimental and analytical modal analysis

    SciTech Connect

    Weinacht, D.J.; Bennett, J.G.

    1993-08-01

    A design methodology that integrates analytical modeling and experimental characterization is presented. This methodology represents a powerful tool for making rational design decisions and changes. An example of its implementation in the design, analysis, and testing of a precisions machine tool support structure is given.

  18. Recent developments in optimal experimental designs for functional magnetic resonance imaging

    PubMed Central

    Kao, Ming-Hung; Temkit, M'hamed; Wong, Weng Kee

    2014-01-01

    Functional magnetic resonance imaging (fMRI) is one of the leading brain mapping technologies for studying brain activity in response to mental stimuli. For neuroimaging studies utilizing this pioneering technology, there is a great demand for high-quality experimental designs that help to collect informative data for making precise and valid inferences about brain functions. This paper provides a survey of recent developments in experimental designs for fMRI studies. We briefly introduce some analytical and computational tools for obtaining good designs based on a specified design selection criterion. Research results about some commonly considered designs, such as blocked designs and m-sequences, are also discussed. Moreover, we present a recently proposed new type of fMRI design that can be constructed using a certain type of Hadamard matrices. Under certain assumptions, these designs can be shown to be statistically optimal. Some future research directions in the design of fMRI experiments are also discussed. PMID:25071884
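
    One of the design families mentioned above, m-sequences, can be generated with a linear-feedback shift register. The sketch below uses a 5-stage register whose feedback taps yield a maximal-length binary sequence of period 2^5 - 1 = 31; the stage count and any mapping of 0/1 to stimulus conditions are illustrative choices, not the paper's construction.

        def m_sequence(taps=(5, 2), length=31, seed=None):
            """Fibonacci LFSR; with taps (5, 2) it produces a maximal-length
            (period-31) binary sequence."""
            n = max(taps)
            state = list(seed) if seed is not None else [1] * n
            out = []
            for _ in range(length):
                out.append(state[-1])               # output the last stage
                feedback = 0
                for t in taps:
                    feedback ^= state[t - 1]        # XOR of the tapped stages
                state = [feedback] + state[:-1]     # shift and insert the feedback bit
            return out

        seq = m_sequence()
        print("m-sequence:", "".join(map(str, seq)))
        print("ones:", sum(seq), "zeros:", len(seq) - sum(seq))  # 16 ones, 15 zeros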

  19. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs in the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus report by submitting written comments during the review process and oral comments during two forum presentations at the ISPOR 16th and 17th Annual International Meetings held in Baltimore (2011) and Washington, DC (2012). PMID:23337210

  20. Experimental design of double-cladding planar waveguide laser amplifier

    NASA Astrophysics Data System (ADS)

    Wang, Juntao; Wang, Xiaojun; Zhou, Tangjian; Hu, Hao; Gao, Qingsong

    2015-02-01

    An end-pumped laser amplifier with high efficiency and compactness was designed. The thermal stresses of symmetrical single-cladding and double-cladding planar waveguides with Nd:YAG cores were analyzed theoretically, and the maximum thermal load for each was obtained according to the stress fracture limit. For different inner cladding thicknesses, the maximum pump power that could be absorbed was calculated. The laser medium was chosen to be a symmetrical double-cladding planar waveguide with an Nd:YAG core; the dimensions of the gain area were 50 mm × 12 mm × 100 µm, and those of the waveguide were 60 mm × 12 mm × 2 mm. The single-side thicknesses of the inner YAG cladding and the outer sapphire cladding were 250 µm and 700 µm, respectively. The size of the LDA pump beam was 18.9 mm and 10 mm in the fast and slow axes, respectively. The seed beam was coupled into the waveguide from one end, and the outer claddings were welded to two heat sinks for heat transfer. By theoretical calculation, with a seed power of 0.1 W injected into the waveguide, an output power of 1651 W could be obtained at a pump power of 3600 W from the LD array, corresponding to an optical-optical efficiency of 46%.

  1. Experimental Design of a Magnetic Flux Compression Experiment

    NASA Astrophysics Data System (ADS)

    Fuelling, Stephan; Awe, Thomas J.; Bauer, Bruno S.; Goodrich, Tasha; Lindemuth, Irvin R.; Makhin, Volodymyr; Siemon, Richard E.; Atchison, Walter L.; Reinovsky, Robert E.; Salazar, Mike A.; Scudder, David W.; Turchi, Peter J.; Degnan, James H.; Ruden, Edward L.

    2007-06-01

    Generation of ultrahigh magnetic fields is an interesting topic of high-energy-density physics, and an essential aspect of Magnetized Target Fusion (MTF). To examine plasma formation from conductors impinged upon by ultrahigh magnetic fields, in a geometry similar to that of the MAGO experiments, an experiment is under design to compress magnetic flux in a toroidal cavity, using the Shiva Star or Atlas generator. An initial toroidal bias magnetic field is provided by a current on a central conductor. The central current is generated by diverting a fraction of the liner current using an innovative inductive current divider, thus avoiding the need for an auxiliary power supply. A 50-mm-radius cylindrical aluminum liner implodes along glide planes with velocity of about 5 km/s. Inward liner motion causes electrical closure of the toroidal chamber, after which flux in the chamber is conserved and compressed, yielding magnetic fields of 2-3 MG. Plasma is generated on the liner and central rod surfaces by Ohmic heating. Diagnostics include B-dot probes, Faraday rotation, radiography, filtered photodiodes, and VUV spectroscopy. Optical access to the chamber is provided through small holes in the walls.
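
    The field amplification quoted above follows, to first order, from ideal flux conservation in a shrinking cavity. The Python sketch below gives an order-of-magnitude version of that scaling; the central current, evaluation radius, and area compression ratio are illustrative assumptions, not the values used in the actual experiment design.

        import math

        MU0 = 4e-7 * math.pi                      # vacuum permeability, T*m/A

        def toroidal_field(current_a, radius_m):
            # Toroidal field of a straight central conductor at radius r.
            return MU0 * current_a / (2.0 * math.pi * radius_m)

        def compressed_field(b_initial, area_ratio):
            # Ideal flux compression: B_final = B_initial * (A_initial / A_final).
            return b_initial * area_ratio

        # Illustrative numbers: 1 MA diverted onto the central conductor, bias
        # field evaluated at 25 mm, and a 30:1 reduction of the cavity area.
        b0 = toroidal_field(1.0e6, 0.025)         # about 8 T
        bf = compressed_field(b0, 30.0)           # about 240 T
        print(f"bias field ~ {b0:.1f} T; compressed ~ {bf:.0f} T (~{bf / 100:.1f} MG)")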

  2. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to ensure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  3. Experimental design for assessing the effectiveness of autonomous countermine systems

    NASA Astrophysics Data System (ADS)

    Chappell, Isaac; May, Michael; Moses, Franklin L.

    2010-04-01

    The countermine mission (CM) is a compelling example of what autonomous systems must address to reduce the risks that Soldiers take routinely. The list of requirements is formidable and includes autonomous navigation, autonomous sensor scanning, platform mobility and stability, mobile manipulation, automatic target recognition (ATR), and systematic integration and control of components. This paper compares and contrasts how the CM is done today against the challenges of achieving comparable performance using autonomous systems. The Soldier sets a high standard with, for example, over 90% probability of detection (Pd) of metallic and low-metal mines and a false alarm rate (FAR) as low as 0.05/m². In this paper, we suggest a simplification of the semi-autonomous CM by breaking it into three components: sensor head maneuver, robot navigation, and kill-chain prosecution. We also discuss the measurements required to map the system's physical and state attributes to performance specifications and note that current Army countermine metrics are insufficient to guide the design of a semi-autonomous countermine system.

  4. Model parameterization and experimental design issues in nearshore bathymetry inversion

    NASA Astrophysics Data System (ADS)

    Narayanan, C.; Rama Rao, V. N.; Kaihatu, J. M.

    2004-08-01

    We present a general method for approaching inverse problems for bathymetric determination under shoaling waves. We run the Korteweg-de Vries (KdV) model for various bathymetric representations while collecting data in the form of free-surface imagery and time series. The sensitivity matrix provides information on the range of influence of data on the parameter space. By minimizing the parameter variances, three metrics based on the sensitivity matrix are derived that can be systematically used to make choices of experiment design and model parameterization. This analysis provides insights that are useful, irrespective of the minimization scheme chosen for inversion. We identify the characteristics of the data (time series versus snapshots, early time measurements versus long-duration measurements, nearshore measurements versus offshore measurements), and model (bathymetry parameterizations) for inversion to be possible. We show that Bruun/Dean and Exponential bathymetric parameterizations are preferred over polynomial parameterizations. The former can be used for inversion with both time series and snapshot data, while the latter is preferably used only with snapshot data. Also, guidelines for time separation between snapshots and spatial separation between time series measurements are derived.
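
    The paper's design metrics are built from the sensitivity (Jacobian) matrix of the wave model; a generic version of that idea is sketched below in Python with a toy exponential bathymetry in place of the KdV model. The forward model, parameter values, noise level, and candidate sensor layouts are all illustrative.

        import numpy as np

        def forward(params, x):
            # Toy exponential bathymetry h(x) = h0 * (1 - exp(-k x)).
            h0, k = params
            return h0 * (1.0 - np.exp(-k * x))

        def sensitivity_matrix(params, x, eps=1e-6):
            # Finite-difference Jacobian of the predictions w.r.t. the parameters.
            base = forward(params, x)
            J = np.zeros((x.size, len(params)))
            for i in range(len(params)):
                p = np.array(params, dtype=float)
                p[i] += eps
                J[:, i] = (forward(p, x) - base) / eps
            return J

        def a_criterion(J, sigma=0.05):
            # Trace of the approximate parameter covariance (smaller is better).
            return np.trace(sigma**2 * np.linalg.inv(J.T @ J))

        params = (4.0, 0.01)                      # h0 = 4 m, k = 0.01 1/m
        layouts = {"nearshore": np.linspace(10.0, 150.0, 8),
                   "spread":    np.linspace(10.0, 600.0, 8)}
        for name, x in layouts.items():
            print(name, "A-criterion:", a_criterion(sensitivity_matrix(params, x)))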

  5. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design - the Ifakara Experimental Huts - and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control technologies, including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  6. A modified experimental hut design for studying responses of disease-transmitting mosquitoes to indoor interventions: the Ifakara experimental huts.

    PubMed

    Okumu, Fredros O; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design - the Ifakara Experimental Huts - and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control technologies, including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  7. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    SciTech Connect

    Dobler, Gregory; Fassnacht, Christopher D.; Rumbaugh, Nicholas; Treu, Tommaso; Liao, Kai; Marshall, Phil; Hojjati, Alireza; Linder, Eric

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~10³ strongly lensed quasars. In an effort to assess the present capabilities of the community to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  8. Low haemolysis pulsatile impeller pump: design concepts and experimental results.

    PubMed

    Qian, K X

    1989-11-01

    A pulsatile fully implantable impeller pump with low haemolysis has been produced by developing a pulsatile impeller for a nonpulsatile pump also developed in this laboratory. The impeller was designed according to the 3-dimensional theory of fluid dynamics. The impeller shroud retains the same parabolic form and the vane has a form compacted by a radial logarithmic spiral and an axial helical spiral so that the absolute vibration velocity of the blood in a peripheral direction is a minimum as the impeller changes its speed periodically to generate a physiological pulsatile blood flow. Thus the Reynolds shear and the Newton shear are a minimum for the required pulse pressure. The mean volume and mean pressure are controlled by adjusting the voltage. The shape of the pressure pulse is determined by a square wave of voltage and the systole/diastole ratio. In order to abolish regurgitation of the pump, a 40 per cent systole period and a 5 V voltage pulse are desirable for 40 mmHg pulse pressure (80-120 mmHg mean pressure). The pulse frequency has almost no effect on pump output. The pump can deliver 4 l/min mean volume and 100 mmHg mean pressure (40 mmHg pulse pressure), and these conditions result in an index of haemolysis (IH) for porcine blood of 0.020--only slightly more than the nonpulsatile pump (0.016). When the pulsatile impeller was used under nonpulsatile conditions its IH was almost doubled, but when the nonpulsatile impeller was used under pulsatile conditions the IH reached 0.13. The power consumption is approximately equal to that for the nonpulsatile pump: 3W for 4 l/min and 100 mmHg output.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2811347

  9. Designing an experimenters database using the Nijssen Information Analysis Methodology (NIAM)

    SciTech Connect

    Eaton, M.J.

    1990-01-01

    This paper presents a discussion of the use of the Nijssen Information Analysis Methodology (NIAM) in the design of an experimenters database. This database is used by physicists and technicians to describe the configuration and diagnostic systems used on Sandia National Laboratories' Particle Beam Fusion Accelerator II (PBFA II). The design of this database presented some unique challenges because of the large degree of flexibility required to enable timely response to changing experimental configurations. The NIAM user-oriented technique proved to be invaluable in translating experimenters' requirements into an information model and then into a normalized relational design.

  10. Bayesian experimental design of a multichannel interferometer for Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hirsch, M.; Kornejew, P.

    2008-10-01

    Bayesian experimental design (BED) is a framework for the optimization of diagnostics based on probability theory. In this work it is applied to the design of a multichannel interferometer at the Wendelstein 7-X stellarator experiment. BED makes it possible to compare diverse designs quantitatively, which is shown here for beam-line designs resulting from different plasma configurations. The applicability of this method is discussed with respect to its computational effort.
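
    For a linear-Gaussian measurement model, the Bayesian design utility has a closed form, which makes the comparison of beam-line layouts easy to illustrate. The Python sketch below ranks two hypothetical chord layouts of a line-integrating interferometer by expected information gain; the radial basis, prior, noise level, and chord geometries are assumptions for illustration only and do not reproduce the W7-X design calculations.

        import numpy as np

        def basis(r, j):
            # Simple radial basis functions on a unit-radius cross-section.
            return (1.0 - r**2) ** j

        def chord_row(p, n_basis=3, n_quad=200):
            # Line integral of each basis function along a chord with impact
            # parameter p, using a simple trapezoid rule.
            l = np.linspace(0.0, np.sqrt(1.0 - p**2), n_quad)
            r = np.sqrt(p**2 + l**2)
            dl = l[1] - l[0]
            row = []
            for j in range(n_basis):
                vals = basis(r, j + 1)
                row.append(2.0 * np.sum((vals[1:] + vals[:-1]) * 0.5 * dl))
            return np.array(row)

        def expected_information_gain(impact_params, sigma=0.02, prior_var=1.0):
            # Linear-Gaussian model: EIG = 0.5 log det(I + prior/sigma^2 * X X').
            X = np.vstack([chord_row(p) for p in impact_params])
            M = np.eye(X.shape[0]) + (prior_var / sigma**2) * (X @ X.T)
            return 0.5 * np.linalg.slogdet(M)[1]

        layouts = {"edge-heavy": [0.10, 0.70, 0.80, 0.90],
                   "spread-out": [0.10, 0.35, 0.60, 0.85]}
        for name, design in layouts.items():
            print(name, "EIG (nats):", round(expected_information_gain(design), 2))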

  11. Design and experimental tests of a novel neutron spin analyzer for wide angle spin echo spectrometers

    SciTech Connect

    Fouquet, Peter; Farago, Bela; Andersen, Ken H.; Bentley, Phillip M.; Pastrello, Gilles; Sutton, Iain; Thaveron, Eric; Thomas, Frederic; Moskvin, Evgeny; Pappas, Catherine

    2009-09-15

    This paper describes the design and experimental tests of a novel neutron spin analyzer optimized for wide angle spin echo spectrometers. The new design is based on nonremanent magnetic supermirrors, which are magnetized by vertical magnetic fields created by NdFeB high-field permanent magnets. The solution presented here gives stable performance at moderate cost, in contrast to designs relying on remanent supermirrors. In the experimental part of this paper we demonstrate that the new design performs well in terms of polarization and transmission, and that high-quality neutron spin echo spectra can be measured.

  12. Optimization of experimental designs and model parameters exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schürch, M.; Slawig, T.

    2014-09-01

    The weighted least squares estimator for model parameters was presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called locally optimal experimental design, was described together with a lesser-known approach that takes into account potential nonlinearity of the model parameters. These two approaches were combined with two different methods for solving their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and handling are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two models of different complexity for sediment concentration in seawater served as application examples. The advantages and disadvantages of the different approaches were compared, and an evaluation of the approaches was performed.
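
    A minimal Python sketch of the weighted least squares estimator described above is given below, using a toy sediment-concentration model and synthetic heteroscedastic data; it is not the MATLAB Optimal Experimental Design Toolbox itself, and the model form, weights, and parameter values are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def model(t, c_inf, k):
            # Toy sediment-concentration curve approaching an equilibrium value.
            return c_inf * (1.0 - np.exp(-k * t))

        rng = np.random.default_rng(1)
        t = np.linspace(0.5, 24.0, 12)                    # measurement times (h)
        sigma = 0.05 + 0.02 * t / t.max()                 # heteroscedastic noise
        y = model(t, 1.8, 0.35) + rng.normal(0.0, sigma)  # synthetic observations

        # Weighted least squares: weights 1/sigma_i^2 enter through `sigma`.
        theta_hat, cov = curve_fit(model, t, y, p0=[1.0, 0.1],
                                   sigma=sigma, absolute_sigma=True)
        print("WLS estimates (C_inf, k):", np.round(theta_hat, 3))
        print("asymptotic standard errors:", np.round(np.sqrt(np.diag(cov)), 3))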

  13. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.; Collaboration: DarkLight Collaboration

    2013-11-07

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c² mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  14. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.

    2013-11-01

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c² mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  15. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters

  16. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  17. 76 FR 28715 - Endangered and Threatened Species: Designation of a Nonessential Experimental Population for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ...We, the National Marine Fisheries Service (NMFS), propose to designate the Middle Columbia River (MCR) steelhead (Oncorhynchus mykiss), recently reintroduced into the upper Deschutes River basin in central Oregon, as a nonessential experimental population (NEP) under the Endangered Species Act (ESA). This NEP designation would expire 12 years after the first generation of adults return to the......

  18. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological

  19. Experimental Design for Local School Districts (July 18-August 26, 1966). Final Report.

    ERIC Educational Resources Information Center

    Norton, Daniel P.

    A 6-week summer institute on experimental design was conducted for public school personnel who had been designated by their school administrations as having responsibility for research together with some time released for devotion to research. Of the 32, 17 came from Indiana, 15 from 12 other states. Lectures on statistical principles of design…

  20. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  1. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  2. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  3. Exploiting Distance Technology to Foster Experimental Design as a Neglected Learning Objective in Labwork in Chemistry

    ERIC Educational Resources Information Center

    d'Ham, Cedric; de Vries, Erica; Girault, Isabelle; Marzin, Patricia

    2004-01-01

    This paper deals with the design process of a remote laboratory for labwork in chemistry. In particular, it focuses on the mutual dependency of theoretical conjectures about learning in the experimental sciences and technological opportunities in creating learning environments. The design process involves a detailed analysis of the expert task and…

  4. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps

  5. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of

  6. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  7. A Cross-Over Experimental Design for Testing Audiovisual Training Materials.

    ERIC Educational Resources Information Center

    Stolovitch, Harold D.; Bordeleau, Pierre

    This paper contains a description of the cross-over type of experimental design as well as a case study of its use in field testing audiovisual materials related to teaching handicapped children. Increased efficiency is an advantage of the cross-over design, while difficulty in selecting similar format audiovisual materials for field testing is a…

  8. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  9. Experimental Control and Threats to Internal Validity of Concurrent and Nonconcurrent Multiple Baseline Designs

    ERIC Educational Resources Information Center

    Christ, Theodore J.

    2007-01-01

    Single-case research designs are often applied within school psychology. This article provides a critical review of the scientific merit of both concurrent and nonconcurrent multiple baseline (MB) designs, relative to their capacity to assess threats of internal validity and establish experimental control. Distinctions are established between AB…

  10. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary

  11. The use of Taguchi method to optimize the laser welding of sealing neuro-stimulator

    NASA Astrophysics Data System (ADS)

    Xiansheng, Ni; Zhenggan, Zhou; Xiongwei, Wen; Luming, Li

    2011-03-01

    Titanium has a high strength-to-weight ratio, corrosion resistance, excellent weldability and good biocompatibility. It is applied in various fields, such as the medical and aerospace industries. Laser welding is the major joining technology for titanium sealing in the medical field, and sealing the thin titanium shell of a neuro-stimulator by laser lap welding is difficult. The key to welding quality is the combination of laser welding parameters. In this paper, the effects of the Nd:YAG laser welding parameters are first discussed and analyzed; a novel application of Taguchi's matrix method is then proposed to optimize the parameter selection for laser seal welding of the thin titanium shell, covering the main parameters of laser power, welding speed, defocusing amount and shield gas; finally, the manufacturing process for sealing the neuro-stimulator is established. The results show that the Taguchi method optimized the process parameters of laser welding for sealing the thin titanium shell of the neuro-stimulator, and that the sealed device is suitable for use in medical treatments.
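
    The analysis behind such a Taguchi optimization can be illustrated compactly. The Python sketch below runs a standard L9(3^4) orthogonal array over the four welding parameters, computes larger-the-better signal-to-noise ratios for an invented weld-quality score, and picks the preferred level of each factor from the mean S/N per level; the response values are hypothetical, not the paper's measurements.

        import numpy as np

        # Standard L9 orthogonal array, factor levels coded 0, 1, 2.
        L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
                       [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
                       [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
        factors = ["laser power", "welding speed", "defocus", "shield gas"]

        # Hypothetical weld-quality scores for the nine trials (larger is better).
        y = np.array([62.0, 70.0, 66.0, 74.0, 81.0, 69.0, 77.0, 72.0, 65.0])

        def sn_larger_is_better(values):
            values = np.atleast_1d(values)
            return -10.0 * np.log10(np.mean(1.0 / values**2))

        sn = np.array([sn_larger_is_better(v) for v in y])
        for f, name in enumerate(factors):
            level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
            print(f"{name}: mean S/N per level = {np.round(level_means, 2)}, "
                  f"preferred level = {int(np.argmax(level_means)) + 1}")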

  12. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of required measurements and subjects to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem so far prevents widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation. PMID:25155192
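
    The core calculation behind such local D-optimal designs is the Fisher information of the nonlinear regression model evaluated at prior parameter guesses. The Python sketch below compares a conventional serial-dilution layout with a reduced control-plus-three-doses layout for a four-parameter log-logistic model; the prior guesses, dose levels, and replicate allocation are illustrative assumptions, not the designs derived in the paper.

        import numpy as np

        def loglogistic4(x, theta):
            b, c, d, e = theta      # slope, lower asymptote, upper asymptote, ED50
            return c + (d - c) / (1.0 + (x / e) ** b)

        def gradient(x, theta, rel=1e-6):
            # Central-difference gradient of the model w.r.t. the parameters.
            g = np.zeros(len(theta))
            for i in range(len(theta)):
                h = rel * max(abs(theta[i]), 1.0)
                tp = np.array(theta, dtype=float)
                tm = np.array(theta, dtype=float)
                tp[i] += h
                tm[i] -= h
                g[i] = (loglogistic4(x, tp) - loglogistic4(x, tm)) / (2.0 * h)
            return g

        def log_det_information(doses, replicates, theta):
            M = np.zeros((len(theta), len(theta)))
            for x, n in zip(doses, replicates):
                g = gradient(x, theta)
                M += n * np.outer(g, g)
            return np.linalg.slogdet(M)[1]

        theta_guess = (1.5, 0.1, 1.0, 10.0)   # local designs need prior guesses

        # Conventional 8 levels x 3 replicates vs. control + 3 levels x 6
        # replicates; both spend 24 observations in total.
        designs = {"conventional": ([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0],
                                    [3] * 8),
                   "reduced":      ([0.0, 3.0, 10.0, 40.0], [6] * 4)}
        for name, (doses, reps) in designs.items():
            value = log_det_information(doses, reps, theta_guess)
            print(name, "log det of information:", round(value, 2))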

  13. Experimental library screening demonstrates the successful application of computational protein design to large structural ensembles

    PubMed Central

    Allen, Benjamin D.; Nisthal, Alex; Mayo, Stephen L.

    2010-01-01

    The stability, activity, and solubility of a protein sequence are determined by a delicate balance of molecular interactions in a variety of conformational states. Even so, most computational protein design methods model sequences in the context of a single native conformation. Simulations that model the native state as an ensemble have been mostly neglected due to the lack of sufficiently powerful optimization algorithms for multistate design. Here, we have applied our multistate design algorithm to study the potential utility of various forms of input structural data for design. To facilitate a more thorough analysis, we developed new methods for the design and high-throughput stability determination of combinatorial mutation libraries based on protein design calculations. The application of these methods to the core design of a small model system produced many variants with improved thermodynamic stability and showed that multistate design methods can be readily applied to large structural ensembles. We found that exhaustive screening of our designed libraries helped to clarify several sources of simulation error that would have otherwise been difficult to ascertain. Interestingly, the lack of correlation between our simulated and experimentally measured stability values shows clearly that a design procedure need not reproduce experimental data exactly to achieve success. This surprising result suggests potentially fruitful directions for the improvement of computational protein design technology. PMID:21045132

  14. Adaptive combinatorial design to explore large experimental spaces: approach and validation.

    PubMed

    Lejay, L V; Shasha, D E; Palenchar, P M; Kouranov, A Y; Cruikshank, A A; Chou, M F; Coruzzi, G M

    2004-12-01

    Systems biology requires mathematical tools not only to analyse large genomic datasets, but also to explore large experimental spaces in a systematic yet economical way. We demonstrate that two-factor combinatorial design (CD), shown to be useful in software testing, can be used to design a small set of experiments that would allow biologists to explore larger experimental spaces. Further, the results of an initial set of experiments can be used to seed further 'Adaptive' CD experimental designs. As a proof of principle, we demonstrate the usefulness of this Adaptive CD approach by analysing data from the effects of six binary inputs on the regulation of genes in the N-assimilation pathway of Arabidopsis. This CD approach identified the more important regulatory signals previously discovered by traditional experiments using far fewer experiments, and also identified examples of input interactions previously unknown. Tests using simulated data show that Adaptive CD suffers from fewer false positives than traditional experimental designs in determining decisive inputs, and succeeds far more often than traditional or random experimental designs in determining when genes are regulated by input interactions. We conclude that Adaptive CD offers an economical framework for discovering dominant inputs and interactions that affect different aspects of genomic outputs and organismal responses. PMID:17051692
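
    Two-factor combinatorial design can be illustrated with a simple greedy construction. The Python sketch below selects runs over six binary inputs, mirroring the six-input setting mentioned above, until every pair of factors has been observed in all four level combinations; the greedy heuristic is a generic stand-in, not the authors' software.

        import itertools

        N_FACTORS = 6
        all_runs = list(itertools.product([0, 1], repeat=N_FACTORS))
        all_pairs = {(i, j, a, b)
                     for i, j in itertools.combinations(range(N_FACTORS), 2)
                     for a in (0, 1) for b in (0, 1)}

        def pairs_covered(run):
            return {(i, j, run[i], run[j])
                    for i, j in itertools.combinations(range(N_FACTORS), 2)}

        design, uncovered = [], set(all_pairs)
        while uncovered:
            # Greedily pick the run covering the most still-uncovered pairs.
            best = max(all_runs, key=lambda r: len(pairs_covered(r) & uncovered))
            design.append(best)
            uncovered -= pairs_covered(best)

        print(f"{len(design)} runs cover all {len(all_pairs)} two-factor settings")
        for run in design:
            print(run)

    Typically only a handful of runs are needed, far fewer than the 64 runs of the full two-level factorial over six factors.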

  15. Experimental validation of optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.

    1993-01-01

    An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers: static, dynamic, and LQG dissipative controllers. The nominal phase-0 of the controls-structures interaction evolutionary model (CEM) structure is redesigned to minimize the average control power required to maintain a specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesigned structure, phase-1 CEM, was assembled and tested against the phase-0 CEM. It is analytically and experimentally demonstrated that integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control designs is demonstrated.

  16. Process optimization for Ni(II) removal from wastewater by calcined oyster shell powders using Taguchi method.

    PubMed

    Yen, Hsing Yuan; Li, Jun Yan

    2015-09-15

    Waste oyster shells cause great environmental concerns and nickel is a harmful heavy metal. Therefore, we applied the Taguchi method to address both issues by optimizing the controllable factors for Ni(II) removal by calcined oyster shell powders (OSP), including the pH (P), OSP calcination temperature (T), Ni(II) concentration (C), OSP dose (D), and contact time (t). The results show that their percentage contributions in descending order are P (64.3%) > T (18.9%) > C (8.8%) > D (5.1%) > t (1.7%). The optimum condition is a pH of 10 and an OSP calcination temperature of 900 °C. Under the optimum condition, Ni(II) can be removed almost completely; the higher the pH, the more the precipitation, and the higher the calcination temperature, the more the adsorption. The latter is due to the large number of pores created at the calcination temperature of 900 °C; these pores provide a large number of cavities that significantly increase the surface area available for adsorption. A multiple linear regression equation obtained to correlate Ni(II) removal with the controllable factors is: Ni(II) removal(%) = 10.35 × P + 0.045 × T - 1.29 × C + 19.33 × D + 0.09 × t - 59.83. This equation predicts Ni(II) removal well and can be used to estimate Ni(II) removal during the design stage of treatment by calcined OSP. Thus, OSP can be used to remove nickel effectively, and the formula for predicting removal is available for practical applications. PMID:26203873
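
    The reported regression can be wrapped in a small Python helper for design-stage screening, as sketched below. The example factor settings and their units are assumptions (the abstract does not state them), and the clipping of predictions to 0-100% is a pragmatic addition rather than part of the published equation.

        def predicted_ni_removal(pH, calcination_temp, ni_conc, osp_dose, time_min):
            # Regression coefficients as reported in the abstract.
            removal = (10.35 * pH + 0.045 * calcination_temp - 1.29 * ni_conc
                       + 19.33 * osp_dose + 0.09 * time_min - 59.83)
            return max(0.0, min(100.0, removal))   # clip to a physical range

        # Hypothetical screening point near the reported optimum (pH 10, 900 degC OSP).
        print(predicted_ni_removal(pH=10, calcination_temp=900, ni_conc=50,
                                   osp_dose=1.0, time_min=60))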

  17. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.

  18. Experimental verification of the sparse design of a square partial discharge acoustic emission array sensor

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Liu, Xiong; Tao, Junhan; Li, Tong; Cheng, Shuyi; Lu, Fangcheng

    2015-04-01

    This study experimentally verified the sparse design of a square partial discharge (PD) acoustic emission array sensor proposed in Xie et al (2014 Meas. Sci. Technol. 25 035102). Firstly, this study developed a square PD acoustic emission array sensor and determined the material, centre frequency, thickness, radius, etc of the element of this array sensor through analysis and comparison with others. Moreover, in combination with a sound-absorbing backing and a matching layer, a single acoustic emission array sensor element was designed, which laid the basis for the experimental verification of the ensuing sparse design. On this basis, the assembly of the square acoustic emission array sensor was designed. It realised the plug-and-play ability of the array elements and formed the basis for the experimental study of the following sparse design. Subsequently, this study established and introduced an experimental system and methods for PD positioning. Finally, it experimentally investigated the sparse design of a square PD acoustic emission array sensor. The 9-element square PD acoustic emission array sensor was used as an example to study the positioning effects on PD using the acoustic emission array sensor in optimum and random sparse structures respectively. The results suggested that: (1) the PD acoustic emission array sensor and corresponding experimental system were effective in detecting and positioning the PD; (2) the square PD acoustic emission array sensor proposed in Xie et al (2014 Meas. Sci. Technol. 25 035102) was feasible. Using this array sensor, it was possible to optimise the sparse distribution structure of this acoustic emission array sensor.

  19. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy J

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new "experimentation assessments," 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  20. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel.

    PubMed

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-03-01

    In this Data in Brief paper, a central composite experimental design was planned to collect the surface roughness of an end milling operation of AISI 1045 steel. The surface roughness values are supposed to suffer some kind of variation due to the action of several factors. The main objective here was to present a multivariate experimental design and data collection including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explores the influence of noise factors on the process performance. PMID:26909374
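
    For readers unfamiliar with the layout of a central composite design, the Python sketch below generates one in coded units: two-level factorial points, axial points at ±alpha, and replicated centre points. The number of factors, the rotatable choice of alpha, and the centre-point count are illustrative and are not taken from the milling study.

        import itertools
        import numpy as np

        def central_composite(n_factors, n_center=4, alpha=None):
            if alpha is None:
                alpha = (2 ** n_factors) ** 0.25          # rotatable design
            factorial = np.array(list(itertools.product([-1.0, 1.0],
                                                        repeat=n_factors)))
            axial = []
            for i in range(n_factors):
                for a in (-alpha, alpha):
                    point = np.zeros(n_factors)
                    point[i] = a
                    axial.append(point)
            center = np.zeros((n_center, n_factors))
            return np.vstack([factorial, np.array(axial), center])

        design = central_composite(n_factors=3)
        print(f"{design.shape[0]} runs for 3 factors (coded units):")
        print(np.round(design, 3))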

  1. Efficient experimental design and analysis of real-time PCR assays

    PubMed Central

    Hui, Kwokyin; Feng, Zhong-Ping

    2013-01-01

    Real-time polymerase chain reaction (qPCR) is currently the standard for gene quantification studies and has been extensively used in large-scale basic and clinical research. The operational costs and technical errors can become a significant issue due to the large number of sample reactions. In this paper, we present an experimental design strategy and an analysis procedure that are more efficient, requiring fewer sample reactions than the traditional approach. We verified the new design mathematically and experimentally on a well-characterized model, evaluating the gene expression levels of CACNA1C and CACNA1G in hypertrophic ventricular myocytes induced by phenylephrine treatment. PMID:23510941

  2. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel

    PubMed Central

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-01-01

    In this Data in Brief paper, a central composite experimental design was planned to collect the surface roughness of an end milling operation of AISI 1045 steel. The surface roughness values are supposed to suffer some kind of variation due to the action of several factors. The main objective here was to present a multivariate experimental design and data collection including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explores the influence of noise factors on the process performance. PMID:26909374

  3. Design of Experimental Data Publishing Software for Neutral Beam Injector on EAST

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Zhang, Xiaodan; Wu, Deyun

    2015-02-01

    Neutral Beam Injection (NBI) is one of the most effective means of plasma heating. Experimental Data Publishing Software (EDPS) was developed to publish experimental data so that the NBI system can be monitored remotely. In this paper, the architecture and implementation of EDPS, including the design of the communication module and the web page display module, are presented. EDPS is developed based on the Browser/Server (B/S) model and works under the Linux operating system. Using the data source and communication mechanism of the NBI Control System (NBICS), EDPS publishes experimental data on the Internet.

  4. Experimental and Numerical Investigations on the Ballistic Performance of Polymer Matrix Composites Used in Armor Design

    NASA Astrophysics Data System (ADS)

    Colakoglu, M.; Soykasap, O.; Özek, T.

    2007-01-01

    The ballistic properties of two different polymer matrix composites used for military and non-military purposes are investigated in this study. Backside deformation and penetration speed are determined experimentally and numerically for Kevlar 29/polyvinyl butyral and polyethylene fiber composites, because designing armor against penetration alone is not sufficient for protection. After the experimental ballistic tests, a model is constructed using the finite element program Abaqus, and the backside deformation and penetration speed are determined numerically. The experimental and numerical results are found to be in agreement, and, when areal densities are taken into account, the polyethylene fiber composite has a much better ballistic limit, backside deformation, and penetration speed than the Kevlar 29/polyvinyl butyral composite.

  5. Quantification of pore size distribution using diffusion NMR: experimental design and physical insights.

    PubMed

    Katz, Yaniv; Nevo, Uri

    2014-04-28

    Pulsed field gradient (PFG) diffusion NMR experiments are sensitive to restricted diffusion within porous media and can thus reveal essential microstructural information about the confining geometry. Optimal design methods for inverse problems select preferred experimental settings to improve the quality of parameter estimation. However, in pore size distribution (PSD) estimation using NMR methods, as in other ill-posed problems, optimal design strategies and criteria are scarce. We formulate here a new optimization framework for ill-posed problems. This framework is suitable for optimizing PFG experiments for probing geometries that are solvable by the Multiple Correlation Function approach. The framework is based on a heuristic methodology designed to select experimental sets which balance between lowering the inherent ill-posedness and increasing the NMR signal intensity. This method also selects favorable discrete pore sizes used for PSD estimation. Numerical simulations demonstrate that using this framework greatly improves the sensitivity of PFG experimental sets to the pore sizes. The optimization also sheds light on significant features of the preferred experimental sets. Increasing the gradient strength and varying multiple experimental parameters is found to be preferable for reducing the ill-posedness. We further evaluate the amount of pore size information that can be obtained by wisely selecting the duration of the diffusion and mixing times. Finally, we discuss the ramifications of using single PFG or double PFG sequences for PSD estimation. In conclusion, the above optimization method can serve as a useful tool for experimenters interested in quantifying the PSDs of different specimens. Moreover, the applicability of the suggested optimization framework extends far beyond the field of PSD estimation in diffusion NMR, and reaches the design of sampling schemes for other ill-posed problems. PMID:24784263

  6. Design and experimental study of high-speed low-flow-rate centrifugal compressors

    SciTech Connect

    Gui, F.; Reinarts, T.R.; Scaringe, R.P.; Gottschlich, J.M.

    1995-12-31

    This paper describes a design and experimental effort to develop small centrifugal compressors for aircraft air cycle cooling systems and small vapor compression refrigeration systems (20-100 tons). Efficiency improvements of 25% over current designs are desired. Although centrifugal compressors possess excellent performance at high flow rates, low-flow-rate compressors do not have acceptable performance when designed using current approaches. The new compressors must be designed to operate at a high rotating speed to retain efficiency. The emergence of the magnetic bearing makes it possible to develop such compressors running at speeds several times higher than those currently in use. Several low-flow-rate centrifugal compressors featuring three-dimensional blades have been designed, manufactured and tested in this study. An experimental investigation of compressor flow characteristics and efficiency has been conducted to explore a theory for mini-centrifugal compressors. The effects of the overall impeller configuration, the number of blades, and the rotational speed on the compressor flow curve and efficiency have been studied. Efficiencies as high as 84% were obtained. The experimental results indicate that the current theory can still be used as a guide, but further development is required for the design of mini-centrifugal compressors.

  7. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.; Crum, Jarrod V.

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer-layer glasses. The experimental design was completed by a center-point glass, a Vitreous State Laboratory glass, and replicates of the center-point and Vitreous State Laboratory glasses.
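
    The three kinds of constraints described above can be checked for any candidate composition before it is offered to a design algorithm. The Python sketch below does this for a toy four-component glass with invented bounds and an invented quadratic solubility model; none of the components, limits, or coefficients correspond to the actual Hanford LAW models.

        import numpy as np

        SCC = {"SiO2": (0.30, 0.60), "B2O3": (0.05, 0.20),  # single-component bounds
               "Na2O": (0.05, 0.25), "SO3": (0.00, 0.03)}

        def passes_scc(x):
            return all(lo <= x[c] <= hi for c, (lo, hi) in SCC.items())

        def passes_linear_mcc(x):
            # Example linear MCC: boron plus alkali capped at 40 wt%.
            return x["B2O3"] + x["Na2O"] <= 0.40

        def so3_solubility_limit(x):
            # Invented quadratic mixture model in the SO3-free proportions.
            others = np.array([x["SiO2"], x["B2O3"], x["Na2O"]])
            z = others / others.sum()
            return np.array([0.010, 0.030, 0.025]) @ z + 0.05 * z[1] * z[2]

        def feasible(x):
            return (abs(sum(x.values()) - 1.0) < 1e-9 and passes_scc(x)
                    and passes_linear_mcc(x) and x["SO3"] <= so3_solubility_limit(x))

        candidates = [
            {"SiO2": 0.600, "B2O3": 0.145, "Na2O": 0.240, "SO3": 0.015},
            {"SiO2": 0.580, "B2O3": 0.140, "Na2O": 0.250, "SO3": 0.030},
        ]
        for x in candidates:
            print("SO3 =", x["SO3"], "feasible:", feasible(x))

    In this toy example the second candidate satisfies the single-component bounds and the linear constraint but exceeds the predicted SO3 solubility limit, which is exactly the situation the nonlinear MCC is meant to screen out.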

  8. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  9. Using Superstitions & Sayings To Teach Experimental Design in Beginning and Advanced Biology Classes.

    ERIC Educational Resources Information Center

    Hoefnagels, Marielle H.; Rippel, Scott A.

    2003-01-01

    Presents a collaborative learning exercise intended to teach the unfamiliar terminology of experimental design both in biology classes and biochemistry laboratories. The exercise promotes discussion and debate, develops communication skills, and emphasizes peer review. The effectiveness of the exercise is supported by student surveys. (SOE)

  10. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation

  11. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  12. 78 FR 79622 - Endangered and Threatened Species: Designation of a Nonessential Experimental Population of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-31

    ... stemming from a proposed rule that was published January 16, 2013 (78 FR 3381). This final rule implements... Significant Unit (ESU; 70 FR 37160; June 28, 2005) is listed as threatened under the ESA, and its threatened... Species: Designation of a Nonessential Experimental Population of Central Valley Spring-Run Chinook...

  13. Whither Instructional Design and Teacher Training? The Need for Experimental Research

    ERIC Educational Resources Information Center

    Gropper, George L.

    2015-01-01

    This article takes a contrarian position: an "instructional design" or "teacher training" model, because of the sheer number of its interconnected parameters, is too complex to assess or to compare with other models. Models may not be the way to go just yet. This article recommends instead prior experimental research on limited…

  14. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  15. Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    Schlueter, Mark A.; D'Costa, Allison R.

    2013-01-01

    Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the…

  16. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates for the first time the combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of SD composition on the carbamazepine dissolution rate was evaluated using a d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization confirmed the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, with the ANN model exhibiting better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the strongest influence on the carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged. PMID:26065534
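
    As a hedged sketch of the workflow described above (mixture design points feeding a neural-network response model), the following uses invented design points and release values and scikit-learn's MLPRegressor; it is not the authors' model or data.

```python
# Hypothetical sketch of the mixture-design + ANN workflow described above.
# The component roles mirror the record (drug, polymer, surfactant), but the
# design points and "release" values are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Mixture design points: proportions of (drug, polymer, surfactant) sum to 1.
X = np.array([
    [0.10, 0.80, 0.10],
    [0.10, 0.60, 0.30],
    [0.20, 0.70, 0.10],
    [0.20, 0.50, 0.30],
    [0.30, 0.60, 0.10],
    [0.30, 0.40, 0.30],
    [0.15, 0.65, 0.20],
    [0.25, 0.55, 0.20],
])
y = np.array([92, 97, 80, 88, 65, 74, 90, 78])  # e.g. % released after 10 min (made up)

# Small multilayer perceptron used as a response model over the mixture simplex.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

candidate = np.array([[0.12, 0.58, 0.30]])  # low drug, high surfactant proportion
print("predicted Q10 ≈", model.predict(candidate)[0])
```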

  17. Designing Information Systems for User Abilities and Tasks: An Experimental Study.

    ERIC Educational Resources Information Center

    Allen, Bryce

    1998-01-01

    Reports on an experiment in which detailed logging of the use of experimental information systems was used to determine the optimal configuration of the systems for users. Results suggest that systems can be created for users by analysis of the interaction of design features with personal characteristics such as cognitive abilities. (Author/LRW)

  18. Whither Instructional Design and Teacher Training? The Need for Experimental Research

    ERIC Educational Resources Information Center

    Gropper, George L.

    2015-01-01

    This article takes a contrarian position: an "instructional design" or "teacher training" model, because of the sheer number of its interconnected parameters, is too complex to assess or to compare with other models. Models may not be the way to go just yet. This article recommends instead prior experimental research on limited

  19. Coupled inverse problems in groundwater modeling: 2. Identifiability and experimental design

    NASA Astrophysics Data System (ADS)

    Sun, Ne-Zheng; Yeh, William W.-G.

    1990-10-01

    Parameter identifiability and experimental design in the context of solving the coupled inverse problem in groundwater modeling are considered in part 2 of the two-paper series. The following three new definitions of extended identifiability for a distributed parameter system are presented: interval identifiability (INI), prediction equivalence identifiability (PEI), and management equivalence identifiability (MEI). The uniqueness requirement of the inverse solution is relaxed by these definitions. An identified parameter is said to be extendedly identifiable if it is able to satisfy an accuracy requirement in a specific application of the model. When a generalized least squares norm is used, the proposed identifiabilities are applicable to any coupled problems in which the state variables and parameters may have different orders of magnitude. A salient feature of the new definitions is that the extended identifiabilities are closely related to experimental design. Consequently, the sufficiency of an experimental design for assuring a given level of an extended identifiability can be estimated prior to field experiments. The methodology developed can be used to determine the reliability of an experimental design even for nonlinear models. A numerical example on the management of wastewater disposal is given to explain how the concepts and methods presented in this paper can be used for solving practical problems.

  20. Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    Schlueter, Mark A.; D'Costa, Allison R.

    2013-01-01

    Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the

  1. Self-Instructional Supplements for a Televised Physics Course, Study Plan and Experimental Design.

    ERIC Educational Resources Information Center

    Klaus, David J.; Lumsdaine, Arthur A.

    The initial phases of a study of self-instructional aids for a televised physics course were described. The approach, experimental design, procedure, and technical aspects of the study plan were included. The materials were prepared to supplement the second semester of high school physics. The material covered static and current electricity,…

  2. An Experimental Two-Way Video Teletraining System: Design, Development and Evaluation.

    ERIC Educational Resources Information Center

    Simpson, Henry; And Others

    1991-01-01

    Describes the design, development, and evaluation of an experimental two-way video teletraining (VTT) system by the Navy that consisted of two classrooms linked by a land line to enable two-way audio/video communication. Trends in communication and computer technology for training are described, and a cost analysis is included. (12 references)…

  3. Quiet Clean Short-haul Experimental Engine (QCSEE) Over The Wing (OTW) design report

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The design, fabrication, and testing of two experimental high bypass geared turbofan engines and propulsion systems for short haul passenger aircraft are described. The propulsion technology required for future externally blown flap aircraft with engines located both under the wing and over the wing is demonstrated. Composite structures and digital engine controls are among the topics included.

  4. Multiple Measures of Juvenile Drug Court Effectiveness: Results of a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Rodriguez, Nancy; Webb, Vincent J.

    2004-01-01

    Prior studies of juvenile drug courts have been constrained by small samples, inadequate comparison groups, or limited outcome measures. The authors report on a 3-year evaluation that examines the impact of juvenile drug court participation on recidivism and drug use. A quasi-experimental design is used to compare juveniles assigned to drug court…

  5. The Impact of the Hawthorne Effect in Experimental Designs in Educational Research. Final Report.

    ERIC Educational Resources Information Center

    Cook, Desmond L.

    Project objectives included (1) establishing a body of knowledge concerning the role of the Hawthorne effect in experimental designs in educational research, (2) assessing the influence of the Hawthorne effect on educational experiments conducted under varying conditions of control, (3) identifying the major components comprising the effect, and

  6. Comparison of Experimental Designs Used to Detect Changes in Yields of Crops Exposed to Acidic Precipitation

    EPA Science Inventory

    A comparison of experimental designs used to detect changes in yield of crops exposed to simulated acidic rain was performed. Seed yields were determined from field-grown soybeans(Glycine max) exposed to simulated rainfalls in which all ambient rainfalls were excluded by automati...

  7. A Course on Experimental Design for Different University Specialties: Experiences and Changes over a Decade

    ERIC Educational Resources Information Center

    Martinez Luaces, Victor; Velazquez, Blanca; Dee, Valerie

    2009-01-01

    We analyse the origin and development of an Experimental Design course which has been taught in several faculties of the Universidad de la Republica and other institutions in Uruguay, over a 10-year period. At the end of the course, students were assessed by carrying out individual work projects on real-life problems, which was innovative for…

  8. An Experimental Two-Way Video Teletraining System: Design, Development and Evaluation.

    ERIC Educational Resources Information Center

    Simpson, Henry; And Others

    1991-01-01

    Describes the design, development, and evaluation of an experimental two-way video teletraining (VTT) system by the Navy that consisted of two classrooms linked by a land line to enable two-way audio/video communication. Trends in communication and computer technology for training are described, and a cost analysis is included. (12 references)

  9. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    USGS Publications Warehouse

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to the development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for the experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish per container when estimating tagging mortality. We also investigated effects of control fish survival and how among-container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among-container variability in control fish mortality exceeded 10% of the mean. We provide programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
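
    The authors supply their own simulation code; the toy sketch below merely illustrates the stated trade-off (few large containers versus many small ones) under assumed survival rates and among-container variability.

```python
# Toy re-creation (not the authors' code) of the container trade-off: simulate
# control and treatment containers with among-container variability in survival
# and ask how often an additive treatment mortality is detected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def detect_rate(n_containers, fish_per, p_control=0.9, added_mortality=0.1,
                sd_among=0.05, n_sim=2000, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        pc = np.clip(rng.normal(p_control, sd_among, n_containers), 0, 1)
        pt = np.clip(rng.normal(p_control - added_mortality, sd_among, n_containers), 0, 1)
        surv_c = rng.binomial(fish_per, pc) / fish_per
        surv_t = rng.binomial(fish_per, pt) / fish_per
        # containers are the experimental units, so compare container-level survival
        _, p = stats.ttest_ind(surv_c, surv_t)
        hits += p < alpha
    return hits / n_sim

for n_containers, fish_per in [(3, 40), (6, 20), (12, 10)]:  # same total number of fish
    print(n_containers, "containers x", fish_per, "fish ->", detect_rate(n_containers, fish_per))
```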

  10. Thermodynamic model using experimental loss factors for dielectric elastomer actuator design

    NASA Astrophysics Data System (ADS)

    Lucking Bigué, J.-P.; Chouinard, P.; Denninger, M.; Proulx, S.; Plante, J.-S.

    2010-04-01

    Dielectric Elastomer Actuators (DEAs) are a promising actuation technology for mobile robotics due to their high force-to-weight ratio, their potential for high efficiencies, and their low cost. The preliminary design of such actuators requires a quick and precise assessment of actuator energy conversion performance. To do so, this paper proposes a simple thermodynamic model using experimentally acquired loss factors that predicts actuator mechanical work, energy consumption, and efficiency when operating under constant voltage and constant charge modes. Mechanical and electrical loss factors for both VHB 4905 (acrylic) and Nusil's CF19-2186 (silicone) are obtained by mapping the performances of cone-shaped DEAs over a broad range of actuator speeds, capacitance ratios, and applied voltages. Extensive experimental results reveal the main performance trends to follow for preliminary actuator design, which are explained by the proposed model. For the tested conditions, the maximum experimental brake efficiencies are ~35% and ~25% for VHB and CF19-2186, respectively.

  11. Design and structural verification of locomotive bogies using combined analytical and experimental methods

    NASA Astrophysics Data System (ADS)

    Manea, I.; Popa, G.; Girnita, I.; Prenta, G.

    2015-11-01

    The paper presents a practical methodology for the design and structural verification of locomotive bogie frames using a modern software package for design, structural verification, and validation through combined analytical and experimental methods. In the initial stage, the bogie geometry is imported from a CAD program into a finite element analysis program such as Ansys. The analytical model is validated by experimental modal analysis carried out on a finished bogie frame. The natural frequencies and mode shapes of the bogie frame are determined by both experimental and analytical methods, and a correlation analysis of the two types of models is performed. If the results are unsatisfactory, structural optimization is performed. If the results are satisfactory, the qualification procedure follows, with static and fatigue tests carried out in a laboratory with international accreditation in the field. An application to the bogie frames of the 6000 kW LEMA electric locomotive is presented.

  12. A three-phase series-parallel resonant converter -- analysis, design, simulation and experimental results

    SciTech Connect

    Bhat, A.K.S.; Zheng, L.

    1995-12-31

    A three-phase dc-to-dc series-parallel resonant converter is proposed and its operating modes for 180° wide gating pulse scheme are explained. A detailed analysis of the converter using constant current model and Fourier series approach is presented. Based on the analysis, design curves are obtained and a design example of 1 kW converter is given. SPICE simulation results for the designed converter and experimental results for a 500 W converter are presented to verify the performance of the proposed converter for varying load conditions. The converter operates in lagging PF mode for the entire load range and requires a narrow variation in switching frequency.

  13. A multi-purpose SAIL demonstrator design and its principle experimental verification

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Yan, Aimin; Xu, Nan; Wang, Lijuan; Luan, Zhu; Sun, Jianfeng; Liu, Liren

    2009-08-01

    A fully 2-D synthetic aperture imaging ladar (SAIL) demonstrator has been designed and is being fabricated to experimentally investigate and theoretically analyze the beam diffraction properties, antenna function, imaging resolution, and signal processing algorithm of SAIL. The design details of the multi-purpose SAIL demonstrator are given and, as the first phase, a laboratory-scale SAIL system based on bulk optical elements has been built to verify the design principle; it is similar in construction to the demonstrator but without the major antenna telescope. The system has an aperture diameter of about 1 mm and a target distance of 3.2 m.

  14. Conceptual design of a fast-ion D-alpha diagnostic on experimental advanced superconducting tokamak

    SciTech Connect

    Huang, J.; Wan, B.; Hu, L.; Hu, C.; Heidbrink, W. W.; Zhu, Y.; Hellermann, M. G. von; Gao, W.; Wu, C.; Li, Y.; Fu, J.; Lyu, B.; Yu, Y.; Ye, M.; Shi, Y.

    2014-11-15

    To investigate fast-ion behavior, a fast-ion D-alpha (FIDA) diagnostic system has been planned and is presently under development on the Experimental Advanced Superconducting Tokamak. The greatest challenge in designing a FIDA diagnostic is the extremely low signal intensity, which is usually significantly below the continuum radiation level and several orders of magnitude below the bulk-ion thermal charge-exchange feature. Moreover, an overlapping Motional Stark Effect (MSE) feature in exactly the same wavelength range can interfere. A spectral simulation code is used here to guide the design and evaluate the diagnostic performance. Details of the design parameters and hardware are presented.

  15. Design considerations and experimental results of a 100 W, 500 000 rpm electrical generator

    NASA Astrophysics Data System (ADS)

    Zwyssig, C.; Kolar, J. W.

    2006-09-01

    Mesoscale gas turbine generator systems are a promising solution for high energy and power density portable devices. This paper focuses on the design of a 100 W, 500 000 rpm generator suitable for use with a gas turbine. The design procedure selects the suitable machine type and bearing technology, and determines the electromagnetic characteristics. The losses caused by the high frequency operation are minimized by optimizing the winding and the stator core material. The final design is a permanent-magnet machine with a volume of 3 cm³, and experimental measurements from a test bench are presented.

  16. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments which were run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and part of a complete human operator simulation, and a brief introduction to the TLM software design.

  17. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    NASA Astrophysics Data System (ADS)

    Girault, Isabelle; d'Ham, Cédric

    2014-08-01

    When solving a scientific problem through experimentation, students may be given responsibility for designing the experiment. When students work in a conventional condition, with paper and pencil, the procedures they design stay at a very general level. Additional scaffolds are needed to help students perform this complex task. We propose a computer environment (copex-chimie) with embedded scaffolds to help students design an experimental procedure. Pre-structuring the procedure, in which students choose the actions of their procedure from pre-defined actions and specify their parameters, forces students to face the complexity of the design. However, this is not sufficient for them to succeed; they look for feedback to improve their procedure and eventually abandon the task. In another condition, students were provided with individualized feedback on the errors an artificial tutor detected in their procedures. This feedback proved necessary to accompany students throughout their experimental design without their becoming discouraged. With this kind of scaffold, students worked longer and succeeded at the task better than all the other students.

  18. Fertilizer Response Curves for Commercial Southern Forest Species Defined with an Un-Replicated Experimental Design.

    SciTech Connect

    Coleman, Mark; Aubrey, Doug; Coyle, David R.; Daniels, Richard F.

    2005-11-01

    There has been recent interest in the use of non-replicated regression experimental designs in forestry, as the need for replication in experimental design is burdensome on limited research budgets. We wanted to determine the interacting effects of soil moisture and nutrient availability on the production of various southeastern forest trees (two clones of Populus deltoides, open-pollinated Platanus occidentalis, Liquidambar styraciflua, and Pinus taeda). Additionally, we required an understanding of the fertilizer response curve. To accomplish both objectives we developed a composite design that includes a core ANOVA approach to consider treatment interactions, with the addition of non-replicated regression plots receiving a range of fertilizer levels for the primary irrigation treatment.

  19. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.

  20. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  1. Thermoelastic Femoral Stress Imaging for Experimental Evaluation of Hip Prosthesis Design

    NASA Astrophysics Data System (ADS)

    Hyodo, Koji; Inomoto, Masayoshi; Ma, Wenxiao; Miyakawa, Syunpei; Tateishi, Tetsuya

    An experimental system using the thermoelastic stress analysis method and a synthetic femur was used to perform reliable and convenient mechanical biocompatibility evaluation of hip prosthesis designs. The unique advantage of the thermoelastic stress analysis method over conventional techniques is its ability to image the whole-surface stress (Δ(σ1+σ2)) distribution of a specimen. The mechanical properties of synthetic femurs agreed well with those of cadaveric femurs, with little variability between specimens. We applied this experimental system to visualize the stress distribution of an intact femur and of femurs implanted with an artificial joint. The surface stress distribution of the femurs sensitively reflected the prosthesis design and the contact condition between the stem and the bone. By analyzing the relationship between the stress distribution and the clinical results of the artificial joint, this technique can be used for mechanical biocompatibility evaluation and pre-clinical performance prediction of new artificial joint designs.

  2. Design and Experimental Results for the S827 Airfoil; Period of Performance: 1998--1999

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A 21%-thick, natural-laminar-flow airfoil, the S827, for the 75% blade radial station of 40- to 50-meter, stall-regulated, horizontal-axis wind turbines has been designed and analyzed theoretically and verified experimentally in the NASA Langley Low-Turbulence Pressure Tunnel. The primary objective of restrained maximum lift has not been achieved, although the maximum lift is relatively insensitive to roughness, which meets the design goal. The airfoil exhibits a relatively docile stall, which meets the design goal. The primary objective of low profile drag has been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results generally show good agreement with the exception of maximum lift, which is significantly underpredicted.

  3. Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Haller, Harold S.

    2009-01-01

    It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four/five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
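
    A minimal sketch of the general idea, under invented factors and responses: fit one quadratic response surface to a pooled CFD plus experimental data set by ordinary least squares. The factor names and values below are hypothetical, not the study's.

```python
# Generic sketch (factors and data are hypothetical) of fitting a quadratic
# response surface to a combined CFD + experimental (CFD/EXP) data set with
# ordinary least squares, in the spirit of the DOE approach described above.
import numpy as np

# columns: two coded design factors; response: e.g. a recovery-type performance metric
X_cfd = np.array([[-1, -1], [-1, 0], [-1, 1], [0, -1], [0, 0],
                  [0, 1], [1, -1], [1, 0], [1, 1]], dtype=float)
y_cfd = np.array([0.90, 0.92, 0.91, 0.93, 0.95, 0.94, 0.91, 0.93, 0.92])
X_exp = np.array([[-1, -1], [1, 1], [0, 0], [1, -1]], dtype=float)
y_exp = np.array([0.89, 0.91, 0.94, 0.90])

# pool the CFD and experimental runs into one (CFD/EXP) data set
X = np.vstack([X_cfd, X_exp])
y = np.concatenate([y_cfd, y_exp])

def quad_terms(X):
    """Full quadratic model in two factors: 1, a, b, ab, a^2, b^2."""
    a, b = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
print("response-surface coefficients:", np.round(beta, 4))
```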

  4. Design optimization and experimental testing of the High-Flux Test Module of IFMIF

    NASA Astrophysics Data System (ADS)

    Leichtle, D.; Arbeiter, F.; Dolensky, B.; Fischer, U.; Gordeev, S.; Heinzel, V.; Ihli, T.; Moeslang, A.; Simakov, S. P.; Slobodchuk, V.; Stratmanns, E.

    2009-04-01

    The design of the High-Flux Test Module of the International Fusion Materials Irradiation Facility has been developed continuously in the past few years. The present paper highlights recent design achievements, including a thorough state-of-the-art validation assessment of CFD tools and models. Along with the design-related analyses, exercises on manufacturing procedures have been performed. Recommendations for the use of container, rig, and capsule materials as well as recent progress in the brazing of electrical heaters are discussed. A test matrix, starting from High-Flux Test Module compartments (i.e., segments of the full module) with heated dummy rigs up to the full-scale module with instrumented irradiation rigs, has been developed, and the appropriate helium gas loop has been designed conceptually. A roadmap of the envisaged experimental activities is presented in accordance with the test loop facility construction and mock-up design and fabrication schedules.

  5. Study and design of cryogenic propellant acquisition systems. Volume 2: Supporting experimental program

    NASA Technical Reports Server (NTRS)

    Burge, G. W.; Blackmon, J. B.

    1973-01-01

    Areas of cryogenic fuel systems were identified where critical experimental information was needed either to define a design criteria or to establish the feasibility of a design concept or a critical aspect of a particular design. Such data requirements fell into three broad categories: (1) basic surface tension screen characteristics; (2) screen acquisition device fabrication problems; and (3) screen surface tension device operational failure modes. To explore these problems and to establish design criteria where possible, extensive laboratory or bench test scale experiments were conducted. In general, these proved to be quite successful and, in many instances, the test results were directly used in the system design analyses and development. In some cases, particularly those relating to operational-type problems, areas requiring future research were identified, especially screen heat transfer and vibrational effects.

  6. Experimental investigation of two transonic linear turbine cascades at off-design conditions

    NASA Astrophysics Data System (ADS)

    Jouini, Dhafer Ben Mahmoud

    Detailed measurements have been made of the mid-span aerodynamic performance of two transonic turbine cascades at off-design conditions. The cascades investigated were a baseline cascade, designated HS1A, and a cascade with a modified leading edge design, designated HS1B. The measurements were for exit Mach numbers ranging from about 0.5 to about 1.2 and for Reynolds numbers from 4 × 10^5 to 10^6. The turbulence intensity in the test section and upstream of the cascade test section was about 4%. The profile losses were measured for the incidence values of -10°, 0.0°, +4.5°, +10.0°, and +14.5° relative to design. To aid in understanding the loss behaviour and to provide other insights into the flow physics, measurements of the blade loading, exit flow angles, trailing-edge base pressures, and the Axial Velocity Density Ratio (AVDR) were also made. The results showed that the profile losses at transonic Mach numbers can be closely related to the behaviour of the base pressure. The losses were also found to be affected by the AVDR. The AVDRs were found to decrease with increasing positive incidence. Moreover, the results from both cascades showed that the modifications to the leading edge geometry of the HS1B cascade were not successful in improving the blade performance at positive off-design incidence. Comparisons between the present experimental data and the available correlations in the open literature were also made. These comparisons included mid-span losses at design and off-design, and exit flow angles. It was found that further improvements can still be made to the existing correlations. Furthermore, the present experimental data represents a significant contribution to the data base of results available in the open literature for the development of new and improved correlations, particularly at transonic flow conditions, at both design and off-design conditions.

  7. D-Optimal Experimental Designs to Test for Departure from Additivity in a Fixed-Ratio Mixture Ray.

    EPA Science Inventory

    Humans are exposed to mixtures of environmental compounds. A regulatory assumption is that the mixtures of chemicals act in an additive manner. However, this assumption requires experimental validation. Traditional experimental designs (full factorial) require a large number of e...

  8. Taking evolutionary circuit design from experimentation to implementation: some useful techniques and a silicon demonstration

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Guo, X.; Keymeulen, D.; Ferguson, M. I.; Duong, V.

    2004-01-01

    Current techniques in evolutionary synthesis of analogue and digital circuits designed at transistor level have focused on achieving the desired functional response, without paying sufficient attention to issues needed for a practical implementation of the resulting solution. No silicon fabrication of circuits with topologies designed by evolution has been done before, leaving open questions on the feasibility of the evolutionary circuit design approach, as well as on how high-performance, robust, or portable such designs could be when implemented in hardware. It is argued that moving from evolutionary 'design-for-experimentation' to 'design-for-implementation' requires, beyond inclusion in the fitness function of measures indicative of circuit evaluation factors such as power consumption and robustness to temperature variations, the addition of certain evaluation techniques that are not common in conventional design. Several such techniques that were found to be useful in evolving designs for implementation are presented; some are general, and some are particular to the problem domain of transistor-level logic design, used here as a target application. The example used here is a multifunction NAND/NOR logic gate circuit, for which evolution obtained a creative circuit topology more compact than what has been achieved by multiplexing a NAND and a NOR gate. The circuit was fabricated in a 0.5 μm CMOS technology and silicon tests showed good correspondence with the simulations.

  9. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  10. Overview of the TIBER II (Tokamak Ignition/Burn Experimental Reactor) design

    SciTech Connect

    Henning, C.D.; Logan, B.G.

    1987-10-16

    The TIBER II Tokamak Ignition/Burn Experimental Reactor design is the result of efforts by numerous people and institutions, including many fusion laboratories, universities, and industries. While subsystems will be covered extensively in other reports, this overview will attempt to place the work in perspective. Major features of the design are compact size, low cost, and steady-state operation. These are achieved through plasma shaping and innovative features such as radiation tolerant magnets and optimized shielding. While TIBER II can operate in a pulsed mode, steady-state is preferred for nuclear testing. Current drive is achieved by a combination of lower hybrid and neutral beams. In addition, 10 MW of ECR is added for disruption control and current drive profiling. The TIBER II design has been the US option in preparation for the International Thermonuclear Experimental Reactor (ITER). Other equivalent national designs are the NET in Europe, the FER in Japan and the OTR in the USSR. These designs will help set the basis for the new international design effort. 9 refs.

  11. Design and experimental results for a flapped natural-laminar-flow airfoil for general aviation applications

    NASA Technical Reports Server (NTRS)

    Somers, D. M.

    1981-01-01

    A flapped natural laminar flow airfoil for general aviation applications, the NLF(1)-0215F, has been designed and analyzed theoretically and verified experimentally in the Langley Low Turbulence Pressure Tunnel. The basic objective of combining the high maximum lift of the NASA low speed airfoils with the low cruise drag of the NACA 6 series airfoils has been achieved. The safety requirement that the maximum lift coefficient not be significantly affected with transition fixed near the leading edge has also been met. Comparisons of the theoretical and experimental results show generally good agreement.

  12. Design and Experimental Results for a Natural-Laminar-Flow Airfoil for General Aviation Applications

    NASA Technical Reports Server (NTRS)

    Somers, D. M.

    1981-01-01

    A natural-laminar-flow airfoil for general aviation applications, the NLF(1)-0416, was designed and analyzed theoretically and verified experimentally in the Langley Low-Turbulence Pressure Tunnel. The basic objective of combining the high maximum lift of the NASA low-speed airfoils with the low cruise drag of the NACA 6-series airfoils was achieved. The safety requirement that the maximum lift coefficient not be significantly affected with transition fixed near the leading edge was also met. Comparisons of the theoretical and experimental results show excellent agreement. Comparisons with other airfoils, both laminar flow and turbulent flow, confirm the achievement of the basic objective.

  13. Analytical and experimental investigation of liquid double drop dynamics: Preliminary design for space shuttle experiments

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The preliminary grant assessed the use of laboratory experiments for simulating low g liquid drop experiments in the space shuttle environment. Investigations were begun of appropriate immiscible liquid systems, design of experimental apparatus and analyses. The current grant continued these topics, completed construction and preliminary testing of the experimental apparatus, and performed experiments on single and compound liquid drops. A continuing assessment of laboratory capabilities, and the interests of project personnel and available collaborators, led to, after consultations with NASA personnel, a research emphasis specializing on compound drops consisting of hollow plastic or elastic spheroids filled with liquids.

  14. RNA-seq Data: Challenges in and Recommendations for Experimental Design and Analysis

    PubMed Central

    Williams, Alexander G.; Thomas, Sean; Wyman, Stacia K.; Holloway, Alisha K.

    2014-01-01

    RNA-seq is widely used to determine differential expression of genes or transcripts as well as identify novel transcripts, identify allele-specific expression, and precisely measure translation of transcripts. Thoughtful experimental design and choice of analysis tools are critical to ensure high quality data and interpretable results. Important considerations for experimental design include number of replicates, whether to collect paired-end or single-end reads, sequence length, and sequencing depth. Common analysis steps in all RNA-seq experiments include quality control, read alignment, assigning reads to genes or transcripts, and estimating gene or transcript abundance. Our aims are two-fold: to make recommendations for common components of experimental design and assess tool capabilities for each of these steps. We also test tools designed to detect differential expression since this is the most widespread use of RNA-seq. We hope these analyses will help guide those who are new to RNA-seq and will generate discussion about remaining needs for tool improvement and development. PMID:25271838
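
    As a rough, hypothetical illustration of the replicate-number consideration mentioned above, the sketch below simulates negative-binomial counts for a single gene and estimates detection power for a two-fold change; real RNA-seq power calculations use dedicated tools and differ in detail.

```python
# Back-of-the-envelope replicate check (illustration only, not from the paper):
# simulate negative-binomial counts for one gene and see how often a 2-fold
# change is detected with a simple t-test on log counts, for several replicate numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(n_reps, mu=100, fold=2.0, dispersion=0.1, n_sim=2000, alpha=0.05):
    n_nb = 1.0 / dispersion  # negative-binomial shape parameter
    hits = 0
    for _ in range(n_sim):
        a = rng.negative_binomial(n_nb, n_nb / (n_nb + mu), n_reps)
        b = rng.negative_binomial(n_nb, n_nb / (n_nb + mu * fold), n_reps)
        _, p = stats.ttest_ind(np.log1p(a), np.log1p(b))
        hits += p < alpha
    return hits / n_sim

for n in (2, 3, 5, 8):
    print(f"{n} replicates per group -> approx power {power(n):.2f}")
```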

  15. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
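
    A minimal sketch of the underlying idea, assuming a semi-infinite solid heated by a constant surface flux: compute scaled sensitivity coefficients of temperature with respect to the unknown conductivity and volumetric heat capacity, then rank candidate sensor depths by a D-optimality-like determinant. The property values and flux are placeholders, not those of the study.

```python
# Minimal sketch (not the authors' code) of the idea behind the optimization:
# pick the experimental settings that maximize the sensitivity of the measured
# temperature to the unknown properties, here via a D-optimality-like criterion.
import numpy as np
from scipy.special import erfc

q = 2000.0  # applied heat flux, W/m^2 (assumed value for illustration)

def temp_rise(x, t, k, C):
    """Semi-infinite solid under constant surface flux; C is volumetric heat capacity."""
    alpha = k / C
    return ((2 * q / k) * np.sqrt(alpha * t / np.pi) * np.exp(-x**2 / (4 * alpha * t))
            - (q * x / k) * erfc(x / (2 * np.sqrt(alpha * t))))

def d_criterion(x_sensor, times, k=0.5, C=1.5e6, rel_step=1e-4):
    """det(S^T S) for the scaled sensitivity matrix of [k, C] at a candidate design."""
    base = np.array([k, C])
    S = np.zeros((len(times), 2))
    for j, p in enumerate(base):
        dp = p * rel_step
        hi, lo = base.copy(), base.copy()
        hi[j] += dp
        lo[j] -= dp
        # scaled (relative) sensitivity: parameter value times dT/d(parameter)
        S[:, j] = p * (temp_rise(x_sensor, times, *hi) - temp_rise(x_sensor, times, *lo)) / (2 * dp)
    return np.linalg.det(S.T @ S)

times = np.linspace(10, 600, 60)        # measurement times, s
for x in (0.002, 0.005, 0.010):         # candidate sensor depths, m
    print(f"sensor at {x * 1000:.0f} mm -> D-criterion {d_criterion(x, times):.3e}")
```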

  16. A Bayesian active learning strategy for sequential experimental design in systems biology.

    PubMed

    Pauwels, Edouard; Lajaunie, Christian; Vert, Jean-Philippe

    2014-09-26

    Background: Dynamical models used in systems biology involve unknown kinetic parameters. Setting these parameters is a bottleneck in many modeling projects. This motivates the estimation of these parameters from empirical data. However, this estimation problem has its own difficulties, the most important one being strong ill-conditionedness. In this context, optimizing experiments to be conducted in order to better estimate a system's parameters provides a promising direction to alleviate the difficulty of the task. Results: Borrowing ideas from Bayesian experimental design and active learning, we propose a new strategy for optimal experimental design in the context of kinetic parameter estimation in systems biology. We describe algorithmic choices that allow us to implement this method in a computationally tractable way and make it fully automatic. Based on simulation, we show that it outperforms alternative baseline strategies, and we demonstrate the benefit of considering multiple posterior modes of the likelihood landscape, as opposed to traditional schemes based on local and Gaussian approximations. Conclusion: This analysis demonstrates that our new, fully automatic Bayesian optimal experimental design strategy has the potential to support the design of experiments for kinetic parameter estimation in systems biology. PMID:25256134
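
    A much-simplified sketch of the sequential design idea (not the paper's algorithm): for a one-parameter exponential-decay model on a discretized parameter grid, pick the next measurement time that minimizes a Monte Carlo estimate of the expected posterior entropy.

```python
# Simplified sketch of Bayesian sequential experimental design: choose the next
# measurement time that minimizes the expected posterior entropy of a single
# kinetic parameter k in the toy model y(t) = exp(-k t) + noise.
import numpy as np

rng = np.random.default_rng(42)
k_grid = np.linspace(0.1, 2.0, 100)       # discretized parameter space
prior = np.full_like(k_grid, 1.0 / len(k_grid))
sigma = 0.05                               # measurement noise (assumed)

def model(k, t):
    return np.exp(-k * t)

def posterior(prior, t, y_obs):
    like = np.exp(-0.5 * ((y_obs - model(k_grid, t)) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy(prior, t, n_mc=300):
    ks = rng.choice(k_grid, size=n_mc, p=prior)          # sample parameters from prior
    ys = model(ks, t) + rng.normal(0, sigma, n_mc)        # simulate hypothetical data
    return np.mean([entropy(posterior(prior, t, y)) for y in ys])

candidates = np.linspace(0.2, 8.0, 25)
scores = [expected_entropy(prior, t) for t in candidates]
best = candidates[int(np.argmin(scores))]
print(f"next experiment: measure at t = {best:.2f}")
```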

  17. Intuitive Web-Based Experimental Design for High-Throughput Biomedical Data

    PubMed Central

    Friedrich, Andreas; Kenar, Erhan; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information, we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model. PMID:25954760
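
    As an illustration of the factor-based approach, the sketch below expands declared factors and replicates into a tab-separated sample sheet; the column names and factor levels are hypothetical and do not reflect the described system's actual format.

```python
# Minimal sketch of the factor-based idea: expand declared factors into a
# sample sheet with identifiers and metadata columns (names are hypothetical).
import csv
import io
from itertools import product

factors = {
    "genotype": ["wild_type", "mutant"],
    "treatment": ["control", "drug"],
    "timepoint_h": [0, 6, 24],
}
replicates = 3

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t")
writer.writerow(["sample_id", *factors.keys(), "replicate"])
for i, (combo, rep) in enumerate(
        product(product(*factors.values()), range(1, replicates + 1)), start=1):
    writer.writerow([f"S{i:03d}", *combo, rep])

print(buf.getvalue())
```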

  18. Multi-objective optimization in WEDM of D3 tool steel using integrated approach of Taguchi method & Grey relational analysis

    NASA Astrophysics Data System (ADS)

    Shivade, Anand S.; Shinde, Vasudev D.

    2014-09-01

    In this paper, wire electrical discharge machining of D3 tool steel is studied. The influence of pulse-on time, pulse-off time, peak current, and wire speed on MRR, dimensional deviation, gap current, and machining time is investigated during intricate machining of D3 tool steel. The Taguchi method is used for single-characteristic optimization, and to optimize all four process parameters simultaneously, Grey relational analysis (GRA) is employed along with the Taguchi method. Through GRA, the grey relational grade is used as a performance index to determine the optimal setting of process parameters for the multi-objective characteristic. Analysis of variance (ANOVA) shows that peak current is the most significant parameter affecting the multi-objective characteristic. Confirmation results prove the potential of GRA to successfully optimize process parameters for multi-objective characteristics.
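
    The following sketch shows the usual GRA computation pattern (normalization, grey relational coefficients with distinguishing coefficient 0.5, and the grade as their mean) on invented response values; it is not the paper's data or exact procedure.

```python
# Hedged sketch of the grey relational analysis (GRA) step: the response values
# below are invented; only the computation pattern follows the usual GRA recipe.
import numpy as np

# rows = experimental runs, columns = [MRR, dimensional deviation, machining time]
Y = np.array([
    [12.1, 0.031, 22.0],
    [15.4, 0.040, 18.5],
    [10.8, 0.025, 25.2],
    [14.0, 0.028, 20.1],
])
larger_is_better = np.array([True, False, False])

# Step 1: normalize each response to [0, 1]
norm = np.empty_like(Y, dtype=float)
for j in range(Y.shape[1]):
    col = Y[:, j]
    if larger_is_better[j]:
        norm[:, j] = (col - col.min()) / (col.max() - col.min())
    else:
        norm[:, j] = (col.max() - col) / (col.max() - col.min())

# Step 2: grey relational coefficients against the ideal sequence (all ones)
zeta = 0.5  # distinguishing coefficient
delta = np.abs(1.0 - norm)
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient per run; the highest grade wins
grade = grc.mean(axis=1)
print("grades:", np.round(grade, 3), "best run:", int(np.argmax(grade)) + 1)
```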

  19. A design of experiment study of plasma sprayed alumina-titania coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Riggs, W.L. II; Rotolico, A.J.; Nerz, J.E.

    1992-01-01

    An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters.

  20. A design of experiment study of plasma sprayed alumina-titania coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Riggs, W.L. II; Rotolico, A.J.; Nerz, J.E.

    1992-08-01

    An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters.
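
    As a generic illustration of the Taguchi fractional-factorial approach used in both records above, the sketch below pairs a standard L9(3^4) orthogonal array with made-up coating responses and a larger-the-better signal-to-noise ratio; the factor levels and values are hypothetical.

```python
# Illustrative sketch only: a standard L9(3^4) orthogonal array and a
# larger-the-better signal-to-noise calculation, with made-up responses
# standing in for coating attributes such as hardness or deposition efficiency.
import numpy as np

# L9 orthogonal array: 9 runs, 4 factors, 3 levels (coded 0, 1, 2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Two hypothetical replicates of a larger-the-better response per run
y = np.array([
    [62, 64], [70, 69], [66, 68],
    [71, 73], [75, 74], [69, 70],
    [65, 66], [72, 71], [74, 76],
], dtype=float)

# Larger-the-better S/N ratio: -10 * log10(mean(1 / y^2))
sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

# Average S/N per factor level to pick the best setting of each factor
for f in range(L9.shape[1]):
    means = [sn[L9[:, f] == level].mean() for level in (0, 1, 2)]
    print(f"factor {f + 1}: best level = {int(np.argmax(means))}, level means = {np.round(means, 2)}")
```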

  1. Development of objective-oriented groundwater models: 2. Robust experimental design

    NASA Astrophysics Data System (ADS)

    Sun, Ne-Zheng; Yeh, William W.-G.

    2007-02-01

    This paper continues the discussion in part 1 by considering the data collection strategy problem when the existing data are judged to be insufficient for constructing a reliable model. Designing an experiment for identifying a distributed parameter is very difficult because the identification of a more complex parameter structure requires more data. Moreover, without knowing the sufficiency of a design, finding an optimal design becomes meaningless. These difficulties can be avoided if we turn to the construction of objective-oriented models. The identifiability of a distributed parameter, as defined in this paper, contains the reducibility of parameter structure. Sufficient conditions for this kind of identifiability are given. When the structure error associated with a structure reduction is too large, these conditions may not be satisfied no matter how much data are collected. In this paper we formulate a new experimental design problem that consists of two objectives: minimizing the cost and maximizing the information content, with robustness and feasibility as constraints. We develop an algorithm that can find a cost-effective robust design for objective-oriented parameter identification. We also present a heuristic algorithm that can find a suboptimal design with less computational effort for real case studies. The proposed methodology is used to design a pumping test for identifying a distributed hydraulic conductivity. We verify the robustness of the obtained design by assuming that the true parameter may have continuous, discrete, random, and fractured structures. Finally, the presented procedure of constructing objective-oriented models is described step by step.

  2. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high strain composite wing structures (ε_ult = 6000 μin/in) for future Navy aircraft. These issues were concerned primarily with the structural integrity and durability of the innovative design concepts and manufacturing techniques which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system) as well as damage tolerance and survivability requirements. An extensive test effort consisting of a progressive series of coupon and major element tests was an integral part of this development effort, and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer/emerging composite materials and new or previously developed high strain wing designs to maximize structural efficiency and reduce fabrication costs was the subject of a subsequent preliminary design and experimental evaluation effort.

  3. Experimental investigation of undesired stable equilibria in pumpkin shape super-pressure balloon designs

    NASA Astrophysics Data System (ADS)

    Schur, W. W.

    2004-01-01

    Excess in skin material of a pneumatic envelope beyond what is required for minimum enclosure of a gas bubble is a necessary but by no means sufficient condition for the existence of multiple equilibrium configurations for that pneumatic envelope. The very design of structurally efficient super-pressure balloons of the pumpkin shape type requires such excess. Undesired stable equilibria have been observed on experimental pumpkin shape balloons. These configurations contain regions with stress levels far higher than those predicted for the cyclically symmetric design configuration under maximum pressurization. Successful designs of pumpkin shape super-pressure balloons do not allow such undesired stable equilibria under full pressurization. This work documents efforts made so far and describes efforts still underway by the National Aeronautics and Space Administration's Balloon Program Office to arrive at guidance on the design of pumpkin shape super-pressure balloons that guarantees full and proper deployment.

  4. Engineering at SLAC: Designing and constructing experimental devices for the Stanford Synchrotron Radiation Lightsource - Final Paper

    SciTech Connect

    Djang, Austin

    2015-08-22

    Thanks to the versatility of the beam lines at SSRL, research there is varied and benefits multiple fields. Each experiment requires a particular set of experimental equipment, which in turn requires its own particular assembly. As such, new engineering challenges arise from each new experiment. My role as an engineering intern has been to help solve these challenges by designing and assembling experimental devices. My first project was to design a heated sample holder, which will be used to investigate the effect of temperature on a sample's x-ray diffraction pattern. My second project was to help set up an imaging test, which involved designing a cooled grating holder and assembling multiple positioning stages. My third project was designing a 3D-printed pencil holder for the SSRL workstations.

  5. A Bayesian experimental design approach to structural health monitoring with application to ultrasonic guided waves

    NASA Astrophysics Data System (ADS)

    Flynn, Eric Brian

    The dissertation will present the application of a Bayesian experimental design framework to structural health monitoring (SHM). When applied to SHM, Bayesian experimental design (BED) is founded on the minimization of the expected loss, i.e., Bayes Risk, of the SHM process through the optimization of the detection algorithm and system hardware design parameters. This expected loss is a function of the detector and system design, the cost of decision/detection error, and the distribution of prior probabilities of damage. While the presented framework is general to all SHM applications, particular attention is paid to guided wave-based SHM (GWSHM). GWSHM is the process of exciting user-defined mechanical waves in plate or beam-like structures and sensing the response in order to identify damage, which manifests itself though scattering and attenuation of the traveling waves. Using the BED framework, both a detection-centric and a localization-centric optimal detector are derived for GWSHM based on likelihood tests. In order to objectively evaluate the performance in practical terms for the users of SHM systems, the dissertation will introduce three new statistics-based tools: the Bayesian combined receiver operating characteristic (BCROC) curve, the localization probability density (LPDF) estimate, and the localizer operating characteristic (LOC) curve. It will demonstrate the superior performance of the BED-based detectors over existing GWSHM algorithms through application to a geometrically complex test structure. Next, the BED framework is used to establish both a model-based and data-driven system design process for GWSHM to ascertain the optimal placement of both actuators and sensors according to application-specific decision error cost functions. This design process considers, among other things, non-uniform probabilities of damage, non-symmetric scatterers, the optimization of both sensor placement and sensor count, and robustness to sensor failure. The sensor placement design process is demonstrated and verified using several hypothetical and real-world design scenarios.
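
    A compact sketch of the Bayes-risk criterion at the core of this framework, with assumed Gaussian damage-index distributions, priors, and costs: choose the detection threshold that minimizes the expected loss.

```python
# Sketch of the core Bayes-risk idea (costs, priors, and distributions are
# invented): pick the detection threshold that minimizes
# R = C_fa * P(alarm | healthy) * P(healthy) + C_miss * P(miss | damaged) * P(damaged).
import numpy as np
from scipy import stats

p_damage = 0.01            # prior probability of damage (assumed)
c_false_alarm = 1.0        # cost of a false alarm (assumed)
c_missed_detection = 50.0  # cost of a missed detection (assumed)

# Damage-index distributions under the two hypotheses (assumed Gaussian)
healthy = stats.norm(0.0, 1.0)
damaged = stats.norm(2.5, 1.2)

thresholds = np.linspace(-2, 6, 400)
risk = (c_false_alarm * healthy.sf(thresholds) * (1 - p_damage)
        + c_missed_detection * damaged.cdf(thresholds) * p_damage)

best = thresholds[np.argmin(risk)]
print(f"Bayes-optimal threshold ≈ {best:.2f}, expected loss ≈ {risk.min():.4f}")
```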

  6. Active vibration absorber for the CSI evolutionary model - Design and experimental results. [Controls Structures Interaction

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control of large flexible structures technology must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility has been developed to study practical implementation of new control technologies under realistic conditions. The paper discusses the design of a second order, acceleration feedback controller which acts as an active vibration absorber. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. Experimental results in the presence of these factors are presented and discussed. The robustness of this design under model uncertainty is demonstrated.

  7. Designation and Implementation of Microcomputer Principle and Interface Technology Virtual Experimental Platform Website

    NASA Astrophysics Data System (ADS)

    Gao, JinYue; Tang, Yin

    This paper discusses the design and implementation approach for the Microcomputer Principle and Interface Technology virtual experimental platform website. The instructional design of the platform mainly follows the student-oriented constructivist learning theory, and the overall structure is organized around the teaching aims, teaching contents, and interactive methods. The production and development of the virtual experiment platform should fully take the characteristics of network operation into consideration and adopt relevant technologies to improve the effectiveness and speed of networked software on the internet.

  8. A bioinspired design principle for DNA nanomotors: mechanics-mediated symmetry breaking and experimental demonstration.

    PubMed

    Cheng, Juan; Sreelatha, Sarangapani; Loh, Iong Ying; Liu, Meihan; Wang, Zhisong

    2014-05-15

    DNA nanotechnology is a powerful tool to fabricate nanoscale motors, but the DNA nanomotors to date are largely limited to the simplistic burn-the-bridge design principle that prevents re-use of a fabricated motor-track system and is unseen in biological nanomotors. Here we propose and experimentally demonstrate a scheme to implement a conceptually new design principle by which a symmetric bipedal nanomotor autonomously gains a direction not by damaging the traversed track but by fine-tuning the motor's size. PMID:24602841

  9. An experimental investigation of two 15 percent-scale wind tunnel fan-blade designs

    NASA Technical Reports Server (NTRS)

    Signor, David B.

    1988-01-01

    An experimental 3-D investigation of two fan-blade designs was conducted. The fan blades tested were 15 percent-scale models of blades to be used in the fan drive of the National Full-Scale Aerodynamic Complex at NASA Ames Research Center. The sections were NACA 65-series and modified NACA 65-series; the modified sections incorporated increased thickness on the upper surface between the leading edge and the half-chord position. Twist and taper were the same for both blade designs. The fan blades with modified 65-series sections were found to have an increased stall margin when compared with the unmodified blades.

  10. Experimental design in supercritical fluid extraction of cocaine from coca leaves.

    PubMed

    Brachet, A; Christen, P; Gauvrit, J Y; Longeray, R; Lantéri, P; Veuthey, J L

    2000-07-01

    An optimisation procedure for the supercritical fluid extraction (SFE) of cocaine from the leaves of Erythroxylum coca var. coca was investigated by means of experimental design. After preliminary experiments in which the SFE rate-controlling mechanism was determined, a central composite design was applied to evaluate interactions between selected SFE factors such as pressure, temperature, and the nature and percentage of the polar modifier, as well as to optimise these factors. Predicted and experimental contents of cocaine were compared, and the robustness of the extraction method was estimated by drawing response surfaces. The analysis of cocaine in crude extracts was carried out by capillary GC equipped with a flame ionisation detector (GC-FID), as well as by capillary GC coupled with a mass spectrometer (GC-MS) for peak identification. PMID:10869687
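
    As an illustration of the kind of design used here, the following sketch constructs a face-centred central composite design in three coded factors (pressure, temperature, modifier percentage); the factor names and the face-centred variant are assumptions for illustration, not details taken from the paper.

```python
from itertools import product
import numpy as np

# Coded factors (assumed names): pressure, temperature, modifier fraction.
factors = ["pressure", "temperature", "modifier_pct"]
k = len(factors)

# Full-factorial corner points at coded levels -1/+1.
corners = np.array(list(product([-1, 1], repeat=k)), dtype=float)

# Face-centred axial ("star") points: one factor at -1/+1, the others at 0.
axial = []
for i in range(k):
    for level in (-1.0, 1.0):
        pt = np.zeros(k)
        pt[i] = level
        axial.append(pt)
axial = np.array(axial)

# Replicated centre points to estimate pure error.
centre = np.zeros((3, k))

design = np.vstack([corners, axial, centre])
print(f"{len(design)} runs")        # 2^3 + 2*3 + 3 = 17 runs
for run in design:
    print(dict(zip(factors, run)))
```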

  11. Theoretical and Experimental Investigation of Mufflers with Comments on Engine-Exhaust Muffler Design

    NASA Technical Reports Server (NTRS)

    Davis, Don D., Jr.; Stokes, George M.; Moore, Dewey; Stevens, George L., Jr.

    1954-01-01

    Equations are presented for the attenuation characteristics of single-chamber and multiple-chamber mufflers of both the expansion-chamber and resonator types, for tuned side-branch tubes, and for the combination of an expansion chamber with a resonator. Experimental curves of attenuation plotted against frequency are presented for 77 different mufflers with a reflection-free tailpipe termination. The experiments were made at room temperature without flow; the sound source was a loud-speaker. A method is given for including the tailpipe reflections in the calculations. Experimental attenuation curves are presented for four different muffler-tailpipe combinations, and the results are compared with the theory. The application of the theory to the design of engine-exhaust mufflers is discussed, and charts are included for the assistance of the designer.
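
    For orientation, the sketch below evaluates the standard plane-wave, no-flow textbook expression for the transmission loss of a single expansion chamber, the simplest of the muffler types treated in reports of this kind; the area ratio and chamber length are arbitrary illustrative values rather than dimensions from the report.

```python
import numpy as np

def expansion_chamber_TL(freq_hz, area_ratio, length_m, c=343.0):
    """Transmission loss (dB) of a single expansion chamber from the
    standard plane-wave, no-flow textbook result."""
    k = 2.0 * np.pi * freq_hz / c   # acoustic wavenumber
    m = area_ratio                   # chamber-to-pipe area ratio
    return 10.0 * np.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2
                           * np.sin(k * length_m) ** 2)

for f in np.linspace(20.0, 2000.0, 10):
    tl = expansion_chamber_TL(f, area_ratio=4.0, length_m=0.3)
    print(f"{f:7.1f} Hz  TL = {tl:5.1f} dB")
```

    The attenuation falls to zero whenever kL is a multiple of pi, which is one reason chambers are combined or supplemented with tuned resonators and side branches in practical designs.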

  12. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  13. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems, already known as rainwater infiltration techniques, which reduce runoff formation and diffuse pollution in cities. The present research is focused on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to measure the levels of the stored water and its quality over time. Six different pervious surfaces are combined with four different geotextiles in order to test which materials best maintain good rainwater quality in storage over time under the specific weather conditions of the north of Spain. The aim of this research was to obtain good performance from pervious pavements that simultaneously offered a positive urban service and helped to harvest rainwater of sufficient quality to be used for non-potable demands. PMID:22020491

  14. Design and Experimental Results for the S825 Airfoil; Period of Performance: 1998-1999

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A 17%-thick, natural-laminar-flow airfoil, the S825, for the 75% blade radial station of 20- to 40-meter, variable-speed and variable-pitch (toward feather), horizontal-axis wind turbines has been designed and analyzed theoretically and verified experimentally in the NASA Langley Low-Turbulence Pressure Tunnel. The two primary objectives, high maximum lift relatively insensitive to roughness and low profile drag, have been achieved. The airfoil exhibits a rapid, trailing-edge stall, which does not meet the design goal of a docile stall. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results generally show good agreement.

  15. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    USGS Publications Warehouse

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  16. Pliocene Model Intercomparison (PlioMIP) Phase 2: scientific objectives and experimental design

    NASA Astrophysics Data System (ADS)

    Haywood, A. M.; Dowsett, H. J.; Dolan, A. M.; Rowley, D.; Abe-Ouchi, A.; Otto-Bliesner, B.; Chandler, M. A.; Hunter, S. J.; Lunt, D. J.; Pound, M.; Salzmann, U.

    2015-08-01

    The Pliocene Model Intercomparison Project (PlioMIP) is a co-ordinated international climate modelling initiative to study and understand climate and environments of the Late Pliocene, and their potential relevance in the context of future climate change. PlioMIP operates under the umbrella of the Palaeoclimate Modelling Intercomparison Project (PMIP), which examines multiple intervals in Earth history, the consistency of model predictions in simulating these intervals and their ability to reproduce climate signals preserved in geological climate archives. This paper provides a thorough model intercomparison project description, and documents the experimental design in a detailed way. Specifically, this paper describes the experimental design and boundary conditions that will be utilised for the experiments in Phase 2 of PlioMIP.

  17. Self-healing in segmented metallized film capacitors: Experimental and theoretical investigations for engineering design

    NASA Astrophysics Data System (ADS)

    Belko, V. O.; Emelyanov, O. A.

    2016-01-01

    A significant increase in the efficiency of modern metallized film capacitors has been achieved by the application of special segmented nanometer-thick electrodes. The proper design of the electrode segmentation guarantees the best efficiency of the capacitor's self-healing (SH) ability. Meanwhile, the reported theoretical and experimental results have not led to the commonly accepted model of the SH process, since the experimental SH dissipated energy value is several times higher than the calculated one. In this paper, we show that the difference is caused by the heat outflow into polymer film. Based on this, a mathematical model of the metallized electrode destruction is developed. These insights in turn are leading to a better understanding of the SH development. The adequacy of the model is confirmed by both the experiments and the numerical calculations. A procedure of optimal segmented electrode design is offered.

  18. Experimental evaluation of the Battelle accelerated test design for the solar array at Mead, Nebraska

    NASA Technical Reports Server (NTRS)

    Frickland, P. O.; Repar, J.

    1982-01-01

    A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.

  19. Survey of the Quality of Experimental Design, Statistical Analysis and Reporting of Research Using Animals

    PubMed Central

    Kilkenny, Carol; Parsons, Nick; Kadyszewski, Ed; Festing, Michael F. W.; Cuthill, Innes C.; Fry, Derek; Hutton, Jane; Altman, Douglas G.

    2009-01-01

    For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results, and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications, about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%), to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals. Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and journal editors share the responsibility to ensure that published studies fulfil these criteria. PMID:19956596

  20. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1987-10-31

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis.

  1. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...

  2. Experimental design and analysis for accelerated degradation tests with Li-ion cells.

    SciTech Connect

    Doughty, Daniel Harvey; Thomas, Edward Victor; Jungst, Rudolph George; Roth, Emanuel Peter

    2003-08-01

    This document describes a general protocol (involving both experimental and data analytic aspects) that is designed to be a roadmap for rapidly obtaining a useful assessment of the average lifetime (at some specified use conditions) that might be expected from cells of a particular design. The proposed experimental protocol involves a series of accelerated degradation experiments. Through the acquisition of degradation data over time specified by the experimental protocol, an unambiguous assessment of the effects of accelerating factors (e.g., temperature and state of charge) on various measures of the health of a cell (e.g., power fade and capacity fade) will result. In order to assess cell lifetime, it is necessary to develop a model that accurately predicts degradation over a range of the experimental factors. In general, it is difficult to specify an appropriate model form without some preliminary analysis of the data. Nevertheless, assuming that the aging phenomenon relates to a chemical reaction with simple first-order rate kinetics, a data analysis protocol is also provided to construct a useful model that relates performance degradation to the levels of the accelerating factors. This model can then be used to make an accurate assessment of the average cell lifetime. The proposed experimental and data analysis protocols are illustrated with a case study involving the effects of accelerated aging on the power output from Gen-2 cells. For this case study, inadequacies of the simple first-order kinetics model were observed. However, a more complex model allowing for the effects of two concurrent mechanisms provided an accurate representation of the experimental data.
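
    A minimal sketch of this kind of analysis, assuming first-order degradation kinetics whose rate constant follows an Arrhenius temperature dependence; the data points, the rate-law form, and the use temperature are invented for illustration and are not the Gen-2 cell results.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol K)

def power_fade(X, A, Ea):
    """Fraction of power lost, assuming first-order kinetics with an
    Arrhenius-type rate constant k(T) = A * exp(-Ea / (R T))."""
    t_weeks, T_kelvin = X
    k = A * np.exp(-Ea / (R * T_kelvin))
    return 1.0 - np.exp(-k * t_weeks)

# Made-up accelerated-aging observations: time (weeks), temperature (K), fade.
t = np.array([4, 8, 16, 32, 4, 8, 16, 32], dtype=float)
T = np.array([318, 318, 318, 318, 333, 333, 333, 333], dtype=float)
fade = np.array([0.02, 0.04, 0.08, 0.15, 0.05, 0.10, 0.18, 0.33])

(A_fit, Ea_fit), _ = curve_fit(power_fade, (t, T), fade, p0=(1e6, 5e4))

# Extrapolate to a nominal use temperature (298 K) and estimate time to 20% fade.
k_use = A_fit * np.exp(-Ea_fit / (R * 298.0))
t20 = -np.log(1.0 - 0.20) / k_use
print(f"fitted Ea ~ {Ea_fit/1000:.0f} kJ/mol; weeks to 20% fade at 25 C ~ {t20:.0f}")
```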

  3. A computer program for enzyme kinetics that combines model discrimination, parameter refinement and sequential experimental design.

    PubMed Central

    Franco, R; Gavaldà, M T; Canela, E I

    1986-01-01

    A method of model discrimination and parameter estimation in enzyme kinetics is proposed. The experimental design and analysis of the model are carried out simultaneously and the stopping rule for experimentation is deduced by the experimenter when the probabilities a posteriori indicate that one model is clearly superior to the rest. A FORTRAN77 program specifically developed for joint designs is given. The method is very powerful, as indicated by its usefulness in the discrimination between models. For example, it has been successfully applied to three cases of enzyme kinetics (a single-substrate Michaelian reaction with product inhibition, a single-substrate complex reaction and a two-substrate reaction). By using this method the most probable model and the estimates of the parameters can be obtained in one experimental session. The FORTRAN77 program is deposited as Supplementary Publication SUP 50134 (19 pages) at the British Library (Lending Division), Boston Spa, Wetherby, West Yorkshire LS23 7BQ, U.K., from whom copies can be obtained on the terms indicated in Biochem. J. (1986) 233, 5. PMID:3800965

  4. Life on rock. Scaling down biological weathering in a new experimental design at Biosphere-2

    NASA Astrophysics Data System (ADS)

    Zaharescu, D. G.; Dontsova, K.; Burghelea, C. I.; Chorover, J.; Maier, R.; Perdrial, J. N.

    2012-12-01

    Biological colonization and weathering of bedrock on Earth is a major driver of landscape and ecosystem development, with effects reaching into other major systems such as climate and the geochemical cycles of elements. In order to understand how microbe-plant-mycorrhizae communities interact with bedrock in the first phases of mineral weathering, we developed a novel experimental design in the Desert Biome at Biosphere-2, University of Arizona (U.S.A.). This presentation will focus on the development of the experimental setup. Briefly, six enclosed modules were designed to hold 288 experimental columns that will accommodate 4 rock types and 6 biological treatments. Each module is developed on 3 levels. A lower volume, able to withstand the weight of both the rock material and the rest of the structure, accommodates the sampling elements. A middle volume houses the experimental columns in a dark chamber. A clear upper section forms the habitat exposed to sunlight. This volume is completely sealed from the exterior, allowing complete control of its air and water parameters. All modules are connected in parallel to a double air-purification system that delivers a permanent air flow. This setup is expected to provide a model experiment able to test important processes in the rock-life interaction at grain-to-molecular scale.

  5. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies are needed for a class of systems which may be termed Experimental Space Systems (ESS). These systems, which include large space antennas and observatories, space platforms, Earth satellites and deep-space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoff, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  6. Experimental investigation of damage behavior of RC frame members including non-seismically designed columns

    NASA Astrophysics Data System (ADS)

    Chen, Linzhi; Lu, Xilin; Jiang, Huanjun; Zheng, Jianbo

    2009-06-01

    Reinforced concrete (RC) frame structures are one of the most commonly used structural systems, and their seismic performance is largely determined by the performance of columns and beams. This paper describes horizontal cyclic loading tests of ten column and three beam specimens, some of which were designed according to the current seismic design code and others according to the early, non-seismic Chinese design code, aiming to explain the behavior of the damaged or collapsed RC frame structures observed during the Wenchuan earthquake. The effects of axial load ratio, shear span ratio, and transverse and longitudinal reinforcement ratio on hysteresis behavior, ductility and damage progression were incorporated in the experimental study. Test results indicate that the non-seismically designed columns show premature shear failure, and yield larger maximum residual crack widths and more concrete spalling than the seismically designed columns. In addition, longitudinal reinforcement bars were severely buckled. The axial load ratio and shear span ratio proved to be the most important factors affecting the ductility, crack opening width and closing ability, while the longitudinal reinforcement ratio had only a minor effect on column ductility but exhibited more influence on beam ductility. Finally, the transverse reinforcement ratio did not influence the maximum residual crack width or the closing ability of the seismically designed columns.

  7. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information gained about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration, which motivates this study. Our first synthetic test, using a single OF, shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments, exploring the influence of noise and specific site characteristics as well as its potential for reservoir monitoring.
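
    The following sketch illustrates the linearized-inverse-theory half of such an approach: candidate receiver layouts are scored by the log-determinant of the Fisher information built from a toy sensitivity (Jacobian) matrix. The sensitivity model, noise level, and layouts are invented; in a real implementation the Jacobian would come from the CSEM forward model, and this score would be one of the OFs handed to the multi-objective GA.

```python
import numpy as np

def sensitivity_matrix(offsets_m):
    """Toy Jacobian: sensitivity of the measured field at each source-receiver
    offset to three model parameters (e.g. resistivities of three layers).
    The exponential decay constants are invented for illustration only."""
    offsets = np.asarray(offsets_m, dtype=float)[:, None]
    decay = np.array([500.0, 1500.0, 4000.0])   # one column per parameter
    return np.exp(-offsets / decay)

def design_quality(offsets_m, noise_std=0.05):
    """D-type criterion of the linearized problem: log det of J^T C_d^-1 J."""
    J = sensitivity_matrix(offsets_m)
    info = J.T @ J / noise_std**2               # Fisher information matrix
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf

layout_a = [200, 400, 600, 800, 1000]           # receivers clustered near the source
layout_b = [500, 1500, 3000, 5000, 8000]        # receivers spread over long offsets
print("clustered layout :", design_quality(layout_a))
print("spread-out layout:", design_quality(layout_b))
```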

  8. A Revised Design for Microarray Experiments to Account for Experimental Noise and Uncertainty of Probe Response

    PubMed Central

    Pozhitkov, Alex E.; Noble, Peter A.; Bryk, Jarosław; Tautz, Diethard

    2014-01-01

    Background: Although microarrays are standard analysis tools in biomedical research, they are known to yield noisy output that usually requires experimental confirmation. To tackle this problem, many studies have developed rules for optimizing probe design and devised complex statistical tools to analyze the output. However, less emphasis has been placed on systematically identifying the noise component as part of the experimental procedure. One source of noise is the variance in probe binding, which can be assessed by replicating array probes. The second source is poor probe performance, which can be assessed by calibrating the array based on a dilution series of target molecules. Using model experiments for copy number variation and gene expression measurements, we investigate here a revised design for microarray experiments that addresses both of these sources of variance. Results: Two custom arrays were used to evaluate the revised design: one based on 25-mer probes from an Affymetrix design and the other based on 60-mer probes from an Agilent design. To assess experimental variance in probe binding, all probes were replicated ten times. To assess probe performance, the probes were calibrated using a dilution series of target molecules and the signal response was fitted to an adsorption model. We found that significant variance of the signal could be controlled by averaging across probes and removing probes that are nonresponsive or poorly responsive in the calibration experiment. Taking this into account, one can obtain a more reliable signal, with the added option of obtaining absolute rather than relative measurements. Conclusion: The assessment of technical variance within the experiments, combined with the calibration of probes, allows the removal of poorly responding probes and yields more reliable signals for the remaining ones. Once an array is properly calibrated, absolute quantification of signals becomes straightforward, alleviating the need for normalization and reference hybridizations. PMID:24618910
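
    A minimal sketch of the calibration step described above: a Langmuir-type adsorption model is fitted to a probe's dilution-series response, and the probe is kept or discarded based on fit quality and dynamic range. The concentrations, signals, and acceptance thresholds are invented for illustration and are not the published calibration model or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, s_max, K):
    """Simple adsorption (Langmuir-type) signal response used here as a
    stand-in for the calibration model described in the abstract."""
    return s_max * conc / (K + conc)

# Dilution series of the target (arbitrary concentration units) and the
# signal measured for one probe, averaged over its replicate spots.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
signal = np.array([120.0, 340.0, 900.0, 1900.0, 3100.0, 3800.0])

(s_max, K), cov = curve_fit(langmuir, conc, signal, p0=(4000.0, 3.0))
resid = signal - langmuir(conc, s_max, K)
r2 = 1.0 - np.sum(resid**2) / np.sum((signal - signal.mean())**2)

# Flag non-responsive or poorly responsive probes by fit quality and dynamic range.
responsive = (r2 > 0.95) and (signal.max() / signal.min() > 5.0)
print(f"s_max={s_max:.0f}, K={K:.2f}, R^2={r2:.3f}, keep probe: {responsive}")
```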

  9. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, often increasing the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can itself be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment to the design concept is quantitatively determined. A technique is then established to assimilate these data and produce posteriori uncertainties on key attributes and responses of the design concept. Several experiment perturbations based on engineering judgment are used to demonstrate these methods and also serve as an initial generation of the optimization problem. Finally, an optimization technique is developed which will simultaneously arrive at an optimized experiment to produce an optimized reactor design. Solution of this problem is made possible by the use of the simulated annealing algorithm. The optimization examined in this work is based on maximizing the reactor cost savings associated with the modified design made possible by using the design margin gained through reduced basic nuclear data uncertainties. Cost values for experiment design specifications and reactor design specifications are established and used to compute a total savings by comparing the posteriori reactor cost to the a priori cost plus the cost of the experiment. The optimized solution arrives at a maximized cost savings.
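
    The simulated annealing step can be sketched as below; the objective is a stand-in for the net savings (posterior reactor cost savings minus experiment cost), since the real evaluation requires the coupled reactor and experiment models described in the abstract. The design vector, perturbation size, and cooling schedule are all assumptions.

```python
import math
import random

def net_savings(design):
    """Placeholder objective: net savings as a function of a coded
    experiment-design vector.  The quadratic form is invented purely for
    illustration; a real objective would run the coupled models."""
    target = [0.3, -0.2, 0.8, 0.1]
    return -sum((x - t) ** 2 for x, t in zip(design, target))

def simulated_annealing(dim=4, steps=5000, t0=1.0, cooling=0.999):
    random.seed(0)
    current = [random.uniform(-1, 1) for _ in range(dim)]
    cur_val = net_savings(current)
    best, best_val = current[:], cur_val
    temp = t0
    for _ in range(steps):
        # Propose a small random perturbation of one design specification.
        cand = current[:]
        i = random.randrange(dim)
        cand[i] += random.gauss(0.0, 0.1)
        cand_val = net_savings(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if cand_val > cur_val or random.random() < math.exp((cand_val - cur_val) / temp):
            current, cur_val = cand, cand_val
            if cur_val > best_val:
                best, best_val = current[:], cur_val
        temp *= cooling
    return best, best_val

print(simulated_annealing())
```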

  10. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins by Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins, using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxin production, whereas FeSO₄ and MnSO₄ showed the opposite effect. The developed model, based on Bayesian techniques, can automatically learn patterns emerging in the data to serve in the prediction of delta-endotoxin concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with Bayesian network methods could be used to identify the variables that affect delta-endotoxin variation. PMID:26689874
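
    A small sketch of an 8-run, 7-factor two-level screening design of the Plackett-Burman/Hadamard type, with main effects estimated as contrasts between the +1 and -1 level means; only the factor names follow the abstract, and the responses are invented for illustration.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# 8-run, 7-factor two-level screening design (coded -1/+1 levels).
factors = ["soybean_meal", "starch", "KH2PO4", "K2HPO4", "FeSO4", "MnSO4", "MgSO4"]
X = hadamard(8)[:, 1:]          # drop the all-ones column; 7 factor columns remain

# Hypothetical delta-endotoxin responses for the 8 runs (illustration only).
y = np.array([310.0, 150.0, 420.0, 260.0, 180.0, 90.0, 350.0, 240.0])

# Main effect of each factor = mean response at +1 minus mean response at -1.
effects = {f: y[X[:, j] == 1].mean() - y[X[:, j] == -1].mean()
           for j, f in enumerate(factors)}
for f, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:12s} {e:+7.1f}")
```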

  11. A three-phase series-parallel resonant converter -- analysis, design, simulation, and experimental results

    SciTech Connect

    Bhat, A.K.S.; Zheng, R.L.

    1996-07-01

    A three-phase dc-to-dc series-parallel resonant converter is proposed, and its operating modes for a 180° wide gating pulse scheme are explained. A detailed analysis of the converter using a constant-current model and the Fourier series approach is presented. Based on the analysis, design curves are obtained and a design example of a 1-kW converter is given. SPICE simulation results for the designed converter and experimental results for a 500-W converter are presented to verify the performance of the proposed converter under varying load conditions. The converter operates in lagging power factor (PF) mode for the entire load range and requires only a narrow variation in switching frequency to adequately regulate the output power.

  12. Randomized block experimental designs can increase the power and reproducibility of laboratory animal experiments.

    PubMed

    Festing, Michael F W

    2014-01-01

    Randomized block experimental designs have been widely used in agricultural and industrial research for many decades. Usually they are more powerful, have higher external validity, are less subject to bias, and produce more reproducible results than the completely randomized designs typically used in research involving laboratory animals. Reproducibility can be further increased by using time as a blocking factor. These benefits can be achieved at no extra cost. A small experiment investigating the effect of an antioxidant on the activity of a liver enzyme in four inbred mouse strains, which had two replications (blocks) separated by a period of two months, illustrates this approach. The widespread failure to use these designs more widely in research involving laboratory animals has probably led to a substantial waste of animals, money, and scientific resources and slowed down the development of new treatments for human and animal diseases. PMID:25541548
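
    The analysis of such a design can be sketched as a two-way (treatment plus block) ANOVA using the statsmodels formula API; the data below are simulated to mimic the example in the abstract (4 strains, 2 treatments, 2 blocks run two months apart) and are not the published results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated enzyme-activity data: 4 inbred strains, control vs antioxidant,
# two blocks (replications) separated in time.
rows = []
for block in ["block1", "block2"]:
    for strain in ["A", "B", "C", "D"]:
        for treat in ["control", "antioxidant"]:
            activity = (10.0 + {"A": 0, "B": 1, "C": 2, "D": 3}[strain]
                        + (1.5 if treat == "antioxidant" else 0.0)
                        + (0.8 if block == "block2" else 0.0)
                        + rng.normal(0, 0.5))
            rows.append(dict(block=block, strain=strain, treatment=treat,
                             activity=activity))
df = pd.DataFrame(rows)

# Blocks enter the model as a factor, so block-to-block variation is removed
# from the error term instead of inflating it.
model = smf.ols("activity ~ C(treatment) + C(strain) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```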

  13. Active vibration absorber for CSI evolutionary model: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control of large flexible structures technology must include practical demonstration to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility was developed to study practical implementation of new control technologies under realistic conditions. The design is discussed of a second order, acceleration feedback controller which acts as an active vibration absorber. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. The primary performance objective considered is damping augmentation of the first nine structural modes. Comparison of experimental and predicted closed loop damping is presented, including test and simulation time histories for open and closed loop cases. Although the simulation and test results are not in full agreement, robustness of this design under model uncertainty is demonstrated. The basic advantage of this second order controller design is that the stability of the controller is model independent.

  14. The Langley Research Center CSI phase-0 evolutionary model testbed-design and experimental results

    NASA Technical Reports Server (NTRS)

    Belvin, W. K.; Horta, Lucas G.; Elliott, K. B.

    1991-01-01

    A testbed for the development of Controls Structures Interaction (CSI) technology is described. The design philosophy, capabilities, and early experimental results are presented to introduce some of the ongoing CSI research at NASA-Langley. The testbed, referred to as the Phase 0 version of the CSI Evolutionary model (CEM), is the first stage of model complexity designed to show the benefits of CSI technology and to identify weaknesses in current capabilities. Early closed loop test results have shown non-model based controllers can provide an order of magnitude increase in damping in the first few flexible vibration modes. Model based controllers for higher performance will need to be robust to model uncertainty as verified by System ID tests. Data are presented that show finite element model predictions of frequency differ from those obtained from tests. Plans are also presented for evolution of the CEM to study integrated controller and structure design as well as multiple payload dynamics.

  15. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture, however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $ 120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design should stimulate its application within the field of (13)C-based metabolic flux analysis. PMID:26265092
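
    A rough sketch of how the linear (D-criterion) screening described above can be paired with cost: each candidate tracer mixture is scored by the log-determinant of its Fisher information and by its price, and the non-dominated mixtures form the compromise set. The sensitivity matrices and prices below are placeholders, not values from the paper.

```python
import numpy as np

def d_criterion(sensitivity):
    """Log-determinant of the Fisher information for a flux-parameter
    sensitivity matrix (rows: labelling measurements, cols: free fluxes)."""
    info = sensitivity.T @ sensitivity
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(0)

# Invented sensitivity matrices for two candidate tracer mixtures; in a real
# study these would come from simulating isotopomer balances for the network.
S_mix_a = rng.normal(size=(30, 6)) * 1.0   # e.g. a 1,2-13C2-glucose-rich mixture
S_mix_b = rng.normal(size=(30, 6)) * 0.6   # e.g. a cheaper, less informative mixture

cost = {"mix_a": 180.0, "mix_b": 60.0}     # hypothetical $ per experiment
quality = {"mix_a": d_criterion(S_mix_a), "mix_b": d_criterion(S_mix_b)}

for name in cost:
    print(f"{name}: D-criterion = {quality[name]:6.2f}  cost = ${cost[name]:.0f}")
# A multi-objective design keeps the mixtures that are not dominated in
# (quality, -cost), i.e. the Pareto-optimal compromises.
```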

  16. Use of experimental data in testing methods for design against uncertainty

    NASA Astrophysics Data System (ADS)

    Rosca, Raluca Ioana

    Modern methods of design take into consideration the fact that uncertainty is present in everyday life, whether in the form of variable loads (the strongest wind that will affect a building), the material properties of an alloy, or future demand for a product or cost of labor. Moreover, the Japanese example showed that it may be more cost-effective to design while taking the existence of the uncertainty into account rather than to plan to eliminate or greatly reduce it. The dissertation starts by comparing the theoretical bases of two methods for design against uncertainty, namely probability theory and possibility theory. A two-variable design problem is then used to show the differences. It is concluded that for design problems with two or more failure cases of very different magnitude (such as a car stopping because it runs out of gas versus because its engine fails), probability theory divides existing resources in a more intuitive way than possibility theory. The dissertation continues with a description of simple experiments (building towers of dominoes) and then presents a methodology to increase the amount of information that can be drawn from a given data set. The methodology is demonstrated on the Bidder-Challenger problem, a simulation of a company that makes microchips setting a target speed for its next microchip. The simulations use the domino experimental data. It is demonstrated that important insights into methods of probability- and possibility-based design can be gained from experiments.

  17. Inducing design biases that characterize successful experimentation in weak-theory domains: TIPS

    SciTech Connect

    Gopalakrishnan, V.

    1996-12-31

    Experiment design in domains with weak theories is largely a trial-and-error process. In such domains, the effects of actions are unpredictable due to insufficient knowledge about the causal relationships among the entities involved in an experiment. Thus, experiments are designed based on heuristics obtained from prior experience. Assuming that past experiment designs leading to success or failure can be recorded electronically, this thesis research proposes one method for analyzing these designs to yield hints regarding effective operator application sequences. This work assumes that the order in which operators are applied matters to the overall success of experiments. Experiment design can also be thought of as a form of planning, since it involves generating a sequence of steps comprising one or more operations that can change the environment by changing the values of some of the parameters that describe the environment. Experiment design operators can therefore be thought of as plan operators at higher levels of abstraction. This thesis proposes a method for learning contexts within which applying certain sequences of operators has favored successful experimentation in the past.

  18. Multi-objective optimal experimental designs for event-related fMRI studies.

    PubMed

    Kao, Ming-Hung; Mandal, Abhyuday; Lazar, Nicole; Stufken, John

    2009-02-01

    In this article, we propose an efficient approach to find optimal experimental designs for event-related functional magnetic resonance imaging (ER-fMRI). We consider multiple objectives, including estimating the hemodynamic response function (HRF), detecting activation, circumventing psychological confounds and fulfilling customized requirements. Taking into account these goals, we formulate a family of multi-objective design criteria and develop a genetic-algorithm-based technique to search for optimal designs. Our proposed technique incorporates existing knowledge about the performance of fMRI designs, and its usefulness is shown through simulations. Although our approach also works for other linear combinations of parameters, we primarily focus on the case when the interest lies either in the individual stimulus effects or in pairwise contrasts between stimulus types. Under either of these popular cases, our algorithm outperforms the previous approaches. We also find designs yielding higher estimation efficiencies than m-sequences. When the underlying model is with white noise and a constant nuisance parameter, the stimulus frequencies of the designs we obtained are in good agreement with the optimal stimulus frequencies derived by Liu and Frank, 2004, NeuroImage 21: 387-400. In addition, our approach is built upon a rigorous model formulation. PMID:18948212
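
    The estimation-efficiency idea can be sketched with a finite-impulse-response (FIR) model of the HRF under white noise and a constant nuisance regressor, scoring a candidate stimulus sequence by the inverse trace of the FIR estimator covariance; the sequences, sequence length, and event frequencies below are arbitrary examples rather than designs or results from the paper.

```python
import numpy as np

def estimation_efficiency(stim, hrf_len=16):
    """Efficiency for estimating an FIR hemodynamic response from one event
    type: 1 / trace of the covariance of the FIR estimates, assuming white
    noise and a constant nuisance regressor."""
    n = len(stim)
    # FIR design matrix: shifted copies of the event indicator (wrap-around
    # shifts are a simplification acceptable for a long sequence).
    X = np.column_stack([np.roll(stim, k) for k in range(hrf_len)])
    X = np.column_stack([X, np.ones(n)])      # constant nuisance term
    cov = np.linalg.pinv(X.T @ X)
    return 1.0 / np.trace(cov[:hrf_len, :hrf_len])

rng = np.random.default_rng(3)
for p in (0.1, 0.3, 0.5, 0.7):
    seq = (rng.random(256) < p).astype(float)   # random event-related sequence
    print(f"event frequency {p:.1f}: efficiency = {estimation_efficiency(seq):.3f}")
```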

  19. A design and experimental verification methodology for an energy harvester skin structure

    NASA Astrophysics Data System (ADS)

    Lee, Soobum; Youn, Byeng D.

    2011-05-01

    This paper presents a design and experimental verification methodology for energy harvesting (EH) skin, which opens up a practical and compact piezoelectric energy harvesting concept. In the past, EH research has primarily focused on the design improvement of a cantilever-type EH device. However, such EH devices require additional space for proof mass and fixture and sometimes result in significant energy loss as the clamping condition becomes loose. Unlike the cantilever-type device, the proposed design is simply implemented by laminating a thin piezoelectric patch onto a vibrating structure. The design methodology proposed, which determines a highly efficient piezoelectric material distribution, is composed of two tasks: (i) topology optimization and (ii) shape optimization of the EH material. An outdoor condensing unit is chosen as a case study among many engineered systems with harmonic vibrating configuration. The proposed design methodology determined an optimal PZT material configuration on the outdoor unit skin structure. The designed EH skin was carefully prototyped to demonstrate that it can generate power up to 3.7 mW, which is sustainable for operating wireless sensor units for structural health monitoring and/or building automation.

  20. Facility for Advanced Accelerator Experimental Tests at SLAC (FACET) Conceptual Design Report

    SciTech Connect

    Amann, J.; Bane, K.

    2009-10-30

    This Conceptual Design Report (CDR) describes the design of FACET. It will be updated to stay current with the developing design of the facility. This CDR begins as the baseline conceptual design and will evolve into an 'as-built' manual for the completed facility. The Executive Summary, Chapter 1, gives an introduction to the FACET project and describes the salient features of its design. Chapter 2 gives an overview of FACET. It describes the general parameters of the machine and the basic approaches to implementation. The FACET project does not include the implementation of specific scientific experiments, either for plasma wakefield acceleration or for other applications. Nonetheless, enough work has been done to define potential experiments to assure that the facility can meet the requirements of the experimental community. Chapter 3, Scientific Case, describes the planned plasma wakefield and other experiments. Chapter 4, Technical Description of FACET, describes the parameters and design of all technical systems of FACET. FACET uses the first two thirds of the existing SLAC linac to accelerate the beam to about 20 GeV and compress it with the aid of two chicanes, located in Sector 10 and Sector 20. The Sector 20 area will include a focusing system, the generic experimental area, and the beam dump. Chapter 5, Management of Scientific Program, describes the management of the scientific program at FACET. Chapter 6, Environment, Safety and Health and Quality Assurance, describes the existing programs at SLAC and their application to the FACET project. It includes a preliminary analysis of safety hazards and the planned mitigation. Chapter 7, Work Breakdown Structure, describes the structure used for developing the cost estimates, which will also be used to manage the project. The chapter defines the scope of work of each element down to level 3.

  1. Using an Animal Group Vigilance Practical Session to Give Learners a "Heads-Up" to Problems in Experimental Design

    ERIC Educational Resources Information Center

    Rands, Sean A.

    2011-01-01

    The design of experimental ecological fieldwork is difficult to teach to classes, particularly when protocols for data collection are normally carefully controlled by the class organiser. Normally, reinforcement of some of the problems of experimental design, such as the avoidance of pseudoreplication and appropriate sampling techniques, does not occur…

  2. Introducing Third-Year Chemistry Students to the Planning and Design of an Experimental Program

    NASA Astrophysics Data System (ADS)

    Dunn, Jeffrey G.; Phillips, David Norman; van Bronswijk, Wilhelm

    1997-10-01

    The design and planning of an experimental program is often an important aspect of the job description of recent graduate employees in the chemical industry, and time should therefore be devoted to this activity in an undergraduate course. This paper describes a pencil-and-paper activity involving the design and planning of an experimental programme that may lead to the solution of a problem. These skills are an essential prerequisite to any experimental activity. We provide the students with a list of problems similar to those that a new graduate could encounter on commencing employment in the chemical industry. They are real problems, which the Inorganic Chemistry staff of the School have previously been asked to solve for local industry. A staff member acts as the "client", and the student is the "consultant". The aim is that, through a series of interviews between the client and the consultant, the students can refine a vague problem statement into a quantitative statement, and then from this develop a proposal to investigate the problem in order to confirm the cause. This proposal is submitted to the client for assessment. The students are expected to arrange one meeting with the supervisor each week. This activity is highly commended by the School of Applied Chemistry's Advisory Board, which is composed primarily of industrial chemists.

  3. Development of a Model for Measuring Scientific Processing Skills Based on Brain-Imaging Technology: Focused on the Experimental Design Process

    ERIC Educational Resources Information Center

    Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju

    2014-01-01

    The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…

  5. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    SciTech Connect

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling, sample extraction, and analytical methods to be used in the INL-2 study. For each of the five test events, the specified floor of the INL building will be contaminated with BG using a point-release device located in the room specified in the experimental design. Then quality control (QC), reference material coupon (RMC), judgmental, and probabilistic samples will be collected according to the sampling plan for each test event. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples were selected with a random aspect and in sufficient numbers to provide desired confidence for detecting contamination or clearing uncontaminated (or decontaminated) areas. Following sample collection for a given test event, the INL building will be decontaminated. For possibly contaminated areas, the numbers of probabilistic samples were chosen to provide 95% confidence of detecting contaminated areas of specified sizes. For rooms that may be uncontaminated following a contamination event, or for whole floors after decontamination, the numbers of judgmental and probabilistic samples were chosen using the CJR approach. The numbers of samples were chosen to support making X%/Y% clearance statements with X = 95% or 99% and Y = 96% or 97%. The experimental and sampling design also provides for making X%/Y% clearance statements using only probabilistic samples. 
For each test event, the numbers of characterization and clearance samples were selected within limits based on operational considerations while still maintaining high confidence for detection and clearance aspects. The sampling design for all five test events contains 2085 samples, with 1142 after contamination and 943 after decontamination. These numbers include QC, RMC, judgmental, and probabilistic samples. The experimental and sampling design specified in this report provides a good statistical foundation for achieving the objectives of the INL-2 study.

  6. Computational simulations of frictional losses in pipe networks confirmed in experimental apparatuses designed by honors students

    NASA Astrophysics Data System (ADS)

    Pohlman, Nicholas A.; Hynes, Eric; Kutz, April

    2015-11-01

    Lectures in introductory fluid mechanics at NIU are a combination of students with standard enrollment and students seeking honors credit for an enriching experience. Most honors students dread the additional homework problems or the extra paper assigned by the instructor. During the past three years, the honors students in my class have instead collaborated to design wet-lab experiments for their peers to predict variable volume flow rates of open reservoirs driven by gravity. Rather than doing extra work, the honors students learn the Bernoulli head-loss equation earlier in order to design appropriate systems for an experimental wet lab. Prior designs incorporated minor-loss features such as sudden contractions or multiple unions and valves. The honors students from Spring 2015 expanded the repertoire of available options by developing large-scale set-ups with multiple pipe networks that could be combined to test the flexibility of the student teams' computational programs. The engagement of bridging theory with practice was appreciated by all of the students, and multiple teams were able to predict performance within 4% accuracy. The challenges, schedules, and cost estimates of incorporating the experimental lab into an introductory fluid mechanics course will be reported.
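
    A minimal sketch of the kind of prediction the student programs make, assuming the Bernoulli equation with a constant Darcy friction factor and lumped minor-loss coefficients; all geometry and loss values are invented and do not describe the NIU apparatus.

```python
import math

# Assumed geometry and loss coefficients (illustrative values only).
g = 9.81
tank_area = 0.10          # m^2, open reservoir cross-section
pipe_D = 0.012            # m
pipe_L = 1.5              # m
f = 0.025                 # Darcy friction factor, taken as constant here
K_minor = 1.5             # sum of minor-loss coefficients (entrance, union, valve)
pipe_area = math.pi * pipe_D**2 / 4

def outflow(h):
    """Volume flow rate (m^3/s) from the energy balance with head loss:
    h = v^2/(2g) * (1 + f*L/D + sum K)."""
    if h <= 0:
        return 0.0
    v = math.sqrt(2 * g * h / (1 + f * pipe_L / pipe_D + K_minor))
    return v * pipe_area

# March the draining reservoir forward in time with a simple Euler step.
h, t, dt = 0.50, 0.0, 0.5          # initial head 0.5 m, 0.5 s time steps
while h > 0.01:
    h -= outflow(h) / tank_area * dt
    t += dt
print(f"predicted time to drain to 1 cm of head: {t:.0f} s")
```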

  7. Pressure-Flow Experimental Performance of New Intravascular Blood Pump Designs for Fontan Patients.

    PubMed

    Chopski, Steven G; Fox, Carson S; Riddle, Michelle L; McKenna, Kelli L; Patel, Jay P; Rozolis, John T; Throckmorton, Amy L

    2016-03-01

    An intravascular axial flow pump is being developed as a mechanical cavopulmonary assist device for adolescent and adult patients with dysfunctional Fontan physiology. Coupling computational modeling with experimental evaluation of prototypic designs, this study examined the hydraulic performance of 11 impeller prototypes with blade stagger or twist angles varying from 100 to 600 degrees. A refined range of twisted blade angles between 300 and 400 degrees with 20-degree increments was then selected, and four additional geometries were constructed and hydraulically evaluated. The prototypes met performance expectations and produced 3-31 mm Hg for flow rates of 1-5 L/min for 6000-8000 rpm. A regression analysis was completed with all characteristic coefficients contributing significantly (P < 0.0001). This analysis revealed that the impeller with 400 degrees of blade twist outperformed the other designs. The findings of the numerical model for 300-degree twisted case and the experimental results deviated within approximately 20%. In an effort to simplify the impeller geometry, this work advanced the design of this intravascular cavopulmonary assist device closer to preclinical animal testing. PMID:26333131
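
    The regression named in the abstract is not specified in detail here; the following is a hedged sketch of one common, affinity-law-inspired form for a pressure-flow-speed characteristic, dP = a*N^2 + b*N*Q + c*Q^2, fitted by least squares to hypothetical data (the numbers below are placeholders, not the authors' results).

      import numpy as np

      # Hypothetical (Q [L/min], N [krpm], dP [mm Hg]) points standing in for the
      # measured pressure-flow performance of one impeller prototype
      Q  = np.array([1, 2, 3, 4, 5, 1, 3, 5, 2, 4], dtype=float)
      N  = np.array([6, 6, 6, 6, 6, 8, 8, 8, 7, 7], dtype=float)
      dP = np.array([14, 12, 10, 8, 6, 29, 24, 18, 20, 15], dtype=float)

      # Affinity-law-inspired model: dP = a*N^2 + b*N*Q + c*Q^2
      X = np.column_stack([N**2, N * Q, Q**2])
      coeffs, *_ = np.linalg.lstsq(X, dP, rcond=None)
      a, b, c = coeffs
      print(f"dP ~ {a:.3f}*N^2 + {b:.3f}*N*Q + {c:.3f}*Q^2")
      print("residuals:", np.round(dP - X @ coeffs, 2))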

  8. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

    The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FL067 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database at off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free stream Mach number, M(sub infinity), of 2.55 as well as the design Mach number, M(sub infinity)=2.4. Data from a Mach number range of 1.8 to 2.4 were taken at UPWT section #1. Data at transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were gathered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.

  9. Behavioral design of a positive verbal community: a preliminary experimental analysis.

    PubMed

    Alford, B A; Jaremko, M E

    1990-09-01

    Behaviorists have theorized (and experimental analyses suggest) the potential clinical application of verbal behavior modification. This study evaluated therapeutic effects of behavioral intervention to modify the intact verbal community. The setting was an adolescent operant treatment center for behavioral disorders. All residents within the center, 16 females and 22 males, participated in the study. A within subjects experimental design compared effects of a positive verbal community (PVC) plus the ongoing operant treatment program to the operant program alone. Conceptually, these were dual-level and single-level operant programs, respectively. Dependent measures included rates of positive goal-relevant verbalizations of residents, and clinical measures of self-control and psychiatric symptoms. Preliminary evidence supported the feasibility of the PVC as a potential novel therapeutic intervention. PMID:2086602

  10. Robust Design of Body Slip Angle Observer for Electric Vehicles and its Experimental Demonstration

    NASA Astrophysics Data System (ADS)

    Aoki, Yoshifumi; Hori, Yoichi

    Electric Vehicles (EVs) are inherently suitable for two-dimensional vehicle motion control. To exploit the advantages of EVs, the body slip angle β and the yaw rate γ play an important role. However, because sensors that measure β directly are very expensive, β must be estimated from measurable variables only. In this paper, an improved estimation method for the body slip angle β of EVs is proposed. The method is based on a linear observer that uses the lateral acceleration ay as well as the yaw rate γ. Particular attention is paid to the design of the observer gain matrix, which yields accurate and robust estimation. Experiments were performed with UOT March II, an experimental vehicle driven by four in-wheel motors that was built for research on advanced control of EVs. Experimental results are shown to verify the effectiveness of the proposed method.
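
    A minimal sketch of the type of estimator described, not the authors' design: a Luenberger observer on the standard two-degree-of-freedom bicycle model, estimating beta from the measured lateral acceleration ay and yaw rate gamma. The vehicle parameters and the pole-placement gain below are assumptions for illustration; the paper's contribution is a robust choice of the gain matrix, which is not reproduced here.

      import numpy as np
      from scipy.signal import place_poles

      # Assumed vehicle parameters (illustrative, not the UOT March II values)
      m, I_z, V = 1200.0, 1500.0, 15.0     # mass [kg], yaw inertia [kg m^2], speed [m/s]
      lf, lr = 1.2, 1.4                    # CG to front/rear axle [m]
      Cf, Cr = 6.0e4, 7.0e4                # front/rear cornering stiffness [N/rad]

      # Two-DOF bicycle model: x = [beta, gamma], input = front steer angle delta
      A = np.array([[-(Cf + Cr) / (m * V), -1.0 + (Cr * lr - Cf * lf) / (m * V**2)],
                    [(Cr * lr - Cf * lf) / I_z, -(Cf * lf**2 + Cr * lr**2) / (I_z * V)]])
      B = np.array([[Cf / (m * V)], [Cf * lf / I_z]])
      # Measurements y = [ay, gamma], with ay = V * (beta_dot + gamma)
      C = np.array([[-(Cf + Cr) / m, (Cr * lr - Cf * lf) / (m * V)],
                    [0.0, 1.0]])
      D = np.array([[Cf / m], [0.0]])

      # Observer gain by pole placement on the dual system (illustrative choice)
      L = place_poles(A.T, C.T, [-8.0, -10.0]).gain_matrix.T

      dt, T = 1e-3, 5.0
      x = np.array([[0.03], [0.2]])        # true state, unknown to the observer
      x_hat = np.zeros((2, 1))
      for k in range(int(T / dt)):
          t = k * dt
          delta = np.array([[0.05 * np.sin(np.pi * t)]])      # sinusoidal steer input
          y = C @ x + D @ delta
          # Luenberger observer: x_hat_dot = A x_hat + B delta + L (y - C x_hat - D delta)
          x_hat += dt * (A @ x_hat + B @ delta + L @ (y - C @ x_hat - D @ delta))
          x += dt * (A @ x + B @ delta)

      print("true beta      =", x[0, 0], "rad")
      print("estimated beta =", x_hat[0, 0], "rad")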

  11. The balloon experimental twin telescope for infrared interferometry (BETTII): optical design

    NASA Astrophysics Data System (ADS)

    Veach, Todd J.; Rinehart, Stephen A.; Mentzell, John E.; Silverberg, Robert F.; Fixsen, Dale J.; Rizzo, Maxime J.; Dhabal, Arnab; Gibbons, Caitlin E.; Benford, Dominic J.

    2014-07-01

    Here we present the optical and limited cryogenic design for the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII), an 8-meter far-infrared interferometer designed to fly on a high-altitude scientific balloon. The optical design is separated into warm and cold optics, with the cold optics further divided into far-infrared (FIR) (30-90 microns) and near-infrared (NIR) (1-3 microns) paths. The warm optics comprise the twin siderostats, twin telescopes, K-mirror, and warm delay line. The cold optics comprise the cold delay line and the transfer optics to the FIR science detector array and the NIR steering array. The field of view of the interferometer is 2', with a wavelength range of 30-90 microns, 0.5" angular resolution at 40 microns, R~200 spectral resolution, and 1.5" pointing stability. We also present the design of the cryogenic system necessary for operation of the NIR and FIR detectors. The cryogenic system consists of a 'Buffered He-7' type cryogenic cooler, providing a cold-stage base temperature of < 280 mK and 10 micro-Watts of heat lift, and a custom in-house designed dewar that nominally provides sufficient hold time for the duration of the BETTII flight (24 hours).

  12. Using Taguchi method to optimize differential evolution algorithm parameters to minimize workload smoothness index in SALBP

    NASA Astrophysics Data System (ADS)

    Mozdgir, A.; Mahdavi, Iraj; Seyyedi, I.; Shiraqei, M. E.

    2011-06-01

    An assembly line is a flow-oriented production system where the productive units performing the operations, referred to as stations, are aligned in a serial manner. The assembly line balancing problem arises and has to be solved when an assembly line has to be configured or redesigned. The so-called simple assembly line balancing problem (SALBP), a basic version of the general problem, has attracted the attention of researchers and practitioners of operations research for almost half a century. Four types of objective functions are considered for this kind of problem, and the versions of SALBP may be complemented by a secondary objective that consists of smoothing station loads. Because of the computational complexity of the problem and the difficulty of identifying an optimal solution, many heuristics have been proposed for assembly line balancing. In this paper, a differential evolution algorithm is developed to minimize the workload smoothness index in SALBP-2, and the algorithm parameters are optimized using the Taguchi method.
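
    A hedged sketch of the parameter-tuning step described above: three DE parameters (population size NP, scale factor F, crossover rate CR) are assigned to columns of an L9(3^4) orthogonal array and ranked by a smaller-the-better signal-to-noise ratio of the smoothness index. The function run_de below is a random stand-in for the authors' differential evolution solver, and the parameter levels are assumptions.

      import math
      import random

      # Standard L9(3^4) orthogonal array (three of its four columns used; levels coded 0, 1, 2)
      L9 = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1)]
      NP_levels, F_levels, CR_levels = (20, 40, 60), (0.4, 0.6, 0.8), (0.3, 0.6, 0.9)

      def run_de(pop_size, f, cr, seed):
          """Stand-in for one DE run on a SALBP-2 instance that returns the best
          workload smoothness index found (replace with the real solver)."""
          rng = random.Random(seed)
          return rng.uniform(1.0, 5.0) / (0.5 * f + 0.3 * cr + pop_size / 100.0)

      def sn_smaller_is_better(ys):
          return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

      results = []
      for a, b, c in L9:
          ys = [run_de(NP_levels[a], F_levels[b], CR_levels[c], seed) for seed in range(3)]
          results.append(((NP_levels[a], F_levels[b], CR_levels[c]), sn_smaller_is_better(ys)))

      # Higher S/N is better (the smoothness index is a smaller-the-better response)
      for setting, sn in sorted(results, key=lambda r: -r[1]):
          print(f"NP={setting[0]:>2}, F={setting[1]}, CR={setting[2]}  S/N = {sn:6.2f} dB")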

  13. Taguchi approach for co-gasification optimization of torrefied biomass and coal.

    PubMed

    Chen, Wei-Hsin; Chen, Chih-Jung; Hung, Chen-I

    2013-09-01

    This study employs the Taguchi method to approach the optimum co-gasification operation of torrefied biomass (eucalyptus) and coal in an entrained flow gasifier. The cold gas efficiency is adopted as the performance index of co-gasification. The influences of six parameters, namely, the biomass blending ratio, oxygen-to-fuel mass ratio (O/F ratio), biomass torrefaction temperature, gasification pressure, steam-to-fuel mass ratio (S/F ratio), and inlet temperature of the carrier gas, on the performance of co-gasification are considered. The analysis of the signal-to-noise ratio suggests that the O/F ratio is the most important factor in determining the performance and the appropriate O/F ratio is 0.7. The performance is also significantly affected by biomass along with torrefaction, where a torrefaction temperature of 300°C is sufficient to upgrade eucalyptus. According to the recommended operating conditions, the values of cold gas efficiency and carbon conversion at the optimum co-gasification are 80.99% and 94.51%, respectively. PMID:23907063
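
    For reference, a minimal sketch of the signal-to-noise main-effects analysis that the abstract describes, using a larger-the-better S/N ratio on an L9 orthogonal array; the factor levels and cold gas efficiencies below are placeholders, not the paper's six-factor array or measurements.

      import math

      # L9(3^4) orthogonal array (three columns used); values below are illustrative only
      L9 = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1)]
      levels = {"O/F":    (0.5, 0.7, 0.9),     # oxygen-to-fuel mass ratio
                "T_torr": (250, 300, 350),     # torrefaction temperature [deg C]
                "S/F":    (0.3, 0.6, 0.9)}     # steam-to-fuel mass ratio
      cge = [62.0, 66.0, 64.0, 78.0, 80.5, 79.0, 70.0, 72.5, 71.0]   # cold gas efficiency [%]

      def sn_larger_is_better(y):              # single replicate per trial
          return 20.0 * math.log10(y)

      for col, (name, vals) in enumerate(levels.items()):
          by_level = {0: [], 1: [], 2: []}
          for row, y in zip(L9, cge):
              by_level[row[col]].append(sn_larger_is_better(y))
          mean_sn = {lvl: sum(v) / len(v) for lvl, v in by_level.items()}
          best = max(mean_sn, key=mean_sn.get)
          delta = max(mean_sn.values()) - min(mean_sn.values())
          # Larger delta means a more influential factor; best level maximizes mean S/N
          print(f"{name:6s}  best level = {vals[best]},  delta S/N = {delta:.2f} dB")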

  14. Optimal Experimental Design for Parameter Estimation of a Cell Signaling Model

    PubMed Central

    Bandara, Samuel; Schlöder, Johannes P.; Eils, Roland; Bock, Hans Georg; Meyer, Tobias

    2009-01-01

    Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP3) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP3 lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay. PMID:19911077
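
    A toy illustration of the underlying design criterion (minimizing the predicted parameter uncertainty), not the authors' PI3K/PIP3 model or software: finite-difference output sensitivities of a simple stimulate-then-decay model are assembled into a Fisher information matrix, and two candidate sampling-time designs are compared by a D-optimality score (larger log det is better).

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy two-parameter model of a signaling species x:
      #   dx/dt = k_on * u(t) - k_off * x,  with stimulation u(t) = 1 for t < 5, else 0
      def simulate(theta, t_obs):
          k_on, k_off = theta
          def rhs(t, x):
              u = 1.0 if t < 5.0 else 0.0
              return [k_on * u - k_off * x[0]]
          sol = solve_ivp(rhs, (0.0, float(max(t_obs))), [0.0], t_eval=t_obs, rtol=1e-8)
          return sol.y[0]

      def fisher_information(theta, t_obs, sigma=0.05, h=1e-5):
          """FIM from central-difference output sensitivities, assuming iid Gaussian noise."""
          S = np.empty((len(t_obs), len(theta)))
          for j in range(len(theta)):
              tp, tm = np.array(theta, float), np.array(theta, float)
              tp[j] += h
              tm[j] -= h
              S[:, j] = (simulate(tp, t_obs) - simulate(tm, t_obs)) / (2 * h)
          return S.T @ S / sigma**2

      theta_guess = (0.8, 0.3)                       # (k_on, k_off), assumed current estimates
      design_a = np.array([1.0, 2.0, 3.0, 4.0])      # all samples during stimulation
      design_b = np.array([1.0, 4.0, 7.0, 12.0])     # samples spanning both rise and decay
      for name, t_obs in [("A", design_a), ("B", design_b)]:
          F = fisher_information(theta_guess, t_obs)
          print(f"design {name}: log det(FIM) = {np.log(np.linalg.det(F)):.2f}")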

  15. Optimal experimental design for parameter estimation of a cell signaling model.

    PubMed

    Bandara, Samuel; Schlder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias

    2009-11-01

    Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay. PMID:19911077

  16. Experimental design in caecilian systematics: phylogenetic information of mitochondrial genomes and nuclear rag1.

    PubMed

    San Mauro, Diego; Gower, David J; Massingham, Tim; Wilkinson, Mark; Zardoya, Rafael; Cotton, James A

    2009-08-01

    In molecular phylogenetic studies, a major aspect of experimental design concerns the choice of markers and taxa. Although previous studies have investigated the phylogenetic performance of different genes and the effectiveness of increasing taxon sampling, their conclusions are partly contradictory, probably because they are highly context specific and dependent on the group of organisms used in each study. Goldman introduced a method for experimental design in phylogenetics based on the expected information to be gained that has barely been used in practice. Here we use this method to explore the phylogenetic utility of mitochondrial (mt) genes, mt genomes, and nuclear rag1 for studies of the systematics of caecilian amphibians, as well as the effect of taxon addition on the stabilization of a controversial branch of the tree. Overall phylogenetic information estimates per gene, specific estimates per branch of the tree, estimates for combined (mitogenomic) data sets, and estimates as a hypothetical new taxon is added to different parts of the caecilian tree are calculated and compared. In general, the most informative data sets are those for mt transfer and ribosomal RNA genes. Our results also show at which positions in the caecilian tree the addition of taxa have the greatest potential to increase phylogenetic information with respect to the controversial relationships of Scolecomorphus, Boulengerula, and all other teresomatan caecilians. These positions are, as intuitively expected, mostly (but not all) adjacent to the controversial branch. Generating whole mitogenomic and rag1 data for additional taxa joining the Scolecomorphus branch may be a more efficient strategy than sequencing a similar amount of additional nucleotides spread across the current caecilian taxon sampling. The methodology employed in this study allows an a priori evaluation and testable predictions of the appropriateness of particular experimental designs to solve specific questions at different levels of the caecilian phylogeny. PMID:20525595

  17. Design considerations for ITER (International Thermonuclear Experimental Reactor) magnet systems: Revision 1

    SciTech Connect

    Henning, C.D.; Miller, J.R.

    1988-10-09

    The International Thermonuclear Experimental Reactor (ITER) is now completing a definition phase as a beginning of a three-year design effort. Preliminary parameters for the superconducting magnet system have been established to guide further and more detailed design work. Radiation tolerance of the superconductors and insulators has been of prime importance, since it sets requirements for the neutron-shield dimension and sensitively influences reactor size. The major levels of mechanical stress in the structure appear in the cases of the inboard legs of the toroidal-field (TF) coils. The cases of the poloidal-field (PF) coils must be made thin or segmented to minimize eddy current heating during inductive plasma operation. As a result, the winding packs of both the TF and PF coils includes significant fractions of steel. The TF winding pack provides support against in-plane separating loads but offers little support against out-of-plane loads, unless shear-bonding of the conductors can be maintained. The removal of heat due to nuclear and ac loads has not been a fundamental limit to design, but certainly has non-negligible economic consequences. We present here preliminary ITER magnet systems design parameters taken from trade studies, designs, and analyses performed by the Home Teams of the four ITER participants, by the ITER Magnet Design Unit in Garching, and by other participants at workshops organized by the Magnet Design Unit. The work presented here reflects the efforts of many, but the responsibility for the opinions expressed is the authors'. 4 refs., 3 figs., 4 tabs.

  18. Design and experimental investigations on a small scale traveling wave thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Chen, M.; Ju, Y. L.

    2013-02-01

    A small scale traveling wave or Stirling thermoacoustic engine with a resonator of only 1 m length was designed, constructed and tested by using nitrogen as working gas. The small heat engine achieved a steady working frequency of 45 Hz. The pressure ratio reached 1.189, with an average charge pressure of 0.53 MPa and a heating power of 1.14 kW. The temperature and the pressure characteristics during the onset and damping processes were also observed and discussed. The experimental results demonstrated that the small engine possessed the potential to drive a Stirling-type pulse tube cryocooler.

  19. A micro PPT for Cubesat application: Design and preliminary experimental results

    NASA Astrophysics Data System (ADS)

    Coletti, M.; Guarducci, F.; Gabriel, S. B.

    2011-08-01

    Cubesats, allowing for cheap access to space, are one of the fastest growing sectors in the space industry. A Pulsed Plasma Thruster to perform drag compensation for a Cubesat platform, with the aim of doubling the time needed for the Cubesat to naturally de-orbit (hence doubling its lifetime) is currently under development by Clyde Space Ltd., Mars Space Ltd. and the University of Southampton under an ESA funded project. In this paper the design of the thruster will be presented together with preliminary experimental results. The preliminary test results suggest that the thruster will be able to meet the mission requirements.

  20. Optimization of polyvinylidene fluoride (PVDF) membrane fabrication for protein binding using statistical experimental design.

    PubMed

    Ahmad, A L; Ideris, N; Ooi, B S; Low, S C; Ismail, A

    2016-01-01

    Statistical experimental design was employed to optimize the preparation conditions of polyvinylidene fluoride (PVDF) membranes. The three variables considered were polymer concentration, dissolving temperature, and casting thickness, and the response variable was membrane-protein binding. The optimum preparation for the PVDF membrane was a polymer concentration of 16.55 wt%, a dissolving temperature of 27.5°C, and a casting thickness of 450 µm. The statistical model exhibits a deviation between the predicted and actual responses of less than 5%. Further characterization of the formed PVDF membrane showed that the morphology of the membrane was in line with the membrane-protein binding performance. PMID:27088961

  1. Experimental Design for CMIP6: Aerosol, Land Use, and Future Scenarios Final Report

    SciTech Connect

    Arnott, James

    2015-10-30

    The Aspen Global Change Institute hosted a technical science workshop entitled, “Experimental design for CMIP6: Aerosol, Land Use, and Future Scenarios,” on August 3-8, 2014 in Aspen, CO. Claudia Tebaldi (NCAR) and Brian O’Neill (NCAR) served as co-chairs for the workshop. The Organizing committee also included Dave Lawrence (NCAR), Jean-Francois Lamarque (NCAR), George Hurtt (University of Maryland), & Detlef van Vuuren (PBL Netherlands Environmental Change). The meeting included the participation of 22 scientists representing many of the major climate modeling centers for a total of 110 participant days.

  2. Quiet Clean Short-Haul Experimental Engine (QCSEE): Acoustic treatment development and design

    NASA Technical Reports Server (NTRS)

    Clemons, A.

    1979-01-01

    Acoustic treatment designs for the quiet clean short-haul experimental engines are defined. The procedures used in the development of each noise-source suppressor device are presented and discussed in detail. A complete description of all treatment concepts considered and the test facilities utilized in obtaining background data used in treatment development are also described. Additional supporting investigations that are complementary to the treatment development work are presented. The expected suppression results for each treatment configuration are given in terms of delta SPL versus frequency and in terms of delta PNdB.

  3. Experimental observation of solitary waves in a new designed pendulum chain system

    NASA Astrophysics Data System (ADS)

    Zhu, Changqing; Lei, Juanmian; Wu, Yecun; Li, Nan; Chen, Da; Shi, Qingfan

    2015-07-01

    A new coupled pendulum chain system is developed to vividly simulate the solitary solutions of the sine-Gordon (SG) equation. Transmission processes of three kinds of solitons (kink, anti-kink and breather) are systematically observed by using a high speed camera system. The solutions of the SG equation are derived through deducing the net external torque of the pendulums. The experimental data obtained are consistent with the theoretical calculation, which verifies that the system designed is an effective device to demonstrate the nonlinear behaviour of solitary waves in teaching and learning.
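
    For reference, a standard textbook form of the equations the abstract refers to (torque balance on pendulum n, its continuum sine-Gordon limit, and the kink/anti-kink solution); the notation is generic rather than the paper's:

      % Torque balance for pendulum n in a chain coupled by torsion springs
      I \ddot{\theta}_n = \kappa (\theta_{n+1} - 2\theta_n + \theta_{n-1}) - m g \ell \sin\theta_n

      % Continuum limit (lattice spacing a) gives the sine-Gordon equation
      \theta_{tt} - c^2 \theta_{xx} + \omega_0^2 \sin\theta = 0,
      \qquad c^2 = \frac{\kappa a^2}{I}, \quad \omega_0^2 = \frac{m g \ell}{I}

      % Kink (+) / anti-kink (-) solitary wave travelling at speed v < c
      \theta(x,t) = 4 \arctan\left[ \exp\left( \pm \frac{\omega_0}{c}
                    \frac{x - v t}{\sqrt{1 - v^2/c^2}} \right) \right]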

  4. Development and design of a multi-column experimental setup for Kr/Xe separation

    SciTech Connect

    Garn, Troy G.; Greenhalgh, Mitchell; Watson, Tony

    2014-12-01

    As a precursor to FY-15 Kr/Xe separation testing, design modifications to an existing experimental setup are warranted. The modifications would allow for multi-column testing to facilitate a Xe separation followed by a Kr separation using engineered form sorbents prepared using an INL patented process. A new cooling apparatus capable of achieving test temperatures to -40° C and able to house a newly designed Xe column was acquired. Modifications to the existing setup are being installed to allow for multi-column testing and gas constituent analyses using evacuated sample bombs. The new modifications will allow for independent temperature control for each column enabling a plethora of test conditions to be implemented. Sample analyses will be used to evaluate the Xe/Kr selectivity of the AgZ-PAN sorbent and determine the Kr purity of the effluent stream following Kr capture using the HZ-PAN sorbent.

  5. Design and Experimental Performance of a Two Stage Partial Admission Turbine, Task B.1/B.4

    NASA Technical Reports Server (NTRS)

    Sutton, R. F.; Boynton, J. L.; Akian, R. A.; Shea, Dan; Roschak, Edmund; Rojas, Lou; Orr, Linsey; Davis, Linda; King, Brad; Bubel, Bill

    1992-01-01

    A three-inch mean diameter, two-stage turbine with partial admission in each stage was experimentally investigated over a range of admissions and angular orientations of admission arcs. Three configurations were tested in which first stage admission varied from 37.4 percent (10 of 29 passages open, 5 per side) to 6.9 percent (2 open, 1 per side). Corresponding second stage admissions were 45.2 percent (14 of 31 passages open, 7 per side) and 12.9 percent (4 open, 2 per side). Angular positions of the second stage admission arcs with respect to the first stage varied over a range of 70 degrees. Design and off-design efficiency and flow characteristics for the three configurations are presented. The results indicated that peak efficiency and the corresponding isentropic velocity ratio decreased as the arcs of admission were decreased. Both efficiency and flow characteristics were sensitive to the second stage nozzle orientation angles.

  6. US ITER (International Thermonuclear Experimental Reactor) shield and blanket design activities

    SciTech Connect

    Baker, C.C.

    1988-08-01

    This paper summarizes nuclear-related work in support of the US effort for the International Thermonuclear Experimental Reactor (ITER) Study. Primary tasks carried out during the past year include design improvements of the inboard shield developed for the TIBER concept, scoping studies of a variety of tritium breeding blanket options, development of necessary design guidelines and evaluation criteria for the blanket options, further safety considerations related to nuclear components, and issues regarding structural materials for an ITER device. The blanket concepts considered are the aqueous/Li salt solution, a water-cooled, solid breeder blanket, a helium-cooled, solid-breeder blanket, a blanket cooled by helium containing lithium-bearing particulates, and a blanket concept based on breeding tritium from He/sup 3/. 1 ref., 2 tabs.

  7. Design of charge exchange recombination spectroscopy for the joint Texas experimental tokamak

    NASA Astrophysics Data System (ADS)

    Chi, Y.; Zhuang, G.; Cheng, Z. F.; Hou, S. Y.; Cheng, C.; Li, Z.; Wang, J. R.; Wang, Z. J.

    2014-11-01

    The old diagnostic neutral beam injector first operated at the University of Texas at Austin is ready for rejoining the joint Texas experimental tokamak (J-TEXT). A new set of high voltage power supplies has been equipped and there is no limitation for beam modulation or beam pulse duration henceforth. Based on the spectra of fully striped impurity ions induced by the diagnostic beam the design work for toroidal charge exchange recombination spectroscopy (CXRS) system is presented. The 529 nm carbon VI (n = 8 - 7 transition) line seems to be the best choice for ion temperature and plasma rotation measurements and the considered hardware is listed. The design work of the toroidal CXRS system is guided by essential simulation of expected spectral results under the J-TEXT tokamak operation conditions.
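
    The ion temperature and rotation velocity follow from the Doppler width and shift of the charge-exchange line in the usual way (standard relations only; the paper's instrument function and calibration factors are not reproduced here):

      % Toroidal rotation from the Doppler shift of the line centre
      v_\phi = c \, \frac{\Delta\lambda_{\mathrm{shift}}}{\lambda_0}

      % Ion temperature (in energy units) from the Gaussian Doppler FWHM
      T_i = \frac{m_i c^2}{8 \ln 2} \left( \frac{\Delta\lambda_{\mathrm{FWHM}}}{\lambda_0} \right)^2,
      \qquad \lambda_0 \approx 529\ \mathrm{nm}\ (\mathrm{C\,VI},\ n = 8 \to 7)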

  8. Inlet Flow Test Calibration for a Small Axial Compressor Facility. Part 1: Design and Experimental Results

    NASA Technical Reports Server (NTRS)

    Miller, D. P.; Prahst, P. S.

    1994-01-01

    An axial compressor test rig has been designed for the operation of small turbomachines. The inlet region consisted of a long flowpath region with two series of support struts and a flapped inlet guide vane. A flow test was run to calibrate and determine the source and magnitudes of the loss mechanisms in the inlet for a highly loaded two-stage axial compressor test. Several flow conditions and IGV angle settings were established in which detailed surveys were completed. Boundary layer bleed was also provided along the casing of the inlet behind the support struts and ahead of the IGV. A detailed discussion of the flowpath design along with a summary of the experimental results are provided in Part 1.

  9. Design of an experimental electric arc furnace. Report of investigations/1992

    SciTech Connect

    Hartman, A.D.; Ochs, T.L.

    1992-01-01

    Instabilities in electric steelmaking furnace arcs cause electrical and acoustical noise, reduce operating efficiency, increase refractory erosion, and increase electrode usage. The U.S. Bureau of Mines has an ongoing research project investigating methods to stabilize these arcs to improve productivity in steel production. To perform experiments to test new hypotheses, researchers designed and instrumented an advanced, experimental single-phase furnace. The paper describes the furnace, which was equipped with high-speed data acquisition capabilities for electrical, temperature, pressure and flow rate measurements; automated atmosphere control; ballistic calorimetry; and viewports for high-speed cinematography. Precise environmental control and accurate data acquisition allow the statistical design of experiments and assignment of rigorous confidence limits when testing potential furnace or procedural modifications.

  10. Design and experimental verification for optical module of optical vector-matrix multiplier.

    PubMed

    Zhu, Weiwei; Zhang, Lei; Lu, Yangyang; Zhou, Ping; Yang, Lin

    2013-06-20

    Optical computing is a new method to implement signal processing functions. The multiplication between a vector and a matrix is an important arithmetic algorithm in the signal processing domain. The optical vector-matrix multiplier (OVMM) is an optoelectronic system to carry out this operation, which consists of an electronic module and an optical module. In this paper, we propose an optical module for OVMM. To eliminate the cross talk and make full use of the optical elements, an elaborately designed structure that involves spherical lenses and cylindrical lenses is utilized in this optical system. The optical design software package ZEMAX is used to optimize the parameters and simulate the whole system. Finally, experimental data is obtained through experiments to evaluate the overall performance of the system. The results of both simulation and experiment indicate that the system constructed can implement the multiplication between a matrix with dimensions of 16 by 16 and a vector with a dimension of 16 successfully. PMID:23842187

  11. A conceptual design of the International Thermonuclear Experimental Reactor for the Central Solenoid

    SciTech Connect

    Heim, J.R.; Parker, J.M.

    1990-09-21

    Conceptual design of the International Thermonuclear Experimental Reactor (ITER) superconducting magnet system is nearing completion by the ITER Design Team, and one of the Central Solenoid (CS) designs is presented. The CS part of this magnet system will be a vertical stack of eight modules, approximately 16 m high, each having approximate dimensions of 4.1-m o.d., 2.8-m i.d., and 1.9-m height. The peak field at the bore is approximately 13.5 T. Cable-in-conduit conductor with Nb{sub 3}Sn composite wire will be used to wind the coils. The overall coil fabrication will use the insulate-wind-react-impregnate method. Coil modules will be fabricated using double-pancake coils with all splice joints located in the low-field region on the outside of the coils. All coils will be structurally graded with high-strength steel reinforcement which is co-wound with the conductor. We describe details of the CS coil design and analysis.

  12. Design and Development of a Composite Dome for Experimental Characterization of Material Permeability

    NASA Technical Reports Server (NTRS)

    Estrada, Hector; Smeltzer, Stanley S., III

    1999-01-01

    This paper presents the design and development of a carbon fiber reinforced plastic dome, including a description of the dome fabrication, method for sealing penetrations in the dome, and a summary of the planned test series. This dome will be used for the experimental permeability characterization and leakage validation of composite vessels pressurized using liquid hydrogen and liquid nitrogen at the Cryostat Test Facility at the NASA Marshall Space Flight Center (MSFC). The preliminary design of the dome was completed using membrane shell analysis. Due to the configuration of the test setup, the dome will experience some flexural stresses and stress concentrations in addition to membrane stresses. Also, a potential buckling condition exists for the dome due to external pressure during the leak testing of the cryostat facility lines. Thus, a finite element analysis was conducted to assess the overall strength and stability of the dome for each required test condition. Based on these results, additional plies of composite reinforcement material were applied to local regions on the dome to alleviate stress concentrations and limit deflections. The dome design includes a circular opening in the center for the installation of a polar boss, which introduces a geometric discontinuity that causes high stresses in the region near the hole. To attenuate these high stresses, a reinforcement system was designed using analytical and finite element analyses. The development of a low leakage polar boss system is also investigated.

  13. Experimental Testing of Rockfall Barriers Designed for the Low Range of Impact Energy

    NASA Astrophysics Data System (ADS)

    Buzzi, O.; Spadari, M.; Giacomini, A.; Fityus, S.; Sloan, S. W.

    2013-07-01

    Most of the recent research on rockfall and the development of protective systems, such as flexible rockfall barriers, have been focused on medium to high levels of impacting energy. However, in many regions of the world, the rockfall hazard involves low levels of energy. This is particularly the case in New South Wales, Australia, because of the nature of the geological environments. The state Road and Traffic Authority (RTA) has designed various types of rockfall barriers, including some of low capacity, i.e. 35 kJ. The latter were tested indoors using a pendulum equipped with an automatic block release mechanism triggered by an optical beam. Another three systems were also tested, including two products designed by rockfall specialised companies and one modification of the initial design of the RTA. The research focused on the influence of the system's stiffness on the transmission of load to components of the barrier such as posts and cables. Not surprisingly, the more compliant the system, the less loaded the cables and posts. It was also found that removing the intermediate cables and placing the mesh downslope could reduce the stiffness of the system designed by the RTA. The paper concludes with some multi-scale considerations on the capacity of a barrier to absorb the energy based on experimental evidence.

  14. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas; Sheth, Rubik; Le, Hung

    2013-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the modeling and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  15. Experimental design of Fenton and photo-Fenton reactions for the treatment of cellulose bleaching effluents.

    PubMed

    Torrades, Francesc; Pérez, Montserrat; Mansilla, Héctor D; Peral, José

    2003-12-01

    Multivariate experimental design was applied to the treatment of a conventional cellulose bleaching effluent in order to evaluate the use of the Fenton reagent under solar light irradiation. The effluent was characterised by the general parameters total organic carbon (TOC), chemical oxygen demand and color, and it was analysed for chlorinated low-molecular-weight compounds using GC-MS. The main parameters governing this complex reactive system, the initial Fe(II) and H(2)O(2) concentrations and the temperature, were studied simultaneously. The factorial experimental design made it possible to assign the weight of each variable in the TOC removal after 15 min of reaction. Temperature had an important effect on the degradation of the organic matter, especially when the ratio of Fenton reagents was not properly chosen. The Fenton reagent under solar irradiation proved to be highly effective for these types of wastewaters: a 90% TOC reduction was achieved in only 15 min of treatment. In addition, the GC-MS analysis showed the elimination of the chlorinated organic compounds initially detected in the studied bleaching effluents. PMID:14550352

  16. Experimental Design Optimization of a Sequential Injection Method for Promazine Assay in Bulk and Pharmaceutical Formulations

    PubMed Central

    Idris, Abubakr M.; Assubaie, Fahad N.; Sultan, Salah M.

    2007-01-01

    An experimental design optimization approach was utilized to develop a sequential injection analysis (SIA) method for promazine assay in bulk and pharmaceutical formulations. The method was based on the oxidation of promazine by Ce(IV) in sulfuric acid media, resulting in a spectrophotometrically detectable species at 512 nm. A 3^3 full factorial design and response surface methods were applied to optimize the experimental conditions potentially controlling the analysis. The optimum conditions obtained were 1.0 × 10−4 M sulphuric acid, 0.01 M Ce(IV), and a 10 μL/s flow rate. Good analytical parameters were obtained, including a linear range of 1–150 μg/mL, linearity with correlation coefficient 0.9997, accuracy with mean recovery 98.2%, repeatability with RSD 1.4% (n = 7 consecutive injections), intermediate precision with RSD 2.1% (n = 5 runs over a week), a limit of detection of 0.34 μg/mL, a limit of quantification of 0.93 μg/mL, and a sampling frequency of 23 samples/h. The results were verified against the British Pharmacopoeia method and comparable values were obtained. The proposed SIA method offers the advantages of the technique with respect to rapidity, reagent/sample saving, and safety in solution handling and to the environment. PMID:18350124

  17. Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.

    2011-01-01

    The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus of the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental for sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still ongoing.

  18. Experimental design for optimizing drug release from silicone elastomer matrix and investigation of transdermal drug delivery.

    PubMed

    Snorradóttir, Bergthóra S; Gudnason, Pálmar I; Thorsteinsson, Freygardur; Másson, Már

    2011-04-18

    Silicone elastomers are commonly used for medical devices and external prosthesis. Recently, there has been growing interest in silicone-based medical devices with enhanced function that release drugs from the elastomer matrix. In the current study, an experimental design approach was used to optimize the release properties of the model drug diclofenac from medical silicone elastomer matrix, including a combination of four permeation enhancers as additives and allowing for constraints in the properties of the material. The D-optimal design included six factors and five responses describing material properties and release of the drug. The first experimental object was screening, to investigate the main and interaction effects, based on 29 experiments. All excipients had a significant effect and were therefore included in the optimization, which also allowed the possible contribution of quadratic terms to the model and was based on 38 experiments. Screening and optimization of release and material properties resulted in the production of two optimized silicone membranes, which were tested for transdermal delivery. The results confirmed the validity of the model for the optimized membranes that were used for further testing for transdermal drug delivery through heat-separated human skin. The optimization resulted in an excipient/drug/silicone composition that resulted in a cured elastomer with good tensile strength and a 4- to 7-fold transdermal delivery increase relative to elastomer that did not contain excipients. PMID:21371556

  19. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le,Hung

    2012-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  20. Sonophotolytic degradation of synthetic pharmaceutical wastewater: statistical experimental design and modeling.

    PubMed

    Ghafoori, Samira; Mowla, Amir; Jahani, Ramtin; Mehrvar, Mehrab; Chan, Philip K

    2015-03-01

    The merits of the sonophotolysis as a combination of sonolysis (US) and photolysis (UV/H2O2) are investigated in a pilot-scale external loop airlift sonophotoreactor for the treatment of a synthetic pharmaceutical wastewater (SPWW). In the first part of this study, the multivariate experimental design is carried out using Box-Behnken design (BBD). The effluent is characterized by the total organic carbon (TOC) percent removal as a surrogate parameter. The results indicate that the response of the TOC percent removal is significantly affected by the synergistic effects of the linear term of H2O2 dosage and ultrasound power with the antagonistic effect of quadratic term of H2O2 dosage. The statistical analysis of the results indicates a satisfactory prediction of the system behavior by the developed model. In the second part of this study, a novel rigorous mathematical model for the sonophotolytic process is developed to predict the TOC percent removal as a function of time. The mathematical model is based on extensively accepted sonophotochemical reactions and the rate constants in advanced oxidation processes. A good agreement between the model predictions and experimental data indicates that the proposed model could successfully describe the sonophotolysis of the pharmaceutical wastewater. PMID:25460426

  1. Network Pharmacology Strategies Toward Multi-Target Anticancer Therapies: From Computational Models to Experimental Design Principles

    PubMed Central

    Tang, Jing; Aittokallio, Tero

    2014-01-01

    Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.

  2. Experimental design, modeling and optimization of polyplex formation between DNA oligonucleotides and branched polyethylenimine.

    PubMed

    Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana

    2015-09-28

    The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from the gel electrophoresis assays. The central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as concentrations of dsDNA and B-PEI as well as the initial pH of solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic-scale, a molecular dynamic simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between macromolecules, stabilizing the polyplex structure. PMID:26247491
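
    A hedged sketch of the generic workflow named above (fit a second-order response-surface model to central-composite data by least squares, then climb the fitted surface); the coded factors, responses, and step size are placeholders, not the study's measurements or its simplex/gradient implementation.

      import numpy as np

      # Coded factors x1, x2 (e.g. dsDNA and B-PEI concentrations) and binding
      # efficiency [%]; placeholder central-composite-style data
      X_raw = np.array([[-1,-1],[1,-1],[-1,1],[1,1],
                        [-1.41,0],[1.41,0],[0,-1.41],[0,1.41],[0,0],[0,0]])
      y = np.array([55, 70, 62, 88, 50, 83, 58, 75, 80, 79], dtype=float)

      def design_matrix(X):
          x1, x2 = X[:, 0], X[:, 1]
          return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

      beta, *_ = np.linalg.lstsq(design_matrix(X_raw), y, rcond=None)
      print("fitted coefficients:", np.round(beta, 2))

      def predict(x):
          return design_matrix(np.atleast_2d(x)) @ beta

      # Simple gradient ascent on the fitted surface, starting from the centre point
      x = np.zeros(2)
      for _ in range(200):
          g = np.array([(predict(x + d) - predict(x - d))[0] / 2e-4
                        for d in (np.array([1e-4, 0]), np.array([0, 1e-4]))])
          x += 0.01 * g
          x = np.clip(x, -1.41, 1.41)       # stay inside the explored region
      print("predicted optimum (coded units):", np.round(x, 2),
            "response:", round(float(predict(x)[0]), 1))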

  3. Network pharmacology strategies toward multi-target anticancer therapies: from computational models to experimental design principles.

    PubMed

    Tang, Jing; Aittokallio, Tero

    2014-01-01

    Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies. PMID:23530504

  4. Design considerations in building in silico equivalents of common experimental influenza virus assays.

    PubMed

    Holder, Benjamin P; Liao, Laura E; Simon, Philippe; Boivin, Guy; Beauchemin, Catherine A A

    2011-06-01

    Experimentation in vitro is a vital part of the process by which the clinical and epidemiological characteristics of a particular influenza virus strain are determined. We detail the considerations which must be made in designing appropriate theoretical/mathematical models of these experiments and show how modeling can increase the information output of such experiments. Starting from a traditional system of ordinary differential equations, common to infectious disease modeling, we broaden the approach by using an agent-based model, applicable to more general experimental geometries and assumptions about the biological properties of viruses, cell and their interaction. Within this framework, we explore the limits of the assumptions made by more traditional models and the conditions under which these assumptions begin to break down, requiring the use of more sophisticated models. We apply the agent-based model to experimental plaque growth of two influenza strains, one resistant to the antiviral oseltamivir, and extract the values of key infection parameters specific to each strain. PMID:21244331
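
    A minimal sketch of the "traditional system of ordinary differential equations" referred to above, the target-cell-limited TIV model, solved with SciPy; the parameter values are generic placeholders rather than estimates for the two strains studied.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Target-cell-limited model: T (target cells), I (infected cells), V (virus)
      #   dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V
      beta, delta, p, c = 3e-5, 3.0, 1e-2, 5.0      # placeholder parameter values

      def tiv(t, y):
          T, I, V = y
          return [-beta * T * V,
                  beta * T * V - delta * I,
                  p * I - c * V]

      y0 = [4e8, 0.0, 10.0]                         # initial target cells and inoculum
      sol = solve_ivp(tiv, (0.0, 10.0), y0, t_eval=np.linspace(0, 10, 201), rtol=1e-8)
      t_peak = sol.t[np.argmax(sol.y[2])]
      print(f"viral load peaks near t = {t_peak:.2f} days with these placeholder parameters")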

  5. Artificial neural networks combined with experimental design: a "soft" approach for chemical kinetics.

    PubMed

    Amato, Filippo; González-Hernández, José Luis; Havel, Josef

    2012-05-15

    The possibilities of artificial neural network (ANN) "soft" computing for evaluating chemical kinetic data have been studied. In the first stage, a set of "standard" kinetic curves with known parameters (rate constants and/or concentrations of the reactants), a kind of "normalized map", is prepared; the database should be built according to a suitable experimental design (ED). In the second stage, this data set is used for ANN "learning". Afterwards, experimental data are evaluated and the parameters of "other" kinetic curves are computed without solving the system of differential equations again. The combined ED-ANN approach has been applied to solve several kinetic systems. It was also demonstrated that, using ANNs, the optimization of complex chemical systems can be achieved even without knowing or determining the values of the rate constants; moreover, the solution of differential equations is not necessary. Using ED, the number of experiments can be reduced substantially. The ED-ANN methodology applied to multicomponent analysis shows advantages over classical methods because knowledge of the kinetic reactions is not needed. ANN computation in kinetics is robust, as shown by evaluating the effect of experimental errors, and it is of general applicability. PMID:22483879
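
    A toy version of the three-stage workflow, with scikit-learn's MLPRegressor standing in for whatever ANN software the authors used and a first-order reaction as the kinetic system; the sampling times, rate-constant grid, and noise level are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 20)                  # sampling times (assumed)

      def curve(k):
          return 1.0 - np.exp(-k * t)                 # first-order product concentration

      # Stage 1: "normalized map" of standard curves over a designed grid of rate constants
      k_grid = np.linspace(0.05, 1.0, 200)
      X_train = np.array([curve(k) for k in k_grid])

      # Stage 2: ANN learning
      net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
      net.fit(X_train, k_grid)

      # Stage 3: evaluate a noisy "experimental" curve without solving any rate equations
      k_true = 0.37
      x_meas = curve(k_true) + rng.normal(0.0, 0.01, size=t.size)
      print(f"true k = {k_true}, ANN estimate = {net.predict([x_meas])[0]:.3f}")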

  6. Quantifying the effect of experimental design choices for in vitro scratch assays.

    PubMed

    Johnston, Stuart T; Ross, Joshua V; Binder, Benjamin J; Sean McElwain, D L; Haridas, Parvathi; Simpson, Matthew J

    2016-07-01

    Scratch assays are often used to investigate potential drug treatments for chronic wounds and cancer. Interpreting these experiments with a mathematical model allows us to estimate the cell diffusivity, D, and the cell proliferation rate, λ. However, the influence of the experimental design on the estimates of D and λ is unclear. Here we apply an approximate Bayesian computation (ABC) parameter inference method, which produces a posterior distribution of D and λ, to new sets of synthetic data, generated from an idealised mathematical model, and experimental data for a non-adhesive mesenchymal population of fibroblast cells. The posterior distribution allows us to quantify the amount of information obtained about D and λ. We investigate two types of scratch assay, as well as varying the number and timing of the experimental observations captured. Our results show that a scrape assay, involving one cell front, provides more precise estimates of D and λ, and is more computationally efficient to interpret than a wound assay, with two opposingly directed cell fronts. We find that recording two observations, after making the initial observation, is sufficient to estimate D and λ, and that the final observation time should correspond to the time taken for the cell front to move across the field of view. These results provide guidance for estimating D and λ, while simultaneously minimising the time and cost associated with performing and interpreting the experiment. PMID:27086040
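
    A toy sketch of the ABC rejection idea only: a cheap surrogate (the Fisher-KPP front speed 2*sqrt(D*lambda) plus the growth rate itself) replaces the full simulation of the scratch assay, and prior draws whose summary statistics fall within a tolerance of the "observed" summaries form the approximate posterior for D and lambda. The true parameter values, priors, and tolerance below are illustrative, not the paper's.

      import numpy as np

      rng = np.random.default_rng(1)

      def summaries(D, lam):
          """Toy summary statistics for a scrape assay: the Fisher-KPP front speed
          2*sqrt(D*lam) and the growth rate itself (a surrogate for the full
          simulation used in the paper)."""
          return np.array([2.0 * np.sqrt(D * lam), lam])

      # "Observed" summaries from hypothetical true parameters plus a little noise
      D_true, lam_true = 1000.0, 0.05           # e.g. um^2/h and 1/h (illustrative scales)
      s_obs = summaries(D_true, lam_true) * rng.normal(1.0, 0.02, size=2)

      # ABC rejection: sample from uniform priors, keep draws with close summaries
      kept = []
      for _ in range(100_000):
          D, lam = rng.uniform(100, 3000), rng.uniform(0.01, 0.15)
          dist = np.linalg.norm((summaries(D, lam) - s_obs) / s_obs)   # relative distance
          if dist < 0.08:
              kept.append((D, lam))

      post = np.array(kept)
      print(f"accepted {len(post)} of 100000 draws")
      print("posterior mean D      =", round(post[:, 0].mean(), 1))
      print("posterior mean lambda =", round(post[:, 1].mean(), 4))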

  7. Synthesis of designed materials by laser-based direct metal deposition technique: Experimental and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Qi, Huan

    Direct metal deposition (DMD), a laser-cladding based solid freeform fabrication technique, is capable of depositing multiple materials at desired composition which makes this technique a flexible method to fabricate heterogeneous components or functionally-graded structures. The inherently rapid cooling rate associated with the laser cladding process enables extended solid solubility in nonequilibrium phases, offering the possibility of tailoring new materials with advanced properties. This technical advantage opens the area of synthesizing a new class of materials designed by topology optimization method which have performance-based material properties. For better understanding of the fundamental phenomena occurring in multi-material laser cladding with coaxial powder injection, a self-consistent 3-D transient model was developed. Physical phenomena including laser-powder interaction, heat transfer, melting, solidification, mass addition, liquid metal flow, and species transportation were modeled and solved with a controlled-volume finite difference method. Level-set method was used to track the evolution of liquid free surface. The distribution of species concentration in cladding layer was obtained using a nonequilibrium partition coefficient model. Simulation results were compared with experimental observations and found to be reasonably matched. Multi-phase material microstructures which have negative coefficients of thermal expansion were studied for their DMD manufacturability. The pixel-based topology-optimal designs are boundary-smoothed by Bezier functions to facilitate toolpath design. It is found that the inevitable diffusion interface between different material-phases degrades the negative thermal expansion property of the whole microstructure. A new design method is proposed for DMD manufacturing. Experimental approaches include identification of laser beam characteristics during different laser-powder-substrate interaction conditions, an investigation of extended solubility in multi-material laser cladding, and a study of DMD manufacturing technology for its impact on energy and environment with the comparison of traditional machining process. Experimental results show the feasibility of depositing multiple materials at arbitrary compositions and forming clad with unlimited solubility and uniform distribution in DMD process. DMD technology presents great potential for reducing energy consumption and environmental impact in parts repairing/remanufacturing and situations where the part to be built has small solid-to-cavity volume ratio.

  8. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    PubMed

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them. PMID:25311906

  9. Experimental investigation of undesired stable equilibria in pumpkin shape super-pressure balloon designs

    NASA Astrophysics Data System (ADS)

    Schur, W.

    The scientific community's desire for large capacity, constant altitude, long duration stratospheric platforms is not likely going to be met by un-reinforced spherical super-pressure balloons. More likely, the pneumatic envelope for the large-scale super-pressure balloon of the future will be a tendon reinforced structure in which the tendons perform the primary pressure load confining function and the skin serves as a gas barrier and transfers the local pressure load to the tendons. NASA's Ultra Long Duration Balloon (ULDB), which is currently under development, is of that type. By separating the load carrying functions of the tendons and the skin, a number of advantages are gained. Perhaps most important is the fact that the required skin strength remains to first order independent of the balloon size. Only the size and number of tendons are dictated by the balloon size. By designing the balloon to be at least quasi statically determinate, the stress distributions are more certain, stress raisers due to fabrication imperfections are more easily controlled, and it becomes unnecessary to account for load path uncertainties by providing everywhere excessive strength and structural weight. Furthermore, it becomes possible to use for the envelope skin a visco-elastic film (polyethylene) that has proven performance in the stratospheric environment. The silhouette shape of this balloon type has prompted early researchers to name this design a "pumpkin" shape balloon. Later investigators accepted this terminology. The pumpkin shape balloon concept was adopted by NASA for its ULDB design at the end of 1998 when advantages of that design over a spherical shape design were convincingly demonstrated. Two stratospheric test flights of large-scale super-pressure balloons demonstrated the functioning of this balloon type. In the second successful flight the switch was made from an excessively strong and heavy skin, a holdover from the earlier concept of a spherical design, to a visco-elastic film. The balloons of the third and fourth full-scale test flights experienced structural problems during a campaign in Australia in 2001. Post-flight investigations identified two problems. The first problem was apparently caused by a lack of dynamic strength of the film material in its transverse direction, a property that had theretofore not been tested in balloon films. The second problem was identified through photographic evidence on the second of the two balloons. Images of the launch spool configuration and of the balloon at float altitude indicated that excess gore-width might prevent full deployment to the design shape. This is a dangerous situation, as the proper functioning of the design requires full deployment. A search of the literature confirmed one other case of flawed but stable deployment of a pumpkin shape balloon that has been investigated by researchers. This balloon is the "Endeavor", which is an adventurer balloon that was intended for manned circumnavigation. The experimental work documented in this paper sought to identify which design aspects of pumpkin shape balloons promote faulty deployment into undesired stable equilibria and which design aspects assure full deployment of pumpkin-type balloons. It is argued that the features of a constant bulge shape design (the apparent design of the "Endeavor") make it unnecessarily prone to flawed deployment. The constant bulge radius design is a superior choice, but could be improved by using a smaller bulge radius between the "tropics" of the quasi-spheroid while using a larger bulge radius for the remainder of the balloon when deployment issues become critical. In that case, of course, the strength critical region is the one with the larger bulge radius. Adequate understanding of these aspects is required to design pumpkin shape super-pressure balloons with confidence. Results from studies and tests conducted as a part of the ULDB Project are discussed.

  10. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  11. Experimental study designs to improve the evaluation of road mitigation measures for wildlife.

    PubMed

    Rytwinski, Trina; van der Ree, Rodney; Cunnington, Glenn M; Fahrig, Lenore; Findlay, C Scott; Houlahan, Jeff; Jaeger, Jochen A G; Soanes, Kylie; van der Grift, Edgar A

    2015-05-01

    An experimental approach to road mitigation that maximizes inferential power is essential to ensure that mitigation is both ecologically-effective and cost-effective. Here, we set out the need for and standards of using an experimental approach to road mitigation, in order to improve knowledge of the influence of mitigation measures on wildlife populations. We point out two key areas that need to be considered when conducting mitigation experiments. First, researchers need to get involved at the earliest stage of the road or mitigation project to ensure the necessary planning and funds are available for conducting a high quality experiment. Second, experimentation will generate new knowledge about the parameters that influence mitigation effectiveness, which ultimately allows better prediction for future road mitigation projects. We identify seven key questions about mitigation structures (i.e., wildlife crossing structures and fencing) that remain largely or entirely unanswered at the population-level: (1) Does a given crossing structure work? What type and size of crossing structures should we use? (2) How many crossing structures should we build? (3) Is it more effective to install a small number of large-sized crossing structures or a large number of small-sized crossing structures? (4) How much barrier fencing is needed for a given length of road? (5) Do we need funnel fencing to lead animals to crossing structures, and how long does such fencing have to be? (6) How should we manage/manipulate the environment in the area around the crossing structures and fencing? (7) Where should we place crossing structures and barrier fencing? We provide experimental approaches to answering each of them using example Before-After-Control-Impact (BACI) study designs for two stages in the road/mitigation project where researchers may become involved: (1) at the beginning of a road/mitigation project, and (2) after the mitigation has been constructed; highlighting real case studies when available. PMID:25704749

  12. Kriging for Simulation Metamodeling: Experimental Design, Reduced Rank Kriging, and Omni-Rank Kriging

    NASA Astrophysics Data System (ADS)

    Hosking, Michael Robert

    This dissertation improves an analyst's use of simulation by offering improvements in the utilization of kriging metamodels. There are three main contributions. First an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design from which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to the higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights similar to a nugget effect. Our primary focus will be showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our explanation through tests of reduced rank kriging's performance over different situations. In total, reduced rank kriging is a useful tool for simulation metamodeling. For the third contribution we will answer the question of how to find the best rank for reduced rank kriging. We do this by creating an alternative method which does not need to search for a particular rank. Instead it uses all potential ranks; we call this approach omnirank kriging. This modification realizes the potential gains from reduced rank kriging and provides a workable methodology for simulation metamodeling. Finally, we will demonstrate the use and value of these developments on two case studies, a clinic operation problem and a location problem. These cases will validate the value of this research. Simulation metamodeling always attempts to extract maximum information from limited data. Each one of these contributions will allow analysts to make better use of their constrained computational budgets.
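    As a concrete illustration of the "robust default" pilot design mentioned above, the sketch below draws a Latin hypercube over a handful of simulation inputs with SciPy; the number of runs, the four factors and their ranges are arbitrary assumptions rather than values from the dissertation's case studies.

```python
# Sketch of a space-filling pilot design for a kriging metamodel: a Latin
# hypercube over four hypothetical simulation inputs, scaled to assumed ranges.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4, seed=1)
unit_design = sampler.random(n=20)            # 20 pilot runs in the unit cube

lower = [0.5, 10.0, 1.0, 0.0]                 # illustrative lower bounds
upper = [2.0, 50.0, 8.0, 1.0]                 # illustrative upper bounds
design = qmc.scale(unit_design, lower, upper)
print(design[:3])                             # first three pilot runs
```

    The pilot runs would then be simulated and a kriging (Gaussian-process) metamodel fitted to the results, after which a sequential stage can add points where the metamodel is most uncertain.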

  13. Passing of northern pike and common carp through experimental barriers designed for use in wetland restoration

    USGS Publications Warehouse

    French, John R. P., III; Wilcox, Douglas A.; Nichols, S. Jerrine

    1999-01-01

    Restoration plans for Metzger Marsh, a coastal wetland on the south shore of western Lake Erie, incorporated a fish-control system designed to restrict access to the wetland by large common carp (Cyprinus carpio). Ingress fish passageways in the structure contain slots into which experimental grates of varying size and shape can be placed to selectively allow entry and transfer of other large fish species while minimizing the number of common carp to be handled. We tested different sizes and shapes of grates in experimental tanks in the laboratory to determine the best design for testing in the field. We also tested northern pike (Esox lucius) because lack of access to wetland spawning habitat has greatly reduced their populations in western Lake Erie. Based on our results, vertical bar grates were chosen for installation because common carp were able to pass through circular grates smaller than body height by compressing their soft abdomens; they passed through rectangular grates on the diagonal. Vertical bar grates with 5-cm spacing that were installed across much of the control structure should limit access of common carp larger than 34 cm total length (TL) and northern pike larger than 70 cm. Vertical bar grates selected for initial field trials in the fish passageway had spacings of 5.8 and 6.6 cm, which increased access by common carp to 40 and 47 cm TL and by northern pike to 76 and 81 cm, respectively. The percentage of potential common carp biomass (fish seeking entry) that must be handled in lift baskets in the passageway increased from 0.9 to 4.8 to 15.4 with each increase in spacing between bars. Further increases in spacing would greatly increase the number of common carp that would have to be handled. The results of field testing should be useful in designing selective fish-control systems for other wetland restoration sites adjacent to large water bodies.

  14. Design and Experimental Verification of Deployable/Inflatable Ultra-Lightweight Structures

    NASA Technical Reports Server (NTRS)

    Pai, P. Frank

    2004-01-01

    Because launch cost of a space structural system is often proportional to the launch volume and mass and there is no significant gravity in space, NASA's space exploration programs and various science missions have stimulated extensive use of ultra-lightweight deployable/inflatable structures. These structures are named here as Highly Flexible Structures (HFSs) because they are designed to undergo large displacements, rotations, and/or buckling without plastic deformation under normal operation conditions. Except recent applications to space structural systems, HFSs have been used in many mechanical systems, civil structures, aerospace vehicles, home appliances, and medical devices to satisfy space limitations, provide special mechanisms, and/or reduce structural weight. The extensive use of HFSs in today's structural engineering reveals the need of a design and analysis software and a database system with design guidelines for practicing engineers to perform computer-aided design and rapid prototyping of HFSs. Also to prepare engineering students for future structural engineering requires a new and easy-to- understand method of presenting the complex mathematics of the modeling and analysis of HFSs. However, because of the high flexibility of HFSs, many unique challenging problems in the modeling, design and analysis of HFSs need to be studied. The current state of research on HFSs needs advances in the following areas: (1) modeling of large rotations using appropriate strain measures, (2) modeling of cross-section warpings of structures, (3) how to account for both large rotations and cross- section warpings in 2D (two-dimensional) and 1D structural theories, (4) modeling of thickness thinning of membranes due to inflation pressure, pretension, and temperature change, (5) prediction of inflated shapes and wrinkles of inflatable structures, (6) development of efficient numerical methods for nonlinear static and dynamic analyses, and (7) filling the gap between geometrically exact elastic analysis and elastoplastic analysis. The objectives of this research project were: (1) to study the modeling, design, and analysis of deployable/inflatable ultra-lightweight structures, (2) to perform numerical and experimental studies on the static and dynamic characteristics and deployability of HFSs, (3) to derive guidelines for designing HFSs, (4) to develop a MATLAB toolbox for the design, analysis, and dynamic animation of HFSs, and (5) to perform experiments and establish an adequate database of post-buckling characteristics of HFSs.

  15. De Novo Peptide Design and Experimental Validation of Histone Methyltransferase Inhibitors

    PubMed Central

    Smadbeck, James; Peterson, Meghan B.; Zee, Barry M.; Garapaty, Shivani; Mago, Aashna; Lee, Christina; Giannis, Athanassios; Trojer, Patrick; Garcia, Benjamin A.; Floudas, Christodoulos A.

    2014-01-01

    Histones are small proteins critical to the efficient packaging of DNA in the nucleus. DNA–protein complexes, known as nucleosomes, are formed when the DNA winds itself around the surface of the histones. The methylation of histone residues by enhancer of zeste homolog 2 (EZH2) maintains gene repression over successive cell generations. Overexpression of EZH2 can silence important tumor suppressor genes, leading to increased invasiveness of many types of cancers. This makes the inhibition of EZH2 an important target in the development of cancer therapeutics. We employed a three-stage computational de novo peptide design method to design inhibitory peptides of EZH2. The method consists of a sequence selection stage and two validation stages for fold specificity and approximate binding affinity. The sequence selection stage consists of an integer linear optimization model that was solved to produce a rank-ordered list of amino acid sequences with increased stability in the bound peptide-EZH2 structure. These sequences were validated through the calculation of the fold specificity and approximate binding affinity of the designed peptides. Here we report the discovery of novel EZH2 inhibitory peptides using the de novo peptide design method. The computationally discovered peptides were experimentally validated in vitro using dose titrations and mechanism of action enzymatic assays. The peptide with the highest in vitro response, SQ037, was validated in nucleo using quantitative mass spectrometry-based proteomics. This peptide had an IC50 of 13.5 μM, demonstrated greater potency as an inhibitor when compared to the native and K27A mutant control peptides, and demonstrated competitive inhibition versus the peptide substrate. Additionally, this peptide demonstrated high specificity to the EZH2 target in comparison to other histone methyltransferases. The validated peptides are the first computationally designed peptides that directly inhibit EZH2. These inhibitors should prove useful for further chromatin biology investigations. PMID:24587223
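    The sequence-selection stage described above can be caricatured as a small integer linear program. The sketch below uses the PuLP modelling library with a random, position-specific score matrix as a stand-in for the paper's stability model; the peptide length, scoring and constraints are placeholders, not the published formulation.

```python
# Toy integer-linear-program analogue of sequence selection: choose exactly one
# residue per position to maximise a position-specific score.  The scores are
# random placeholders, not the paper's energy terms.
import numpy as np
import pulp

rng = np.random.default_rng(1)
residues = list("ACDEFGHIKLMNPQRSTVWY")          # the 20 natural amino acids
n_pos = 8                                        # assumed peptide length
score = {(p, a): float(rng.normal())             # placeholder position scores
         for p in range(n_pos) for a in residues}

prob = pulp.LpProblem("peptide_design", pulp.LpMaximize)
x = {(p, a): pulp.LpVariable(f"x_{p}_{a}", cat="Binary")
     for p in range(n_pos) for a in residues}

# Objective: total score of the selected sequence.
prob += pulp.lpSum(score[p, a] * x[p, a] for p in range(n_pos) for a in residues)

# Exactly one residue is chosen at each position.
for p in range(n_pos):
    prob += pulp.lpSum(x[p, a] for a in residues) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
best = "".join(max(residues, key=lambda a, p=p: x[p, a].value())
               for p in range(n_pos))
print("best-scoring sequence:", best)
```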

  16. Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.

    2006-06-01

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  17. Rugby-like hohlraum experimental designs for demonstrating x-ray drive enhancement

    NASA Astrophysics Data System (ADS)

    Amendt, Peter; Cerjan, C.; Hinkel, D. E.; Milovich, J. L.; Park, H.-S.; Robey, H. F.

    2008-01-01

    A suite of experimental designs for the Omega laser facility [Boehly et al., Opt. Commun. 133, 495 (1997)] using rugby and cylindrical hohlraums is proposed to confirm the energetics benefits of rugby-shaped hohlraums over cylinders under optimal implosion symmetry conditions. Postprocessed Dante x-ray drive measurements predict a 12-17 eV (23%-36%) peak hohlraum temperature (x-ray flux) enhancement for a 1 ns flattop laser drive history. Simulated core self-emission x-ray histories also show earlier implosion times by 200-400 ps, depending on the hohlraum case-to-capsule ratio and laser-entrance-hole size. Capsules filled with 10 or 50 atm of deuterium (DD) are predicted to give in excess of 10^10 neutrons in two-dimensional hohlraum simulations in the absence of mix, enabling DD burn history measurements for the first time in indirect-drive on Omega. Capsule designs with 50 atm of D3He are also proposed to make use of proton slowing for independently verifying the drive benefits of rugby hohlraums. Scale-5/4 hohlraum designs are also introduced to provide further margin to potential laser-plasma-induced backscatter and hot-electron production.

  18. Bioprocess development for production of alkaline protease by Bacillus pseudofirmus Mn6 through statistical experimental designs.

    PubMed

    Abdel-Fattah, Y R; El-Enshasy, H A; Soliman, N A; El-Gendi, H

    2009-04-01

    A sequential optimization strategy, based on statistical experimental designs, is employed to enhance the production of alkaline protease by a Bacillus pseudofirmus local isolate. To screen the bioprocess parameters significantly influencing the alkaline protease activity, a 2-level Plackett-Burman design was applied. Among 15 variables tested, the pH, peptone, and incubation time were selected based on their high positive significant effect on the protease activity. A near-optimum medium formulation was then obtained that increased the protease yield by more than 5-fold. Thereafter, the response surface methodology (RSM) was adopted to acquire the best process conditions among the selected variables, where a 3-level Box-Behnken design was utilized to create a polynomial quadratic model correlating the relationship between the three variables and the protease activity. The optimal combination of the major medium constituents for alkaline protease production, evaluated using the nonlinear optimization algorithm of EXCEL-Solver, was as follows: pH of 9.5, 2% peptone, and incubation time of 60 h. The predicted optimum alkaline protease activity was 3,213 U/ml/min, which was 6.4 times the activity with the basal medium. PMID:19420994
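    The Box-Behnken/RSM step lends itself to a short sketch: below, the 15-run three-factor Box-Behnken design is written out in coded levels and a full quadratic model is fitted by least squares. The response values are simulated from a made-up quadratic, not the reported protease activities.

```python
# Sketch of the response-surface step: a 3-factor Box-Behnken design in coded
# levels (-1, 0, +1) and a full quadratic model fitted by least squares.
# The response is simulated from a made-up quadratic, not the paper's data.
import itertools
import numpy as np

runs = []
for i, j in itertools.combinations(range(3), 2):        # factor pairs at +/-1
    for a, b in itertools.product((-1.0, 1.0), repeat=2):
        point = [0.0, 0.0, 0.0]
        point[i], point[j] = a, b
        runs.append(point)
runs += [[0.0, 0.0, 0.0]] * 3                           # centre points
X = np.array(runs)                                      # 15 runs x 3 factors

rng = np.random.default_rng(0)
y = (3000 - 400 * X[:, 0]**2 - 250 * X[:, 2]**2
     + 150 * X[:, 1] + 80 * X[:, 0] * X[:, 1]
     + rng.normal(0.0, 30.0, len(X)))                   # placeholder activities

# Model matrix: intercept, linear, squared and two-factor interaction terms.
cols = [np.ones(len(X))]
cols += [X[:, k] for k in range(3)]
cols += [X[:, k]**2 for k in range(3)]
cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(3), 2)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 1))
```

    The fitted surface can then be handed to any nonlinear optimiser (the study used Excel's Solver) to locate the predicted optimum combination of pH, peptone and incubation time.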

  19. Rugby-like hohlraum experimental designs for demonstrating x-ray drive enhancement

    SciTech Connect

    Amendt, Peter; Cerjan, C.; Hinkel, D. E.; Milovich, J. L.; Park, H.-S.; Robey, H. F.

    2008-01-15

    A suite of experimental designs for the Omega laser facility [Boehly et al., Opt. Commun. 133, 495 (1997)] using rugby and cylindrical hohlraums is proposed to confirm the energetics benefits of rugby-shaped hohlraums over cylinders under optimal implosion symmetry conditions. Postprocessed Dante x-ray drive measurements predict a 12-17 eV (23%-36%) peak hohlraum temperature (x-ray flux) enhancement for a 1 ns flattop laser drive history. Simulated core self-emission x-ray histories also show earlier implosion times by 200-400 ps, depending on the hohlraum case-to-capsule ratio and laser-entrance-hole size. Capsules filled with 10 or 50 atm of deuterium (DD) are predicted to give in excess of 10^10 neutrons in two-dimensional hohlraum simulations in the absence of mix, enabling DD burn history measurements for the first time in indirect-drive on Omega. Capsule designs with 50 atm of D3He are also proposed to make use of proton slowing for independently verifying the drive benefits of rugby hohlraums. Scale-5/4 hohlraum designs are also introduced to provide further margin to potential laser-plasma-induced backscatter and hot-electron production.

  20. Design considerations for ITER (International Thermonuclear Experimental Reactor) toroidal field coils

    SciTech Connect

    Kalsi, S.S.; Lousteau, D.C.; Miller, J.R.

    1987-01-01

    The International Thermonuclear Experimental Reactor (ITER) is a new tokamak design project with joint participation from Europe, Japan, the Union of Soviet Socialist Republics (USSR), and the United States. This paper describes a magnetic and mechanical design methodology for toroidal field (TF) coils that employs Nb3Sn superconductor technology. Coil winding is sized by using conductor concepts developed for the US TIBER concept. The nuclear heating generated during operation is removed from the windings by helium flowing through the conductor. The heat in the coil case is removed through a separate cooling circuit operating at approximately 20 K. Manifold concepts are presented for the complete coil cooling system. Also included are concepts for the coil structural arrangement. The effects of in-plane and out-of-plane loads are included in the design considerations for the windings and case. Concepts are presented for reacting these loads with a minimum amount of additional structural material. Concepts discussed in this paper could be considered for the ITER TF coils. 6 refs., 5 figs., 1 tab.

  1. Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729

  2. Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.

    PubMed

    Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan

    2014-01-01

    The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets using experimental mixture design for treatment of Helicobacter pylori provided by prolonged gastric residence time and controlled plasma level. Ten different formulations were generated based on different molecular weights of hypromellose (HPMC K100, K4M, K15M) by using a simplex lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas generating agents. Tablets were prepared by a wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th h (CDR 8th) were statistically analyzed to get the optimized formulation (OF). The optimized formulation, which gave a floating lag time lower than 15 s and a total floating time of more than 10 h, was analyzed and compared with the target for CDR 8th (80%). A good agreement was shown between predicted and actual values of CDR 8th with a variation lower than 1%. The activity of clarithromycin contained in the optimized formula against H. pylori was quantified using a well diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and clarithromycin activity. PMID:25272652
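    The simplex lattice idea itself is easy to reproduce: for q mixture components and lattice degree m, the candidate blends are all proportion vectors whose entries are multiples of 1/m and sum to one. The sketch below enumerates a {3, 3} lattice for the three HPMC grades, which happens to give ten blends; the actual Minitab design and component roles in the study may differ.

```python
# Sketch of a simplex-lattice mixture design: for q components and degree m,
# keep every composition whose proportions are multiples of 1/m and sum to 1.
from itertools import product

def simplex_lattice(q, m):
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            points.append(tuple(c / m for c in combo))
    return points

design = simplex_lattice(q=3, m=3)            # 10 candidate blends
for blend in design:
    print("K100: %.2f  K4M: %.2f  K15M: %.2f" % blend)
print(len(design), "runs")
```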

  3. Design and experimental validation for direct-drive fault-tolerant permanent-magnet vernier machines.

    PubMed

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729

  4. A Fundamental Study of Smoldering with Emphasis on Experimental Design for Zero-G

    NASA Technical Reports Server (NTRS)

    Fernandez-Pello, Carlos; Pagni, Patrick J.

    1995-01-01

    A research program to study smoldering combustion with emphasis on the design of an experiment to be conducted in the space shuttle was conducted at the Department of Mechanical Engineering, University of California, Berkeley. The motivation of the research is the interest in smoldering both as a fundamental combustion problem and as a serious fire risk. Research conducted included theoretical and experimental studies that have brought considerable new information about smolder combustion, the effect that buoyancy has on the process, and specific information for the design of a space experiment. Experiments were conducted at normal gravity, in opposed and forward mode of propagation and in the upward and downward direction to determine the effect and range of influence of gravity on smolder. Experiments were also conducted in microgravity, in a drop tower and in parabolic aircraft flights, where the brief microgravity periods were used to analyze transient aspects of the problem. Significant progress was made on the study of one-dimensional smolder, particularly in the opposed-flow configuration. These studies provided enough information to design a small-scale space-based experiment that was successfully conducted in the Spacelab Glovebox in the June 1992 USML-1/STS-50 mission of the Space Shuttle Columbia.

  5. Design and experimental validation of a simple controller for a multi-segment magnetic crawler robot

    NASA Astrophysics Data System (ADS)

    Kelley, Leah; Ostovari, Saam; Burmeister, Aaron B.; Talke, Kurt A.; Pezeshkian, Narek; Rahimi, Amin; Hart, Abraham B.; Nguyen, Hoa G.

    2015-05-01

    A novel, multi-segmented magnetic crawler robot has been designed for ship hull inspection. In its simplest version, passive linkages that provide two degrees of relative motion connect front and rear driving modules, so the robot can twist and turn. This permits its navigation over surface discontinuities while maintaining its adhesion to the hull. During operation, the magnetic crawler receives forward and turning velocity commands from either a tele-operator or high-level, autonomous control computer. A low-level, embedded microcomputer handles the commands to the driving motors. This paper presents the development of a simple, low-level, leader-follower controller that permits the rear module to follow the front module. The kinematics and dynamics of the two-module magnetic crawler robot are described. The robot's geometry, kinematic constraints and the user-commanded velocities are used to calculate the desired instantaneous center of rotation and the corresponding central-linkage angle necessary for the back module to follow the front module when turning. The commands to the rear driving motors are determined by applying PID control on the error between the desired and measured linkage angle position. The controller is designed and tested using Matlab Simulink. It is then implemented and tested on an early two-module magnetic crawler prototype robot. Results of the simulations and experimental validation of the controller design are presented.
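    A minimal version of the low-level loop described above (PID on the error between the desired and measured linkage angle, producing a rate command for the rear module) is sketched below; the gains, time step and first-order actuator lag are illustrative assumptions, not the robot's identified dynamics.

```python
# Sketch of the leader-follower idea: PID control of the central-linkage angle.
# Gains, time step and the first-order "plant" are illustrative placeholders.
import numpy as np

def simulate(theta_des, kp=4.0, ki=1.5, kd=0.3, dt=0.02, tau=0.2):
    """Track a desired linkage-angle profile with a PID-generated rate command."""
    theta, rate, integral, prev_err = 0.0, 0.0, 0.0, 0.0
    out = []
    for target in theta_des:
        err = target - theta
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv   # angular-rate command
        rate += dt / tau * (u - rate)               # first-order actuator lag
        theta += dt * rate
        out.append(theta)
    return np.array(out)

t = np.arange(0.0, 8.0, 0.02)
desired = np.where(t < 1.0, 0.0, 0.4)               # 0.4 rad step at t = 1 s
tracked = simulate(desired)
print("tracking error at end of run (rad): %.4f" % abs(desired[-1] - tracked[-1]))
```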

  6. Modeling NIF Experimental Designs with Adaptive Mesh Refinement and Lagrangian Hydrodynamics

    SciTech Connect

    Koniges, A E; Anderson, R W; Wang, P; Gunney, B N; Becker, R; Eder, D C; MacGowan, B J

    2005-08-31

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  7. Supersonic, nonlinear, attached-flow wing design for high lift with experimental validation

    NASA Technical Reports Server (NTRS)

    Pittman, J. L.; Miller, D. S.; Mason, W. H.

    1984-01-01

    Results of the experimental validation are presented for the three dimensional cambered wing which was designed to achieve attached supercritical cross flow for lifting conditions typical of supersonic maneuver. The design point was a lift coefficient of 0.4 at Mach 1.62 and 12 deg angle of attack. Results from the nonlinear full potential method are presented to show the validity of the design process along with results from linear theory codes. Longitudinal force and moment data and static pressure data were obtained in the Langley Unitary Plan Wind Tunnel at Mach numbers of 1.58, 1.62, 1.66, 1.70, and 2.00 over an angle of attack range of 0 to 14 deg at a Reynolds number of 2.0 x 10^6 per foot. Oil flow photographs of the upper surface were obtained at M = 1.62 for alpha ≈ 8, 10, 12, and 14 deg.

  8. An integrated Taguchi and response surface methodological approach for the optimization of an HPLC method to determine glimepiride in a supersaturatable self-nanoemulsifying formulation

    PubMed Central

    Dash, Rajendra Narayan; Mohammed, Habibuddin; Humaira, Touseef

    2015-01-01

    We studied the application of Taguchi orthogonal array (TOA) design during the development of an isocratic stability-indicating HPLC method for glimepiride. As per the TOA design, twenty-seven experiments were conducted by varying six chromatographic factors. The percentage of organic phase had the most significant effect (p < 0.001) on retention time, while buffer pH had the most significant (p < 0.001) effect on tailing factor and theoretical plates. The TOA design has a shortcoming: it identifies only linear effects, while ignoring quadratic and interaction effects. Hence, a response surface model for each response was created including the linear, quadratic and interaction terms. The developed models for each response were found to be well predictive, bearing acceptable adjusted correlation coefficients (0.9152 for retention time, 0.8985 for tailing factor and 0.8679 for theoretical plates). The models were found to be significant (p < 0.001), having a high F value for each response (15.76 for retention time, 13.12 for tailing factor and 9.99 for theoretical plates). The optimal chromatographic condition uses acetonitrile – potassium dihydrogen phosphate (pH 4.0; 30 mM) (50:50, v/v) as the mobile phase. The temperature, flow rate and injection volume were selected as 35 ± 2 °C, 1.0 mL min−1 and 20 μL respectively. The method was validated as per ICH guidelines and was found to be specific for analyzing glimepiride from a novel supersaturatable self-nanoemulsifying formulation. PMID:26903773
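    The "augment the orthogonal-array analysis with a quadratic response surface" idea can be illustrated compactly with an ordinary-least-squares fit that includes linear, squared and interaction terms. The two factors and the synthetic retention-time data below are placeholders (the study varied six factors over an L27), so only the modelling pattern carries over.

```python
# Sketch of augmenting an orthogonal-array screening with a quadratic response
# surface: an OLS fit with linear, squared and interaction terms.  Two factors
# and synthetic retention times are used purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
organic, ph = np.meshgrid([40.0, 50.0, 60.0], [3.0, 4.0, 5.0])
df = pd.DataFrame({"organic": organic.ravel(), "ph": ph.ravel()})
df["rt"] = (20.0 - 0.22 * df.organic + 1.5 * df.ph
            + 0.004 * (df.organic - 50.0) ** 2
            + rng.normal(0.0, 0.05, len(df)))           # placeholder response

model = smf.ols("rt ~ organic + ph + I(organic**2) + I(ph**2) + organic:ph",
                data=df).fit()
print("adjusted R^2: %.3f" % model.rsquared_adj)
print("model F: %.1f  (p = %.3g)" % (model.fvalue, model.f_pvalue))
```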

  9. Leaching of Natural Gravel and Concrete by CO2 - Experimental Design, Leaching Behaviour and Dissolution Rates

    NASA Astrophysics Data System (ADS)

    Fuchs, Rita; Leis, Albrecht; Mittermayr, Florian; Harer, Gerhard; Wagner, Hanns; Reichl, Peter; Dietzel, Martin

    2015-04-01

    The durability of building material in aggressive aqueous environments is a key factor for evaluating the product quality and application as well as of high economic interest. Therefore, aspects of durability have been frequently investigated with different approaches such as monitoring, modelling and experimental work. In the present study an experimental approach based on leaching behaviour of natural calcite-containing siliceous gravel used as backfill material in tunnelling and sprayed concrete by CO2 was developed. CO2 was introduced to form carbonic acid, which is known as an important agent to induce chemical attack. The goals of this study were (i) to develop a proper experimental design to survey the leaching of building materials on-line, (ii) to decipher individual reaction mechanisms and kinetics and (iii) to estimate time-resolved chemical resistance of the used material throughout leaching. A combined flow through reactor unit was successfully installed, where both open and closed system conditions can be easily simulated by changing flow directions and rates. The chemical compositions of the experimental solutions were adjusted by CO2 addition at pHstat conditions and monitored in-situ by pH/SpC electrodes and by analysing the chemical composition of samples throughout an experimental run. From the obtained data e.g. dissolution rates with respect to calcite were obtained for the gravel material, which were dependent on the individual calcite content of the leached material. The rates were found to reflect the flow rate conditions, and the kinetic data lay within the range expected from dissolution experiments in the CaCO3-CO2-H2O system. In case of concrete the reactions throughout the leaching experiment were complex. Coupled dissolution and precipitation phenomena (e.g. portlandite dissolution, calcite formation) occurred. The coupled reactions can be followed by the evolution of the solution chemistry. The overall rates of elemental removal from the gravel and concrete samples were used to assess their durability at various boundary environmental conditions.

  10. Design of Experiment Approach to the Study of Parameters in the New Die Set Tube Hydroforming

    NASA Astrophysics Data System (ADS)

    Elyasi, M.; Hossinzade, M.

    2011-08-01

    This paper outlines the Taguchi optimization methodology, which is applied to optimize the effective parameters in forming cylindrical tubes with a new die set for the tube hydroforming process. The process parameters evaluated in this research are axial feeding and the initial and final forming pressures. A design of experiments based upon Taguchi's L9 orthogonal array was used, and analysis of variance (ANOVA) was employed to analyze the effect of these parameters on the die cavity filling of the deformed tubes. The analysis of the results showed that the optimal condition for achieving high precision in die cavity filling is a combination of high initial and final pressures with a suitable punch stroke. Finally, a confirmation test was carried out with the optimal combination of parameters, and it was shown that the Taguchi method is suitable for this optimization.
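    For readers who want to reproduce the Taguchi step, the sketch below lays out the standard L9(3^4) orthogonal array, attaches made-up die-cavity-filling responses, and computes larger-is-better S/N ratios together with the per-level mean S/N for each factor; the response values are placeholders, not the paper's measurements.

```python
# Sketch of an L9 Taguchi analysis: the standard L9(3^4) orthogonal array,
# larger-is-better S/N ratios, and per-level mean S/N for each factor.
# Response values are made-up placeholders, not the hydroforming data.
import numpy as np

L9 = np.array([[1, 1, 1, 1],
               [1, 2, 2, 2],
               [1, 3, 3, 3],
               [2, 1, 2, 3],
               [2, 2, 3, 1],
               [2, 3, 1, 2],
               [3, 1, 3, 2],
               [3, 2, 1, 3],
               [3, 3, 2, 1]])
factors = ["axial feed", "initial pressure", "final pressure", "(unused)"]

# Placeholder responses: die cavity filling (%) for each of the nine runs.
y = np.array([82.0, 85.0, 88.0, 87.0, 91.0, 84.0, 90.0, 86.0, 93.0])

# Larger-is-better S/N; with one replicate per run this reduces to 20*log10(y).
sn = -10.0 * np.log10(1.0 / y**2)

for col, name in enumerate(factors):
    means = [sn[L9[:, col] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"{name:17s} level-mean S/N: " + "  ".join(f"{m:6.2f}" for m in means))
```

    The level with the highest mean S/N for each factor is the one the Taguchi analysis would recommend, and an ANOVA on the same table apportions the contribution of each factor to the variation.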

  11. Experimental design for estimating parameters of rate-limited mass transfer: Analysis of stream tracer studies

    NASA Astrophysics Data System (ADS)

    Wagner, Brian J.; Harvey, Judson W.

    Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. For the opposite conditions (DaI>>1.0), solute exchange rates are fast relative to stream-water velocity and all solute is exchanged with the storage zone over the experimental reach. As DaI increases, tracer dispersion caused by hyporheic exchange eventually reaches an equilibrium condition and storage-zone exchange parameters become essentially nonidentifiable.
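    For reference, the experimental Damkohler number used in this type of transient-storage analysis is commonly written as below (notation recalled from the transient-storage literature, so it should be checked against the paper itself): alpha is the stream-storage exchange coefficient, A and A_s are the cross-sectional areas of the channel and the storage zone, L is the reach length, and u is the stream-water velocity.

```latex
% Experimental Damkohler number for a stream reach with transient storage
% (hedged: recalled form, verify against the source).
\[
  \mathrm{DaI} \;=\; \frac{\alpha \left( 1 + A/A_{s} \right) L}{u}
\]
```

    With this form, DaI near 1 means the exchange timescale is comparable to the advective travel time through the reach, which is where the abstract reports the lowest parameter uncertainties.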

  12. Experimental design for estimating parameters of rate-limited mass transfer: Analysis of stream tracer studies

    USGS Publications Warehouse

    Wagner, B.J.; Harvey, J.W.

    1997-01-01

    Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. For the opposite conditions (DaI >> 1.0), solute exchange rates are fast relative to stream-water velocity and all solute is exchanged with the storage zone over the experimental reach. As DaI increases, tracer dispersion caused by hyporheic exchange eventually reaches an equilibrium condition and storage-zone exchange parameters become essentially nonidentifiable.

  13. Effects of environmental and experimental design factors on culturing and testing of Ceriodaphnia dubia

    SciTech Connect

    Cooney, J.D.; DeGraeve, G.M.; Moore, E.L.; Lenoble, B.J.; Pollock, T.L.; Smith, G.J. )

    1989-09-01

    EPA has developed a 7-day toxicity test to evaluate effects of effluents on Ceriodaphnia dubia survival and reproduction. This study evaluated effects of laboratory conditions and culturing and test procedures on Ceriodaphnia survival and reproduction. Temperature, food concentration, chamber size, solution renewal frequency, light quality, illumination, photoperiod, water type, and test organism age were evaluated to determine how these various factors affected individual culturing success and the acceptability and reproducibility of toxicity test results. Test conditions proposed by EPA were evaluated by varying individual environmental or experimental design conditions to determine a range of responses for survival and reproduction under controlled conditions, both with and without a reference toxicant. For those conditions not specified by EPA (e.g., water hardness, light quality), a commonly used range of conditions was evaluated. 19 refs., 4 tabs.

  14. Optimization of glucosinolate separation by micellar electrokinetic capillary chromatography using a Doehlert's experimental design.

    PubMed

    Paugam, L; Ménard, R; Larue, J P; Thouvenot, D

    1999-12-01

    The aim of this study was to optimize, by micellar electrokinetic chromatography, the separation of four glucosinolates: sinigrin, glucobrassicin and methoxyglucobrassicin, which are involved in Cruciferae resistance mechanisms, and glucotropaeolin, used as an internal standard. The separation borate buffer, which contained sodium dodecyl sulphate, tetramethylammonium hydroxide and methanol, was first optimized by using a three-variable Doehlert experimental design. The optimum concentrations found enabled, for the first time, an acceptable resolution to be obtained between the two indole glucosinolates, glucobrassicin and methoxyglucobrassicin. Modifications of the method, such as a capillary pre-rinse with pure borate buffer and a step change in voltage during the experiment, were performed to improve the resolutions between glucosinolates and to reduce the analysis time. This method was validated by statistical analysis and showed good linearity, repeatability and reproducibility. PMID:10630880
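    To make the design type concrete, the snippet below generates the coded levels of a two-variable Doehlert design (a centre point plus a regular hexagon); the study itself used the three-variable analogue, so this is only meant to illustrate the geometry of the uniform-shell design.

```python
# Two-variable Doehlert design in coded units: a centre point plus the six
# vertices of a regular hexagon.  Shown only to illustrate the design geometry;
# the glucosinolate study used the three-variable analogue.
import numpy as np

angles = np.deg2rad(np.arange(0, 360, 60))
design = np.vstack([[0.0, 0.0],                      # centre point
                    np.column_stack([np.cos(angles), np.sin(angles)])])
print(np.round(design, 3))   # 7 runs: (0, 0), (+-1, 0), (+-0.5, +-0.866)
```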

  15. Single case experimental designs: introduction to a special issue of Neuropsychological Rehabilitation.

    PubMed

    Evans, Jonathan J; Gast, David L; Perdices, Michael; Manolov, Rumen

    2014-01-01

    This paper introduces the Special Issue of Neuropsychological Rehabilitation on Single Case Experimental Design (SCED) methodology. SCED studies have a long history of use in evaluating behavioural and psychological interventions, but in recent years there has been a resurgence of interest in SCED methodology, driven in part by the development of standards for conducting and reporting SCED studies. Although there is consensus on some aspects of SCED methodology, the question of how SCED data should be analysed remains unresolved. This Special Issue includes two papers discussing aspects of conducting SCED studies, five papers illustrating use of SCED methodology in clinical practice, and nine papers that present different methods of SCED data analysis. A final Discussion paper summarises points of agreement, highlights areas where further clarity is needed, and ends with a set of resources that will assist researchers in conducting and analysing SCED studies. PMID:24766415

  16. Visual analysis in single case experimental design studies: brief review and guidelines.

    PubMed

    Lane, Justin D; Gast, David L

    2014-01-01

    Visual analysis of graphic displays of data is a cornerstone of studies using a single case experimental design (SCED). Data are graphed for each participant during a study with trend, level, and stability of data assessed within and between conditions. Reliable interpretations of effects of an intervention are dependent on researchers' understanding and use of systematic procedures. The purpose of this paper is to provide readers with a rationale for visual analysis of data when using a SCED, a step-by-step guide for conducting a visual analysis of graphed data, as well as to highlight considerations for persons interested in using visual analysis to evaluate an intervention, especially the importance of collecting reliability data for dependent measures and fidelity of implementation of study procedures. PMID:23883189

  17. Experimental Design and Interpretation of Functional Neuroimaging Studies of Cognitive Processes

    PubMed Central

    Caplan, David

    2008-01-01

    This article discusses how the relation between experimental and baseline conditions in functional neuroimaging studies affects the conclusions that can be drawn from a study about the neural correlates of components of the cognitive system and about the nature and organization of those components. I argue that certain designs in common use—in particular the contrast of qualitatively different representations that are processed at parallel stages of a functional architecture—can never identify the neural basis of a cognitive operation and have limited use in providing information about the nature of cognitive systems. Other types of designs—such as ones that contrast representations that are computed in immediately sequential processing steps and ones that contrast qualitatively similar representations that are parametrically related within a single processing stage—are more easily interpreted. PMID:17979122

  18. Compact infrared cryogenic wafer-level camera: design and experimental validation.

    PubMed

    de la Barrière, Florence; Druart, Guillaume; Guérineau, Nicolas; Lasfargues, Gilles; Fendler, Manuel; Lhermet, Nicolas; Taboury, Jean

    2012-03-10

    We present a compact infrared cryogenic multichannel camera with a wide field of view equal to 120°. By merging the optics with the detector, the concept is compatible with both cryogenic constraints and wafer-level fabrication. The design strategy of such a camera is described, as well as its fabrication and integration process. Its characterization has been carried out in terms of the modulation transfer function and the noise equivalent temperature difference (NETD). The optical system is diffraction limited. By cooling the optics, we achieve a very low NETD of 15 mK compared with traditional infrared cameras. A postprocessing algorithm that aims at reconstructing a well-sampled image from the set of undersampled raw subimages produced by the camera is proposed and validated on experimental images. PMID:22410982

  19. Using experimental design modules for process characterization in manufacturing/materials processes laboratories

    NASA Technical Reports Server (NTRS)

    Ankenman, Bruce; Ermer, Donald; Clum, James A.

    1994-01-01

    Modules dealing with statistical experimental design (SED), process modeling and improvement, and response surface methods have been developed and tested in two laboratory courses. One course was a manufacturing processes course in Mechanical Engineering and the other course was a materials processing course in Materials Science and Engineering. Each module is used as an 'experiment' in the course with the intent that subsequent course experiments will use SED methods for analysis and interpretation of data. Evaluation of the modules' effectiveness has been done by both survey questionnaires and inclusion of the module methodology in course examination questions. Results of the evaluation have been very positive. Those evaluation results and details of the modules' content and implementation are presented. The modules represent an important component for updating laboratory instruction and to provide training in quality for improved engineering practice.

  20. Experimental design to generate strong shear layers in a high-energy-density plasma

    NASA Astrophysics Data System (ADS)

    Harding, E. C.; Drake, R. P.; Aglitskiy, Y.; Gillespie, R. S.; Grosskopf, M. J.; Weaver, J. L.; Velikovich, A. L.; Visco, A.; Ditmar, J. R.

    2010-06-01

    The development of a new experimental system for generating a strong shear flow in a high-energy-density plasma is described in detail. The targets were designed with the goal of producing a diagnosable Kelvin-Helmholtz (KH) instability, which plays an important role in the transition to turbulence but remains relatively unexplored in the high-energy-density regime. To generate the shear flow, the Nike laser was used to drive a flow of Al plasma over a low-density foam surface with an initial perturbation. The interaction of the Al and foam was captured with a spherical crystal imager using 1.86 keV X-rays. The selection of the individual target components is discussed, and results are presented.