Science.gov

Sample records for taguchi experimental design

  1. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
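
    The quadratic loss function referred to above can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not taken from the paper; the target value, cost coefficient k and measurement data are hypothetical.

      # Minimal illustration of the Taguchi quadratic loss function L(y) = k * (y - m)**2,
      # where m is the target value and k is a cost coefficient. Values are hypothetical.

      def taguchi_loss(y, target, k):
          """Loss for a single measured response y."""
          return k * (y - target) ** 2

      def average_loss(measurements, target, k):
          """Average loss over a sample; it grows with both off-target mean and spread,
          which is why improving reproducibility matters as much as hitting the target."""
          return sum(taguchi_loss(y, target, k) for y in measurements) / len(measurements)

      # Two hypothetical processes with the same mean but different scatter:
      process_a = [9.8, 10.0, 10.2, 9.9, 10.1]   # tight around the target of 10
      process_b = [9.0, 11.0, 10.5, 9.5, 10.0]   # same mean, more scatter

      for name, data in [("A", process_a), ("B", process_b)]:
          print(name, round(average_loss(data, target=10.0, k=2.0), 3))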

  2. Microcosm assays and Taguchi experimental design for treatment of oil sludge containing high concentration of hydrocarbons.

    PubMed

    Castorena-Cortés, G; Roldán-Carrillo, T; Zapata-Peñasco, I; Reyes-Avila, J; Quej-Aké, L; Marín-Cruz, J; Olguín-Lora, P

    2009-12-01

    Microcosm assays and a Taguchi experimental design were used to assess the biodegradation of an oil sludge produced by a gas processing unit. The study showed that biodegradation of the sludge sample is feasible despite the high level of pollutants and the complexity of the sludge. The physicochemical and microbiological characterization of the sludge revealed a high concentration of hydrocarbons (334,766+/-7001 mg kg(-1) dry matter, d.m.) containing a variety of compounds with between 6 and 73 carbon atoms in their structure, whereas the concentration of Fe was 60,000 mg kg(-1) d.m. and that of sulfide was 26,800 mg kg(-1) d.m. A Taguchi L9 experimental design comprising 4 variables (moisture, nitrogen source, surfactant concentration and oxidant agent) at 3 levels was performed, showing that moisture and nitrogen source are the major variables affecting CO2 production and total petroleum hydrocarbons (TPH) degradation. The best experimental treatment yielded a TPH removal of 56,092 mg kg(-1) d.m. and was carried out under the following conditions: 70% moisture, no oxidant agent, 0.5% surfactant and NH4Cl as nitrogen source.
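
    For readers unfamiliar with the L9 layout used here, the sketch below lists the standard L9(3^4) orthogonal array and maps its coded levels onto the four factors named in the abstract. The specific level values are illustrative placeholders, not the settings used in the study.

      # Standard L9(3^4) orthogonal array: 9 runs, 4 columns, 3 levels (coded 0, 1, 2).
      L9 = [
          (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
          (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
          (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
      ]

      # Hypothetical level settings for the four factors in the abstract (placeholders only).
      levels = {
          "moisture_%":      [50, 60, 70],
          "nitrogen_source": ["NH4Cl", "NaNO3", "urea"],
          "surfactant_%":    [0.0, 0.5, 1.0],
          "oxidant_agent":   ["none", "low", "high"],
      }

      factors = list(levels)
      for run, row in enumerate(L9, start=1):
          setting = {f: levels[f][code] for f, code in zip(factors, row)}
          print(f"trial {run}: {setting}")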

  3. A Taguchi experimental design study of twin-wire electric arc sprayed aluminum coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Johnson, R.W.; Irons, G.; Kratochvil, W.R.; Riggs, W.L. II

    1992-01-01

    An experimental study was conducted on the twin-wire electric arc spraying of aluminum coatings. This aluminum wire system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic experiments. Experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical process parameters in a systematic design of experiments in order to display the range of processing conditions and their effect on the resultant coating. The coatings were characterized by hardness tests, optical metallography, and image analysis. The paper discusses coating qualities with respect to hardness, roughness, deposition efficiency, and microstructure. The study attempts to correlate the features of the coatings with the changes in operating parameters. A numerical model of the process is presented including gas, droplet, and coating dynamics.

  4. A Taguchi experimental design study of twin-wire electric arc sprayed aluminum coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Johnson, R.W.; Irons, G.; Kratochvil, W.R.; Riggs, W.L. II

    1992-08-01

    An experimental study was conducted on the twin-wire electric arc spraying of aluminum coatings. This aluminum wire system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic experiments. Experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical process parameters in a systematic design of experiments in order to display the range of processing conditions and their effect on the resultant coating. The coatings were characterized by hardness tests, optical metallography, and image analysis. The paper discusses coating qualities with respect to hardness, roughness, deposition efficiency, and microstructure. The study attempts to correlate the features of the coatings with the changes in operating parameters. A numerical model of the process is presented including gas, droplet, and coating dynamics.

  5. Parametric appraisal of process parameters for adhesion of plasma sprayed nanostructured YSZ coatings using Taguchi experimental design.

    PubMed

    Mantry, Sisir; Mishra, Barada K; Chakraborty, Madhusudan

    2013-01-01

    This paper presents the application of the Taguchi experimental design to the development of nanostructured yttria-stabilized zirconia (YSZ) coatings by the plasma spraying process. It describes the dependence of the adhesion strength of as-sprayed nanostructured YSZ coatings on various process parameters, and the effect of those process parameters on performance output has been studied using Taguchi's L16 orthogonal array design. Particle velocity prior to impacting the substrate, stand-off distance, and particle temperature are found to be the most significant parameters affecting the bond strength. To achieve retention of the nanostructure, the molten state of the nanoagglomerates (temperature and velocity) was monitored using a particle diagnostics tool. A maximum adhesion strength of 40.56 MPa was obtained experimentally by selecting the optimum levels of the selected factors. The enhanced bond strength of the nano-YSZ coating may be attributed to higher interfacial toughness due to cracks being interrupted by adherent nanozones.

  6. Parametric Appraisal of Process Parameters for Adhesion of Plasma Sprayed Nanostructured YSZ Coatings Using Taguchi Experimental Design

    PubMed Central

    Mantry, Sisir; Mishra, Barada K.; Chakraborty, Madhusudan

    2013-01-01

    This paper presents the application of the Taguchi experimental design to the development of nanostructured yttria-stabilized zirconia (YSZ) coatings by the plasma spraying process. It describes the dependence of the adhesion strength of as-sprayed nanostructured YSZ coatings on various process parameters, and the effect of those process parameters on performance output has been studied using Taguchi's L16 orthogonal array design. Particle velocity prior to impacting the substrate, stand-off distance, and particle temperature are found to be the most significant parameters affecting the bond strength. To achieve retention of the nanostructure, the molten state of the nanoagglomerates (temperature and velocity) was monitored using a particle diagnostics tool. A maximum adhesion strength of 40.56 MPa was obtained experimentally by selecting the optimum levels of the selected factors. The enhanced bond strength of the nano-YSZ coating may be attributed to higher interfacial toughness due to cracks being interrupted by adherent nanozones. PMID:24288490

  7. Optimization of Wear Behavior of Magnesium Alloy AZ91 Hybrid Composites Using Taguchi Experimental Design

    NASA Astrophysics Data System (ADS)

    Girish, B. M.; Satish, B. M.; Sarapure, Sadanand; Basawaraj

    2016-06-01

    In the present paper, a statistical investigation of the wear behavior of magnesium alloy (AZ91) hybrid metal matrix composites using the Taguchi technique is reported. The composites were reinforced with SiC and graphite particles of average size 37 μm. The specimens were processed by the stir casting route. Dry sliding wear of the hybrid composites was tested on a pin-on-disk tribometer under dry conditions at different normal loads (20, 40, and 60 N), sliding speeds (1.047, 1.57, and 2.09 m/s), and compositions (1, 2, and 3 wt pct each of SiC and graphite). The design of experiments approach using the Taguchi technique was employed to statistically analyze the wear behavior of the hybrid composites. Signal-to-noise ratio and analysis of variance were used to investigate the influence of the parameters on the wear rate.
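
    As a rough illustration of the signal-to-noise analysis mentioned above, the sketch below computes the smaller-the-better S/N ratio commonly used for wear rate. The replicated wear values are hypothetical and not taken from the study.

      import math

      def sn_smaller_is_better(values):
          """Taguchi smaller-the-better signal-to-noise ratio:
          S/N = -10 * log10(mean(y^2)); a larger S/N means lower, more consistent wear."""
          mean_sq = sum(v ** 2 for v in values) / len(values)
          return -10.0 * math.log10(mean_sq)

      # Hypothetical replicated wear-rate measurements (mm^3/m) for two trial conditions.
      trial_1 = [0.012, 0.014, 0.013]
      trial_2 = [0.020, 0.031, 0.026]

      print(round(sn_smaller_is_better(trial_1), 2))  # higher value -> preferred condition
      print(round(sn_smaller_is_better(trial_2), 2))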

  8. Neutralization of red mud with pickling waste liquor using Taguchi's design of experimental methodology.

    PubMed

    Rai, Suchita; Wasewar, Kailas L; Lataye, Dilip H; Mishra, Rajshekhar S; Puttewar, Suresh P; Chaddha, Mukesh J; Mahindiran, P; Mukhopadhyay, Jyoti

    2012-09-01

    'Red mud' or 'bauxite residue', a waste generated from alumina refineries, is highly alkaline in nature with a pH of 10.5-12.5. Red mud poses serious environmental problems such as alkali seepage into ground water and alkaline dust generation. One of the options to make red mud less hazardous and environmentally benign is its neutralization with acid or an acidic waste. Hence, in the present study, neutralization of alkaline red mud was carried out using a highly acidic waste (pickling waste liquor). Pickling waste liquor is a mixture of strong acids used for descaling or cleaning surfaces in the steel making industry. The aim of the study was to examine the feasibility of the neutralization process of the two wastes using Taguchi's design of experimental methodology. This would make both wastes less hazardous and safe for disposal. The effects of slurry solids, volume of pickling liquor, stirring time and temperature on the neutralization process were investigated. The analysis of variance (ANOVA) shows that the volume of the pickling liquor is the most significant parameter, followed by the quantity of red mud, with contributions of 69.18% and 18.48%, respectively. Under the optimized parameters, a pH value of 7 can be achieved by mixing the two wastes. About 25-30% of the total soda from the red mud is neutralized and alkalinity is reduced by 80-85%. The mineralogy and morphology of the neutralized red mud have also been studied. The data presented will be useful in view of the environmental concerns of red mud disposal. PMID:22751850
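
    Percent contributions such as the 69.18% and 18.48% quoted above are conventionally obtained from the ANOVA sums of squares. The sketch below shows that calculation with hypothetical sums of squares; the factor names follow the abstract but the numbers are placeholders.

      # Percent contribution in a Taguchi ANOVA: contribution_i = SS_i / SS_total * 100.
      # The sums of squares below are hypothetical placeholders, not the study's values.

      def percent_contributions(ss_by_source):
          ss_total = sum(ss_by_source.values())
          return {src: 100.0 * ss / ss_total for src, ss in ss_by_source.items()}

      ss = {
          "pickling_liquor_volume": 41.2,
          "red_mud_quantity": 11.0,
          "stirring_time": 4.1,
          "temperature": 3.2,
          "residual_error": 2.0,
      }

      for source, pct in percent_contributions(ss).items():
          print(f"{source}: {pct:.1f}%")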

  9. Optimization of critical factors to enhance polyhydroxyalkanoates (PHA) synthesis by mixed culture using Taguchi design of experimental methodology.

    PubMed

    Venkata Mohan, S; Venkateswar Reddy, M

    2013-01-01

    Optimizing different factors is crucial for enhancement of mixed-culture bioplastics (polyhydroxyalkanoates (PHA)) production. Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was applied to evaluate the influence and specific function of eight important factors (iron, glucose concentration, VFA concentration, VFA composition, nitrogen concentration, phosphorous concentration, pH, and microenvironment) on bioplastics production. A mixed-level factor variation (2^1 × 3^7) was considered using an L18 orthogonal experimental matrix (18 experimental trials). All the factors were assigned three levels except iron concentration (two levels). Among all the factors, microenvironment influenced bioplastics production substantially (contributing 81%), followed by pH (11%) and glucose concentration (2.5%). Validation experiments were performed with the obtained optimum conditions, which resulted in improved PHA production. Good substrate degradation (as COD) of 68% was registered during PHA production. Dehydrogenase and phosphatase enzymatic activities were monitored during process operation. PMID:23201522

  10. Vertically aligned N-doped CNTs growth using Taguchi experimental design

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo M.; Fernandes, António J. S.; Ferro, Marta C.; Pinna, Nicola; Silva, Rui F.

    2015-07-01

    The Taguchi method with a parameter design L9 orthogonal array was implemented for optimizing the nitrogen incorporation in the structure of vertically aligned N-doped CNTs grown by thermal chemical vapour deposition (TCVD). The maximization of the ID/IG ratio of the Raman spectra was selected as the target value. As a result, the optimal deposition configuration was NH3 = 90 sccm, growth temperature = 825 °C and catalyst pretreatment time of 2 min, the first parameter having the main effect on nitrogen incorporation. A confirmation experiment with these values was performed, confirming the predicted ID/IG ratio of 1.42. Scanning electron microscopy (SEM) characterization revealed a uniform, completely vertically aligned array of multiwalled CNTs which individually exhibit a bamboo-like structure consisting of periodically curved graphitic layers, as depicted by high resolution transmission electron microscopy (HRTEM). The X-ray photoelectron spectroscopy (XPS) results indicated 2.00 at.% nitrogen incorporation in the CNTs, with pyridine-like and graphite-like nitrogen as the predominant species.

  11. Metal recovery enhancement using Taguchi style experimentation

    SciTech Connect

    Wells, P.A.; Andreas, R.E.; Fox, T.M.

    1995-12-31

    In the remelting of scrap, the ultimate goal is to produce clean aluminum while minimizing metal losses. Recently, it has become more difficult to make significant recovery improvements in Reynolds' Reclamation Plants since metal recoveries were nearing the theoretical maximum. In an effort to gain a better understanding of the factors impacting Reynolds' remelting process, a series of experiments using a Taguchi-type design was performed. Specifically, the critical variables and interactions affecting metal recovery of shredded, delacquered Used Beverage Containers (UBC) melted in a side-well reverberatory furnace were examined. This furnace was equipped with plunger-style puddlers and metal circulation. Both delacquering and melting processes operated continuously with downtime only for necessary mechanical repairs. The experimental design consisted of an orthogonal array with eight trials, each using nominal 500,000 lb shred charge volumes. Final recovery results included molten output and metal easily recovered from dross generated during the test.

  12. Optimization of experimental parameters based on the Taguchi robust design for the formation of zinc oxide nanocrystals by solvothermal method

    SciTech Connect

    Yiamsawas, Doungporn; Boonpavanitchakul, Kanittha; Kangwansupamonkon, Wiyong

    2011-05-15

    Research highlights: Taguchi robust design can be applied to study ZnO nanocrystal growth. Spherical-like and rod-like ZnO nanocrystals can be obtained from the solvothermal method. The [NaOH]/[Zn2+] ratio is the most important factor for the aspect ratio of the prepared ZnO. -- Abstract: Zinc oxide (ZnO) nanoparticles and nanorods were successfully synthesized by a solvothermal process. Taguchi robust design was applied to study the factors that result in stronger ZnO nanocrystal growth. The factors studied were the molar concentration ratio of sodium hydroxide and zinc acetate, the amount of polymer templates and the molecular weight of the polymer templates. Transmission electron microscopy and the X-ray diffraction technique were used to analyze the experimental results. The results show that the concentration ratio of sodium hydroxide and zinc acetate has the greatest effect on ZnO nanocrystal growth.

  13. Hydrothermal processing of hydroxyapatite nanoparticles—A Taguchi experimental design approach

    NASA Astrophysics Data System (ADS)

    Sadat-Shojai, Mehdi; Khorasani, Mohammad-Taghi; Jamshidi, Ahmad

    2012-12-01

    Chemical precipitation followed by hydrothermal processing is conventionally employed in the laboratory-scale synthesis of hydroxyapatite (HAp), and extensive information on its processing conditions has therefore been provided in the literature. However, knowledge about the influence of some operating parameters, especially those important for large-scale production, is still insufficient. A specific approach based on a Taguchi orthogonal array was therefore used to evaluate these parameters and to optimize them for a more effective synthesis. This approach allowed us to systematically determine the correlation between the operating factors and the powder quality. Analysis of signal-to-noise ratios revealed the great influence of temperature and pH on the characteristics of the powder. Additionally, the injection rate of one reagent into another was found to be the most important operating factor affecting the stoichiometric ratio of the powders. As-prepared powders were also studied for their in-vitro bioactivity. The SEM images showed the accumulation of a new apatite-like phase on the surface of the powder along with an interesting morphological change after a 45-day incubation of the powder in SBF, indicating a promising bioactivity. Some results also showed the capability of the simple hydrothermal method for the synthesis of a lamellar structure without the help of any templating system.

  14. Assessing the applicability of the Taguchi design method to an interrill erosion study

    NASA Astrophysics Data System (ADS)

    Zhang, F. B.; Wang, Z. L.; Yang, M. Y.

    2015-02-01

    Full-factorial experimental designs have been used in soil erosion studies, but are time, cost and labor intensive, and sometimes they are impossible to conduct due to the increasing number of factors and their levels to consider. The Taguchi design is a simple, economical and efficient statistical tool that only uses a portion of the total possible factorial combinations to obtain the results of a study. Soil erosion studies that use the Taguchi design are scarce and no comparisons with full-factorial designs have been made. In this paper, a series of simulated rainfall experiments using a full-factorial design of five slope lengths (0.4, 0.8, 1.2, 1.6, and 2 m), five slope gradients (18%, 27%, 36%, 48%, and 58%), and five rainfall intensities (48, 62.4, 102, 149, and 170 mm h-1) were conducted. Validation of the applicability of a Taguchi design to interrill erosion experiments was achieved by extracting data from the full dataset according to a theoretical Taguchi design. The statistical parameters for the mean quasi-steady state erosion and runoff rates of each test, the optimum conditions for producing maximum erosion and runoff, and the main effect and percentage contribution of each factor obtained from the full-factorial and Taguchi designs were compared. Both designs generated almost identical results. Using the experimental data from the Taguchi design, it was possible to accurately predict the erosion and runoff rates under the conditions that had been excluded from the Taguchi design. All of the results obtained from analyzing the experimental data for both designs indicated that the Taguchi design could be applied to interrill erosion studies and could replace full-factorial designs. This would save time, labor and costs by generally reducing the number of tests to be conducted. Further work should test the applicability of the Taguchi design to a wider range of conditions.
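
    The comparison described above can be mimicked in a few lines: build the full 5x5x5 factorial in coded levels, extract an orthogonal (Latin-square style) subset, and compare the factor main effects from both. The additive response function below is a made-up stand-in for measured erosion rate, and the subset construction is illustrative rather than the exact array used in the paper.

      import itertools, statistics

      # Hypothetical additive response standing in for a measured erosion rate (illustration only).
      def response(slope_len, slope_grad, rain_int):
          return 0.4 * slope_len + 0.8 * slope_grad + 1.2 * rain_int

      levels = range(5)  # coded levels 0..4 for each of the three factors
      full = [(a, b, c) for a, b, c in itertools.product(levels, repeat=3)]   # 125 runs
      subset = [(a, b, c) for a, b, c in full if c == (a + b) % 5]            # 25-run orthogonal subset

      def main_effect(data, factor_index):
          """Mean response at each level of one factor."""
          means = []
          for lvl in levels:
              vals = [response(*run) for run in data if run[factor_index] == lvl]
              means.append(round(statistics.mean(vals), 3))
          return means

      # For an additive response, the orthogonal subset reproduces the full-factorial main effects.
      for i, name in enumerate(["slope length", "slope gradient", "rainfall intensity"]):
          print(name, "full:", main_effect(full, i), "subset:", main_effect(subset, i))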

  15. Taguchi's experimental design for optimizing the production of novel thermostable polypeptide antibiotic from Geobacillus pallidus SAT4.

    PubMed

    Muhammad, Syed Aun; Ahmed, Safia; Ismail, Tariq; Hameed, Abdul

    2014-01-01

    Polypeptide antimicrobials used against topical infections are reported to be obtained from mesophilic bacterial species. A thermophilic Geobacillus pallidus SAT4 was isolated from the hot climate of the Sindh Desert, Pakistan, and found to be active against Micrococcus luteus ATCC 10240, Staphylococcus aureus ATCC 6538, Bacillus subtilis NCTC 10400 and Pseudomonas aeruginosa ATCC 49189. The current experiment was designed to optimize the production of a novel thermostable polypeptide by applying the Taguchi statistical approach to various conditions including the time of incubation, temperature, pH, aeration rate, nitrogen, and carbon concentrations. The two most important factors affecting antibiotic production were the time of incubation and the nitrogen concentration, and two interactions, time of incubation/pH and time of incubation/nitrogen concentration, were also significant. Activity was evaluated by the well diffusion assay. The antimicrobial produced was stable and active even at 55°C. Ammonium sulphate (AS) was used for antibiotic recovery and it was desalted by dialysis techniques. The resulting protein was evaluated through SDS-PAGE. It was concluded that the novel thermostable protein produced by Geobacillus pallidus SAT4 is stable at higher temperature and its production level can be improved statistically at optimum values of pH, time of incubation and nitrogen concentration, the most important factors for antibiotic production.

  16. Effect of Additives on Green Sand Molding Properties using Design of Experiments and Taguchi's Quality Loss Function - An Experimental Study

    NASA Astrophysics Data System (ADS)

    Desai, Bhagyashree; Mokashi, Pavani; Anand, R. L.; Burli, S. B.; Khandal, S. V.

    2016-09-01

    The experimental study aims to investigate the effect of various additives on green sand molding properties, as a particular combination of additives could yield the desired sand properties. The input parameters (factors) selected were water and powder (fly ash, coconut shell and tamarind) at three levels. Experiments were planned using design of experiments (DOE). On the basis of these plans, experiments were conducted to understand the behavior of sand mould properties such as compression strength, shear strength and permeability number with various additives. From the experimental results it could be concluded that the factors have a significant effect on the sand properties, as the P-value was found to be less than 0.05 for all the cases studied. Optimization based on the quality loss function was also performed. The study revealed that the quality loss associated with the tamarind powder was lower than for the other additives selected for the study. The optimization based on the quality loss function and the parametric analysis using ANOVA suggested that tamarind powder at 8 g per kg of molding sand and a moisture content of 7% yield better properties for obtaining sound castings.

  17. A Comparison of Central Composite Design and Taguchi Method for Optimizing Fenton Process

    PubMed Central

    Asghar, Anam; Abdul Raman, Abdul Aziz; Daud, Wan Mohd Ashri Wan

    2014-01-01

    In the present study, a comparison of central composite design (CCD) and the Taguchi method was established for Fenton oxidation. [Dye]ini, Dye : Fe+2, H2O2 : Fe+2, and pH were identified as control variables while COD and decolorization efficiency were selected as responses. An L9 orthogonal array and face-centered CCD were used for the experimental design. Maximum 99% decolorization and 80% COD removal efficiency were obtained under optimum conditions. R-squared values of 0.97 and 0.95 for CCD and the Taguchi method, respectively, indicate that both models are statistically significant and in good agreement with each other. Furthermore, Prob > F less than 0.0500 and the ANOVA results indicate the good fit of the selected model to the experimental results. Nevertheless, the possibility of ranking input variables in terms of percent contribution to the response value makes the Taguchi method a suitable approach for scrutinizing the operating parameters. For the present case, pH, with percent contributions of 87.62% and 66.2%, was ranked as the most contributing and significant factor. This finding of the Taguchi method was also verified by 3D contour plots of the CCD. Therefore, from this comparative study, it is concluded that the Taguchi method, with 9 experimental runs and simple interaction plots, is a suitable alternative to CCD for several chemical engineering applications. PMID:25258741

  18. Application of Taguchi Philosophy for Optimization of Design Parameters in a Rectangular Enclosure with Triangular Fin Array

    NASA Astrophysics Data System (ADS)

    Dwivedi, Ankur; Das, Debasish

    2015-10-01

    In this study, an optimum parametric design yielding maximum heat transfer has been suggested using the Taguchi philosophy. This statistical approach has been applied to the results of an experimental parametric study conducted to investigate the influence of fin height (L), fin spacing (S) and Rayleigh number (Ra) on convection heat transfer from a triangular fin array within a vertically oriented rectangular enclosure. Taguchi's L9 (3**3) orthogonal array design has been adopted for three different levels of the influencing parameters. The goal of this study is to reach maximum heat transfer (i.e. Nusselt number). The dependence of optimum fin spacing on fin height has also been reported. The results proved the suitability of the Taguchi design approach in this kind of study, and the predictions of the method are in very good agreement with the experimental results. This paper also compares the application of the classical design approach with Taguchi's methodology used for determination of the optimum parametric design.

  19. A Taguchi study of the aeroelastic tailoring design process

    NASA Technical Reports Server (NTRS)

    Bohlmann, Jonathan D.; Scott, Robert C.

    1991-01-01

    A Taguchi study was performed to determine the important players in the aeroelastic tailoring design process and to find the best composition of the optimization's objective function. The Wing Aeroelastic Synthesis Procedure (TSO) was used to ascertain the effects that factors such as composite laminate constraints, roll effectiveness constraints, and built-in wing twist and camber have on the optimum, aeroelastically tailored wing skin design. The results show the Taguchi method to be a viable engineering tool for computational inquiries, and provide some valuable lessons about the practice of aeroelastic tailoring.

  20. Removal of phenol from aqueous solution using carbonized Terminalia chebula-activated carbon: process parametric optimization using conventional method and Taguchi's experimental design, adsorption kinetic, equilibrium and thermodynamic study

    NASA Astrophysics Data System (ADS)

    Khare, Prateek; Kumar, Arvind

    2012-12-01

    In the present paper, phenol removal from wastewater was investigated using an agri-based adsorbent: Terminalia chebula-activated carbon (TCAC) produced by carbonization of Terminalia chebula (TC) in an air-controlled atmosphere at 600 °C for 4 h. The surface area of TCAC was measured as 364 m2/g using the BET method. The surface characteristics of TCAC were analyzed based on the value of the point of zero charge. The effect of parameters such as TCAC dosage, pH, initial concentration of phenol, contact time and temperature on the sorption of phenol by TCAC was investigated using the conventional method and Taguchi experimental design. The total adsorption capacity for phenol was obtained as 36.77 mg/g using the Langmuir model at a temperature of 30 °C and pH = 5.5. The maximum removal of phenol (294.86 mg/g) was obtained using Taguchi's method. The equilibrium study of phenol on TCAC showed that the experimental data fitted the R-P model well. The results also showed that the kinetic data followed the pseudo-first-order model more closely. The results of the thermodynamic study showed that the adsorption of phenol on TCAC was spontaneous and exothermic in nature.
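
    The 36.77 mg/g capacity quoted above comes from fitting the Langmuir isotherm q_e = q_m*K_L*C_e/(1 + K_L*C_e) to equilibrium data. The sketch below shows such a fit with scipy; the equilibrium data points are hypothetical placeholders, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(ce, qm, kl):
          # Langmuir isotherm: uptake as a function of equilibrium concentration.
          return qm * kl * ce / (1.0 + kl * ce)

      ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # equilibrium concentration, mg/L (hypothetical)
      qe = np.array([8.2, 14.1, 22.5, 28.9, 33.0, 35.2])     # uptake, mg/g (hypothetical)

      (qm_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[30.0, 0.05])
      print(f"q_max = {qm_fit:.2f} mg/g, K_L = {kl_fit:.4f} L/mg")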

  1. Using Taguchi robust design method to develop an optimized synthesis procedure of nanocrystalline cancrinite

    NASA Astrophysics Data System (ADS)

    Azizi, Seyed Naser; Asemi, Neda; Samadi-Maybodi, Abdolrouf

    2012-09-01

    In this study, perlite was used as a low-cost source of Si and Al for the synthesis of nanocrystalline cancrinite zeolite. The synthesis of cancrinite zeolite from perlite by alkaline hydrothermal treatment under saturated steam pressure was investigated. A statistical Taguchi design of experiments was employed to evaluate the effects of process variables such as type of aging, aging time and hydrothermal crystallization time on the crystallinity of the synthesized zeolite. The optimum conditions for maximum crystallinity of nanocrystalline cancrinite, obtained from statistical analysis of the experimental results using the Taguchi design, were microwave-assisted aging, 60 min aging time and 6 h hydrothermal crystallization time. The synthesized samples were characterized by XRD, FT-IR and FE-SEM techniques. The results showed that microwave-assisted aging can shorten the crystallization time and reduce the crystal size to form nanocrystalline cancrinite zeolite.

  2. Taguchi design-based optimization of sandwich immunoassay microarrays for detecting breast cancer biomarkers.

    PubMed

    Luo, Wen; Pla-Roca, Mateu; Juncker, David

    2011-07-15

    Taguchi design, a statistics-based design of experiments method, is widely used for optimization of products and complex production processes in many different industries. However, its use for antibody microarray optimization has remained underappreciated. Here, we provide a brief explanation of Taguchi design and present its use for the optimization of antibody sandwich immunoassay microarrays with five breast cancer biomarkers: CA15-3, CEA, HER2, MMP9, and uPA. Two successive optimization rounds with 16 experimental trials each were performed. We tested three factors (capture antibody, detection antibody, and analyte) at four different levels (concentrations) in the first round and seven factors (including buffer solution, streptavidin-Cy5 dye conjugate concentration, and incubation times for five assay steps) with two levels each in the second round; five two-factor interactions between selected pairs of factors were also tested. The optimal levels for each factor as measured by net assay signal increase were determined graphically, and the significance of each factor was analyzed statistically. The concentration of capture antibody, streptavidin-Cy5, and buffer composition were identified as the most significant factors for all assays; analyte incubation time and detection antibody concentration were significant only for MMP9 and CA15-3, respectively. Interactions between pairs of factors were identified, but were less influential than single factor effects. After Taguchi optimization, the assay sensitivity was improved between 7 and 68 times, depending on the analyte, reaching 640 fg/mL for uPA, and the maximal signal intensity increased between 1.8 and 3 times. These results suggest that Taguchi design is an efficient and useful approach for the rapid optimization of antibody microarrays.
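
    The graphical determination of optimal levels mentioned above amounts to averaging the response at each level of each factor (a main-effects analysis). The sketch below shows that calculation; the factor names, coded levels and signal values are illustrative only and abbreviate the full orthogonal array.

      from collections import defaultdict

      # Hypothetical round-1 layout: each trial records the coded level (0-3) chosen for
      # three factors and the net assay signal observed. Values are placeholders only.
      trials = [
          ({"capture_ab": 0, "detection_ab": 1, "analyte": 2}, 310.0),
          ({"capture_ab": 1, "detection_ab": 3, "analyte": 0}, 545.0),
          ({"capture_ab": 2, "detection_ab": 0, "analyte": 3}, 720.0),
          ({"capture_ab": 3, "detection_ab": 2, "analyte": 1}, 610.0),
          # ... remaining trials of the orthogonal array would follow
      ]

      # Average signal per factor level; the level with the highest mean is the one a
      # main-effects plot would point to as optimal.
      sums, counts = defaultdict(float), defaultdict(int)
      for settings, signal in trials:
          for factor, level in settings.items():
              sums[(factor, level)] += signal
              counts[(factor, level)] += 1

      for key in sorted(sums):
          factor, level = key
          print(f"{factor} level {level}: mean signal {sums[key] / counts[key]:.1f}")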

  3. Ex situ slurry phase bioremediation of chrysene contaminated soil with the function of metabolic function: process evaluation by data enveloping analysis (DEA) and Taguchi design of experimental methodology (DOE).

    PubMed

    Venkata Mohan, S; Purushotham Reddy, B; Sarma, P N

    2009-01-01

    Bioremediation of chrysene in a soil matrix was evaluated in a soil slurry phase bioreactor in conjunction with metabolic functions (aerobic, anoxic and anaerobic), microenvironment (single and mixed) conditions and the nature of the mixed consortia (native/resident mixed microflora and bioaugmented inoculum). Twelve experiments were conducted independently in an agitated batch reactor keeping all other operating conditions constant (substrate loading rate--0.084 g chrysene/kg soil-day; soil loading rate--10 kg soil/m(3)-day (3:25 soil water ratio); operating temperature--35+/-2 degrees C). The data envelopment analysis (DEA) procedure was employed to analyze the performance of the experimental variations in terms of chrysene degradation and pH. The efficacy of anoxic metabolism over the corresponding aerobic and anaerobic metabolic functions was documented. The aerobic metabolic function showed effective degradation capability under a mixed microenvironment after augmentation with anaerobic inoculum. The anaerobic metabolic function showed the lowest degradation potential. Application of bioaugmentation showed a positive influence on the chrysene degradation rate. Design of experiments (DOE) methodology by the Taguchi approach was applied to evaluate the effect of four selected factors (native soil microflora, microenvironment, metabolic function and bioaugmentation) on the chrysene degradation process. The optimized factors derived from the analysis indicated the requirement of native soil microflora under anoxic metabolic function using a mixed microenvironment after augmenting with anaerobic inoculum to achieve effective chrysene degradation.

  4. Fabrication and optimization of camptothecin loaded Eudragit S 100 nanoparticles by Taguchi L4 orthogonal array design

    PubMed Central

    Mahalingam, Manikandan; Krishnamoorthy, Kannan

    2015-01-01

    Introduction: The objective of this investigation was to design and optimize the experimental conditions for the fabrication of camptothecin (CPT) loaded Eudragit S 100 nanoparticles, and to understand the effect of various process parameters on the average particle size, particle size uniformity and surface area of the prepared polymeric nanoparticles using a Taguchi design. Materials and Methods: CPT loaded Eudragit S 100 nanoparticles were prepared by the nanoprecipitation method and characterized using a particle size analyzer. A Taguchi orthogonal array design was implemented to study the influence of seven independent variables on three dependent variables. Eight experimental trials involving the seven independent variables at higher and lower levels were generated by Design Expert. Results: The factorial design results showed that (a) except for β-cyclodextrin concentration, none of the parameters significantly influenced the average particle size (R1); (b) except for sonication duration and aqueous phase volume, all other process parameters significantly influenced the particle size uniformity; (c) none of the process parameters significantly influenced the surface area. Conclusion: The R1, particle size uniformity and surface area of the prepared drug-loaded polymeric nanoparticles were found to be 120 nm, 0.237 and 55.7 m2/g, respectively, and the results correlated well with the data generated by the Taguchi design method. PMID:26258056

  5. Experimental Validation for Hot Stamping Process by Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Fawzi Zamri, Mohd; Lim, Syh Kai; Razlan Yusoff, Ahmad

    2016-02-01

    The demand for reduced gas emissions, energy savings and safer vehicles has driven the development of Ultra High Strength Steel (UHSS) material. To strengthen a UHSS material such as boron steel, it must undergo a hot stamping process with heating at a certain temperature and time. In this paper, the Taguchi method is applied to determine the appropriate parameters of thickness, heating temperature and heating time to achieve optimum strength of boron steel. The experiment is conducted using a flat square hot stamping tool with a tensile dog bone as the blank product. The values of tensile strength and hardness are then measured as responses. The results showed that lower thickness, higher heating temperature and longer heating time give higher strength and hardness for the final product. In conclusion, boron steel blanks are able to achieve up to 1200 MPa tensile strength and 650 HV of hardness.

  6. Formulation Development and Evaluation of Hybrid Nanocarrier for Cancer Therapy: Taguchi Orthogonal Array Based Design

    PubMed Central

    Tekade, Rakesh K.; Chougule, Mahavir B.

    2013-01-01

    Taguchi orthogonal array design is a statistical approach that helps to overcome limitations associated with time consuming full factorial experimental design. In this study, the Taguchi orthogonal array design was applied to establish the optimum conditions for bovine serum albumin (BSA) nanocarrier (ANC) preparation. Taguchi method with L9 type of robust orthogonal array design was adopted to optimize the experimental conditions. Three key dependent factors namely, BSA concentration (% w/v), volume of BSA solution to total ethanol ratio (v : v), and concentration of diluted ethanolic aqueous solution (% v/v), were studied at three levels 3%, 4%, and 5% w/v; 1 : 0.75, 1 : 0.90, and 1 : 1.05 v/v; 40%, 70%, and 100% v/v, respectively. The ethanolic aqueous solution was used to impart less harsh condition for desolvation and attain controlled nanoparticle formation. The interaction plot studies inferred the ethanolic aqueous solution concentration to be the most influential parameter that affects the particle size of nanoformulation. This method (BSA, 4% w/v; volume of BSA solution to total ethanol ratio, 1 : 0.90 v/v; concentration of diluted ethanolic solution, 70% v/v) was able to successfully develop Gemcitabine (G) loaded modified albumin nanocarrier (M-ANC-G) of size 25.07 ± 2.81 nm (ζ = −23.03 ± 1.015 mV) as against to 78.01 ± 4.99 nm (ζ = −24.88 ± 1.37 mV) using conventional method albumin nanocarrier (C-ANC-G). Hybrid nanocarriers were generated by chitosan layering (solvent gelation technique) of respective ANC to form C-HNC-G and M-HNC-G of sizes 125.29 ± 5.62 nm (ζ = 12.01 ± 0.51 mV) and 46.28 ± 2.21 nm (ζ = 15.05 ± 0.39 mV), respectively. Zeta potential, entrapment, in vitro release, and pH-based stability studies were investigated and influence of formulation parameters are discussed. Cell-line-based cytotoxicity assay (A549 and H460 cells) and cell internalization assay (H460 cell line) were

  7. Taguchi statistical design and analysis of cleaning methods for spacecraft materials

    NASA Technical Reports Server (NTRS)

    Lin, Y.; Chung, S.; Kazarians, G. A.; Blosiu, J. O.; Beaudet, R. A.; Quigley, M. S.; Kern, R. G.

    2003-01-01

    In this study, we have extensively tested various cleaning protocols. The variant parameters included the type and concentration of solvent, type of wipe, pretreatment conditions, and various rinsing systems. Taguchi statistical method was used to design and evaluate various cleaning conditions on ten common spacecraft materials.

  8. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method.

    PubMed

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-06-13

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using Taguchi orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrates release from green macroalgae from the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest conversion yields: 54.134-57.500 g ethanol kg(-1) Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis.
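
    The analysis pattern described above (a Taguchi design matrix analysed with Partial Least Squares regression) can be sketched as follows. The L9-style factor settings and sugar-release responses are hypothetical placeholders, not the paper's data, and scikit-learn's PLSRegression stands in for whatever PLS implementation the authors used.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # Columns: temperature (C), time (min), %acid, %solids -- one hypothetical row per Taguchi trial.
      X = np.array([
          [100, 15, 0.5,  5],
          [100, 30, 1.0, 10],
          [100, 45, 2.0, 15],
          [121, 15, 1.0, 15],
          [121, 30, 2.0,  5],
          [121, 45, 0.5, 10],
          [134, 15, 2.0, 10],
          [134, 30, 0.5, 15],
          [134, 45, 1.0,  5],
      ])
      y = np.array([12.0, 19.5, 31.0, 22.0, 35.5, 16.0, 38.0, 18.5, 24.0])  # g sugar / kg DW (hypothetical)

      pls = PLSRegression(n_components=2)
      pls.fit(X, y)
      print("R^2 on the training design:", round(pls.score(X, y), 3))
      print("standardized weights (first component):", np.round(pls.x_weights_[:, 0], 3))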

  9. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method

    PubMed Central

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-01-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using Taguchi orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrates release from green macroalgae from the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest conversion yields: 54.134–57.500 g ethanol kg−1 Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis. PMID:27291594

  10. Thermochemical hydrolysis of macroalgae Ulva for biorefinery: Taguchi robust design method.

    PubMed

    Jiang, Rui; Linzon, Yoav; Vitkin, Edward; Yakhini, Zohar; Chudnovsky, Alexandra; Golberg, Alexander

    2016-01-01

    Understanding the impact of all process parameters on the efficiency of biomass hydrolysis and on the final yield of products is critical to biorefinery design. Using Taguchi orthogonal arrays experimental design and Partial Least Square Regression, we investigated the impact of change and the comparative significance of thermochemical process temperature, treatment time, %Acid and %Solid load on carbohydrates release from green macroalgae from the Ulva genus, a promising biorefinery feedstock. The average density of the hydrolysate was determined using a new microelectromechanical optical resonator mass sensor. In addition, using Flux Balance Analysis techniques, we compared the potential fermentation yields of these hydrolysate products using metabolic models of Escherichia coli, Saccharomyces cerevisiae wild type, Saccharomyces cerevisiae RN1016 with xylose isomerase and Clostridium acetobutylicum. We found that %Acid plays the most significant role and treatment time the least significant role in affecting the monosaccharides released from Ulva biomass. We also found that within the tested range of parameters, hydrolysis at 121 °C for 30 min with 2% Acid and 15% Solids could lead to the highest conversion yields: 54.134-57.500 g ethanol kg(-1) Ulva dry weight by S. cerevisiae RN1016 with xylose isomerase. Our results support optimized marine algae utilization process design and will enable smart energy harvesting by thermochemical hydrolysis. PMID:27291594

  11. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations to the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. For SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high quality space systems at low cost. In recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. Taguchi methods have been used successfully in Japan and the United States to design reliable, high quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, and to describe the current state of applications and their role in identifying cost-sensitive design parameters.

  12. Optimal Takagi-Sugeno Fuzzy Gain-Scheduler Design Using Taguchi-MHGA Method

    NASA Astrophysics Data System (ADS)

    Hsieh, Chen-Huei; Chou, Jyh-Horng; Wu, Ying-Jeng

    The fuzzy gain scheduling (FGS) control scheme based on the TS (Takagi-Sugeno) fuzzy model is an effective approach to control nonlinear systems whose dynamics change with operating conditions. However, when the TS-model-based FGS control scheme is applied to the stabilization/tracking control problem, a considerable amount of approximation error between the nonlinear system and the fuzzy approximation system noticeably affects the control performance. Besides, when the LQR (linear quadratic regulator) method is employed to design the local linear controllers, it is necessary to adjust the weighting matrices in the LQR performance index to obtain the minimum performance index. Hence, in order to reduce the aforementioned approximation errors and enhance the dynamic performance of the TS-model-based FGS control scheme, a systematic and optimal reasoning method, named the Taguchi-MHGA (Taguchi-modified-hierarchical-genetic-algorithm) approach, is proposed in this paper to search for the optimal fuzzy centers (the linearization points) of the fuzzy regions, the optimal set of membership functions, and the weighting matrices of the LQR method. Furthermore, to ensure that the closed-loop FGS system at any arbitrary operating point is asymptotically stable, two new sufficient conditions are presented. Finally, computer simulations are performed to demonstrate the effectiveness of the TS-model-based FGS control scheme designed by the Taguchi-MHGA method. It is shown that satisfactory performance is achieved by the resulting optimal TS-model-based FGS control scheme.
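
    The local LQR design step mentioned above, whose Q and R weighting matrices the Taguchi-MHGA procedure tunes, can be sketched for a single fuzzy region as follows. The second-order plant model and the weights are hypothetical illustrations, not values from the paper.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],
                    [-2.0, -0.5]])          # local linearized dynamics at one fuzzy center (hypothetical)
      B = np.array([[0.0],
                    [1.0]])
      Q = np.diag([10.0, 1.0])              # state weighting (a candidate for Taguchi/GA tuning)
      R = np.array([[0.1]])                 # control-effort weighting

      P = solve_continuous_are(A, B, Q, R)  # solve the algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)       # optimal state-feedback gain, u = -K x
      print("LQR gain for this fuzzy region:", np.round(K, 3))

      # A gain-scheduled controller blends such local gains using the TS membership degrees.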

  13. Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods

    NASA Astrophysics Data System (ADS)

    Olds, John R.; Walberg, Gerald D.

    1993-02-01

    Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.

  14. Application of Taguchi Design and Response Surface Methodology for Improving Conversion of Isoeugenol into Vanillin by Resting Cells of Psychrobacter sp. CSW4.

    PubMed

    Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir

    2013-01-01

    For all industrial processes, modelling, optimisation and control are the keys to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed for screening the significant variables in the bioconversion medium. Subsequently, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (isoeugenol, NaCl, biomass and tween 80 initial concentrations), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. From the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized values of the factors were validated in a triplicate shaking flask study, and an average of 2.19 g/L vanillin, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first to report the application of Taguchi design and Response Surface Methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions.

  15. Application of Taguchi Design and Response Surface Methodology for Improving Conversion of Isoeugenol into Vanillin by Resting Cells of Psychrobacter sp. CSW4

    PubMed Central

    Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir

    2013-01-01

    For all industrial processes, modelling, optimisation and control are the keys to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed for screening the significant variables in the bioconversion medium. Subsequently, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (isoeugenol, NaCl, biomass and tween 80 initial concentrations), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. From the regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized values of the factors were validated in a triplicate shaking flask study, and an average of 2.19 g/L vanillin, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first to report the application of Taguchi design and Response Surface Methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions. PMID:24250648

  16. Applying Taguchi Methods To Brazing Of Rocket-Nozzle Tubes

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Bellows, William J.; Deily, David C.; Brennan, Alex; Somerville, John G.

    1995-01-01

    Report describes an experimental study in which Taguchi Methods were applied with a view toward improving the brazing of coolant tubes in the nozzle of the main engine of the space shuttle. Dr. Taguchi's parameter design technique was used to define proposed modifications of the brazing process, reducing manufacturing time and cost by reducing the number of furnace brazing cycles and the number of tube-gap inspections needed to achieve the desired small gaps between tubes.

  17. Workspace design for crane cabins applying a combined traditional approach and the Taguchi method for design of experiments.

    PubMed

    Spasojević Brkić, Vesna K; Veljković, Zorica A; Golubović, Tamara; Brkić, Aleksandar Dj; Kosić Šotić, Ivana

    2016-01-01

    Procedures in the development process of crane cabins are arbitrary and subjective. Since approximately 42% of incidents in the construction industry are linked to crane cabins, there is a need to collect fresh anthropometric data and provide additional recommendations for design. In this paper, dimensioning of the crane cabin interior space was carried out using a sample of 64 crane operators' anthropometric measurements, in the Republic of Serbia, characterizing the workspace with 10 parameters derived from nine anthropometric measurements of each crane operator. This paper applies experiments run via full factorial designs using a combined traditional and Taguchi approach. The experiments indicated which design parameters are influenced by which anthropometric measurements and to what degree. The results are expected to be of use to crane cabin designers and should assist them in designing a cabin that may lead to less strenuous sitting postures and less fatigue for operators, thus improving safety and accident prevention.

  18. Taguchi design for optimization and development of antibacterial drug-loaded PLGA nanoparticles.

    PubMed

    Sonam; Chaudhary, Hema; Kumar, Vikash

    2014-03-01

    The aim of this research was to develop Cefixime loaded polylactide-co-glycolide (PLGA) nanoparticles using a modified precipitation method. TEM analysis indicated the formation of well-formed, smooth, spherical nanoparticles with no aggregates, whereas XRD suggested dispersion of the drug in the PLGA carrier system in amorphous form. The polymer and stabilizer concentrations and the organic-to-aqueous ratio were found to be significant factors for the nanoparticles and their optimization using the Taguchi design (L9). The design formulations showed entrapment efficiency (EE) of 68.31 ± 1.74%, particle size of 157.7-159.8 nm and polydispersity index (PDI) of 0.126-0.149, indicating small and stable nanoparticles with good homogeneity and encapsulation. Drug release and permeation studies of the design-optimized formulation demonstrated four times more sustained release behavior and 1.74 times better permeation than the free drug. The results of the microbiological assay also suggested that the optimized formulation has significant antibacterial activity against intracellular multidrug-resistant (MDR) Salmonella typhi.

  19. Mathematical modeling and analysis of EDM process parameters based on Taguchi design of experiments

    NASA Astrophysics Data System (ADS)

    Laxman, J.; Raj, K. Guru

    2015-12-01

    Electro Discharge Machining is a process used for machining very hard metals and deep and complex shapes by metal erosion in all types of electro-conductive materials. The metal is removed through the action of an electric discharge of short duration and high current density between the tool and the work piece. The eroded metal on the surfaces of both the work piece and the tool is flushed away by the dielectric fluid. The objective of this work is to develop a mathematical model for the Electro Discharge Machining process which provides the necessary equations to predict the metal removal rate, electrode wear rate and surface roughness. Regression analysis is used to investigate the relationship between the various process parameters. The input parameters are peak current, pulse-on time, pulse-off time and tool lift time, and the metal removal rate, electrode wear rate and surface roughness are the responses. Experiments are conducted on a titanium superalloy based on the Taguchi design of experiments, i.e. an L27 orthogonal array.
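
    As a rough illustration of the regression step described above, the sketch below fits a linear model relating the four input parameters to metal removal rate by least squares. For brevity it uses a 9-run layout rather than the paper's L27, and all numbers are hypothetical placeholders.

      import numpy as np

      # Columns: peak current (A), pulse-on time (us), pulse-off time (us), tool lift time (s).
      X = np.array([
          [ 8,  50, 20, 0.2],
          [ 8, 100, 40, 0.4],
          [ 8, 150, 60, 0.6],
          [12,  50, 40, 0.6],
          [12, 100, 60, 0.2],
          [12, 150, 20, 0.4],
          [16,  50, 60, 0.4],
          [16, 100, 20, 0.6],
          [16, 150, 40, 0.2],
      ], dtype=float)
      mrr = np.array([2.1, 3.0, 3.6, 3.4, 4.8, 5.5, 5.0, 6.9, 7.8])  # mm^3/min, illustrative

      # Least-squares fit of MRR = b0 + b1*I + b2*Ton + b3*Toff + b4*Tlift
      A = np.column_stack([np.ones(len(X)), X])
      coeffs, *_ = np.linalg.lstsq(A, mrr, rcond=None)
      print("intercept and coefficients:", np.round(coeffs, 4))
      print("fitted MRR for the first trial:", round(A[0] @ coeffs, 2))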

  20. Microencapsulation of (deoxythymidine)₂₀-DOTAP complexes in stealth liposomes optimized by Taguchi design.

    PubMed

    Tavakoli, Shirin; Tamaddon, Ali Mohammad; Golkar, Nasim; Samani, Soliman Mohammadi

    2015-03-01

    Stealth liposomes encapsulating oligonucleotides are considered promising non-viral gene delivery carriers; however, general preparation procedures are not capable of encapsulating nucleic acids (NAs) efficiently. In this study, lyophobic complexes of a deoxythymidine20 oligonucleotide (dT20) and DOTAP were used instead of free dT20 for the nano-encapsulation process by the reverse phase evaporation method. Given the various factors that can potentially affect the liposome characteristics, a Taguchi design was applied to analyze the simultaneous effects of factors comprising PEG-lipid (%), dT20/total lipid molar ratio, cholesterol (Chol%) and organic-to-aqueous phase ratio (o/w) at three levels. The response variables, hydrodynamic diameter, loading efficiency (LE%) and loading capacity (LC%), were studied by dynamic light scattering and ethidium bromide exclusion assay, respectively. The optimum condition, defined by minimum particle size as well as high LE% and LC%, was obtained at 5% PEG-lipid, dT20/total lipid of 7, 20% Chol and o/w of 3, with an average size of 84 nm, LE% = 83.4% and LC% = 11.6%. Moreover, stability assessments in the presence of heparin sulfate revealed noticeable resistance to premature release of the NA, unlike DOTAP/dT20 lipoplexes. Transmission electron microscopy confirmed the formation of discrete and circular vesicles encapsulating dT20.

  1. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
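
    As a hedged illustration of one of the design methods listed above, the sketch below lays out a standard Taguchi L9 (3^4) orthogonal array and maps its coded levels to hypothetical bioprocess factors; the factor names and level values are assumptions, not taken from the review.

        # Minimal sketch: a Taguchi L9 (3^4) orthogonal array with its coded levels
        # mapped to illustrative bioprocess factors (names and values assumed).
        import numpy as np

        L9 = np.array([
            [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
            [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
            [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
        ])

        factors = {                      # three assumed levels per factor
            "temperature_C": [30, 37, 45],
            "pH":            [5.5, 6.5, 7.5],
            "substrate_g_L": [10, 20, 30],
            "agitation_rpm": [100, 150, 200],
        }

        for run, row in enumerate(L9, start=1):
            setting = {name: levels[code - 1]
                       for (name, levels), code in zip(factors.items(), row)}
            print(f"run {run}: {setting}")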

  2. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    NASA Astrophysics Data System (ADS)

    Nath, Nayani Kishore

    2016-06-01

    The throat back-up liner is used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat back-up liner is made with E-glass phenolic prepregs by the tape winding process. The objective of this work is to demonstrate the optimization of the tape winding process parameters to achieve better insulative resistance using Taguchi's robust design methodology. In this method, four control factors, machine speed, roller pressure, tape tension and tape temperature, were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat back-up liners. The quality characteristic identified was back-wall temperature. Experiments were carried out using an L9 (3^4) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed and successfully used to achieve the minimum back-wall temperature of the throat back-up liners. The enhancement in performance of the throat back-up liners was observed by carrying out oxy-acetylene tests. The influence of back-wall temperature on the performance of the throat back-up liners was verified by a ground firing test.
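
    For readers unfamiliar with the criterion named above, the following sketch computes the smaller-the-better signal-to-noise ratio; the replicate temperatures are invented for illustration and are not the study's measurements.

        # Minimal sketch: "smaller-the-better" signal-to-noise ratio used to rank
        # runs when the quality characteristic (e.g., back-wall temperature)
        # should be minimized. Replicate values are illustrative only.
        import numpy as np

        def sn_smaller_is_better(y):
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(y ** 2))

        replicates_per_run = [
            [182.0, 185.5, 180.2],    # run 1: repeated temperature readings
            [171.4, 173.0, 169.8],    # run 2
            [190.1, 192.6, 188.9],    # run 3
        ]
        for i, reps in enumerate(replicates_per_run, start=1):
            print(f"run {i}: S/N = {sn_smaller_is_better(reps):.2f} dB")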

  3. An Exploratory Exercise in Taguchi Analysis of Design Parameters: Application to a Shuttle-to-space Station Automated Approach Control System

    NASA Technical Reports Server (NTRS)

    Deal, Don E.

    1991-01-01

    The chief goals of the summer project have been twofold - first, for my host group and myself to learn as much of the working details of Taguchi analysis as possible in the time allotted, and, secondly, to apply the methodology to a design problem with the intention of establishing a preliminary set of near-optimal (in the sense of producing a desired response) design parameter values from among a large number of candidate factor combinations. The selected problem is concerned with determining design factor settings for an automated approach program which is to have the capability of guiding the Shuttle into the docking port of the Space Station under controlled conditions so as to meet and/or optimize certain target criteria. The candidate design parameters under study were glide path (i.e., approach) angle, path intercept and approach gains, and minimum impulse bit mode (a parameter which defines how Shuttle jets shall be fired). Several performance criteria were of concern: terminal relative velocity at the instant the two spacecraft are mated; docking offset; number of Shuttle jet firings in certain specified directions (of interest due to possible plume impingement on the Station's solar arrays); and total RCS (a measure of the energy expended in performing the approach/docking maneuver). In the material discussed here, we have focused on a single performance criterion, total RCS. An analysis of the possibility of employing a multiobjective function composed of a weighted sum of the various individual criteria has been undertaken, but is, at this writing, incomplete. Results from the Taguchi statistical analysis indicate that only three of the original four posited factors are significant in affecting the RCS response. A comparison of model simulation output (via Monte Carlo) with predictions based on estimated factor effects inferred through the Taguchi experiment array data suggested acceptable or close agreement between the two except at the predicted optimum

  4. Wear performance optimization of stir cast Al-TiB2 metal matrix composites using Taguchi design of experiments

    NASA Astrophysics Data System (ADS)

    Poria, Suswagata; Sahoo, Prasanta; Sutradhar, Goutam

    2016-09-01

    The present study outlines the use of Taguchi parameter design to minimize the wear of Al-TiB2 metal matrix composites by optimizing tribological process parameters. Different weight percentages of micro-TiB2 powders with average sizes of 5-40 micron are incorporated into a molten LM4 aluminium matrix by the stir casting method. The wear performance of the Al-TiB2 composites is evaluated in a block-on-roller type Multitribo tester at room temperature. Three parameters, viz. weight percentage of TiB2, load and speed, are considered with three levels each in the experiments. An L27 orthogonal array is used to carry out the experiments, accommodating all the factors and their levels, including their interaction effects. The optimal combination of parameters for wear performance is obtained by Taguchi analysis. Analysis of variance (ANOVA) is used to find the percentage contribution of each parameter, and of their interactions, to wear performance. The weight percentage of TiB2 is found to be the most effective parameter in controlling the wear behaviour of the Al-TiB2 metal matrix composite.
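
    The percentage-contribution figure that ANOVA reports in studies like this can be sketched as below; the design matrix and wear values are hypothetical, and only main effects are included (the study's L27 analysis also covers interactions).

        # Minimal sketch: main-effect percentage contributions from an orthogonal-
        # array experiment. Coded levels and wear responses are invented numbers.
        import numpy as np

        levels = np.array([        # coded levels for [wt% TiB2, load, speed] per run
            [1, 1, 1], [1, 2, 2], [1, 3, 3],
            [2, 1, 2], [2, 2, 3], [2, 3, 1],
            [3, 1, 3], [3, 2, 1], [3, 3, 2],
        ])
        wear = np.array([42., 55., 63., 38., 50., 47., 30., 41., 44.])

        grand_mean = wear.mean()
        ss_total = np.sum((wear - grand_mean) ** 2)
        for j, name in enumerate(["wt% TiB2", "load", "speed"]):
            ss_factor = sum(
                np.sum(levels[:, j] == lv)
                * (wear[levels[:, j] == lv].mean() - grand_mean) ** 2
                for lv in (1, 2, 3)
            )
            print(f"{name}: contribution = {100 * ss_factor / ss_total:.1f} %")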

  5. Application of Taguchi method in optimization of cervical ring cage.

    PubMed

    Yang, Kai; Teo, Ee-Chon; Fuss, Franz Konstantin

    2007-01-01

    The Taguchi method is a statistical approach that overcomes the limitations of factorial and fractional factorial experiments by simplifying and standardizing the fractional factorial design. The objective of the current study is to illustrate the procedures and strengths of the Taguchi method in biomechanical analysis using a case study of cervical ring cage optimization. A three-dimensional finite element (FE) model of C5-C6 with a generic cervical ring cage inserted was built. The Taguchi method was applied to optimize the cervical ring cage in material properties and dimensions for producing the lowest stress on the endplate, to reduce the risk of cage subsidence, following these steps: (1) establishment of the objective function; (2) determination of controllable factors and their levels; (3) identification of uncontrollable factors and test conditions; (4) design of the Taguchi crossed array layout; (5) execution of experiments according to the trial conditions; (6) analysis of results; (7) determination of the optimal run; (8) confirmation of the optimum run. The results showed that a cage with larger width, depth and wall thickness produces lower von Mises stress under various conditions. The contribution of the implant material is found to be trivial. The current case study illustrates that the strengths of the Taguchi method lie in (1) consistency in experimental design and analysis; (2) reduction of the time and cost of experiments; (3) robustness of performance against noise factors. The Taguchi method has great potential for application in the biomechanical field when the factors of interest are at discrete levels. PMID:17822708

  6. Optimal design of loudspeaker arrays for robust cross-talk cancellation using the Taguchi method and the genetic algorithm.

    PubMed

    Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung

    2005-05-01

    An optimal design technique for loudspeaker arrays for cross-talk cancellation with application in three-dimensional audio is presented. An array focusing scheme is presented on the basis of the inverse propagation that relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore the cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and optimal control point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.
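
    A minimal sketch of a Tikhonov-regularized inverse filter of the type mentioned above, computed independently at each frequency bin; the plant matrix is random placeholder data and the regularization parameter is an assumed value, not the paper's design.

        # Minimal sketch: Tikhonov-regularized inverse filters for cross-talk
        # cancellation, H_k = (G^H G + beta I)^-1 G^H per frequency bin.
        import numpy as np

        rng = np.random.default_rng(0)
        n_points, n_speakers, n_bins = 2, 4, 8
        beta = 1e-2                                   # assumed regularization value

        H = np.empty((n_bins, n_speakers, n_points), dtype=complex)
        for k in range(n_bins):
            # placeholder plant matrix (control points x loudspeakers)
            G = rng.standard_normal((n_points, n_speakers)) \
                + 1j * rng.standard_normal((n_points, n_speakers))
            GH = G.conj().T
            H[k] = np.linalg.solve(GH @ G + beta * np.eye(n_speakers), GH)

        print("filter matrix shape per bin:", H[0].shape)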

  7. Estudio numerico y experimental del proceso de soldeo MIG sobre la aleacion 6063--T5 utilizando el metodo de Taguchi

    NASA Astrophysics Data System (ADS)

    Meseguer Valdenebro, Jose Luis

    Electric arc welding processes represent one of the most used techniques in the manufacturing of mechanical components in modern industry. The electric arc welding processes have been adapted to current needs, becoming a flexible and versatile way to manufacture. Numerical results for the welding process are validated experimentally. The numerical methods most commonly used today are three: the finite difference method, the finite element method and the finite volume method. The most widely used numerical method for the modeling of welded joints is the finite element method, because it is well adapted to the geometric and boundary conditions and because there is a variety of commercial programs which use the finite element method as a calculation basis. The content of this thesis shows an experimental study of a welded joint produced by means of the MIG welding process on aluminum alloy 6063-T5. The numerical process is validated experimentally by applying the finite element method using the calculation program ANSYS. The experimental results in this work are the cooling curves, the critical cooling time t4/3, the weld bead geometry, the microhardness obtained in the welded joint, the base metal heat-affected zone, the process dilution, and the critical areas intersected between the cooling curves and the TTP curve. The numerical results obtained in this thesis are the thermal cycle curves, which represent both the heating to maximum temperature and the subsequent cooling. The critical cooling time t4/3 and the thermal efficiency of the process are calculated, and the bead geometry obtained experimentally is represented. The heat affected zone is obtained by differentiating the zones that are found at different temperatures, along with the critical areas intersected between the cooling curves and the TTP curve. To conclude this doctoral thesis, an optimization has been conducted by means of the Taguchi method for welding parameters in order to obtain an

  8. A parameter-tuned genetic algorithm for statistically constrained economic design of multivariate CUSUM control charts: a Taguchi loss approach

    NASA Astrophysics Data System (ADS)

    Niaki, Seyed Taghi Akhavan; Javad Ershadi, Mohammad

    2012-12-01

    In this research, the main parameters of the multivariate cumulative sum (CUSUM) control chart (the reference value k, the control limit H, the sample size n and the sampling interval h) are determined by minimising the Lorenzen-Vance cost function [Lorenzen, T.J., and Vance, L.C. (1986), 'The Economic Design of Control Charts: A Unified Approach', Technometrics, 28, 3-10], to which the external costs of employing the chart are added. In addition, the model is statistically constrained to achieve desired in-control and out-of-control average run lengths. The Taguchi loss approach is used to model the problem, and a genetic algorithm, whose main parameters are tuned using response surface methodology (RSM), is proposed to solve it. At the end, sensitivity analyses on the main parameters of the cost function are presented and their practical conclusions are drawn. The results show that RSM significantly improves the performance of the proposed algorithm and that the external costs of applying the chart, which are due to real-world constraints, do not increase the average total loss very much.

  9. Taguchi optimisation of ELISA procedures.

    PubMed

    Jeney, C; Dobay, O; Lengyel, A; Adám, E; Nász, I

    1999-03-01

    We propose a new method in the field of ELISA optimization using an experimental design called the Taguchi method. This can be used to compare the net effects of different conditions, which can be both qualitative and quantitative in nature. The method reduces the effects of the interactions of the optimized variables, making it possible to reach the optimum conditions even in cases where there are large interactions between the variables of the assay. Furthermore, the proposed special assignment of factors makes it possible to calculate the biochemical parameters of the ELISA procedure carried out under optimum conditions. Thus, the calibration curve, the sensitivity of the optimum assay, and the intra-assay and inter-assay variability can be estimated. The method is fast, accessing the results in one step, compared to the traditional, time-consuming 'one-step-at-a-time' method. We exemplify the procedure with a method to optimize the detection of ScFv (single-chain variable fragment) phages by ELISA. All the necessary calculations can be carried out with a spreadsheet program without any special statistical knowledge. PMID:10089092

  10. Workbook for Taguchi Methods for Product Quality Improvement.

    ERIC Educational Resources Information Center

    Zarghami, Ali; Benbow, Don

    Taguchi methods are product quality improvement methods that analyze the major contributors to variability and how they can be controlled to reduce variability and poor performance. In this approach, knowledge is used to shorten testing. Taguchi methods are concerned with process improvement rather than with process measurement. This manual is designed to be used…

  11. Formulation and optimization of solid lipid nanoparticle formulation for pulmonary delivery of budesonide using Taguchi and Box-Behnken design.

    PubMed

    Emami, J; Mohiti, H; Hamishehkar, H; Varshosaz, J

    2015-01-01

    Budesonide is a potent non-halogenated corticosteroid with high anti-inflammatory effects. The lungs are an attractive route for non-invasive drug delivery, with advantages for both systemic and local applications. The aim of the present study was to develop, characterize and optimize a solid lipid nanoparticle system to deliver budesonide to the lungs. Budesonide-loaded solid lipid nanoparticles were prepared by the emulsification-solvent diffusion method. The impact of various processing variables, including surfactant type and concentration, lipid content, organic and aqueous phase volumes, and sonication time, was assessed on the particle size, zeta potential, entrapment efficiency, loading percent and mean dissolution time. A Taguchi design with 12 formulations along with a Box-Behnken design with 17 formulations was developed. The impact of each factor upon the eventual responses was evaluated, and the optimized formulation was finally selected. The size and morphology of the prepared nanoparticles were studied using a scanning electron microscope. Based on the optimization made by Design Expert 7® software, a formulation made of glycerol monostearate, 1.2% polyvinyl alcohol (PVA), a lipid/drug weight ratio of 10 and a sonication time of 90 s was selected. The particle size, zeta potential, entrapment efficiency, loading percent, and mean dissolution time of the adopted formulation were predicted and confirmed to be 218.2 ± 6.6 nm, -26.7 ± 1.9 mV, 92.5 ± 0.52%, 5.8 ± 0.3%, and 10.4 ± 0.29 h, respectively. Since the preparation and evaluation of the selected formulation within the laboratory yielded acceptable results with a low error percent, the modeling and optimization were justified. The optimized formulation co-spray dried with lactose (hybrid microparticles) displayed a desirable fine particle fraction, mass median aerodynamic diameter (MMAD), and geometric standard deviation of 49.5%, 2.06 μm, and 2.98, respectively. Our results provide fundamental data for the

  12. Taguchi methods in electronics: A case study

    NASA Astrophysics Data System (ADS)

    Kissel, R.

    1992-05-01

    Total Quality Management (TQM) is becoming more important as a way to improve productivity. One of the technical aspects of TQM is a system called the Taguchi method. This is an optimization method that, with a few precautions, can reduce test effort by an order of magnitude over conventional techniques. The Taguchi method is specifically designed to minimize a product's sensitivity to uncontrollable system disturbances such as aging, temperature, voltage variations, etc., by simultaneously varying both design and disturbance parameters. The analysis produces an optimum set of design parameters. A 3-day class on the Taguchi method was held at the Marshall Space Flight Center (MSFC) in May 1991. A project was needed as a follow-up after the class was over, and the motor controller was selected at that time. Exactly how to proceed was the subject of discussion for some months. It was not clear exactly what to measure, and design kept getting mixed with optimization. There was even some discussion about why the Taguchi method should be used at all.

  13. Integrated Bayesian Experimental Design

    NASA Astrophysics Data System (ADS)

    Fischer, R.; Dreier, H.; Dinklage, A.; Kurzan, B.; Pasch, E.

    2005-11-01

    Any scientist planning experiments wants to optimize the design of a future experiment with respect to best performance within the scheduled experimental scenarios. Bayesian Experimental Design (BED) aims at finding optimal experimental settings based on an information-theoretic utility function. Optimal design parameters are found by maximizing an expected utility function in which the future data and the parameters of the physical scenarios of interest are marginalized. The goal of the Integrated Bayesian Experimental Design (IBED) concept is to combine experiments as early as the design phase to mutually exploit the benefits of the other experiments. The Bayesian Integrated Data Analysis (IDA) concept of linking interdependent measurements to provide a validated data base and to exploit synergetic effects will be used to design meta-diagnostics. An example is given by the Thomson scattering (TS) and interferometry (IF) diagnostics individually, and by a set of both. In finding the optimal experimental design for the meta-diagnostic, TS and IF, the strengths of both experiments can be combined to synergistically increase the reliability of results.

  14. Simulation reduction using the Taguchi method

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Lautenschlager, Ume; Erikstad, Stein Owe; Allen, Janet K.

    1993-01-01

    A large amount of engineering effort is consumed in conducting experiments to obtain information needed for making design decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. The principal focus of this project is to develop and implement applications of Taguchi's quality engineering techniques. In particular, we show how these techniques are applied to reduce the number of experiments for trajectory simulation of the LifeSat space vehicle. Orthogonal arrays are used to study many parameters simultaneously with a minimum of time and resources. Taguchi's signal-to-noise ratio is employed to measure quality. A compromise Decision Support Problem and Robust Design are applied to demonstrate how quality is designed into a product in the early stages of design.

  15. Ultrasound-assisted emulsification microextraction coupled with gas chromatography-mass spectrometry using the Taguchi design method for bisphenol migration studies from thermal printer paper, toys and baby utensils.

    PubMed

    Viñas, Pilar; López-García, Ignacio; Campillo, Natalia; Rivas, Ricardo E; Hernández-Córdoba, Manuel

    2012-08-01

    The optimization of a clean procedure based on ultrasound-assisted emulsification liquid-liquid microextraction for the sensitive determination of four bisphenols is presented. The miniaturized technique was coupled with gas chromatography-mass spectrometry after derivatization by in situ acetylation. The Taguchi experimental method, an orthogonal array design, was applied to find the optimal combination of seven factors (each factor at three levels) influencing the emulsification, extraction and collection efficiency, namely acetic anhydride volume, sodium phosphate concentration, carbon tetrachloride volume, aqueous sample volume, sodium chloride concentration and ultrasound power and application time. A second factorial design was applied with four factors and five levels for each factor, 25 experiments being performed in this instance. The matrix effect was evaluated, and it was concluded that sample quantification can be done by calibration with aqueous standards. The detection limits ranged from 0.01 to 0.03 ng mL(-1) depending on the compound. The environmentally friendly sample pretreatment procedure was applied to study the migration of the bisphenols from different types of samples: thermal printer paper, compact discs, digital versatile discs, small tight-fitting waistcoats, baby's bottles, baby bottle nipples of different materials and children's toys.

  16. Response surface methodology and process optimization of sustained release pellets using Taguchi orthogonal array design and central composite design

    PubMed Central

    Singh, Gurinder; Pai, Roopa S.; Devi, V. Kusum

    2012-01-01

    Furosemide is a powerful diuretic and antihypertensive drug which has low bioavailability due to hepatic first-pass metabolism and has a short half-life of 2 hours. To overcome the above drawbacks, the present study was carried out to formulate and evaluate sustained release (SR) pellets of furosemide for oral administration, prepared by extrusion/spheronization. Drug Coat L-100 was used within the pellet core along with microcrystalline cellulose as the diluent, and the concentration of the selected binder was optimized to be 1.2%. The formulation was prepared with a drug to polymer ratio of 1:3. It was optimized using Design of Experiments by employing a 3^2 central composite design combined with response surface methodology to systematically optimize the process parameters. Dissolution studies were carried out with USP apparatus Type I (basket type) in both simulated gastric and intestinal pH. The statistical techniques, i.e., the two-tailed paired t test and one-way ANOVA of the in vitro data, showed that there was a very significant (P≤0.05) difference in the dissolution profile of the furosemide SR pellets when compared with the pure drug and a commercial product. Validation of the process optimization study indicated an extremely high degree of prognostic ability. The study effectively undertook the development of optimized process parameters for the pelletization of furosemide pellets with tremendous SR characteristics. PMID:22470891
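
    As a rough illustration of the design type named above (not the study's actual factors or ranges), the sketch below lays out the points of a two-factor, face-centered central composite design in coded units and maps them to hypothetical real settings.

        # Minimal sketch: design points of a two-factor, face-centered central
        # composite design (factorial, axial and center points). Factor names and
        # ranges are assumptions for illustration only.
        import itertools
        import numpy as np

        alpha = 1.0                              # face-centered axial distance
        factorial = np.array(list(itertools.product([-1, 1], repeat=2)))
        axial = np.array([[alpha, 0], [-alpha, 0], [0, alpha], [0, -alpha]])
        center = np.zeros((3, 2))                # replicated center points
        design = np.vstack([factorial, axial, center])

        # map coded units to hypothetical real factors (polymer ratio, binder %)
        lows, highs = np.array([1.0, 0.8]), np.array([5.0, 1.6])
        real = lows + (design + 1) / 2 * (highs - lows)
        print(np.round(real, 2))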

  17. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. PMID:25541547

  19. Evaluation of Listeria monocytogenes survival in ice cream mixes flavored with herbal tea using Taguchi method.

    PubMed

    Ozturk, Ismet; Golec, Adem; Karaman, Safa; Sagdic, Osman; Kayacier, Ahmed

    2010-10-01

    In this study, the effects of incorporating some herbal teas at different concentrations into an ice cream mix on the population of Listeria monocytogenes were studied using the Taguchi method. The ice cream mix samples flavored with herbal teas were prepared using green tea and sage at different concentrations. Afterward, a fresh culture of L. monocytogenes was inoculated into the samples and L. monocytogenes was counted at different storage periods. The Taguchi method was used for the experimental design and analysis. In addition, some physicochemical properties of the samples were examined. The results suggested that there was some effect, although small, on the population of L. monocytogenes when herbal tea was incorporated into the ice cream mix. Additionally, the use of herbal tea caused a decrease in the pH values of the samples and significant changes in the color values.

  20. Multi-Response Optimization of Carbidic Austempered Ductile Iron Production Parameters using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Dhanapal, P.; Mohamed Nazirudeen, S. S.; Chandrasekar, A.

    2012-04-01

    Carbidic Austempered Ductile Iron (CADI) is a family of ductile iron containing wear-resistant alloy carbides in an ausferrite matrix. The CADI is manufactured by selecting and characterizing the proper material composition through the melting route. In an effort to arrive at the optimal production parameters for multiple responses, the Taguchi method and Grey relational analysis have been applied. To analyze the effect of the production parameters on the mechanical properties, signal-to-noise ratios and the Grey relational grade have been calculated based on the design of experiments. An analysis of variance was performed to find the contribution of each factor to the mechanical properties and its significance. The analytical results of the Taguchi method were compared with the experimental values, and the two were found to be identical.
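
    A minimal sketch of how a grey relational grade can be computed from several responses, as used in this kind of multi-response optimization; the response columns and their larger/smaller-is-better designations are invented for illustration.

        # Minimal sketch: grey relational grade combining two responses per run,
        # e.g. a property to maximize and one to minimize. Numbers are hypothetical.
        import numpy as np

        responses = np.array([      # rows = experimental runs
            [310., 0.042],
            [295., 0.050],
            [330., 0.038],
            [320., 0.045],
        ])
        larger_is_better = [True, False]
        zeta = 0.5                               # distinguishing coefficient

        norm = np.empty_like(responses)
        for j, lb in enumerate(larger_is_better):
            col = responses[:, j]
            if lb:
                norm[:, j] = (col - col.min()) / (col.max() - col.min())
            else:
                norm[:, j] = (col.max() - col) / (col.max() - col.min())

        delta = 1.0 - norm                       # deviation from the ideal sequence
        coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        grade = coeff.mean(axis=1)               # grey relational grade per run
        print("best run:", int(grade.argmax()) + 1, "grades:", np.round(grade, 3))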

  1. Experimental study of optimal self compacting concrete with spent foundry sand as partial replacement for M-sand using Taguchi approach

    NASA Astrophysics Data System (ADS)

    Nirmala, D. B.; Raviraj, S.

    2016-06-01

    This paper presents the application of the Taguchi approach to obtain the optimal mix proportion for Self Compacting Concrete (SCC) containing spent foundry sand and M-sand. Spent foundry sand is used as a partial replacement for M-sand. The SCC mix has seven control factors, namely coarse aggregate, M-sand with spent foundry sand, cement, fly ash, water, superplasticizer and viscosity modifying agent. The modified Nan Su method is used to proportion the initial SCC mix. An L18 (2^1 × 3^7) orthogonal array (OA) with the seven control factors at three levels is used in the Taguchi approach, which resulted in 18 SCC mix proportions. All mixtures are extensively tested in both fresh and hardened states to verify whether they meet the practical and technical requirements of SCC. The quality characteristic for the "nominal-the-better" situation is applied to the test results to arrive at the optimal SCC mix proportion. Test results indicate that the optimal mix satisfies the requirements for the fresh and hardened properties of SCC. The study reveals the feasibility of using spent foundry sand as a partial replacement for M-sand in SCC and also that the Taguchi method is a reliable tool to arrive at the optimal mix proportion of SCC.
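
    For reference, a short sketch of the nominal-the-better signal-to-noise ratio named above; the choice of response and the replicate values are assumptions for illustration only.

        # Minimal sketch: "nominal-the-better" S/N ratio, used when a response
        # (e.g., a fresh-state SCC property) should hit a target rather than be
        # as large or small as possible. Replicates are illustrative.
        import numpy as np

        def sn_nominal_is_best(y):
            y = np.asarray(y, dtype=float)
            return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

        print(f"S/N = {sn_nominal_is_best([680.0, 692.0, 688.0]):.2f} dB")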

  2. Optimisation of Lime-Soda process parameters for reduction of hardness in aqua-hatchery practices using Taguchi methods.

    PubMed

    Yavalkar, S P; Bhole, A G; Babu, P V Vijay; Prakash, Chandra

    2012-04-01

    This paper presents the optimisation of Lime-Soda process parameters for the reduction of hardness in aqua-hatchery practices in the context of M. rosenbergii. The fresh water used in the development of fisheries needs to be of suitable quality, and the lack of desirable quality in the available fresh water is generally the main constraint. On the Indian subcontinent, groundwater is the only source of raw water, has varying degrees of hardness, and is thus unsuitable for fresh water prawn hatchery practices (M. rosenbergii). In order to make use of hard water in the context of the aqua-hatchery, the Lime-Soda process has been recommended. The efficacy of the various process parameters, such as lime, soda ash and detention time, on the reduction of hardness needs to be examined. This paper proposes to determine the parameter settings for the CIFE well water, which is quite hard, by using the Taguchi experimental design method. Taguchi orthogonal arrays, the signal-to-noise ratio and analysis of variance (ANOVA) have been applied to determine the dosages and to analyse their effect on hardness reduction. The tests carried out with the optimal levels of the Lime-Soda process parameters confirmed the efficacy of the Taguchi optimisation method. Emphasis has been placed on optimisation of the chemical doses required to reduce the total hardness using the Taguchi method and ANOVA, to suit the available raw water quality for aqua-hatchery practices, especially for the fresh water prawn M. rosenbergii.

  3. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    PubMed

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

    Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chance of thermal invasion of the bone, which can cause thermal osteonecrosis, resulting in increased healing time or reduced stability and strength of the fixation. Therefore, drilling of bone with minimum temperature is a major challenge in orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and the Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments have been conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of the individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determining the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process.

  4. Near Field and Far Field Effects in the Taguchi-Optimized Design of AN InP/GaAs-BASED Double Wafer-Fused Mqw Long-Wavelength Vertical-Cavity Surface-Emitting Laser

    NASA Astrophysics Data System (ADS)

    Menon, P. S.; Kandiah, K.; Mandeep, J. S.; Shaari, S.; Apte, P. R.

    Long-wavelength VCSELs (LW-VCSELs) operating in the 1.55 μm wavelength regime offer the advantages of low dispersion and low optical loss in fiber optic transmission systems, which are crucial for increasing data transmission speed and reducing the implementation cost of fiber-to-the-home (FTTH) access networks. LW-VCSELs are attractive light sources because they offer unique features such as low power consumption, narrow beam divergence and ease of fabrication of two-dimensional arrays. This paper compares the near-field and far-field effects of the numerically investigated LW-VCSEL for various design parameters of the device. The optical intensity profile far from the device surface, in the Fraunhofer region, is important for the optical coupling of the laser with other optical components. The near-field pattern is obtained from the structure output, whereas the far-field pattern is essentially a two-dimensional fast Fourier transform (FFT) of the near-field pattern. The effects on the near/far-field patterns of design parameters such as the number of wells in the multi-quantum-well (MQW) region and the thickness of the MQW, and of using Taguchi's orthogonal array method to optimize the device design parameters, are evaluated in this paper. We successfully increased the peak lasing power from an initial 4.84 mW to 12.38 mW at a bias voltage of 2 V and an optical wavelength of 1.55 μm using Taguchi's orthogonal array. As a result of the Taguchi optimization and fine tuning, the device threshold current is found to increase, along with a slight decrease in the modulation speed due to increased device widths.
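
    As a hedged illustration of the near-to-far-field relationship stated above, the sketch below takes a two-dimensional FFT of an assumed Gaussian near-field profile; it is not the authors' simulation, and the aperture size and mode width are placeholders.

        # Minimal sketch: far-field intensity pattern as the 2D FFT of a near-field
        # profile. The Gaussian near field stands in for the simulated device output.
        import numpy as np

        n = 256
        x = np.linspace(-20e-6, 20e-6, n)                 # aperture coordinates (m)
        X, Y = np.meshgrid(x, x)
        near_field = np.exp(-(X**2 + Y**2) / (4e-6)**2)   # assumed Gaussian mode

        far_field = np.fft.fftshift(np.fft.fft2(near_field))
        far_intensity = np.abs(far_field) ** 2
        far_intensity /= far_intensity.max()              # normalized far-field pattern
        print("peak located at bin:",
              np.unravel_index(far_intensity.argmax(), far_intensity.shape))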

  5. A simple procedure for optimising the polymerase chain reaction (PCR) using modified Taguchi methods.

    PubMed Central

    Cobb, B D; Clarkson, J M

    1994-01-01

    Taguchi methods are used widely as the basis for development trials during industrial process design. Here, we describe their suitability for optimisation of the PCR. Unlike conventional strategies, these arrays revealed the effects and interactions of specific reaction components simultaneously using just a few reactions, negating the need for extensive experimental investigation. Reaction components which affected product yield were easily determined. In addition, this technique was applied to the qualitative investigation of RAPD-PCR profiles, where optimisation of the size and distribution of a number of products was determined. PMID:7937094

  6. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  7. Synthesis of graphene by cobalt-catalyzed decomposition of methane in plasma-enhanced CVD: Optimization of experimental parameters with Taguchi method

    NASA Astrophysics Data System (ADS)

    Mehedi, H.-A.; Baudrillart, B.; Alloyeau, D.; Mouhoub, O.; Ricolleau, C.; Pham, V. D.; Chacon, C.; Gicquel, A.; Lagoute, J.; Farhat, S.

    2016-08-01

    This article describes the significant roles of process parameters in the deposition of graphene films via cobalt-catalyzed decomposition of methane diluted in hydrogen using plasma-enhanced chemical vapor deposition (PECVD). The influence of growth temperature (700-850 °C), molar concentration of methane (2%-20%), growth time (30-90 s), and microwave power (300-400 W) on graphene thickness and defect density is investigated using the Taguchi method, which enables reaching the optimal parameter settings by performing a reduced number of experiments. Growth temperature is found to be the most influential parameter in minimizing the number of graphene layers, whereas microwave power has the second largest effect on crystalline quality and a minor role in the thickness of the graphene films. The structural properties of PECVD graphene obtained with the optimized synthesis conditions are investigated with Raman spectroscopy and corroborated with atomic-scale characterization performed by high-resolution transmission electron microscopy and scanning tunneling microscopy, which reveals the formation of a continuous film consisting of 2-7 high-quality graphene layers.

  8. Application of the nonlinear, double-dynamic Taguchi method to the precision positioning device using combined piezo-VCM actuator.

    PubMed

    Liu, Yung-Tien; Fung, Rong-Fong; Wang, Chun-Chao

    2007-02-01

    In this research, the nonlinear, double-dynamic Taguchi method was used as the design and analysis method for a high-precision positioning device using a combined piezo-voice-coil motor (VCM) actuator. An experimental investigation into the effects of two input signals and three control factors was carried out to determine the optimum parametric configuration of the positioning device. The double-dynamic Taguchi method, which permits optimization of several control factors concurrently, is particularly suitable for optimizing the performance of a positioning device with multiple actuators. In this study, matrix experiments were conducted with L9(3^4) orthogonal arrays (OAs). The two most critical processes for the optimization of the positioning device are the identification of the nonlinear ideal function and the combination of the double-dynamic signal factors for the ideal function's response. The driving voltage of the VCM and the waveform amplitude of the PZT actuator are combined into a single quality characteristic to evaluate the positioning response. The application of the double-dynamic Taguchi method, with the dynamic signal-to-noise ratio (SNR) and L9(3^4) OAs, reduced the number of necessary experiments. Analysis of variance (ANOVA) was applied to set the optimum parameters based on the high-precision positioning process.

  9. Optimizing Cu(II) removal from aqueous solution by magnetic nanoparticles immobilized on activated carbon using Taguchi method.

    PubMed

    Ebrahimi Zarandi, Mohammad Javad; Sohrabi, Mahmoud Reza; Khosravi, Morteza; Mansouriieh, Nafiseh; Davallo, Mehran; Khosravan, Azita

    2016-01-01

    This study synthesized magnetic nanoparticles (Fe3O4) immobilized on activated carbon (AC) and used them as an effective adsorbent for Cu(II) removal from aqueous solution. The effects of three parameters, namely the concentration of Cu(II), the dosage of Fe3O4/AC magnetic nanocomposite and pH, on the removal of Cu(II) using the Fe3O4/AC nanocomposite were studied. In order to examine and describe the optimum condition for each of the mentioned parameters, Taguchi's optimization method was used in a batch system and an L9 orthogonal array was used for the experimental design. The removal percentage (R%) of Cu(II) and the uptake capacity (q) were transformed into a signal-to-noise ratio (S/N) for a 'larger-the-better' response. The Taguchi results, analyzed by choosing the best run on the basis of the S/N, were statistically tested using analysis of variance; the tests showed that all the parameters' main effects were significant within a 95% confidence level. The best conditions for removal of Cu(II) were determined to be a pH of 7, a nanocomposite dosage of 0.1 g L(-1) and an initial Cu(II) concentration of 20 mg L(-1) at a constant temperature of 25 °C. Generally, the results showed that the simple Taguchi method is suitable for optimizing the Cu(II) removal experiments. PMID:27386981
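
    A short sketch of the larger-the-better signal-to-noise ratio mentioned above, as applied to a response such as removal percentage; the replicate values are illustrative only.

        # Minimal sketch: "larger-the-better" S/N ratio for a response that should
        # be maximized (e.g., removal percentage). Replicates are hypothetical.
        import numpy as np

        def sn_larger_is_better(y):
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y ** 2))

        print(f"S/N = {sn_larger_is_better([88.2, 90.5, 86.9]):.2f} dB")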

  10. Parametric Optimization of Wire Electrical Discharge Machining of Powder Metallurgical Cold Worked Tool Steel using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sudhakara, Dara; Prasanthi, Guvvala

    2016-08-01

    Wire cut EDM is an unconventional machining process used to build components of complex shape. The current work deals mainly with the optimization of surface roughness while machining P/M cold worked tool steel by wire cut EDM using the Taguchi method. The process parameters of the wire cut EDM are ON, OFF, IP, SV, WT, and WP. An L27 orthogonal array is used to design the experiments. In order to find the parameters affecting the surface roughness, ANOVA analysis is employed. The optimum levels for obtaining minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.

  11. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  12. Modified Artificial Diet for Rearing of Tobacco Budworm, Helicoverpa armigera, using the Taguchi Method and Derringer's Desirability Function

    PubMed Central

    Assemi, H.; Rezapanah, M.; Vafaei-Shoushtari, R.

    2012-01-01

    With the aim of improving the mass rearing feasibility of the tobacco budworm, Helicoverpa armigera Hübner (Lepidoptera: Noctuidae), a design of experiments methodology using a Taguchi orthogonal array was applied. To do so, the effects of 16 factors, comprising the artificial diet ingredients bean, wheat germ powder, Nipagin, ascorbic acid, formaldehyde, oil, agar, distilled water, ascorbate, yeast, chloramphenicol, benomyl and penicillin, together with temperature, humidity, and container size, on some biological characteristics of H. armigera were evaluated. The selected 16 factors were considered at two levels (32 experiments) in the experimental design. Among the selected factors, penicillin, container size, formaldehyde, chloramphenicol, wheat germ powder, and agar showed significant effects on the mass rearing performance. Derringer's desirability function was used for simultaneous optimization of the mass rearing of the tobacco budworm, H. armigera, on a modified artificial diet. The derived optimum operating conditions obtained by Derringer's desirability function and the Taguchi methodology decreased the larval period from 19 to 15.5 days (18.42% improvement), decreased the pupal period from 12.29 to 11 days (10.49% improvement), increased the longevity of adults from 14.51 to 21 days (44.72% improvement), increased the number of eggs/female from 211.21 to 260, and increased egg hatchability from 54.2% to 72% (32.84% improvement). The proposed method facilitated a systematic mathematical approach with a few well-defined experimental sets. PMID:23425103
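
    A minimal sketch of Derringer's desirability approach for combining several rearing responses into one composite score; the limits, targets and response values below are assumptions for illustration, not the study's settings.

        # Minimal sketch: individual desirabilities for each response, combined
        # into a composite score by a geometric mean. All bounds are hypothetical.
        import numpy as np

        def d_larger(y, low, target, r=1.0):
            # 0 below 'low', 1 at or above 'target'
            return float(np.clip((y - low) / (target - low), 0.0, 1.0)) ** r

        def d_smaller(y, target, high, r=1.0):
            # 1 at or below 'target', 0 above 'high'
            return float(np.clip((high - y) / (high - target), 0.0, 1.0)) ** r

        d1 = d_smaller(15.5, target=14.0, high=19.0)   # larval period (days), minimize
        d2 = d_larger(260.0, low=200.0, target=280.0)  # eggs per female, maximize
        d3 = d_larger(72.0, low=50.0, target=80.0)     # egg hatchability (%), maximize

        overall = (d1 * d2 * d3) ** (1.0 / 3.0)        # geometric mean = composite D
        print(f"composite desirability = {overall:.3f}")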

  13. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  14. Total Quality Management: Statistics and Graphics III - Experimental Design and Taguchi Methods. AIR 1993 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schwabe, Robert A.

    Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical methods used for TQM. This paper, the third in the series, first discusses some of the…

  15. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment.

  16. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  17. Optimization of a Three-Component Green Corrosion Inhibitor Mixture for Using in Cooling Water by Experimental Design

    NASA Astrophysics Data System (ADS)

    Asghari, E.; Ashassi-Sorkhabi, H.; Ahangari, M.; Bagheri, R.

    2016-04-01

    Factors such as inhibitor concentration, solution hydrodynamics, and temperature influence the performance of corrosion inhibitor mixtures. Studying the impact of different factors simultaneously is a time- and cost-consuming process. The use of experimental design methods can be helpful in minimizing the number of experiments and finding locally optimized conditions for the factors under investigation. In the present work, the inhibition performance of a three-component inhibitor mixture against corrosion of a St37 steel rotating disk electrode (RDE) was studied. The mixture was composed of citric acid, lanthanum(III) nitrate, and tetrabutylammonium perchlorate. In order to decrease the number of experiments, the L16 Taguchi orthogonal array was used. The "control factors" were the concentration of each component and the rotation rate of the RDE, and the "response factor" was the inhibition efficiency. Scanning electron microscopy and energy dispersive x-ray spectroscopy verified the formation of islands of adsorbed citrate complexes with lanthanum ions and insoluble lanthanum(III) hydroxide. From the Taguchi analysis results, a mixture of 0.50 mM lanthanum(III) nitrate, 0.50 mM citric acid, and 2.0 mM tetrabutylammonium perchlorate under an electrode rotation rate of 1000 rpm was found to be the optimum condition.

  18. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  19. [Development of an optimized formulation of damask marmalade with low energy level using Taguchi methodology].

    PubMed

    Villarroel, Mario; Castro, Ruth; Junod, Julio

    2003-06-01

    The goal of the present study was the development of an optimized formula for damask marmalade low in calories, applying Taguchi methodology to improve the quality of this product. The selection of this methodology rests on the fact that in real-life conditions the result of an experiment frequently depends on the influence of several variables; therefore, one expedient way to address this problem is to use factorial designs. The influence of acid, thickener, sweetener and aroma additives, as well as time of cooking, and possible interactions among some of them, was studied in order to find the best combination of these factors to optimize the sensory quality of an experimental formulation of dietetic damask marmalade. An L8 (2^7) orthogonal array was applied in this work, and level average analysis was carried out according to Taguchi methodology to determine the suitable working levels of the previously chosen design factors to achieve a desirable product quality. A trained sensory panel was used to analyze the marmalade samples using a composite scoring test with a descriptive quantitative scale ranging from 1 = Bad to 5 = Good. It was demonstrated that the design factors sugar/aspartame, pectin and damask aroma had a significant effect (p < 0.05) on the sensory quality of the marmalade, with an 82% contribution to the response. The optimal combination turned out to be: citric acid 0.2%; pectin 1%; 30 g sugar/16 mg aspartame/100 g; damask aroma 0.5 ml/100 g; time of cooking 5 minutes. Regarding chemical composition, the most important results were the decrease in carbohydrate content compared with traditional marmalade, with a reduction of 56% in caloric value, and an amount of dietary fiber greater than that of similar commercial products. Storage stability assays were carried out on marmalade samples submitted to different temperatures and held in plastic bags of different density. No perceptible sensory, microbiological or chemical changes

  20. Study on interaction between palladium(ІІ)-Linezolid chelate with eosin by resonance Rayleigh scattering, second order of scattering and frequency doubling scattering methods using Taguchi orthogonal array design

    NASA Astrophysics Data System (ADS)

    Thakkar, Disha; Gevriya, Bhavesh; Mashru, R. C.

    2014-03-01

    Linezolid reacted with palladium to form a 1:1 binary cationic chelate, which further reacted with eosin dye to form a 1:1 ternary ion-association complex at pH 4 of Walpole's acetate buffer in the presence of methyl cellulose. As a result, not only were the absorption spectra changed, but the Resonance Rayleigh Scattering (RRS), Second-order Scattering (SOS) and Frequency Doubling Scattering (FDS) intensities were greatly enhanced. The analytical wavelengths of RRS, SOS and FDS (λex/λem) of the ternary complex were located at 538 nm/538 nm, 240 nm/480 nm and 660 nm/330 nm, respectively. The linearity ranges for the RRS, SOS and FDS methods were 0.01-0.5 μg mL-1, 0.1-2 μg mL-1 and 0.2-1.8 μg mL-1, respectively. The sensitivity order of the three methods was RRS > SOS > FDS. The accuracy of all methods was determined by recovery studies and showed recoveries between 98% and 102%. Intraday and inter-day precision were checked for all methods, and the %RSD was found to be less than 2 in all cases. The effects of foreign substances were tested on the RRS method, which showed that the method had good selectivity. For optimization of the process parameters, a Taguchi orthogonal array design L8(2^4) was used, and ANOVA was adopted to determine the statistically significant control factors that affect the scattering intensities of the methods. The reaction mechanism, the composition of the ternary ion-association complex and the reasons for the scattering intensity enhancement are discussed in this work.

  1. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  2. Multi-response optimization in the development of oleo-hydrophobic cotton fabric using Taguchi based grey relational analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Naseer; Kamal, Shahid; Raza, Zulfiqar Ali; Hussain, Tanveer; Anwar, Faiza

    2016-03-01

    The present study undertakes multi-response optimization of the water- and oil-repellent finishing of bleached cotton fabric using Taguchi based grey relational analysis. We considered three input variables, viz. the concentrations of the finish (Oleophobol CP-C) and the cross-linking agent (Knittex FEL), and the curing temperature. The responses included: water and oil contact angles, air permeability, crease recovery angle, stiffness, and tear and tensile strengths of the finished fabric. The experiments were conducted under an L9 orthogonal array in the Taguchi design. Grey relational analysis was included to set the quality characteristics as the reference sequence and to decide the optimal parameter combinations. Additionally, analysis of variance was employed to determine the most significant factor. The results demonstrate great improvement in the desired quality parameters of the developed fabric. The optimization approach reported in this study could be effectively used to reduce expensive trial-and-error experimentation for new product development and process optimization involving multiple responses. The product optimized in this study was characterized by using advanced analytical techniques, and has potential applications in rainwear and other outdoor apparel.

  3. The Experimental Design Ability Test (EDAT)

    ERIC Educational Resources Information Center

    Sirum, Karen; Humburg, Jennifer

    2011-01-01

    Higher education goals include helping students develop evidence based reasoning skills; therefore, scientific thinking skills such as those required to understand the design of a basic experiment are important. The Experimental Design Ability Test (EDAT) measures students' understanding of the criteria for good experimental design through their…

  4. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  5. Parametric optimization of selective laser melting for forming Ti6Al4V samples by Taguchi method

    NASA Astrophysics Data System (ADS)

    Sun, Jianfeng; Yang, Yongqiang; Wang, Di

    2013-07-01

    In this study, a selective laser melting experiment was carried out with Ti6Al4V alloy powders. To produce samples with maximum density, the selective laser melting parameters of laser power, scanning speed, powder thickness, hatching space and scanning strategy were carefully selected. As a statistical design-of-experiments technique, the Taguchi method was used to optimize the selected parameters. The results were analyzed for the optimal parameters using analysis of variance (ANOVA) and signal-to-noise (S/N) ratios in Design-Expert software, and a regression model was established. The regression equation revealed a linear relationship among the density, laser power, scanning speed, powder thickness and scanning strategy. From the experiments, a sample with a density higher than 95% was obtained. The microstructure of the obtained sample was mainly composed of acicular martensite, α phase and β phase. The micro-hardness was 492 HV0.2.
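
    The larger-the-better signal-to-noise ratio used in this kind of density maximization can be written out explicitly; the short Python sketch below evaluates it for hypothetical repeated density measurements (the numbers are illustrative, not the paper's data).

        import numpy as np

        # Hypothetical relative densities (%) from three repeats of one SLM parameter set
        y = np.array([95.4, 96.1, 95.8])

        # Taguchi larger-the-better criterion: S/N = -10 * log10( (1/n) * sum(1 / y_i**2) )
        sn = -10.0 * np.log10(np.mean(1.0 / y**2))
        print(f"S/N = {sn:.2f} dB")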

  6. Taguchi Based Regression Analysis of End-Wall Film Cooling in a Gas Turbine Cascade with Single Row of Holes

    NASA Astrophysics Data System (ADS)

    Ravi, D.; Parammasivam, K. M.

    2016-09-01

    Numerical investigations were conducted on a turbine cascade with end-wall cooling by a single row of cylindrical holes inclined at 30°. The mainstream fluid was hot air and the coolant was CO2 gas. Based on the Reynolds number, the flow was turbulent at the inlet. The film hole row position, its pitch and the blowing ratio were each varied over five different values. The Taguchi approach was used to design an L25 orthogonal array (OA) for these parameters. The end-wall averaged film cooling effectiveness (bar η) was chosen as the quality characteristic. CFD analyses were carried out using Ansys Fluent on computational domains designed with inputs from the OA. Experiments were conducted for one chosen OA configuration, and the computational results were found to correlate well with the experimental measurements. The responses from the CFD analyses were fed to the statistical tool to develop a correlation for bar η using regression analysis.

  7. Application of Taguchi Method to Investigate the Effects of Process Factors on the Production of Industrial Piroxicam Polymorphs and Optimization of Dissolution Rate of Powder.

    PubMed

    Shahbazian, Alen; Davood, Asghar; Dabirsiaghi, Alireza

    2016-01-01

    Piroxicam has two different crystalline forms (known as the needle and cubic forms), which differ in physicochemical properties such as biological solubility. In the current research, the influences of five operating variables on the formation of the piroxicam polymorph shapes during recrystallization were studied using a Taguchi experimental design approach. The variables included the type of solvent, the cooling method, the type of mixing paddle, pH, and the agitator speed. Statistical analysis of the results revealed the significance order of the factors affecting product quality and quantity. First, using the Taguchi experimental method, the influence of the process factors on the yield, particle size and dissolution rate of the piroxicam powder was statistically investigated. The optimum conditions to achieve the best dissolution rate of piroxicam were determined experimentally. The results were analyzed using Qualitek4 software, and it was revealed that the type of solvent and the method of cooling are, respectively, the most important factors affecting the dissolution rate. It was also found experimentally that factors such as the type of agitator paddle, pH and agitation rate have no significant effect on the dissolution rate.

  9. GCFR shielding design and supporting experimental programs

    SciTech Connect

    Perkins, R.G.; Hamilton, C.J.; Bartine, D.

    1980-05-01

    The shielding for the conceptual design of the gas-cooled fast breeder reactor (GCFR) is described, and the component exposure design criteria which determine the shield design are presented. The experimental programs for validating the GCFR shielding design methods and data (which have been in existence since 1976) are also discussed.

  10. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely Occupational Safety and Health Administration (OSHA) be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and role/regulations of other agencies.…

  11. The photocatalytic degradation of cationic surfactant from wastewater in the presence of nano-zinc oxide using Taguchi method

    NASA Astrophysics Data System (ADS)

    Giahi, M.; Moradidoost, A.; Bagherinia, M. A.; Taghavi, H.

    2013-12-01

    The photocatalytic degradation of cetyl pyridinium chloride (CPC) has been investigated in the aqueous phase using ultraviolet (UV) light and ZnO nanopowder. Kinetic analysis showed that the extent of photocatalytic degradation of the surfactant can be fitted with a pseudo-first-order model, and the photochemical elimination of CPC could be studied by the Taguchi method. The experimental design was based on testing five factors, i.e., the dosage of K2S2O8, the concentration of CPC, the amount of ZnO, the irradiation time and the initial pH. Each factor was tested at four levels. The optimum parameters were found to be pH 5.0; amount of ZnO, 11 mg; K2S2O8, 3 mM; CPC, 10 mg/L; and irradiation time, 8 h.
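
    For reference, the pseudo-first-order model referred to above is the standard linearized form (the notation here is generic, not taken from the paper):

        \[
        -\frac{dC}{dt} = k_{\mathrm{app}}\,C
        \quad\Longrightarrow\quad
        \ln\!\left(\frac{C_0}{C_t}\right) = k_{\mathrm{app}}\,t ,
        \]

    so a plot of ln(C0/Ct) against irradiation time t should be linear, with the apparent rate constant k_app given by the slope.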

  12. Adsorption of cefixime from aqueous solutions using modified hardened paste of Portland cement by perlite; optimization by Taguchi method.

    PubMed

    Rasoulifard, Mohammad Hossein; Khanmohammadi, Soghra; Heidari, Azam

    2016-01-01

    In the present study, we have used a simple and cost-effective removal technique based on a commercially available Fe-Al-SiO2-containing complex material, hardened paste of Portland cement (HPPC). The adsorbing performance of HPPC and of HPPC modified with perlite for the removal of cefixime from aqueous solutions was investigated comparatively using batch adsorption studies. HPPC was selected because of its main advantages, such as high efficiency, simple separation of the sludge, low cost and abundant availability. A Taguchi orthogonal array experimental design with an OA16 (4^5) matrix was employed to optimize the affecting factors of adsorbate concentration, adsorbent dosage, type of adsorbent, contact time and pH. On the basis of the equilibrium adsorption data, the Langmuir, Freundlich and Temkin adsorption isotherm models were also confirmed. The results showed that HPPC and modified HPPC were both efficient adsorbents for cefixime removal. PMID:27642826
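
    The three isotherm models named in the abstract have the following commonly used forms, where q_e is the equilibrium adsorption capacity and C_e the equilibrium concentration (the symbols are generic, not the paper's notation):

        \[
        \text{Langmuir: } q_e = \frac{q_m K_L C_e}{1 + K_L C_e},
        \qquad
        \text{Freundlich: } q_e = K_F\,C_e^{1/n},
        \qquad
        \text{Temkin: } q_e = \frac{RT}{b_T}\ln\!\left(K_T C_e\right).
        \]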

  14. Designing High Quality Research in Special Education: Group Experimental Designs.

    ERIC Educational Resources Information Center

    Gersten, Russell; Lloyd, John Wills; Baker, Scott

    This paper, a result of a series of meetings of researchers, discusses critical issues related to the conduct of high-quality intervention research in special education using experimental and quasi-experimental designs that compare outcomes for different groups of students. It stresses the need to balance design components that satisfy laboratory…

  15. Determination of the optimal time and cost of manufacturing flow of an assembly using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Petrila, S.; Brabie, G.; Chirita, B.

    2016-08-01

    The optimization of the part and assembly manufacturing operations was carried out in order to minimize both the time and the cost of production. The optimization was performed using the Taguchi method, which is based on experimental plans that vary the input and output factors. The Taguchi method was applied to optimize the production flow of the analyzed assembly in the following ways: to find the optimal combination of manufacturing operations; to choose the variant involving the use of high-performance equipment; and to base delivery operations on automation. The final aim of applying the Taguchi method is for the entire assembly to be produced at minimum cost and in a short time. The Taguchi philosophy of optimizing product quality is synthesized into three basic concepts: quality must be designed into the product, not inspected into it after it has been manufactured; the highest quality is obtained when the deviation from the proposed target is low, or when the action of uncontrollable factors has no influence on it, which translates into robustness; and the costs entailed by quality are expressed as a function of the deviation from the nominal value [1]. When determining the number of experiments needed to study a phenomenon by this method, several restrictive conditions must be followed [2].
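
    The third concept, cost as a function of deviation from the nominal value, is usually written as the Taguchi quadratic loss function (the symbols below are the conventional textbook ones, not taken from the cited reference):

        \[
        L(y) = k\,(y - m)^2,
        \qquad
        \operatorname{E}[L] = k\left[\sigma^2 + (\mu - m)^2\right],
        \]

    where y is the measured characteristic, m the nominal target, k a cost coefficient, and μ and σ² the process mean and variance; minimizing the expected loss therefore rewards both hitting the target and reducing variability.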

  16. Experimental design of a waste glass study

    SciTech Connect

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

    A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150 °C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.

  17. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  18. More efficiency in fuel consumption using gearbox optimization based on Taguchi method

    NASA Astrophysics Data System (ADS)

    Goharimanesh, Masoud; Akbari, Aliakbar; Akbarzadeh Tootoonchi, Alireza

    2014-05-01

    Automotive emissions are becoming a critical threat to human health. Many researchers are studying engine designs that lead to lower fuel consumption, and gearbox selection plays a key role in engine design. In this study, the Taguchi quality engineering method is employed, and the optimum gear ratios of a five-speed gearbox are obtained. A table of various gear ratios is suggested by design-of-experiments techniques. Fuel consumption is calculated by simulating the corresponding combustion dynamics model. Using a 95% confidence level, optimal parameter combinations are determined with the Taguchi method. The level of importance of the parameters for fuel efficiency is resolved using the analysis of the signal-to-noise ratio as well as analysis of variance.

  19. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism. PMID:27090148

  20. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by the planning efforts of the project's various research and quality control teams, which are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate the adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  1. Using experimental design to define boundary manikins.

    PubMed

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition of that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. Literature sources differ in how many manikins should be defined and in what way. A field related to the boundary case method is experimental design, in which the relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibility of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adopted to be used together with a confidence region and its axes. The results of the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups of manikins depends heavily on the number of key measurements but also on the type of experimental design chosen. PMID:22317428

  2. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, Darrell; Curtis, Andrew

    2011-08-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms. This paper examines the influence of the NFL theorems on linearized statistical experimental design (SED). We consider four design algorithms with three different design objective functions to examine their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent to the study of transverse isotropy in many disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. We discuss differences in the performance of each design algorithm, providing a guideline for selecting design algorithms for other problems. As a by-product we demonstrate and discuss the principle of diminishing returns in SED, namely, that the value of experimental design decreases with experiment size. Another outcome of this study is a simple rule-of-thumb for prescribing optimal experiments for ellipse fitting, which bypasses the computational expense of SED. This is used to define a template for optimizing survey designs, under simple assumptions, for Amplitude Variations with Azimuth and Offset (AVAZ) seismics in the specialized problem of fracture characterization, such as is of interest in the petroleum industry. Finally, we discuss the scope of our conclusions for the NFL theorems as they apply to nonlinear and Bayesian SED.

  3. Taguchi's off line method and Multivariate loss function approach for quality management and optimization of process parameters -A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered to be an effective approach for improving product quality at a relatively low cost, and the Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage for one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss over these multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics, most of them concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratios. The results reveal that the advantages of this approach are that the optimal parameter design coincides with the traditional Taguchi method in the single-characteristic case, and that the optimal design maximizes the reduction of total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and on its successful implementation in various industries.
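
    The signal-to-noise ratios maximized in these approaches take one of three conventional forms, depending on the type of quality characteristic (these are the standard textbook definitions, not specific to any paper reviewed here):

        \[
        \mathrm{S/N}_{\text{smaller}} = -10\log_{10}\!\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} y_i^{2}\Big),\qquad
        \mathrm{S/N}_{\text{larger}} = -10\log_{10}\!\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} y_i^{-2}\Big),\qquad
        \mathrm{S/N}_{\text{nominal}} = 10\log_{10}\!\Big(\tfrac{\bar{y}^{2}}{s^{2}}\Big).
        \]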

  4. Taguchi Optimization on the Initial Thickness and Pre-aging of Nano-/Ultrafine-Grained Al-0.2 wt.%Sc Alloy Produced by ARB

    NASA Astrophysics Data System (ADS)

    Yousefieh, Mohammad; Tamizifar, Morteza; Boutorabi, Seyed Mohammad Ali; Borhani, Ehsan

    2016-08-01

    In this study, the Taguchi design method with an L9 orthogonal array has been used to optimize the initial thickness and pre-aging parameters (temperature and time) for the mechanical properties of an Al-0.2 wt.% Sc alloy heavily deformed by accumulative roll bonding (ARB) up to ten cycles. Analysis of variance was performed on the measured data and signal-to-noise ratios. It was found that the pre-aging temperature is the most significant parameter affecting the mechanical properties, with a percentage contribution of 64.51%. Pre-aging time (19.29%) has the next most significant effect, while the initial thickness (5.31%) has a statistically less significant effect. In order to confirm the experimental conclusions, verification experiments were carried out at the optimum working conditions. Under these conditions, the yield strength was 6.51 times higher and the toughness was 6.86% lower compared with the starting Al-Sc material. Moreover, the mean grain size was decreased to 220 nm by setting the control parameters accordingly, which was the lowest value obtained in this study. It was concluded that the Taguchi method is a promising technique for obtaining the optimum conditions in such studies. Consequently, by controlling the parameter levels, high-strength and high-toughness Al-Sc samples were fabricated through pre-aging and the subsequent ARB process.
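
    Percentage contributions like the 64.51% reported above come from an ANOVA decomposition of the total sum of squares over the orthogonal-array runs. A minimal Python sketch, using a hypothetical L9 layout and invented response values (none of these numbers are from the study), is:

        import numpy as np

        # Hypothetical L9 layout: factor level index (0-2) for each of the 9 runs,
        # and a measured response (e.g., yield strength) per run.
        levels = {
            "pre-aging temperature": [0, 0, 0, 1, 1, 1, 2, 2, 2],
            "pre-aging time":        [0, 1, 2, 0, 1, 2, 0, 1, 2],
            "initial thickness":     [0, 1, 2, 1, 2, 0, 2, 0, 1],
        }
        y = np.array([310., 325., 340., 355., 370., 362., 390., 401., 388.])

        grand_mean = y.mean()
        ss_total = np.sum((y - grand_mean) ** 2)

        # Factor sum of squares from the level means, then percentage contribution
        for name, lv in levels.items():
            lv = np.array(lv)
            ss = sum(np.sum(lv == L) * (y[lv == L].mean() - grand_mean) ** 2 for L in range(3))
            print(f"{name:25s} {100 * ss / ss_total:5.1f} %")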

  6. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  7. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  8. Surface Roughness Prediction Model using Zirconia Toughened Alumina (ZTA) Turning Inserts: Taguchi Method and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Mandal, Nilrudra; Doloi, Biswanath; Mondal, Biswanath

    2016-01-01

    In the present study, an attempt has been made to apply the Taguchi parameter design method and regression analysis to optimize the cutting conditions for surface finish while machining AISI 4340 steel with the help of newly developed yttria-based Zirconia Toughened Alumina (ZTA) inserts. These inserts are prepared through a wet chemical co-precipitation route followed by a powder metallurgy process. Experiments have been carried out based on an L9 orthogonal array with three parameters (cutting speed, depth of cut and feed rate) at three levels (low, medium and high). Based on the mean response and the signal-to-noise ratio (SNR), the optimal cutting condition has been found to be A3B1C1, i.e., a cutting speed of 420 m/min, a depth of cut of 0.5 mm and a feed rate of 0.12 m/min, using the smaller-the-better criterion. Analysis of Variance (ANOVA) is applied to find the significance and percentage contribution of each parameter. A mathematical model of surface roughness has been developed using regression analysis as a function of the above-mentioned independent variables. The values predicted by the developed model and the experimental values are found to be very close to each other, justifying the significance of the model. A confirmation run has been carried out at a 95% confidence level to verify the optimized result, and the values obtained are within the prescribed limit.
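
    The regression step described above can be reproduced in outline with ordinary least squares; in the sketch below the cutting conditions and roughness values are hypothetical placeholders (the factor levels loosely follow the abstract, the responses and units do not come from it).

        import numpy as np

        # Hypothetical L9 cutting conditions: [speed (m/min), depth of cut (mm), feed (mm/rev)]
        X = np.array([
            [140, 0.5, 0.12], [140, 1.0, 0.16], [140, 1.5, 0.20],
            [280, 0.5, 0.16], [280, 1.0, 0.20], [280, 1.5, 0.12],
            [420, 0.5, 0.20], [420, 1.0, 0.12], [420, 1.5, 0.16],
        ])
        Ra = np.array([1.42, 1.76, 2.10, 1.21, 1.65, 1.18, 1.35, 0.88, 1.05])  # roughness (um)

        # First-order regression model Ra = b0 + b1*speed + b2*doc + b3*feed via least squares
        A = np.column_stack([np.ones(len(Ra)), X])
        coeffs, *_ = np.linalg.lstsq(A, Ra, rcond=None)
        print("model coefficients:", np.round(coeffs, 4))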

  9. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for independency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns. That is, we see that the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all. Another outcome

  10. Conceptual design of Fusion Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Seki, Yasushi; Takatsu, Hideyuki; Iida, Hiromasa

    1991-08-01

    Safety analysis and evaluation have been made for the FER (Fusion Experimental Reactor) as well as for the ITER (International Thermonuclear Experimental Reactor) which are basically the same in terms of safety. This report describes the results obtained in fiscal years 1988 - 1990, in addition to a summary of the results obtained prior to 1988. The report shows the philosophy of the safety design, safety analysis and evaluation for each of the operation conditions, namely, normal operation, repair and maintenance, and accident. Considerations for safety regulations and standards are also added.

  11. Rational Experimental Design for Electrical Resistivity Imaging

    NASA Astrophysics Data System (ADS)

    Mitchell, V.; Pidlisecky, A.; Knight, R.

    2008-12-01

    Over the past several decades advances in the acquisition and processing of electrical resistivity data, through multi-channel acquisition systems and new inversion algorithms, have greatly increased the value of these data to near-surface environmental and hydrological problems. There has, however, been relatively little advancement in the design of actual surveys. Data acquisition still typically involves using a small number of traditional arrays (e.g. Wenner, Schlumberger) despite a demonstrated improvement in data quality from the use of non-standard arrays. While optimized experimental design has been widely studied in applied mathematics and the physical and biological sciences, it is rarely implemented for non-linear problems, such as electrical resistivity imaging (ERI). We focus specifically on using ERI in the field for monitoring changes in the subsurface electrical resistivity structure. For this application we seek an experimental design method that can be used in the field to modify the data acquisition scheme (spatial and temporal sampling) based on prior knowledge of the site and/or knowledge gained during the imaging experiment. Some recent studies have investigated optimized design of electrical resistivity surveys by linearizing the problem or with computationally-intensive search algorithms. We propose a method for rational experimental design based on the concept of informed imaging, the use of prior information regarding subsurface properties and processes to develop problem-specific data acquisition and inversion schemes. Specifically, we use realistic subsurface resistivity models to aid in choosing source configurations that maximize the information content of our data. Our approach is based on first assessing the current density within a region of interest, in order to provide sufficient energy to the region of interest to overcome a noise threshold, and then evaluating the direction of current vectors, in order to maximize the

  12. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  13. Bioinspiration: applying mechanical design to experimental biology.

    PubMed

    Flammang, Brooke E; Porter, Marianne E

    2011-07-01

    The production of bioinspired and biomimetic constructs has fostered much collaboration between biologists and engineers, although the extent of biological accuracy employed in the designs produced has not always been a priority. Even the exact definitions of "bioinspired" and "biomimetic" differ among biologists, engineers, and industrial designers, leading to confusion regarding the level of integration and replication of biological principles and physiology. By any name, biologically-inspired mechanical constructs have become an increasingly important research tool in experimental biology, offering the opportunity to focus research by creating model organisms that can be easily manipulated to fill a desired parameter space of structural and functional repertoires. Innovative researchers with both biological and engineering backgrounds have found ways to use bioinspired models to explore the biomechanics of organisms from all kingdoms to answer a variety of different questions. Bringing together these biologists and engineers will hopefully result in an open discourse of techniques and fruitful collaborations for experimental and industrial endeavors.

  14. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going on to optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. PMID:27587905

  15. Application of Taguchi's method to optimize fiber Raman amplifier

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif

    2016-04-01

    Taguchi's method is introduced to perform multiobjective optimization of fiber Raman amplifier (FRA). The optimization requirements are to maximize gain and keep gain ripple minimum over the operating bandwidth of a wavelength division multiplexed (WDM) communication link. Mathematical formulations of FRA and corresponding numerical solution techniques are discussed. A general description of Taguchi's method and how it can be integrated with the FRA optimization problem are presented. The proposed method is used to optimize two different configurations of FRA. The performance of Taguchi's method is compared with genetic algorithm and particle swarm optimization in terms of output performance and convergence rate. Taguchi's method is found to produce good results with fast convergence rate, which makes it well suited for the nonlinear optimization problems.

  16. Optimizing quality of digital mammographic imaging using Taguchi analysis with an ACR accreditation phantom.

    PubMed

    Chen, Ching-Yuan; Pan, Lung-Fa; Chiang, Fu-Tsai; Yeh, Da-Ming; Pan, Lung-Kwang

    2016-07-01

    This work demonstrated the improved visualization of lesions obtained by modulating the factors of an X-ray mammography imaging system using Taguchi analysis. Optimal combinations of the X-ray operating factors were determined using the Taguchi method, in which all factor-level combinations were organized into only 18 groups, yielding analytical results with the same confidence as if each factor had been examined independently. The 4 operating factors of the X-ray machine considered were (1) anode material (target), (2) kVp, (3) mAs and (4) field of view (FOV). Each of these factors had 2 or 3 levels; therefore, 54 (2×3×3×3 = 54) possible combinations were generated. The optimal settings were Rh as the target, 28 kVp, 80 mAs and a 19×23 cm(2) FOV. The grade of the exposed mammographic phantom image increased from 70.92 under the automatic exposure control (AEC) setting to 72.00 under the optimal setting, meeting the minimum standard (70.00) set by Taiwan's Department of Health. The average glandular dose (AGD) of the exposed phantom under the optimal setting, 0.182 cGy, was lower than the 0.203 cGy obtained under the AEC setting. The Taguchi method is extremely promising for the design of imaging protocols in clinical diagnosis. PMID:27282343

  17. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  18. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our

  19. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach an unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field, as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (~G2) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve the determination of a variety of relativistic effects, including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  20. Optimization of parameters for the synthesis of Y2Cu2O5 nanoparticles by Taguchi method and comparison of their magnetic and optical properties with their bulk counterpart

    NASA Astrophysics Data System (ADS)

    Farbod, Mansoor; Rafati, Zahra; Shoushtari, Morteza Zargar

    2016-06-01

    Y2Cu2O5 nanoparticles were synthesized by a sol-gel combustion method, and the effects of different factors on the size of the nanoparticles were investigated. In order to reduce the number of experimental stages, the Taguchi robust design method was employed. The citric acid:Cu2+ molar ratio, pH, sintering temperature and sintering time were chosen as the parameters for optimization. Among these factors the solution pH had the most influence, and the others had nearly the same influence on the nanoparticle size. Based on the conditions predicted by the Taguchi design, a sample with a minimum particle size of 47 nm was prepared. The magnetic behavior of the Y2Cu2O5 nanoparticles was measured, and it was found that at low fields they are soft ferromagnetic, while at high fields they behave paramagnetically. The magnetic behavior of the nanoparticles was compared to that of their bulk counterpart; the Mr of the samples was slightly different, but the Hc of the nanoparticles was 76% of that of the bulk sample. The maximum absorbance peak of the UV-vis spectrum showed a blue shift for the smaller particles.

  1. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities.

  2. [Design and experimentation of marine optical buoy].

    PubMed

    Yang, Yue-Zhong; Sun, Zhao-Hua; Cao, Wen-Xi; Li, Cai; Zhao, Jun; Zhou, Wen; Lu, Gui-Xin; Ke, Tian-Cun; Guo, Chao-Ying

    2009-02-01

    A marine optical buoy is of great value for the calibration and validation of ocean color remote sensing, scientific observation, coastal environment monitoring, etc. A marine optical buoy system was designed which consists of a main buoy and a slave buoy. The system can synchronously measure the distribution of irradiance and radiance above the sea surface, in the layer near the sea surface and in the euphotic zone, while also acquiring other parameters such as the spectral absorption and scattering coefficients of the water column and the velocity and direction of the wind. The buoy was positioned by GPS. A low-power integrated PC104 computer was used as the control core to collect data automatically. The data and commands were transmitted in real time by CDMA/GPRS wireless networks or by maritime satellite. Coastal marine experimentation demonstrated that the buoy has small pitch and roll rates in high sea state conditions and thus can meet the needs of underwater radiometric measurements, that the data collection and remote transmission are reliable, and that the auto-operated anti-biofouling devices can keep the optical sensors working effectively for a period of several months.

  3. An optimization of superhydrophobic polyvinylidene fluoride/zinc oxide materials using Taguchi method

    NASA Astrophysics Data System (ADS)

    Mohamed, Adel M. A.; Jafari, Reza; Farzaneh, Masoud

    2014-01-01

    This article is focused on the preparation and characterization of PVDF/ZnO composite materials. The superhydrophobic surface was prepared by spray coating a mixture of PVDF polymer and ZnO nanoparticles onto an aluminum substrate. Stearic acid was added to improve the dispersion of the ZnO. Taguchi's design-of-experiments method, using MINITAB15, was used to rank several factors that may affect the superhydrophobic properties in order to formulate the optimum conditions. A Taguchi L9 orthogonal array was applied, with three levels considered for each factor. ANOVA was carried out to identify the significant factors that affect the water contact angle. Confirmation tests were performed at the predicted optimum process parameters. The crystallinity and morphology of the PVDF-ZnO membranes were determined by Fourier transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM). The results of the Taguchi method indicate that the ZnO and stearic acid contents were the parameters making a significant contribution toward improving the hydrophobicity of the PVDF materials. As the content of ZnO nanoparticles increased, the water contact angle increased, ranging from 122° to 159°, while the contact angle hysteresis and sliding angle decreased to 3.5° and 2.5°, respectively. The SEM results show that the hierarchical micro-nanostructure of ZnO plays an important role in the formation of the superhydrophobic surface. The FTIR results showed that, in both the absence and presence of ZnO nanoparticles, the crystallization of the PVDF occurred predominantly in the β-phase.

  4. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
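
    The selection rule described in this abstract, choosing the experiment whose distribution of expected results has maximum entropy, can be sketched in a few lines of Python; the candidate names and predictive distributions below are hypothetical illustrations, not part of the thesis.

        import numpy as np

        def shannon_entropy(p):
            """Entropy (in bits) of a discrete predictive distribution."""
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # Hypothetical predictive distributions over 4 discretised outcomes
        # for three candidate measurement locations of the robotic arm.
        candidates = {
            "A": np.array([0.70, 0.20, 0.05, 0.05]),   # outcome nearly certain
            "B": np.array([0.25, 0.25, 0.25, 0.25]),   # maximally uncertain
            "C": np.array([0.40, 0.40, 0.10, 0.10]),
        }

        # Choose the experiment whose expected results have maximum entropy,
        # i.e. the one expected to be most informative on average.
        best = max(candidates, key=lambda k: shannon_entropy(candidates[k]))
        print("select experiment:", best)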

  5. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  6. Preparation of photocatalytic ZnO nanoparticles and application in photochemical degradation of betamethasone sodium phosphate using taguchi approach

    NASA Astrophysics Data System (ADS)

    Giahi, M.; Farajpour, G.; Taghavi, H.; Shokri, S.

    2014-07-01

    In this study, ZnO nanoparticles were prepared by a sol-gel method for the first time. The Taguchi method was used to identify several factors that may affect the degradation percentage of betamethasone sodium phosphate in wastewater in a UV/K2S2O8/nano-ZnO system. The experimental design consisted of testing five factors, i.e., the dosage of K2S2O8, the concentration of betamethasone sodium phosphate, the amount of ZnO, the irradiation time and the initial pH, with four levels of each factor tested. It was found that the optimum parameters are an irradiation time of 180 min; pH 9.0; betamethasone sodium phosphate, 30 mg/L; amount of ZnO, 13 mg; and K2S2O8, 1 mM. The percentage contribution of each factor was determined by analysis of variance (ANOVA). The results showed that the irradiation time, pH, amount of ZnO, drug concentration and dosage of K2S2O8 contributed 46.73, 28.56, 11.56, 6.70 and 6.44%, respectively. Finally, the kinetics of the process was studied, and the photodegradation rate of betamethasone sodium phosphate was found to obey a pseudo-first-order kinetics equation represented by the Langmuir-Hinshelwood model.
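
    The Langmuir-Hinshelwood form mentioned above, and its reduction to the pseudo-first-order law at low concentration, can be written as (generic notation, not the paper's):

        \[
        r = -\frac{dC}{dt} = \frac{k_r\,K\,C}{1 + K C},
        \qquad
        KC \ll 1 \;\Rightarrow\; -\frac{dC}{dt} \approx k_r K\,C = k_{\mathrm{app}}\,C ,
        \]

    which is why the observed photodegradation follows pseudo-first-order kinetics with an apparent rate constant k_app = k_r K.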

  7. Effect of olive mill waste addition on the properties of porous fired clay bricks using Taguchi method.

    PubMed

    Sutcu, Mucahit; Ozturk, Savas; Yalamac, Emre; Gencel, Osman

    2016-10-01

    The production of porous clay bricks lightened by adding olive mill waste as a pore-making additive was investigated. Factors influencing the brick manufacturing process were analyzed by an experimental design, the Taguchi method, to find the most favorable conditions for the production of bricks. The optimum process conditions for brick preparation were investigated by studying the effects of mixture ratios (0, 5 and 10 wt%) and firing temperatures (850, 950 and 1050 °C) on the physical, thermal and mechanical properties of the bricks. The apparent density, bulk density, apparent porosity, water absorption, compressive strength, thermal conductivity, microstructure and crystalline phase formation of the fired brick samples were measured. It was found that 10% waste addition reduced the bulk density of the samples to 1.45 g/cm(3). As the porosity increased from 30.8 to 47.0%, the compressive strength decreased from 36.9 to 10.26 MPa at a firing temperature of 950 °C. The thermal conductivities of samples fired at the same temperature showed a decrease of 31%, from 0.638 to 0.436 W/mK, which is promising for heat insulation in buildings. Increasing the firing temperature also affected the mechanical and physical properties. This study showed that olive mill waste can be used as a pore maker in brick production. PMID:27343435

  9. Enhancement of process capability for strip force of tight sets of optical fiber using Taguchi's Quality Engineering

    NASA Astrophysics Data System (ADS)

    Lin, Wen-Tsann; Wang, Shen-Tsu; Li, Meng-Hua; Huang, Chiao-Tzu

    2012-03-01

    Strip force is the key to identifying product quality during the manufacturing of tight sets of fiber. This study used Integrated computer-aided manufacturing DEFinition 0 (IDEF0) modeling to examine the detailed cladding processes of tight sets of fiber in transnational optical connector manufacturing. The results showed that the key factor causing an unstable interface connection is the extruder adjustment process. The factors causing improper strip force were analyzed through literature, practice, and gray relational analysis. The parameter design method of Taguchi's Quality Engineering was used to determine the optimal experimental combinations for processes of tight sets of fiber. This study employed case empirical analysis to obtain a model for improving the strip-force process of tight sets of fiber, and determined the correlated factors that affect process quality for tight sets of fiber. The findings indicated that the process capability index (CPK) increased significantly, which can facilitate improvement of product process capability and quality. The empirical results can serve as a reference for improving product quality in the optical fiber industry.

  10. Identification of Dysfunctional Cooperative Learning Teams Using Taguchi Quality Indexes

    ERIC Educational Resources Information Center

    Hsiung, Chin-Min

    2011-01-01

    In this study, dysfunctional cooperative learning teams are identified by comparing the Taguchi "larger-the-better" quality index for the academic achievement of students in a cooperative learning condition with that of students in an individualistic learning condition. In performing the experiments, 42 sophomore mechanical engineering students…

  11. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance on classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances, and the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ to increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
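    A minimal sketch of the general SVM-RFE plus parameter-search workflow described above, using scikit-learn on synthetic data; the feature counts, the grid values, and the plain grid search standing in for the Taguchi-style optimization of C and γ are assumptions made only for illustration.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    # Synthetic multiclass data loosely shaped like the Dermatology problem.
    X, y = make_classification(n_samples=300, n_features=33, n_informative=10,
                               n_classes=4, n_clusters_per_class=1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Step 1: recursive feature elimination with a linear SVM.
    rfe = RFE(estimator=SVC(kernel="linear", C=1.0), n_features_to_select=15, step=1)
    rfe.fit(X_train, y_train)

    # Step 2: tune C and gamma for an RBF SVM on the retained features
    # (a plain grid here, standing in for the Taguchi-style parameter search).
    grid = GridSearchCV(SVC(kernel="rbf"),
                        param_grid={"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]},
                        cv=5)
    grid.fit(rfe.transform(X_train), y_train)

    print("selected features:", list(rfe.get_support(indices=True)))
    print("best C / gamma:", grid.best_params_)
    print("test accuracy:", grid.score(rfe.transform(X_test), y_test))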

  12. Dysprosium sorption by polymeric composite bead: robust parametric optimization using Taguchi method.

    PubMed

    Yadav, Kartikey K; Dasgupta, Kinshuk; Singh, Dhruva K; Varshney, Lalit; Singh, Harvinderpal

    2015-03-01

    Polyethersulfone-based beads encapsulating di-2-ethylhexyl phosphoric acid have been synthesized and evaluated for the recovery of rare earth values from aqueous media. The percentage recovery and the sorption behavior of Dy(III) have been investigated under a wide range of experimental parameters using these beads. The Taguchi method utilizing an L18 orthogonal array has been adopted to identify the most influential process parameters responsible for a higher degree of recovery with enhanced sorption of Dy(III) from chloride medium. Analysis of variance indicated that the feed concentration of Dy(III) is the most influential factor for equilibrium sorption capacity, whereas aqueous phase acidity influences the percentage recovery most. The presence of polyvinyl alcohol and multiwalled carbon nanotubes modified the internal structure of the composite beads and resulted in uniform distribution of the organic extractant inside the polymeric matrix. The experiment performed under the optimum process conditions predicted by the Taguchi method resulted in enhanced Dy(III) recovery and sorption capacity by the polymeric beads with minimum standard deviation.

  14. Experimental design in analytical chemistry--part II: applications.

    PubMed

    Ebrahimi-Najafabadi, Heshmatollah; Leardi, Riccardo; Jalali-Heravi, Mehdi

    2014-01-01

    This paper reviews the applications of experimental design to optimize some analytical chemistry techniques such as extraction, chromatography separation, capillary electrophoresis, spectroscopy, and electroanalytical methods.

  15. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  16. Statistical Experimental Design Guided Optimization of a One-Pot Biphasic Multienzyme Total Synthesis of Amorpha-4,11-diene

    PubMed Central

    Chen, Xixian; Zhang, Congqiang; Zou, Ruiyang; Zhou, Kang; Stephanopoulos, Gregory; Too, Heng Phon

    2013-01-01

    In vitro synthesis of chemicals and pharmaceuticals using enzymes is of considerable interest as these biocatalysts facilitate a wide variety of reactions under mild conditions with excellent regio-, chemo- and stereoselectivities. A significant challenge in a multi-enzymatic reaction is the need to optimize the various steps involved simultaneously so as to obtain a high yield of product. In this study, statistical experimental design was used to guide the optimization of a total synthesis of amorpha-4,11-diene (AD) using multienzymes in the mevalonate pathway. A combinatorial approach guided by Taguchi orthogonal array design identified the local optimum enzymatic activity ratio for Erg12:Erg8:Erg19:Idi:IspA to be 100∶100∶1∶25∶5, with a constant concentration of amorpha-4,11-diene synthase (Ads, 100 mg/L). The model also identified an unexpected inhibitory effect of farnesyl pyrophosphate synthase (IspA), where the activity was negatively correlated with AD yield. This was due to the precipitation of farnesyl pyrophosphate (FPP), the product of IspA. Response surface methodology was then used to optimize IspA and Ads activities simultaneously so as to minimize the accumulation of FPP, and the results showed Ads to be a critical factor. By increasing the concentration of Ads, a complete conversion (∼100%) of mevalonic acid (MVA) to AD was achieved. Monovalent ions and pH were effective means of enhancing the specific Ads activity and specific AD yield significantly. The results from this study represent the first in vitro reconstitution of the mevalonate pathway for the production of an isoprenoid and the approaches developed herein may be used to produce other isopentenyl pyrophosphate (IPP)/dimethylallyl pyrophosphate (DMAPP) based products. PMID:24278153

  17. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  18. Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by Taguchi grey relational theory

    NASA Astrophysics Data System (ADS)

    Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj

    2015-05-01

    The electric discharge drill machine (EDDM) uses a spark erosion process to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. For performance evaluation of the electric discharge drilling machine, it is necessary to study the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generated output responses such as tool wear rate (TWR). Parameters such as pulse on-time, pulse off-time and water pressure were studied to obtain the best machining characteristics. This investigation presents the use of the Taguchi approach for better TWR in drilling of Al-7075. A plan of experiments, based on the L27 Taguchi design method, was selected for drilling of the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 in EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.

  19. Multiresponse Optimization of Laser Cladding Steel + VC Using Grey Relational Analysis in the Taguchi Method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Kovacevic, Radovan

    2016-07-01

    Laser cladding of metal matrix composite coatings (MMCs) has become an effective and economic method to improve the wear resistance of mechanical components. The clad quality characteristics such as clad height, carbide fraction, carbide dissolution, and matrix hardness in MMCs determine the wear resistance of the coatings. These clad quality characteristics are influenced greatly by the laser cladding processing parameters. In this study, American Iron and Steel Institute (AISI) 420 + 20% vanadium carbide (VC) was deposited on mild steel with a high-power direct diode laser. The Taguchi-based Grey relational method was used to optimize the laser cladding processing parameters (laser power, scanning speed, and powder feed rate) with the consideration of multiple clad characteristics related to wear resistance (clad height, carbide volume fraction, and Fe-matrix hardness). A Taguchi L9 orthogonal array was designed to study the effects of processing parameters on each response. The contribution and significance of each processing parameter on each clad characteristic were investigated by the analysis of variance (ANOVA). The Grey relational grade acquired from Grey relational analysis was used as the performance characteristic to obtain the optimal combination of processing parameters. Based on the optimal processing parameters, the phases and microstructure of the laser-cladded coating were characterized by using x-ray diffraction (XRD) and scanning electron microscopy (SEM) with energy-dispersive spectroscopy (EDS).
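    As a rough illustration of how a Grey relational grade can fold several clad-quality responses into a single ranking, the sketch below uses invented response values and treats all three responses as larger-the-better; it is not the paper's data or its exact procedure.

    import numpy as np

    # Three responses per run (invented), loosely mimicking clad height,
    # carbide volume fraction, and matrix hardness for a Taguchi L9-style experiment.
    responses = np.array([
        [0.8, 12.0, 480.0],
        [1.1, 15.5, 510.0],
        [0.9, 14.0, 495.0],
        [1.3, 13.0, 505.0],
        [1.0, 16.5, 520.0],
        [1.2, 15.0, 500.0],
        [0.7, 11.5, 470.0],
        [1.4, 17.0, 530.0],
        [1.1, 14.5, 515.0],
    ])

    # Larger-the-better normalization to [0, 1] per response column.
    norm = (responses - responses.min(axis=0)) / (responses.max(axis=0) - responses.min(axis=0))

    # Grey relational coefficient with distinguishing coefficient zeta = 0.5.
    delta = 1.0 - norm
    zeta = 0.5
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # Grey relational grade = average coefficient across responses; rank the runs.
    grade = coeff.mean(axis=1)
    print("grades:", np.round(grade, 3))
    print("best run (0-indexed):", int(np.argmax(grade)))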

  20. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  1. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  2. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    ERIC Educational Resources Information Center

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  3. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still a lot of unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV Linac. This rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used in order to evaluate the accuracy of the calculated dose from a commercial planning system. Challenges in this murine model are discussed.

  4. Autism genetics: Methodological issues and experimental design.

    PubMed

    Sacco, Roberto; Lintas, Carla; Persico, Antonio M

    2015-10-01

    Autism is a complex neuropsychiatric disorder of developmental origin, where multiple genetic and environmental factors likely interact resulting in a clinical continuum between "affected" and "unaffected" individuals in the general population. During the last two decades, relevant progress has been made in identifying chromosomal regions and genes in linkage or association with autism, but no single gene has emerged as a major cause of disease in a large number of patients. The purpose of this paper is to discuss specific methodological issues and experimental strategies in autism genetic research, based on fourteen years of experience in patient recruitment and association studies of autism spectrum disorder in Italy.

  5. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  6. Information measures in nonlinear experimental design

    NASA Technical Reports Server (NTRS)

    Niple, E.; Shaw, J. H.

    1980-01-01

    Some different approaches to the problem of designing experiments which estimate the parameters of nonlinear models are discussed. The assumption in these approaches that the information in a set of data can be represented by a scalar is criticized, and the nonscalar discrimination information is proposed as the proper measure to use. The two-step decay example in Box and Lucas (1959) is used to illustrate the main points of the discussion.

  7. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.

  8. Relation between experimental and non-experimental study designs. HB vaccines: a case study

    PubMed Central

    Jefferson, T.; Demicheli, V.

    1999-01-01

    STUDY OBJECTIVE: To examine the relation between experimental and non- experimental study design in vaccinology. DESIGN: Assessment of each study design's capability of testing four aspects of vaccine performance, namely immunogenicity (the capacity to stimulate the immune system), duration of immunity conferred, incidence and seriousness of side effects, and number of infections prevented by vaccination. SETTING: Experimental and non-experimental studies on hepatitis B (HB) vaccines in the Cochrane Vaccines Field Database. RESULTS: Experimental and non-experimental vaccine study designs are frequently complementary but some aspects of vaccine quality can only be assessed by one of the types of study. More work needs to be done on the relation between study quality and its significance in terms of effect size.   PMID:10326054

  9. Collimator design for experimental minibeam radiation therapy

    SciTech Connect

    Babcock, Kerry; Sidhu, Narinder; Kundapur, Vijayananda; Ali, Kaiser

    2011-04-15

    Purpose: To design and optimize a minibeam collimator for minibeam radiation therapy studies using a 250 kVp x-ray machine as a simulated synchrotron source. Methods: A Philips RT250 orthovoltage x-ray machine was modeled using the EGSnrc/BEAMnrc Monte Carlo software. The resulting machine model was coupled to a model of a minibeam collimator with a beam aperture of 1 mm. Interaperture spacing and collimator thickness were varied to produce a minibeam with the desired peak-to-valley ratio. Results: Proper design of a minibeam collimator with Monte Carlo methods requires detailed knowledge of the x-ray source setup. For a cathode-ray tube source, the beam spot size, target angle, and source shielding all determine the final valley-to-peak dose ratio. Conclusions: A minibeam collimator setup was created, which can deliver a 30 Gy peak dose minibeam radiation therapy treatment at depths less than 1 cm with a valley-to-peak dose ratio on the order of 23%.

  10. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    PubMed

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    This study presents the simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed effect models and provide a generally optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8 including commonly encountered factors during experimentation (residual error, between experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared to the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
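    The sketch below illustrates the underlying D-optimality idea in a generic way (it is not PopED and not the authors' binding models): candidate sets of sampling times for a simple association model are compared by the determinant of the Fisher information matrix evaluated at assumed nominal parameter values.

    import itertools
    import numpy as np

    A_nom, k_nom = 100.0, 0.3          # assumed nominal binding amplitude and rate

    def fisher_information(times):
        # J^T J for parameters (A, k) of y(t) = A*(1 - exp(-k*t)),
        # assuming constant measurement noise.
        t = np.asarray(times, dtype=float)
        dA = 1.0 - np.exp(-k_nom * t)          # dy/dA
        dk = A_nom * t * np.exp(-k_nom * t)    # dy/dk
        J = np.column_stack([dA, dk])
        return J.T @ J

    # Pick the 4-point design with the largest determinant of the information matrix.
    candidate_times = np.linspace(0.5, 20.0, 12)
    best = max(itertools.combinations(candidate_times, 4),
               key=lambda ts: np.linalg.det(fisher_information(ts)))
    print("D-optimal 4-point design:", np.round(best, 2))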

  11. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  12. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  13. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  14. Analysis of spinal lumbar interbody fusion cage subsidence using Taguchi method, finite element analysis, and artificial neural network

    NASA Astrophysics Data System (ADS)

    Nassau, Christopher John; Litofsky, N. Scott; Lin, Yuyi

    2012-09-01

    Subsidence, when implant penetration induces failure of the vertebral body, occurs commonly after spinal reconstruction. Anterior lumbar interbody fusion (ALIF) cages may subside into the vertebral body and lead to kyphotic deformity. No previous studies have utilized an artificial neural network (ANN) for the design of a spinal interbody fusion cage. In this study, the neural network was applied after initiation from a Taguchi L18 orthogonal design array. Three-dimensional finite element analysis (FEA) was performed to address the resistance to subsidence based on the design changes of the material and cage contact region, including design of the ridges and size of the graft area. The calculated subsidence is derived from the ANN objective function which is defined as the resulting maximum von Mises stress (VMS) on the surface of a simulated bone body after axial compressive loading. The ANN was found to have minimized the bone surface VMS, thereby optimizing the ALIF cage given the design space. Therefore, the Taguchi-FEA-ANN approach can serve as an effective procedure for designing a spinal fusion cage and improving the biomechanical properties.

  15. Nitric acid treated multi-walled carbon nanotubes optimized by Taguchi method

    NASA Astrophysics Data System (ADS)

    Shamsuddin, Shahidah Arina; Derman, Mohd Nazree; Hashim, Uda; Kashif, Muhammad; Adam, Tijjani; Halim, Nur Hamidah Abdul; Tahir, Muhammad Faheem Mohd

    2016-07-01

    The electron transfer rate (ETR) of CNTs can be enhanced by increasing the amount of COOH groups on their walls and opened tips. With the aim of achieving the highest production of COOH, Taguchi robust design has been used for the first time to optimize the surface modification of MWCNTs by nitric acid oxidation. Three main oxidation parameters, namely acid concentration, treatment temperature and treatment time, were selected as the control factors to be optimized. The amounts of COOH produced were measured by FTIR spectroscopy through the absorbance intensity. From the analysis, we found that acid concentration and treatment time had the most important influence on the production of COOH, while the treatment temperature had only an intermediate effect. The optimum amount of COOH can be achieved with treatment by 8.0 M nitric acid at 120 °C for 2 hours.

  16. Process improvement in laser hot wire cladding for martensitic stainless steel based on the Taguchi method

    NASA Astrophysics Data System (ADS)

    Huang, Zilin; Wang, Gang; Wei, Shaopeng; Li, Changhong; Rong, Yiming

    2016-09-01

    Laser hot wire cladding, with the prominent features of low heat input, high energy efficiency, and high precision, is widely used for remanufacturing metal parts. The cladding process, however, needs to be improved by using a quantitative method. In this work, volumetric defect ratio was proposed as the criterion to describe the integrity of forming quality for cladding layers. Laser deposition experiments with FV520B, a martensitic stainless steel, were designed by using the Taguchi method. Four process variables, namely, laser power (P), scanning speed (Vs), wire feed rate (Vf), and wire current (I), were optimized based on the analysis of the signal-to-noise (S/N) ratio. Metallurgical observation of the cladding layer was conducted to compare the forming quality and to validate the analysis method. A stable and continuous process with the optimum parameter combination produced a uniform microstructure with minimal defects and cracks, which resulted in a good metallurgical bonding interface.

  17. Study of Titanium Alloy Sheet During H-sectioned Rolling Forming Using the Taguchi Method

    SciTech Connect

    Chen, D.-C.; Gu, W.-S.; Hwang, Y.-M.

    2007-05-17

    This study employs commercial DEFORM three-dimensional finite element code to investigate the plastic deformation behavior of Ti-6Al-4V titanium alloy sheet during the H-sectioned rolling process. The simulations are based on a rigid-plastic model and assume that the upper and lower rolls are rigid bodies and that the temperature rise induced during rolling is sufficiently small that it can be ignored. The effects of the roll profile, the friction factor between the rolls and the titanium alloy, the rolling temperature and the roll radii on the rolling force, the roll torque and the effective strain induced in the rolled product are examined. The Taguchi method is employed to optimize the H-sectioned rolling process parameters. The results confirm the effectiveness of this robust design methodology in optimizing the H-sectioned rolling process parameters for the current Ti-6Al-4V titanium alloy.

  19. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  20. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  1. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate concentration from each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
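    The following is a rough numerical analog of the multiplex fit described above, with an invented design matrix and invented titrant volumes; the per-sample concentrations are recovered by ordinary least squares, much as the spreadsheet LINEST fit would do.

    import numpy as np

    # Rows = titration runs, columns = mL of each vinegar sample included in that run
    # (invented layout: three pure runs and three pairwise mixtures).
    design = np.array([
        [10.0,  0.0,  0.0],
        [ 0.0, 10.0,  0.0],
        [ 0.0,  0.0, 10.0],
        [ 5.0,  5.0,  0.0],
        [ 5.0,  0.0,  5.0],
        [ 0.0,  5.0,  5.0],
    ])
    naoh_molarity = 0.1
    titrant_volume_mL = np.array([8.3, 9.1, 7.6, 8.8, 8.0, 8.4])   # invented endpoints

    # Moles of NaOH consumed per run = design @ concentrations (mol per mL of acetic acid),
    # so the concentrations follow from a linear least-squares fit.
    moles_base = naoh_molarity * titrant_volume_mL / 1000.0
    conc, residuals, rank, _ = np.linalg.lstsq(design, moles_base, rcond=None)
    print("acetic acid concentration (mol/L) per sample:", np.round(conc * 1000.0, 3))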

  2. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  3. Characterizing the Experimental Procedure in Science Laboratories: A Preliminary Step towards Students Experimental Design

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire

    2012-01-01

    Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that if students design all or parts of entire experiment, as part of an inquiry-based approach, it would overcome certain difficulties. It requires that a procedure be written for experimental design. The aim of this paper is to…

  4. Parametric study of the biopotential equation for breast tumour identification using ANOVA and Taguchi method.

    PubMed

    Ng, Eddie Y K; Ng, W Kee

    2006-03-01

    Extensive literature has shown a significant trend of progressive electrical changes according to the proliferative characteristics of breast epithelial cells. Physiologists have further postulated that malignant transformation results from sustained depolarization and a failure of the cell to repolarize after cell division, making the area where cancer develops relatively depolarized when compared to non-dividing or resting counterparts. In this paper, we present a new approach, the Biofield Diagnostic System (BDS), which might have the potential to augment the process of diagnosing breast cancer. This technique was based on the efficacy of analysing skin surface electrical potentials for the differential diagnosis of breast abnormalities. We developed a female breast model, close to the actual anatomy, by considering the breast as a hemisphere in the supine condition with various layers of unequal thickness. Isotropic homogeneous conductivity was assigned to each of these compartments and the volume conductor problem was solved using the finite element method to determine the potential distribution developed due to a dipole source. Furthermore, four important parameters were identified and analysis of variance (ANOVA, Yates' method) was performed using a 2^n factorial design (n = number of parameters, 4). The effect and importance of these parameters were analysed. The Taguchi method was further used to optimise the parameters in order to ensure that the signal from the tumour is maximum as compared to the noise from other factors. The Taguchi method proved that the probes' source strength, tumour size and location of the tumours have a great effect on the surface potential field. For best results on the breast surface, while having the biggest possible tumour size, low amplitudes of current should be applied nearest to the breast surface.
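    For readers unfamiliar with the Yates' method mentioned in this abstract, the sketch below runs the standard algorithm on a made-up single-replicate 2^4 factorial; it is illustrative only and unrelated to the paper's breast-model parameters.

    import numpy as np

    def yates(responses):
        # Yates' algorithm: responses must be in standard (Yates) order,
        # e.g. (1), a, b, ab, c, ac, bc, abc, d, ad, ... for n factors.
        y = np.asarray(responses, dtype=float)
        n = int(np.log2(len(y)))
        assert 2 ** n == len(y), "length must be a power of two"
        for _ in range(n):
            pairs = y.reshape(-1, 2)
            # First half: sums of adjacent pairs; second half: differences.
            y = np.concatenate([pairs.sum(axis=1), pairs[:, 1] - pairs[:, 0]])
        contrasts = y                          # first element is the grand total
        effects = contrasts / 2 ** (n - 1)     # single replicate assumed
        sums_of_squares = contrasts ** 2 / 2 ** n
        return contrasts, effects, sums_of_squares

    # Invented responses for a 2^4 factorial in standard order.
    responses = [45, 71, 48, 65, 68, 60, 80, 65, 43, 100, 45, 104, 75, 86, 70, 96]
    contrasts, effects, ss = yates(responses)
    print("effects (excluding the total):", np.round(effects[1:], 2))
    print("sums of squares:", np.round(ss[1:], 1))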

  5. Refinement of experimental design and conduct in laboratory animal research.

    PubMed

    Bailoo, Jeremy D; Reichlin, Thomas S; Würbel, Hanno

    2014-01-01

    The scientific literature of laboratory animal research is replete with papers reporting poor reproducibility of results as well as failure to translate results to clinical trials in humans. This may stem in part from poor experimental design and conduct of animal experiments. Despite widespread recognition of these problems and implementation of guidelines to attenuate them, a review of the literature suggests that experimental design and conduct of laboratory animal research are still in need of refinement. This paper will review and discuss possible sources of biases, highlight advantages and limitations of strategies proposed to alleviate them, and provide a conceptual framework for improving the reproducibility of laboratory animal research.

  6. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiment and the wide dynamic range of transcription it measures make it an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity in experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER. PMID:27008024

  7. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  8. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  9. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  10. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  11. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  12. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. Results We performed the initial stages of characterizing our system

  13. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations on the new TBCs are conducted to confirm the new TBCs’ properties. Southern University is the subcontractor on this project with a focus on the computational simulation method development. We have applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  14. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  15. Optimization of low-frequency low-intensity ultrasound-mediated microvessel disruption on prostate cancer xenografts in nude mice using an orthogonal experimental design

    PubMed Central

    YANG, YU; BAI, WENKUN; CHEN, YINI; LIN, YANDUAN; HU, BING

    2015-01-01

    The present study aimed to provide a complete exploration of the effect of sound intensity, frequency, duty cycle, microbubble volume and irradiation time on low-frequency low-intensity ultrasound (US)-mediated microvessel disruption, and to identify an optimal combination of the five factors that maximizes the blockage effect. An orthogonal experimental design approach was used. Enhanced US imaging and acoustic quantification were performed to assess tumor blood perfusion. In the confirmatory test, in addition to acoustic quantification, the specimens of the tumor were stained with hematoxylin and eosin and observed using light microscopy. The results revealed that sound intensity, frequency, duty cycle, microbubble volume and irradiation time had a significant effect on the average peak intensity (API). The extent of the impact of the variables on the API was in the following order: sound intensity, frequency, duty cycle, microbubble volume, and irradiation time. The optimum conditions were found to be as follows: sound intensity, 1.00 W/cm2; frequency, 20 Hz; duty cycle, 40%; microbubble volume, 0.20 ml; and irradiation time, 3 min. In the confirmatory test, the API was 19.97±2.66 immediately subsequent to treatment, and histological examination revealed signs of tumor blood vessel injury in the optimum parameter combination group. In conclusion, the Taguchi L18 (3^6) orthogonal array design was successfully applied for determining the optimal parameter combination of API following treatment. Under the optimum orthogonal design condition, a minimum API of 19.97±2.66 subsequent to low-frequency, low-intensity US-mediated blood perfusion blockage was obtained. PMID:26722279
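    As a simplified illustration of reading an optimal factor-level combination off an orthogonal-array experiment, the sketch below averages an invented perfusion-like response at each level of each factor and keeps the level with the lowest mean (smaller is better here, mirroring the minimum-API objective above); the tiny two-level, three-factor layout is not the paper's L18 (3^6) design.

    import numpy as np

    # Invented L4 layout: 4 runs, 3 factors, 2 levels each.
    levels = np.array([
        [0, 0, 0],
        [0, 1, 1],
        [1, 0, 1],
        [1, 1, 0],
    ])
    response = np.array([14.2, 17.8, 21.5, 19.9])   # invented API-like values per run

    optimum = []
    for j in range(levels.shape[1]):
        # Mean response at each level of factor j; keep the level with the lowest mean.
        level_means = [response[levels[:, j] == lv].mean() for lv in np.unique(levels[:, j])]
        optimum.append(int(np.argmin(level_means)))
    print("optimal level index per factor:", optimum)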

  16. Design and experimental evaluation of compact radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Fredmonski, A. J.; Huber, F. W.; Roelke, R. J.; Simonyi, S.

    1991-01-01

    The application of a multistage 3D Euler solver to the aerodynamic design of two compact radial-inflow turbines is presented, along with experimental results evaluating and validating the designs. The objectives of the program were to design, fabricate, and rig test compact radial-inflow turbines with equal or better efficiency relative to conventional designs, while having 40 percent less rotor length than current traditionally-sized radial turbines. The approach to achieving these objectives was to apply a calibrated 3D multistage Euler code to accurately predict and control the high rotor flow passage velocities and high aerodynamic loadings resulting from the reduction in rotor length. A comparison of the advanced compact designs to current state-of-the-art configurations is presented.

  17. Application of Taguchi technique coupled with grey relational analysis for multiple performance characteristics optimization of EDM parameters on ST 42 steel

    NASA Astrophysics Data System (ADS)

    Prayogo, Galang Sandy; Lusi, Nuraini

    2016-04-01

    This study presents a technique for optimizing the machining parameters of the non-conventional EDM machining process, considering multiple performance characteristics, using the Taguchi method combined with grey relational analysis (GRA). ST 42 steel was chosen as the workpiece material and graphite as the electrode during this experiment. Performance characteristics such as material removal rate and overcut were selected to evaluate the effect of the machining parameters. Current, pulse on time, pulse off time and discharging time/Z down were selected as machining parameters. The experiments were conducted by varying these machining parameters over three levels. Based on the Taguchi quality design concept, an L27 orthogonal array table was chosen for the experiments. By using the combination of GRA and Taguchi, the optimization of complicated multiple performance characteristics was transformed into the optimization of a single response performance index. Optimal levels of the machining parameters were identified by using the Grey Relational Analysis method. Analysis of variance was used to determine the relatively significant machining parameters. The result of the confirmation test indicated that the determined optimal combination of machining parameters effectively improves the performance characteristics of the EDM machining process on ST 42 steel.

  18. Application of Taguchi approach to optimize the sol-gel process of the quaternary Cu2ZnSnS4 with good optical properties

    NASA Astrophysics Data System (ADS)

    Nkuissi Tchognia, Joël Hervé; Hartiti, Bouchaib; Ridah, Abderraouf; Ndjaka, Jean-Marie; Thevenin, Philippe

    2016-07-01

    The present research deals with the optimal deposition parameter configuration for the synthesis of Cu2ZnSnS4 (CZTS) thin films using the sol-gel method combined with spin coating on ordinary glass substrates without sulfurization. A Taguchi design with an L9 (3^4) orthogonal array, a signal-to-noise (S/N) ratio and an analysis of variance (ANOVA) are used to optimize the performance characteristic (optical band gap) of CZTS thin films. Four deposition parameters (factors), namely the annealing temperature, the annealing time, and the ratios Cu/(Zn + Sn) and Zn/Sn, were chosen. To conduct the tests using the Taguchi method, three levels were chosen for each factor. The effects of the deposition parameters on the structural and optical properties are studied. The most significant factors of the deposition process for the optical properties of the as-prepared films are also determined. Applying the Taguchi method showed that the significant parameters are the Zn/Sn ratio and the annealing temperature.

  19. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on the acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of one given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
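    A toy sketch of the multi-objective genetic-algorithm idea described above (not the authors' algorithm): each candidate survey is a bit-string over possible receiver sites, one objective rewards an invented information score, the other penalizes receiver count, and selection keeps the Pareto non-dominated designs.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sites, pop_size, n_generations = 20, 40, 60
    site_value = rng.random(n_sites)              # hypothetical information value per site

    def objectives(design):
        info = float(site_value[design == 1].sum())   # maximize
        cost = float(design.sum())                    # minimize (number of receivers)
        return info, cost

    def dominates(a, b):
        return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

    def pareto_front(pop):
        scores = [objectives(ind) for ind in pop]
        return [i for i, s in enumerate(scores)
                if not any(dominates(scores[j], s) for j in range(len(pop)) if j != i)]

    population = rng.integers(0, 2, size=(pop_size, n_sites))
    for _ in range(n_generations):
        parents = population[pareto_front(population)]
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_sites)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_sites) < 0.05              # mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.array(children)

    for i in pareto_front(population):
        info, cost = objectives(population[i])
        print(f"receivers: {int(cost):2d}  information score: {info:.2f}")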

  20. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  1. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  2. Design and Implementation of an Experimental Segway Model

    NASA Astrophysics Data System (ADS)

    Younis, Wael; Abdelati, Mohammed

    2009-03-01

    The Segway is the first transportation product to stand, balance, and move in the same way we do. It is a truly 21st-century idea. The aim of this research is to study the theory behind building Segway vehicles, based on the stabilization of an inverted pendulum. An experimental model has been designed and implemented through this study. The model has been tested for its balance by running a Proportional-Derivative (PD) algorithm on a microprocessor chip. The model has been identified in order to serve as an educational experimental platform for Segways.

  3. Optimization of the imaging quality of 64-slice CT acquisition protocol using Taguchi analysis: A phantom study.

    PubMed

    Pan, Lung Fa; Erdene, Erdenetsetseg; Chen, Chun Chi; Pan, Lung Kwang

    2015-01-01

    In this study, the phantom imaging quality of a 64-slice CT acquisition protocol was quantitatively evaluated using Taguchi analysis. An acrylic line-group phantom was designed and assembled with multiple layers of solid water plate to imitate the adult abdomen, and scanned with a Philips Brilliance CT to simulate a clinical examination. According to the Taguchi L8(2^7) orthogonal array, four major factors of the acquisition protocol were optimized: (A) the CT slice thickness, (B) the image reconstruction filter type, (C) the spiral CT pitch, and (D) the matrix size. The reconstructed line-group phantom images were counted by four radiologists in three discrete rounds to obtain the averages and standard deviations of the line counts and the corresponding signal-to-noise ratios (S/N). The quantified S/N values were analyzed, and the optimal combination of the four factor settings was determined to be (A) a 1-mm slice thickness, (B) a sharp filter type, (C) a 1.172 spiral CT pitch, and (D) a 1024×1024 matrix size. The dominant factors were the filter type (B) and the cross interaction between the filter type and the CT slice thickness (A×B). The minor factors were (C) the spiral CT pitch and (D) the matrix size, since neither reached a 95% confidence level in the ANOVA test. PMID:26405931
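
    A minimal sketch of the larger-is-better S/N analysis named in this record is given below, assuming a standard L8(2^7) array and invented line-count data; it is not the study's own analysis and only illustrates how factor main effects are read off an orthogonal array.

    ```python
    # Minimal sketch of a Taguchi L8(2^7) analysis with larger-is-better S/N ratios.
    # The line-count data are made up for illustration; only the four factor
    # columns (A, B, C, D) from the abstract are used.
    import numpy as np

    # Standard L8 orthogonal array (levels coded 0/1), one factor per column.
    L8 = np.array([
        [0,0,0,0,0,0,0],
        [0,0,0,1,1,1,1],
        [0,1,1,0,0,1,1],
        [0,1,1,1,1,0,0],
        [1,0,1,0,1,0,1],
        [1,0,1,1,0,1,0],
        [1,1,0,0,1,1,0],
        [1,1,0,1,0,0,1],
    ])
    factors = {"A slice thickness": 0, "B filter type": 1, "C pitch": 2, "D matrix": 3}

    # hypothetical repeated line counts for each of the 8 runs (3 observers)
    y = np.array([
        [6,6,7],[7,7,8],[8,8,8],[7,8,8],
        [6,7,7],[8,8,9],[7,7,7],[8,9,9],
    ], dtype=float)

    # larger-is-better signal-to-noise ratio for each run
    sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

    for name, col in factors.items():
        effect = sn[L8[:, col] == 1].mean() - sn[L8[:, col] == 0].mean()
        print(f"{name}: mean S/N difference between levels = {effect:+.2f} dB")
    ```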

  4. Development of a cell formation heuristic by considering realistic data using principal component analysis and Taguchi's method

    NASA Astrophysics Data System (ADS)

    Kumar, Shailendra; Sharma, Rajiv Kumar

    2015-12-01

    Over the last four decades numerous cell formation algorithms have been developed and tested, yet this research remains of interest to this day. Appropriate formation of manufacturing cells is the first step in designing a cellular manufacturing system. In cellular manufacturing, consideration of manufacturing flexibility and production-related data is vital for cell formation. Considering such realistic data makes the cell formation problem very complex and tedious, and it has led to the invention and implementation of highly advanced and complex cell formation methods. In this paper an effort has been made to develop a simple, easy-to-understand and easy-to-implement manufacturing cell formation heuristic with consideration of a number of production and manufacturing flexibility-related parameters. The heuristic minimizes inter-cellular movement cost/time. Further, the proposed heuristic is modified for the application of principal component analysis and Taguchi's method. A numerical example is presented to illustrate the approach. A refinement in the results is observed with the adoption of principal component analysis and Taguchi's method.

  5. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter-boom interferometer designed to operate in the far infrared (30-90 micron) on a high-altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability, with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  6. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  7. Experimental design principles for isotopically instationary 13C labeling experiments.

    PubMed

    Nöh, Katharina; Wiechert, Wolfgang

    2006-06-01

    13C metabolic flux analysis (MFA) is a well-established tool in metabolic engineering that has found numerous applications in recent years. However, one strong limitation of the current method is the requirement of an (at least approximate) isotopic stationary state at sampling time. This requirement leads, in principle, to a lower limit on the duration of a 13C labeling experiment. A new methodological development is based on repeated sampling during the instationary transient of the 13C labeling dynamics. The statistical and computational treatment of such instationary experiments is completely new terrain. The computational effort is very high because large differential equation systems have to be solved and, moreover, the intracellular pool sizes play a significant role. For this reason, the present contribution works out principles and strategies for the experimental design of instationary experiments based on a simple example network. The potential of isotopically instationary experiments is thereby investigated in detail. Various statistical results on instationary flux identifiability are presented and possible pitfalls of experimental design are discussed. Finally, a framework for almost optimal experimental design of isotopically instationary experiments is proposed, providing a practical guideline for the analysis of large-scale networks.

  8. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  9. Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using Taguchi method.

    PubMed

    Chen, Zhengsuo; Deng, Hongbo; Chen, Can; Yang, Ying; Xu, Heng

    2014-01-01

    Dyes released into the environment pose a serious threat to natural ecosystems and aquatic life because they remain stable under heat, light, chemical, and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is meaningful work for environmental protection and the maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The Taguchi method was used to reduce the number of experiments needed to determine the significance of factors and the optimum levels of the experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino, and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered an alternative biosorbent for the removal of dyes from aqueous solutions.

  10. Biosorption of malachite green from aqueous solutions by Pleurotus ostreatus using Taguchi method

    PubMed Central

    2014-01-01

    Dyes released into the environment pose a serious threat to natural ecosystems and aquatic life because they remain stable under heat, light, chemical, and other exposures. In this study, Pleurotus ostreatus (a macro-fungus) was used as a new biosorbent to study the biosorption of hazardous malachite green (MG) from aqueous solutions. The effective disposal of P. ostreatus is meaningful work for environmental protection and the maximum utilization of agricultural residues. The operational parameters such as biosorbent dose, pH, and ionic strength were investigated in a series of batch studies at 25°C. The Freundlich isotherm model described the biosorption equilibrium data well. The biosorption process followed the pseudo-second-order kinetic model. The Taguchi method was used to reduce the number of experiments needed to determine the significance of factors and the optimum levels of the experimental factors for MG biosorption. Biosorbent dose and initial MG concentration had significant influences on the percent removal and biosorption capacity. The highest percent removal reached 89.58% and the largest biosorption capacity reached 32.33 mg/g. Fourier transform infrared spectroscopy (FTIR) showed that functional groups such as carboxyl, hydroxyl, amino, and phosphonate groups on the biosorbent surface could be the potential adsorption sites for MG biosorption. P. ostreatus can be considered an alternative biosorbent for the removal of dyes from aqueous solutions. PMID:24620852

  11. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  12. Constrained Response Surface Optimisation and Taguchi Methods for Precisely Atomising Spraying Process

    NASA Astrophysics Data System (ADS)

    Luangpaiboon, P.; Suwankham, Y.; Homrossukon, S.

    2010-10-01

    This research presents the development of a design-of-experiments technique for quality improvement in the automotive manufacturing industry. The quality characteristic of interest is the colour shade, one of the key exterior appearance features of the vehicles. With a low percentage of first-time quality, the manufacturer has spent considerable cost on rework as well as longer production time. To resolve this problem permanently, the spraying conditions should be precisely optimized. Therefore, this work applies full factorial design, multiple regression, the constrained response surface optimization method (CRSOM), and Taguchi's method to investigate the significant factors and to determine the optimum factor levels in order to improve the quality of the paint shop. First, a 2^k full factorial design was employed to study the effect of five factors: the paint flow rate at the robot setting, the paint levelling agent, the paint pigment, the additive slow solvent, and the non-volatile solids at spraying of the atomizing spraying machine. The response values of colour shade at 15 and 45 degrees were measured using a spectrophotometer. Then the regression models of colour shade at both angles were developed from the significant factors affecting each response. Consequently, both regression models were placed into the form of a linear programme to maximize the colour shade subject to three main factors: the pigment, the additive solvent, and the flow rate. Finally, Taguchi's method was applied to determine the proper levels of the key variable factors to achieve the target mean value of colour shade; the non-volatile solids content was found to be one additional factor at this stage. Consequently, the proper levels of all factors from both experimental design methods were used to set up a confirmation experiment. It was found that the colour shades, at both the 15 and 45 degree measurement angles of the spectrophotometer, were close to the target and the defective at
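
    A hedged sketch of the constrained response surface step described in this record follows. The regression coefficients, factor ranges, and the 45-degree target are all invented; the point is only how two fitted shade models can be pushed through a linear programme over the three key factors.

    ```python
    # Sketch of constrained response surface optimisation: two fitted linear
    # models for colour shade (coefficients invented here) are maximised with a
    # linear programme over the three key factors.
    import numpy as np
    from scipy.optimize import linprog

    # hypothetical regression models: shade = b0 + b.x, x = (pigment, solvent, flow rate)
    b15 = np.array([0.8, 0.3, -0.2]);  b0_15 = 70.0
    b45 = np.array([0.5, 0.4, -0.1]);  b0_45 = 60.0

    # maximise shade15 + shade45  ->  minimise the negative combined slope
    c = -(b15 + b45)
    bounds = [(0.0, 1.0)] * 3          # factors coded to the 0..1 experimental range

    # keep shade45 at or above a target, written as -b45.x <= b0_45 - target
    target45 = 60.2
    res = linprog(c, A_ub=[-b45], b_ub=[b0_45 - target45], bounds=bounds, method="highs")

    x = res.x
    print("coded factor settings:", x)
    print("predicted shade 15 deg:", b0_15 + b15 @ x)
    print("predicted shade 45 deg:", b0_45 + b45 @ x)
    ```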

  13. Design and experimental study of a novel giant magnetostrictive actuator

    NASA Astrophysics Data System (ADS)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Dongwei; Huang, Yingjie; Xie, Wenqiang

    2016-12-01

    Giant magnetostrictive actuators have been widely used in precise driving applications for their excellent performance. However, in driving a switching valve, especially the ball valve in an electronically controlled injector, the actuator cannot exhibit its full performance because of limits in output displacement and response speed. A novel giant magnetostrictive actuator, which can reach its maximum displacement because no bias magnetic field is applied, is designed in this paper. Simultaneously, elongation of the giant magnetostrictive material is converted into a shortening of the actuator's axial dimension with the help of a T-shaped output rod. Furthermore, to reduce the response time, a driving voltage with a high opening level and a low holding level is designed. Response time and output displacement are studied experimentally with the help of a measuring system. The measured results show that the designed driving voltage improves the response speed of the actuator displacement quite effectively, and that the giant magnetostrictive actuator can output various steady-state displacements to achieve a wider range of driving effects.

  14. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space, and brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but is also more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
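
    The idea of ranking candidate experiments by the Shannon entropy of their predicted outcomes can be shown in a toy sketch (this is not nested entropy sampling itself; the decay model, its parameter samples, and the histogram binning are assumptions for illustration).

    ```python
    # Toy sketch of entropy-based experiment selection: for each candidate
    # measurement time, compute the Shannon entropy of the outcomes predicted
    # by a set of probable models and pick the most informative time.
    import numpy as np

    rng = np.random.default_rng(1)
    rates = rng.uniform(0.2, 2.0, size=200)          # probable models: decay rates
    candidate_times = np.linspace(0.1, 5.0, 25)      # candidate experiments

    def outcome_entropy(t, n_bins=12):
        """Shannon entropy (nats) of the predicted outcome distribution at time t."""
        predictions = np.exp(-rates * t)
        counts, _ = np.histogram(predictions, bins=n_bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log(p))

    entropies = np.array([outcome_entropy(t) for t in candidate_times])
    best = candidate_times[np.argmax(entropies)]
    print(f"most informative measurement time: t = {best:.2f}")
    ```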

  15. Amplified energy harvester from footsteps: design, modeling, and experimental analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Chen, Wusi; Guzman, Plinio; Zuo, Lei

    2014-04-01

    This paper presents the design, modeling and experimental analysis of an amplified footstep energy harvester. With the unique design of amplified piezoelectric stack harvester the kinetic energy generated by footsteps can be effectively captured and converted into usable DC power that could potentially be used to power many electric devices, such as smart phones, sensors, monitoring cameras, etc. This doormat-like energy harvester can be used in crowded places such as train stations, malls, concerts, airport escalator/elevator/stairs entrances, or anywhere large group of people walk. The harvested energy provides an alternative renewable green power to replace power requirement from grids, which run on highly polluting and global-warming-inducing fossil fuels. In this paper, two modeling approaches are compared to calculate power output. The first method is derived from the single degree of freedom (SDOF) constitutive equations, and then a correction factor is applied onto the resulting electromechanically coupled equations of motion. The second approach is to derive the coupled equations of motion with Hamilton's principle and the constitutive equations, and then formulate it with the finite element method (FEM). Experimental testing results are presented to validate modeling approaches. Simulation results from both approaches agree very well with experimental results where percentage errors are 2.09% for FEM and 4.31% for SDOF.

  16. Experimental Verification of Structural-Acoustic Modelling and Design Optimization

    NASA Astrophysics Data System (ADS)

    MARBURG, S.; BEER, H.-J.; GIER, J.; HARDTKE, H.-J.; RENNERT, R.; PERRET, F.

    2002-05-01

    A number of papers have been published on the simulation of structural-acoustic design optimization. However, extensive work is required to verify these results in practical applications. Herein, a steel box of 1.0×1.1×1.5 m with an external beam structure welded onto three surface plates was investigated. This investigation included experimental modal analysis and experimental measurements of certain noise transfer functions (sound pressure at points inside the box due to force excitation at the beam structure). Using these experimental data, the finite element model of the structure was tuned to provide similar results. With a first structural mode at less than 20 Hz, the reliable frequency range was identified as up to about 60 Hz. Obviously, the finite element model could not be further improved by mesh refinement alone. The tuning process is explained in detail, since there were a number of changes that helped to improve the structure, while other changes did not. Although the box might be expected to be a rather simple structure, it can be considered a complex structure for simulation purposes. A defined modification of the physical model verified the simulation model. In a final step, the optimal location of stiffening beam structures was predicted by simulation, and their effect on the noise transfer function was experimentally verified. This paper critically discusses modelling techniques that are applied for structural-acoustic simulation of sedan bodies.

  17. Optimization of microchannel heat sink using genetic algorithm and Taguchi method

    NASA Astrophysics Data System (ADS)

    Singh, Bhanu Pratap; Garg, Harry; Lall, Arun K.

    2016-04-01

    Active cooling using microchannels is a challenging area. The optimization and miniaturization of devices are increasing heat loads and affecting the operating performance of the system. Microchannel-based cooling systems are widely used and overcome most of the limitations of existing solutions; microchannels help reduce dimensions and therefore find many important applications in the microfluidics domain. Microchannel performance is related to the geometry, material, and flow conditions, so optimized selection of the controllable parameters is a key issue when designing a microchannel-based cooling system. The proposed work presents a simulation-based study following a Taguchi design of experiments, with Reynolds number, aspect ratio, and plenum length as input parameters, to determine the S/N ratio. The objective of this study is to maximize the heat transfer. Mathematical models based on these parameters were developed, which enables global optimization using a genetic algorithm. The genetic algorithm is then employed to optimize the input parameters and generate globally optimal solution points. It was concluded that the optimized values of the heat transfer coefficient and Nusselt number were 2620.888 W/m2K and 3.4708, compared with the values obtained through the S/N-ratio-based parametric study, i.e., 2601.3687 W/m2K and 3.447, respectively; hence an error of 0.744% and 0.68% was found in the heat transfer coefficient and Nusselt number, respectively.
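
    The two-stage scheme in this record, a response model from the Taguchi study followed by a genetic-algorithm search, can be sketched roughly as below. The quadratic heat-transfer model and the parameter ranges are invented stand-ins for the fitted models in the paper.

    ```python
    # Rough sketch: a simple genetic algorithm maximising a fitted heat-transfer
    # model. The response surface is invented; in the study it would come from
    # the Taguchi DOE results.
    import numpy as np

    rng = np.random.default_rng(2)
    lo = np.array([100.0, 0.5, 1.0])     # Reynolds number, aspect ratio, plenum length (assumed ranges)
    hi = np.array([1000.0, 2.0, 5.0])

    def heat_transfer(x):
        re, ar, pl = x
        return 2000 + 0.8 * re - 300 * (ar - 1.2) ** 2 - 40 * (pl - 2.5) ** 2   # hypothetical model

    pop = rng.uniform(lo, hi, size=(30, 3))
    for _ in range(200):
        fitness = np.array([heat_transfer(x) for x in pop])
        parents = pop[np.argsort(fitness)[-10:]]                # keep the 10 best designs
        children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.02, (20, 3)) * (hi - lo)
        pop = np.clip(np.vstack([parents, children]), lo, hi)

    best = pop[np.argmax([heat_transfer(x) for x in pop])]
    print("optimised (Re, aspect ratio, plenum length):", best)
    ```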

  18. Optimization of catalyst formation conditions for synthesis of carbon nanotubes using Taguchi method

    NASA Astrophysics Data System (ADS)

    Pander, Adam; Hatta, Akimitsu; Furuta, Hiroshi

    2016-05-01

    The growth of carbon nanotubes (CNTs) suffers from difficulties in finding optimum growth parameters, in reproducibility, and in mass production, related to the large number of parameters influencing the synthesis process. Choosing the proper parameters can be a time-consuming process and still may not give the optimal growth values. One possible solution to decrease the number of experiments is to apply optimization methods to the design of the experiment parameter matrix. In this work, the Taguchi method of designing experiments is applied to optimize the formation of the iron catalyst during the annealing process by analyzing the average roughness and size of the particles. The annealing parameters were annealing time (tAN), hydrogen flow rate (fH2), temperature (TAN), and argon flow rate (fAr). Plots of signal-to-noise ratios showed that temperature and annealing time have the highest impact on the final results of the experiment. For a more detailed study of the influence of the parameters, the interaction plots of the tested parameters were analyzed. For the final evaluation, CNT forests were grown on silicon substrates with an AlOx/Fe catalyst by the thermal chemical vapor deposition method. Based on the obtained results, the average diameter of the CNTs was decreased by 67%, from 9.1 nm (multi-walled CNTs) to 3.0 nm (single-walled CNTs).

  19. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  20. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  1. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  2. Technological issues and experimental design of gene association studies.

    PubMed

    Distefano, Johanna K; Taverna, Darin M

    2011-01-01

    Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.

  3. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  4. Development of prilling process for biodegradable microspheres through experimental designs.

    PubMed

    Fabien, Violet; Minh-Quan, Le; Michelle, Sergent; Guillaume, Bastiat; Van-Thanh, Tran; Marie-Claire, Venier-Julienne

    2016-02-10

    The prilling process offers a microparticle formulation that is easily transferable to pharmaceutical production, leading to monodisperse and highly controllable microspheres. PLGA microspheres were used to carry an encapsulated protein and stem cells adhered to their surface, providing a tool for regenerative therapy of injured tissue. This work focused on developing the production of PLGA microspheres by the prilling process without toxic solvents. The required production quality needed a complete optimization of the process. Seventeen parameters were studied through experimental designs, leading to an acceptable production. The key parameters and mechanisms of formation are highlighted. PMID:26656302

  5. Designing artificial enzymes from scratch: Experimental study and mesoscale simulation

    NASA Astrophysics Data System (ADS)

    Komarov, Pavel V.; Zaborina, Olga E.; Klimova, Tamara P.; Lozinsky, Vladimir I.; Khalatur, Pavel G.; Khokhlov, Alexey R.

    2016-09-01

    We present a new concept for designing biomimetic analogs of enzymatic proteins; these analogs are based on the synthetic protein-like copolymers. α-Chymotrypsin is used as a prototype of the artificial catalyst. Our experimental study shows that in the course of free radical copolymerization of hydrophobic and hydrophilic monomers the target globular nanostructures of a "core-shell" morphology appear in a selective solvent. Using a mesoscale computer simulation, we show that the protein-like globules can have a large number of catalytic centers located at the hydrophobic core/hydrophilic shell interface.

  6. An improved retinal densitometer: design concepts and experimental applications.

    PubMed

    Baker, H D; Henderson, R; O'Keefe, L P

    1989-07-01

    A photon-counting retinal densitometer is described that has been designed optically and electronically for improved sensitivity and reliability. The device allows measurement of visual pigments through the undilated natural pupils of subjects at relatively low levels of measuring lights, and serves also as an adaptometer for direct comparisons between pigment bleaching or regeneration and light or dark adaptation. Instrumental control and data collection are by computer to permit rapid and simple data analysis and comparisons between subjects. The methods by which the sensitivity and reliability have been enhanced are described in detail, and some examples of experimental results are presented.

  7. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rely on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  8. On the proper study design applicable to experimental balneology.

    PubMed

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rely on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  9. Fatigue of NiTi SMA-pulley system using Taguchi and ANOVA

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Jaronie; Leary, Martin; Subic, Aleksandar

    2016-05-01

    Shape memory alloy (SMA) actuators can be integrated with a pulley system to provide mechanical advantage and to reduce packaging space; however, there appears to be no formal investigation of the effect of a pulley system on SMA structural or functional fatigue. In this work, cyclic testing was conducted on nickel-titanium (NiTi) SMA actuators on a pulley system and in a control experiment (without a pulley). Both structural and functional fatigue were monitored until fracture, or until a maximum of 1E5 cycles was reached, for each experimental condition. The Taguchi method and analysis of variance (ANOVA) were used to optimise the SMA-pulley system configurations. In general, one-way ANOVA at the 95% confidence level showed no significant difference between the structural or functional fatigue of SMA-pulley actuators and SMA actuators without a pulley. Within the sample of SMA-pulley actuators, the activation duration had the greatest significance for both structural and functional fatigue, and the pulley configuration (angle of wrap and sheave diameter) had greater statistical significance than load magnitude for functional fatigue. This work identified that the structural and functional fatigue performance of SMA-pulley systems is optimised by maximising sheave diameter and using an intermediate wrap angle, with minimal load and activation duration; however, these parameters may not be compatible with commercial imperatives. A test was therefore completed for a commercially optimal SMA-pulley configuration. This novel observation will be applicable to many areas of SMA-pulley system applications development.
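
    The one-way ANOVA comparison at the 95% confidence level mentioned above can be reproduced in outline with scipy; the fatigue-life samples below are invented placeholders, not the study's data.

    ```python
    # Minimal illustration of a one-way ANOVA comparison at the 95% confidence
    # level; the cycle-to-failure samples are invented placeholders.
    from scipy import stats

    cycles_with_pulley = [41000, 38500, 44200, 39800, 42500]
    cycles_no_pulley = [40200, 43100, 39400, 41800, 42900]

    f_stat, p_value = stats.f_oneway(cycles_with_pulley, cycles_no_pulley)
    alpha = 0.05
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
    print("significant difference" if p_value < alpha else "no significant difference at 95% confidence")
    ```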

  10. Plasma arc cutting optimization parameters for aluminum alloy with two thickness by using Taguchi method

    NASA Astrophysics Data System (ADS)

    Abdulnasser, B.; Bhuvenesh, R.

    2016-07-01

    Manufacturing companies judge the quality of thermal cutting processes based on the dimensions and physical appearance of the cut material surface. The surface roughness of the cut area and the material removal rate during the manual plasma arc cutting process were the main considerations in this study. A plasma arc cutting machine, model PS-100, was used to cut specimens made from aluminium alloy 1100 manually, based on the selected parameter settings. Two specimen thicknesses, 3 mm and 6 mm, were used. The material removal rate (MRR) was measured by determining the difference between the weight of the specimens before and after the cutting process. The surface roughness was measured using a MITUTOYO CS-3100 machine, and the analysis was conducted to determine the average roughness (Ra) value. The Taguchi method was utilized as the experimental layout to obtain the MRR and Ra values. The results indicate that the current and cutting speed are the most significant parameters, followed by the arc gap, for both the material removal rate and the surface roughness.

  11. Optimization of the ASPN Process to Bright Nitriding of Woodworking Tools Using the Taguchi Approach

    NASA Astrophysics Data System (ADS)

    Walkowicz, J.; Staśkiewicz, J.; Szafirowicz, K.; Jakrzewski, D.; Grzesiak, G.; Stępniak, M.

    2013-02-01

    The subject of this research is the optimization of the parameters of the Active Screen Plasma Nitriding (ASPN) process for high-speed-steel planing knives used in woodworking. The Taguchi approach was applied to develop the plan of experiments and to analyze the obtained experimental results. The optimized ASPN parameters were process duration, composition and pressure of the gaseous atmosphere, substrate bias voltage, and substrate temperature. The results of the optimization procedure were verified by the tools' behavior in the sharpening operation performed under normal industrial conditions. The ASPN technology proved to be extremely suitable for nitriding woodworking planing tools, which, because of their specific geometry, in particular extremely sharp wedge angles, could not be successfully nitrided using the conventional direct-current plasma nitriding method. The research showed that the values of the fracture toughness coefficient K_Ic correlate with the maximum spalling depths of the cutting edge measured after sharpening, and may therefore be used as a measure of the quality of the nitrided planing knives. Based on this criterion, the optimum parameters of the ASPN process for nitriding high-speed planing knives were determined.

  12. Design of vibration compensation interferometer for Experimental Advanced Superconducting Tokamak.

    PubMed

    Yang, Y; Li, G S; Liu, H Q; Jie, Y X; Ding, W X; Brower, D L; Zhu, X; Wang, Z X; Zeng, L; Zou, Z Y; Wei, X C; Lan, T

    2014-11-01

    A vibration compensation interferometer (wavelength 0.532 μm) has been designed and tested for the Experimental Advanced Superconducting Tokamak (EAST). It is designed as a sub-system of the EAST far-infrared (wavelength 432.5 μm) polarimeter/interferometer system. Two acousto-optic modulators have been applied to produce the 1 MHz intermediate frequency. The path length drift of the system is lower than 2 wavelengths over a 10 min test, demonstrating the system stability. The system sensitivity has been tested by applying a periodic vibration source to one mirror in the system; the measured vibration matches the source period. The system is expected to be installed on EAST by the end of 2014.

  13. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec. S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating reactor accident, can be achieved by spraying water drops. The spray reduces the pressure and temperature levels by condensation of steam on the cold water drops. The most stringent thermodynamic conditions are a pressure of 5×10^5 Pa (due to steam emission) and a temperature of 413 K. Moreover, beyond its energy dissipation function, the spray leads to the washout of fission product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed against fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic, and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of these criteria on the design and the necessity of being representative of the real conditions are described.

  14. Skin Microbiome Surveys Are Strongly Influenced by Experimental Design.

    PubMed

    Meisel, Jacquelyn S; Hannigan, Geoffrey D; Tyldsley, Amanda S; SanMiguel, Adam J; Hodkinson, Brendan P; Zheng, Qi; Grice, Elizabeth A

    2016-05-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e., gastrointestinal) and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource and cost intensive, provides evidence of a community's functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This study highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039

  15. Experimental Vertical Stability Studies for ITER Performance and Design Guidance

    SciTech Connect

    Humphreys, D A; Casper, T A; Eidietis, N; Ferrera, M; Gates, D A; Hutchinson, I H; Jackson, G L; Kolemen, E; Leuer, J A; Lister, J; LoDestro, L L; Meyer, W H; Pearlstein, L D; Sartori, F; Walker, M L; Welander, A S; Wolfe, S M

    2008-10-13

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modeling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses some of the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be very severe. The present work describes results of multi-machine studies performed under a joint ITPA experiment on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV, and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement which can challenge the vertical control loop are assessed and analyzed.

  16. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
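
    A rough sketch of the prediction-deviation idea is given below: two parameter sets are sought that both fit the observed data within a tolerance yet disagree as much as possible at a future time point. The decay model, data, tolerance, and penalty weight are all assumptions for illustration, not the authors' implementation.

    ```python
    # Sketch of prediction deviation: find two parameter sets that both fit the
    # observed data acceptably yet differ maximally at a future time point.
    import numpy as np
    from scipy.optimize import minimize

    t_obs = np.array([0.0, 1.0, 2.0, 3.0])
    y_obs = np.array([1.00, 0.62, 0.38, 0.22])          # noisy decay observations (mock)
    t_future = 8.0
    fit_tolerance = 0.01                                 # allowed sum of squared residuals

    def model(theta, t):
        a, k = theta
        return a * np.exp(-k * t)

    def sse(theta):
        return np.sum((model(theta, t_obs) - y_obs) ** 2)

    def objective(pair):
        th1, th2 = pair[:2], pair[2:]
        deviation = (model(th1, t_future) - model(th2, t_future)) ** 2
        # penalise parameter sets that do not fit the observed data
        penalty = 1e4 * (max(sse(th1) - fit_tolerance, 0) + max(sse(th2) - fit_tolerance, 0))
        return -deviation + penalty

    res = minimize(objective, x0=[1.0, 0.5, 1.0, 0.4], method="Nelder-Mead",
                   options={"maxiter": 5000})
    th1, th2 = res.x[:2], res.x[2:]
    print("prediction deviation at t=8:", abs(model(th1, t_future) - model(th2, t_future)))
    ```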

  17. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence

  18. Experimental design in phylogenetics: testing predictions from expected information.

    PubMed

    San Mauro, Diego; Gower, David J; Cotton, James A; Zardoya, Rafael; Wilkinson, Mark; Massingham, Tim

    2012-07-01

    Taxon and character sampling are central to phylogenetic experimental design; yet, we lack general rules. Goldman introduced a method to construct efficient sampling designs in phylogenetics, based on the calculation of expected Fisher information given a probabilistic model of sequence evolution. The considerable potential of this approach remains largely unexplored. In an earlier study, we applied Goldman's method to a problem in the phylogenetics of caecilian amphibians and made an a priori evaluation and testable predictions of which taxon additions would increase information about a particular weakly supported branch of the caecilian phylogeny by the greatest amount. We have now gathered mitogenomic and rag1 sequences (some newly determined for this study) from additional caecilian species and studied how information (both expected and observed) and bootstrap support vary as each new taxon is individually added to our previous data set. This provides the first empirical test of specific predictions made using Goldman's method for phylogenetic experimental design. Our results empirically validate the top 3 (more intuitive) taxon addition predictions made in our previous study, but only information results validate unambiguously the 4th (less intuitive) prediction. This highlights a complex relationship between information and support, reflecting that each measures different things: Information is related to the ability to estimate branch length accurately and support to the ability to estimate the tree topology accurately. Thus, an increase in information may be correlated with but does not necessitate an increase in support. Our results also provide the first empirical validation of the widely held intuition that additional taxa that join the tree proximal to poorly supported internal branches are more informative and enhance support more than additional taxa that join the tree more distally. Our work supports the view that adding more data for a single (well

  19. Laccase production by Coriolopsis caperata RCK2011: Optimization under solid state fermentation by Taguchi DOE methodology

    PubMed Central

    Nandal, Preeti; Ravella, Sreenivas Rao; Kuhad, Ramesh Chander

    2013-01-01

    Laccase production by Coriolopsis caperata RCK2011 under solid state fermentation was optimized following a Taguchi design of experiments. An orthogonal array layout of L18 (2^1 × 3^7) was constructed using Qualitek-4 software with the eight most influential factors on laccase production. At the individual level, pH contributed the greatest influence, whereas corn steep liquor (CSL), together with biotin and KH2PO4 at the interactive level, accounted for more than 50% of the severity index. The optimum conditions derived were: temperature 30°C, pH 5.0, wheat bran 5.0 g, inoculum size 0.5 ml (fungal cell mass = 0.015 g dry wt.), biotin 0.5% w/v, KH2PO4 0.013% w/v, CSL 0.1% v/v, and 0.5 mM xylidine as an inducer. The validation experiments using the optimized conditions confirmed an improvement in enzyme production of 58.01%. The laccase production level of 1623.55 U gds^-1 indicates that the fungus C. caperata RCK2011 has commercial potential for laccase. PMID:23463372

  20. Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA

    NASA Astrophysics Data System (ADS)

    Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu

    2015-04-01

    Laser brazing (LB) is widely used in the automotive industry due to the advantages of high speed, small heat-affected zone, high weld seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process for a crimping butt joint of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted with a Taguchi L25 orthogonal array following the statistical design method. The input parameters considered are welding speed, wire feed rate, and gap, each at 5 levels. The output results are the efficient connection lengths of the left and right sides and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are fed into the BPNN to establish the relationship between the input and output variables. The predictions of the BPNN are then passed to the GA, which optimizes the process parameters subject to the objectives. The effects of welding speed (WS), wire feed rate (WF), and gap (GAP) on the summed values of the bead geometry are then discussed. Eventually, confirmation experiments are carried out to demonstrate that the optimal values are effective and reliable. On the whole, the proposed hybrid method, BPNN-GA, can be used to guide the actual work and improve the efficiency and stability of the LB process.
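
    A hedged sketch of the BPNN-GA chain follows: a small back-propagation network is fitted to (welding speed, wire feed rate, gap) against a bead-geometry score, and a basic genetic algorithm then searches the fitted network. The training values, ranges, and quality score are invented stand-ins for the L25 experimental data.

    ```python
    # Sketch of a BPNN-GA chain: fit a small neural network to mock DOE data,
    # then let a simple genetic algorithm search the fitted model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = rng.uniform([2.0, 2.0, 0.0], [4.0, 4.0, 0.4], size=(25, 3))   # WS, WF, GAP (mock)
    y = 5 - (X[:, 0] - 3.2) ** 2 - (X[:, 1] - 3.0) ** 2 - 4 * X[:, 2]  # mock quality score

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

    pop = rng.uniform([2.0, 2.0, 0.0], [4.0, 4.0, 0.4], size=(40, 3))
    for _ in range(100):
        fitness = net.predict(pop)
        parents = pop[np.argsort(fitness)[-10:]]                       # keep the 10 best
        children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.02, (30, 3))
        pop = np.clip(np.vstack([parents, children]), [2.0, 2.0, 0.0], [4.0, 4.0, 0.4])

    best = pop[np.argmax(net.predict(pop))]
    print("suggested (welding speed, wire feed rate, gap):", best)
    ```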

  1. Optimizing conditions for production of high levels of soluble recombinant human growth hormone using Taguchi method.

    PubMed

    Savari, Marzieh; Zarkesh Esfahani, Sayyed Hamid; Edalati, Masoud; Biria, Davoud

    2015-10-01

    Human growth hormone (hGH) is synthesized and stored by somatotroph cells of the anterior pituitary gland and affects body metabolism. This protein can be used to treat hGH deficiency, Prader-Willi syndrome, and Turner syndrome. The limitations of current technology for soluble recombinant protein production, such as inclusion body formation, decrease its usefulness for therapeutic purposes. To achieve high levels of the soluble form of recombinant human growth hormone (rhGH), we examined the host strain, induction temperature, induction time, and culture medium composition. For this purpose, 32 experiments were designed using the Taguchi method, and the levels of produced protein in all 32 experiments were evaluated primarily by ELISA and dot blotting; the purified rhGH protein products were then assessed by SDS-PAGE and Western blotting. Our results indicate that media, bacterial strain, temperature, and induction time have significant effects on the production of rhGH. A low cultivation temperature of 25°C, TB medium (with 3% ethanol and 0.6 M glycerol), the Origami strain, and a 10-h induction time increased the solubility of human growth hormone.

  2. Assessing accuracy of measurements for a Wingate Test using the Taguchi method.

    PubMed

    Franklin, Kathryn L; Gordon, Rae S; Davies, Bruce; Baker, Julien S

    2008-01-01

    The purpose of this study was to establish the effects of four variables on the results obtained for a Wingate Anaerobic Test (WAnT). The study used a 30-second WAnT and compared data collected and analysed in different ways in order to draw conclusions about the relative importance of the variables on the results. Data were collected simultaneously by a commercially available software correction system manufactured by Cranlea Ltd. (Birmingham, England) and by an alternative method of data collection that involves the direct measurement of the flywheel velocity and the brake force. The data were compared using a design-of-experiments technique, the Taguchi method. Four variables were examined: flywheel speed, braking force, moment of inertia of the flywheel, and the time interval over which the work and power were calculated. The choice of time interval was identified as the most influential variable on the results. While the other factors have an influence on the results, the decreased time interval over which the data are averaged gave a 9.8% increase in work done, a 40.75% increase in peak power, and a 13.1% increase in mean power. PMID:18373285
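
    The influence of the averaging interval, the most influential variable identified here, can be seen in a small worked example: power is brake force times flywheel velocity, and the peak value depends on the window over which it is averaged. All numbers below are invented.

    ```python
    # Worked example: peak power from a mock Wingate trace, averaged over two
    # different time windows. Power = brake force x flywheel velocity.
    import numpy as np

    dt = 0.1                                     # sampling interval, s
    t = np.arange(0, 30, dt)
    brake_force = 60.0                           # N, held constant during the test (mock)
    velocity = 10.0 * np.exp(-t / 25.0) + 2.0    # m/s at the flywheel rim, mock decay

    power = brake_force * velocity               # instantaneous power, W

    for window in (1.0, 5.0):
        n = int(window / dt)
        chunks = power[: len(power) // n * n].reshape(-1, n)
        peak = chunks.mean(axis=1).max()
        print(f"{window:.0f} s averaging window: peak power = {peak:.0f} W")
    ```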

  3. Optimizing conditions for production of high levels of soluble recombinant human growth hormone using Taguchi method.

    PubMed

    Savari, Marzieh; Zarkesh Esfahani, Sayyed Hamid; Edalati, Masoud; Biria, Davoud

    2015-10-01

    Human growth hormone (hGH) is synthesized and stored by somatotroph cells of the anterior pituitary gland and affects body metabolism. This protein can be used to treat hGH deficiency, Prader-Willi syndrome and Turner syndrome. The limitations of current technology for soluble recombinant protein production, such as inclusion body formation, decrease its usage for therapeutic purposes. To achieve high levels of the soluble form of recombinant human growth hormone (rhGH), we optimized the host strain, induction temperature, induction time and culture medium composition. For this purpose, 32 experiments were designed using the Taguchi method; the levels of protein produced in all 32 experiments were evaluated primarily by ELISA and dot blotting, and the purified rhGH products were finally assessed by SDS-PAGE and Western blotting. Our results indicate that the medium, bacterial strain, temperature and induction time have significant effects on the production of rhGH. A low cultivation temperature of 25°C, TB medium (with 3% ethanol and 0.6 M glycerol), the Origami strain and a 10-h induction time increased the solubility of human growth hormone. PMID:26151869

  4. Comparing simulated emission from molecular clouds using experimental design

    SciTech Connect

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik; Offner, Stella

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
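
    A toy sketch of one of the three metrics named above (the PCA-based comparison) is given below for two synthetic position-position-velocity cubes; it is only a schematic stand-in for the authors' analysis pipeline, not their implementation.

```python
# Toy sketch of a PCA-based similarity metric between two synthetic PPV cubes.
import numpy as np

rng = np.random.default_rng(2)
cube_a = rng.normal(size=(32, 32, 64))   # (x, y, velocity), synthetic
cube_b = rng.normal(size=(32, 32, 64))

def pca_eigenvalues(cube, n=10):
    """Leading eigenvalues of the spectral covariance matrix of a PPV cube."""
    spectra = cube.reshape(-1, cube.shape[-1])        # one spectrum per pixel
    spectra = spectra - spectra.mean(axis=0)
    cov = spectra.T @ spectra / spectra.shape[0]
    vals = np.linalg.eigvalsh(cov)[::-1][:n]
    return vals / vals.sum()                          # normalized eigenvalue spectrum

# A simple distance between the normalized eigenvalue spectra of the two cubes.
d = np.linalg.norm(pca_eigenvalues(cube_a) - pca_eigenvalues(cube_b))
print("PCA pseudo-distance between cubes:", d)
```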

  5. Taguchi methods applied to oxygen-enriched diesel engine experiments

    SciTech Connect

    Marr, W.W.; Sekar, R.R.; Cole, R.L.; Marciniak, T.J. ); Longman, D.E. )

    1992-01-01

    This paper describes a test series conducted on a six-cylinder diesel engine to study the impacts of controlled factors (i.e., oxygen content of the combustion air, water content of the fuel, fuel rate, and fuel-injection timing) on engine emissions using Taguchi methods. Three levels of each factor were used in the tests. Only the main effects of the factors were examined; no attempt was made to analyze the interactions among the factors. It was found that, as in the case of the single-cylinder engine tests, oxygen in the combustion air was very effective in reducing particulate and smoke emissions. Increases in NOx due to the oxygen enrichment observed in the single-cylinder tests also occurred in the present six-cylinder tests. Water in the emulsified fuel was found to be much less effective in decreasing NOx emissions for the six-cylinder engine than it was for the single-cylinder engine.

  6. Taguchi methods applied to oxygen-enriched diesel engine experiments

    SciTech Connect

    Marr, W.W.; Sekar, R.R.; Cole, R.L.; Marciniak, T.J.; Longman, D.E.

    1992-12-01

    This paper describes a test series conducted on a six-cylinder diesel engine to study the impacts of controlled factors (i.e., oxygen content of the combustion air, water content of the fuel, fuel rate, and fuel-injection timing) on engine emissions using Taguchi methods. Three levels of each factor were used in the tests. Only the main effects of the factors were examined; no attempt was made to analyze the interactions among the factors. It was found that, as in the case of the single-cylinder engine tests, oxygen in the combustion air was very effective in reducing particulate and smoke emissions. Increases in NOx due to the oxygen enrichment observed in the single-cylinder tests also occurred in the present six-cylinder tests. Water in the emulsified fuel was found to be much less effective in decreasing NOx emissions for the six-cylinder engine than it was for the single-cylinder engine.

  7. An experimental high energy therapeutic ultrasound equipment: design and characterisation.

    PubMed

    Kirkhorn, T; Almquist, L O; Persson, H W; Holmer, N G

    1997-05-01

    High energy ultrasound equipment for well controlled experimental work on extracorporeal shockwave lithotripsy (ESWL) and hyperthermia has been built. The design of two sets of equipment with operating frequencies of 0.5 and 1.6 MHz, respectively, is described and characterised in terms of measured generated pressure fields. The treatment heads consist of six or seven focused ultrasound transducers. The transducers have a diameter of 50 mm each and are mounted in a hemispherical Plexiglass fixture with a geometrical focus 100 mm from the transducer surfaces. Measurements were performed in a water bath in several planes perpendicular to the central axis of the ultrasound beam, using a miniature hydrophone which was positioned with a computer controlled stepping motor system. Resulting diagram plots show well defined pressure foci, located at the geometrical foci of the transducer units.

  8. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

    EXPERIMENTAL allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS)1,2. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset3-5. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents6-9, and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  9. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    PubMed

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, as well as methods for data preprocessing, are covered.

  10. Improved production of tannase by Klebsiella pneumoniae using Indian gooseberry leaves under submerged fermentation using Taguchi approach.

    PubMed

    Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar

    2016-12-01

    Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an inducible, largely extracellular enzyme that catalyses the hydrolysis of ester and depside bonds present in various substrates. Large-scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost-effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin-rich agro-residues such as Indian gooseberry leaves (Phyllanthus emblica), Black plum leaves (Syzygium cumini), Eucalyptus leaves (Eucalyptus globulus) and Babul leaves (Acacia nilotica). Among all agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. A sequential optimization approach using Taguchi orthogonal array screening and response surface methodology was adopted to optimize the fermentation variables in order to enhance enzyme production. Eleven medium components were screened primarily by a Taguchi orthogonal array design to identify the factors contributing most towards enzyme production. The four most significant variables affecting tannase production were found to be pH (23.62 %), tannin extract (20.70 %), temperature (20.33 %) and incubation time (14.99 %). These factors were further optimized with a central composite design using response surface methodology. Maximum tannase production was observed at pH 5.52, 39.72 °C, 91.82 h of incubation time and 2.17 % tannin content. The enzyme activity was enhanced 1.26-fold under these optimized conditions. The present study emphasizes the use of agro-residues as a potential substrate with the aim of lowering the input costs for tannase production so that the enzyme can be used economically for commercial purposes.
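
    The percent contributions quoted above are the kind of quantity obtained by dividing each factor's sum of squares by the total in the Taguchi screening ANOVA; a minimal sketch with made-up sums of squares (not the study's ANOVA table) is shown below.

```python
# Hedged sketch: percent contribution of each factor in a Taguchi screen,
# computed from factor sums of squares. The values below are made up.
sum_of_squares = {
    "pH": 23.6, "tannin extract": 20.7, "temperature": 20.3,
    "incubation time": 15.0, "other factors + error": 20.4,
}
total = sum(sum_of_squares.values())
for factor, ss in sorted(sum_of_squares.items(), key=lambda kv: -kv[1]):
    print(f"{factor:>24}: {100 * ss / total:5.1f} % contribution")
```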

  11. Improved production of tannase by Klebsiella pneumoniae using Indian gooseberry leaves under submerged fermentation using Taguchi approach.

    PubMed

    Kumar, Mukesh; Singh, Amrinder; Beniwal, Vikas; Salar, Raj Kumar

    2016-12-01

    Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an inducible, largely extracellular enzyme that catalyses the hydrolysis of ester and depside bonds present in various substrates. Large-scale industrial application of this enzyme is very limited owing to its high production costs. In the present study, cost-effective production of tannase by Klebsiella pneumoniae KP715242 was studied under submerged fermentation using different tannin-rich agro-residues such as Indian gooseberry leaves (Phyllanthus emblica), Black plum leaves (Syzygium cumini), Eucalyptus leaves (Eucalyptus globulus) and Babul leaves (Acacia nilotica). Among all agro-residues, Indian gooseberry leaves were found to be the best substrate for tannase production under submerged fermentation. A sequential optimization approach using Taguchi orthogonal array screening and response surface methodology was adopted to optimize the fermentation variables in order to enhance enzyme production. Eleven medium components were screened primarily by a Taguchi orthogonal array design to identify the factors contributing most towards enzyme production. The four most significant variables affecting tannase production were found to be pH (23.62 %), tannin extract (20.70 %), temperature (20.33 %) and incubation time (14.99 %). These factors were further optimized with a central composite design using response surface methodology. Maximum tannase production was observed at pH 5.52, 39.72 °C, 91.82 h of incubation time and 2.17 % tannin content. The enzyme activity was enhanced 1.26-fold under these optimized conditions. The present study emphasizes the use of agro-residues as a potential substrate with the aim of lowering the input costs for tannase production so that the enzyme can be used economically for commercial purposes. PMID:27411334

  12. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulations of novel thermal barrier coating (TBC) materials; we will perform high-performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new TBC systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  13. A retrospective mathematical analysis of controlled release design and experimentation.

    PubMed

    Rothstein, Sam N; Kay, Jennifer E; Schopfer, Francisco J; Freeman, Bruce A; Little, Steven R

    2012-11-01

    The development and performance evaluation of new biodegradable polymer controlled release formulations relies on successful interpretation and evaluation of in vitro release data. However, depending upon the extent of empirical characterization, release data may be open to more than one qualitative interpretation. In this work, a predictive model for release from degradable polymer matrices was applied to a number of published release data sets in order to extend the characterization of release behavior. Where possible, the model was also used to interpolate and extrapolate upon the collected release data to clarify the overall duration of release as well as the kinetics of release between widely spaced data points. In each case examined, mathematical predictions of release coincide well with experimental results, offering a more definitive description of each formulation's performance than was previously available. This information may prove particularly helpful in the design of future studies, such as when calculating proper dosing levels or determining experimental end points in order to more comprehensively evaluate a controlled release system's performance.

  14. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  15. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  16. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations give the first definitive look at the charging behavior of Orion UltraFlex arrays in both the low Earth orbit (LEO) and geosynchronous (GEO) environments. Note the LEO charging environment also applies to the International Space Station (ISS). The GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior, to answer some of the basic performance and survivability issues, and to ascertain whether a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  17. Experimental design considerations in microbiota/inflammation studies.

    PubMed

    Moore, Robert J; Stanley, Dragana

    2016-07-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes have been studied both in human and in animal, mainly rodent, models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065

  18. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    NASA Astrophysics Data System (ADS)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing conclusion is to merge both worlds. We believe that a tabletop form-factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  19. Large-scale experimental design for decentralized SLAM

    NASA Astrophysics Data System (ADS)

    Cunningham, Alex; Dellaert, Frank

    2012-06-01

    This paper presents an analysis of large scale decentralized SLAM under a variety of experimental conditions to illustrate design trade-offs relevant to multi-robot mapping in challenging environments. As a part of work through the MAST CTA, the focus of these robot teams is on the use of small-scale robots with limited sensing, communication and computational resources. To evaluate mapping algorithms with large numbers (50+) of robots, we developed a simulation incorporating sensing of unlabeled landmarks, line-of-sight blocking obstacles, and communication modeling. Scenarios are randomly generated with variable models for sensing, communication, and robot behavior. The underlying Decentralized Data Fusion (DDF) algorithm in these experiments enables robots to construct a map of their surroundings by fusing local sensor measurements with condensed map information from neighboring robots. Each robot maintains a cache of previously collected condensed maps from neighboring robots, and actively distributes these maps throughout the network to ensure resilience to communication and node failures. We bound the size of the robot neighborhoods to control the growth of the size of neighborhood maps. We present the results of experiments conducted in these simulated scenarios under varying measurement models and conditions while measuring mapping performance. We discuss the trade-offs between mapping performance and scenario design, including robot teams separating and joining, multi-robot data association, exploration bounding, and neighborhood sizes.

  20. Experimental Reality: Principles for the Design of Augmented Environments

    NASA Astrophysics Data System (ADS)

    Lahlou, Saadi

    The Laboratory of Design for Cognition at EDF R&D (LDC) is a living laboratory, which we created to develop Augmented Environments (AE) for collaborative work, more specifically “cognitive work” (white-collar workers, engineers, office workers). It is a corporate laboratory in a large industry, where the natural activity of real users is observed in a continuous manner in various spaces (project space, meeting room, lounge, etc.). The RAO room, an augmented meeting room, is used daily for “normal” meetings; it is also the “mother room” of all augmented meeting rooms in the company, where new systems, services, and devices are tested. The LDC has gathered a unique set of data on the use of AE, and developed various observation and design techniques, described in this chapter. LDC uses novel techniques of digital ethnography, some of which were invented there (SubCam, offsat) and some of which were developed elsewhere and adapted (360° video, WebDiver, etc.). At LDC, some new theories have also been developed to explain behavior and guide innovation: cognitive attractors, experimental reality, and the triple-determination framework.

  1. Experimental design considerations in microbiota/inflammation studies

    PubMed Central

    Moore, Robert J; Stanley, Dragana

    2016-01-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes have been studied both in human and in animal, mainly rodent, models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065

  2. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers and focused inside the thruster. The adopted physical model considers the two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 s. The heat loading on the wall of the calculated thrusters was also estimated and is comparable to the heat loading on a conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 s due to the finite chemical reaction rate.

  3. Bearing diagnosis based on Mahalanobis-Taguchi-Gram-Schmidt method

    NASA Astrophysics Data System (ADS)

    Shakya, Piyush; Kulkarni, Makarand S.; Darpe, Ashish K.

    2015-02-01

    A methodology is developed for defect type identification in rolling element bearings using the integrated Mahalanobis-Taguchi-Gram-Schmidt (MTGS) method. Vibration data recorded from bearings with seeded defects on outer race, inner race and balls are processed in time, frequency, and time-frequency domains. Eleven damage identification parameters (RMS, Peak, Crest Factor, and Kurtosis in time domain, amplitude of outer race, inner race, and ball defect frequencies in FFT spectrum and HFRT spectrum in frequency domain and peak of HHT spectrum in time-frequency domain) are computed. Using MTGS, these damage identification parameters (DIPs) are fused into a single DIP, Mahalanobis distance (MD), and gain values for the presence of all DIPs are calculated. The gain value is used to identify the usefulness of DIP and the DIPs with positive gain are again fused into MD by using Gram-Schmidt Orthogonalization process (GSP) in order to calculate Gram-Schmidt Vectors (GSVs). Among the remaining DIPs, sign of GSVs of frequency domain DIPs is checked to classify the probable defect. The approach uses MTGS method for combining the damage parameters and in conjunction with the GSV classifies the defect. A Defect Occurrence Index (DOI) is proposed to rank the probability of existence of a type of bearing damage (ball defect/inner race defect/outer race defect/other anomalies). The methodology is successfully validated on vibration data from a different machine, bearing type and shape/configuration of the defect. The proposed methodology is also applied on the vibration data acquired from the accelerated life test on the bearings, which established the applicability of the method on naturally induced and naturally progressed defect. It is observed that the methodology successfully identifies the correct type of bearing defect. The proposed methodology is also useful in identifying the time of initiation of a defect and has potential for implementation in a real time environment.
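
    A minimal sketch of the first step of this scheme, fusing several damage identification parameters (DIPs) into a single Mahalanobis distance against a healthy reference group, is given below with synthetic data; the Gram-Schmidt orthogonalization and gain calculations are omitted for brevity and are not reproduced here.

```python
# Minimal sketch: fuse 11 DIPs into a Mahalanobis distance (MD) computed
# against a healthy reference group. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
healthy = rng.normal(size=(200, 11))        # 11 DIPs from healthy bearings
test = rng.normal(loc=1.5, size=(5, 11))    # DIPs from suspect bearings

mean = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis(x):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

for i, x in enumerate(test):
    print(f"bearing {i}: MD = {mahalanobis(x):.2f}")   # large MD flags damage
```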

  4. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and Brazil's National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full-disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tunable Fabry-Perot filter for wavelength scanning near the Fe II 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point for establishing a strong research program on Solar Physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  5. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGES

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
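
    For reference, an eight-run Plackett-Burman design with seven two-level columns (for example, six real factors plus the dummy factor mentioned above) can be generated from a single generator row by cyclic shifts plus a final all-minus row; the sketch below does this in a few lines, with purely illustrative factor labels.

```python
# Sketch: construct the 8-run Plackett-Burman design (7 two-level columns)
# by cyclic shifts of a generator row plus an all-minus row.
import numpy as np

generator = np.array([1, 1, 1, -1, 1, -1, -1])
rows = [np.roll(generator, k) for k in range(7)]
pb8 = np.vstack(rows + [-np.ones(7, dtype=int)])

# Illustrative factor names only (six factors + one dummy column).
factors = ["mix speed", "mix time", "matrix ratio", "sphere density",
           "sphere loading", "sphere blend", "dummy"]
print("run  " + "  ".join(f"{f[:10]:>10}" for f in factors))
for i, run in enumerate(pb8, start=1):
    print(f"{i:>3}  " + "  ".join(f"{v:>10d}" for v in run))

# Orthogonality check: every pair of distinct columns has zero dot product.
assert np.allclose(pb8.T @ pb8, 8 * np.eye(7))
```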

  6. Experimental designs for testing differences in survival among salmonid populations

    SciTech Connect

    Hoffmann, A.; Busack, C.; Knudsen, C.

    1995-03-01

    The Yakima Fisheries Project (YFP) is a supplementation plan for enhancing salmon runs in the Yakima River basin. It is presumed that inadequate spawning and rearing habitat are limiting factors for the population abundance of spring chinook salmon. Therefore, the supplementation effort for spring chinook salmon is focused on introducing hatchery-raised smolts into the basin to compensate for the lack of spawning habitat. However, based on empirical evidence in the Yakima basin, hatchery-reared salmon have survived poorly compared to wild salmon. Therefore, the YFP has proposed to alter the optimal conventional treatment (OCT), which is the state-of-the-art hatchery rearing method, to a new innovative treatment (NIT). The NIT is intended to produce hatchery fish that mimic wild fish and thereby enhance their survival over that of OCT fish. A limited application of the NIT (LNIT) has also been proposed to reduce the cost of applying the new treatment, yet retain the benefits of increased survival. This research was conducted to test whether the uncertainty associated with the experimental design was within the limits specified by the Planning Status Report (PSR).

  7. Validation of a buffet meal design in an experimental restaurant.

    PubMed

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B. They skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by food weighing and video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79) and for meal mechanics parameters. Total energy, lipid and carbohydrate intake were higher in FAST than in A and B. CEI was found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes.
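
    Intraclass correlation coefficients of the kind reported above can be computed from a simple subjects-by-sessions table; the sketch below shows a consistency-type ICC(3,1)-style calculation on synthetic intake data (the formula choice and the data are assumptions, not taken from the study).

```python
# Sketch: consistency-type ICC between two repeated sessions from a
# subjects x sessions table of energy intakes (synthetic values).
import numpy as np

rng = np.random.default_rng(4)
subject_level = rng.normal(3500, 600, size=14)                     # kJ per subject
data = np.column_stack([subject_level + rng.normal(0, 250, 14),    # session A
                        subject_level + rng.normal(0, 250, 14)])   # session B

n, k = data.shape
grand = data.mean()
ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()             # subjects
ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()             # sessions
ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
ms_rows = ss_rows / (n - 1)
ms_err = ss_err / ((n - 1) * (k - 1))
icc_3_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
print(f"ICC(3,1) between sessions: {icc_3_1:.2f}")
```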

  8. Plackett-Burman experimental design to facilitate syntactic foam development

    SciTech Connect

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.

  9. Investigation and Parameter Optimization of a Hydraulic Ram Pump Using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Sarma, Dhrupad; Das, Monotosh; Brahma, Bipul; Pandwar, Deepak; Rongphar, Sermirlong; Rahman, Mafidur

    2016-06-01

    The main objective of this research work is to investigate the effect of Waste Valve height and Pressure Chamber height on the output flow rate of a hydraulic ram pump. The second objective is to optimize them for a hydraulic ram pump delivering water up to a height of 3.81 m (12.5 ft) from the ground with a drive head (inlet head) of 1.86 m (6.11 ft). Two one-factor-at-a-time experiments were conducted to decide the levels of the selected input parameters. After deciding the input parameters, an experiment was designed using Taguchi's L9 orthogonal array with three repetitions. Analysis of variance (ANOVA) was carried out to verify the significance of the effect of the factors on the output flow rate of the pump. Results show that the height of the Waste Valve and the height of the Pressure Chamber have a significant effect on the outlet flow of the pump. For a pump with a drive pipe (inlet pipe) diameter of 31.75 mm (1.25 in.) and a delivery pipe diameter of 12.7 mm (0.5 in.), the optimum setting was found to be a Waste Valve height of 114.3 mm (4.5 in.) and a Pressure Chamber height of 406.4 mm (16 in.), providing a delivery flow rate of 93.14 l per hour. For the same pump, the estimated range of output flow rate is 90.65-94.97 l/h.
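
    A hedged sketch of the underlying machinery, the standard L9(3^4) orthogonal array and the larger-the-better signal-to-noise ratio typically used to judge flow rate, is given below with synthetic responses; it does not reproduce the study's measurements or factor assignments.

```python
# Sketch: standard L9(3^4) orthogonal array and the Taguchi "larger the
# better" S/N ratio, applied to synthetic replicated flow-rate data.
import numpy as np

# Standard L9 array: 9 runs x 4 columns, levels coded 0/1/2.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Three repetitions of delivery flow rate (l/h) per run (synthetic placeholders).
y = 85 + 10 * np.random.default_rng(5).random((9, 3))

def sn_larger_is_better(y_run):
    """Taguchi S/N (dB) for a larger-the-better response."""
    return -10 * np.log10(np.mean(1.0 / y_run ** 2))

sn = np.array([sn_larger_is_better(row) for row in y])
for level in range(3):          # main effect of column 0 (e.g. Waste Valve height)
    print(f"factor A level {level}: mean S/N = {sn[L9[:, 0] == level].mean():.2f} dB")
```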

  10. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned for causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  11. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Conditions of an Experimental Permit § 437.85 Allowable design changes; modification of an experimental... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE...

  12. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Conditions of an Experimental Permit § 437.85 Allowable design changes; modification of an experimental... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE...

  13. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be the largest in the arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine if biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effect of biochar amendment and a 3 °C warming on the Arctic meadow. Ten circular

  14. Numerical simulation and experimental assessment for cold cylindrical deep drawing without blank-holder

    NASA Astrophysics Data System (ADS)

    Chiorescu, D.; Chiorescu, E.; Filipov, F.

    2016-08-01

    The metal forming process through plastic deformation, represented by deep drawing, is an extremely vast research field. In this article we analyse the influence of the die-punch clearance, the average velocity in the active phase and the lubrication on deep drawing quality, revealed by the thickness evenness of the finished product surface. For thorough research, and in order to minimize the number of experimental trials, a Taguchi-type fractional factorial design attached to an orthogonal array was developed, thus analysing the contribution of the three aforementioned parameters to the quality of cylindrical deep drawing without a blank holder. In order to compare the experimental results, a conceptual 3D model of the punch-blank-die system was made, which respects the geometry of the active elements and of the blank entirely but schematizes/approximates the material properties of the blank. Thus, using these simulations, we can investigate the variation of the deformation parameters throughout the drawing process, from the initial blank form to the final drawn part. The numerical simulation of the drawing of cylindrical cups was made using the ANSYS V14 program with the Explicit Dynamics module. Using the signal-to-noise ratio suggested by Taguchi, we determined the influence of each of the three parameters under study on deep drawing quality, as well as their optimal values.

  15. Experimental verification of Space Platform battery discharger design optimization

    NASA Technical Reports Server (NTRS)

    Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.

    1991-01-01

    The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.

  16. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    The experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These are to provide data sets of soil parameters, processes and functions which will be incorporated into the mathematical models. The field sites are 1) the BigLink field station which is located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and is established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic which is representative of productive soils managed for intensive forestry, 3) the Fuchsenbigl Field Station in Austria which is an agricultural research site that is representative of productive soils managed as arable land and 4) the Koiliaris Catchment in Crete, Greece which represents degraded Mediterranean region soils, heavily impacted by centuries of intensive grazing and farming, under severe risk of desertification.

  17. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made the effective sludge management increasingly a critical issue for the industry due to high landfill and transportation costs, and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to the conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. The key variables that were identified in the continuous

  18. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made the effective sludge management increasingly a critical issue for the industry due to high landfill and transportation costs, and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to the conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. The key variables that were identified in the continuous

  19. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    ERIC Educational Resources Information Center

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  20. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed experiment (a video…

  1. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  2. Experimental design: computer simulation for improving the precision of an experiment.

    PubMed

    van Wilgenburg, Henk; Zillesen, Piet G van Schaick; Krulichova, Iva

    2004-06-01

    An interactive computer-assisted learning program, ExpDesign, that has been developed for simulating animal experiments, is introduced. The program guides students through the steps for designing animal experiments and estimating optimal sample sizes. Principles are introduced for controlling variation, establishing the experimental unit, selecting randomised block and factorial experimental designs, and applying the appropriate statistical analysis. Sample Power is a supporting tool that visualises the process of estimating the sample size. The aim of developing the ExpDesign program has been to make biomedical research workers more familiar with some basic principles of experimental design and statistics and to facilitate discussions with statisticians.
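
    A typical sample-size estimate of the kind such tools visualise can be sketched with a standard power calculation; the effect size, significance level, and power below are illustrative assumptions, and the code is not part of the ExpDesign program.

```python
# Hedged sketch: group size needed to detect a given standardized effect in a
# two-group animal experiment (independent-samples t-test power analysis).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.0,   # assumed Cohen's d
                                   alpha=0.05,
                                   power=0.8,
                                   alternative="two-sided")
print(f"Animals needed per group: {n_per_group:.1f}")  # roughly 17 for d = 1.0
```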

  3. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  4. Experimental design for research on shock-turbulence interaction

    NASA Technical Reports Server (NTRS)

    Radcliffe, S. W.

    1969-01-01

    Report investigates the production of acoustic waves in the interaction of a supersonic shock and a turbulence environment. The five stages of the investigation are apparatus design, development of instrumentation, preliminary experiment, turbulence generator selection, and main experiments.

  5. International Thermonuclear Experimental Reactor (ITER) neutral beam design

    SciTech Connect

    Myers, T.J.; Brook, J.W.; Spampinato, P.T.; Mueller, J.P.; Luzzi, T.E.; Sedgley, D.W. . Space Systems Div.)

    1990-10-01

    This report discusses the following topics on ITER neutral beam design: ion dump; neutralizer and module gas flow analysis; vacuum system; cryogenic system; maintainability; power distribution; and system cost.

  6. Using a hybrid approach to optimize experimental network design for aquifer parameter identification.

    PubMed

    Chang, Liang-Cheng; Chu, Hone-Jay; Lin, Yu-Pin; Chen, Yu-Wen

    2010-10-01

    This research develops an optimum design model for groundwater networks using a genetic algorithm (GA) and a modified Newton approach, based on experimental design concepts. The goal of the experimental design is to minimize parameter uncertainty, represented by the determinant of the covariance matrix of the estimated parameters. The design problem is constrained by a specified cost and solved by the GA and a parameter identification model. The latter estimates optimum parameter values and the associated sensitivity matrices. The general problem is simplified into two classes of network design problems: an observation network design problem and a pumping network design problem. Results explore the relationship between the experimental design and the physical processes. The proposed model provides an alternative for solving optimization problems in groundwater experimental design. PMID:19757116
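
    The design criterion described above, minimizing the determinant of the estimated-parameter covariance, can be illustrated with a brute-force search over candidate observation wells using a synthetic sensitivity matrix; the sketch below is a schematic stand-in for the GA-based model, and all dimensions are assumptions.

```python
# Hedged sketch of the D-optimality idea: pick the observation subset that
# minimizes det((J^T J)^-1) for a synthetic sensitivity (Jacobian) matrix J.
import itertools
import numpy as np

rng = np.random.default_rng(6)
J_all = rng.normal(size=(12, 3))    # sensitivities: 12 candidate wells x 3 parameters

best = None
for wells in itertools.combinations(range(12), 4):       # choose 4 wells (cost limit)
    J = J_all[list(wells)]
    crit = np.linalg.det(np.linalg.inv(J.T @ J))         # parameter covariance volume
    if best is None or crit < best[0]:
        best = (crit, wells)

print("D-optimal 4-well subset:", best[1], "criterion:", best[0])
```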

  7. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    DOE PAGES

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.
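
    The minimal-bias idea behind EDS/EDM can be illustrated with a maximum-entropy reweighting toy example: a linear bias coefficient is tuned so that the reweighted ensemble average matches a target value. The sketch below is didactic only and is not the authors' metadynamics implementation.

```python
# Toy sketch of the minimal-bias (maximum entropy) idea: reweight samples from
# an unbiased ensemble with exp(-lam * x) and tune lam so the reweighted
# average matches an "experimental" target value.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)   # unbiased ensemble samples
target = 0.4                                       # "experimental" average (assumed)

def reweighted_mean(lam):
    w = np.exp(-lam * x)
    return np.sum(w * x) / np.sum(w)

lam = brentq(lambda l: reweighted_mean(l) - target, -5.0, 5.0)
print(f"minimal linear bias coefficient lambda = {lam:.3f}")
print(f"reweighted <x> = {reweighted_mean(lam):.3f} (target {target})")
```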

  8. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    SciTech Connect

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  9. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) block the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, faster moving fragments. We have experimentally demonstrated one shrapnel mitigation technique showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  10. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-01

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  11. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-01

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model. PMID:26575545

  12. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers.

    PubMed

    Eriksson, Tobias J R; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single shot signal-to-noise ratio (SNR) ≃ 15 dB in transmit-receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  13. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers

    PubMed Central

    Eriksson, Tobias J. R.; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N.; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single shot signal-to-noise ratio (SNR)≃15 dB in transmit–receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  14. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…

  15. Design and experimental validation of a compact collimated Knudsen source.

    PubMed

    Wouters, Steinar H W; Ten Haaf, Gijs; Mutsaers, Peter H A; Vredenbregt, Edgar J D

    2016-08-01

    In this paper, the design and performance of a collimated Knudsen source, which has the benefit of a simple design over recirculating sources, is discussed. Measurements of the flux, transverse velocity distribution, and brightness of the resulting rubidium beam at different source temperatures were conducted to evaluate the performance. The scaling of the flux and brightness with the source temperature follows the theoretical predictions. The transverse velocity distribution in the transparent operation regime also agrees with the simulated data. The source was tested up to a temperature of 433 K and was able to produce a flux in excess of 10(13) s(-1). PMID:27587111

  16. Optimization of preservatives in a topical formulation using experimental design.

    PubMed

    Rahali, Y; Pensé-Lhéritier, A-M; Mielcarek, C; Bensouda, Y

    2009-12-01

    Optimizing the preservative regime for a preparation requires the antimicrobial effectiveness of several preservative combinations to be determined. In this study, three preservatives were tested: benzoic acid, sorbic acid and benzylic alcohol. Their preservative effects were evaluated using the antimicrobial preservative efficacy test (challenge-test) of the European Pharmacopeia (EP). A D-optimal mixture design was used to provide a maximum of information from a limited number of experiments. The results of this study were analysed with the help of the Design Expert software and enabled us to formulate emulsions satisfying both requirements A and B of the EP.
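
    A D-optimal design of the kind used here selects, from a set of candidate formulations, the runs that maximize det(X'X) for the chosen model. The sketch below shows a generic Fedorov-style exchange pass over a hypothetical simplex-lattice candidate set for a quadratic (Scheffe) mixture model; the three components echo the record's preservatives, but the grid, run count, and algorithm are assumptions, not the authors' Design Expert workflow.

      import numpy as np

      # Candidate mixtures (fractions of benzoic acid, sorbic acid, benzyl alcohol, summing to 1).
      levels = np.linspace(0.0, 1.0, 6)
      candidates = [(a, b, 1 - a - b) for a in levels for b in levels if a + b <= 1 + 1e-9]

      def model_row(p):
          # Scheffe quadratic mixture model: x1, x2, x3, x1*x2, x1*x3, x2*x3.
          x1, x2, x3 = p
          return [x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]

      def log_det(design):
          X = np.array([model_row(p) for p in design])
          sign, ld = np.linalg.slogdet(X.T @ X)
          return ld if sign > 0 else -np.inf

      rng = np.random.default_rng(0)
      design = [candidates[i] for i in rng.choice(len(candidates), size=8, replace=False)]

      # One Fedorov-style exchange pass: swap each run for the candidate that improves
      # det(X'X); in practice the pass is repeated until no swap helps.
      for i in range(len(design)):
          for c in candidates:
              trial = design[:i] + [c] + design[i + 1:]
              if log_det(trial) > log_det(design):
                  design[i] = c
      print(np.round(np.array(design), 2))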

  17. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The paper summarizes the results obtained in an exploratory evaluation of ceramics for automobile thermal reactors. Candidate ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting 1100 hours in engine dynamometer tests and more than 38,600 kilometers (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  18. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The results obtained in an exploratory evaluation of ceramics for automobile thermal reactors are summarized. Candidate ceramic materials were evaluated in several reactor designs by using both engine-dynamometer and vehicle road tests. Silicon carbide contained in a corrugated-metal support structure exhibited the best performance, lasting 1100 hr in engine-dynamometer tests and more than 38,600 km (24000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as those containing silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  19. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    ERIC Educational Resources Information Center

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  20. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  1. Acting Like a Physicist: Student Approach Study to Experimental Design

    ERIC Educational Resources Information Center

    Karelina, Anna; Etkina, Eugenia

    2007-01-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In…

  2. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  3. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.
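
    The "level of effect" ranking described above can be sketched with a response table: for each decision variable, compare the mean response at each level and rank variables by the spread. The array, factor names, and response values below are placeholders for illustration, not the study's river-quality data.

      import numpy as np

      # Hypothetical L4(2^3) screening array for three remediation decision variables
      # (levels coded 0/1) and a placeholder water-quality response for each run.
      design = np.array([[0, 0, 0],
                         [0, 1, 1],
                         [1, 0, 1],
                         [1, 1, 0]])
      factors = ["sewer_interception", "aeration", "dilution_flow"]
      response = np.array([62.0, 71.0, 80.0, 84.0])   # e.g., % reduction in polluted river length

      # Taguchi-style ranking: the factor whose level means differ most has the
      # largest effect and is searched first in the optimization.
      deltas = {}
      for j, name in enumerate(factors):
          means = [response[design[:, j] == lv].mean() for lv in (0, 1)]
          deltas[name] = abs(means[1] - means[0])
      for name, delta in sorted(deltas.items(), key=lambda kv: -kv[1]):
          print(f"{name}: delta = {delta:.1f}")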

  4. Applying the Taguchi method to river water pollution remediation strategy optimization.

    PubMed

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765

  5. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    PubMed Central

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-01-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km. PMID:24739765

  6. Optimization of Physical Working Environment Setting to Improve Productivity and Minimize Error by Taguchi and VIKOR Methods

    NASA Astrophysics Data System (ADS)

    Ilma Rahmillah, Fety

    2016-01-01

    The working environment is one of the factors that contribute to worker performance, especially for continuous and monotonous work. An L9 Taguchi design is used for the inner array of the experiment, which was carried out in a laboratory, while an L4 design is used for the outer array. Four control variables, each at three levels, are used to find the optimal combination of working environment settings. Four responses are measured to determine the effects of the four control factors. The ANOVA results show that the effect of illumination, temperature, and instrumental music on the number of outputs, the number of errors, and the rating of perceived discomfort is significant, with total variance explained of 54.67%, 60.67%, and 75.22%, respectively. The VIKOR method yields experiment 66 as the optimal combination, with the setting A3-B2-C1-D3: illumination of 325-350 lux, temperature of 24-26 °C, fast-tempo instrumental music, and a music intensity of 70-80 dB.
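
    A generic version of the inner-array analysis described above (an L9 with four three-level factors, scored by a larger-the-better signal-to-noise ratio over replicated responses) is sketched below. The response values are invented and the factor labels only loosely follow the record, so this shows the Taguchi main-effects calculation rather than the study's actual analysis.

      import numpy as np

      # Standard L9(3^4) orthogonal array: 9 runs x 4 factors, levels coded 0/1/2.
      L9 = np.array([
          [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
          [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
          [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
      ])
      factors = ["illumination", "temperature", "music_tempo", "music_volume"]

      # Hypothetical output counts per run (two replicate noise conditions each).
      y = np.array([[52, 55], [60, 58], [64, 66], [57, 59], [63, 61],
                    [70, 68], [61, 60], [66, 69], [72, 74]], dtype=float)

      # Larger-the-better signal-to-noise ratio for each run.
      sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))

      # Main effect of each factor = mean S/N at each of its three levels.
      for j, name in enumerate(factors):
          level_means = [sn[L9[:, j] == lv].mean() for lv in range(3)]
          print(name, np.round(level_means, 2), "-> best level:", int(np.argmax(level_means)))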

  7. On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs.

    PubMed

    Sánchez, M S; Sarabia, L A; Ortiz, M C

    2012-11-19

    Experimental designs for a given task should be selected on the basis of the problem being solved and of criteria that measure their quality. There are several such criteria because there are several aspects to be taken into account when making a choice. The most used criteria are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria related to the joint estimation of the coefficients, or the I- and G-criteria related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these several criteria that measure the performance of the design in different aspects. Technically this is a problem of multi-criteria optimization, which can be tackled from different views. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an algorithm based on evolutionary algorithms to find the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them Pareto-optimal in the criteria needed by the user. Besides, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, which is usual in exchange algorithms.
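
    The defining step of such a multi-criteria search is the non-dominance test that keeps only Pareto-optimal designs. A minimal sketch of that filter is given below; the two criteria and the candidate scores are invented for illustration, and the evolutionary search that generates the candidates is omitted.

      import numpy as np

      def pareto_front(scores):
          # scores: (n_designs, n_criteria) array where every criterion is to be
          # minimized (e.g., D- and I-criterion values for each candidate design).
          keep = []
          for i in range(scores.shape[0]):
              dominated = np.any(
                  np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
              )
              if not dominated:
                  keep.append(i)
          return keep

      # Hypothetical criterion values for five candidate designs (rows): [D, I].
      crit = np.array([[1.0, 3.0], [1.2, 2.0], [0.9, 3.5], [1.5, 1.8], [1.3, 2.5]])
      print(pareto_front(crit))   # indices of the Pareto-optimal designs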

  8. Tocorime Apicu: design and validation of an experimental search engine

    NASA Astrophysics Data System (ADS)

    Walker, Reginald L.

    2001-07-01

    In the development of an integrated, experimental search engine, Tocorime Apicu, the incorporation and emulation of the evolutionary aspects of the chosen biological model (honeybees) and the field of high-performance knowledge discovery in databases results in the coupling of diverse fields of research: evolutionary computations, biological modeling, machine learning, statistical methods, information retrieval systems, active networks, and data visualization. The use of computer systems provides inherent sources of self-similar traffic that result from the interaction of file transmission, caching mechanisms, and user-related processes. These user-related processes are initiated by the user, application programs, or the operating system (OS) for the user's benefit. The effect of Web transmission patterns, coupled with these inherent sources of self-similarity associated with the above file system characteristics, provides an environment for studying network traffic. The goal of the study was client-based, but with no user interaction. New methodologies and approaches were needed as network packet traffic increased in the LAN, LAN+WAN, and WAN. Statistical tools and methods for analyzing datasets were used to organize data captured at the packet level for network traffic between individual source/destination pairs. Emulation of the evolutionary aspects of the biological model equips the experimental search engine with an adaptive system model which will eventually have the capability to evolve with an ever-changing World Wide Web environment. The results were generated using a LINUX OS.

  9. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  10. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  11. Experimental characterisation of a novel viscoelastic rectifier design

    PubMed Central

    Ejlebjerg Jensen, Kristian; Szabo, Peter; Okkels, Fridolin; Alves, M. A.

    2012-01-01

    A planar microfluidic system with contractions and obstacles is characterized in terms of anisotropic flow resistance due to viscoelastic effects. The working mechanism is illustrated using streak photography, while the diodicity performance is quantified by pressure drop measurements. The point of maximum performance is found to occur at relatively low elasticity levels, with diodicity around 3.5. Based on a previously published numerical work [Ejlebjerg et al., Appl. Phys. Lett. 100, 234102 (2012)], 2D simulations of the FENE-CR differential constitutive model are also presented, but limited reproducibility and uncertainties of the experimental data prevent a direct comparison at low elasticity, where the flow is essentially two-dimensional. PMID:24324532

  12. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and the capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs, by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  13. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  14. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  15. Shock-driven mixing: Experimental design and initial conditions

    NASA Astrophysics Data System (ADS)

    Friedman, Gavin; Prestridge, Katherine; Mejia-Alvarez, Ricardo; Leftwich, Megan

    2012-03-01

    A new Vertical Shock Tube (VST) has been designed to study shock-induced mixing due to the Richtmyer-Meshkov Instability (RMI) developing on a 3-D multi-mode interface between two gases. These studies characterize how interface contours, gas density difference, and Mach No. affect the ensuing mixing by using simultaneous measurements of velocity/density fields. The VST allows for the formation of a single stably-stratified interface, removing complexities of the dual interface used in prior RMI work. The VST also features a new diaphragmless driver, making feasible larger ensembles of data by reducing intra-shot time, and a larger viewing window allowing new observations of late-time mixing. The initial condition (IC) is formed by a co-flow system, chosen to minimize diffusion at the gas interface. To ensure statistically stationary ICs, a contoured nozzle has been manufactured to form repeatable co-flowing jets that are manipulated by a flapping splitter plate to generate perturbations that span the VST. This talk focuses on the design of the IC flow system and shows initial results characterizing the interface.

  16. Shock-Driven Mixing: Experimental Design and Initial Conditions

    NASA Astrophysics Data System (ADS)

    Friedman, Gavin; Prestridge, Kathy; Mejia-Alvarez, Ricardo; Leftwich, Megan

    2011-06-01

    A new Vertical Shock Tube (VST) has been designed to study shock-induced mixing due to the Richtmyer-Meshkov Instability (RMI) developing on a 3-D multi-mode interface between two gases. These studies characterize how interface contours, gas density difference, and Mach No. affect the ensuing mixing by using simultaneous measurements of velocity/density fields. The VST allows for the formation of a single stably-stratified interface, removing complexities of the dual interface used in prior RMI work. The VST also features a new diaphragmless driver, making feasible larger ensembles of data by reducing intra-shot time, and a larger viewing window allowing new observations of late-time mixing. The initial condition (IC) is formed by a co-flow system, chosen to minimize diffusion at the gas interface. To ensure statistically stationary ICs, a contoured nozzle has been manufactured to form repeatable co-flowing jets that are manipulated by a flapping splitter plate to generate perturbations that span the VST. This talk focuses on the design of the IC flow system and shows initial results characterizing the interface.

  17. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  18. A rational design change methodology based on experimental and analytical modal analysis

    SciTech Connect

    Weinacht, D.J.; Bennett, J.G.

    1993-08-01

    A design methodology that integrates analytical modeling and experimental characterization is presented. This methodology represents a powerful tool for making rational design decisions and changes. An example of its implementation in the design, analysis, and testing of a precision machine-tool support structure is given.

  19. Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation.

    PubMed

    Zhang, Haichong K; Cheng, Alexis; Bottenus, Nick; Guo, Xiaoyu; Trahey, Gregg E; Boctor, Emad M

    2016-04-01

    Ultrasonography is a widely used imaging modality for visualizing anatomical structures because of its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method to extend the available aperture size for SA, called synthetic tracked aperture ultrasound (STRATUS) imaging, in which an ultrasound transducer is swept while its orientation and location are tracked. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. Considering the practical implementation, we estimated the effect of tracking and ultrasound calibration errors on the quality of the final beamformed image through simulation. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality. PMID:27088108
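
    The synthesis step described in the record (coherently combining traces received at tracked probe positions) is, at its simplest, a delay-and-sum over the extended aperture. The sketch below assumes a monostatic geometry, a known speed of sound, and element positions already mapped into a common frame via tracking and calibration; the function and variable names are hypothetical and the real system's beamforming is more involved.

      import numpy as np

      C = 1540.0  # assumed speed of sound in tissue, m/s

      def stratus_das(rf, t0, fs, elem_xyz, pixels):
          # rf       : (n_positions, n_samples) RF traces, one per tracked probe pose
          # t0, fs   : start time (s) and sampling rate (Hz) of each trace
          # elem_xyz : (n_positions, 3) element positions from tracker + calibration
          # pixels   : (n_pixels, 3) image-point coordinates in the same frame
          img = np.zeros(len(pixels))
          for trace, pos in zip(rf, elem_xyz):
              dist = np.linalg.norm(pixels - pos, axis=1)                # one-way distance
              idx = np.round(((2 * dist / C) - t0) * fs).astype(int)     # two-way delay -> sample
              valid = (idx >= 0) & (idx < trace.size)
              img[valid] += trace[idx[valid]]                            # coherent sum
          return img

      # Example with synthetic zeros, just to show the expected shapes:
      img = stratus_das(rf=np.zeros((4, 1024)), t0=0.0, fs=20e6,
                        elem_xyz=np.zeros((4, 3)), pixels=np.array([[0.0, 0.0, 0.03]]))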

  20. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with an added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design - The Ifakara Experimental Huts- and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control-technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts allowing portability and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  1. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods-and particularly discrete-choice experiments (DCEs)-have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus

  2. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods-and particularly discrete-choice experiments (DCEs)-have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus

  3. 2-[(Hydroxymethyl)amino]ethanol in water as a preservative: Study of formaldehyde released by Taguchi's method

    NASA Astrophysics Data System (ADS)

    Wisessirikul, W.; Loykulnant, S.; Montha, S.; Fhulua, T.; Prapainainar, P.

    2016-06-01

    This research studied the quantity of free formaldehyde released from 2-[(hydroxymethyl)amino]ethanol (HAE) in a mixture of DI water and natural rubber latex using a high-performance liquid chromatography (HPLC) technique. The quantity of formaldehyde retained in the solution was cross-checked using a titration technique. The investigated factors were the concentration of the preservative (HAE), pH, and temperature. Taguchi's method was used to design the experiments: an orthogonal array (3 factors, 4 levels each) reduced the number of experiments from all possible combinations to 16. The Minitab program was used for the statistical calculations and for finding a suitable condition for the preservative system. The HPLC studies showed that higher temperature and higher preservative concentration increase the amount of formaldehyde released. The conditions under which the least formaldehyde was released were 1.6% w/v HAE, 4 to 40 °C, and the original pH. Nevertheless, the pH value of NR latex should be more than 10 (the suitable pH value was found to be 13). This preservative can be used to replace current preservative systems and can maintain the quality of latex for long-term storage. The proposed preservative system was also shown to have a reduced environmental toxicity impact.

  4. Patient reactions to personalized medicine vignettes: An experimental design

    PubMed Central

    Butrick, Morgan; Roter, Debra; Kaphingst, Kimberly; Erby, Lori H.; Haywood, Carlton; Beach, Mary Catherine; Levy, Howard P.

    2011-01-01

    Purpose: Translational investigation on personalized medicine is in its infancy. Exploratory studies reveal attitudinal barriers to “race-based medicine” and cautious optimism regarding genetically personalized medicine. This study describes patient responses to hypothetical conventional, race-based, or genetically personalized medicine prescriptions. Methods: Three hundred eighty-seven participants (mean age = 47 years; 46% white) recruited from a Baltimore outpatient center were randomized to this vignette-based experimental study. They were asked to imagine a doctor diagnosing a condition and prescribing them one of three medications. The outcomes are emotional response to vignette, belief in vignette medication efficacy, experience of respect, trust in the vignette physician, and adherence intention. Results: Race-based medicine vignettes were appraised more negatively than conventional vignettes across the board (Cohen’s d = −0.51, −0.57, −0.64; P < 0.001). Participants rated genetically personalized comparably with conventional medicine (−0.14, −0.15, −0.17; P = 0.47), with the exception of reduced adherence intention to genetically personalized medicine (Cohen’s d = −0.38, −0.41, −0.44; P = 0.009). This relative reluctance to take genetically personalized medicine was pronounced for racial minorities (Cohen’s d = −0.38, −0.31, −0.25; P = 0.02) and was related to trust in the vignette physician (change in R^2 = 0.23, P < 0.001). Conclusions: This study demonstrates a relative reluctance to embrace personalized medicine technology, especially among racial minorities, and highlights enhancement of adherence through improved doctor-patient relationships. PMID:21270639

  5. Estimating intervention effects across different types of single-subject experimental designs: empirical illustration.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S Natasha; Van den Noortgate, Wim

    2015-03-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs often focuses on combining simple AB phase designs or multiple-baseline designs. We discuss the estimation of the average intervention effect estimate across different types of single-subject experimental designs using several multilevel meta-analytic models. We illustrate the different models using a reanalysis of a meta-analysis of single-subject experimental designs (Heyvaert, Saenen, Maes, & Onghena, in press). The intervention effect estimates using univariate 3-level models differ from those obtained using a multivariate 3-level model that takes the dependence between effect sizes into account. Because different results are obtained and the multivariate model has multiple advantages, including more information and smaller standard errors, we recommend that researchers use the multivariate multilevel model to meta-analyze studies that utilize different single-subject designs.
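
    As a rough illustration of the kind of model involved, the sketch below fits a random-effects (study-level) model to hypothetical case-level effect sizes with statsmodels. The analyses in the record are three-level (and multivariate) models with within-case observations at level 1, so this two-level sketch only conveys the general structure, not the authors' method.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical standardized effect sizes, one per case, nested within studies.
      df = pd.DataFrame({
          "study":  ["s1"]*4 + ["s2"]*3 + ["s3"]*4 + ["s4"]*3,
          "effect": [0.8, 1.1, 0.9, 1.0,
                     1.4, 1.2, 1.3,
                     0.5, 0.7, 0.6, 0.4,
                     1.0, 0.9, 1.1],
      })

      # Two-level random-effects model: cases (level 1) nested within studies (level 2).
      # The fixed intercept estimates the average intervention effect across studies.
      model = smf.mixedlm("effect ~ 1", data=df, groups="study")
      result = model.fit()
      print(result.params["Intercept"], result.bse["Intercept"])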

  6. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to insure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  7. Strong Lens Time Delay Challenge. I. Experimental Design

    NASA Astrophysics Data System (ADS)

    Dobler, Gregory; Fassnacht, Christopher D.; Treu, Tommaso; Marshall, Phil; Liao, Kai; Hojjati, Alireza; Linder, Eric; Rumbaugh, Nicholas

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~10^3 strongly lensed quasars. In an effort to assess the present capabilities of the community, to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders," each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  8. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    SciTech Connect

    Dobler, Gregory; Fassnacht, Christopher D.; Rumbaugh, Nicholas; Treu, Tommaso; Liao, Kai; Marshall, Phil; Hojjati, Alireza; Linder, Eric

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ∼10^3 strongly lensed quasars. In an effort to assess the present capabilities of the community, to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  9. Bayesian experimental design of a multichannel interferometer for Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hirsch, M.; Kornejew, P.

    2008-10-01

    Bayesian experimental design (BED) is a framework, based on probability theory, for the optimization of diagnostics. In this work it is applied to the design of a multichannel interferometer at the Wendelstein 7-X stellarator experiment. BED makes it possible to compare diverse designs quantitatively, which is shown for beam-line designs resulting from different plasma configurations. The applicability of the method is discussed with respect to its computational effort.
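
    In the common linear-Gaussian approximation, the BED utility of a candidate diagnostic layout reduces to the expected information gain (the mutual information between measurements and parameters), which can be evaluated in closed form. The sketch below scores two hypothetical three-chord interferometer layouts over a toy four-bin density profile; the geometry matrices, prior, and noise level are assumptions for illustration, not the W7-X design values.

      import numpy as np

      def expected_info_gain(G, prior_cov, noise_var):
          # Mutual information between data y = G @ theta + noise and parameters theta
          # for a Gaussian prior and noise; data-independent in the linear-Gaussian case,
          # so a single evaluation scores a candidate chord geometry.
          post_prec = np.linalg.inv(prior_cov) + G.T @ G / noise_var
          post_cov = np.linalg.inv(post_prec)
          return 0.5 * (np.linalg.slogdet(prior_cov)[1] - np.linalg.slogdet(post_cov)[1])

      # Toy example: theta = density on 4 radial bins, two hypothetical 3-chord layouts.
      prior_cov = np.eye(4)
      G_edge = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]], dtype=float)
      G_core = np.array([[1, 1, 1, 1], [0, 1, 1, 0], [0, 1, 1, 1]], dtype=float)
      for name, G in [("edge-weighted", G_edge), ("core-weighted", G_core)]:
          print(name, round(expected_info_gain(G, prior_cov, noise_var=0.1), 2))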

  10. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.

    2013-11-01

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c^2 mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  11. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.; Collaboration: DarkLight Collaboration

    2013-11-07

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c² mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  12. Neuroimaging in aphasia treatment research: Issues of experimental design for relating cognitive to neural changes

    PubMed Central

    Rapp, Brenda; Caplan, David; Edwards, Susan; Visch-Brink, Evy; Thompson, Cynthia K.

    2012-01-01

    The design of functional neuroimaging studies investigating the neural changes that support treatment-based recovery of targeted language functions in acquired aphasia faces a number of challenges. In this paper, we discuss these challenges and focus on experimental tasks and experimental designs that can be used to address the challenges, facilitate the interpretation of results and promote integration of findings across studies. PMID:22974976

  13. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  14. Design studies for the transmission simulator method of experimental dynamic substructuring.

    SciTech Connect

    Mayes, Randall Lee; Arviso, Michael

    2010-05-01

    In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.

  15. [Diagnosis of liver diseases by classification of laboratory signal factor pattern findings with the Mahalanobis·Taguchi Adjoint method].

    PubMed

    Nakajima, Hisato; Yano, Kouya; Uetake, Shinichirou; Takagi, Ichiro

    2012-02-01

    There are many autoimmune liver diseases in which diagnosis is difficult so that overlap is accepted, and this negatively affects treatment. The initial diagnosis is therefore important for later treatment and convalescence. We distinguished autoimmune cholangitis, autoimmune hepatitis and primary biliary cirrhosis by the Mahalanobis·Taguchi Adjoint (MTA) method in the Mahalanobis·Taguchi system and analyzed the pattern of factor effects by the MTA method. As a result, the characteristic factor effect pattern of each disease was classified, enabling the qualitative evaluation of cases including overlapping cases which were difficult to diagnose.
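
    The core quantity in Mahalanobis-Taguchi style classification is the Mahalanobis distance of a case from a reference ("normal") group, computed on standardized variables. A minimal sketch with made-up laboratory data is shown below; the factor-effect (adjoint) analysis described in the paper is not reproduced.

        import numpy as np

        def mahalanobis_distances(reference, cases):
            """Scaled squared Mahalanobis distance of each case from a reference
            group (standardized variables, divided by the number of variables,
            as is customary in the Mahalanobis-Taguchi system)."""
            mu = reference.mean(axis=0)
            sd = reference.std(axis=0, ddof=1)
            z_ref = (reference - mu) / sd
            corr_inv = np.linalg.inv(np.corrcoef(z_ref, rowvar=False))
            z = (cases - mu) / sd
            return np.einsum('ij,jk,ik->i', z, corr_inv, z) / reference.shape[1]

        rng = np.random.default_rng(42)
        healthy = rng.normal(size=(50, 4))            # 50 reference subjects, 4 laboratory markers
        patients = rng.normal(1.0, 1.5, size=(5, 4))  # hypothetical cases to classify
        print(mahalanobis_distances(healthy, patients))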

  16. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  17. Experimental Design for Local School Districts (July 18-August 26, 1966). Final Report.

    ERIC Educational Resources Information Center

    Norton, Daniel P.

    A 6-week summer institute on experimental design was conducted for public school personnel who had been designated by their school administrations as having responsibility for research together with some time released for devotion to research. Of the 32, 17 came from Indiana, 15 from 12 other states. Lectures on statistical principles of design…

  18. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  19. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  20. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  1. Exploiting Distance Technology to Foster Experimental Design as a Neglected Learning Objective in Labwork in Chemistry

    ERIC Educational Resources Information Center

    d'Ham, Cedric; de Vries, Erica; Girault, Isabelle; Marzin, Patricia

    2004-01-01

    This paper deals with the design process of a remote laboratory for labwork in chemistry. In particular, it focuses on the mutual dependency of theoretical conjectures about learning in the experimental sciences and technological opportunities in creating learning environments. The design process involves a detailed analysis of the expert task and…

  2. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  3. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    PubMed

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
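
    For the simplest case reviewed, the one-way ('fixed effect') ANOVA, a minimal sketch with simulated data for three hypothetical treatment groups is shown below.

        import numpy as np
        from scipy import stats

        # One-way ANOVA: do the group means differ for three hypothetical treatments?
        rng = np.random.default_rng(1)
        group_a = rng.normal(0.00, 0.10, 12)
        group_b = rng.normal(0.05, 0.10, 12)
        group_c = rng.normal(0.12, 0.10, 12)
        f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")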

  4. An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and Taguchi Orthogonal Array

    NASA Astrophysics Data System (ADS)

    Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh

    2016-06-01

    P-91 is a modified 9Cr-1Mo steel. Fabricated structures and components of P-91 have many applications in the power and chemical industries owing to excellent properties such as high-temperature stress-corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by Tungsten Inert Gas (TIG) welding. However, the process has limitations regarding weld penetration. The success of a welding process lies in fabricating with a combination of parameters that gives maximum weld penetration and minimum weld width. To investigate the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on 6 mm thick P91 plates in accordance with a Taguchi L9 design. Welding current, welding speed and gas flow rate were the three control variables in the investigation. After autogenous TIG welding, the weld width, weld penetration and weld area were measured by an image analysis technique developed for the study. The maximum error of these measurements relative to the Leica-Q-Win-V3 software installed on an optical microscope was only 2%. The measurements with the developed software, unlike measurements under a microscope, required minimal human intervention. An analysis of variance (ANOVA) confirmed the significance of the selected parameters. Thereafter, Taguchi's method was used to trade off maximum penetration against minimum weld width while keeping the weld area at a minimum.
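
    A minimal sketch of the kind of Taguchi L9 analysis described here: a standard L9 orthogonal array for three three-level factors, a larger-the-better signal-to-noise ratio, and factor main effects. The array layout is standard, but the penetration values are illustrative placeholders, not the study's data.

        import numpy as np

        # L9(3^3) orthogonal array for current (A), speed (B), gas flow (C); levels coded 0/1/2
        L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                       [1, 0, 1], [1, 1, 2], [1, 2, 0],
                       [2, 0, 2], [2, 1, 0], [2, 2, 1]])
        penetration = np.array([2.1, 2.4, 2.6, 2.9, 3.3, 2.8, 3.6, 3.1, 3.4])  # mm, made up

        # Larger-the-better signal-to-noise ratio for a single replicate per run
        sn = -10 * np.log10(1.0 / penetration ** 2)

        # Main effect of each factor: mean S/N at each level, and the best level
        for j, name in enumerate("ABC"):
            means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
            print(name, np.round(means, 2), "-> best level", int(np.argmax(means)))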

  5. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.

  6. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  7. Experimental validation of optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.

    1993-01-01

    An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers, including static, dynamic, and LQG dissipative controllers. The nominal phase-0 of the controls-structures interaction evolutionary model (CEM) structure is redesigned to minimize the average control power required to maintain a specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesigned structure, phase-1 CEM, was assembled and tested against the phase-0 CEM. It is analytically and experimentally demonstrated that integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control designs is demonstrated.

  8. Experimental design for stable genetic manipulation in mammalian cell lines: lentivirus and alternatives.

    PubMed

    Shearer, Robert F; Saunders, Darren N

    2015-01-01

    The use of third-generation lentiviral vectors is now commonplace in most areas of basic biology. These systems provide a fast, efficient means for modulating gene expression, but experimental design needs to be carefully considered to minimize potential artefacts arising from off-target effects and other confounding factors. This review offers a starting point for those new to lentiviral-based vector systems, addressing the main issues involved with the use of lentiviral systems in vitro and outlines considerations which should be taken into account during experimental design. Factors such as selecting an appropriate system and controls, and practical titration of viral transduction are important considerations for experimental design. We also briefly describe some of the more recent advances in genome editing technology. TALENs and CRISPRs offer an alternative to lentivirus, providing endogenous gene editing with reduced off-target effects often at the expense of efficiency.

  9. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.
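
    For orientation only, the sketch below computes a continuous-time LQR state-feedback gain for a toy two-state model. The study's formulation (stochastic optimal control minimizing the expected unstart frequency) is more elaborate, and the matrices here are hypothetical.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Toy 2-state "inlet" model: state = [shock-position error, its rate]
        A = np.array([[0.0, 1.0], [-2.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        Q = np.diag([10.0, 1.0])   # penalize position error more heavily than its rate
        R = np.array([[1.0]])      # control-effort penalty

        P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
        K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain, u = -K x
        print(K)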

  10. Development of the Neuron Assessment for Measuring Biology Students’ Use of Experimental Design Concepts and Representations

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students’ competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new “experimentation assessments,” 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  11. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy J

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new "experimentation assessments," 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  12. Perspectives on Prediction Variance and Bias in Developing, Assessing, and Comparing Experimental Designs

    SciTech Connect

    Piepel, Gregory F.

    2010-12-01

    The vast majority of response surface methods used in practice to develop, assess, and compare experimental designs focus on variance properties of designs. Because response surface models only approximate the true unknown relationships, models are subject to bias errors as well as variance errors. Beginning with the seminal paper of Box and Draper (1959) and over the subsequent 50 years, methods that consider bias and mean-squared-error (variance and bias) properties of designs have been presented in the literature. However, these methods are not widely implemented in software and are not routinely used to develop, assess, and compare experimental designs in practice. Methods for developing, assessing, and comparing response surface designs that account for variance properties are reviewed. Brief synopses of publications that consider bias or mean-squared-error properties are provided. The difficulties and approaches for addressing bias properties of designs are summarized. Perspectives on experimental design methods that account for bias and/or variance properties and on future needs are presented.
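
    Variance-based comparison of designs is often done through the scaled prediction variance N x0'(X'X)^(-1) x0. A minimal sketch comparing two hypothetical one-factor, first-order designs is shown below; the bias and mean-squared-error criteria discussed in the abstract are not reproduced here.

        import numpy as np

        def scaled_prediction_variance(X, x0):
            """Scaled prediction variance N * x0' (X'X)^-1 x0 for a model matrix X,
            a standard variance-based criterion for comparing designs."""
            N = X.shape[0]
            return N * x0 @ np.linalg.inv(X.T @ X) @ x0

        # First-order model y = b0 + b1*x for two candidate four-run designs
        design_a = np.column_stack([np.ones(4), [-1, -1, 1, 1]])          # replicated endpoints
        design_b = np.column_stack([np.ones(4), [-1, -1/3, 1/3, 1]])      # equally spaced points
        for x in (0.0, 1.0):
            print(x, scaled_prediction_variance(design_a, np.array([1.0, x])),
                     scaled_prediction_variance(design_b, np.array([1.0, x])))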

  13. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel

    PubMed Central

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-01-01

    In this Data in Brief paper, a central composite experimental design was planned to collect the surface roughness of an end milling operation of AISI 1045 steel. The surface roughness values are supposed to suffer some kind of variation due to the action of several factors. The main objective here was to present a multivariate experimental design and data collection including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explores the influence of noise factors on the process performance. PMID:26909374

  14. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1992-01-01

    Off design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three dimensional inviscid code. The off design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  15. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1991-01-01

    Off design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three dimensional inviscid code. The off design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  16. The effectiveness of family planning programs evaluated with true experimental designs.

    PubMed Central

    Bauman, K E

    1997-01-01

    OBJECTIVES: This paper describes the magnitude of effects for family planning programs evaluated with true experimental designs. METHODS: Studies that used true experimental designs to evaluate family planning programs were identified and their results subjected to meta-analysis. RESULTS: For the 14 studies with the information needed to calculate effect size, the Pearson r between program and effect variables ranged from -.08 to .09 and averaged .08. CONCLUSIONS: The programs evaluated in the studies considered have had, on average, smaller effects than many would assume and desire. PMID:9146451
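
    One common way to pool Pearson r effect sizes across studies is to average Fisher-z transformed values weighted by sample size. The sketch below is illustrative with made-up inputs and is not necessarily the averaging procedure used in this meta-analysis.

        import numpy as np

        def mean_effect_fisher_z(r_values, sample_sizes):
            """Sample-size weighted mean Pearson r via Fisher's z transform."""
            r = np.asarray(r_values, dtype=float)
            n = np.asarray(sample_sizes, dtype=float)
            z = np.arctanh(r)                               # Fisher z transform
            z_bar = np.sum((n - 3) * z) / np.sum(n - 3)     # inverse-variance (n - 3) weights
            return np.tanh(z_bar)                           # back-transform to r

        print(mean_effect_fisher_z([-0.08, 0.02, 0.09], [200, 150, 400]))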

  17. The consequences of consumer diversity loss: different answers from different experimental designs.

    PubMed

    Byrnes, Jarrett E; Stachowicz, John J

    2009-10-01

    Predators are often the most vulnerable group to extinction, yet the consequences of changing predator diversity are poorly understood. One source of confusion has been different experimental designs. The multiple-predator effects literature typically employs an additive design, while the biodiversity ecosystem function literature typically uses a replacement design. Separately, these designs each detect only a subset of the changes in food web interactions caused by predator loss. Here, we measure the impact of consumer diversity on sessile marine invertebrates using a combination additive-replacement design. We couple this with a meta-analysis of previous combination experiments. We use these two approaches to explore how each design can detect different types of interactions among predators. We find that, while high diversity does lead to more negative interspecific interactions, the strength of these interactions is often weaker than negative intraspecific interactions caused by increasing the density of a single species alone. We conclude that a hybrid design is the optimal method to explore the mechanisms behind the effects of changing predator diversity. If researchers merely want to know the consequences of changing predator diversity, at a bare minimum, the experimental design must mimic the actual changes in both predator density and diversity in their system of interest. However, only a hybrid design can distinguish the consequences of shifting the balance of interspecific and intraspecific interactions within a community, an issue of great importance when considering both natural diversity loss and pest biocontrol.

  18. Process optimization for Ni(II) removal from wastewater by calcined oyster shell powders using Taguchi method.

    PubMed

    Yen, Hsing Yuan; Li, Jun Yan

    2015-09-15

    Waste oyster shells cause great environmental concerns and nickel is a harmful heavy metal. Therefore, we applied the Taguchi method to take care of both issues by optimizing the controllable factors for Ni(II) removal by calcined oyster shell powders (OSP), including the pH (P), OSP calcined temperature (T), Ni(II) concentration (C), OSP dose (D), and contact time (t). The results show that their percentage contribution in descending order is P (64.3%) > T (18.9%) > C (8.8%) > D (5.1%) > t (1.7%). The optimum condition is pH of 10 and OSP calcined temperature of 900 °C. Under the optimum condition, the Ni(II) can be removed almost completely; the higher the pH, the more the precipitation; the higher the calcined temperature, the more the adsorption. The latter is due to the large number of porosities created at the calcination temperature of 900 °C. The porosities generate a large amount of cavities which significantly increase the surface area for adsorption. A multiple linear regression equation obtained to correlate Ni(II) removal with the controllable factors is: Ni(II) removal(%) = 10.35 × P + 0.045 × T - 1.29 × C + 19.33 × D + 0.09 × t - 59.83. This equation predicts Ni(II) removal well and can be used for estimating Ni(II) removal during the design stage of Ni(II) removal by calcined OSP. Thus, OSP can be used to remove nickel effectively and the formula for removal prediction is developed for practical applications.
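
    The reported regression equation can be coded directly, as sketched below. Units for the concentration, dose, and time terms are assumed to follow the study's conventions, and the unconstrained linear form can predict values above 100%.

        def ni_removal_percent(pH, calc_temp_c, conc, dose, time_min):
            """Multiple linear regression reported in the abstract for Ni(II) removal
            by calcined oyster shell powder (variable units assumed from the study)."""
            return (10.35 * pH + 0.045 * calc_temp_c - 1.29 * conc
                    + 19.33 * dose + 0.09 * time_min - 59.83)

        # Optimum condition quoted in the abstract: pH 10, calcination at 900 degrees C
        print(ni_removal_percent(pH=10, calc_temp_c=900, conc=1, dose=1, time_min=30))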

  19. Using R in experimental design with BIBD: An application in health sciences

    NASA Astrophysics Data System (ADS)

    Oliveira, Teresa A.; Francisco, Carla; Oliveira, Amílcar; Ferreira, Agostinho

    2016-06-01

    When implementing an experimental design in any field, the experimenter must pay particular attention to, and look for the best strategies in, the following steps: planning the design selection, conducting the experiments, collecting the observed data, and proceeding to analysis and interpretation of results. The focus is on providing both a deep understanding of the problem under research and a powerful experimental process at a reduced cost. Mainly because it allows variation sources to be separated, experimental design has long been recommended in the health sciences. Particular attention has been devoted to block designs and, more precisely, to balanced incomplete block designs; their relevance stems from the fact that these designs allow a number of treatments larger than the block size to be tested simultaneously. Our example refers to a possible study of inter-rater reliability for Parkinson's disease, using the UPDRS (Unified Parkinson's Disease Rating Scale) to test whether there are significant differences between the specialists who evaluate the patients' performances. Statistical studies on this disease were described, for example, in Richards et al. (1994), where the authors investigate the inter-rater reliability of the Unified Parkinson's Disease Rating Scale Motor Examination. We consider a simulation of a practical situation in which the patients were observed by different specialists and the UPDRS score assessing the impact of Parkinson's disease in each patient was recorded. Assigning treatments to the subjects following a particular BIBD(9,24,8,3,2) structure, we illustrate that BIB designs can be used as a powerful tool to solve emerging problems in this area. Since a structure with repeated blocks allows some block contrasts to have minimum variance (see Oliveira et al., 2006), the design with cardinality 12 was selected for the example. R software was used for the computations.
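
    The BIBD(9, 24, 8, 3, 2) structure mentioned here can be checked against the standard balanced-incomplete-block parameter identities. A minimal sketch follows (the study used R, but for consistency with the other examples the check is written in Python).

        def is_valid_bibd(v, b, r, k, lam):
            """Check the necessary parameter identities of a balanced incomplete
            block design: b*k = v*r and lam*(v - 1) = r*(k - 1)."""
            return b * k == v * r and lam * (v - 1) == r * (k - 1)

        # The BIBD(9, 24, 8, 3, 2) structure from the abstract
        print(is_valid_bibd(v=9, b=24, r=8, k=3, lam=2))   # True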

  20. Effects of experimental design on calibration curve precision in routine analysis.

    PubMed

    Pimentel, M F; Neto, B de B; Saldanha, T C; Araújo, M C

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data.
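
    The width of the calibration-curve confidence interval at a given concentration depends on where the standards are placed. A minimal sketch comparing two hypothetical standard placements for a straight-line calibration (residual standard deviation fixed at 1 so only the design matters) is shown below.

        import numpy as np
        from scipy import stats

        def ci_halfwidth(x_standards, x0, s=1.0, alpha=0.05):
            """Half-width of the confidence interval of a straight-line calibration
            at concentration x0, for a given placement of the standards."""
            x = np.asarray(x_standards, dtype=float)
            n, xbar = x.size, x.mean()
            sxx = np.sum((x - xbar) ** 2)
            t = stats.t.ppf(1 - alpha / 2, n - 2)
            return t * s * np.sqrt(1.0 / n + (x0 - xbar) ** 2 / sxx)

        evenly_spaced = np.linspace(0, 10, 6)
        extreme_heavy = np.array([0, 0, 0, 10, 10, 10])   # D-optimal placement for a straight line
        for x0 in (0, 5, 10):
            print(x0, round(ci_halfwidth(evenly_spaced, x0), 3),
                      round(ci_halfwidth(extreme_heavy, x0), 3))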

  1. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.; Crum, Jarrod V.

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer

  2. β-galactosidase Production by Aspergillus niger ATCC 9142 Using Inexpensive Substrates in Solid-State Fermentation: Optimization by Orthogonal Arrays Design

    PubMed Central

    Kazemi, Samaneh; Khayati, Gholam; Faezi-Ghasemi, Mohammad

    2016-01-01

    Background: Enzymatic hydrolysis of lactose is one of the most important biotechnological processes in the food industry, and it is accomplished by the enzyme β-galactosidase (β-gal, β-D-galactoside galactohydrolase, EC 3.2.1.23), trivially called lactase. Orthogonal array design is an appropriate option for the optimization of biotechnological processes for the production of microbial enzymes. Methods: Design of experiments (DOE) methodology using a Taguchi orthogonal array (OA) was employed to screen the most significant levels of parameters, including the solid substrates (wheat straw, rice straw, and peanut pod), the carbon/nitrogen (C/N) ratios, the incubation time, and the inducer. The level of β-gal production was measured by a photometric enzyme activity assay using the artificial substrate ortho-Nitrophenyl-β-D-galactopyranoside. Results: The results showed that a C/N ratio of 0.2% (w/v), an incubation time of 144 hours, and wheat straw as the solid substrate were the best conditions determined by the design of experiments using the Taguchi approach. Conclusion: Our findings showed that the use of rice straw and peanut pod as solid-state substrates led to a 2.041-fold increase in the production of the enzyme, as compared to rice straw. In addition, the presence of an inducer did not have any significant impact on the enzyme production levels.

  3. 78 FR 5162 - Designation of a Nonessential Experimental Population of Central Valley Spring-Run Chinook Salmon...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-24

    ... January 16, 2013 we, NMFS, published a proposed rule (78 FR 3381) to designate a nonessential experimental... Experimental Population of Central Valley Spring-Run Chinook Salmon Below Friant Dam in the San Joaquin River..., published a proposed rule to designate a nonessential experimental population of Central Valley...

  4. EXPERIMENTAL PROGRAM IN ENGINEERING AND DESIGN DATA PROCESSING TECHNOLOGY. FINAL REPORT.

    ERIC Educational Resources Information Center

    KOHR, RICHARD L.; WOLFE, GEORGE P.

    AN EXPERIMENTAL PROGRAM IN ENGINEERING AND DESIGN DATA PROCESSING TECHNOLOGY WAS UNDERTAKEN TO DEVELOP A PROPOSED CURRICULUM OUTLINE AND ADMISSION STANDARDS FOR OTHER INSTITUTIONS IN THE PLANNING OF PROGRAMS TO TRAIN COMPUTER PROGRAMMERS. OF THE FIRST CLASS OF 26 STUDENTS, 17 COMPLETED THE PROGRAM AND 12 (INCLUDING ONE WHO DID NOT GRADUATE) WERE…

  5. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  6. Design and Experimental Investigation of a Single-stage Turbine with a Downstream Stator

    NASA Technical Reports Server (NTRS)

    Plohr, Henry W; Holeski, Donald E; Forrette, Robert E

    1957-01-01

    The high-work-output turbine had an experimental efficiency of 0.830 at the design point and a maximum efficiency of 0.857. The downstream stator was effective in providing axial flow out of the turbine for almost the whole range of turbine operation.

  7. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  8. Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    Schlueter, Mark A.; D'Costa, Allison R.

    2013-01-01

    Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the…

  9. Guided Inquiry in a Biochemistry Laboratory Course Improves Experimental Design Ability

    ERIC Educational Resources Information Center

    Goodey, Nina M.; Talgar, Cigdem P.

    2016-01-01

    Many biochemistry laboratory courses expose students to laboratory techniques through pre-determined experiments in which students follow stepwise protocols provided by the instructor. This approach fails to provide students with sufficient opportunities to practice experimental design and critical thinking. Ten inquiry modules were created for a…

  10. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  11. Experimental design applied to the formulation of lipsticks with particular features.

    PubMed

    Zanotti, F; Masiello, S; Bader, S; Guarneri, M; Vojnovic, D

    1998-08-01

    In our work a non-classical experimental design was applied to obtain lipsticks endowed with particular characteristics. Our aim was to formulate lipsticks that leave a brilliant and shiny colour application and have a transparent look. The emollient substances and the waxes (consistency factors) were identified as the main variables of the system. A two-phase experimental strategy was worked out: the optimal quantities of consistency factors were selected using a Doehlert experimental matrix, whereas the correct mixtures of emollients were determined using a Scheffé simplex-centroid design. These two designs were combined and a set of 49 experiments was obtained. The experiments carried out allowed the definition of a zone of two phases in which the objectives were attained: the correct types and appropriate quantities of emollients and waxes were determined. To find a possible correlation between some mixtures and the lipsticks' sensorial behaviour, differential scanning calorimetry was used. These results, in addition to those obtained using the experimental design, allowed us to select the best lipstick formula. (c) Rapid Science Ltd. 1998. PMID:18505505
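
    The Scheffé simplex-centroid design mentioned here is straightforward to generate: all blends in which m of the q mixture components are present in equal proportions, for m = 1..q. A minimal sketch for three components is shown below (the actual emollients and their constraints are not specified).

        from itertools import combinations
        import numpy as np

        def simplex_centroid(q):
            """Points of a Scheffe simplex-centroid design for q mixture components:
            every blend with m components at equal proportion 1/m, m = 1..q."""
            pts = []
            for m in range(1, q + 1):
                for idx in combinations(range(q), m):
                    p = np.zeros(q)
                    p[list(idx)] = 1.0 / m
                    pts.append(p)
            return np.array(pts)

        print(simplex_centroid(3))   # 7 blends: 3 pure, 3 binary 50:50, 1 overall centroid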

  12. An Experimental Two-Way Video Teletraining System: Design, Development and Evaluation.

    ERIC Educational Resources Information Center

    Simpson, Henry; And Others

    1991-01-01

    Describes the design, development, and evaluation of an experimental two-way video teletraining (VTT) system by the Navy that consisted of two classrooms linked by a land line to enable two-way audio/video communication. Trends in communication and computer technology for training are described, and a cost analysis is included. (12 references)…

  13. A Course on Experimental Design for Different University Specialties: Experiences and Changes over a Decade

    ERIC Educational Resources Information Center

    Martinez Luaces, Victor; Velazquez, Blanca; Dee, Valerie

    2009-01-01

    We analyse the origin and development of an Experimental Design course which has been taught in several faculties of the Universidad de la Republica and other institutions in Uruguay, over a 10-year period. At the end of the course, students were assessed by carrying out individual work projects on real-life problems, which was innovative for…

  14. Building upon the Experimental Design in Media Violence Research: The Importance of Including Receiver Interpretations.

    ERIC Educational Resources Information Center

    Potter, W. James; Tomasello, Tami K.

    2003-01-01

    Argues that the inclusion of viewer interpretation variables in experimental design and analysis procedures can greatly increase the methodology's ability to explain variance. Focuses attention on the between-group differences, while an analysis of how individual participants interpret the cues in the stimulus material focused attention on the…

  15. Experimental design applied to the formulation of lipsticks with particular features.

    PubMed

    Zanotti, F; Masiello, S; Bader, S; Guarneri, M; Vojnovic, D

    1998-08-01

    In our work a non-classical experimental design was applied to obtain lipsticks endowed with particular characteristics. Our aim was to formulate lipsticks that leave a brilliant and shiny colour application and have a transparent look. The emollient substances and the waxes (consistency factors) were identified as the main variables of the system. A two-phase experimental strategy was worked out: the optimal quantities of consistency factors were selected using a Doehlert experimental matrix, whereas the correct mixtures of emollients were determined using a Scheffé simplex-centroid design. These two designs were combined and a set of 49 experiments was obtained. The experiments carried out allowed the definition of a zone of two phases in which the objectives were attained: the correct types and appropriate quantities of emollients and waxes were determined. To find a possible correlation between some mixtures and the lipsticks' sensorial behaviour, differential scanning calorimetry was used. These results, in addition to those obtained using the experimental design, allowed us to select the best lipstick formula. (c) Rapid Science Ltd. 1998.

  16. Whither Instructional Design and Teacher Training? The Need for Experimental Research

    ERIC Educational Resources Information Center

    Gropper, George L.

    2015-01-01

    This article takes a contrarian position: an "instructional design" or "teacher training" model, because of the sheer number of its interconnected parameters, is too complex to assess or to compare with other models. Models may not be the way to go just yet. This article recommends instead prior experimental research on limited…

  17. Multiple Measures of Juvenile Drug Court Effectiveness: Results of a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Rodriguez, Nancy; Webb, Vincent J.

    2004-01-01

    Prior studies of juvenile drug courts have been constrained by small samples, inadequate comparison groups, or limited outcome measures. The authors report on a 3-year evaluation that examines the impact of juvenile drug court participation on recidivism and drug use. A quasi-experimental design is used to compare juveniles assigned to drug court…

  18. Quiet Clean Short-haul Experimental Engine (QCSEE) Over The Wing (OTW) design report

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The design, fabrication, and testing of two experimental high bypass geared turbofan engines and propulsion systems for short haul passenger aircraft are described. The propulsion technology required for future externally blown flap aircraft with engines located both under the wing and over the wing is demonstrated. Composite structures and digital engine controls are among the topics included.

  19. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not…

  20. SELF-INSTRUCTIONAL SUPPLEMENTS FOR A TELEVISED PHYSICS COURSE, STUDY PLAN AND EXPERIMENTAL DESIGN.

    ERIC Educational Resources Information Center

    KLAUS, DAVID J.; LUMSDAINE, ARTHUR A.

    THE INITIAL PHASES OF A STUDY OF SELF-INSTRUCTIONAL AIDS FOR A TELEVISED PHYSICS COURSE WERE DESCRIBED. THE APPROACH, EXPERIMENTAL DESIGN, PROCEDURE, AND TECHNICAL ASPECTS OF THE STUDY PLAN WERE INCLUDED. THE MATERIALS WERE PREPARED TO SUPPLEMENT THE SECOND SEMESTER OF HIGH SCHOOL PHYSICS. THE MATERIAL COVERED STATIC AND CURRENT ELECTRICITY,…

  1. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
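
    A familiar analogue of the small-sample bias discussed here is Hedges' correction for a standardized mean difference. The sketch below shows that simple correction only for orientation; the paper's own corrections for single-subject designs are different and more involved.

        def hedges_correction(d, df):
            """Small-sample bias correction for a standardized mean difference:
            g = J * d with J = 1 - 3 / (4*df - 1)."""
            j = 1.0 - 3.0 / (4.0 * df - 1.0)
            return j * d

        # With few measurement occasions (small df) the corrected effect shrinks noticeably
        print(hedges_correction(d=0.8, df=8))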

  2. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    USGS Publications Warehouse

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish per container when estimating tagging mortality. We also investigated effects of control fish survival and how among-container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among-container variability in control fish mortality exceeded 10% of the mean. We provided programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
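
    The container trade-off can be explored with a small Monte Carlo power check, as sketched below. This is an illustrative simulation with assumed survival rates and container-to-container variability, not the paper's published code.

        import numpy as np
        from scipy.stats import ttest_ind

        def prm_power(n_containers, fish_per_container, control_surv, added_mort,
                      container_sd=0.05, n_sim=2000, alpha=0.05, seed=0):
            """Fraction of simulated experiments in which an added treatment mortality
            is detected, for a given split of fish into containers (illustrative)."""
            rng = np.random.default_rng(seed)
            detected = 0
            for _ in range(n_sim):
                p_ctrl = np.clip(rng.normal(control_surv, container_sd, n_containers), 0, 1)
                p_trt = np.clip(rng.normal(control_surv - added_mort, container_sd, n_containers), 0, 1)
                surv_ctrl = rng.binomial(fish_per_container, p_ctrl) / fish_per_container
                surv_trt = rng.binomial(fish_per_container, p_trt) / fish_per_container
                detected += ttest_ind(surv_ctrl, surv_trt).pvalue < alpha
            return detected / n_sim

        # Same total fish, different allocation: many small vs. few large containers
        print(prm_power(10, 20, control_surv=0.95, added_mort=0.10),
              prm_power(4, 50, control_surv=0.95, added_mort=0.10))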

  3. Bayesian experimental design for identification of model propositions and conceptual model uncertainty reduction

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2015-09-01

    The lack of hydrogeological data and knowledge often results in different propositions (or alternatives) to represent uncertain model components and creates many candidate groundwater models using the same data. Uncertainty of groundwater head prediction may become unnecessarily high. This study introduces an experimental design to identify propositions in each uncertain model component and decrease the prediction uncertainty by reducing conceptual model uncertainty. A discrimination criterion is developed based on posterior model probability that directly uses data to evaluate model importance. Bayesian model averaging (BMA) is used to predict future observation data. The experimental design aims to find the optimal number and location of future observations and the number of sampling rounds such that the desired discrimination criterion is met. Hierarchical Bayesian model averaging (HBMA) is adopted to assess if highly probable propositions can be identified and the conceptual model uncertainty can be reduced by the experimental design. The experimental design is applied to a groundwater study in the Baton Rouge area, Louisiana. We design a new groundwater head observation network based on existing USGS observation wells. The sources of uncertainty that create multiple groundwater models are geological architecture, boundary condition, and fault permeability architecture. All possible design solutions are enumerated using a multi-core supercomputer. Several design solutions are found to achieve an 80%-identifiable groundwater model in 5 years by using six or more existing USGS wells. The HBMA result shows that each highly probable proposition can be identified for each uncertain model component once the discrimination criterion is achieved. The variances of groundwater head predictions are significantly decreased by reducing posterior model probabilities of unimportant propositions.
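
    A common shortcut for Bayesian model averaging weights is to approximate posterior model probabilities from BIC values. The sketch below uses that shortcut with made-up numbers for illustration; the study itself evaluates posterior model probabilities directly from the data within an HBMA hierarchy.

        import numpy as np

        def posterior_model_probs(bic_values, prior_probs=None):
            """Approximate posterior model probabilities from BIC (BMA weights)."""
            bic = np.asarray(bic_values, dtype=float)
            prior = np.ones_like(bic) / bic.size if prior_probs is None else np.asarray(prior_probs)
            log_w = -0.5 * (bic - bic.min()) + np.log(prior)
            w = np.exp(log_w - log_w.max())
            return w / w.sum()

        def bma_prediction(predictions, weights):
            """Model-averaged mean and the between-model variance contribution."""
            predictions, weights = np.asarray(predictions), np.asarray(weights)
            mean = np.sum(weights * predictions)
            return mean, np.sum(weights * (predictions - mean) ** 2)

        w = posterior_model_probs([1210.5, 1213.2, 1225.0])     # hypothetical BICs of 3 models
        print(w, bma_prediction([12.1, 12.6, 14.0], w))          # hypothetical head predictions (m)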

  4. Intermediate experimental vehicle, ESA program aerodynamics-aerothermodynamics key technologies for spacecraft design and successful flight

    NASA Astrophysics Data System (ADS)

    Dutheil, Sylvain; Pibarot, Julien; Tran, Dac; Vallee, Jean-Jacques; Tribot, Jean-Pierre

    2016-07-01

    With the aim of placing Europe among the world's space players in the strategic area of atmospheric re-entry, several studies on experimental vehicle concepts and improvements of critical re-entry technologies have paved the way for the flight of an experimental spacecraft. The successful flight of the Intermediate eXperimental Vehicle (IXV), under ESA's Future Launchers Preparatory Programme (FLPP), is definitely a significant step forward from the Atmospheric Reentry Demonstrator flight (1998), establishing Europe as a key player in this field. The IXV project objectives were the design, development, manufacture and ground and flight verification of an autonomous European lifting and aerodynamically controlled reentry system, which is highly flexible and maneuverable. The paper presents the role of aerodynamics and aerothermodynamics as part of the key technologies for designing an atmospheric re-entry spacecraft and securing a successful flight.

  5. Design and structural verification of locomotive bogies using combined analytical and experimental methods

    NASA Astrophysics Data System (ADS)

    Manea, I.; Popa, G.; Girnita, I.; Prenta, G.

    2015-11-01

    The paper presents a practical methodology for design and structural verification of locomotive bogie frames using a modern software package for design, structural verification and validation through combined analytical and experimental methods. In the initial stage, the bogie geometry is imported from a CAD program into a finite element analysis program, such as Ansys. The analytical model validation is done by experimental modal analysis carried out on a finished bogie frame. The bogie frame's natural frequencies and mode shapes are determined by both experimental and analytical methods, and a correlation analysis of the two types of models is performed. If the results are unsatisfactory, structural optimization should be performed. If the results are satisfactory, qualification proceeds with static and fatigue tests carried out in a laboratory with international accreditation in the field. The application presented concerns bogie frames for the 6000 kW LEMA electric locomotive.
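
    Correlation between analytical and experimental mode shapes is commonly quantified with the Modal Assurance Criterion (MAC). The paper does not state which criterion it uses, so the sketch below is only one plausible way to perform that correlation step, with simulated mode shapes standing in for finite-element and test data.

        import numpy as np

        def mac(phi_a, phi_e):
            """Modal Assurance Criterion between analytical and experimental mode shape
            matrices (columns are modes); values near 1 indicate good correlation."""
            num = np.abs(phi_a.T @ phi_e) ** 2
            den = np.outer(np.einsum('ij,ij->j', phi_a, phi_a),
                           np.einsum('ij,ij->j', phi_e, phi_e))
            return num / den

        rng = np.random.default_rng(3)
        fe_modes = rng.normal(size=(20, 3))                       # simulated FE mode shapes
        test_modes = fe_modes + 0.1 * rng.normal(size=(20, 3))    # noisy "measured" shapes
        print(np.round(mac(fe_modes, test_modes), 2))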

  6. Experimental system design for the integration of trapped-ion and superconducting qubit systems

    NASA Astrophysics Data System (ADS)

    De Motte, D.; Grounds, A. R.; Rehák, M.; Rodriguez Blanco, A.; Lekitsch, B.; Giri, G. S.; Neilinger, P.; Oelsner, G.; Il'ichev, E.; Grajcar, M.; Hensinger, W. K.

    2016-07-01

    We present a design for the experimental integration of ion trapping and superconducting qubit systems as a step towards the realization of a quantum hybrid system. The scheme addresses two key difficulties in realizing such a system: a combined microfabricated ion trap and superconducting qubit architecture, and the experimental infrastructure to facilitate both technologies. Developing upon work by Kielpinski et al. (Phys Rev Lett 108(13):130504, 2012. doi: 10.1103/PhysRevLett.108.130504), we describe the design, simulation and fabrication process for a microfabricated ion trap capable of coupling an ion to a superconducting microwave LC circuit with a coupling strength in the tens of kHz. We also describe existing difficulties in combining the experimental infrastructure of an ion trapping set-up into a dilution refrigerator with superconducting qubits and present solutions that can be immediately implemented using current technology.

  7. Conceptual design of a fast-ion D-alpha diagnostic on experimental advanced superconducting tokamak

    SciTech Connect

    Huang, J. Wan, B.; Hu, L.; Hu, C.; Heidbrink, W. W.; Zhu, Y.; Hellermann, M. G. von; Gao, W.; Wu, C.; Li, Y.; Fu, J.; Lyu, B.; Yu, Y.; Ye, M.; Shi, Y.

    2014-11-15

    To investigate the fast ion behavior, a fast ion D-alpha (FIDA) diagnostic system has been planned and is presently under development on Experimental Advanced Superconducting Tokamak. The greatest challenges for the design of a FIDA diagnostic are its extremely low intensity levels, which are usually significantly below the continuum radiation level and several orders of magnitude below the bulk-ion thermal charge-exchange feature. Moreover, an overlaying Motional Stark Effect (MSE) feature in exactly the same wavelength range can interfere. The simulation of spectra code is used here to guide the design and evaluate the diagnostic performance. The details for the parameters of design and hardware are presented.

  8. Conceptual design of a fast-ion D-alpha diagnostic on experimental advanced superconducting tokamak

    NASA Astrophysics Data System (ADS)

    Huang, J.; Heidbrink, W. W.; Wan, B.; von Hellermann, M. G.; Zhu, Y.; Gao, W.; Wu, C.; Li, Y.; Fu, J.; Lyu, B.; Yu, Y.; Shi, Y.; Ye, M.; Hu, L.; Hu, C.

    2014-11-01

    To investigate the fast ion behavior, a fast ion D-alpha (FIDA) diagnostic system has been planned and is presently under development on Experimental Advanced Superconducting Tokamak. The greatest challenges for the design of a FIDA diagnostic are its extremely low intensity levels, which are usually significantly below the continuum radiation level and several orders of magnitude below the bulk-ion thermal charge-exchange feature. Moreover, an overlaying Motional Stark Effect (MSE) feature in exactly the same wavelength range can interfere. The simulation of spectra code is used here to guide the design and evaluate the diagnostic performance. The details for the parameters of design and hardware are presented.

  9. Conceptual design of a fast-ion D-alpha diagnostic on experimental advanced superconducting tokamak.

    PubMed

    Huang, J; Heidbrink, W W; Wan, B; von Hellermann, M G; Zhu, Y; Gao, W; Wu, C; Li, Y; Fu, J; Lyu, B; Yu, Y; Shi, Y; Ye, M; Hu, L; Hu, C

    2014-11-01

    To investigate fast ion behavior, a fast-ion D-alpha (FIDA) diagnostic system has been planned and is presently under development on the Experimental Advanced Superconducting Tokamak. The greatest challenge in designing a FIDA diagnostic is the extremely low signal intensity, which is usually significantly below the continuum radiation level and several orders of magnitude below the bulk-ion thermal charge-exchange feature. Moreover, an overlapping Motional Stark Effect (MSE) feature in exactly the same wavelength range can interfere. The Simulation of Spectra code is used here to guide the design and evaluate the diagnostic performance. Details of the design parameters and hardware are presented.

  10. Experimental design and desirability function approach for development of novel anticancer nanocarrier delivery systems.

    PubMed

    Rafati, H; Mirzajani, F

    2011-01-01

    The therapeutic effects of anticancer drugs would improve greatly if problems with low water solubility and toxic adverse reactions could be solved. In this work, a full factorial experimental design was used to develop a polymeric nanoparticulate delivery system as an alternative technique for anticancer drug delivery. Nanoparticles containing tamoxifen citrate were prepared and characterized using an O/W emulsification-solvent evaporation technique and different analytical methods. Scanning Electron Microscopy (SEM), particle size analysis and High Pressure Liquid Chromatography (HPLC) were used for characterization of the nanoparticles. Nanoparticle characteristics including size, size distribution, drug loading and encapsulation efficiency were optimized by means of a full factorial experimental design over four independent variables, combined with a desirability function, using Design-Expert software. The resulting tamoxifen-loaded nanoparticles showed the best response with particle sizes less than 200 nm, improved encapsulation efficiency of more than 80% and optimum loading above 30%. The overall results demonstrate the value of the desirability function in experimental design as a beneficial approach in nanoparticle drug delivery design. PMID:21391432

  11. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    NASA Astrophysics Data System (ADS)

    Girault, Isabelle; d'Ham, Cédric

    2014-08-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a computer environment (copex-chimie) with embedded scaffolds in order to help students design an experimental procedure. A pre-structuring of the procedure, in which the students have to choose the actions of their procedure among pre-defined actions and specify the parameters, forces the students to face the complexity of the design. However, this is not sufficient for them to succeed; they look for feedback to improve their procedure and eventually abandon the task. In another condition, the students were provided with individualized feedback on the errors detected in their procedures by an artificial tutor. This feedback proved to be necessary to accompany the students throughout their experimental design without their being discouraged. With this kind of scaffold, students worked longer and succeeded at the task better than the other students.

  12. Designing specific protein–protein interactions using computation, experimental library screening, or integrated methods

    PubMed Central

    Chen, T Scott; Keating, Amy E

    2012-01-01

    Given the importance of protein–protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity. PMID:22593041

  13. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.

  14. Experimental validation of systematically designed acoustic hyperbolic meta material slab exhibiting negative refraction

    NASA Astrophysics Data System (ADS)

    Christiansen, Rasmus E.; Sigmund, Ole

    2016-09-01

    This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from the boundary layer effects—insight which can be utilized in the further design of the metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting the negative refractive behavior across a wide angular range provides lensing and collimating capabilities.

  15. Design and Experimental Results for the S827 Airfoil; Period of Performance: 1998--1999

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A 21%-thick, natural-laminar-flow airfoil, the S827, for the 75% blade radial station of 40- to 50-meter, stall-regulated, horizontal-axis wind turbines has been designed and analyzed theoretically and verified experimentally in the NASA Langley Low-Turbulence Pressure Tunnel. The primary objective of restrained maximum lift has not been achieved, although the maximum lift is relatively insensitive to roughness, which meets the design goal. The airfoil exhibits a relatively docile stall, which meets the design goal. The primary objective of low profile drag has been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results generally show good agreement with the exception of maximum lift, which is significantly underpredicted.

  16. Fertilizer Response Curves for Commercial Southern Forest Species Defined with an Un-Replicated Experimental Design.

    SciTech Connect

    Coleman, Mark; Aubrey, Doug; Coyle, David R.; Daniels, Richard F.

    2005-11-01

    There has been recent interest in the use of non-replicated regression experimental designs in forestry, as the need for replication in experimental design is burdensome on limited research budgets. We wanted to determine the interacting effects of soil moisture and nutrient availability on the production of various southeastern forest trees (two clones of Populus deltoides, open pollinated Platanus occidentalis, Liquidambar styraciflua and Pinus taeda). Additionally, we required an understanding of the fertilizer response curve. To accomplish both objectives we developed a composite design that includes a core ANOVA approach to consider treatment interactions, with the addition of non-replicated regression plots receiving a range of fertilizer levels for the primary irrigation treatment.

  17. Comment: Spurious Correlation and Other Observations on Experimental Design for Engineering Dimensional Analysis

    SciTech Connect

    Piepel, Gregory F.

    2013-08-01

    This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models. The paper proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. Also, the concept of spurious correlation is raised and discussed. Spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.

  18. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  19. Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Haller, Harold S.

    2009-01-01

    It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, for which a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four or five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
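    As a rough illustration of the data-fusion idea above, the sketch below fits a single quadratic response surface to a pooled CFD-plus-experiment data set and then predicts an untested condition. The micro-ramp variables, responses, and sample sizes are invented stand-ins, not the study's data.

```python
# Sketch: fit one quadratic response surface to a combined CFD + experimental
# data set, then predict at an untested condition. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical micro-ramp design variables (e.g. height, spacing), scaled to [0, 1]
X_cfd = rng.uniform(0.0, 1.0, size=(15, 2))
y_cfd = 1.0 - 0.8 * (X_cfd[:, 0] - 0.5) ** 2 - 0.5 * (X_cfd[:, 1] - 0.4) ** 2

X_exp = rng.uniform(0.0, 1.0, size=(5, 2))
y_exp = (1.0 - 0.8 * (X_exp[:, 0] - 0.5) ** 2 - 0.5 * (X_exp[:, 1] - 0.4) ** 2
         + rng.normal(0.0, 0.01, size=5))          # small experimental scatter

def quad_terms(X):
    """Full quadratic model matrix: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Combined (CFD/EXP) data set and least-squares response surface
X_all = np.vstack([X_cfd, X_exp])
y_all = np.concatenate([y_cfd, y_exp])
beta, *_ = np.linalg.lstsq(quad_terms(X_all), y_all, rcond=None)

# Predict performance at a condition that was never tested experimentally
x_new = np.array([[0.55, 0.35]])
print("predicted response:", quad_terms(x_new) @ beta)
```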

  20. Man-machine Integration Design and Analysis System (MIDAS) Task Loading Model (TLM) experimental and software detailed design report

    NASA Technical Reports Server (NTRS)

    Staveland, Lowell

    1994-01-01

    This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes TLM and the results of a series of experiments which were run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and part of a complete human operator simulation, and a brief introduction to the TLM software design.

  1. Study and design of cryogenic propellant acquisition systems. Volume 2: Supporting experimental program

    NASA Technical Reports Server (NTRS)

    Burge, G. W.; Blackmon, J. B.

    1973-01-01

    Areas of cryogenic fuel systems were identified where critical experimental information was needed either to define a design criterion or to establish the feasibility of a design concept or a critical aspect of a particular design. Such data requirements fell into three broad categories: (1) basic surface tension screen characteristics; (2) screen acquisition device fabrication problems; and (3) screen surface tension device operational failure modes. To explore these problems and to establish design criteria where possible, extensive laboratory or bench-scale experiments were conducted. In general, these proved to be quite successful and, in many instances, the test results were directly used in the system design analyses and development. In some cases, particularly those relating to operational-type problems, areas requiring future research were identified, especially screen heat transfer and vibrational effects.

  2. Modeling of retardance in ferrofluid with Taguchi-based multiple regression analysis

    NASA Astrophysics Data System (ADS)

    Lin, Jing-Fung; Wu, Jyh-Shyang; Sheu, Jer-Jia

    2015-03-01

    The citric acid (CA) coated Fe3O4 ferrofluids are prepared by a co-precipitation method and the magneto-optical retardance property is measured by a Stokes polarimeter. Optimization and multiple regression of retardance in the ferrofluids are carried out by combining the Taguchi method with Excel. From the nine tests over four parameters, namely pH of the suspension, molar ratio of CA to Fe3O4, volume of CA, and coating temperature, the order of parameter influence and the optimal parameter combination are found. Multiple regression analysis and an F-test on the significance of the regression equation are performed. The model F value is much larger than the critical F value, with a significance level P < 0.0001, so it can be concluded that the regression model has statistically significant predictive ability. Substituting the optimal parameter combination into the equation gives a retardance of 32.703°, 11.4% higher than the highest value obtained in the tests.
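    A minimal sketch of the analysis described above: fit a multiple linear regression to a Taguchi L9(3^4) experiment and test its overall significance with an F-test. The L9 array is standard; the factor coding and retardance values below are invented for illustration.

```python
# Sketch: analyse a Taguchi L9(3^4) experiment with a multiple linear regression
# and an overall F-test. The retardance values are hypothetical.
import numpy as np
from scipy import stats

# Standard L9 orthogonal array (levels 1-3) for four factors:
# pH, CA/Fe3O4 molar ratio, CA volume, coating temperature
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
retardance = np.array([25.1, 28.4, 27.0, 29.3, 26.8, 24.9, 27.7, 30.2, 28.8])

X = np.column_stack([np.ones(9), L9])            # intercept + four linear terms
beta, *_ = np.linalg.lstsq(X, retardance, rcond=None)

y_hat = X @ beta
p = X.shape[1] - 1                               # number of regressors
n = len(retardance)
ss_reg = np.sum((y_hat - retardance.mean()) ** 2)
ss_res = np.sum((retardance - y_hat) ** 2)
F = (ss_reg / p) / (ss_res / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)
print(f"F = {F:.2f}, p = {p_value:.4f}, coefficients = {np.round(beta, 3)}")
```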

  3. Taking evolutionary circuit design from experimentation to implementation: some useful techniques and a silicon demonstration

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Guo, X.; Keymeulen, D.; Ferguson, M. I.; Duong, V.

    2004-01-01

    Current techniques in evolutionary synthesis of analogue and digital circuits designed at transistor level have focused on achieving the desired functional response, without paying sufficient attention to issues needed for a practical implementation of the resulting solution. No silicon fabrication of circuits with topologies designed by evolution has been done before, leaving open questions on the feasibility of the evolutionary circuit design approach, as well as on how high-performance, robust, or portable such designs could be when implemented in hardware. It is argued that moving from evolutionary 'design-for-experimentation' to 'design-for-implementation' requires, beyond inclusion in the fitness function of measures indicative of circuit evaluation factors such as power consumption and robustness to temperature variations, the addition of certain evaluation techniques that are not common in conventional design. Several such techniques that were found to be useful in evolving designs for implementation are presented; some are general, and some are particular to the problem domain of transistor-level logic design, used here as a target application. The example used here is a multifunction NAND/NOR logic gate circuit, for which evolution obtained a creative circuit topology more compact than what has been achieved by multiplexing a NAND and a NOR gate. The circuit was fabricated in a 0.5 μm CMOS technology and silicon tests showed good correspondence with the simulations.

  4. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  5. Analytical and experimental investigation of liquid double drop dynamics: Preliminary design for space shuttle experiments

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The preliminary grant assessed the use of laboratory experiments for simulating low-g liquid drop experiments in the space shuttle environment. Investigations were begun of appropriate immiscible liquid systems, design of experimental apparatus, and analyses. The current grant continued these topics, completed construction and preliminary testing of the experimental apparatus, and performed experiments on single and compound liquid drops. A continuing assessment of laboratory capabilities and of the interests of project personnel and available collaborators led, after consultations with NASA personnel, to a research emphasis on compound drops consisting of hollow plastic or elastic spheroids filled with liquids.

  6. Intuitive Web-Based Experimental Design for High-Throughput Biomedical Data

    PubMed Central

    Friedrich, Andreas; Kenar, Erhan; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, humanly readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model. PMID:25954760
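    The factor-based design idea described above lends itself to a very small script: expand declared factors and replicate counts into a sample sheet with unique identifiers. The factors, levels, and file name below are hypothetical, not part of the system described in the abstract.

```python
# Sketch: expand a factor-based experimental design into a sample sheet (CSV).
# Factor names and levels are invented for illustration.
import csv
import itertools

factors = {
    "genotype": ["wild-type", "knockout"],
    "treatment": ["control", "drug"],
    "timepoint_h": [4, 24],
}
replicates = 3

rows = []
sample_id = 1
for combo in itertools.product(*factors.values()):
    for rep in range(1, replicates + 1):
        rows.append([f"S{sample_id:03d}", *combo, rep])
        sample_id += 1

with open("sample_sheet.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["sample_id", *factors.keys(), "replicate"])
    writer.writerows(rows)

print(f"wrote {len(rows)} samples to sample_sheet.csv")
```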

  7. Intuitive web-based experimental design for high-throughput biomedical data.

    PubMed

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, humanly readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.

  8. Inhalation experiments with mixtures of hydrocarbons. Experimental design, statistics and interpretation of kinetics and possible interactions.

    PubMed

    Eide, I; Zahlsen, K

    1996-01-01

    The paper describes experimental and statistical methods for toxicokinetic evaluation of mixtures in inhalation experiments. Synthetic mixtures of three C9 n-paraffinic, naphthenic and aromatic hydrocarbons (n-nonane, trimethylcyclohexane and trimethylbenzene, respectively) were studied in the rat after inhalation for 12 h. The hydrocarbons were mixed according to principles for statistical experimental design, using a mixture design at four vapour levels (75, 150, 300 and 450 ppm) to support an empirical model with linear, interaction and quadratic terms (Taylor polynomial). Immediately after exposure, concentrations of hydrocarbons were measured by head space gas chromatography in blood, brain, liver, kidneys and perirenal fat. Multivariate data analysis and modelling were performed with PLS (projections to latent structures). The best models were obtained after removing all interaction terms, suggesting that there were no interactions between the hydrocarbons with respect to absorption and distribution. Uptake of paraffins and particularly aromatics is best described by quadratic models, whereas the uptake of the naphthenic hydrocarbons is nearly linear. All models are good, with high correlation (r2) and prediction properties (Q2), the latter after cross validation. The concentrations of aromatics in blood were high compared to the other hydrocarbons. At concentrations below 250 ppm, the naphthene reached higher concentrations in the brain than the paraffin and the aromatic. Statistical experimental design, multivariate data analysis and modelling have proved useful for the evaluation of synthetic mixtures. The principles may also be used in the design of liquid mixtures, which may be evaporated partially or completely.
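    A minimal sketch of the modelling workflow described above, assuming scikit-learn is available: a mixture-design exposure matrix with linear and quadratic terms fitted by PLS, reporting r2 and a cross-validated Q2. The composition points and tissue concentrations are synthetic.

```python
# Sketch: PLS (projections to latent structures) fit to mixture-design data.
# Compositions, doses and responses are invented; only the workflow is shown.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Hypothetical mixture fractions of three C9 hydrocarbons (rows sum to 1)
frac = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=20)
vapour_ppm = rng.choice([75, 150, 300, 450], size=20)

# Predictors: exposure level of each component plus quadratic terms (no interactions)
dose = frac * vapour_ppm[:, None]
X = np.column_stack([dose, dose ** 2])

# Hypothetical blood concentration: nearly linear in one component, quadratic in another
y = (0.02 * dose[:, 1] + 0.01 * dose[:, 2] + 5e-5 * dose[:, 2] ** 2
     + rng.normal(0.0, 0.05, size=20))

pls = PLSRegression(n_components=3)
pls.fit(X, y)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()

r2 = 1 - np.sum((y - pls.predict(X).ravel()) ** 2) / np.sum((y - y.mean()) ** 2)
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"r2 = {r2:.3f}, Q2 (cross-validated) = {q2:.3f}")
```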

  9. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
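    The design principle above, maximizing parameter sensitivities so as to shrink the confidence intervals, can be sketched with a D-optimality criterion. A classical flash-heating (Parker) back-face solution stands in here for the study's heat-flux slab model, and all property values and durations are illustrative.

```python
# Sketch: pick the measurement window that maximizes det of the sensitivity
# (Fisher information) matrix for the two unknown thermal properties.
import numpy as np

L = 2e-3            # specimen thickness, m
Q = 5000.0          # absorbed pulse energy per unit area, J/m^2
k_nom, C_nom = 0.5, 1.5e6   # conductivity (W/m-K), volumetric heat capacity (J/m^3-K)

def back_face_temp(t, k, C, n_terms=50):
    """Back-face temperature rise after an instantaneous front-face heat pulse."""
    alpha = k / C
    n = np.arange(1, n_terms + 1)
    series = ((-1.0) ** n)[None, :] * np.exp(
        -(n ** 2)[None, :] * np.pi ** 2 * alpha * t[:, None] / L ** 2)
    return (Q / (C * L)) * (1.0 + 2.0 * series.sum(axis=1))

def d_optimality(t_end, n_samples=40):
    t = np.linspace(0.05 * t_end, t_end, n_samples)
    dk, dC = 1e-3 * k_nom, 1e-3 * C_nom
    # Finite-difference sensitivities of temperature w.r.t. each unknown property
    S = np.column_stack([
        (back_face_temp(t, k_nom + dk, C_nom) - back_face_temp(t, k_nom - dk, C_nom)) / (2 * dk),
        (back_face_temp(t, k_nom, C_nom + dC) - back_face_temp(t, k_nom, C_nom - dC)) / (2 * dC),
    ])
    return np.linalg.det(S.T @ S)

for t_end in (2.0, 5.0, 10.0, 30.0, 60.0):       # candidate experiment durations, s
    print(f"duration {t_end:5.1f} s -> det(S'S) = {d_optimality(t_end):.3e}")
```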

  10. A Bayesian active learning strategy for sequential experimental design in systems biology.

    PubMed

    Pauwels, Edouard; Lajaunie, Christian; Vert, Jean-Philippe

    2014-09-26

    Background: Dynamical models used in systems biology involve unknown kinetic parameters. Setting these parameters is a bottleneck in many modeling projects. This motivates the estimation of these parameters from empirical data. However, this estimation problem has its own difficulties, the most important one being strong ill-conditionedness. In this context, optimizing the experiments to be conducted in order to better estimate a system's parameters provides a promising direction to alleviate the difficulty of the task. Results: Borrowing ideas from Bayesian experimental design and active learning, we propose a new strategy for optimal experimental design in the context of kinetic parameter estimation in systems biology. We describe algorithmic choices that allow this method to be implemented in a computationally tractable way and make it fully automatic. Based on simulation, we show that it outperforms alternative baseline strategies and demonstrate the benefit of considering multiple posterior modes of the likelihood landscape, as opposed to traditional schemes based on local and Gaussian approximations. Conclusion: This analysis demonstrates that our new, fully automatic Bayesian optimal experimental design strategy has the potential to support the design of experiments for kinetic parameter estimation in systems biology.
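    A toy version of sequential Bayesian design, assuming a simple first-order kinetic model: keep a sampled posterior over the unknown rate and choose the next measurement time where the posterior predictive variance is largest. This variance-based acquisition is a simplification for illustration, not the paper's full Bayes-risk algorithm.

```python
# Sketch: sequential experiment selection by posterior predictive variance.
# Model, rate, noise level and candidate times are all invented.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.7
sigma = 0.05                                  # measurement noise s.d.

def model(t, rate):
    return np.exp(-rate * t)                  # toy first-order kinetics

# Posterior over the rate, represented by weighted samples (starts as the prior)
samples = rng.uniform(0.1, 2.0, size=5000)
log_w = np.zeros_like(samples)

candidates = np.linspace(0.2, 6.0, 30)        # possible measurement times
for round_ in range(4):
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    # Acquisition: predictive variance of the model output at each candidate time
    pred = model(candidates[:, None], samples[None, :])
    var = np.average(pred ** 2, weights=w, axis=1) - np.average(pred, weights=w, axis=1) ** 2
    t_next = candidates[np.argmax(var)]
    # "Run" the chosen experiment and update the posterior weights
    y_obs = model(t_next, true_rate) + rng.normal(0.0, sigma)
    log_w += -0.5 * ((y_obs - model(t_next, samples)) / sigma) ** 2
    post_mean = np.average(samples, weights=np.exp(log_w - log_w.max()))
    print(f"round {round_ + 1}: measure at t = {t_next:.2f}, posterior mean rate = {post_mean:.3f}")
```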

  11. Intuitive web-based experimental design for high-throughput biomedical data.

    PubMed

    Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven

    2015-01-01

    Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, humanly readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model. PMID:25954760

  12. RNA-seq Data: Challenges in and Recommendations for Experimental Design and Analysis.

    PubMed

    Williams, Alexander G; Thomas, Sean; Wyman, Stacia K; Holloway, Alisha K

    2014-10-01

    RNA-seq is widely used to determine differential expression of genes or transcripts as well as identify novel transcripts, identify allele-specific expression, and precisely measure translation of transcripts. Thoughtful experimental design and choice of analysis tools are critical to ensure high-quality data and interpretable results. Important considerations for experimental design include number of replicates, whether to collect paired-end or single-end reads, sequence length, and sequencing depth. Common analysis steps in all RNA-seq experiments include quality control, read alignment, assigning reads to genes or transcripts, and estimating gene or transcript abundance. Our aims are two-fold: to make recommendations for common components of experimental design and assess tool capabilities for each of these steps. We also test tools designed to detect differential expression, since this is the most widespread application of RNA-seq. We hope that these analyses will help guide those who are new to RNA-seq and will generate discussion about remaining needs for tool improvement and development.

  13. A design of experiment study of plasma sprayed alumina-titania coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Riggs, W.L. II; Rotolico, A.J.; Nerz, J.E.

    1992-08-01

    An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters.
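    In the same spirit, a Taguchi L9 parametric study is often summarized with larger-is-better signal-to-noise ratios and main-effect ranges, as sketched below. The factor names and deposition-efficiency values are invented for illustration.

```python
# Sketch: larger-is-better S/N ratios and main-effect ranking for a Taguchi L9 study.
import numpy as np

factors = ["current", "gas flow", "spray distance", "powder feed"]
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
# Hypothetical deposition efficiency (%) from two replicate runs per trial
y = np.array([[52, 55], [61, 63], [58, 57], [66, 64],
              [59, 62], [54, 53], [70, 68], [63, 65], [60, 58]], dtype=float)

# Larger-is-better S/N ratio for each trial: -10*log10(mean(1/y^2))
sn = -10.0 * np.log10(np.mean(1.0 / y ** 2, axis=1))

# Main effect of each factor = range of mean S/N across its three levels
for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    print(f"{name:15s} level means = {np.round(level_means, 2)}, "
          f"effect range = {max(level_means) - min(level_means):.2f} dB")
```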

  14. A design of experiment study of plasma sprayed alumina-titania coatings

    SciTech Connect

    Steeper, T.J.; Varacalle, D.J. Jr.; Wilson, G.C.; Riggs, W.L. II; Rotolico, A.J.; Nerz, J.E.

    1992-01-01

    An experimental study of the plasma spraying of alumina-titania powder is presented in this paper. This powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. Coating experiments were conducted using a Taguchi fractional-factorial design parametric study. Operating parameters were varied around the typical spray parameters in a systematic design of experiments in order to display the range of plasma processing conditions and their effect on the resultant coating. The coatings were characterized by hardness and electrical tests, image analysis, and optical metallography. Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, deposition efficiency, and microstructure. The attributes of the coatings are correlated with the changes in operating parameters.

  15. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high strain composite wing structures (ε_ult = 6000 micro-in/in) for future Navy aircraft. These issues were concerned primarily with the structural integrity and durability of the innovative design concepts and manufacturing techniques which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system), as well as damage tolerance and survivability requirements. An extensive test effort consisting of a progressive series of coupon and major element tests was an integral part of this development effort, and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer/emerging composite materials with new or previously developed high strain wing designs to maximize structural efficiency and reduce fabrication costs were the subject of a subsequent preliminary design and experimental evaluation effort.

  16. Development of objective-oriented groundwater models: 2. Robust experimental design

    NASA Astrophysics Data System (ADS)

    Sun, Ne-Zheng; Yeh, William W.-G.

    2007-02-01

    This paper continues the discussion in part 1 by considering the data collection strategy problem when the existing data are judged to be insufficient for constructing a reliable model. Designing an experiment for identifying a distributed parameter is very difficult because the identification of a more complex parameter structure requires more data. Moreover, without knowing the sufficiency of a design, finding an optimal design becomes meaningless. These difficulties can be avoided if we turn to the construction of objective-oriented models. The identifiability of a distributed parameter, as defined in this paper, contains the reducibility of parameter structure. Sufficient conditions for this kind of identifiability are given. When the structure error associated with a structure reduction is too large, these conditions may not be satisfied no matter how much data are collected. In this paper we formulate a new experimental design problem that consists of two objectives: minimizing the cost and maximizing the information content, with robustness and feasibility as constraints. We develop an algorithm that can find a cost-effective robust design for objective-oriented parameter identification. We also present a heuristic algorithm that can find a suboptimal design with less computational effort for real case studies. The proposed methodology is used to design a pumping test for identifying a distributed hydraulic conductivity. We verify the robustness of the obtained design by assuming that the true parameter may have continuous, discrete, random, and fractured structures. Finally, the presented procedure of constructing objective-oriented models is described step by step.
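    One hedged way to picture the cost-versus-information trade-off described above: greedily add observation points that give the largest gain in the log-determinant of the Fisher information matrix per unit cost, within a budget. The sensitivities and costs below are random stand-ins for a real groundwater model's Jacobian, and the greedy rule is only one simple heuristic, not the paper's algorithm.

```python
# Sketch: budget-constrained greedy selection of observation points by
# information gain (log-det of the Fisher matrix) per unit cost.
import numpy as np

rng = np.random.default_rng(3)
n_candidates, n_params = 30, 4                  # candidate wells, conductivity-zone parameters
J = rng.normal(size=(n_candidates, n_params))   # head sensitivities (stand-in Jacobian)
cost = rng.uniform(1.0, 5.0, size=n_candidates)
budget = 12.0

def log_det_info(rows):
    S = J[rows]
    return np.linalg.slogdet(S.T @ S + 1e-6 * np.eye(n_params))[1]

chosen, spent = [], 0.0
while True:
    base = log_det_info(chosen)
    best, best_score = None, 0.0
    for i in range(n_candidates):
        if i in chosen or spent + cost[i] > budget:
            continue
        gain = (log_det_info(chosen + [i]) - base) / cost[i]
        if gain > best_score:
            best, best_score = i, gain
    if best is None:
        break
    chosen.append(best)
    spent += cost[best]

print(f"selected observation points: {chosen}, total cost = {spent:.1f}")
```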

  17. Design and Experimental Verification of a Scram-Jet Inlet in Frame of ESA's LAPCAT Program

    NASA Astrophysics Data System (ADS)

    Henckels, A.; Gruhn, P.; Gülhan, A.

    2009-01-01

    In 2005 ESA started the coordination of the research program LAPCAT (Long-Term Advanced Propulsion Concepts And Technologies) to identify and assess innovative propulsion technologies to reduce the duration of long-distance flights. One of the studied configurations features an RBCC (Rocket Based Combined Cycle) engine, propelling the vehicle from Mach 4 up to the cruise Mach number of 8 with an air-breathing scramjet. Within the framework of LAPCAT, the corresponding air inlet has been designed by DLR Cologne. The subject of this paper is the experimental verification of its design requirements. A test campaign in the H2K blow-down facility proved the complete functionality of this inlet. Further tests provided valuable information about off-design operation and internal flow topologies for future design optimizations.

  18. Engineering at SLAC: Designing and constructing experimental devices for the Stanford Synchrotron Radiation Lightsource - Final Paper

    SciTech Connect

    Djang, Austin

    2015-08-22

    Thanks to the versatility of the beam lines at SSRL, research there is varied and benefits multiple fields. Each experiment requires a particular set of experimental equipment, which in turn requires its own particular assembly. As such, new engineering challenges arise from each new experiment. My role as an engineering intern has been to help solve these challenges by designing and assembling experimental devices. My first project was to design a heated sample holder, which will be used to investigate the effect of temperature on a sample's x-ray diffraction pattern. My second project was to help set up an imaging test, which involved designing a cooled grating holder and assembling multiple positioning stages. My third project was designing a 3D-printed pencil holder for the SSRL workstations.

  19. Experimental investigation of undesired stable equilibria in pumpkin shape super-pressure balloon designs

    NASA Astrophysics Data System (ADS)

    Schur, W. W.

    2004-01-01

    Excess in skin material of a pneumatic envelope beyond what is required for minimum enclosure of a gas bubble is a necessary but by no means sufficient condition for the existence of multiple equilibrium configurations for that pneumatic envelope. The very design of structurally efficient super-pressure balloons of the pumpkin shape type requires such excess. Undesired stable equilibria have been observed on experimental pumpkin shape balloons. These configurations contain regions with stress levels far higher than those predicted for the cyclically symmetric design configuration under maximum pressurization. Successful designs of pumpkin shape super-pressure balloons do not allow such undesired stable equilibria under full pressurization. This work documents efforts made so far and describes efforts still underway by the National Aeronautics and Space Administration's Balloon Program Office to arrive at guidance on the design of pumpkin shape super-pressure balloons that guarantee full and proper deployment.

  20. A Bayesian experimental design approach to structural health monitoring with application to ultrasonic guided waves

    NASA Astrophysics Data System (ADS)

    Flynn, Eric Brian

    The dissertation will present the application of a Bayesian experimental design framework to structural health monitoring (SHM). When applied to SHM, Bayesian experimental design (BED) is founded on the minimization of the expected loss, i.e., Bayes Risk, of the SHM process through the optimization of the detection algorithm and system hardware design parameters. This expected loss is a function of the detector and system design, the cost of decision/detection error, and the distribution of prior probabilities of damage. While the presented framework is general to all SHM applications, particular attention is paid to guided wave-based SHM (GWSHM). GWSHM is the process of exciting user-defined mechanical waves in plate or beam-like structures and sensing the response in order to identify damage, which manifests itself though scattering and attenuation of the traveling waves. Using the BED framework, both a detection-centric and a localization-centric optimal detector are derived for GWSHM based on likelihood tests. In order to objectively evaluate the performance in practical terms for the users of SHM systems, the dissertation will introduce three new statistics-based tools: the Bayesian combined receiver operating characteristic (BCROC) curve, the localization probability density (LPDF) estimate, and the localizer operating characteristic (LOC) curve. It will demonstrate the superior performance of the BED-based detectors over existing GWSHM algorithms through application to a geometrically complex test structure. Next, the BED framework is used to establish both a model-based and data-driven system design process for GWSHM to ascertain the optimal placement of both actuators and sensors according to application-specific decision error cost functions. This design process considers, among other things, non-uniform probabilities of damage, non-symmetric scatterers, the optimization of both sensor placement and sensor count, and robustness to sensor failure. The
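    The Bayes-risk idea can be reduced to a few lines for a scalar damage feature: the likelihood-ratio threshold follows from the decision costs and the prior probability of damage. The Gaussian feature model and costs below are illustrative, not the dissertation's guided-wave statistics.

```python
# Sketch: Bayes-risk-minimizing likelihood-ratio detector for a scalar feature.
import numpy as np
from scipy import stats

mu0, mu1, sigma = 0.0, 1.5, 1.0       # feature mean when healthy / damaged
p_damage = 0.05
cost_false_alarm, cost_missed = 1.0, 20.0

# Bayes-optimal likelihood-ratio threshold (zero cost for correct decisions)
eta = (cost_false_alarm * (1 - p_damage)) / (cost_missed * p_damage)

# For equal-variance Gaussians the LR test reduces to a threshold on the feature x
x_thresh = (sigma ** 2 * np.log(eta)) / (mu1 - mu0) + (mu0 + mu1) / 2.0

p_fa = stats.norm.sf(x_thresh, loc=mu0, scale=sigma)     # false-alarm probability
p_d = stats.norm.sf(x_thresh, loc=mu1, scale=sigma)      # detection probability
bayes_risk = (cost_false_alarm * p_fa * (1 - p_damage)
              + cost_missed * (1 - p_d) * p_damage)
print(f"threshold x* = {x_thresh:.2f}, P_FA = {p_fa:.3f}, P_D = {p_d:.3f}, risk = {bayes_risk:.3f}")
```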

  1. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1987-10-31

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis.

  2. D-OPTIMAL EXPERIMENTAL DESIGNS TO TEST FOR DEPARTURE FROM ADDITIVITY IN A FIXED-RATIO MIXTURE RAY.

    EPA Science Inventory

    Traditional factorial designs for evaluating interactions among chemicals in a mixture are prohibitive when the number of chemicals is large. However, recent advances in statistically-based experimental design have made it easier to evaluate interactions involving many chemicals...

  3. Active vibration absorber for the CSI evolutionary model - Design and experimental results. [Controls Structures Interaction

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control technology for large flexible structures must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility has been developed to study the practical implementation of new control technologies under realistic conditions. The paper discusses the design of a second-order, acceleration feedback controller which acts as an active vibration absorber. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. Experimental results in the presence of these factors are presented and discussed. The robustness of this design under model uncertainty is demonstrated.
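    A rough sketch of this controller type acting on a single flexible mode: a second-order filter on the measured acceleration is fed back as a force, behaving much like a tuned absorber. The plant and filter parameters are invented and the sketch is generic, not the CSI testbed design.

```python
# Sketch: closed-loop frequency response of a lightly damped mode with a
# second-order acceleration-feedback compensator (generic illustration).
import numpy as np

m, c, k = 1.0, 0.02, 39.48             # one flexible mode (~1 Hz, lightly damped)
g, zf, wf = 0.4, 0.5, 2 * np.pi * 1.0  # absorber gain, damping ratio, tuning frequency

w = 2 * np.pi * np.linspace(0.2, 3.0, 600)
s = 1j * w
G = 1.0 / (m * s**2 + c * s + k)                    # force -> displacement
C = g * wf**2 / (s**2 + 2 * zf * wf * s + wf**2)    # second-order acceleration feedback filter

# Closed loop: u = -C(s) * (s^2 x), so x = G f / (1 + G C s^2)
H_open = np.abs(G)
H_closed = np.abs(G / (1.0 + G * C * s**2))

print(f"open-loop resonant peak  : {H_open.max():.2f}")
print(f"closed-loop resonant peak: {H_closed.max():.3f}")
```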

  4. Designation and Implementation of Microcomputer Principle and Interface Technology Virtual Experimental Platform Website

    NASA Astrophysics Data System (ADS)

    Gao, JinYue; Tang, Yin

    This paper discusses the design and implementation approach for the Microcomputer Principle and Interface Technology virtual experimental platform website. The instructional design of the platform follows student-oriented constructivist learning theory, and its overall structure is organized around the teaching aims, the teaching content, and the interaction methods. The production and development of the virtual experiment platform should take the characteristics of network operation fully into account and adopt appropriate technologies to improve the performance and speed of the web application.

  5. Optimal design and experimental analyses of a new micro-vibration control payload-platform

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen

    2016-07-01

    This paper presents a new payload platform for precision devices that is capable of isolating complex space micro-vibration in the low-frequency range below 5 Hz. The novel payload platform, equipped with smart-material actuators, is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic circuit nonlinearity. The dynamic model of the driving element is then established using the Lagrange method, and the performance of the designed payload platform is further examined by combining a controlled auto-regressive moving average (CARMA) model with a modified generalized predictive control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload platform has impressive potential for micro-vibration isolation.

  6. Self-healing in segmented metallized film capacitors: Experimental and theoretical investigations for engineering design

    NASA Astrophysics Data System (ADS)

    Belko, V. O.; Emelyanov, O. A.

    2016-01-01

    A significant increase in the efficiency of modern metallized film capacitors has been achieved by the application of special segmented nanometer-thick electrodes. The proper design of the electrode segmentation guarantees the best efficiency of the capacitor's self-healing (SH) ability. Meanwhile, the reported theoretical and experimental results have not led to the commonly accepted model of the SH process, since the experimental SH dissipated energy value is several times higher than the calculated one. In this paper, we show that the difference is caused by the heat outflow into polymer film. Based on this, a mathematical model of the metallized electrode destruction is developed. These insights in turn are leading to a better understanding of the SH development. The adequacy of the model is confirmed by both the experiments and the numerical calculations. A procedure of optimal segmented electrode design is offered.

  7. Pliocene Model Intercomparison Project (PlioMIP): Experimental Design and Boundary Conditions (Experiment 2)

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  8. Design and Experimental Results for the S825 Airfoil; Period of Performance: 1998-1999

    SciTech Connect

    Somers, D. M.

    2005-01-01

    A 17%-thick, natural-laminar-flow airfoil, the S825, for the 75% blade radial station of 20- to 40-meter, variable-speed and variable-pitch (toward feather), horizontal-axis wind turbines has been designed and analyzed theoretically and verified experimentally in the NASA Langley Low-Turbulence Pressure Tunnel. The two primary objectives, high maximum lift that is relatively insensitive to roughness and low profile drag, have been achieved. The airfoil exhibits a rapid, trailing-edge stall, which does not meet the design goal of a docile stall. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results generally show good agreement.

  9. Theoretical and Experimental Investigation of Mufflers with Comments on Engine-Exhaust Muffler Design

    NASA Technical Reports Server (NTRS)

    Davis, Don D., Jr.; Stokes, George M.; Moore, Dewey; Stevens, George L., Jr.

    1954-01-01

    Equations are presented for the attenuation characteristics of single-chamber and multiple-chamber mufflers of both the expansion-chamber and resonator types, for tuned side-branch tubes, and for the combination of an expansion chamber with a resonator. Experimental curves of attenuation plotted against frequency are presented for 77 different mufflers with a reflection-free tailpipe termination. The experiments were made at room temperature without flow; the sound source was a loud-speaker. A method is given for including the tailpipe reflections in the calculations. Experimental attenuation curves are presented for four different muffler-tailpipe combinations, and the results are compared with the theory. The application of the theory to the design of engine-exhaust mufflers is discussed, and charts are included for the assistance of the designer.
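    For the simplest case treated in such analyses, a single expansion chamber with a reflection-free termination, the textbook transmission-loss expression can be evaluated directly, as sketched below with illustrative dimensions.

```python
# Sketch: transmission loss of a single expansion-chamber muffler with a
# reflection-free tailpipe termination (classical plane-wave result).
import numpy as np

c = 343.0                      # speed of sound, m/s (room temperature, no flow)
m = 9.0                        # expansion ratio: chamber area / pipe area
L = 0.40                       # chamber length, m

f = np.linspace(20.0, 1200.0, 6)
k = 2 * np.pi * f / c
TL = 10.0 * np.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2 * np.sin(k * L) ** 2)

for fi, tl in zip(f, TL):
    print(f"{fi:7.1f} Hz : attenuation {tl:5.1f} dB")
# Attenuation falls to zero whenever k*L is a multiple of pi, i.e. at f = n*c/(2L).
```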

  10. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems, already known as rainwater infiltration techniques, which reduce runoff formation and diffuse pollution in cities. The present research is focused on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to measure the levels and quality of the stored water over time. Six different pervious surfaces are combined with four different geotextiles in order to test which combinations of materials best preserve the quality of the stored rainwater over time under the specific weather conditions of the north of Spain. The aim of this research was to achieve pervious pavements that simultaneously offer a positive urban service and help to harvest rainwater of sufficient quality to be used for non-potable demands.

  11. Pliocene Model Intercomparison Project (PlioMIP): experimental design and boundary conditions (Experiment 2)

    USGS Publications Warehouse

    Haywood, A.M.; Dowsett, H.J.; Robinson, M.M.; Stoll, D.K.; Dolan, A.M.; Lunt, D.J.; Otto-Bliesner, B.; Chandler, M.A.

    2011-01-01

    The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilises fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.

  12. Experimental evaluation of the Battelle accelerated test design for the solar array at Mead, Nebraska

    NASA Technical Reports Server (NTRS)

    Frickland, P. O.; Repar, J.

    1982-01-01

    A previously developed test design for accelerated aging of photovoltaic modules was experimentally evaluated. The studies included a review of relevant field experience, environmental chamber cycling of full size modules, and electrical and physical evaluation of the effects of accelerated aging during and after the tests. The test results indicated that thermally induced fatigue of the interconnects was the primary mode of module failure as measured by normalized power output. No chemical change in the silicone encapsulant was detectable after 360 test cycles.

  13. Design and experimental characterization of a nonintrusive measurement system of rotating blade vibration

    SciTech Connect

    Nava, P.; Paone, N.; Rossi, G.L.; Tomasini, E.P.

    1994-07-01

    A measurement system for nonintrusive monitoring of rotating blade vibration in turbomachines based on fiber optic sensors is presented. The design of the whole system is discussed; the development of special-purpose sensors, their interfacing to the data acquisition system, and the signal processing are outlined. The processing algorithms are tested by software simulation for several possible blade vibrations. Experimental tests performed on different bladed rotors are presented. Results are compared to simultaneous strain gage measurements.

  14. Design and application of FBG strain experimental apparatus in high temperature

    NASA Astrophysics Data System (ADS)

    Xia, Zhongcheng; Liu, Yueming; Gao, Xiaoliang

    2014-09-01

    Fiber Bragg Grating (FBG) sensing technology has many applications and is widely used for the detection of temperature, strain, etc. At present the application of FBG sensors is limited to temperatures below 200°C owing to the so-called high-temperature erasing phenomenon. Strain detection above 200°C is still an engineering challenge, since high temperature adversely affects the sensor, the testing equipment and the test data; effective measurement apparatus is therefore needed to ensure measurement accuracy above 200°C, but no suitable high-temperature FBG strain experimental apparatus has been available to date. In this paper a high-temperature FBG strain experimental apparatus is designed to detect strain at high temperature. To verify the working condition of the high-temperature FBG strain sensor, an FBG strain sensing experiment is presented. The high-temperature FBG strain sensor was installed in the apparatus, the internal temperature of the apparatus was accurately controlled from -20 to 300°C, strain loading was applied by a counterweight, and the data were recorded with electrical resistance strain measurement and an optical sensing interrogator. The experimental results show that the apparatus works properly above 200°C. The high-temperature FBG strain experimental apparatus is demonstrated to be suitable for high-temperature strain gauges, FBG strain sensors, and similar devices operating over a temperature range of -20 to 300°C, strains of -1500 to +1500 με, and a wavelength resolution of 1 pm.

  15. Experimental design and analysis for accelerated degradation tests with Li-ion cells.

    SciTech Connect

    Doughty, Daniel Harvey; Thomas, Edward Victor; Jungst, Rudolph George; Roth, Emanuel Peter

    2003-08-01

    This document describes a general protocol (involving both experimental and data analytic aspects) that is designed to be a roadmap for rapidly obtaining a useful assessment of the average lifetime (at some specified use conditions) that might be expected from cells of a particular design. The proposed experimental protocol involves a series of accelerated degradation experiments. Through the acquisition of degradation data over time specified by the experimental protocol, an unambiguous assessment of the effects of accelerating factors (e.g., temperature and state of charge) on various measures of the health of a cell (e.g., power fade and capacity fade) will result. In order to assess cell lifetime, it is necessary to develop a model that accurately predicts degradation over a range of the experimental factors. In general, it is difficult to specify an appropriate model form without some preliminary analysis of the data. Nevertheless, assuming that the aging phenomenon relates to a chemical reaction with simple first-order rate kinetics, a data analysis protocol is also provided to construct a useful model that relates performance degradation to the levels of the accelerating factors. This model can then be used to make an accurate assessment of the average cell lifetime. The proposed experimental and data analysis protocols are illustrated with a case study involving the effects of accelerated aging on the power output from Gen-2 cells. For this case study, inadequacies of the simple first-order kinetics model were observed. However, a more complex model allowing for the effects of two concurrent mechanisms provided an accurate representation of the experimental data.
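
    The report's protocol assumes simple first-order rate kinetics with accelerating factors such as temperature. The sketch below is only an illustration of that idea, not the report's analysis: the rate constant, activation energy, stress temperatures, and noise level are invented, and a single Arrhenius-type temperature dependence stands in for the full set of accelerating factors (temperature and state of charge).

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def rate_constant(T_kelvin, A=2.0e5, Ea=50e3):
        """Illustrative Arrhenius rate constant per week (A and Ea are made up)."""
        return A * np.exp(-Ea / (R * T_kelvin))

    # Simulated accelerated-aging data: relative power at three stress temperatures.
    t_weeks = np.arange(0, 41, 4)
    temps_C = [45.0, 55.0, 65.0]
    rng = np.random.default_rng(0)
    data = {T: np.exp(-rate_constant(T + 273.15) * t_weeks)
               * (1 + rng.normal(0, 0.005, t_weeks.size))
            for T in temps_C}

    # First-order kinetics: ln(relative power) is linear in time, slope = -k(T).
    k_hat = {T: -np.polyfit(t_weeks, np.log(p), 1)[0] for T, p in data.items()}

    # Arrhenius fit ln k = ln A - Ea/(R T), then extrapolate to the use temperature.
    invT = np.array([1.0 / (T + 273.15) for T in temps_C])
    slope, intercept = np.polyfit(invT, np.log(list(k_hat.values())), 1)
    k_use = np.exp(intercept + slope / (25.0 + 273.15))
    print(f"estimated Ea = {-slope * R / 1e3:.1f} kJ/mol")
    print(f"weeks to 20% power fade at 25 C: {np.log(1 / 0.8) / k_use:.0f}")
    ```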

  16. Selecting appropriate animal models and experimental designs for endocrine disruptor research and testing studies.

    PubMed

    Stokes, William S

    2004-01-01

    Evidence that chemicals in the environment may cause developmental and reproductive abnormalities in fish and wildlife by disrupting normal endocrine functions has increased concern about potential adverse human health effects from such chemicals. US laws have now been enacted that require the US Environmental Protection Agency (EPA) to develop and validate a screening program to identify chemicals in food and water with potential endocrine-disrupting activity. EPA subsequently proposed an Endocrine Disruptor Screening Program that uses in vitro and in vivo test systems to identify chemicals that may adversely affect humans and ecologically important animal species. However, the endocrine system can be readily modulated by many experimental factors, including diet and the genetic background of the selected animal strain or stock. It is therefore desirable to minimize or avoid factors that cause or contribute to experimental variation in endocrine disruptor research and testing studies. Standard laboratory animal diets contain high and variable levels of phytoestrogens, which can modulate physiologic and behavioral responses in the same way as both endogenous estrogen and exogenous estrogenic chemicals. Other studies have determined that some commonly used outbred mice and rats are less responsive to estrogenic substances than certain inbred mouse and rat strains for various estrogen-sensitive endpoints. It is therefore critical to select appropriate biological models and diets for endocrine disruptor studies that provide optimal sensitivity and specificity to accomplish the research or testing objectives. An introduction is provided to 11 other papers in this issue that review these and other important laboratory animal experimental design considerations in greater detail, and that review laboratory animal and in vitro models currently being used or evaluated for endocrine disruptor research and testing. Selection of appropriate animal models and experimental design

  17. JEAB Research Over Time: Species Used, Experimental Designs, Statistical Analyses, and Sex of Subjects.

    PubMed

    Zimmermann, Zachary J; Watkins, Erin E; Poling, Alan

    2015-10-01

    We examined the species used as subjects in every article published in the Journal of the Experimental Analysis of Behavior (JEAB) from 1958 through 2013. We also determined the sex of subjects in every article with human subjects (N = 524) and in an equal number of randomly selected articles with nonhuman subjects, as well as the general type of experimental designs used. Finally, the percentage of articles reporting an inferential statistic was determined at 5-year intervals. In all, 35,317 subjects were studied in 3,084 articles; pigeons ranked first and humans second in number used. Within-subject experimental designs were more popular than between-subjects designs regardless of whether human or nonhuman subjects were studied but were used in a higher percentage of articles with nonhumans (75.4 %) than in articles with humans (68.2 %). The percentage of articles reporting an inferential statistic has increased over time, and more than half of the articles published in 2005 and 2010 reported one. Researchers who publish in JEAB frequently depart from Skinner's preferred research strategy, but it is not clear whether such departures are harmful. Finally, the sex of subjects was not reported in a sizable percentage of articles with both human and nonhuman subjects. This is an unfortunate oversight. PMID:27606171

  18. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information gained about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory with a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experimental design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on our target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics, or its potential for reservoir monitoring.
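
    The core of the multi-objective step is Pareto (non-dominated) ranking of candidate designs, which NSGA-II performs internally. The fragment below is a minimal illustration of that ranking only, not of the authors' algorithm: the two objective values per design (target-layer parameter uncertainty from a linearized inversion, and survey cost) are made-up numbers, and a full NSGA-II implementation would add crossover, mutation, and crowding-distance selection.

    ```python
    import numpy as np

    def pareto_front(scores):
        """Indices of non-dominated designs, with every objective to be minimized."""
        front = []
        for i, s in enumerate(scores):
            dominated = any(np.all(t <= s) and np.any(t < s)
                            for j, t in enumerate(scores) if j != i)
            if not dominated:
                front.append(i)
        return front

    # Hypothetical candidate surveys scored by two objectives (invented numbers):
    # OF1 = posterior uncertainty of the target-layer resistivity from a linearized
    #       inversion of the design, OF2 = survey cost (number of receivers).
    scores = np.array([
        [0.42, 12], [0.35, 20], [0.31, 32], [0.55, 8], [0.45, 25], [0.36, 18],
    ])
    print("non-dominated designs:", pareto_front(scores))
    # NSGA-II keeps such a front as its elite set while evolving new designs.
    ```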

  19. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data that are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, often increasing the cost of the reactor. Therefore, basic nuclear data need to be improved, which is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment that provides the data needed for uncertainty reduction, such that a reactor design concept can meet its target accuracies or realize savings by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment
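
    The computational kernel described here is the propagation of nuclear-data covariances through response sensitivities (the sandwich rule) and the shrinkage of those covariances when experimental data are assimilated. The sketch below illustrates both steps with a generalized-least-squares style update on invented numbers; the sensitivity coefficients, covariances, and experimental variance are placeholders, not values from the report.

    ```python
    import numpy as np

    # Sandwich-rule propagation: Cov(R) = S Cov(d) S^T, where S holds response
    # sensitivities to nuclear-data parameters (all numbers are illustrative).
    S = np.array([[0.8, -0.3, 0.1],      # response 1 (e.g. k-eff)
                  [0.2,  0.6, -0.4]])    # response 2 (e.g. a reactivity coefficient)
    cov_data = np.diag([0.02, 0.05, 0.03]) ** 2   # prior covariance of the data

    prior_resp_cov = S @ cov_data @ S.T
    print("prior response std devs:", np.sqrt(np.diag(prior_resp_cov)))

    # GLS-style assimilation of one experiment that measures response 1 with
    # variance v_exp: the data covariance shrinks where the experiment is sensitive.
    s1, v_exp = S[0:1, :], 0.01 ** 2
    gain = cov_data @ s1.T @ np.linalg.inv(s1 @ cov_data @ s1.T + v_exp)
    cov_post = cov_data - gain @ s1 @ cov_data
    post_resp_cov = S @ cov_post @ S.T
    print("posterior response std devs:", np.sqrt(np.diag(post_resp_cov)))
    ```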

  20. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins of Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch, and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄, and starch showed a positive effect on delta-endotoxin production, whereas FeSO₄ and MnSO₄ showed an opposite effect. The developed model, based on Bayesian techniques, can automatically learn models emerging from the data to serve in the prediction of delta-endotoxin concentrations. The constructed model in the present study implies that experimental design (Plackett-Burman design) combined with Bayesian network methods could be used to identify the variables affecting delta-endotoxin variation.
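
    For seven medium components, the screening design used here is a standard 8-run Plackett-Burman design. The sketch below builds such a design by the textbook construction (cyclic shifts of a generating row plus an all-minus run) and checks column orthogonality; it reproduces the design type, not the authors' actual run order or analysis.

    ```python
    import numpy as np

    # 8-run Plackett-Burman design for up to 7 two-level factors, built from
    # cyclic shifts of a generating row plus a final all-minus run.
    generator = np.array([+1, +1, +1, -1, +1, -1, -1])
    rows = [np.roll(generator, i) for i in range(7)]
    design = np.vstack(rows + [-np.ones(7, dtype=int)])

    factors = ["soybean meal", "starch", "KH2PO4", "K2HPO4", "FeSO4", "MnSO4", "MgSO4"]
    print(" run  " + "  ".join(f"{name[:6]:>6s}" for name in factors))
    for r, row in enumerate(design, 1):
        print(f"{r:4d}  " + "  ".join(f"{'+' if v > 0 else '-':>6s}" for v in row))

    # Orthogonality check: columns are mutually orthogonal, so each main effect is
    # estimated independently as (mean response at '+') minus (mean response at '-').
    assert np.array_equal(design.T @ design, 8 * np.eye(7, dtype=int))
    ```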

  2. Experimental study of a cylindrical air inlet designed on the basis of plane flows

    NASA Astrophysics Data System (ADS)

    Vnuchkov, D. A.; Zvegintsev, V. I.; Nalivaichenko, D. G.

    2014-04-01

    Results of an experimental study of a cylindrical air inlet designed for high flight speeds on the basis of plane flows are reported. For an air inlet intended for Mach number M = 4, the flow-rate characteristics at M = 2.85, 3.83, and 4.95 for angles of attack ranging from 0 to 9 degrees have been measured. The tests have shown that at a free-stream Mach number M = 3.83, close to the design Mach number, the mass rate of the air flow captured by the air inlet was 96 % of its design value, and this rate increased to 99 % as the Mach number was increased to 4.95. At a free-stream Mach number below the design value, M = 2.85, the mass rate of the air flow captured by the inlet at zero angle of attack decreased to 68 %. For all the examined Mach numbers, an increase in the angle of attack from 0 to 9 degrees resulted in an 8-14 % decrease of the mass rate of inlet-captured air flow. For comparison, a numerical calculation of the air-inlet flow at Mach number M = 3.83 was performed; the computed data were found to be in qualitative agreement with the experimental data.

  3. Optimization of single-walled carbon nanotube solubility by noncovalent PEGylation using experimental design methods.

    PubMed

    Hadidi, Naghmeh; Kobarfard, Farzad; Nafissi-Varcheh, Nastaran; Aboofazeli, Reza

    2011-01-01

    In this study, noncovalent functionalization of single-walled carbon nanotubes (SWCNTs) with phospholipid-polyethylene glycols (Pl-PEGs) was performed to improve the solubility of SWCNTs in aqueous solution. Two kinds of PEG derivatives, i.e., Pl-PEG 2000 and Pl-PEG 5000, were used for the PEGylation process. An experimental design technique (D-optimal design and second-order polynomial equations) was applied to investigate the effect of variables on PEGylation and the solubility of SWCNTs. The type of PEG derivative was selected as a qualitative parameter, and the PEG/SWCNT weight ratio and sonication time were applied as quantitative variables for the experimental design. Optimization was performed for two responses, aqueous solubility and loading efficiency. The grafting of PEG to the carbon nanostructure was determined by thermogravimetric analysis, Raman spectroscopy, and scanning electron microscopy. Aqueous solubility and loading efficiency were determined by ultraviolet-visible spectrophotometry and measurement of free amine groups, respectively. Results showed that Pl-PEGs were grafted onto SWCNTs. Aqueous solubility of 0.84 mg/mL and loading efficiency of nearly 98% were achieved for the prepared Pl-PEG 5000-SWCNT conjugates. Evaluation of functionalized SWCNTs showed that our noncovalent functionalization protocol could considerably increase aqueous solubility, which is an essential criterion in the design of a carbon nanotube-based drug delivery system and its biodistribution.
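
    A D-optimal design maximizes det(XᵀX) for the assumed model, here a second-order polynomial in the two quantitative variables plus a two-level qualitative factor. The sketch below shows a schematic greedy (exchange-style) construction of such a design over a coded candidate grid; the candidate levels, run count, and ridge regularization are illustrative choices, not the software or settings used in the study.

    ```python
    import numpy as np
    from itertools import product

    def model_matrix(runs):
        """Second-order model in coded x1 (PEG/SWCNT ratio) and x2 (sonication
        time), plus a two-level qualitative factor z (PEG derivative)."""
        return np.array([[1, x1, x2, z, x1 * x2, x1 * z, x2 * z, x1**2, x2**2]
                         for x1, x2, z in runs], dtype=float)

    def log_d_criterion(runs, ridge=1e-6):
        """log det(X'X); a small ridge keeps the criterion finite for tiny designs."""
        X = model_matrix(runs)
        _, logdet = np.linalg.slogdet(X.T @ X + ridge * np.eye(X.shape[1]))
        return logdet

    # Candidate runs: coded levels -1/0/+1 for the quantitative factors, crossed
    # with the two PEG types coded -1/+1 (18 candidate points).
    candidates = list(product((-1, 0, 1), (-1, 0, 1), (-1, 1)))

    # Greedy construction of a 12-run design: repeatedly append the candidate
    # (repeats allowed, i.e. replication) that most increases the D-criterion.
    design = []
    while len(design) < 12:
        design.append(max(candidates, key=lambda c: log_d_criterion(design + [c])))

    print("chosen runs:", design)
    print("log det(X'X) =", round(log_d_criterion(design), 2))
    ```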

  4. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but many questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results, which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data.
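
    One common way to specify predictors for a single case in an AB-type or multiple-baseline design is an interrupted time-series regression with an intercept, a time trend, a phase dummy, and a time-since-intervention term. The sketch below builds that design matrix and recovers illustrative level- and slope-change coefficients by least squares; it is one parameterization among the several the article discusses, applied to invented data.

    ```python
    import numpy as np

    def ssed_design_matrix(n_baseline, n_treatment):
        """Design matrix for one AB case: columns are [intercept, time,
        phase dummy, time since intervention x dummy]. One common
        parameterization; other codings recenter the time variable differently."""
        n = n_baseline + n_treatment
        t = np.arange(1, n + 1)
        d = (t > n_baseline).astype(float)              # 0 in baseline, 1 in treatment
        t_since = np.where(d == 1, t - (n_baseline + 1), 0.0)
        return np.column_stack([np.ones(n), t, d, t_since])

    X = ssed_design_matrix(5, 7)
    # beta[2] is the immediate level change, beta[3] the change in slope.
    rng = np.random.default_rng(1)
    beta_true = np.array([2.0, 0.1, 1.5, 0.3])
    y = X @ beta_true + rng.normal(0, 0.2, X.shape[0])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.round(beta_hat, 2))
    ```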

  5. Using an Animal Group Vigilance Practical Session to Give Learners a "Heads-Up" to Problems in Experimental Design

    ERIC Educational Resources Information Center

    Rands, Sean A.

    2011-01-01

    The design of experimental ecological fieldwork is difficult to teach to classes, particularly when protocols for data collection are normally carefully controlled by the class organiser. Normally, reinforcement of some problems of experimental design, such as the avoidance of pseudoreplication and appropriate sampling techniques, does not occur…

  6. Development of a Model for Measuring Scientific Processing Skills Based on Brain-Imaging Technology: Focused on the Experimental Design Process

    ERIC Educational Resources Information Center

    Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju

    2014-01-01

    The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…

  7. Randomized block experimental designs can increase the power and reproducibility of laboratory animal experiments.

    PubMed

    Festing, Michael F W

    2014-01-01

    Randomized block experimental designs have been widely used in agricultural and industrial research for many decades. Usually they are more powerful, have higher external validity, are less subject to bias, and produce more reproducible results than the completely randomized designs typically used in research involving laboratory animals. Reproducibility can be further increased by using time as a blocking factor. These benefits can be achieved at no extra cost. A small experiment investigating the effect of an antioxidant on the activity of a liver enzyme in four inbred mouse strains, which had two replications (blocks) separated by a period of two months, illustrates this approach. The widespread failure to use these designs more widely in research involving laboratory animals has probably led to a substantial waste of animals, money, and scientific resources and slowed down the development of new treatments for human and animal diseases.
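
    A minimal sketch of the idea, with invented numbers: treatments are randomized within each time block, and the treatment effect is estimated from within-block (here, within block-and-strain) differences, so the block shift cancels out.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    treatments = ["control", "antioxidant"]
    strains = ["A", "B", "C", "D"]
    blocks = ["rep_month_0", "rep_month_2"]   # two replications in time

    # Randomized block layout: within each block every strain x treatment
    # combination appears once, processed in a randomized order.
    layout = []
    for b in blocks:
        cells = [(b, s, t) for s in strains for t in treatments]
        layout.extend(cells[i] for i in rng.permutation(len(cells)))

    # Simulated enzyme activity with a block shift, strain differences, and a
    # true treatment effect of +1.0 (all values invented).
    block_eff = {"rep_month_0": 0.0, "rep_month_2": 0.8}
    strain_eff = dict(zip(strains, [0.0, 0.5, -0.3, 0.2]))
    y = {key: 10 + block_eff[key[0]] + strain_eff[key[1]]
              + (1.0 if key[2] == "antioxidant" else 0.0) + rng.normal(0, 0.3)
         for key in layout}

    # Every block contains both treatments, so the treatment effect is estimated
    # from within-block differences that are free of the block shift.
    diffs = [y[(b, s, "antioxidant")] - y[(b, s, "control")]
             for b in blocks for s in strains]
    print(f"estimated treatment effect: {np.mean(diffs):.2f} (true value 1.0)")
    ```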

  8. Active vibration absorber for CSI evolutionary model: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Bruner, Anne M.; Belvin, W. Keith; Horta, Lucas G.; Juang, Jer-Nan

    1991-01-01

    The development of control technology for large flexible structures must include practical demonstrations to aid in the understanding and characterization of controlled structures in space. To support this effort, a testbed facility was developed to study practical implementation of new control technologies under realistic conditions. The design of a second-order, acceleration feedback controller, which acts as an active vibration absorber, is discussed. This controller provides guaranteed stability margins for collocated sensor/actuator pairs in the absence of sensor/actuator dynamics and computational time delay. The primary performance objective considered is damping augmentation of the first nine structural modes. Comparison of experimental and predicted closed loop damping is presented, including test and simulation time histories for open and closed loop cases. Although the simulation and test results are not in full agreement, robustness of this design under model uncertainty is demonstrated. The basic advantage of this second order controller design is that the stability of the controller is model independent.

  9. The Langley Research Center CSI phase-0 evolutionary model testbed-design and experimental results

    NASA Technical Reports Server (NTRS)

    Belvin, W. K.; Horta, Lucas G.; Elliott, K. B.

    1991-01-01

    A testbed for the development of Controls Structures Interaction (CSI) technology is described. The design philosophy, capabilities, and early experimental results are presented to introduce some of the ongoing CSI research at NASA-Langley. The testbed, referred to as the Phase 0 version of the CSI Evolutionary model (CEM), is the first stage of model complexity designed to show the benefits of CSI technology and to identify weaknesses in current capabilities. Early closed loop test results have shown non-model based controllers can provide an order of magnitude increase in damping in the first few flexible vibration modes. Model based controllers for higher performance will need to be robust to model uncertainty as verified by System ID tests. Data are presented that show finite element model predictions of frequency differ from those obtained from tests. Plans are also presented for evolution of the CEM to study integrated controller and structure design as well as multiple payload dynamics.

  10. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture, however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $ 120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi

  11. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
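
    The maximal information criterion here is the sum of squared sensitivities over the selected observation wells. The toy example below applies it with an exhaustive search on an invented sensitivity matrix; at realistic problem sizes this search is exactly what forces the switch to a genetic algorithm with POD model reduction.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    # Jacobian of drawdown at 12 candidate observation wells with respect to 4
    # unknown pumping rates (illustrative numbers standing in for model output).
    J = rng.normal(size=(12, 4))

    def information(subset):
        """Maximal-information criterion used here: sum of squared sensitivities
        over the selected observation wells."""
        return float(np.sum(J[list(subset), :] ** 2))

    best = max(combinations(range(12), 4), key=information)
    print("best 4-well network (exhaustive search):", best)
    # For a realistic model this enumeration is infeasible, which is why the
    # paper couples a genetic algorithm with POD model reduction instead.
    ```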

  12. Facility for Advanced Accelerator Experimental Tests at SLAC (FACET) Conceptual Design Report

    SciTech Connect

    Amann, J.; Bane, K. (SLAC)

    2009-10-30

    This Conceptual Design Report (CDR) describes the design of FACET. It will be updated to stay current with the developing design of the facility. This CDR begins as the baseline conceptual design and will evolve into an 'as-built' manual for the completed facility. The Executive Summary, Chapter 1, gives an introduction to the FACET project and describes the salient features of its design. Chapter 2 gives an overview of FACET. It describes the general parameters of the machine and the basic approaches to implementation. The FACET project does not include the implementation of specific scientific experiments, either for plasma wakefield acceleration or for other applications. Nonetheless, enough work has been done to define potential experiments to assure that the facility can meet the requirements of the experimental community. Chapter 3, Scientific Case, describes the planned plasma wakefield and other experiments. Chapter 4, Technical Description of FACET, describes the parameters and design of all technical systems of FACET. FACET uses the first two thirds of the existing SLAC linac to accelerate the beam to about 20 GeV and compress it with the aid of two chicanes, located in Sector 10 and Sector 20. The Sector 20 area will include a focusing system, the generic experimental area, and the beam dump. Chapter 5, Management of Scientific Program, describes the management of the scientific program at FACET. Chapter 6, Environment, Safety and Health and Quality Assurance, describes the existing programs at SLAC and their application to the FACET project. It includes a preliminary analysis of safety hazards and the planned mitigation. Chapter 7, Work Breakdown Structure, describes the structure used for developing the cost estimates, which will also be used to manage the project. The chapter defines the scope of work of each element down to level 3.

  13. Use of experimental data in testing methods for design against uncertainty

    NASA Astrophysics Data System (ADS)

    Rosca, Raluca Ioana

    Modern methods of design take into consideration the fact that uncertainty is present in everyday life, whether in the form of variable loads (the strongest wind that would affect a building), the material properties of an alloy, or future demand for a product or the cost of labor. Moreover, the Japanese example showed that it may be more cost-effective to design while taking the existence of the uncertainty into account rather than to plan to eliminate or greatly reduce it. The dissertation starts by comparing the theoretical basis of two methods for design against uncertainty, namely probability theory and possibility theory. A two-variable design problem is then used to show the differences. It is concluded that for design problems with two or more failure modes of very different magnitude (such as a car stopping for lack of fuel versus engine failure), probability theory divides existing resources in a more intuitive way than possibility theory. The dissertation continues with a description of simple experiments (building towers of dominoes) and then presents a methodology to increase the amount of information that can be drawn from a given data set. The methodology is demonstrated on the Bidder-Challenger problem, a simulation of a problem faced by a company that makes microchips in setting a target speed for its next microchip. The simulations use the domino experimental data. It is demonstrated that important insights into methods of probability- and possibility-based design can be gained from experiments.

  14. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991

  17. Design and Experimental Demonstration of Cherenkov Radiation Source Based on Metallic Photonic Crystal Slow Wave Structure

    NASA Astrophysics Data System (ADS)

    Fu, Tao; Yang, Zi-Qiang; Ouyang, Zheng-Biao

    2016-11-01

    This paper presents a kind of Cherenkov radiation source based on a metallic photonic crystal (MPC) slow-wave structure (SWS) cavity. The Cherenkov source designed by linear theory works at 34.7 GHz when the cathode voltage is 550 kV. A three-dimensional particle-in-cell (PIC) simulation of the SWS shows an operating frequency of 35.56 GHz with a single TM01 mode, basically consistent with the theoretical value under the same parameters. An experiment was carried out to verify the results of the theory and the PIC simulation. The experimental system includes a cathode emitting unit, the SWS, a magnetic system, an output antenna, and detectors. Experimental results, obtained by detecting the retarded time of wave propagation in the waveguides, show an operating frequency of around 35.5 GHz with a single TM01 mode and an output power reaching 54 MW. This indicates that the MPC structure can reduce mode competition. The purpose of the paper is to show, in theory and in a preliminary experiment, that an SWS with a photonic band gap can produce microwaves in the TM01 mode; it also provides a good experimental and theoretical foundation for designing high-power microwave devices.

  18. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    SciTech Connect

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
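
    As a purely probabilistic simplification of the clearance statement (ignoring the judgmental component of the CJR approach), assume a uniform prior on the clean fraction of an area and n random samples that all come back negative; the posterior is then Beta(n+1, 1), which gives closed-form X%/Y% statements. The numbers below are illustrative only.

    ```python
    import math

    def confidence_clean(n_negative, y_fraction):
        """P(clean fraction >= y | n random samples, all negative), assuming a
        uniform prior on the clean fraction, so the posterior is Beta(n+1, 1)."""
        return 1.0 - y_fraction ** (n_negative + 1)

    def samples_needed(x_conf, y_fraction):
        """Smallest n supporting an 'X% confidence that at least Y% is clean' claim."""
        n = math.ceil(math.log(1.0 - x_conf) / math.log(y_fraction) - 1.0)
        return max(n, 0)

    n = samples_needed(0.95, 0.99)
    print(n, confidence_clean(n, 0.99))   # 298 negative samples -> a 95%/99% statement
    ```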

  19. A MILP-based flux alternative generation and NMR experimental design strategy for metabolic engineering.

    PubMed

    Phalakornkule, C; Lee, S; Zhu, T; Koepsel, R; Ataai, M M; Grossmann, I E; Domach, M M

    2001-04-01

    A mixed-integer linear program (MILP) is described that can enumerate all the ways fluxes can distribute in a metabolic network while still satisfying the same constraints and objective function. The multiple solutions can be used to (1) generate alternative flux scenarios that can account for limited experimental observations, (2) forecast the potential responses to mutation (e.g., new reaction pathways may be used), and (3) (as illustrated) design (13)C NMR experiments such that different potential flux patterns in a mutant can be distinguished. The experimental design is enabled by using the MILP results as an input to an isotopomer mapping matrices (IMM)-based program, which accounts for the network circulation of (13)C from a precursor such as glucose. The IMM-based program can interface to common plotting programs with the result that the user is provided with predicted NMR spectra that are complete with splittings and Lorentzian line-shape features. The example considered is the trafficking of carbon in an Escherichia coli mutant, which has pyruvate kinase activity deleted for the purpose of eliminating acetate production. Similar yields and extracellular measurements would be manifested by the flux alternatives. The MILP-IMM results suggest how NMR experiments can be designed such that the spectra of glutamate for two flux distribution scenarios differ significantly.

  20. Computational simulations of frictional losses in pipe networks confirmed in experimental apparatuses designed by honors students

    NASA Astrophysics Data System (ADS)

    Pohlman, Nicholas A.; Hynes, Eric; Kutz, April

    2015-11-01

    Introductory fluid mechanics lectures at NIU include both students with standard enrollment and students seeking honors credit for an enriching experience. Most honors students dread the additional homework problems or an extra paper assigned by the instructor. During the past three years, honors students in my class have instead collaborated to design wet-lab experiments for their peers to predict variable volume flow rates from open reservoirs driven by gravity. Rather than doing extra work, the honors students learn the Bernoulli head-loss equation earlier in order to design appropriate systems for an experimental wet lab. Prior designs incorporated minor-loss features such as a sudden contraction or multiple unions and valves. The honors students from Spring 2015 expanded the repertoire of available options by developing large-scale set-ups with multiple pipe networks that could be combined together to test the flexibility of the student teams' computational programs. The engagement of bridging theory with practice was appreciated by all of the students, and multiple teams were able to predict performance within 4% accuracy. The challenges, schedules, and cost estimates of incorporating the experimental lab into an introductory fluid mechanics course will be reported.
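
    The students' predictions rest on the Bernoulli head-loss balance: the available head equals the sum of the velocity head, the Darcy-Weisbach friction loss, and the minor losses. The sketch below solves that balance for a gravity-driven reservoir by fixed-point iteration, using the Swamee-Jain friction-factor approximation and assuming turbulent flow; the pipe dimensions, roughness, and minor-loss coefficients are invented, not taken from the student apparatuses.

    ```python
    import math

    def swamee_jain(re, rel_rough):
        """Explicit approximation to the Colebrook turbulent friction factor."""
        return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / re ** 0.9) ** 2

    def gravity_drain_velocity(head, length, diam, k_minor,
                               rough=1.5e-6, nu=1.0e-6, g=9.81):
        """Velocity in a pipe draining an open reservoir under head `head`:
        head = V^2/(2g) * (1 + f L/D + sum K), solved by fixed-point iteration."""
        v = math.sqrt(2 * g * head)                  # initial guess: no friction
        for _ in range(50):
            f = swamee_jain(max(v * diam / nu, 4000.0), rough / diam)
            v_new = math.sqrt(2 * g * head / (1 + f * length / diam + k_minor))
            if abs(v_new - v) < 1e-9:
                break
            v = v_new
        return v

    # Illustrative numbers: 0.8 m of head, 2 m of 12 mm tubing, one union + one valve.
    v = gravity_drain_velocity(head=0.8, length=2.0, diam=0.012, k_minor=0.9 + 2.0)
    q = v * math.pi * 0.012 ** 2 / 4
    print(f"V = {v:.2f} m/s, Q = {q * 1e3:.3f} L/s")
    ```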

  1. Experimental Investigation of a Point Design Optimized Arrow Wing HSCT Configuration

    NASA Technical Reports Server (NTRS)

    Narducci, Robert P.; Sundaram, P.; Agrawal, Shreekant; Cheung, S.; Arslan, A. E.; Martin, G. L.

    1999-01-01

    The M2.4-7A Arrow Wing HSCT configuration was optimized for straight and level cruise at a Mach number of 2.4 and a lift coefficient of 0.10. A quasi-Newton optimization scheme maximized the lift-to-drag ratio (by minimizing drag-to-lift) using Euler solutions from FL067 to estimate the lift and drag forces. A 1.675% wind-tunnel model of the Opt5 HSCT configuration was built to validate the design methodology. Experimental data gathered at the NASA Langley Unitary Plan Wind Tunnel (UPWT) section #2 facility verified CFL3D Euler and Navier-Stokes predictions of the Opt5 performance at the design point. In turn, CFL3D confirmed the improvement in the lift-to-drag ratio obtained during the optimization, thus validating the design procedure. A database of off-design conditions was obtained during three wind-tunnel tests. The entry into NASA Langley UPWT section #2 obtained data at a free-stream Mach number, M(sub infinity), of 2.55 as well as at the design Mach number, M(sub infinity)=2.4. Data over a Mach number range of 1.8 to 2.4 were taken at UPWT section #1, and data at transonic and low supersonic Mach numbers, M(sub infinity)=0.6 to 1.2, were gathered at the NASA Langley 16 ft. Transonic Wind Tunnel (TWT). In addition to good agreement between CFD and experimental data, highlights from the wind-tunnel tests include a trip dot study suggesting a linear relationship between trip dot drag and Mach number, an aeroelastic study that measured the outboard wing deflection and twist, and a flap scheduling study that identifies the possibility of only one leading-edge and trailing-edge flap setting for transonic cruise and another for low supersonic acceleration.

  2. Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.

    PubMed

    Michaud, J-P; Schoenly, Kenneth G; Moreau, G

    2012-01-01

    Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.

  3. Experimental design in caecilian systematics: phylogenetic information of mitochondrial genomes and nuclear rag1.

    PubMed

    San Mauro, Diego; Gower, David J; Massingham, Tim; Wilkinson, Mark; Zardoya, Rafael; Cotton, James A

    2009-08-01

    In molecular phylogenetic studies, a major aspect of experimental design concerns the choice of markers and taxa. Although previous studies have investigated the phylogenetic performance of different genes and the effectiveness of increasing taxon sampling, their conclusions are partly contradictory, probably because they are highly context specific and dependent on the group of organisms used in each study. Goldman introduced a method for experimental design in phylogenetics based on the expected information to be gained that has barely been used in practice. Here we use this method to explore the phylogenetic utility of mitochondrial (mt) genes, mt genomes, and nuclear rag1 for studies of the systematics of caecilian amphibians, as well as the effect of taxon addition on the stabilization of a controversial branch of the tree. Overall phylogenetic information estimates per gene, specific estimates per branch of the tree, estimates for combined (mitogenomic) data sets, and estimates as a hypothetical new taxon is added to different parts of the caecilian tree are calculated and compared. In general, the most informative data sets are those for mt transfer and ribosomal RNA genes. Our results also show at which positions in the caecilian tree the addition of taxa have the greatest potential to increase phylogenetic information with respect to the controversial relationships of Scolecomorphus, Boulengerula, and all other teresomatan caecilians. These positions are, as intuitively expected, mostly (but not all) adjacent to the controversial branch. Generating whole mitogenomic and rag1 data for additional taxa joining the Scolecomorphus branch may be a more efficient strategy than sequencing a similar amount of additional nucleotides spread across the current caecilian taxon sampling. The methodology employed in this study allows an a priori evaluation and testable predictions of the appropriateness of particular experimental designs to solve specific questions at

  5. Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Cheng, W.; Yeh, W. W.

    2012-12-01

    An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem which employs a maximal information criterion to choose among competing designs subject to the specified design constraints. Because of the combinatorial search required in this optimization problem, given a realistic, large-scale groundwater model, the dimensionality of the optimal design problem becomes very large and can be difficult if not impossible to solve using mathematical programming techniques such as integer programming or the Simplex with relaxation. Global search techniques, such as Genetic Algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls of a groundwater model, this approach may still be infeasible to use to find the optimal design in a realistic groundwater model. Proper Orthogonal Decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. Results for a one-dimensional test case show identical results among using GA, integer programming, and an exhaustive search demonstrating that GA is a valid method for use in a global optimum search and has potential for solving large-scale optimal design problems. Additionally, other results show that the algorithm using GA with POD model reduction is several orders of magnitude faster than an algorithm that employs GA without POD model reduction in terms of time required to find the optimal solution. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.
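
    The model-reduction step rests on proper orthogonal decomposition: collect snapshots of the full model state, take their SVD, and keep the leading left singular vectors as a reduced basis onto which the governing equations are projected. The sketch below illustrates only the basis construction on synthetic snapshot data; the snapshot fields, energy threshold, and dimensions are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Snapshot matrix: columns are hydraulic-head fields of a toy model sampled
    # at 200 nodes for 30 pumping scenarios (synthetic data for illustration).
    n_nodes, n_snapshots = 200, 30
    x = np.linspace(0, 1, n_nodes)
    snapshots = np.column_stack([
        np.sin((k % 5 + 1) * np.pi * x) * rng.uniform(0.5, 2.0)
        + 0.01 * rng.normal(size=n_nodes)
        for k in range(n_snapshots)
    ])

    # POD basis from the SVD of the snapshots; keep enough modes to capture
    # 99.9% of the snapshot energy (sum of squared singular values).
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 0.999) + 1)
    basis = U[:, :r]
    print(f"reduced basis size: {r} modes (from {n_nodes} nodes)")

    # The full state is approximated as h ~ basis @ a, so the flow equations can
    # be projected onto `basis`, giving an r-dimensional model cheap enough for
    # the GA to call thousands of times during the design search.
    a = basis.T @ snapshots[:, 0]
    print("reconstruction error of first snapshot:",
          float(np.linalg.norm(snapshots[:, 0] - basis @ a)))
    ```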

  6. Aerodynamic Design of Axial-flow Compressors. VI - Experimental Flow in Two-Dimensional Cascades

    NASA Technical Reports Server (NTRS)

    Lieblein, Seymour

    1955-01-01

    Available experimental two-dimensional cascade data for conventional compressor blade sections are correlated at a reference incidence angle in the region of minimum loss. Variations of reference incidence angle, total-pressure loss, and deviation angle with cascade geometry, inlet Mach number, and Reynolds number are investigated. From the analysis and the correlations of the available data, rules and relations are evolved for the prediction of blade-profile performance. These relations are developed in simplified forms readily applicable to compressor design procedures.

  7. Quiet Clean Short-Haul Experimental Engine (QCSEE): Acoustic treatment development and design

    NASA Technical Reports Server (NTRS)

    Clemons, A.

    1979-01-01

    Acoustic treatment designs for the quiet clean short-haul experimental engines are defined. The procedures used in the development of each noise-source suppressor device are presented and discussed in detail. A complete description of all treatment concepts considered and the test facilities utilized in obtaining background data used in treatment development are also described. Additional supporting investigations that are complementary to the treatment development work are presented. The expected suppression results for each treatment configuration are given in terms of delta SPL versus frequency and in terms of delta PNdB.

  8. Design and experimental investigations on a small scale traveling wave thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Chen, M.; Ju, Y. L.

    2013-02-01

    A small scale traveling wave or Stirling thermoacoustic engine with a resonator of only 1 m length was designed, constructed and tested by using nitrogen as working gas. The small heat engine achieved a steady working frequency of 45 Hz. The pressure ratio reached 1.189, with an average charge pressure of 0.53 MPa and a heating power of 1.14 kW. The temperature and the pressure characteristics during the onset and damping processes were also observed and discussed. The experimental results demonstrated that the small engine possessed the potential to drive a Stirling-type pulse tube cryocooler.

  9. Optimization of polyvinylidene fluoride (PVDF) membrane fabrication for protein binding using statistical experimental design.

    PubMed

    Ahmad, A L; Ideris, N; Ooi, B S; Low, S C; Ismail, A

    2016-01-01

    Statistical experimental design was employed to optimize the preparation conditions of polyvinylidene fluoride (PVDF) membranes. The three variables considered were polymer concentration, dissolving temperature, and casting thickness, and the response variable was membrane-protein binding. The optimum preparation for the PVDF membrane was a polymer concentration of 16.55 wt%, a dissolving temperature of 27.5°C, and a casting thickness of 450 µm. The statistical model exhibits a deviation between the predicted and actual responses of less than 5%. Further characterization of the formed PVDF membrane showed that the morphology of the membrane was in line with the membrane-protein binding performance. PMID:27088961

  10. Experimental Design for CMIP6: Aerosol, Land Use, and Future Scenarios Final Report

    SciTech Connect

    Arnott, James

    2015-10-30

    The Aspen Global Change Institute hosted a technical science workshop entitled, “Experimental design for CMIP6: Aerosol, Land Use, and Future Scenarios,” on August 3-8, 2014 in Aspen, CO. Claudia Tebaldi (NCAR) and Brian O’Neill (NCAR) served as co-chairs for the workshop. The Organizing committee also included Dave Lawrence (NCAR), Jean-Francois Lamarque (NCAR), George Hurtt (University of Maryland), & Detlef van Vuuren (PBL Netherlands Environmental Change). The meeting included the participation of 22 scientists representing many of the major climate modeling centers for a total of 110 participant days.

  11. Experimental evaluation of an advanced Space Shuttle Main Engine hot-gas manifold design concept

    NASA Technical Reports Server (NTRS)

    Pelaccio, D. G.; Lepore, F. F.; Oconnor, G. M.; Rao, G. V. R.; Ratekin, G. H.; Vogt, S. T.

    1984-01-01

    The Space Shuttle Main Engine's hot gas manifold (HGM) has been the subject of an experimental study aimed at the establishment of an aerodynamic data base to support the development of an advanced, three-dimensional, fluid dynamic analysis computer model. The advanced HGM design used in the study demonstrated improved flow uniformity in the fuel-side turbine exit and transfer duct exit regions. Major modifications were incorporated in the HGM flow test article model, using two large transfer ducts on the fuel turbine side in place of the three small transfer ducts of the present design. The HGM flow field data were found to be essentially independent of Reynolds number over the range examined.

  12. Inlet Flow Test Calibration for a Small Axial Compressor Facility. Part 1: Design and Experimental Results

    NASA Technical Reports Server (NTRS)

    Miller, D. P.; Prahst, P. S.

    1994-01-01

    An axial compressor test rig has been designed for the operation of small turbomachines. The inlet region consisted of a long flowpath region with two series of support struts and a flapped inlet guide vane. A flow test was run to calibrate and determine the source and magnitudes of the loss mechanisms in the inlet for a highly loaded two-stage axial compressor test. Several flow conditions and IGV angle settings were established in which detailed surveys were completed. Boundary layer bleed was also provided along the casing of the inlet behind the support struts and ahead of the IGV. A detailed discussion of the flowpath design along with a summary of the experimental results are provided in Part 1.

  13. Design and Experimental Performance of a Two Stage Partial Admission Turbine, Task B.1/B.4

    NASA Technical Reports Server (NTRS)

    Sutton, R. F.; Boynton, J. L.; Akian, R. A.; Shea, Dan; Roschak, Edmund; Rojas, Lou; Orr, Linsey; Davis, Linda; King, Brad; Bubel, Bill

    1992-01-01

    A three-inch mean diameter, two-stage turbine with partial admission in each stage was experimentally investigated over a range of admissions and angular orientations of admission arcs. Three configurations were tested in which first stage admission varied from 37.4 percent (10 of 29 passages open, 5 per side) to 6.9 percent (2 open, 1 per side). Corresponding second stage admissions were 45.2 percent (14 of 31 passages open, 7 per side) and 12.9 percent (4 open, 2 per side). Angular positions of the second stage admission arcs with respect to the first stage varied over a range of 70 degrees. Design and off-design efficiency and flow characteristics for the three configurations are presented. The results indicated that peak efficiency and the corresponding isentropic velocity ratio decreased as the arcs of admission were decreased. Both efficiency and flow characteristics were sensitive to the second stage nozzle orientation angles.

  14. Design of charge exchange recombination spectroscopy for the joint Texas experimental tokamak

    SciTech Connect

    Chi, Y.; Zhuang, G. Cheng, Z. F.; Hou, S. Y.; Cheng, C.; Li, Z.; Wang, J. R.; Wang, Z. J.

    2014-11-15

    The old diagnostic neutral beam injector first operated at the University of Texas at Austin is ready to rejoin the joint Texas experimental tokamak (J-TEXT). A new set of high-voltage power supplies has been installed, and henceforth there is no limitation on beam modulation or beam pulse duration. Based on the spectra of fully stripped impurity ions induced by the diagnostic beam, the design work for the toroidal charge exchange recombination spectroscopy (CXRS) system is presented. The 529 nm carbon VI (n = 8 − 7 transition) line appears to be the best choice for ion temperature and plasma rotation measurements, and the hardware under consideration is listed. The design work of the toroidal CXRS system is guided by essential simulation of expected spectral results under J-TEXT tokamak operating conditions.

  15. Good experimental design and statistics can save animals, but how can it be promoted?

    PubMed

    Festing, Michael F W

    2004-06-01

    Surveys of published papers show that there are many errors both in the design of the experiments and in the statistical analysis of the resulting data. This must result in a waste of animals and scientific resources, and it is surely unethical. Scientific quality might be improved, to some extent, by journal editors, but they are constrained by lack of statistical referees and inadequate statistical training of those referees that they do use. Other parties, such as welfare regulators, ethical review committees and individual scientists also have an interest in scientific quality, but they do not seem to be well placed to make the required changes. However, those who fund research would have the power to do something if they could be convinced that it is in their best interests to do so. More examples of the way in which better experimental design has led to improved experiments would be helpful in persuading these funding organisations to take further action. PMID:23577446

  16. Mechanical design of experimental apparatus for FIREX cryo-target cooling

    NASA Astrophysics Data System (ADS)

    Iwamoto, A.; Norimatsu, T.; Nakai, M.; Sakagami, H.; Fujioka, S.; Shiraga, H.; Azechi, H.

    2016-05-01

    The mechanical design of an experimental apparatus for FIREX cryo-target cooling is described. A gaseous helium (GHe) sealing system in a cryogenic environment is an important issue for laser fusion experiments. A dedicated loading system was designed for a metal gasket; U-TIGHTSEAL® (Usui Kokusai Sangyo Kaisha, Ltd.) with an indium-plated copper jacket was taken as an example. According to its specification, a linear load of 110 N/m along its circumference is the optimum compression; however, a lower load would still keep the helium (He) leak below the required level. Its sealing performance was investigated systematically. Our system demanded a load of 27 N/mm to maintain He leak tightness in a cryogenic environment. Once leak tightness was obtained, the load could be reduced to 9.5 N/mm.

  17. Development and design of a multi-column experimental setup for Kr/Xe separation

    SciTech Connect

    Garn, Troy G.; Greenhalgh, Mitchell; Watson, Tony

    2014-12-01

    As a precursor to FY-15 Kr/Xe separation testing, design modifications to an existing experimental setup are warranted. The modifications would allow for multi-column testing to facilitate a Xe separation followed by a Kr separation, using engineered-form sorbents prepared with an INL patented process. A new cooling apparatus capable of achieving test temperatures down to -40 °C and able to house a newly designed Xe column was acquired. Modifications to the existing setup are being installed to allow for multi-column testing and gas constituent analyses using evacuated sample bombs. The new modifications will allow independent temperature control for each column, enabling a wide range of test conditions to be implemented. Sample analyses will be used to evaluate the Xe/Kr selectivity of the AgZ-PAN sorbent and determine the Kr purity of the effluent stream following Kr capture with the HZ-PAN sorbent.

  18. Design of an experimental electric arc furnace. Report of investigations/1992

    SciTech Connect

    Hartman, A.D.; Ochs, T.L.

    1992-01-01

    Instabilities in electric steelmaking furnace arcs cause electrical and acoustical noise, reduce operating efficiency, increase refractory erosion, and increase electrode usage. The U.S. Bureau of Mines has an ongoing research project investigating methods to stabilize these arcs to improve productivity in steel production. To perform experiments to test new hypotheses, researchers designed and instrumented an advanced, experimental single-phase furnace. The paper describes the furnace, which was equipped with high-speed data acquisition capabilities for electrical, temperature, pressure and flow rate measurements; automated atmosphere control; ballistic calorimetry; and viewports for high-speed cinematography. Precise environmental control and accurate data acquisition allow the statistical design of experiments and assignment of rigorous confidence limits when testing potential furnace or procedural modifications.

  19. Experimental measurement of human head motion for high-resolution computed tomography system design

    NASA Astrophysics Data System (ADS)

    Li, Liang; Chen, Zhiqiang; Jin, Xin; Yu, Hengyong; Wang, Ge

    2010-06-01

    Human head motion has been experimentally measured for high-resolution computed tomography (CT) design using a Canon digital camera. Our goal was to identify the minimal movements of the human head under ideal conditions without rigid fixation. In our experiments, all 19 healthy volunteers lay down with strict self-control and were asked to remain calm and relaxed. Our results showed that the mean absolute value of the measured translational excursion was about 0.35 mm, which is much less than measurements on real patients. Furthermore, the head motions in different directions were correlated. These results are useful for the design of a new instant CT system for in vivo high-resolution imaging (about 40 μm).

  20. Design of charge exchange recombination spectroscopy for the joint Texas experimental tokamak.

    PubMed

    Chi, Y; Zhuang, G; Cheng, Z F; Hou, S Y; Cheng, C; Li, Z; Wang, J R; Wang, Z J

    2014-11-01

    The old diagnostic neutral beam injector first operated at the University of Texas at Austin is ready to rejoin the joint Texas experimental tokamak (J-TEXT). A new set of high-voltage power supplies has been installed, and henceforth there is no limitation on beam modulation or beam pulse duration. Based on the spectra of fully stripped impurity ions induced by the diagnostic beam, the design work for the toroidal charge exchange recombination spectroscopy (CXRS) system is presented. The 529 nm carbon VI (n = 8 - 7 transition) line appears to be the best choice for ion temperature and plasma rotation measurements, and the hardware under consideration is listed. The design work of the toroidal CXRS system is guided by essential simulation of expected spectral results under J-TEXT tokamak operating conditions.

  1. Flowing lead spallation target design for use in an ADTT experimental facility located at LAMPF

    SciTech Connect

    Beard, C.A.; Bracht, R.R.; Buksa, J.J.

    1994-08-01

    A conceptual design has been initiated for a flowing lead spallation target for use in an ADTT experimental facility located at LAMPF. The lead is contained using Nb-1Zr as the structural material. This material was selected based on its favorable material properties as well as its compatibility with the flowing lead. Heat deposited in the lead and the Nb-1Zr container by the 800-MeV, 1-mA beam is removed by the flowing lead and transferred to helium via a conventional heat exchanger. The neutronic, thermal hydraulic, and stress characteristics of the system have been determined. In addition, a module to control the thaw and freeze of the lead has been developed and incorporated into the target system design. The entire primary target system (spallation target, thaw/freeze system, and intermediate heat exchanger) has been designed to be built as a contained module to allow easy insertion into an experimental ADTT blanket assembly and to provide multiple levels of containment for the lead. For the 800-MeV LAMPF beam, the target delivers a source of approximately 18 neutrons/proton. A total of 540 kW is deposited in the target. The lead temperature ranges from 400 to 500 °C. The peak structural heating occurs at the beam interface, and the target is designed to maximize cooling at this point. An innovative thin-window structure has been incorporated that allows direct, convective cooling of the window by the inlet lead flow. Safe and reliable operation of the target has been maximized through simple, robust engineering.

  2. Design and Development of a Composite Dome for Experimental Characterization of Material Permeability

    NASA Technical Reports Server (NTRS)

    Estrada, Hector; Smeltzer, Stanley S., III

    1999-01-01

    This paper presents the design and development of a carbon fiber reinforced plastic dome, including a description of the dome fabrication, method for sealing penetrations in the dome, and a summary of the planned test series. This dome will be used for the experimental permeability characterization and leakage validation of composite vessels pressurized using liquid hydrogen and liquid nitrogen at the Cryostat Test Facility at the NASA Marshall Space Flight Center (MSFC). The preliminary design of the dome was completed using membrane shell analysis. Due to the configuration of the test setup, the dome will experience some flexural stresses and stress concentrations in addition to membrane stresses. Also, a potential buckling condition exists for the dome due to external pressure during the leak testing of the cryostat facility lines. Thus, a finite element analysis was conducted to assess the overall strength and stability of the dome for each required test condition. Based on these results, additional plies of composite reinforcement material were applied to local regions on the dome to alleviate stress concentrations and limit deflections. The dome design includes a circular opening in the center for the installation of a polar boss, which introduces a geometric discontinuity that causes high stresses in the region near the hole. To attenuate these high stresses, a reinforcement system was designed using analytical and finite element analyses. The development of a low leakage polar boss system is also investigated.

  3. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    PubMed

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  4. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    PubMed

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them. PMID:25311906

  5. Using experimental design to evaluate processing parameters in soft ferrite manufacture

    SciTech Connect

    Taylor, J.A.T.; Reczek, S.T.; Rosen, A.

    1995-09-01

    The manufacture of soft ferrites is a complex process that involves multitudinous, interrelated steps. Seldom can one variable, such as the effect of the particle size of a raw material, be evaluated without considering other parameters. The particle size of hematite, for example, may affect reaction rate during calcination and consequently the spinel formation. Spinel content affects hardness and is reflected in grinding time. Grinding time affects the amount of iron introduced from the milling media, and so on. To optimize a process, the primary effect of a parameter and the interactions of that effect with other variables should be evaluated. This can be accomplished by using an experimental design that maximizes the information that can be obtained from each set of samples. The purpose of this study was to investigate the impact of several processing parameters on the magnetic properties of MnZn spinels. An experimental design using an orthogonal array of three variables at two levels was used to examine sintering time, atmosphere containment and part size effects. Four and eight hours at the maximum firing temperature were used for this investigation. Atmosphere containment was achieved by covering saggers with shielding of the same composition as the cores. Size effects, which could result from surface to volume ratio differences, were addressed by comparing the data for large and small cores. The surface to volume ratio of small cores was 1.4 times that of large cores.

  6. Sonophotolytic degradation of synthetic pharmaceutical wastewater: statistical experimental design and modeling.

    PubMed

    Ghafoori, Samira; Mowla, Amir; Jahani, Ramtin; Mehrvar, Mehrab; Chan, Philip K

    2015-03-01

    The merits of sonophotolysis, a combination of sonolysis (US) and photolysis (UV/H2O2), are investigated in a pilot-scale external-loop airlift sonophotoreactor for the treatment of a synthetic pharmaceutical wastewater (SPWW). In the first part of this study, a multivariate experimental design is carried out using a Box-Behnken design (BBD). The effluent is characterized by the total organic carbon (TOC) percent removal as a surrogate parameter. The results indicate that the TOC percent removal response is significantly affected by the synergistic effects of the linear terms of H2O2 dosage and ultrasound power, with an antagonistic effect of the quadratic term of H2O2 dosage. The statistical analysis of the results indicates a satisfactory prediction of the system behavior by the developed model. In the second part of this study, a novel rigorous mathematical model for the sonophotolytic process is developed to predict the TOC percent removal as a function of time. The mathematical model is based on widely accepted sonophotochemical reactions and rate constants in advanced oxidation processes. A good agreement between the model predictions and experimental data indicates that the proposed model can successfully describe the sonophotolysis of the pharmaceutical wastewater.
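
    For readers unfamiliar with the Box-Behnken layout mentioned above, the short sketch below generates the 15-run coded design for three factors (12 edge midpoints plus 3 center replicates). The coded points are printed without physical units, since the abstract does not give the full factor ranges; the number of center replicates is an assumption.

        # Minimal sketch of a three-factor Box-Behnken design in coded units.
        from itertools import combinations

        k = 3                                       # number of factors
        runs = []
        for i, j in combinations(range(k), 2):      # each pair of factors at +/-1, third held at 0
            for a in (-1, 1):
                for b in (-1, 1):
                    pt = [0] * k
                    pt[i], pt[j] = a, b
                    runs.append(tuple(pt))
        runs += [(0, 0, 0)] * 3                     # center-point replicates (assumed 3)

        for n, r in enumerate(runs, 1):             # 12 edge midpoints + 3 centers = 15 runs
            print(f"run {n:2d}: {r}")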

  7. Adaptive Signal Recovery on Graphs via Harmonic Analysis for Experimental Design in Neuroimaging

    PubMed Central

    Kim, Won Hwa; Hwang, Seong Jae; Adluru, Nagesh; Johnson, Sterling C.; Singh, Vikas

    2016-01-01

    Consider an experimental design of a neuroimaging study, where we need to obtain p measurements for each participant in a setting where p′ (< p) are cheaper and easier to acquire while the remaining (p – p′) are expensive. For example, the p′ measurements may include demographics, cognitive scores or routinely offered imaging scans while the (p – p′) measurements may correspond to more expensive types of brain image scans with a higher participant burden. In this scenario, it seems reasonable to seek an “adaptive” design for data acquisition so as to minimize the cost of the study without compromising statistical power. We show how this problem can be solved via harmonic analysis of a band-limited graph whose vertices correspond to participants and our goal is to fully recover a multi-variate signal on the nodes, given the full set of cheaper features and a partial set of more expensive measurements. This is accomplished using an adaptive query strategy derived from probing the properties of the graph in the frequency space. To demonstrate the benefits that this framework can provide, we present experimental evaluations on two independent neuroimaging studies and show that our proposed method can reliably recover the true signal with only partial observations directly yielding substantial financial savings. PMID:27807594
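
    A minimal numerical sketch of the band-limited graph-signal recovery idea summarized above: the signal is assumed to lie in the span of the first k Laplacian eigenvectors and is recovered from a partial set of sampled nodes by least squares. The graph, bandwidth, and sampling set are synthetic assumptions, not the neuroimaging data or the authors' adaptive query strategy.

        # Illustrative sketch: recover a band-limited graph signal from partial observations.
        import numpy as np

        rng = np.random.default_rng(0)
        n, k = 30, 5                                   # participants (nodes) and bandwidth

        # Random symmetric adjacency -> combinatorial Laplacian L = D - W.
        W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
        L = np.diag(W.sum(axis=1)) - W

        # Graph Fourier basis: eigenvectors of L; a band-limited signal lives in
        # the span of the first k (low-frequency) eigenvectors.
        _, U = np.linalg.eigh(L)
        true_coeffs = rng.normal(size=k)
        f = U[:, :k] @ true_coeffs                     # full (unknown) signal

        # Observe the "expensive" measurement on only a subset of nodes.
        sampled = rng.choice(n, size=10, replace=False)
        coeffs, *_ = np.linalg.lstsq(U[sampled, :k], f[sampled], rcond=None)
        f_hat = U[:, :k] @ coeffs                      # recovered signal everywhere

        print("max recovery error:", np.abs(f_hat - f).max())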

  8. Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.

    2011-01-01

    The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus of the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental for sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still ongoing.

  9. Experimental Design Optimization of a Sequential Injection Method for Promazine Assay in Bulk and Pharmaceutical Formulations

    PubMed Central

    Idris, Abubakr M.; Assubaie, Fahad N.; Sultan, Salah M.

    2007-01-01

    An experimental design optimization approach was utilized to develop a sequential injection analysis (SIA) method for promazine assay in bulk and pharmaceutical formulations. The method was based on the oxidation of promazine by Ce(IV) in sulfuric acid media, resulting in a spectrophotometrically detectable species at 512 nm. A 3³ full factorial design and response surface methods were applied to optimize the experimental conditions potentially controlling the analysis. The optimum conditions obtained were 1.0 × 10⁻⁴ M sulfuric acid, 0.01 M Ce(IV), and a 10 μL/s flow rate. Good analytical parameters were obtained, including a range of linearity of 1–150 μg/mL, linearity with correlation coefficient 0.9997, accuracy with mean recovery 98.2%, repeatability with RSD 1.4% (n = 7 consecutive injections), intermediate precision with RSD 2.1% (n = 5 runs over a week), a limit of detection of 0.34 μg/mL, a limit of quantification of 0.93 μg/mL, and a sampling frequency of 23 samples/h. The obtained results were verified against the British Pharmacopoeia method and comparable results were obtained. The proposed SIA method enjoys the advantages of the technique with respect to rapidity, reagent/sample saving, and safety in solution handling and to the environment. PMID:18350124
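
    To make the 3³ layout concrete, the sketch below enumerates the 27 treatment combinations of a three-level, three-factor full factorial design. The specific level values shown are illustrative placeholders around the reported optimum, not the levels used in the study.

        # Minimal sketch of a 3^3 full factorial design; levels are placeholders.
        from itertools import product

        factors = {
            "H2SO4 (M)":   [5e-5, 1e-4, 5e-4],
            "Ce(IV) (M)":  [0.005, 0.01, 0.05],
            "flow (uL/s)": [5, 10, 20],
        }

        runs = list(product(*factors.values()))   # 3 x 3 x 3 = 27 treatment combinations
        for i, run in enumerate(runs, 1):
            print(f"run {i:2d}:", dict(zip(factors, run)))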

  10. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le, Hung

    2012-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  11. Analysis of an apoptotic core model focused on experimental design using artificial data.

    PubMed

    Schlatter, R; Conzelmann, H; Gilles, E D; Sawodny, O; Sauter, T

    2009-07-01

    The activation of caspases is a central mechanism in apoptosis. To gain further insights into complex processes like this, mathematical modelling using ordinary differential equations (ODEs) can be a very powerful research tool. Unfortunately, the lack of measurement data is a common problem in building such kinetic models, because it practically constrains the identifiability of the model parameters. An existing mathematical model of caspase activation during apoptosis was used in order to design future experimental setups that will help to maximise the obtained information. For this purpose, artificial measurement data are generated in silico to simulate potential experiments, and the model is fitted to this data. The model is also analysed using observability gramian and sensitivity analyses. The used analysis methods are compared. The artificial data approach allows one to make conclusions about system properties, identifiability of parameters and the potential information content of additional measurements for the used caspase activation model. The latter facilitates to improve the experimental design of further measurements significantly. The performed analyses reveal that several kinetic parameters are not at all, or only scarcely, identifiable, and that measurements of activated caspase 8 will maximally improve the parameter estimates. Furthermore, we can show that many assays with inhibitor of apoptosis protein (IAP) knockout cells only provide redundant information for our needs and as such do not have to be carried out. PMID:19640164
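
    An illustrative sketch of the "artificial data" idea described above: simulate a deliberately simplified activation ODE with known parameters, add measurement noise, and check how well a rate constant can be re-estimated. The model here is a toy stand-in, not the published apoptosis model, and all values are assumptions.

        # Toy in-silico experiment: generate artificial data from an ODE, then refit a parameter.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize_scalar

        def model(t, y, k):
            """Toy activation cascade: inactive -> active at rate k * inactive."""
            inactive, active = y
            return [-k * inactive, k * inactive]

        t_obs = np.linspace(0, 10, 8)
        true_k = 0.4
        sol = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=(true_k,))
        data = sol.y[1] + np.random.default_rng(0).normal(0, 0.02, size=t_obs.size)

        def sse(k):
            fit = solve_ivp(model, (0, 10), [1.0, 0.0], t_eval=t_obs, args=(k,))
            return np.sum((fit.y[1] - data) ** 2)

        est = minimize_scalar(sse, bounds=(0.01, 2.0), method="bounded")
        print("true k:", true_k, "estimated k:", round(est.x, 3))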

  12. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas; Sheth, Rubik; Le, Hung

    2013-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the modeling and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  13. Optimization of process parameters for drilled hole quality characteristics during cortical bone drilling using Taguchi method.

    PubMed

    Singh, Gurmeet; Jain, Vivek; Gupta, Dheeraj; Ghai, Aman

    2016-09-01

    Orthopaedic surgery involves drilling of bones to get them fixed in their original position. The drilling process used in orthopaedic surgery closely resembles mechanical drilling, and there is every likelihood that it may harm the already damaged bone, the surrounding bone tissue and nerves; the peril is not limited to that. There is a real concern that recovery of the drilled region may be impeded so that the repair cannot be sustained lifelong. To achieve sustainable orthopaedic surgery, a surgeon must try to control the drilling damage at the time of bone drilling. The area around the holes determines the life of the bone joint, so the region contiguous to the drilled hole must remain intact and retain its properties even after drilling. This study focuses on Taguchi optimization of drilling parameters (rotational speed, feed rate and tool type, each at three levels) for surface roughness and material removal rate. Confirmation experiments were also carried out, and the results fell within the confidence interval. Scanning electron microscopy (SEM) images assisted in obtaining micro-level information on bone damage.
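
    The sketch below shows the two standard Taguchi signal-to-noise ratios that such an optimization typically uses: smaller-is-better for surface roughness and larger-is-better for material removal rate. The replicate values are made-up placeholders, and the pairing of each criterion with each response is an assumption consistent with common practice rather than a detail taken from the abstract.

        # Illustrative Taguchi signal-to-noise (S/N) ratios; replicate values are invented.
        import numpy as np

        def sn_smaller_is_better(y):
            """S/N = -10*log10(mean(y^2)); typically used for surface roughness."""
            y = np.asarray(y, dtype=float)
            return -10 * np.log10(np.mean(y**2))

        def sn_larger_is_better(y):
            """S/N = -10*log10(mean(1/y^2)); typically used for material removal rate."""
            y = np.asarray(y, dtype=float)
            return -10 * np.log10(np.mean(1.0 / y**2))

        roughness_replicates = [1.8, 2.1, 1.9]      # hypothetical Ra values (um)
        mrr_replicates = [12.5, 13.1, 12.8]         # hypothetical MRR values (mm^3/min)
        print("S/N roughness:", sn_smaller_is_better(roughness_replicates))
        print("S/N MRR:", sn_larger_is_better(mrr_replicates))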

  14. Using Taguchi method to optimize differential evolution algorithm parameters to minimize workload smoothness index in SALBP

    NASA Astrophysics Data System (ADS)

    Mozdgir, A.; Mahdavi, Iraj; Seyyedi, I.; Shiraqei, M. E.

    2011-06-01

    An assembly line is a flow-oriented production system in which the productive units performing the operations, referred to as stations, are aligned in a serial manner. The assembly line balancing problem arises and has to be solved when an assembly line has to be configured or redesigned. The so-called simple assembly line balancing problem (SALBP), a basic version of the general problem, has attracted the attention of researchers and practitioners of operations research for almost half a century. Four types of objective functions are considered for this kind of problem, and the versions of SALBP may be complemented by a secondary objective that consists of smoothing station loads. Because of the problem's computational complexity and the difficulty of identifying an optimal solution, many heuristics have been proposed for assembly line balancing. In this paper, a differential evolution algorithm is developed to minimize the workload smoothness index in SALBP-2, and the algorithm parameters are optimized using the Taguchi method.
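
    As a hedged sketch of the objective being minimized, the snippet below computes one common form of the workload smoothness index, SI = sqrt(sum_i (ST_max − ST_i)²) over the station times ST_i. The exact variant used in the paper (for example, whether the sum is divided by the number of stations) is not stated in the abstract, so this form is an assumption.

        # One common definition of the workload smoothness index (SI).
        import math

        def smoothness_index(station_times):
            st_max = max(station_times)
            return math.sqrt(sum((st_max - st) ** 2 for st in station_times))

        # Hypothetical station workloads (time units) for a balanced vs. unbalanced line.
        print(smoothness_index([10, 10, 10, 10]))   # 0.0  -> perfectly smooth
        print(smoothness_index([12, 8, 11, 9]))     # larger value -> less balanced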

  15. Taguchi approach for co-gasification optimization of torrefied biomass and coal.

    PubMed

    Chen, Wei-Hsin; Chen, Chih-Jung; Hung, Chen-I

    2013-09-01

    This study employs the Taguchi method to approach the optimum co-gasification operation of torrefied biomass (eucalyptus) and coal in an entrained flow gasifier. The cold gas efficiency is adopted as the performance index of co-gasification. The influences of six parameters, namely, the biomass blending ratio, oxygen-to-fuel mass ratio (O/F ratio), biomass torrefaction temperature, gasification pressure, steam-to-fuel mass ratio (S/F ratio), and inlet temperature of the carrier gas, on the performance of co-gasification are considered. The analysis of the signal-to-noise ratio suggests that the O/F ratio is the most important factor in determining the performance and the appropriate O/F ratio is 0.7. The performance is also significantly affected by biomass along with torrefaction, where a torrefaction temperature of 300°C is sufficient to upgrade eucalyptus. According to the recommended operating conditions, the values of cold gas efficiency and carbon conversion at the optimum co-gasification are 80.99% and 94.51%, respectively.
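
    A small sketch of the signal-to-noise main-effects analysis mentioned above follows: the larger-is-better S/N ratio is averaged over all runs at each level of a single factor (here, the O/F ratio), and the spread of those level means indicates the factor's importance. The run data are invented placeholders, not the study's gasification results.

        # Illustrative Taguchi main-effects analysis for one factor.
        import numpy as np

        # columns: level index (0-2) of the O/F-ratio factor, cold gas efficiency (%)
        runs = np.array([[0, 62.0], [0, 64.5], [0, 63.1],
                         [1, 74.2], [1, 76.8], [1, 75.0],
                         [2, 79.5], [2, 80.9], [2, 78.7]])

        sn = -10 * np.log10(1.0 / runs[:, 1] ** 2)          # larger-is-better S/N
        level_means = [sn[runs[:, 0] == lvl].mean() for lvl in range(3)]
        print("mean S/N per O/F level:", level_means)
        print("factor effect (range of level means):", max(level_means) - min(level_means))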

  16. Optimization of process parameters for drilled hole quality characteristics during cortical bone drilling using Taguchi method.

    PubMed

    Singh, Gurmeet; Jain, Vivek; Gupta, Dheeraj; Ghai, Aman

    2016-09-01

    Orthopaedic surgery involves drilling of bones to get them fixed in their original position. The drilling process used in orthopaedic surgery closely resembles mechanical drilling, and there is every likelihood that it may harm the already damaged bone, the surrounding bone tissue and nerves; the peril is not limited to that. There is a real concern that recovery of the drilled region may be impeded so that the repair cannot be sustained lifelong. To achieve sustainable orthopaedic surgery, a surgeon must try to control the drilling damage at the time of bone drilling. The area around the holes determines the life of the bone joint, so the region contiguous to the drilled hole must remain intact and retain its properties even after drilling. This study focuses on Taguchi optimization of drilling parameters (rotational speed, feed rate and tool type, each at three levels) for surface roughness and material removal rate. Confirmation experiments were also carried out, and the results fell within the confidence interval. Scanning electron microscopy (SEM) images assisted in obtaining micro-level information on bone damage. PMID:27254280

  17. Experimental design, modeling and optimization of polyplex formation between DNA oligonucleotides and branched polyethylenimine.

    PubMed

    Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana

    2015-09-28

    The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from the gel electrophoresis assays. The central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as concentrations of dsDNA and B-PEI as well as the initial pH of solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic-scale, a molecular dynamic simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between macromolecules, stabilizing the polyplex structure.
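
    To illustrate the central composite layout mentioned above, the sketch below enumerates a face-centered CCD (alpha = 1) in coded units for three factors. The choice of the face-centered variant and the number of center replicates are assumptions, since the abstract does not specify them.

        # Face-centered central composite design (CCD) in coded units for three factors.
        from itertools import product

        k = 3
        factorial_pts = list(product([-1, 1], repeat=k))                 # 2^3 corner points
        axial_pts = [tuple(a if j == i else 0 for j in range(k))
                     for i in range(k) for a in (-1, 1)]                 # 2k star points
        center_pts = [(0, 0, 0)] * 3                                     # replicated center (assumed 3)

        design = factorial_pts + axial_pts + center_pts
        for run, point in enumerate(design, 1):
            print(f"run {run:2d}: dsDNA={point[0]:+d}, B-PEI={point[1]:+d}, pH={point[2]:+d}")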

  18. Quantifying the effect of experimental design choices for in vitro scratch assays.

    PubMed

    Johnston, Stuart T; Ross, Joshua V; Binder, Benjamin J; Sean McElwain, D L; Haridas, Parvathi; Simpson, Matthew J

    2016-07-01

    Scratch assays are often used to investigate potential drug treatments for chronic wounds and cancer. Interpreting these experiments with a mathematical model allows us to estimate the cell diffusivity, D, and the cell proliferation rate, λ. However, the influence of the experimental design on the estimates of D and λ is unclear. Here we apply an approximate Bayesian computation (ABC) parameter inference method, which produces a posterior distribution of D and λ, to new sets of synthetic data, generated from an idealised mathematical model, and experimental data for a non-adhesive mesenchymal population of fibroblast cells. The posterior distribution allows us to quantify the amount of information obtained about D and λ. We investigate two types of scratch assay, as well as varying the number and timing of the experimental observations captured. Our results show that a scrape assay, involving one cell front, provides more precise estimates of D and λ, and is more computationally efficient to interpret than a wound assay, with two opposingly directed cell fronts. We find that recording two observations, after making the initial observation, is sufficient to estimate D and λ, and that the final observation time should correspond to the time taken for the cell front to move across the field of view. These results provide guidance for estimating D and λ, while simultaneously minimising the time and cost associated with performing and interpreting the experiment.
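
    A compact sketch of the approximate Bayesian computation (ABC) rejection idea referenced above: parameters (D, λ) are drawn from uniform priors, a simulator produces summary statistics, and draws whose statistics fall within a tolerance of the observed ones are retained as posterior samples. The simulator here is a deliberately crude stand-in (front position proportional to sqrt(D*t), saturating cell density), not the authors' scratch-assay model, and all numbers are placeholders.

        # Toy ABC rejection sampler for (D, lambda) with a stand-in simulator.
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.array([12.0, 24.0])                    # observation times (h)

        def simulator(D, lam):
            """Crude stand-in: front position ~ sqrt(D*t), cell density saturates at rate lam."""
            front = np.sqrt(D * t)                            # um
            density = 10.0 * (1.0 - np.exp(-lam * t))         # arbitrary relative units
            return np.concatenate([front, density])

        observed = simulator(D=500.0, lam=0.05)       # pretend these were measured

        accepted = []
        for _ in range(50000):
            D = rng.uniform(100, 2000)                # prior on cell diffusivity (um^2/h)
            lam = rng.uniform(0.01, 0.10)             # prior on proliferation rate (1/h)
            if np.linalg.norm(simulator(D, lam) - observed) < 2.0:   # acceptance tolerance
                accepted.append((D, lam))

        post = np.array(accepted)
        print("accepted draws:", len(post))
        print("posterior means (D, lambda):", post.mean(axis=0))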

  19. Experimental Guidelines for Studies Designed to Investigate the Impact of Antioxidant Supplementation on Exercise Performance

    PubMed Central

    Powers, Scott K.; Smuder, Ashley J.; Kavazis, Andreas N.; Hudson, Matthew B.

    2010-01-01

    Research interest in the effects of antioxidants on exercise-induced oxidative stress and human performance continues to grow as new scientists enter this field. Consequently, there is a need to establish an acceptable set of criteria for monitoring antioxidant capacity and oxidative damage in tissues. Numerous reports have described a wide range of assays to detect both antioxidant capacity and oxidative damage to biomolecules, but many techniques are not appropriate in all experimental conditions. Here, the authors present guidelines for selecting and interpreting methods that can be used by scientists to investigate the impact of antioxidants on both exercise performance and the redox status of tissues. Moreover, these guidelines will be useful for reviewers who are assigned the task of evaluating studies on this topic. The set of guidelines contained in this report is not designed to be a strict set of rules, because often the appropriate procedures depend on the question being addressed and the experimental model. Furthermore, because no individual assay is guaranteed to be the most appropriate in every experimental situation, the authors strongly recommend using multiple assays to verify a change in biomarkers of oxidative stress or redox balance. PMID:20190346

  20. Network Pharmacology Strategies Toward Multi-Target Anticancer Therapies: From Computational Models to Experimental Design Principles

    PubMed Central

    Tang, Jing; Aittokallio, Tero

    2014-01-01

    Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.

  1. A validated spectrofluorimetric method for the determination of nifuroxazide through coumarin formation using experimental design

    PubMed Central

    2013-01-01

    Background: Nifuroxazide (NF) is an oral nitrofuran antibiotic with a wide range of bactericidal activity against gram-positive and gram-negative enteropathogenic organisms. It is formulated either in single form, as an intestinal antiseptic, or in combination with drotaverine (DV) for the treatment of gastroenteritis accompanied by gastrointestinal spasm. Spectrofluorimetry is a convenient and sensitive technique for pharmaceutical quality control. The proposed spectrofluorimetric method allows NF determination either in single form or in a binary mixture with DV. Furthermore, the experimental conditions were optimized using experimental design, which has many advantages over the older one-variable-at-a-time (OVAT) approach. Results: A novel and sensitive spectrofluorimetric method was designed and validated for the determination of NF in pharmaceutical formulation. The method was based upon the formation of a highly fluorescent coumarin compound by the reaction between NF and ethyl acetoacetate (EAA), using sulfuric acid as catalyst. The fluorescence was measured at 390 nm upon excitation at 340 nm. Experimental design was used to optimize the experimental conditions: the volumes of EAA and sulfuric acid, the temperature and the heating time were considered the critical factors to be studied in order to establish optimum fluorescence, and the factors were studied in pairs, each at three levels. Regression analysis revealed good correlation between fluorescence intensity and concentration over the range 20–400 ng mL⁻¹. The suggested method was successfully applied for the determination of NF in pure and capsule forms. The procedure was validated in terms of linearity, accuracy, precision, limit of detection and limit of quantification. The selectivity of the method was investigated by analysis of NF in the presence of the co-mixed drug DV, where no interference was observed. The reaction pathway was suggested and the structure of the fluorescent product was proposed.

  2. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  3. An innovative magnetorheological damper for automotive suspension: from design to experimental characterization

    NASA Astrophysics Data System (ADS)

    Sassi, Sadok; Cherif, Khaled; Mezghani, Lotfi; Thomas, Marc; Kotrane, Asma

    2005-08-01

    The development of a powerful new magnetorheological fluid (MRF), together with recent progress in understanding the behavior of such fluids, has convinced researchers and engineers that MRF dampers are among the most promising devices for semi-active automotive suspension vibration control, because of their large force capacity and their inherent ability to provide a simple, fast and robust interface between electronic controls and mechanical components. In this paper, theoretical and experimental studies are performed for the design, development and testing of a completely new MRF damper model that can be used for the semi-active control of automotive suspensions. The MR damper technology presented in this paper is based on a completely new approach where, in contrast to conventional solutions in which the coil axis is usually superposed on the damper axis and the inner cylindrical housing is part of the magnetic circuit, the coils are wound in a direction perpendicular to the damper axis. The paper investigates approaches to optimizing the dynamic response and provides experimental verification. Both experimental and theoretical results have shown that, if this particular model is filled with an 'MRF 336AG' MR fluid, it can provide large controllable damping forces that require only a small amount of energy. For a magnetizing system with four coils, the damping coefficient could be increased by up to three times for an excitation current of only 2 A. Such a current could be reduced to less than 1 A if the magnetizing system used eight small cores. In this case, the magnetic field would be more powerful and more regularly distributed. In the presence of harmonic excitation, such a design allows the optimum compromise between comfort and stability to be reached over different intervals of excitation frequencies.

  4. Experimental investigation of undesired stable equilibria in pumpkin shape super-pressure balloon designs

    NASA Astrophysics Data System (ADS)

    Schur, W.

    a visco-elastic film. The balloons of the third and fourth full-scale test flights experienced structural problems during a campaign in Australia in 2001. Post-flight investigations identified two problems. The first problem was apparently caused by a lack of dynamic strength of the film material in its transverse direction, a property that had theretofore not been tested in balloon films. The second problem was identified through photographic evidence on the second of the two balloons. Images of the launch spool configuration and of the balloon at float altitude indicated that excess gore width might prevent full deployment to the design shape. This is a dangerous situation, as the proper functioning of the design requires full deployment. A search of the literature confirmed one other case of flawed but stable deployment of a pumpkin shape balloon that has been investigated by researchers. This balloon is the "Endeavor", an adventurer balloon that was intended for manned circumnavigation. The experimental work documented in this paper sought to identify which design aspects of pumpkin shape balloons promote faulty deployment into undesired stable equilibria and which design aspects assure full deployment of pumpkin-type balloons. It is argued that the features of a constant bulge shape design (the apparent design of the "Endeavor") make it unnecessarily prone to flawed deployment. The constant bulge radius design is a superior choice, but could be improved by using a smaller bulge radius between the "tropics" of the quasi-spheroid while using a larger bulge radius for the remainder of the balloon when deployment issues become critical. In that case, of course, the strength-critical region is the one with the larger bulge radius. Adequate understanding of these aspects is required to design pumpkin shape super-pressure balloons with confidence. Results from studies and tests conducted as part of the ULDB Project are discussed.

  5. Experimental study designs to improve the evaluation of road mitigation measures for wildlife.

    PubMed

    Rytwinski, Trina; van der Ree, Rodney; Cunnington, Glenn M; Fahrig, Lenore; Findlay, C Scott; Houlahan, Jeff; Jaeger, Jochen A G; Soanes, Kylie; van der Grift, Edgar A

    2015-05-01

    An experimental approach to road mitigation that maximizes inferential power is essential to ensure that mitigation is both ecologically-effective and cost-effective. Here, we set out the need for and standards of using an experimental approach to road mitigation, in order to improve knowledge of the influence of mitigation measures on wildlife populations. We point out two key areas that need to be considered when conducting mitigation experiments. First, researchers need to get involved at the earliest stage of the road or mitigation project to ensure the necessary planning and funds are available for conducting a high quality experiment. Second, experimentation will generate new knowledge about the parameters that influence mitigation effectiveness, which ultimately allows better prediction for future road mitigation projects. We identify seven key questions about mitigation structures (i.e., wildlife crossing structures and fencing) that remain largely or entirely unanswered at the population-level: (1) Does a given crossing structure work? What type and size of crossing structures should we use? (2) How many crossing structures should we build? (3) Is it more effective to install a small number of large-sized crossing structures or a large number of small-sized crossing structures? (4) How much barrier fencing is needed for a given length of road? (5) Do we need funnel fencing to lead animals to crossing structures, and how long does such fencing have to be? (6) How should we manage/manipulate the environment in the area around the crossing structures and fencing? (7) Where should we place crossing structures and barrier fencing? We provide experimental approaches to answering each of them using example Before-After-Control-Impact (BACI) study designs for two stages in the road/mitigation project where researchers may become involved: (1) at the beginning of a road/mitigation project, and (2) after the mitigation has been constructed; highlighting real case

  6. Kriging for Simulation Metamodeling: Experimental Design, Reduced Rank Kriging, and Omni-Rank Kriging

    NASA Astrophysics Data System (ADS)

    Hosking, Michael Robert

    This dissertation improves an analyst's use of simulation through better utilization of kriging metamodels. There are three main contributions. First, an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, referred to here as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally, these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design on which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information, we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights, similar to a nugget effect. Our primary focus is showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our
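
    Since the abstract assumes familiarity with kriging metamodels, a bare-bones sketch is given below: a Gaussian correlation model with a small nugget is fitted to a handful of simulator outputs and used to predict at new points. The kernel, its length-scale, the nugget value, and the one-dimensional test function are all illustrative assumptions, not details from the dissertation.

        # Bare-bones kriging (Gaussian-process-style) metamodel sketch.
        import numpy as np

        def gauss_corr(a, b, theta=10.0):
            """Gaussian correlation between two sets of 1-D points."""
            return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

        # "Simulation" outputs at a small experimental design.
        x_train = np.linspace(0.0, 1.0, 8)
        y_train = np.sin(6 * x_train) + 0.3 * x_train          # stand-in simulator

        nugget = 1e-6                                           # small noise term / regularizer
        R = gauss_corr(x_train, x_train) + nugget * np.eye(len(x_train))
        weights = np.linalg.solve(R, y_train)                   # simple kriging weights

        x_new = np.linspace(0.0, 1.0, 5)
        y_pred = gauss_corr(x_new, x_train) @ weights           # kriging predictions
        print(np.column_stack([x_new, y_pred]))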

  7. Factors influencing the extraction of pharmaceuticals from sewage sludge and soil: an experimental design approach.

    PubMed

    Ferhi, Sabrina; Bourdat-Deschamps, Marjolaine; Daudin, Jean-Jacques; Houot, Sabine; Nélieu, Sylvie

    2016-09-01

    Pharmaceuticals can enter the environment when organic waste products are recycled on agricultural soils. The extraction of pharmaceuticals is a challenging step in their analysis. The very different extraction conditions proposed in the literature make the choice of the right method for multi-residue analysis difficult. This study aimed at evaluating, with experimental design methodology, the influence of the nature, pH and composition of the extraction medium on the extraction recovery of 14 pharmaceuticals, including 8 antibiotics, from soil and sewage sludge. Preliminary experimental designs showed that acetonitrile and citrate-phosphate buffer were the best extractants. Then, a response surface design demonstrated that many cross-product and squared terms had significant effects, explaining the shapes of the response surfaces. It also allowed optimising the pharmaceutical recoveries in soil and sludge. The optimal conditions were interpreted considering the ionisation states of the compounds, their solubility in the extraction medium and their interactions with the solid matrix. To perform the analysis, a compromise was made for each matrix. After a QuEChERS purification, the samples were analysed by online SPE-UHPLC-MS-MS. Both methods were simple and economical. They were validated with the accuracy profile methodology for soil and sludge and characterised for another type of soil, digested sludge and composted sludge. Trueness globally ranged between 80 and 120 % recovery, and inter- and intra-day precisions were globally below 20 % relative standard deviation. Various pharmaceuticals were present in environmental samples, with concentration levels ranging from a few micrograms per kilogramme up to thousands of micrograms per kilogramme. Graphical abstract: influence of the extraction medium on the extraction recovery of 14 pharmaceuticals; influence of the ionisation state, the solubility and the interactions of pharmaceuticals with the solid matrix.

  8. Passing of northern pike and common carp through experimental barriers designed for use in wetland restoration

    USGS Publications Warehouse

    French, John R. P.; Wilcox, Douglas A.; Nichols, S. Jerrine

    1999-01-01

    Restoration plans for Metzger Marsh, a coastal wetland on the south shore of western Lake Erie, incorporated a fish-control system designed to restrict access to the wetland by large common carp (Cyprinus carpio). Ingress fish passageways in the structure contain slots into which experimental grates of varying size and shape can be placed to selectively allow entry and transfer of other large fish species while minimizing the number of common carp to be handled. We tested different sizes and shapes of grates in experimental tanks in the laboratory to determine the best design for testing in the field. We also tested northern pike (Esox lucius) because lack of access to wetland spawning habitat has greatly reduced their populations in western Lake Erie. Based on our results, vertical bar grates were chosen for installation because common carp were able to pass through circular grates smaller than body height by compressing their soft abdomens; they passed through rectangular grates on the diagonal. Vertical bar grates with 5-cm spacing that were installed across much of the control structure should limit access of common carp larger than 34 cm total length (TL) and northern pike larger than 70 cm. Vertical bar grates selected for initial field trials in the fish passageway had spacings of 5.8 and 6.6 cm, which increased access by common carp to 40 and 47 cm TL and by northern pike to 76 and 81 cm, respectively. The percentage of potential common carp biomass (fish seeking entry) that must be handled in lift baskets in the passageway increased from 0.9 to 4.8 to 15.4 with each increase in spacing between bars. Further increases in spacing would greatly increase the number of common carp that would have to be handled. The results of field testing should be useful in designing selective fish-control systems for other wetland restoration sites adjacent to large water bodies.

  9. Development of a semidefined growth medium for Pedobacter cryoconitis BG5 using statistical experimental design.

    PubMed

    Ong, Magdalena; Ongkudon, Clarence M; Wong, Clemente Michael Vui Ling

    2016-10-01

    Pedobacter cryoconitis BG5 is a psychrophile isolated from a cold environment and capable of proliferating and growing well in a low-temperature regime. Its cellular products have found a broad spectrum of applications, including in food, medicine, and bioremediation. Therefore, it is imperative to develop a high-cell-density cultivation strategy coupled with an optimized growth medium for P. cryoconitis BG5. To date, there has been no published report on the design and optimization of a growth medium for P. cryoconitis, hence the objective of this research project. A preliminary screening of four commercially available media, namely tryptic soy broth, R2A, Luria Bertani broth, and nutrient broth, was conducted to formulate the basal medium. Based on the preliminary screening, tryptone, glucose, NaCl, and K2HPO4, along with three additional nutrients (yeast extract, MgSO4, and NH4Cl), were identified to form the basal medium, which was further analyzed by a Plackett-Burman experimental design. A central composite experimental design using response surface methodology was adopted to optimize tryptone, yeast extract, and NH4Cl concentrations in the formulated growth medium. Statistical data analysis showed a high regression factor of 0.84 with a predicted optimum optical density (600 nm) of 7.5 using 23.7 g/L of tryptone, 8.8 g/L of yeast extract, and 0.7 g/L of NH4Cl. The optimized medium for P. cryoconitis BG5 was tested, and the observed optical density was 7.8. The cost-effectiveness of the optimized medium was determined as 6.25 unit prices per gram of cells produced in a 250-mL Erlenmeyer flask. PMID:26759918
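
    As a concrete illustration of the central composite design and response-surface fit described above, the following is a minimal sketch, not the authors' analysis: it builds the coded design points for three factors (standing in for tryptone, yeast extract and NH4Cl) and fits a full second-order model by least squares. The rotatable axial distance, the single centre point and the synthetic response values are illustrative assumptions.

      import numpy as np
      from itertools import product

      # Coded central composite design for three factors:
      # 2^3 factorial points + 6 axial points + 1 centre point.
      alpha = 2 ** 0.75  # rotatable axial distance for three factors (about 1.682)
      factorial = np.array(list(product([-1.0, 1.0], repeat=3)))
      axial = np.array([[s * alpha if j == i else 0.0 for j in range(3)]
                        for i in range(3) for s in (-1.0, 1.0)])
      design = np.vstack([factorial, axial, np.zeros((1, 3))])

      def quadratic_row(x):
          # Full second-order model: intercept, linear, squared and cross-product terms.
          x1, x2, x3 = x
          return [1.0, x1, x2, x3, x1 * x1, x2 * x2, x3 * x3, x1 * x2, x1 * x3, x2 * x3]

      X = np.array([quadratic_row(point) for point in design])

      # In the study, y would be the measured OD600 at each design point;
      # synthetic placeholder values are used here so the sketch runs on its own.
      rng = np.random.default_rng(1)
      y = (7.0 + design @ np.array([0.4, 0.3, -0.2])
           - 0.5 * (design ** 2).sum(axis=1)
           + rng.normal(0.0, 0.05, len(design)))

      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fitted second-order coefficients:", np.round(beta, 3))

    Maximising the fitted quadratic over the coded region (by gradient or grid search) then gives the predicted optimum, analogous to the 23.7 g/L tryptone, 8.8 g/L yeast extract and 0.7 g/L NH4Cl reported above.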

  10. Development of a semidefined growth medium for Pedobacter cryoconitis BG5 using statistical experimental design.

    PubMed

    Ong, Magdalena; Ongkudon, Clarence M; Wong, Clemente Michael Vui Ling

    2016-10-01

    Pedobacter cryoconitis BG5 is a psychrophile isolated from a cold environment and capable of proliferating and growing well in a low-temperature regime. Its cellular products have found a broad spectrum of applications, including in food, medicine, and bioremediation. Therefore, it is imperative to develop a high-cell-density cultivation strategy coupled with an optimized growth medium for P. cryoconitis BG5. To date, there has been no published report on the design and optimization of a growth medium for P. cryoconitis, hence the objective of this research project. A preliminary screening of four commercially available media, namely tryptic soy broth, R2A, Luria Bertani broth, and nutrient broth, was conducted to formulate the basal medium. Based on the preliminary screening, tryptone, glucose, NaCl, and K2HPO4, along with three additional nutrients (yeast extract, MgSO4, and NH4Cl), were identified to form the basal medium, which was further analyzed by a Plackett-Burman experimental design. A central composite experimental design using response surface methodology was adopted to optimize tryptone, yeast extract, and NH4Cl concentrations in the formulated growth medium. Statistical data analysis showed a high regression factor of 0.84 with a predicted optimum optical density (600 nm) of 7.5 using 23.7 g/L of tryptone, 8.8 g/L of yeast extract, and 0.7 g/L of NH4Cl. The optimized medium for P. cryoconitis BG5 was tested, and the observed optical density was 7.8. The cost-effectiveness of the optimized medium was determined as 6.25 unit prices per gram of cells produced in a 250-mL Erlenmeyer flask.

  11. Design and Experimental Verification of Deployable/Inflatable Ultra-Lightweight Structures

    NASA Technical Reports Server (NTRS)

    Pai, P. Frank

    2004-01-01

    geometrically exact elastic analysis and elastoplastic analysis. The objectives of this research project were: (1) to study the modeling, design, and analysis of deployable/inflatable ultra-lightweight structures, (2) to perform numerical and experimental studies on the static and dynamic characteristics and deployability of HFSs, (3) to derive guidelines for designing HFSs, (4) to develop a MATLAB toolbox for the design, analysis, and dynamic animation of HFSs, and (5) to perform experiments and establish an adequate database of post-buckling characteristics of HFSs.

  12. De Novo Peptide Design and Experimental Validation of Histone Methyltransferase Inhibitors

    PubMed Central

    Smadbeck, James; Peterson, Meghan B.; Zee, Barry M.; Garapaty, Shivani; Mago, Aashna; Lee, Christina; Giannis, Athanassios; Trojer, Patrick; Garcia, Benjamin A.; Floudas, Christodoulos A.

    2014-01-01

    Histones are small proteins critical to the efficient packaging of DNA in the nucleus. DNA–protein complexes, known as nucleosomes, are formed when the DNA winds itself around the surface of the histones. The methylation of histone residues by enhancer of zeste homolog 2 (EZH2) maintains gene repression over successive cell generations. Overexpression of EZH2 can silence important tumor suppressor genes, leading to increased invasiveness of many types of cancers. This makes the inhibition of EZH2 an important target in the development of cancer therapeutics. We employed a three-stage computational de novo peptide design method to design inhibitory peptides of EZH2. The method consists of a sequence selection stage and two validation stages for fold specificity and approximate binding affinity. The sequence selection stage consists of an integer linear optimization model that was solved to produce a rank-ordered list of amino acid sequences with increased stability in the bound peptide-EZH2 structure. These sequences were validated through the calculation of the fold specificity and approximate binding affinity of the designed peptides. Here we report the discovery of novel EZH2 inhibitory peptides using the de novo peptide design method. The computationally discovered peptides were experimentally validated in vitro using dose titrations and mechanism-of-action enzymatic assays. The peptide with the highest in vitro response, SQ037, was validated in nucleo using quantitative mass spectrometry-based proteomics. This peptide had an IC50 of 13.5 μM, demonstrated greater potency as an inhibitor when compared to the native and K27A mutant control peptides, and showed competitive inhibition versus the peptide substrate. Additionally, this peptide demonstrated high specificity to the EZH2 target in comparison to other histone methyltransferases. The validated peptides are the first computationally designed peptides that directly inhibit EZH2. These inhibitors should
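
    The sequence selection stage mentioned above is an integer linear optimization; the published model (with pairwise interaction energies, structural constraints and a rank-ordered solution pool) is considerably richer than what follows. The sketch below is only a hypothetical skeleton of that idea: binary variables choose exactly one residue per design position so as to minimise an assumed position-specific energy table. The peptide length, the random energy values and the use of the PuLP library are all illustrative assumptions.

      # Hypothetical skeleton of one-hot sequence selection as an integer linear programme.
      # Requires the PuLP package (pip install pulp); energies are random placeholders.
      import random
      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

      AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
      N_POSITIONS = 10  # illustrative peptide length
      random.seed(0)
      energy = {(i, a): random.uniform(-1.0, 1.0)
                for i in range(N_POSITIONS) for a in AMINO_ACIDS}

      prob = LpProblem("peptide_sequence_selection", LpMinimize)
      x = LpVariable.dicts("x", list(energy.keys()), cat="Binary")

      # Objective: total (assumed) energy of the selected sequence.
      prob += lpSum(energy[key] * x[key] for key in energy)

      # Exactly one residue is chosen at every position.
      for i in range(N_POSITIONS):
          prob += lpSum(x[i, a] for a in AMINO_ACIDS) == 1

      prob.solve()
      sequence = "".join(a for i in range(N_POSITIONS)
                         for a in AMINO_ACIDS if value(x[i, a]) > 0.5)
      print("lowest-energy sequence under the toy model:", sequence)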

  13. Experimental design and simulation of a metal hydride hydrogen storage system

    NASA Astrophysics Data System (ADS)

    Gadre, Sarang Ajit

    Metal hydrides, as a hydrogen storage medium, have been under consideration for many years because they have the ability to store hydrogen reversibly in the solid state at relatively low pressures and ambient temperatures. The utility of metal hydrides as a hydrogen storage medium was demonstrated recently by the Savannah River Technology Center (SRTC) in an on-board hydrogen storage system for a hybrid electric bus project. The complex geometry and the intricate design of the SRTC bed present quite a challenge to the development of a mathematical model that can be used for design and optimization. In a new approach introduced here, the reversible reaction kinetics and the empirical Van't Hoff relationship used in a typical reactor model are replaced by a solid phase diffusion equation and one of the two semi-empirical equilibrium P-C-T relationships based on modified virial and composite Langmuir isotherm expressions. Starting with the simplest mathematical formulation, which resulted in an analytical expression, various models were developed and successively improved by relaxing certain assumptions, eventually resulting in the most rigorous model yet developed for this system. All of these models were calibrated using experimental pressure and temperature histories obtained from a bench-scale hydrogen storage test facility. The heat and mass transfer coefficients or the thermal conductivity were the only adjustable parameters in these models. A design of experiments approach was also used for studying the effect of various factors on the performance of this bench-scale hydrogen storage unit. Overall, the results of this study demonstrated that even a fairly simple numerical model could do a reasonable job in predicting the discharge behavior of a fairly complicated, metal hydride hydrogen storage bed over a wide range of operating conditions. The more rigorous 2-D model gave considerable insight into the dynamics of the hydrogen discharge process from an
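
    For reference, the empirical Van't Hoff relationship mentioned above links the equilibrium (plateau) hydrogen pressure of a metal hydride to the bed temperature through the hydriding enthalpy and entropy. In its standard textbook form (this is the generic relation, not the authors' modified virial or composite-Langmuir P-C-T expressions):

        \ln\left(\frac{P_{\mathrm{eq}}}{P_0}\right) = -\frac{\Delta H}{R\,T} + \frac{\Delta S}{R}

    where P_0 is a reference pressure, \Delta H and \Delta S are the enthalpy and entropy of hydride formation, R is the gas constant and T is temperature; the semi-empirical P-C-T expressions cited above generalise this single-plateau description to composition-dependent equilibrium pressures.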

  14. Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.

    PubMed

    Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan

    2014-01-01

    The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets using an experimental mixture design for the treatment of Helicobacter pylori, afforded by prolonged gastric residence time and controlled plasma levels. Ten different formulations were generated based on different molecular weights of hypromellose (HPMC K100, K4M, K15M) by using a simplex lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas generating agents. Tablets were prepared by wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th hour (CDR 8th) were statistically analyzed to obtain the optimized formulation (OF). The optimized formulation, which gave a floating lag time below 15 s and a total floating time above 10 h, was analyzed and compared with the target CDR 8th (80%). A good agreement was shown between predicted and actual values of CDR 8th with a variation lower than 1%. The activity of the clarithromycin contained in the optimized formulation against H. pylori was quantified using a well-diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and clarithromycin activity. PMID:25272652
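
    For context, a simplex-lattice design places blends at evenly spaced points of the mixture simplex, where the component proportions sum to one. A {3,3} lattice over the three HPMC grades yields exactly ten blends, which is consistent with the ten formulations reported above, although the abstract does not state the lattice degree actually used in Minitab; the sketch below, including its component labels, is therefore an illustrative assumption.

      from fractions import Fraction
      from itertools import product

      m = 3  # assumed lattice degree; each component takes values 0, 1/3, 2/3, 1
      points = [tuple(Fraction(i, m) for i in combo)
                for combo in product(range(m + 1), repeat=3)
                if sum(combo) == m]

      # Each tuple is an assumed (HPMC K100, K4M, K15M) proportion of the polymer blend.
      for k100, k4m, k15m in points:
          print(f"K100={k100}  K4M={k4m}  K15M={k15m}")
      print(len(points), "mixture design points")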

  15. Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729
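
    The magnetic gearing effect invoked above is usually summarised by the pole-combination rule for permanent-magnet vernier machines; the specific slot/pole combination selected by the authors is not restated in the abstract, so the relation below is quoted only as the commonly used design constraint:

        Z_{\mathrm{FMP}} = p_r + p_s, \qquad G_r = \frac{p_r}{p_s}

    where p_s is the stator-winding pole-pair number, p_r the rotor PM pole-pair number, Z_FMP the number of flux-modulation poles and G_r the effective gear ratio; a large G_r is what gives the machine its high torque density at low speed.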

  16. MetLab: An In Silico Experimental Design, Simulation and Analysis Tool for Viral Metagenomics Studies

    PubMed Central

    Gourlé, Hadrien; Bongcam-Rudloff, Erik; Hayer, Juliette

    2016-01-01

    Metagenomics, the sequence characterization of all genomes within a sample, is widely used as a virus discovery tool as well as a tool to study the viral diversity of animals. Metagenomics can be considered to have three main steps: sample collection and preparation, sequencing, and finally bioinformatics. Bioinformatic analysis of metagenomic datasets is in itself a complex process, involving few standardized methodologies, thereby hampering comparison of metagenomics studies between research groups. In this publication the new bioinformatics framework MetLab is presented, aimed at providing scientists with an integrated tool for experimental design and analysis of viral metagenomes. MetLab provides support in designing the metagenomics experiment by estimating the sequencing depth needed for the complete coverage of a species. This is achieved by applying a methodology to calculate the probability of coverage using an adaptation of Stevens’ theorem. It also provides scientists with several pipelines aimed at simplifying the analysis of viral metagenomes, including quality control, assembly, and taxonomic binning. We also implement a tool for simulating metagenomics datasets from several sequencing platforms. The overall aim is to provide virologists with an easy-to-use tool for designing, simulating and analyzing viral metagenomes. The results presented here include a benchmark against other existing software, with emphasis on detection of viruses as well as speed of the applications. This is packaged as comprehensive software, readily available for Linux and OSX users at https://github.com/norling/metlab. PMID:27479078
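
    MetLab's sequencing-depth estimate rests on an adaptation of Stevens' theorem for the probability that n random arcs of equal length cover a circle. MetLab's exact adaptation is not reproduced here; the sketch below implements the classical 1939 result directly, treating the genome as a circle of unit circumference and each read as an arc, with the read length and genome size in the example chosen purely for illustration.

      from math import comb, floor

      def coverage_probability(n_reads: int, read_length: int, genome_length: int) -> float:
          """Stevens (1939): probability that n random arcs of fractional length a
          completely cover a circle -- here, reads covering a circular genome."""
          a = read_length / genome_length
          if a >= 1.0:
              return 1.0
          p = sum((-1) ** j * comb(n_reads, j) * (1.0 - j * a) ** (n_reads - 1)
                  for j in range(floor(1.0 / a) + 1))
          return min(1.0, max(0.0, p))

      # Illustrative only: reads needed to fully cover a 10 kb viral genome with 150 bp reads.
      for n in (500, 1000, 2000):
          print(f"{n} reads -> P(full coverage) = {coverage_probability(n, 150, 10_000):.4f}")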

  17. Design and experimental validation for direct-drive fault-tolerant permanent-magnet vernier machines.

    PubMed

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729

  18. MetLab: An In Silico Experimental Design, Simulation and Analysis Tool for Viral Metagenomics Studies.

    PubMed

    Norling, Martin; Karlsson-Lindsjö, Oskar E; Gourlé, Hadrien; Bongcam-Rudloff, Erik; Hayer, Juliette

    2016-01-01

    Metagenomics, the sequence characterization of all genomes within a sample, is widely used as a virus discovery tool as well as a tool to study the viral diversity of animals. Metagenomics can be considered to have three main steps: sample collection and preparation, sequencing, and finally bioinformatics. Bioinformatic analysis of metagenomic datasets is in itself a complex process, involving few standardized methodologies, thereby hampering comparison of metagenomics studies between research groups. In this publication the new bioinformatics framework MetLab is presented, aimed at providing scientists with an integrated tool for experimental design and analysis of viral metagenomes. MetLab provides support in designing the metagenomics experiment by estimating the sequencing depth needed for the complete coverage of a species. This is achieved by applying a methodology to calculate the probability of coverage using an adaptation of Stevens' theorem. It also provides scientists with several pipelines aimed at simplifying the analysis of viral metagenomes, including quality control, assembly, and taxonomic binning. We also implement a tool for simulating metagenomics datasets from several sequencing platforms. The overall aim is to provide virologists with an easy-to-use tool for designing, simulating and analyzing viral metagenomes. The results presented here include a benchmark against other existing software, with emphasis on detection of viruses as well as speed of the applications. This is packaged as comprehensive software, readily available for Linux and OSX users at https://github.com/norling/metlab. PMID:27479078

  19. Design considerations for ITER (International Thermonuclear Experimental Reactor) toroidal field coils

    SciTech Connect

    Kalsi, S.S.; Lousteau, D.C.; Miller, J.R.

    1987-01-01

    The International Thermonuclear Experimental Reactor (ITER) is a new tokamak design project with joint participation from Europe, Japan, the Union of Soviet Socialist Republics (USSR), and the United States. This paper describes a magnetic and mechanical design methodology for toroidal field (TF) coils that employs Nb3Sn superconductor technology. Coil winding is sized by using conductor concepts developed for the US TIBER concept. The nuclear heating generated during operation is removed from the windings by helium flowing through the conductor. The heat in the coil case is removed through a separate cooling circuit operating at approximately 20 K. Manifold concepts are presented for the complete coil cooling system. Also included are concepts for the coil structural arrangement. The effects of in-plane and out-of-plane loads are included in the design considerations for the windings and case. Concepts are presented for reacting these loads with a minimum amount of additional structural material. Concepts discussed in this paper could be considered for the ITER TF coils. 6 refs., 5 figs., 1 tab.

  20. A Fundamental Study of Smoldering with Emphasis on Experimental Design for Zero-G

    NASA Technical Reports Server (NTRS)

    Fernandez-Pello, Carlos; Pagni, Patrick J.

    1995-01-01

    A research program to study smoldering combustion, with emphasis on the design of an experiment to be conducted in the space shuttle, was carried out at the Department of Mechanical Engineering, University of California, Berkeley. The motivation of the research is the interest in smoldering both as a fundamental combustion problem and as a serious fire risk. Research conducted included theoretical and experimental studies that have brought considerable new information about smolder combustion, the effect that buoyancy has on the process, and specific information for the design of a space experiment. Experiments were conducted at normal gravity, in the opposed and forward modes of propagation and in the upward and downward directions, to determine the effect and range of influence of gravity on smolder. Experiments were also conducted in microgravity, in a drop tower and in parabolic aircraft flights, where the brief microgravity periods were used to analyze transient aspects of the problem. Significant progress was made on the study of one-dimensional smolder, particularly in the opposed-flow configuration. These studies provided enough information to design a small-scale space-based experiment that was successfully conducted in the Spacelab Glovebox in the June 1992 USML-1/STS-50 mission of the Space Shuttle Columbia.